
API Reference

Complete API reference for Pico-Agent, auto-generated from source code docstrings.

Module Overview

| Module | Description |
| --- | --- |
| pico_agent | Package exports and public API |
| pico_agent.config | Agent configuration settings |
| pico_agent.decorators | @agent, @tool decorators |
| pico_agent.interfaces | Core interfaces and protocols |
| pico_agent.exceptions | Exception hierarchy |
| pico_agent.bootstrap | Bootstrap and initialization |
| pico_agent.providers | LLM provider integrations |
| pico_agent.messages | Message types and handling |
| pico_agent.tools | Tool registration and execution |
| pico_agent.router | Agent routing logic |
| pico_agent.registry | Agent and tool registry |
| pico_agent.lifecycle | Agent lifecycle management |
| pico_agent.tracing | Observability and tracing |
| pico_agent.scheduler | Task scheduling |

pico_agent

Pico-Agent: Protocol-based AI agent framework with dependency injection.

Public API exports for the pico-agent library. All classes, functions, and constants listed in __all__ are considered stable public API.

AgentCapability

Abstract capability labels mapped to concrete models by ModelRouter.

Use these constants in the @agent decorator to declare what kind of model an agent needs. The ModelRouter translates these labels into provider-specific model names at runtime, allowing you to swap models globally without touching agent definitions.

Attributes:

| Name | Description |
| --- | --- |
| FAST | Optimised for low latency (default model: gpt-5-mini). |
| SMART | Balanced quality and cost (default model: gpt-5.1). |
| REASONING | Advanced reasoning tasks (default model: gemini-3-pro). |
| VISION | Vision / multimodal support (default model: gpt-4o). |
| CODING | Code generation (default model: claude-3-5-sonnet). |

Example

```python
>>> from pico_agent import agent, AgentCapability
>>> @agent(name="fast_bot", capability=AgentCapability.FAST)
... class FastBot(Protocol):
...     def run(self, q: str) -> str: ...
```

Source code in src/pico_agent/config.py
class AgentCapability:
    """Abstract capability labels mapped to concrete models by ``ModelRouter``.

    Use these constants in the ``@agent`` decorator to declare what kind of
    model an agent needs.  The ``ModelRouter`` translates these labels into
    provider-specific model names at runtime, allowing you to swap models
    globally without touching agent definitions.

    Attributes:
        FAST: Optimised for low latency (default model: ``gpt-5-mini``).
        SMART: Balanced quality and cost (default model: ``gpt-5.1``).
        REASONING: Advanced reasoning tasks (default model: ``gemini-3-pro``).
        VISION: Vision / multimodal support (default model: ``gpt-4o``).
        CODING: Code generation (default model: ``claude-3-5-sonnet``).

    Example:
        >>> from pico_agent import agent, AgentCapability
        >>> @agent(name="fast_bot", capability=AgentCapability.FAST)
        ... class FastBot(Protocol):
        ...     def run(self, q: str) -> str: ...
    """

    FAST = "fast"
    SMART = "smart"
    REASONING = "reasoning"
    VISION = "vision"
    CODING = "coding"

AgentConfig dataclass

Complete configuration for a single agent.

Instances are created automatically by the @agent decorator and stored in LocalAgentRegistry. The AgentConfigService merges local, remote (central), and runtime overrides to produce the final effective config.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Unique agent identifier (required). | required |
| system_prompt | str | System-level prompt sent to the LLM. | '' |
| user_prompt_template | str | Template for the user message. Use {input} or any key matching the method signature. | '{input}' |
| description | str | Human-readable description; used as AgentAsTool description when the agent is exposed as a tool. | '' |
| capability | str | AgentCapability constant that the ModelRouter resolves to a concrete model name. | SMART |
| enabled | bool | Whether the agent is active. Disabled agents raise AgentDisabledError. | True |
| agent_type | AgentType | Execution strategy (ONE_SHOT, REACT, or WORKFLOW). | ONE_SHOT |
| max_iterations | int | Maximum ReAct loop iterations (only relevant for REACT agents). | 5 |
| tools | List[str] | List of tool names to attach to this agent. | list() |
| agents | List[str] | List of child agent names that will be wrapped as AgentAsTool instances. | list() |
| tags | List[str] | Tags used for dynamic tool lookup via ToolRegistry. | list() |
| tracing_enabled | bool | Whether TraceService records runs for this agent. | True |
| temperature | float | LLM sampling temperature (0.0 -- 2.0). | 0.7 |
| max_tokens | Optional[int] | Maximum tokens in the LLM response, or None for the provider default. | None |
| llm_profile | Optional[str] | Named API-key / base-URL profile in LLMConfig. | None |
| workflow_config | Dict[str, Any] | Extra parameters for WORKFLOW agents (e.g., {"type": "map_reduce", "splitter": "...", "reducer": "..."}). | dict() |

Source code in src/pico_agent/config.py
Source code in src/pico_agent/config.py
@dataclass
class AgentConfig:
    """Complete configuration for a single agent.

    Instances are created automatically by the ``@agent`` decorator and stored
    in ``LocalAgentRegistry``.  The ``AgentConfigService`` merges local, remote
    (central), and runtime overrides to produce the final effective config.

    Args:
        name: Unique agent identifier (required).
        system_prompt: System-level prompt sent to the LLM.
        user_prompt_template: Template for the user message.  Use ``{input}``
            or any key matching the method signature.
        description: Human-readable description; used as ``AgentAsTool``
            description when the agent is exposed as a tool.
        capability: ``AgentCapability`` constant that the ``ModelRouter``
            resolves to a concrete model name.
        enabled: Whether the agent is active.  Disabled agents raise
            ``AgentDisabledError``.
        agent_type: Execution strategy (``ONE_SHOT``, ``REACT``, or
            ``WORKFLOW``).
        max_iterations: Maximum ReAct loop iterations (only relevant for
            ``REACT`` agents).
        tools: List of tool names to attach to this agent.
        agents: List of child agent names that will be wrapped as
            ``AgentAsTool`` instances.
        tags: Tags used for dynamic tool lookup via ``ToolRegistry``.
        tracing_enabled: Whether ``TraceService`` records runs for this agent.
        temperature: LLM sampling temperature (0.0 -- 2.0).
        max_tokens: Maximum tokens in the LLM response, or ``None`` for the
            provider default.
        llm_profile: Named API-key / base-URL profile in ``LLMConfig``.
        workflow_config: Extra parameters for ``WORKFLOW`` agents (e.g.,
            ``{"type": "map_reduce", "splitter": "...", "reducer": "..."}``).
    """

    name: str
    system_prompt: str = ""
    user_prompt_template: str = "{input}"
    description: str = ""
    capability: str = AgentCapability.SMART
    enabled: bool = True
    agent_type: AgentType = AgentType.ONE_SHOT
    max_iterations: int = 5
    tools: List[str] = field(default_factory=list)
    agents: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)
    tracing_enabled: bool = True
    temperature: float = 0.7
    max_tokens: Optional[int] = None
    llm_profile: Optional[str] = None
    workflow_config: Dict[str, Any] = field(default_factory=dict)
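To make the defaults above concrete, here is a standalone sketch that reproduces a few of the dataclass fields from the listing; it replicates the source rather than importing pico_agent, so the field names and defaults match the listing above but the class itself is illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Standalone replica of a few AgentConfig fields from the listing above,
# so the defaults can be exercised without installing pico_agent.
@dataclass
class AgentConfig:
    name: str                           # required, no default
    system_prompt: str = ""
    user_prompt_template: str = "{input}"
    capability: str = "smart"           # AgentCapability.SMART
    max_iterations: int = 5
    tools: List[str] = field(default_factory=list)
    max_tokens: Optional[int] = None

cfg = AgentConfig(name="summarizer", tools=["web_search"])
print(cfg.user_prompt_template.format(input="Summarise this page"))
```

Note that mutable fields use `field(default_factory=...)`, so each instance gets its own fresh list or dict rather than sharing one.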

AgentType

Bases: str, Enum

Execution strategy for an agent.

Determines how the agent processes requests and interacts with tools.

Attributes:

| Name | Description |
| --- | --- |
| ONE_SHOT | Single LLM call with no tool loop (default). |
| REACT | Iterative ReAct tool loop via LangGraph, up to max_iterations rounds. |
| WORKFLOW | Custom workflow execution (e.g., map-reduce). |

Source code in src/pico_agent/config.py
class AgentType(str, Enum):
    """Execution strategy for an agent.

    Determines how the agent processes requests and interacts with tools.

    Attributes:
        ONE_SHOT: Single LLM call with no tool loop (default).
        REACT: Iterative ReAct tool loop via LangGraph, up to
            ``max_iterations`` rounds.
        WORKFLOW: Custom workflow execution (e.g., map-reduce).
    """

    ONE_SHOT = "one_shot"
    REACT = "react"
    WORKFLOW = "workflow"
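Because AgentType subclasses both str and Enum, members compare equal to their string values and round-trip cleanly from config strings. A quick sketch, replicating the enum shown above:

```python
from enum import Enum

# Replica of the AgentType enum shown above; inheriting from both str and
# Enum means each member is also a plain string, which keeps YAML / JSON
# config round-trips simple.
class AgentType(str, Enum):
    ONE_SHOT = "one_shot"
    REACT = "react"
    WORKFLOW = "workflow"

print(AgentType("react") is AgentType.REACT)  # True: parse from a config string
print(AgentType.REACT == "react")             # True: compares equal to its value
```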

LLMConfig dataclass

Centralised API-key and base-URL store for all LLM providers.

AgentLocator registers a default (empty) LLMConfig singleton via @provides. To populate it with your credentials, use @configure on a component method that receives LLMConfig as a parameter. Do not register your own LLMConfig with @factory + @provides -- that would conflict with the singleton already provided by AgentInfrastructureFactory.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| api_keys | Dict[str, str] | Mapping of provider name (or profile name) to API key. Standard keys: "openai", "anthropic", "google", "azure", "deepseek", "qwen". | dict() |
| base_urls | Dict[str, str] | Mapping of provider name (or profile name) to base URL override. | dict() |
Example

```python
>>> from pico_ioc import component, configure
>>> from pico_agent import LLMConfig
>>> @component
... class AppConfig:
...     @configure
...     def setup(self, llm: LLMConfig):
...         llm.api_keys["openai"] = "sk-..."
```

Source code in src/pico_agent/config.py
@dataclass
class LLMConfig:
    """Centralised API-key and base-URL store for all LLM providers.

    ``AgentLocator`` registers a default (empty) ``LLMConfig`` singleton via
    ``@provides``.  To populate it with your credentials, use ``@configure``
    on a component method that receives ``LLMConfig`` as a parameter.  Do
    **not** register your own ``LLMConfig`` with ``@factory`` + ``@provides``
    -- that would conflict with the singleton already provided by
    ``AgentInfrastructureFactory``.

    Args:
        api_keys: Mapping of provider name (or profile name) to API key.
            Standard keys: ``"openai"``, ``"anthropic"``, ``"google"``,
            ``"azure"``, ``"deepseek"``, ``"qwen"``.
        base_urls: Mapping of provider name (or profile name) to base URL
            override.

    Example:
        >>> from pico_ioc import component, configure
        >>> from pico_agent import LLMConfig
        >>> @component
        ... class AppConfig:
        ...     @configure
        ...     def setup(self, llm: LLMConfig):
        ...         llm.api_keys["openai"] = "sk-..."
    """

    api_keys: Dict[str, str] = field(default_factory=dict)
    base_urls: Dict[str, str] = field(default_factory=dict)

AgentConfigurationError

Bases: AgentError

Raised for missing or invalid agent / provider configuration.

Common causes include missing API keys in LLMConfig or unknown provider names.

Source code in src/pico_agent/exceptions.py
class AgentConfigurationError(AgentError):
    """Raised for missing or invalid agent / provider configuration.

    Common causes include missing API keys in ``LLMConfig`` or unknown
    provider names.
    """

    pass

AgentDisabledError

Bases: AgentError

Raised when an agent is invoked but its configuration has enabled=False.

The error message follows the pattern:

``Agent '<name>' is disabled via configuration.``

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| agent_name | str | The name of the disabled agent. | required |
Source code in src/pico_agent/exceptions.py
class AgentDisabledError(AgentError):
    """Raised when an agent is invoked but its configuration has ``enabled=False``.

    The error message follows the pattern:

        ``Agent '<name>' is disabled via configuration.``

    Args:
        agent_name: The name of the disabled agent.
    """

    def __init__(self, agent_name: str):
        super().__init__(f"Agent '{agent_name}' is disabled via configuration.")
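Because every framework exception derives from AgentError, callers can catch the base class to handle any pico-agent failure, including this one. A minimal replica of the two classes above shows the message pattern:

```python
# Minimal replica of the hierarchy above: catching the AgentError base
# class also catches AgentDisabledError.
class AgentError(Exception):
    """Base exception for all pico-agent errors."""

class AgentDisabledError(AgentError):
    def __init__(self, agent_name: str):
        super().__init__(f"Agent '{agent_name}' is disabled via configuration.")

try:
    raise AgentDisabledError("fast_bot")
except AgentError as exc:
    print(exc)  # Agent 'fast_bot' is disabled via configuration.
```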

AgentError

Bases: Exception

Base exception for all pico-agent errors.

Source code in src/pico_agent/exceptions.py
class AgentError(Exception):
    """Base exception for all pico-agent errors."""

    pass

AgentLifecycleError

Bases: AgentError

Raised when an operation violates the agent system lifecycle.

For example, attempting to use the system before it has reached the READY phase.

Source code in src/pico_agent/exceptions.py
class AgentLifecycleError(AgentError):
    """Raised when an operation violates the agent system lifecycle.

    For example, attempting to use the system before it has reached the
    ``READY`` phase.
    """

    pass

ExperimentRegistry

Singleton registry for A/B experiments on agents.

Register experiments mapping a public name to weighted variants. When AgentLocator resolves an agent name, it passes through resolve_variant() to select the appropriate variant.

Example

```python
>>> registry = container.get(ExperimentRegistry)
>>> registry.register_experiment("summarizer", {
...     "summarizer_v1": 0.8,
...     "summarizer_v2": 0.2,
... })
```

Source code in src/pico_agent/experiments.py
@component(scope="singleton")
class ExperimentRegistry:
    """Singleton registry for A/B experiments on agents.

    Register experiments mapping a public name to weighted variants.  When
    ``AgentLocator`` resolves an agent name, it passes through
    ``resolve_variant()`` to select the appropriate variant.

    Example:
        >>> registry = container.get(ExperimentRegistry)
        >>> registry.register_experiment("summarizer", {
        ...     "summarizer_v1": 0.8,
        ...     "summarizer_v2": 0.2,
        ... })
    """

    def __init__(self):
        self._experiments: Dict[str, List[Tuple[str, float]]] = {}

    def register_experiment(self, public_name: str, variants: Dict[str, float]):
        """Register an A/B experiment.

        Weights are normalised so they sum to 1.0.

        Args:
            public_name: The public agent name that triggers variant
                selection.
            variants: Mapping of variant agent names to their relative
                weights.
        """
        total_weight = sum(variants.values())
        normalized_variants = []

        for name, weight in variants.items():
            normalized_variants.append((name, weight / total_weight))

        self._experiments[public_name] = normalized_variants

    def resolve_variant(self, name: str) -> str:
        """Resolve a public name to a variant using weighted random selection.

        If no experiment is registered for *name*, the name itself is
        returned unchanged.

        Args:
            name: The public agent name.

        Returns:
            The selected variant name, or *name* if no experiment exists.
        """
        if name not in self._experiments:
            return name

        variants = self._experiments[name]
        choices = [v[0] for v in variants]
        weights = [v[1] for v in variants]

        return random.choices(choices, weights=weights, k=1)[0]
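The weight normalisation and weighted selection can be exercised standalone. This replica mirrors the two methods from the listing above, without the pico-ioc decorator:

```python
import random
from typing import Dict, List, Tuple

# Standalone replica of ExperimentRegistry's two methods from the listing above.
class ExperimentRegistry:
    def __init__(self):
        self._experiments: Dict[str, List[Tuple[str, float]]] = {}

    def register_experiment(self, public_name: str, variants: Dict[str, float]):
        total = sum(variants.values())
        # Normalise so the stored weights sum to 1.0.
        self._experiments[public_name] = [(n, w / total) for n, w in variants.items()]

    def resolve_variant(self, name: str) -> str:
        if name not in self._experiments:
            return name  # no experiment: pass the name through unchanged
        variants = self._experiments[name]
        return random.choices([v[0] for v in variants],
                              weights=[v[1] for v in variants], k=1)[0]

registry = ExperimentRegistry()
registry.register_experiment("summarizer", {"summarizer_v1": 8, "summarizer_v2": 2})
print(registry.resolve_variant("summarizer"))  # one of the two variants, 80/20
```

Because weights are normalised, the relative values 8 and 2 behave exactly like 0.8 and 0.2.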

register_experiment(public_name, variants)

Register an A/B experiment.

Weights are normalised so they sum to 1.0.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| public_name | str | The public agent name that triggers variant selection. | required |
| variants | Dict[str, float] | Mapping of variant agent names to their relative weights. | required |
Source code in src/pico_agent/experiments.py
def register_experiment(self, public_name: str, variants: Dict[str, float]):
    """Register an A/B experiment.

    Weights are normalised so they sum to 1.0.

    Args:
        public_name: The public agent name that triggers variant
            selection.
        variants: Mapping of variant agent names to their relative
            weights.
    """
    total_weight = sum(variants.values())
    normalized_variants = []

    for name, weight in variants.items():
        normalized_variants.append((name, weight / total_weight))

    self._experiments[public_name] = normalized_variants

resolve_variant(name)

Resolve a public name to a variant using weighted random selection.

If no experiment is registered for name, the name itself is returned unchanged.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | The public agent name. | required |

Returns:

| Type | Description |
| --- | --- |
| str | The selected variant name, or *name* if no experiment exists. |

Source code in src/pico_agent/experiments.py
def resolve_variant(self, name: str) -> str:
    """Resolve a public name to a variant using weighted random selection.

    If no experiment is registered for *name*, the name itself is
    returned unchanged.

    Args:
        name: The public agent name.

    Returns:
        The selected variant name, or *name* if no experiment exists.
    """
    if name not in self._experiments:
        return name

    variants = self._experiments[name]
    choices = [v[0] for v in variants]
    weights = [v[1] for v in variants]

    return random.choices(choices, weights=weights, k=1)[0]

LLM

Bases: Protocol

Protocol for a language-model adapter used by agent proxies.

LangChainAdapter is the built-in implementation.

Source code in src/pico_agent/interfaces.py
class LLM(Protocol):
    """Protocol for a language-model adapter used by agent proxies.

    ``LangChainAdapter`` is the built-in implementation.
    """

    def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
        """Send messages to the LLM and return the text response.

        Args:
            messages: List of message dicts with ``"role"`` and ``"content"``
                keys.
            tools: LangChain-compatible tool instances bound to the model.

        Returns:
            The LLM's text response.
        """
        ...

    def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
        """Send messages and parse the response into a structured schema.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            output_schema: A ``pydantic.BaseModel`` subclass for structured
                output.

        Returns:
            An instance of *output_schema*.
        """
        ...

    def invoke_agent_loop(
        self,
        messages: List[Dict[str, str]],
        tools: List[Any],
        max_iterations: int,
        output_schema: Optional[Type[Any]] = None,
    ) -> Any:
        """Run a ReAct-style tool loop via LangGraph.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            max_iterations: Maximum number of reasoning iterations.
            output_schema: Optional Pydantic model for structured final
                output.

        Returns:
            The final text response, or an instance of *output_schema* if
            provided.
        """
        ...
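Any object implementing these three methods satisfies the protocol, which makes agents easy to unit-test without a real provider. The test double below is illustrative (the class name and canned behaviour are not part of pico_agent):

```python
from typing import Any, Dict, List, Optional, Type

# Illustrative test double satisfying the LLM protocol above: it returns a
# canned string instead of calling a provider.
class CannedLLM:
    def __init__(self, reply: str = "ok"):
        self.reply = reply

    def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
        return self.reply

    def invoke_structured(self, messages: List[Dict[str, str]],
                          tools: List[Any], output_schema: Type[Any]) -> Any:
        return output_schema()  # assumes a no-argument constructible schema

    def invoke_agent_loop(self, messages: List[Dict[str, str]],
                          tools: List[Any], max_iterations: int,
                          output_schema: Optional[Type[Any]] = None) -> Any:
        return self.reply if output_schema is None else output_schema()

llm = CannedLLM("hello")
print(llm.invoke([{"role": "user", "content": "hi"}], tools=[]))  # hello
```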

invoke(messages, tools)

Send messages to the LLM and return the text response.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| messages | List[Dict[str, str]] | List of message dicts with "role" and "content" keys. | required |
| tools | List[Any] | LangChain-compatible tool instances bound to the model. | required |

Returns:

| Type | Description |
| --- | --- |
| str | The LLM's text response. |

Source code in src/pico_agent/interfaces.py
def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
    """Send messages to the LLM and return the text response.

    Args:
        messages: List of message dicts with ``"role"`` and ``"content"``
            keys.
        tools: LangChain-compatible tool instances bound to the model.

    Returns:
        The LLM's text response.
    """
    ...

invoke_structured(messages, tools, output_schema)

Send messages and parse the response into a structured schema.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| messages | List[Dict[str, str]] | List of message dicts. | required |
| tools | List[Any] | LangChain-compatible tool instances. | required |
| output_schema | Type[Any] | A pydantic.BaseModel subclass for structured output. | required |

Returns:

| Type | Description |
| --- | --- |
| Any | An instance of *output_schema*. |

Source code in src/pico_agent/interfaces.py
def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
    """Send messages and parse the response into a structured schema.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        output_schema: A ``pydantic.BaseModel`` subclass for structured
            output.

    Returns:
        An instance of *output_schema*.
    """
    ...

invoke_agent_loop(messages, tools, max_iterations, output_schema=None)

Run a ReAct-style tool loop via LangGraph.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| messages | List[Dict[str, str]] | List of message dicts. | required |
| tools | List[Any] | LangChain-compatible tool instances. | required |
| max_iterations | int | Maximum number of reasoning iterations. | required |
| output_schema | Optional[Type[Any]] | Optional Pydantic model for structured final output. | None |

Returns:

| Type | Description |
| --- | --- |
| Any | The final text response, or an instance of *output_schema* if provided. |

Source code in src/pico_agent/interfaces.py
def invoke_agent_loop(
    self,
    messages: List[Dict[str, str]],
    tools: List[Any],
    max_iterations: int,
    output_schema: Optional[Type[Any]] = None,
) -> Any:
    """Run a ReAct-style tool loop via LangGraph.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        max_iterations: Maximum number of reasoning iterations.
        output_schema: Optional Pydantic model for structured final
            output.

    Returns:
        The final text response, or an instance of *output_schema* if
        provided.
    """
    ...

CentralConfigClient

Bases: Protocol

Protocol for retrieving and persisting agent configuration remotely.

The default implementation (NoOpCentralClient) returns None for all lookups, meaning only local and runtime config is used. Provide a custom implementation (e.g., backed by a database or API) to enable central configuration management.

Source code in src/pico_agent/interfaces.py
class CentralConfigClient(Protocol):
    """Protocol for retrieving and persisting agent configuration remotely.

    The default implementation (``NoOpCentralClient``) returns ``None`` for
    all lookups, meaning only local and runtime config is used.  Provide a
    custom implementation (e.g., backed by a database or API) to enable
    central configuration management.
    """

    def get_agent_config(self, name: str) -> Optional[Any]:
        """Fetch the remote configuration for an agent.

        Args:
            name: The agent's unique identifier.

        Returns:
            An ``AgentConfig`` if one exists remotely, otherwise ``None``.
        """
        ...

    def upsert_agent_config(self, config: Any) -> None:
        """Create or update the remote configuration for an agent.

        Args:
            config: The ``AgentConfig`` to persist.
        """
        ...
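A dict-backed implementation of this protocol is enough for tests or a single-process deployment. The sketch below is illustrative (the class names are assumptions; only the two method signatures come from the source):

```python
from typing import Any, Dict, Optional

# Illustrative in-memory CentralConfigClient: satisfies the protocol above
# by storing configs in a dict keyed by agent name.
class InMemoryCentralClient:
    def __init__(self):
        self._store: Dict[str, Any] = {}

    def get_agent_config(self, name: str) -> Optional[Any]:
        return self._store.get(name)  # None when nothing is stored remotely

    def upsert_agent_config(self, config: Any) -> None:
        self._store[config.name] = config

class FakeConfig:  # stand-in for AgentConfig in this sketch
    def __init__(self, name: str):
        self.name = name

client = InMemoryCentralClient()
print(client.get_agent_config("bot"))          # None, like NoOpCentralClient
client.upsert_agent_config(FakeConfig("bot"))
print(client.get_agent_config("bot").name)     # bot
```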

get_agent_config(name)

Fetch the remote configuration for an agent.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | The agent's unique identifier. | required |

Returns:

| Type | Description |
| --- | --- |
| Optional[Any] | An AgentConfig if one exists remotely, otherwise None. |

Source code in src/pico_agent/interfaces.py
def get_agent_config(self, name: str) -> Optional[Any]:
    """Fetch the remote configuration for an agent.

    Args:
        name: The agent's unique identifier.

    Returns:
        An ``AgentConfig`` if one exists remotely, otherwise ``None``.
    """
    ...

upsert_agent_config(config)

Create or update the remote configuration for an agent.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| config | Any | The AgentConfig to persist. | required |
Source code in src/pico_agent/interfaces.py
def upsert_agent_config(self, config: Any) -> None:
    """Create or update the remote configuration for an agent.

    Args:
        config: The ``AgentConfig`` to persist.
    """
    ...

LLMFactory

Bases: Protocol

Protocol for creating LLM instances from model parameters.

LangChainLLMFactory is the built-in implementation that supports OpenAI, Azure, Anthropic, Google, DeepSeek, and Qwen.

Source code in src/pico_agent/interfaces.py
class LLMFactory(Protocol):
    """Protocol for creating ``LLM`` instances from model parameters.

    ``LangChainLLMFactory`` is the built-in implementation that supports
    OpenAI, Azure, Anthropic, Google, DeepSeek, and Qwen.
    """

    def create(
        self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
    ) -> LLM:
        """Create an ``LLM`` instance for the given model.

        Args:
            model_name: Model identifier, optionally prefixed with a provider
                (e.g., ``"openai:gpt-5-mini"``).
            temperature: Sampling temperature (0.0 -- 2.0).
            max_tokens: Maximum response tokens, or ``None`` for the provider
                default.
            llm_profile: Named profile in ``LLMConfig`` for API key / base
                URL selection.

        Returns:
            A configured ``LLM`` instance ready for invocation.
        """
        ...

create(model_name, temperature, max_tokens, llm_profile=None)

Create an LLM instance for the given model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model_name | str | Model identifier, optionally prefixed with a provider (e.g., "openai:gpt-5-mini"). | required |
| temperature | float | Sampling temperature (0.0 -- 2.0). | required |
| max_tokens | Optional[int] | Maximum response tokens, or None for the provider default. | required |
| llm_profile | Optional[str] | Named profile in LLMConfig for API key / base URL selection. | None |

Returns:

| Type | Description |
| --- | --- |
| LLM | A configured LLM instance ready for invocation. |

Source code in src/pico_agent/interfaces.py
def create(
    self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
) -> LLM:
    """Create an ``LLM`` instance for the given model.

    Args:
        model_name: Model identifier, optionally prefixed with a provider
            (e.g., ``"openai:gpt-5-mini"``).
        temperature: Sampling temperature (0.0 -- 2.0).
        max_tokens: Maximum response tokens, or ``None`` for the provider
            default.
        llm_profile: Named profile in ``LLMConfig`` for API key / base
            URL selection.

    Returns:
        A configured ``LLM`` instance ready for invocation.
    """
    ...
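The docstring says model_name may carry an optional provider prefix such as "openai:gpt-5-mini". One plausible way to split such identifiers is sketched below; the helper is an assumption for illustration, not pico_agent code:

```python
from typing import Optional, Tuple

# Hypothetical helper: split an optional "provider:model" identifier as
# described in the create() docstring above. Not part of pico_agent.
def split_model_name(model_name: str) -> Tuple[Optional[str], str]:
    provider, sep, model = model_name.partition(":")
    if sep:
        return provider, model
    return None, model_name  # no prefix: provider left unspecified

print(split_model_name("openai:gpt-5-mini"))  # ('openai', 'gpt-5-mini')
print(split_model_name("gpt-5-mini"))         # (None, 'gpt-5-mini')
```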

AgentSystem

Lifecycle coordinator that publishes phase transitions via EventBus.

Transitions are published as LifecycleEvent instances. The system moves through: INITIALIZING -> READY -> RUNNING -> SHUTTING_DOWN -> STOPPED.

Source code in src/pico_agent/lifecycle.py
@component(scope="singleton")
class AgentSystem:
    """Lifecycle coordinator that publishes phase transitions via ``EventBus``.

    Transitions are published as ``LifecycleEvent`` instances.  The system
    moves through: ``INITIALIZING`` -> ``READY`` -> ``RUNNING`` ->
    ``SHUTTING_DOWN`` -> ``STOPPED``.
    """

    def __init__(self):
        self._phase = LifecyclePhase.INITIALIZING
        self._event_bus: EventBus | None = None

    @property
    def phase(self) -> LifecyclePhase:
        """The current lifecycle phase."""
        return self._phase

    def _transition(self, phase: LifecyclePhase, detail: str = ""):
        self._phase = phase
        if self._event_bus:
            self._event_bus.publish_sync(LifecycleEvent(phase=phase, detail=detail))

    @configure
    def _on_ready(self, container: PicoContainer):
        if container.has(EventBus):
            self._event_bus = container.get(EventBus)
        self._transition(LifecyclePhase.READY, "Container configured")
        self._transition(LifecyclePhase.RUNNING)

    @cleanup
    def _on_shutdown(self):
        self._transition(LifecyclePhase.SHUTTING_DOWN)
        self._transition(LifecyclePhase.STOPPED)

phase property

The current lifecycle phase.

LifecycleEvent dataclass

Bases: Event

Event published when the system transitions between lifecycle phases.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| phase | LifecyclePhase | The new LifecyclePhase. | required |
| detail | str | Optional human-readable detail string. | '' |
Source code in src/pico_agent/lifecycle.py
@dataclass
class LifecycleEvent(Event):
    """Event published when the system transitions between lifecycle phases.

    Args:
        phase: The new ``LifecyclePhase``.
        detail: Optional human-readable detail string.
    """

    phase: LifecyclePhase
    detail: str = ""

LifecyclePhase

Bases: str, Enum

Phases of the pico-agent system lifecycle.

Attributes:

| Name | Description |
| --- | --- |
| INITIALIZING | Container is being built. |
| SCANNING | Agents and tools are being discovered. |
| READY | Container is fully configured. |
| RUNNING | System is accepting requests. |
| SHUTTING_DOWN | Graceful shutdown in progress. |
| STOPPED | System has stopped. |

Source code in src/pico_agent/lifecycle.py
class LifecyclePhase(str, Enum):
    """Phases of the pico-agent system lifecycle.

    Attributes:
        INITIALIZING: Container is being built.
        SCANNING: Agents and tools are being discovered.
        READY: Container is fully configured.
        RUNNING: System is accepting requests.
        SHUTTING_DOWN: Graceful shutdown in progress.
        STOPPED: System has stopped.
    """

    INITIALIZING = "initializing"
    SCANNING = "scanning"
    READY = "ready"
    RUNNING = "running"
    SHUTTING_DOWN = "shutting_down"
    STOPPED = "stopped"
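Putting the pieces above together, a simplified replica of AgentSystem (with the EventBus swapped for a plain list) shows the phase order a subscriber would observe across startup and shutdown:

```python
from enum import Enum

class LifecyclePhase(str, Enum):
    INITIALIZING = "initializing"
    READY = "ready"
    RUNNING = "running"
    SHUTTING_DOWN = "shutting_down"
    STOPPED = "stopped"

# Simplified replica of AgentSystem: the EventBus is swapped for a plain
# list so the transition order is easy to observe.
class MiniAgentSystem:
    def __init__(self):
        self.phase = LifecyclePhase.INITIALIZING
        self.events = []

    def _transition(self, phase: LifecyclePhase, detail: str = ""):
        self.phase = phase
        self.events.append(phase)

    def on_ready(self):          # mirrors the @configure hook
        self._transition(LifecyclePhase.READY, "Container configured")
        self._transition(LifecyclePhase.RUNNING)

    def on_shutdown(self):       # mirrors the @cleanup hook
        self._transition(LifecyclePhase.SHUTTING_DOWN)
        self._transition(LifecyclePhase.STOPPED)

system = MiniAgentSystem()
system.on_ready()
system.on_shutdown()
print([p.value for p in system.events])
# ['ready', 'running', 'shutting_down', 'stopped']
```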

AgentLocator

Primary entry point for obtaining agent proxies.

Resolves agent names (or Protocol classes) to DynamicAgentProxy (for code-defined agents) or VirtualAgentRunner (for YAML-defined / runtime agents). Supports A/B experiment resolution via ExperimentRegistry.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| container | PicoContainer | The pico-ioc container. | required |
| config_service | AgentConfigService | Service for resolving agent configurations. | required |
| tool_registry | ToolRegistry | Registry for tool lookup. | required |
| llm_factory | LLMFactory | Factory for creating LLM instances. | required |
| local_registry | LocalAgentRegistry | Registry of locally discovered agents. | required |
| model_router | ModelRouter | Capability-to-model router. | required |
| experiment_registry | ExperimentRegistry | A/B experiment variant selector. | required |
| scheduler | PlatformScheduler | Concurrency scheduler for async operations. | required |
Source code in src/pico_agent/locator.py
@component(scope="singleton")
class AgentLocator:
    """Primary entry point for obtaining agent proxies.

    Resolves agent names (or Protocol classes) to ``DynamicAgentProxy``
    (for code-defined agents) or ``VirtualAgentRunner`` (for YAML-defined
    / runtime agents).  Supports A/B experiment resolution via
    ``ExperimentRegistry``.

    Args:
        container: The pico-ioc container.
        config_service: Service for resolving agent configurations.
        tool_registry: Registry for tool lookup.
        llm_factory: Factory for creating LLM instances.
        local_registry: Registry of locally discovered agents.
        model_router: Capability-to-model router.
        experiment_registry: A/B experiment variant selector.
        scheduler: Concurrency scheduler for async operations.
    """

    def __init__(
        self,
        container: PicoContainer,
        config_service: AgentConfigService,
        tool_registry: ToolRegistry,
        llm_factory: LLMFactory,
        local_registry: LocalAgentRegistry,
        model_router: ModelRouter,
        experiment_registry: ExperimentRegistry,
        scheduler: PlatformScheduler,
    ):
        self.container = container
        self.config_service = config_service
        self.tool_registry = tool_registry
        self.llm_factory = llm_factory
        self.local_registry = local_registry
        self.model_router = model_router
        self.experiment_registry = experiment_registry
        self.scheduler = scheduler

    def get_agent(self, name_or_protocol: Any) -> Optional[Any]:
        """Retrieve an agent proxy by name or Protocol class.

        Resolution order:

        1. If *name_or_protocol* is a type, look up the agent name from its
           ``AGENT_META_KEY`` metadata or from ``LocalAgentRegistry``.
        2. If a string, pass through ``ExperimentRegistry.resolve_variant()``
           for A/B testing support.
        3. If a matching Protocol exists locally, create a
           ``DynamicAgentProxy``.
        4. Otherwise, attempt to create a ``VirtualAgentRunner`` from config.

        Args:
            name_or_protocol: Either an agent name string or a Protocol class.

        Returns:
            A ``DynamicAgentProxy`` or ``VirtualAgentRunner``, or ``None`` if
            no agent could be resolved.
        """
        agent_name = ""
        protocol = None

        if isinstance(name_or_protocol, type):
            protocol = name_or_protocol
            if hasattr(protocol, AGENT_META_KEY):
                agent_name = getattr(protocol, AGENT_META_KEY).name
            else:
                for n, p in self.local_registry._protocols.items():
                    if p == protocol:
                        agent_name = n
                        break
        else:
            requested_name = str(name_or_protocol)
            agent_name = self.experiment_registry.resolve_variant(requested_name)
            protocol = self.local_registry.get_protocol(agent_name)

        if not agent_name:
            return None

        if protocol:
            return self._create_proxy(agent_name, protocol)

        try:
            config = self.config_service.get_config(agent_name)
            if config:
                return VirtualAgentRunner(
                    config=config,
                    tool_registry=self.tool_registry,
                    llm_factory=self.llm_factory,
                    model_router=self.model_router,
                    container=self.container,
                    locator=self,
                    scheduler=self.scheduler,
                )
        except ValueError:
            pass

        return None

    def _create_proxy(self, name: str, protocol: Optional[Type]) -> Any:
        return DynamicAgentProxy(
            agent_name=name,
            protocol_cls=protocol,
            config_service=self.config_service,
            tool_registry=self.tool_registry,
            llm_factory=self.llm_factory,
            model_router=self.model_router,
            container=self.container,
            locator=self,
        )

    def create_proxy(self, protocol: Type) -> Any:
        """Create a ``DynamicAgentProxy`` directly from a Protocol class.

        Convenience method that extracts the agent name from the Protocol's
        ``AGENT_META_KEY`` metadata.

        Args:
            protocol: A Protocol class decorated with ``@agent``.

        Returns:
            A ``DynamicAgentProxy`` for the given Protocol.
        """
        config = getattr(protocol, AGENT_META_KEY)
        return self._create_proxy(config.name, protocol)

get_agent(name_or_protocol)

Retrieve an agent proxy by name or Protocol class.

Resolution order:

  1. If name_or_protocol is a type, look up the agent name from its AGENT_META_KEY metadata or from LocalAgentRegistry.
  2. If a string, pass through ExperimentRegistry.resolve_variant() for A/B testing support.
  3. If a matching Protocol exists locally, create a DynamicAgentProxy.
  4. Otherwise, attempt to create a VirtualAgentRunner from config.

Parameters:

Name Type Description Default
name_or_protocol Any

Either an agent name string or a Protocol class.

required

Returns:

Type Description
Optional[Any]

A DynamicAgentProxy or VirtualAgentRunner, or None if no agent could be resolved.

Source code in src/pico_agent/locator.py
def get_agent(self, name_or_protocol: Any) -> Optional[Any]:
    """Retrieve an agent proxy by name or Protocol class.

    Resolution order:

    1. If *name_or_protocol* is a type, look up the agent name from its
       ``AGENT_META_KEY`` metadata or from ``LocalAgentRegistry``.
    2. If a string, pass through ``ExperimentRegistry.resolve_variant()``
       for A/B testing support.
    3. If a matching Protocol exists locally, create a
       ``DynamicAgentProxy``.
    4. Otherwise, attempt to create a ``VirtualAgentRunner`` from config.

    Args:
        name_or_protocol: Either an agent name string or a Protocol class.

    Returns:
        A ``DynamicAgentProxy`` or ``VirtualAgentRunner``, or ``None`` if
        no agent could be resolved.
    """
    agent_name = ""
    protocol = None

    if isinstance(name_or_protocol, type):
        protocol = name_or_protocol
        if hasattr(protocol, AGENT_META_KEY):
            agent_name = getattr(protocol, AGENT_META_KEY).name
        else:
            for n, p in self.local_registry._protocols.items():
                if p == protocol:
                    agent_name = n
                    break
    else:
        requested_name = str(name_or_protocol)
        agent_name = self.experiment_registry.resolve_variant(requested_name)
        protocol = self.local_registry.get_protocol(agent_name)

    if not agent_name:
        return None

    if protocol:
        return self._create_proxy(agent_name, protocol)

    try:
        config = self.config_service.get_config(agent_name)
        if config:
            return VirtualAgentRunner(
                config=config,
                tool_registry=self.tool_registry,
                llm_factory=self.llm_factory,
                model_router=self.model_router,
                container=self.container,
                locator=self,
                scheduler=self.scheduler,
            )
    except ValueError:
        pass

    return None
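The four-step resolution order can be illustrated with a standalone sketch. The dictionaries `local_protocols`, `experiments`, and `yaml_configs` below are illustrative stand-ins for `LocalAgentRegistry`, `ExperimentRegistry`, and `AgentConfigService`, not the real pico_agent objects:

```python
from typing import Any, Optional

# Illustrative stand-ins for the real registries (assumed shapes, not pico_agent API).
local_protocols = {"summarizer": type("SummarizerProto", (), {})}
experiments = {"summarizer": "summarizer"}          # name -> active A/B variant
yaml_configs = {"virtual_bot": {"name": "virtual_bot"}}

def resolve_agent(name_or_protocol: Any) -> Optional[str]:
    """Mirror AgentLocator.get_agent's order; return a label for what would be built."""
    if isinstance(name_or_protocol, type):
        # Step 1: reverse-lookup the agent name from the local registry.
        agent_name = next(
            (n for n, p in local_protocols.items() if p is name_or_protocol), ""
        )
        protocol = name_or_protocol if agent_name else None
    else:
        # Step 2: route the string name through A/B experiments first.
        agent_name = experiments.get(str(name_or_protocol), str(name_or_protocol))
        protocol = local_protocols.get(agent_name)

    if not agent_name:
        return None
    if protocol is not None:
        return f"DynamicAgentProxy({agent_name})"    # Step 3: local Protocol wins
    if agent_name in yaml_configs:
        return f"VirtualAgentRunner({agent_name})"   # Step 4: fall back to config
    return None

print(resolve_agent("summarizer"))   # DynamicAgentProxy(summarizer)
print(resolve_agent("virtual_bot"))  # VirtualAgentRunner(virtual_bot)
print(resolve_agent("missing"))      # None
```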

create_proxy(protocol)

Create a DynamicAgentProxy directly from a Protocol class.

Convenience method that extracts the agent name from the Protocol's AGENT_META_KEY metadata.

Parameters:

Name Type Description Default
protocol Type

A Protocol class decorated with @agent.

required

Returns:

Type Description
Any

A DynamicAgentProxy for the given Protocol.

Source code in src/pico_agent/locator.py
def create_proxy(self, protocol: Type) -> Any:
    """Create a ``DynamicAgentProxy`` directly from a Protocol class.

    Convenience method that extracts the agent name from the Protocol's
    ``AGENT_META_KEY`` metadata.

    Args:
        protocol: A Protocol class decorated with ``@agent``.

    Returns:
        A ``DynamicAgentProxy`` for the given Protocol.
    """
    config = getattr(protocol, AGENT_META_KEY)
    return self._create_proxy(config.name, protocol)

AgentConfigService

Merges central, local, and runtime configuration for agents.

Configuration priority (highest wins): runtime > central > local. Central config (from CentralConfigClient) takes precedence over the local config discovered by AgentScanner, and runtime overrides set via update_agent_config() are applied on top of whichever base config is found.

Parameters:

Name Type Description Default
central_client CentralConfigClient

Remote configuration client.

required
local_registry LocalAgentRegistry

Registry populated by AgentScanner.

required
Source code in src/pico_agent/registry.py
@component
class AgentConfigService:
    """Merges central, local, and runtime configuration for agents.

    Configuration priority (highest wins): **runtime > central > local**.
    Central config (from ``CentralConfigClient``) takes precedence over the
    local config discovered by ``AgentScanner``, and runtime overrides set
    via ``update_agent_config()`` are applied on top of whichever base
    config is found.

    Args:
        central_client: Remote configuration client.
        local_registry: Registry populated by ``AgentScanner``.
    """

    def __init__(self, central_client: CentralConfigClient, local_registry: LocalAgentRegistry):
        self.central_client = central_client
        self.local_registry = local_registry
        self.auto_register = True
        self._runtime_overrides: Dict[str, Dict[str, Any]] = {}

    def get_config(self, name: str) -> AgentConfig:
        """Return the effective ``AgentConfig`` for the named agent.

        Merges remote, local, and runtime sources.

        Args:
            name: Agent identifier.

        Returns:
            The merged ``AgentConfig``.

        Raises:
            ValueError: If no configuration exists for the given name.
                Message: ``"No configuration found for agent: <name>"``.
        """
        remote_config = self.central_client.get_agent_config(name)
        local_config = self.local_registry.get_config(name)

        base_config = remote_config or local_config
        runtime_data = self._runtime_overrides.get(name)

        if base_config:
            if runtime_data:
                return replace(base_config, **runtime_data)
            return base_config

        elif runtime_data:
            config_data = runtime_data.copy()
            if "name" not in config_data:
                config_data["name"] = name
            return AgentConfig(**config_data)

        raise ValueError(f"No configuration found for agent: {name}")

    def update_agent_config(self, name: str, **kwargs):
        """Apply runtime overrides to an agent's configuration.

        Overrides are merged on each ``get_config()`` call.

        Args:
            name: Agent identifier.
            **kwargs: Fields of ``AgentConfig`` to override.
        """
        if name not in self._runtime_overrides:
            self._runtime_overrides[name] = {}
        self._runtime_overrides[name].update(kwargs)

    def reset_agent_config(self, name: str):
        """Remove all runtime overrides for an agent.

        Args:
            name: Agent identifier.
        """
        if name in self._runtime_overrides:
            del self._runtime_overrides[name]

get_config(name)

Return the effective AgentConfig for the named agent.

Merges remote, local, and runtime sources.

Parameters:

Name Type Description Default
name str

Agent identifier.

required

Returns:

Type Description
AgentConfig

The merged AgentConfig.

Raises:

Type Description
ValueError

If no configuration exists for the given name. Message: "No configuration found for agent: <name>".

Source code in src/pico_agent/registry.py
def get_config(self, name: str) -> AgentConfig:
    """Return the effective ``AgentConfig`` for the named agent.

    Merges remote, local, and runtime sources.

    Args:
        name: Agent identifier.

    Returns:
        The merged ``AgentConfig``.

    Raises:
        ValueError: If no configuration exists for the given name.
            Message: ``"No configuration found for agent: <name>"``.
    """
    remote_config = self.central_client.get_agent_config(name)
    local_config = self.local_registry.get_config(name)

    base_config = remote_config or local_config
    runtime_data = self._runtime_overrides.get(name)

    if base_config:
        if runtime_data:
            return replace(base_config, **runtime_data)
        return base_config

    elif runtime_data:
        config_data = runtime_data.copy()
        if "name" not in config_data:
            config_data["name"] = name
        return AgentConfig(**config_data)

    raise ValueError(f"No configuration found for agent: {name}")

update_agent_config(name, **kwargs)

Apply runtime overrides to an agent's configuration.

Overrides are merged on each get_config() call.

Parameters:

Name Type Description Default
name str

Agent identifier.

required
**kwargs

Fields of AgentConfig to override.

{}
Source code in src/pico_agent/registry.py
def update_agent_config(self, name: str, **kwargs):
    """Apply runtime overrides to an agent's configuration.

    Overrides are merged on each ``get_config()`` call.

    Args:
        name: Agent identifier.
        **kwargs: Fields of ``AgentConfig`` to override.
    """
    if name not in self._runtime_overrides:
        self._runtime_overrides[name] = {}
    self._runtime_overrides[name].update(kwargs)

reset_agent_config(name)

Remove all runtime overrides for an agent.

Parameters:

Name Type Description Default
name str

Agent identifier.

required
Source code in src/pico_agent/registry.py
def reset_agent_config(self, name: str):
    """Remove all runtime overrides for an agent.

    Args:
        name: Agent identifier.
    """
    if name in self._runtime_overrides:
        del self._runtime_overrides[name]
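The merge behaviour can be exercised with a trimmed, self-contained sketch. The two-field `AgentConfig` and the plain dicts standing in for `CentralConfigClient`, `LocalAgentRegistry`, and the runtime overrides are illustrative only:

```python
from dataclasses import dataclass, replace
from typing import Optional

# Minimal stand-in for AgentConfig (two fields, for illustration only).
@dataclass
class AgentConfig:
    name: str
    temperature: float = 0.7

central = {"bot": AgentConfig(name="bot", temperature=0.3)}   # CentralConfigClient
local = {"bot": AgentConfig(name="bot", temperature=0.9)}     # LocalAgentRegistry
runtime: dict = {}                                            # update_agent_config()

def get_config(name: str) -> AgentConfig:
    # Central beats local for the base config; runtime overrides beat both.
    base: Optional[AgentConfig] = central.get(name) or local.get(name)
    overrides = runtime.get(name)
    if base:
        return replace(base, **overrides) if overrides else base
    if overrides:
        return AgentConfig(**{"name": name, **overrides})
    raise ValueError(f"No configuration found for agent: {name}")

print(get_config("bot").temperature)   # 0.3  (central beats local)
runtime["bot"] = {"temperature": 1.2}  # what update_agent_config() would record
print(get_config("bot").temperature)   # 1.2  (runtime beats central)
```

Note that `dataclasses.replace` builds a new config each call, so overrides never mutate the cached base config.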

ToolRegistry

Central registry that stores tool classes/instances and supports tag-based lookup.

Tools are registered by ToolScanner during auto-discovery or manually via register(). At execution time, DynamicAgentProxy and VirtualAgentRunner query this registry to resolve tool dependencies.

Source code in src/pico_agent/registry.py
@component
class ToolRegistry:
    """Central registry that stores tool classes/instances and supports tag-based lookup.

    Tools are registered by ``ToolScanner`` during auto-discovery or manually
    via ``register()``.  At execution time, ``DynamicAgentProxy`` and
    ``VirtualAgentRunner`` query this registry to resolve tool dependencies.
    """

    def __init__(self):
        self._tools: Dict[str, Any] = {}
        self._tag_map: Dict[str, List[str]] = {}

    def register(self, name: str, tool_cls_or_instance: Any, tags: Optional[List[str]] = None) -> None:
        """Register a tool by name with optional tags.

        Args:
            name: Unique tool identifier.
            tool_cls_or_instance: The tool class or an already-instantiated
                tool object.
            tags: Optional list of tags for dynamic tool lookup.  Tools
                tagged ``"global"`` are attached to every agent automatically.
        """
        tags = tags or []
        self._tools[name] = tool_cls_or_instance
        for tag in tags:
            if tag not in self._tag_map:
                self._tag_map[tag] = []
            self._tag_map[tag].append(name)

    def get_tool(self, name: str) -> Optional[Any]:
        """Retrieve a tool by name.

        Args:
            name: The tool identifier.

        Returns:
            The tool class or instance, or ``None`` if not found.
        """
        return self._tools.get(name)

    def get_tool_names_by_tag(self, tag: str) -> List[str]:
        """Return all tool names associated with the given tag.

        Args:
            tag: The tag to search for.

        Returns:
            List of matching tool names (may be empty).
        """
        return self._tag_map.get(tag, [])

    def get_dynamic_tools(self, agent_tags: List[str]) -> List[Any]:
        """Collect tool instances matching any of the given tags, plus ``"global"`` tools.

        Duplicates are excluded.

        Args:
            agent_tags: Tags from the agent's ``AgentConfig.tags``.

        Returns:
            De-duplicated list of tool instances.
        """
        found_tools = []
        for tag in agent_tags:
            tool_names = self._tag_map.get(tag, [])
            for name in tool_names:
                t = self._tools.get(name)
                if t and t not in found_tools:
                    found_tools.append(t)

        global_names = self._tag_map.get("global", [])
        for name in global_names:
            t = self._tools.get(name)
            if t and t not in found_tools:
                found_tools.append(t)
        return found_tools

register(name, tool_cls_or_instance, tags=None)

Register a tool by name with optional tags.

Parameters:

Name Type Description Default
name str

Unique tool identifier.

required
tool_cls_or_instance Any

The tool class or an already-instantiated tool object.

required
tags Optional[List[str]]

Optional list of tags for dynamic tool lookup. Tools tagged "global" are attached to every agent automatically.

None
Source code in src/pico_agent/registry.py
def register(self, name: str, tool_cls_or_instance: Any, tags: Optional[List[str]] = None) -> None:
    """Register a tool by name with optional tags.

    Args:
        name: Unique tool identifier.
        tool_cls_or_instance: The tool class or an already-instantiated
            tool object.
        tags: Optional list of tags for dynamic tool lookup.  Tools
            tagged ``"global"`` are attached to every agent automatically.
    """
    tags = tags or []
    self._tools[name] = tool_cls_or_instance
    for tag in tags:
        if tag not in self._tag_map:
            self._tag_map[tag] = []
        self._tag_map[tag].append(name)

get_tool(name)

Retrieve a tool by name.

Parameters:

Name Type Description Default
name str

The tool identifier.

required

Returns:

Type Description
Optional[Any]

The tool class or instance, or None if not found.

Source code in src/pico_agent/registry.py
def get_tool(self, name: str) -> Optional[Any]:
    """Retrieve a tool by name.

    Args:
        name: The tool identifier.

    Returns:
        The tool class or instance, or ``None`` if not found.
    """
    return self._tools.get(name)

get_tool_names_by_tag(tag)

Return all tool names associated with the given tag.

Parameters:

Name Type Description Default
tag str

The tag to search for.

required

Returns:

Type Description
List[str]

List of matching tool names (may be empty).

Source code in src/pico_agent/registry.py
def get_tool_names_by_tag(self, tag: str) -> List[str]:
    """Return all tool names associated with the given tag.

    Args:
        tag: The tag to search for.

    Returns:
        List of matching tool names (may be empty).
    """
    return self._tag_map.get(tag, [])

get_dynamic_tools(agent_tags)

Collect tool instances matching any of the given tags, plus "global" tools.

Duplicates are excluded.

Parameters:

Name Type Description Default
agent_tags List[str]

Tags from the agent's AgentConfig.tags.

required

Returns:

Type Description
List[Any]

De-duplicated list of tool instances.

Source code in src/pico_agent/registry.py
def get_dynamic_tools(self, agent_tags: List[str]) -> List[Any]:
    """Collect tool instances matching any of the given tags, plus ``"global"`` tools.

    Duplicates are excluded.

    Args:
        agent_tags: Tags from the agent's ``AgentConfig.tags``.

    Returns:
        De-duplicated list of tool instances.
    """
    found_tools = []
    for tag in agent_tags:
        tool_names = self._tag_map.get(tag, [])
        for name in tool_names:
            t = self._tools.get(name)
            if t and t not in found_tools:
                found_tools.append(t)

    global_names = self._tag_map.get("global", [])
    for name in global_names:
        t = self._tools.get(name)
        if t and t not in found_tools:
            found_tools.append(t)
    return found_tools
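The tag-lookup semantics can be demonstrated with a trimmed, standalone version of the registry (the `@component` decorator is omitted, and plain strings stand in for real tool instances):

```python
from typing import Any, Dict, List, Optional

class MiniToolRegistry:
    """Trimmed stand-in for ToolRegistry: name -> tool plus tag -> names."""

    def __init__(self):
        self._tools: Dict[str, Any] = {}
        self._tag_map: Dict[str, List[str]] = {}

    def register(self, name: str, tool: Any, tags: Optional[List[str]] = None):
        self._tools[name] = tool
        for tag in tags or []:
            self._tag_map.setdefault(tag, []).append(name)

    def get_dynamic_tools(self, agent_tags: List[str]) -> List[Any]:
        # Tools matching any agent tag, then "global" tools, de-duplicated.
        found: List[Any] = []
        for tag in [*agent_tags, "global"]:
            for name in self._tag_map.get(tag, []):
                tool = self._tools.get(name)
                if tool is not None and tool not in found:
                    found.append(tool)
        return found

reg = MiniToolRegistry()
reg.register("search", "SearchTool", tags=["web"])
reg.register("calc", "CalcTool", tags=["math", "web"])
reg.register("logger", "LoggerTool", tags=["global"])

print(reg.get_dynamic_tools(["web"]))  # ['SearchTool', 'CalcTool', 'LoggerTool']
print(reg.get_dynamic_tools([]))       # ['LoggerTool']
```

Because `"global"` is appended to the tag list, globally tagged tools reach every agent even when `agent_tags` is empty.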

AgentScanner

Bases: _ScannerBase

Discovers @agent-decorated Protocol classes and registers them.

Walks Python modules to find classes that carry the IS_AGENT_INTERFACE flag, extracts the AgentConfig from AGENT_META_KEY, and stores both the Protocol class and its config in LocalAgentRegistry.

Parameters:

Name Type Description Default
registry LocalAgentRegistry

The LocalAgentRegistry to populate.

required
Source code in src/pico_agent/scanner.py
@component
class AgentScanner(_ScannerBase):
    """Discovers ``@agent``-decorated Protocol classes and registers them.

    Walks Python modules to find classes that carry the ``IS_AGENT_INTERFACE``
    flag, extracts the ``AgentConfig`` from ``AGENT_META_KEY``, and stores
    both the Protocol class and its config in ``LocalAgentRegistry``.

    Args:
        registry: The ``LocalAgentRegistry`` to populate.
    """

    def __init__(self, registry: LocalAgentRegistry):
        self.registry = registry
        self._scanned_modules: Set[str] = set()

    def scan_module(self, module: Any):
        """Inspect a module for ``@agent``-decorated classes.

        Each discovered class is registered in the ``LocalAgentRegistry``.
        Modules are scanned at most once.

        Args:
            module: A Python module object.
        """
        mod_name = module.__name__
        if mod_name in self._scanned_modules:
            return
        self._scanned_modules.add(mod_name)

        try:
            members = inspect.getmembers(module)
        except (TypeError, ModuleNotFoundError) as e:
            logger.warning("Cannot inspect module %s: %s", mod_name, e)
            return

        for name, obj in members:
            if isinstance(obj, type) and getattr(obj, IS_AGENT_INTERFACE, False):
                config = getattr(obj, AGENT_META_KEY)
                self.registry.register(config.name, obj, config)

scan_module(module)

Inspect a module for @agent-decorated classes.

Each discovered class is registered in the LocalAgentRegistry. Modules are scanned at most once.

Parameters:

Name Type Description Default
module Any

A Python module object.

required
Source code in src/pico_agent/scanner.py
def scan_module(self, module: Any):
    """Inspect a module for ``@agent``-decorated classes.

    Each discovered class is registered in the ``LocalAgentRegistry``.
    Modules are scanned at most once.

    Args:
        module: A Python module object.
    """
    mod_name = module.__name__
    if mod_name in self._scanned_modules:
        return
    self._scanned_modules.add(mod_name)

    try:
        members = inspect.getmembers(module)
    except (TypeError, ModuleNotFoundError) as e:
        logger.warning("Cannot inspect module %s: %s", mod_name, e)
        return

    for name, obj in members:
        if isinstance(obj, type) and getattr(obj, IS_AGENT_INTERFACE, False):
            config = getattr(obj, AGENT_META_KEY)
            self.registry.register(config.name, obj, config)
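The discovery mechanism can be reproduced against a synthetic module. The marker strings and the `SimpleNamespace` metadata below are illustrative stand-ins for what the `@agent` decorator attaches; the real `IS_AGENT_INTERFACE` and `AGENT_META_KEY` constants live in pico_agent:

```python
import inspect
import types

# Illustrative marker names (assumptions; the real constants come from pico_agent).
IS_AGENT_INTERFACE = "_is_agent_interface"
AGENT_META_KEY = "_agent_meta"

# Build a synthetic module containing one "decorated" class and one non-agent.
mod = types.ModuleType("fake_agents")

class HelperBot:  # what an @agent-decorated Protocol would look like after decoration
    pass

setattr(HelperBot, IS_AGENT_INTERFACE, True)
setattr(HelperBot, AGENT_META_KEY, types.SimpleNamespace(name="helper_bot"))
mod.HelperBot = HelperBot
mod.not_an_agent = 42

def scan(module) -> dict:
    """Mirror AgentScanner.scan_module: collect flagged classes keyed by agent name."""
    registry = {}
    for _, obj in inspect.getmembers(module):
        if isinstance(obj, type) and getattr(obj, IS_AGENT_INTERFACE, False):
            meta = getattr(obj, AGENT_META_KEY)
            registry[meta.name] = obj
    return registry

print(list(scan(mod)))  # ['helper_bot']
```

The `isinstance(obj, type)` guard is what keeps module-level values such as `not_an_agent` out of the registry.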

ToolScanner

Bases: _ScannerBase

Discovers @tool-decorated classes and registers them.

Walks Python modules to find classes that carry TOOL_META_KEY, extracts the ToolConfig, and stores the class in ToolRegistry.

Parameters:

Name Type Description Default
registry ToolRegistry

The ToolRegistry to populate.

required
Source code in src/pico_agent/scanner.py
@component
class ToolScanner(_ScannerBase):
    """Discovers ``@tool``-decorated classes and registers them.

    Walks Python modules to find classes that carry ``TOOL_META_KEY``,
    extracts the ``ToolConfig``, and stores the class in ``ToolRegistry``.

    Args:
        registry: The ``ToolRegistry`` to populate.
    """

    def __init__(self, registry: ToolRegistry):
        self.registry = registry
        self._scanned_modules: Set[str] = set()

    def scan_module(self, module: Any):
        """Inspect a module for ``@tool``-decorated classes.

        Each discovered class is registered in the ``ToolRegistry``.
        Modules are scanned at most once.

        Args:
            module: A Python module object.
        """
        mod_name = module.__name__
        if mod_name in self._scanned_modules:
            return
        self._scanned_modules.add(mod_name)

        try:
            members = inspect.getmembers(module)
        except (TypeError, ModuleNotFoundError) as e:
            logger.warning("Cannot inspect module %s: %s", mod_name, e)
            return

        for name, obj in members:
            if isinstance(obj, type) and hasattr(obj, TOOL_META_KEY):
                config = getattr(obj, TOOL_META_KEY)
                self.registry.register(config.name, obj)

scan_module(module)

Inspect a module for @tool-decorated classes.

Each discovered class is registered in the ToolRegistry. Modules are scanned at most once.

Parameters:

Name Type Description Default
module Any

A Python module object.

required
Source code in src/pico_agent/scanner.py
def scan_module(self, module: Any):
    """Inspect a module for ``@tool``-decorated classes.

    Each discovered class is registered in the ``ToolRegistry``.
    Modules are scanned at most once.

    Args:
        module: A Python module object.
    """
    mod_name = module.__name__
    if mod_name in self._scanned_modules:
        return
    self._scanned_modules.add(mod_name)

    try:
        members = inspect.getmembers(module)
    except (TypeError, ModuleNotFoundError) as e:
        logger.warning("Cannot inspect module %s: %s", mod_name, e)
        return

    for name, obj in members:
        if isinstance(obj, type) and hasattr(obj, TOOL_META_KEY):
            config = getattr(obj, TOOL_META_KEY)
            self.registry.register(config.name, obj)

TraceRun dataclass

A single trace record for an agent, tool, or LLM invocation.

Attributes:

Name Type Description
id str

Unique run identifier (UUID).

name str

Human-readable name (e.g., agent name or "LLM: gpt-5").

run_type str

Category string -- "agent", "llm", or "tool".

inputs Dict[str, Any]

Input data (e.g., messages, arguments).

parent_id Optional[str]

ID of the parent run, or None for root runs.

start_time float

Unix timestamp when the run started.

end_time Optional[float]

Unix timestamp when the run ended (set by end_run).

outputs Optional[Dict[str, Any]]

Output data (set by end_run).

error Optional[str]

Error message if the run failed (set by end_run).

extra Dict[str, Any]

Arbitrary metadata (e.g., {"runtime_model": "gpt-4"}).

Source code in src/pico_agent/tracing.py
@dataclass
class TraceRun:
    """A single trace record for an agent, tool, or LLM invocation.

    Attributes:
        id: Unique run identifier (UUID).
        name: Human-readable name (e.g., agent name or ``"LLM: gpt-5"``).
        run_type: Category string -- ``"agent"``, ``"llm"``, or ``"tool"``.
        inputs: Input data (e.g., messages, arguments).
        parent_id: ID of the parent run, or ``None`` for root runs.
        start_time: Unix timestamp when the run started.
        end_time: Unix timestamp when the run ended (set by ``end_run``).
        outputs: Output data (set by ``end_run``).
        error: Error message if the run failed (set by ``end_run``).
        extra: Arbitrary metadata (e.g., ``{"runtime_model": "gpt-4"}``).
    """

    id: str
    name: str
    run_type: str
    inputs: Dict[str, Any]
    parent_id: Optional[str] = None
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None
    outputs: Optional[Dict[str, Any]] = None
    error: Optional[str] = None
    extra: Dict[str, Any] = field(default_factory=dict)

TraceService

Singleton service that collects hierarchical trace runs.

Traces are stored in memory and can be retrieved via get_traces(). On container shutdown (@cleanup), all traces are flushed.

Source code in src/pico_agent/tracing.py
@component(scope="singleton")
class TraceService:
    """Singleton service that collects hierarchical trace runs.

    Traces are stored in memory and can be retrieved via ``get_traces()``.
    On container shutdown (``@cleanup``), all traces are flushed.
    """

    def __init__(self):
        self.traces: List[TraceRun] = []

    def start_run(self, name: str, run_type: str, inputs: Dict[str, Any], extra: Dict[str, Any] = None) -> str:
        """Begin a new trace run.

        Automatically sets the parent ID from ``run_context`` and updates
        the context var to the new run ID.

        Args:
            name: Human-readable run name.
            run_type: Category (``"agent"``, ``"llm"``, ``"tool"``).
            inputs: Input data to record.
            extra: Optional metadata dict.

        Returns:
            The unique run ID (UUID string).
        """
        parent_id = run_context.get()
        run_id = str(uuid.uuid4())

        run = TraceRun(id=run_id, name=name, run_type=run_type, inputs=inputs, parent_id=parent_id, extra=extra or {})

        self.traces.append(run)
        run_context.set(run_id)
        return run_id

    def end_run(self, run_id: str, outputs: Any = None, error: Exception = None):
        """Complete a trace run, recording outputs or an error.

        Restores ``run_context`` to the parent run's ID.

        Args:
            run_id: The ID returned by ``start_run()``.
            outputs: Output data -- can be a string, dict, Pydantic model,
                or any object (converted via ``str()``).
            error: Exception instance if the run failed.
        """
        for run in reversed(self.traces):
            if run.id == run_id:
                run.end_time = time.time()
                if error:
                    run.error = str(error)
                else:
                    if isinstance(outputs, (str, int, float, bool)):
                        run.outputs = {"output": outputs}
                    elif hasattr(outputs, "dict"):
                        run.outputs = outputs.dict()
                    elif isinstance(outputs, dict):
                        run.outputs = outputs
                    else:
                        run.outputs = {"output": str(outputs)}

                run_context.set(run.parent_id)
                self._persist(run)
                break

    def _persist(self, run: TraceRun):
        pass

    @cleanup
    def _on_shutdown(self):
        logger.debug("TraceService: flushing %d traces", len(self.traces))
        self.traces.clear()

    def get_traces(self) -> List[Dict[str, Any]]:
        """Return all recorded traces as a list of dictionaries.

        Returns:
            List of dicts, each representing a ``TraceRun``.
        """
        return [asdict(t) for t in self.traces]

start_run(name, run_type, inputs, extra=None)

Begin a new trace run.

Automatically sets the parent ID from run_context and updates the context var to the new run ID.

Parameters:

Name Type Description Default
name str

Human-readable run name.

required
run_type str

Category ("agent", "llm", "tool").

required
inputs Dict[str, Any]

Input data to record.

required
extra Dict[str, Any]

Optional metadata dict.

None

Returns:

Type Description
str

The unique run ID (UUID string).

Source code in src/pico_agent/tracing.py
def start_run(self, name: str, run_type: str, inputs: Dict[str, Any], extra: Dict[str, Any] = None) -> str:
    """Begin a new trace run.

    Automatically sets the parent ID from ``run_context`` and updates
    the context var to the new run ID.

    Args:
        name: Human-readable run name.
        run_type: Category (``"agent"``, ``"llm"``, ``"tool"``).
        inputs: Input data to record.
        extra: Optional metadata dict.

    Returns:
        The unique run ID (UUID string).
    """
    parent_id = run_context.get()
    run_id = str(uuid.uuid4())

    run = TraceRun(id=run_id, name=name, run_type=run_type, inputs=inputs, parent_id=parent_id, extra=extra or {})

    self.traces.append(run)
    run_context.set(run_id)
    return run_id

end_run(run_id, outputs=None, error=None)

Complete a trace run, recording outputs or an error.

Restores run_context to the parent run's ID.

Parameters:

Name Type Description Default
run_id str

The ID returned by start_run().

required
outputs Any

Output data -- can be a string, dict, Pydantic model, or any object (converted via str()).

None
error Exception

Exception instance if the run failed.

None
Source code in src/pico_agent/tracing.py
def end_run(self, run_id: str, outputs: Any = None, error: Exception = None):
    """Complete a trace run, recording outputs or an error.

    Restores ``run_context`` to the parent run's ID.

    Args:
        run_id: The ID returned by ``start_run()``.
        outputs: Output data -- can be a string, dict, Pydantic model,
            or any object (converted via ``str()``).
        error: Exception instance if the run failed.
    """
    for run in reversed(self.traces):
        if run.id == run_id:
            run.end_time = time.time()
            if error:
                run.error = str(error)
            else:
                if isinstance(outputs, (str, int, float, bool)):
                    run.outputs = {"output": outputs}
                elif hasattr(outputs, "dict"):
                    run.outputs = outputs.dict()
                elif isinstance(outputs, dict):
                    run.outputs = outputs
                else:
                    run.outputs = {"output": str(outputs)}

            run_context.set(run.parent_id)
            self._persist(run)
            break

get_traces()

Return all recorded traces as a list of dictionaries.

Returns:

| Type | Description |
| --- | --- |
| `List[Dict[str, Any]]` | List of dicts, each representing a `TraceRun`. |

Source code in src/pico_agent/tracing.py
def get_traces(self) -> List[Dict[str, Any]]:
    """Return all recorded traces as a list of dictionaries.

    Returns:
        List of dicts, each representing a ``TraceRun``.
    """
    return [asdict(t) for t in self.traces]
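The parent-restoration behaviour of `start_run()`/`end_run()` can be illustrated with a self-contained sketch. This is stdlib-only and not the actual `TraceService`; the names mirror the library's, but the bookkeeping dict is an illustrative stand-in:

```python
import uuid
from contextvars import ContextVar

# A ContextVar tracks the "current" run so children record their parent,
# and end_run() restores the parent as the current run.
run_context: ContextVar = ContextVar("run_context", default=None)
_parents = {}  # illustrative stand-in for the TraceRun list

def start_run(name: str) -> str:
    parent_id = run_context.get()
    run_id = str(uuid.uuid4())
    _parents[run_id] = parent_id
    run_context.set(run_id)
    return run_id

def end_run(run_id: str) -> None:
    # Mirrors the library's end_run(): restore context to the parent run.
    run_context.set(_parents[run_id])

outer = start_run("agent")
inner = start_run("tool")
assert run_context.get() == inner   # child is now current
end_run(inner)
assert run_context.get() == outer   # parent restored
end_run(outer)
assert run_context.get() is None
```

This is why nested agent, LLM, and tool runs form a tree: each `end_run()` pops the context back to the enclosing run.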

AgentValidator

Validates AgentConfig instances for correctness.

Checks performed:

  • name must not be empty.
  • capability must be defined.
  • temperature must be between 0.0 and 2.0 (warning if > 1.0).
  • system_prompt should not be empty (warning).
Source code in src/pico_agent/validation.py
class AgentValidator:
    """Validates ``AgentConfig`` instances for correctness.

    Checks performed:

    - ``name`` must not be empty.
    - ``capability`` must be defined.
    - ``temperature`` must be between 0.0 and 2.0 (warning if > 1.0).
    - ``system_prompt`` should not be empty (warning).
    """

    def validate(self, config: AgentConfig) -> ValidationReport:
        """Validate an agent configuration.

        Args:
            config: The ``AgentConfig`` to validate.

        Returns:
            A ``ValidationReport`` with ``valid=True`` if no errors were
            found, and a list of ``ValidationIssue`` items.
        """
        issues = []

        if not config.name or len(config.name.strip()) == 0:
            issues.append(ValidationIssue("name", "Agent name cannot be empty", Severity.ERROR))

        if not config.capability:
            issues.append(ValidationIssue("capability", "Agent capability must be defined", Severity.ERROR))

        if not (0.0 <= config.temperature <= 2.0):
            issues.append(ValidationIssue("temperature", "Temperature must be between 0.0 and 2.0", Severity.ERROR))
        elif config.temperature > 1.0:
            issues.append(
                ValidationIssue("temperature", "High temperature (>1.0) may cause hallucinations", Severity.WARNING)
            )

        if not config.system_prompt:
            issues.append(ValidationIssue("system_prompt", "System prompt is empty", Severity.WARNING))

        return ValidationReport(valid=not any(i.severity == Severity.ERROR for i in issues), issues=issues)

validate(config)

Validate an agent configuration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `AgentConfig` | The `AgentConfig` to validate. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `ValidationReport` | A `ValidationReport` with `valid=True` if no errors were found, and a list of `ValidationIssue` items. |

Source code in src/pico_agent/validation.py
def validate(self, config: AgentConfig) -> ValidationReport:
    """Validate an agent configuration.

    Args:
        config: The ``AgentConfig`` to validate.

    Returns:
        A ``ValidationReport`` with ``valid=True`` if no errors were
        found, and a list of ``ValidationIssue`` items.
    """
    issues = []

    if not config.name or len(config.name.strip()) == 0:
        issues.append(ValidationIssue("name", "Agent name cannot be empty", Severity.ERROR))

    if not config.capability:
        issues.append(ValidationIssue("capability", "Agent capability must be defined", Severity.ERROR))

    if not (0.0 <= config.temperature <= 2.0):
        issues.append(ValidationIssue("temperature", "Temperature must be between 0.0 and 2.0", Severity.ERROR))
    elif config.temperature > 1.0:
        issues.append(
            ValidationIssue("temperature", "High temperature (>1.0) may cause hallucinations", Severity.WARNING)
        )

    if not config.system_prompt:
        issues.append(ValidationIssue("system_prompt", "System prompt is empty", Severity.WARNING))

    return ValidationReport(valid=not any(i.severity == Severity.ERROR for i in issues), issues=issues)
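The temperature rule combines a hard bound (error) with a soft threshold (warning). A standalone sketch of just that check; the helper name and return values are illustrative, not part of the library:

```python
# Replicates the temperature branch of AgentValidator.validate():
# outside [0.0, 2.0] is an error; above 1.0 (but in range) is a warning.
def check_temperature(temperature: float) -> str:
    if not (0.0 <= temperature <= 2.0):
        return "error"
    if temperature > 1.0:
        return "warning"
    return "ok"

assert check_temperature(0.7) == "ok"
assert check_temperature(1.5) == "warning"
assert check_temperature(2.5) == "error"
```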

Severity

Bases: str, Enum

Severity level for a validation issue.

Attributes:

| Name | Description |
| --- | --- |
| `WARNING` | Non-fatal issue (e.g., empty system prompt). |
| `ERROR` | Fatal issue that prevents the agent from running. |

Source code in src/pico_agent/validation.py
class Severity(str, Enum):
    """Severity level for a validation issue.

    Attributes:
        WARNING: Non-fatal issue (e.g., empty system prompt).
        ERROR: Fatal issue that prevents the agent from running.
    """

    WARNING = "warning"
    ERROR = "error"

ValidationIssue dataclass

A single validation finding.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `field` | `str` | The `AgentConfig` field name that triggered the issue. | *required* |
| `message` | `str` | Human-readable description of the problem. | *required* |
| `severity` | `Severity` | `WARNING` or `ERROR`. | *required* |
Source code in src/pico_agent/validation.py
@dataclass
class ValidationIssue:
    """A single validation finding.

    Args:
        field: The ``AgentConfig`` field name that triggered the issue.
        message: Human-readable description of the problem.
        severity: ``WARNING`` or ``ERROR``.
    """

    field: str
    message: str
    severity: Severity

ValidationReport dataclass

Result of validating an AgentConfig.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `valid` | `bool` | `True` if no `ERROR`-level issues were found. | *required* |
| `issues` | `List[ValidationIssue]` | List of `ValidationIssue` instances. | `list()` |
Source code in src/pico_agent/validation.py
@dataclass
class ValidationReport:
    """Result of validating an ``AgentConfig``.

    Args:
        valid: ``True`` if no ``ERROR``-level issues were found.
        issues: List of ``ValidationIssue`` instances.
    """

    valid: bool
    issues: List[ValidationIssue] = field(default_factory=list)

    @property
    def has_errors(self) -> bool:
        """Return ``True`` if any issue has ``Severity.ERROR``."""
        return any(i.severity == Severity.ERROR for i in self.issues)

has_errors property

Return True if any issue has Severity.ERROR.

VirtualAgent

Bases: Protocol

Protocol for virtual agents (config-only, no Protocol class).

Both VirtualAgentRunner and dynamically created agents conform to this interface.

Source code in src/pico_agent/virtual.py
class VirtualAgent(Protocol):
    """Protocol for virtual agents (config-only, no Protocol class).

    Both ``VirtualAgentRunner`` and dynamically created agents conform to
    this interface.
    """

    def run(self, input: str) -> str:
        """Execute the agent synchronously.

        Args:
            input: The user message.

        Returns:
            The agent's text response.
        """
        ...

    async def arun(self, input: str) -> str:
        """Execute the agent asynchronously.

        Args:
            input: The user message.

        Returns:
            The agent's text response.
        """
        ...

    def run_structured(self, input: str, schema: Type[T]) -> T:
        """Execute the agent and parse the response into a Pydantic model.

        Args:
            input: The user message.
            schema: A ``pydantic.BaseModel`` subclass.

        Returns:
            An instance of *schema*.
        """
        ...

    def run_with_args(self, args: Dict[str, Any]) -> str:
        """Execute the agent with a dictionary of arguments.

        Args:
            args: Key-value pairs used to fill prompt templates.

        Returns:
            The agent's text response.
        """
        ...

run(input)

Execute the agent synchronously.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input` | `str` | The user message. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | The agent's text response. |

Source code in src/pico_agent/virtual.py
def run(self, input: str) -> str:
    """Execute the agent synchronously.

    Args:
        input: The user message.

    Returns:
        The agent's text response.
    """
    ...

arun(input) async

Execute the agent asynchronously.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input` | `str` | The user message. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | The agent's text response. |

Source code in src/pico_agent/virtual.py
async def arun(self, input: str) -> str:
    """Execute the agent asynchronously.

    Args:
        input: The user message.

    Returns:
        The agent's text response.
    """
    ...

run_structured(input, schema)

Execute the agent and parse the response into a Pydantic model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input` | `str` | The user message. | *required* |
| `schema` | `Type[T]` | A `pydantic.BaseModel` subclass. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `T` | An instance of *schema*. |

Source code in src/pico_agent/virtual.py
def run_structured(self, input: str, schema: Type[T]) -> T:
    """Execute the agent and parse the response into a Pydantic model.

    Args:
        input: The user message.
        schema: A ``pydantic.BaseModel`` subclass.

    Returns:
        An instance of *schema*.
    """
    ...

run_with_args(args)

Execute the agent with a dictionary of arguments.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `args` | `Dict[str, Any]` | Key-value pairs used to fill prompt templates. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `str` | The agent's text response. |

Source code in src/pico_agent/virtual.py
def run_with_args(self, args: Dict[str, Any]) -> str:
    """Execute the agent with a dictionary of arguments.

    Args:
        args: Key-value pairs used to fill prompt templates.

    Returns:
        The agent's text response.
    """
    ...
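The template-filling behaviour described for `run_with_args()` can be sketched with plain `str.format`. The template and keys here are illustrative, not part of the API:

```python
# Illustrative only: how a prompt template might be filled from an args dict,
# in the spirit of run_with_args(). Placeholder names are hypothetical.
template = "Summarize {doc} in {style} style."
args = {"doc": "the release notes", "style": "bullet"}

prompt = template.format(**args)
assert prompt == "Summarize the release notes in bullet style."
```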

VirtualAgentManager

Creates and manages virtual agents programmatically at runtime.

Virtual agents are config-only agents that do not require a Protocol class. Use create_agent() to define a new agent with inline parameters, or get_agent() to retrieve an existing one.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config_service` | `AgentConfigService` | Service for storing runtime agent configurations. | *required* |
| `tool_registry` | `ToolRegistry` | Registry for tool lookup. | *required* |
| `llm_factory` | `LLMFactory` | Factory for creating LLM instances. | *required* |
| `model_router` | `ModelRouter` | Capability-to-model router. | *required* |
| `container` | `PicoContainer` | The pico-ioc container. | *required* |
| `scheduler` | `PlatformScheduler` | Concurrency scheduler for async operations. | *required* |
Example

>>> manager = container.get(VirtualAgentManager)
>>> agent = manager.create_agent(
...     "greeter",
...     system_prompt="You greet users warmly.",
...     capability="fast",
... )
>>> result = agent.run("Hello!")

Source code in src/pico_agent/virtual.py
@component
class VirtualAgentManager:
    """Creates and manages virtual agents programmatically at runtime.

    Virtual agents are config-only agents that do not require a Protocol
    class.  Use ``create_agent()`` to define a new agent with inline
    parameters, or ``get_agent()`` to retrieve an existing one.

    Args:
        config_service: Service for storing runtime agent configurations.
        tool_registry: Registry for tool lookup.
        llm_factory: Factory for creating LLM instances.
        model_router: Capability-to-model router.
        container: The pico-ioc container.
        scheduler: Concurrency scheduler for async operations.

    Example:
        >>> manager = container.get(VirtualAgentManager)
        >>> agent = manager.create_agent(
        ...     "greeter",
        ...     system_prompt="You greet users warmly.",
        ...     capability="fast",
        ... )
        >>> result = agent.run("Hello!")
    """

    def __init__(
        self,
        config_service: AgentConfigService,
        tool_registry: ToolRegistry,
        llm_factory: LLMFactory,
        model_router: ModelRouter,
        container: PicoContainer,
        scheduler: PlatformScheduler,
    ):
        self.config_service = config_service
        self.tool_registry = tool_registry
        self.llm_factory = llm_factory
        self.model_router = model_router
        self.container = container
        self.scheduler = scheduler

    def create_agent(self, name: str, **kwargs) -> VirtualAgent:
        """Create a new virtual agent and register its configuration.

        Args:
            name: Unique agent identifier.
            **kwargs: Any ``AgentConfig`` fields (e.g., ``system_prompt``,
                ``capability``, ``tools``, ``agent_type``).

        Returns:
            A ``VirtualAgentRunner`` conforming to the ``VirtualAgent``
            protocol.
        """
        config = AgentConfig(name=name, **kwargs)
        config_data = config.__dict__.copy()
        if "name" in config_data:
            del config_data["name"]
        self.config_service.update_agent_config(name, **config_data)
        return self.get_agent(name)

    def get_agent(self, name: str) -> VirtualAgent:
        """Retrieve (or re-create) a virtual agent by name.

        Args:
            name: The agent identifier previously used with ``create_agent()``
                or registered via ``AgentConfigService``.

        Returns:
            A ``VirtualAgentRunner`` for the named agent.

        Raises:
            ValueError: If no configuration exists for the given name.
        """
        from .locator import AgentLocator

        locator = self.container.get(AgentLocator)
        config = self.config_service.get_config(name)
        return VirtualAgentRunner(
            config=config,
            tool_registry=self.tool_registry,
            llm_factory=self.llm_factory,
            model_router=self.model_router,
            container=self.container,
            locator=locator,
            scheduler=self.scheduler,
        )

create_agent(name, **kwargs)

Create a new virtual agent and register its configuration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique agent identifier. | *required* |
| `**kwargs` | | Any `AgentConfig` fields (e.g., `system_prompt`, `capability`, `tools`, `agent_type`). | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `VirtualAgent` | A `VirtualAgentRunner` conforming to the `VirtualAgent` protocol. |

Source code in src/pico_agent/virtual.py
def create_agent(self, name: str, **kwargs) -> VirtualAgent:
    """Create a new virtual agent and register its configuration.

    Args:
        name: Unique agent identifier.
        **kwargs: Any ``AgentConfig`` fields (e.g., ``system_prompt``,
            ``capability``, ``tools``, ``agent_type``).

    Returns:
        A ``VirtualAgentRunner`` conforming to the ``VirtualAgent``
        protocol.
    """
    config = AgentConfig(name=name, **kwargs)
    config_data = config.__dict__.copy()
    if "name" in config_data:
        del config_data["name"]
    self.config_service.update_agent_config(name, **config_data)
    return self.get_agent(name)

get_agent(name)

Retrieve (or re-create) a virtual agent by name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | The agent identifier previously used with `create_agent()` or registered via `AgentConfigService`. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `VirtualAgent` | A `VirtualAgentRunner` for the named agent. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If no configuration exists for the given name. |

Source code in src/pico_agent/virtual.py
def get_agent(self, name: str) -> VirtualAgent:
    """Retrieve (or re-create) a virtual agent by name.

    Args:
        name: The agent identifier previously used with ``create_agent()``
            or registered via ``AgentConfigService``.

    Returns:
        A ``VirtualAgentRunner`` for the named agent.

    Raises:
        ValueError: If no configuration exists for the given name.
    """
    from .locator import AgentLocator

    locator = self.container.get(AgentLocator)
    config = self.config_service.get_config(name)
    return VirtualAgentRunner(
        config=config,
        tool_registry=self.tool_registry,
        llm_factory=self.llm_factory,
        model_router=self.model_router,
        container=self.container,
        locator=locator,
        scheduler=self.scheduler,
    )

DynamicTool

A tool created at runtime from a plain callable.

Exposes name, description, args_schema, and __call__ to satisfy the LangChain tool interface.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique tool identifier. | *required* |
| `description` | `str` | Human-readable description for the LLM. | *required* |
| `func` | `Callable[..., str]` | The callable that implements the tool logic. | *required* |
| `args_schema` | `Type[BaseModel]` | Optional Pydantic model for the tool's arguments. If `None`, a default schema with a single `payload` field is generated. | `None` |
Source code in src/pico_agent/virtual_tools.py
class DynamicTool:
    """A tool created at runtime from a plain callable.

    Exposes ``name``, ``description``, ``args_schema``, and ``__call__``
    to satisfy the LangChain tool interface.

    Args:
        name: Unique tool identifier.
        description: Human-readable description for the LLM.
        func: The callable that implements the tool logic.
        args_schema: Optional Pydantic model for the tool's arguments.
            If ``None``, a default schema with a single ``payload`` field
            is generated.
    """

    def __init__(self, name: str, description: str, func: Callable[..., str], args_schema: Type[BaseModel] = None):
        self.name = name
        self.description = description
        self.func = func
        self.args_schema = args_schema or self._create_default_schema()

        config = ToolConfig(name=name, description=description)
        setattr(self, TOOL_META_KEY, config)

    def _create_default_schema(self) -> Type[BaseModel]:
        return create_model(
            f"{self.name}Input",
            payload=(List[Dict[str, Any]], Field(description="List of data dictionaries to process")),
        )

    def __call__(self, **kwargs):
        return self.func(**kwargs)
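The calling convention `DynamicTool` implements, a named object that forwards keyword arguments to a wrapped callable, can be sketched without Pydantic. The class name here is illustrative:

```python
# Minimal stand-in for DynamicTool: carries a name and description for the
# LLM and forwards keyword arguments to the wrapped function on call.
class SimpleTool:
    def __init__(self, name: str, description: str, func):
        self.name = name
        self.description = description
        self.func = func

    def __call__(self, **kwargs):
        return self.func(**kwargs)

echo = SimpleTool("echo", "Echoes text back", lambda text: text)
assert echo(text="hi") == "hi"
assert echo.name == "echo"
```

The real class additionally attaches a `ToolConfig` and an args schema so the scanner and the LLM can introspect it.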

VirtualToolManager

Creates and registers DynamicTool instances at runtime.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `tool_registry` | `ToolRegistry` | The `ToolRegistry` where created tools are stored. | *required* |
Example

>>> manager = container.get(VirtualToolManager)
>>> tool = manager.create_tool(
...     name="echo",
...     description="Echoes the input back",
...     func=lambda text: text,
... )

Source code in src/pico_agent/virtual_tools.py
@component
class VirtualToolManager:
    """Creates and registers ``DynamicTool`` instances at runtime.

    Args:
        tool_registry: The ``ToolRegistry`` where created tools are stored.

    Example:
        >>> manager = container.get(VirtualToolManager)
        >>> tool = manager.create_tool(
        ...     name="echo",
        ...     description="Echoes the input back",
        ...     func=lambda text: text,
        ... )
    """

    def __init__(self, tool_registry: ToolRegistry):
        self.tool_registry = tool_registry

    def create_tool(
        self, name: str, description: str, func: Callable, schema: Optional[Type[BaseModel]] = None
    ) -> DynamicTool:
        """Create a ``DynamicTool`` and register it in the ``ToolRegistry``.

        Args:
            name: Unique tool identifier.
            description: Human-readable description for the LLM.
            func: The callable that implements the tool logic.
            schema: Optional Pydantic model for the tool's arguments.

        Returns:
            The created ``DynamicTool`` instance.
        """

        tool_instance = DynamicTool(name=name, description=description, func=func, args_schema=schema)

        self.tool_registry.register(name, tool_instance)

        return tool_instance

    def create_proto_tool(self, name: str, description: str, handler: Callable[[List[Dict[str, Any]]], str]):
        """Create a tool that accepts a list of dictionaries as its payload.

        Convenience wrapper around ``create_tool()`` for handlers that process
        structured batch data.

        Args:
            name: Unique tool identifier.
            description: Human-readable description for the LLM.
            handler: A callable that receives ``List[Dict[str, Any]]`` and
                returns a string result.

        Returns:
            The created ``DynamicTool`` instance.
        """

        def wrapper(payload: List[Dict[str, Any]]) -> str:
            return handler(payload)

        return self.create_tool(name, description, wrapper)

create_tool(name, description, func, schema=None)

Create a DynamicTool and register it in the ToolRegistry.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique tool identifier. | *required* |
| `description` | `str` | Human-readable description for the LLM. | *required* |
| `func` | `Callable` | The callable that implements the tool logic. | *required* |
| `schema` | `Optional[Type[BaseModel]]` | Optional Pydantic model for the tool's arguments. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `DynamicTool` | The created `DynamicTool` instance. |

Source code in src/pico_agent/virtual_tools.py
def create_tool(
    self, name: str, description: str, func: Callable, schema: Optional[Type[BaseModel]] = None
) -> DynamicTool:
    """Create a ``DynamicTool`` and register it in the ``ToolRegistry``.

    Args:
        name: Unique tool identifier.
        description: Human-readable description for the LLM.
        func: The callable that implements the tool logic.
        schema: Optional Pydantic model for the tool's arguments.

    Returns:
        The created ``DynamicTool`` instance.
    """

    tool_instance = DynamicTool(name=name, description=description, func=func, args_schema=schema)

    self.tool_registry.register(name, tool_instance)

    return tool_instance

create_proto_tool(name, description, handler)

Create a tool that accepts a list of dictionaries as its payload.

Convenience wrapper around create_tool() for handlers that process structured batch data.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique tool identifier. | *required* |
| `description` | `str` | Human-readable description for the LLM. | *required* |
| `handler` | `Callable[[List[Dict[str, Any]]], str]` | A callable that receives `List[Dict[str, Any]]` and returns a string result. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `DynamicTool` | The created `DynamicTool` instance. |

Source code in src/pico_agent/virtual_tools.py
def create_proto_tool(self, name: str, description: str, handler: Callable[[List[Dict[str, Any]]], str]):
    """Create a tool that accepts a list of dictionaries as its payload.

    Convenience wrapper around ``create_tool()`` for handlers that process
    structured batch data.

    Args:
        name: Unique tool identifier.
        description: Human-readable description for the LLM.
        handler: A callable that receives ``List[Dict[str, Any]]`` and
            returns a string result.

    Returns:
        The created ``DynamicTool`` instance.
    """

    def wrapper(payload: List[Dict[str, Any]]) -> str:
        return handler(payload)

    return self.create_tool(name, description, wrapper)
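A handler of the shape `create_proto_tool()` expects takes a list of dicts and returns a string. A minimal illustrative sketch; the handler name and output format are hypothetical:

```python
from typing import Any, Dict, List

# A hypothetical batch handler matching the create_proto_tool() signature.
def count_rows(payload: List[Dict[str, Any]]) -> str:
    return f"processed {len(payload)} rows"

# create_proto_tool() wraps the handler so the tool exposes a single
# `payload` argument; the wrapper simply forwards it.
def wrapper(payload: List[Dict[str, Any]]) -> str:
    return count_rows(payload)

assert wrapper([{"a": 1}, {"a": 2}]) == "processed 2 rows"
```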

init(*args, **kwargs)

Initialise a pico-ioc container with pico-agent infrastructure.

Wraps pico_ioc.init() and:

  1. Prepends the pico_agent module to the module list.
  2. Loads plugin modules from the pico_agent.plugins entry point group (disable with PICO_AGENT_AUTO_PLUGINS=false).
  3. Harvests PICO_SCANNERS from all modules and adds them as custom scanners.

All positional and keyword arguments are forwarded to pico_ioc.init().

Returns:

| Type | Description |
| --- | --- |
| `PicoContainer` | A fully configured `PicoContainer`. |

Example

>>> from pico_agent import init
>>> container = init(modules=["myapp"])

Source code in src/pico_agent/bootstrap.py
def init(*args: Any, **kwargs: Any) -> "PicoContainer":
    """Initialise a pico-ioc container with pico-agent infrastructure.

    Wraps ``pico_ioc.init()`` and:

    1. Prepends the ``pico_agent`` module to the module list.
    2. Loads plugin modules from the ``pico_agent.plugins`` entry point group
       (disable with ``PICO_AGENT_AUTO_PLUGINS=false``).
    3. Harvests ``PICO_SCANNERS`` from all modules and adds them as custom
       scanners.

    All positional and keyword arguments are forwarded to
    ``pico_ioc.init()``.

    Returns:
        A fully configured ``PicoContainer``.

    Example:
        >>> from pico_agent import init
        >>> container = init(modules=["myapp"])
    """
    bound = _IOC_INIT_SIG.bind(*args, **kwargs)
    bound.apply_defaults()

    raw = _to_module_list(bound.arguments["modules"])
    raw_with_agent = [pico_agent] + list(raw)
    base_modules = _normalize_modules(raw_with_agent)

    auto_flag = os.getenv("PICO_AGENT_AUTO_PLUGINS", "true").lower()
    if auto_flag not in ("0", "false", "no"):
        plugin_modules = _load_plugin_modules()
        all_modules = _normalize_modules(list(base_modules) + plugin_modules)
    else:
        all_modules = base_modules

    bound.arguments["modules"] = all_modules

    harvested = _harvest_scanners(all_modules)
    if harvested:
        existing = bound.arguments.get("custom_scanners") or []
        bound.arguments["custom_scanners"] = list(existing) + harvested

    return _ioc_init(*bound.args, **bound.kwargs)

agent(name, capability=AgentCapability.SMART, system_prompt='', description='', user_prompt_template='{input}', agent_type=AgentType.ONE_SHOT, max_iterations=5, tools=None, agents=None, tags=None, tracing_enabled=True, temperature=0.7, llm_profile=None)

Declare a Protocol class as a pico-agent.

Attaches an AgentConfig instance (AGENT_META_KEY) and the IS_AGENT_INTERFACE flag to the decorated class so that AgentScanner can discover and register it automatically.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique agent identifier (required). | *required* |
| `capability` | `str` | `AgentCapability` constant used by `ModelRouter` to select a concrete model. | `SMART` |
| `system_prompt` | `str` | System-level prompt sent to the LLM. | `''` |
| `description` | `str` | Human-readable description. If empty, falls back to the first line of the class docstring. | `''` |
| `user_prompt_template` | `str` | Template for the user message. Placeholders (e.g., `{input}`) are filled from method arguments. | `'{input}'` |
| `agent_type` | `AgentType` | Execution strategy: `ONE_SHOT`, `REACT`, or `WORKFLOW`. | `ONE_SHOT` |
| `max_iterations` | `int` | Maximum ReAct loop iterations (`REACT` only). | `5` |
| `tools` | `Optional[List[str]]` | Tool names to attach to this agent. | `None` |
| `agents` | `Optional[List[str]]` | Child agent names wrapped as `AgentAsTool`. | `None` |
| `tags` | `Optional[List[str]]` | Tags for dynamic tool lookup via `ToolRegistry`. | `None` |
| `tracing_enabled` | `bool` | Whether `TraceService` records runs. | `True` |
| `temperature` | `float` | LLM sampling temperature (0.0 to 2.0). | `0.7` |
| `llm_profile` | `Optional[str]` | Named profile in `LLMConfig` for API key / base URL. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Callable[[Type], Type]` | A class decorator that sets agent metadata on the target class. |

Example

>>> from typing import Protocol
>>> from pico_agent import agent, AgentCapability, AgentType
>>> @agent(
...     name="summarizer",
...     capability=AgentCapability.SMART,
...     system_prompt="Summarize the following text.",
...     agent_type=AgentType.ONE_SHOT,
... )
... class Summarizer(Protocol):
...     def summarize(self, text: str) -> str: ...

Source code in src/pico_agent/decorators.py
def agent(
    name: str,
    capability: str = AgentCapability.SMART,
    system_prompt: str = "",
    description: str = "",
    user_prompt_template: str = "{input}",
    agent_type: AgentType = AgentType.ONE_SHOT,
    max_iterations: int = 5,
    tools: Optional[List[str]] = None,
    agents: Optional[List[str]] = None,
    tags: Optional[List[str]] = None,
    tracing_enabled: bool = True,
    temperature: float = 0.7,
    llm_profile: Optional[str] = None,
) -> Callable[[Type], Type]:
    """Declare a Protocol class as a pico-agent.

    Attaches an ``AgentConfig`` instance (``AGENT_META_KEY``) and the
    ``IS_AGENT_INTERFACE`` flag to the decorated class so that
    ``AgentScanner`` can discover and register it automatically.

    Args:
        name: Unique agent identifier (required).
        capability: ``AgentCapability`` constant used by ``ModelRouter`` to
            select a concrete model.
        system_prompt: System-level prompt sent to the LLM.
        description: Human-readable description.  If empty, falls back to the
            first line of the class docstring.
        user_prompt_template: Template for the user message.  Placeholders
            (e.g., ``{input}``) are filled from method arguments.
        agent_type: Execution strategy -- ``ONE_SHOT``, ``REACT``, or
            ``WORKFLOW``.
        max_iterations: Maximum ReAct loop iterations (``REACT`` only).
        tools: Tool names to attach to this agent.
        agents: Child agent names wrapped as ``AgentAsTool``.
        tags: Tags for dynamic tool lookup via ``ToolRegistry``.
        tracing_enabled: Whether ``TraceService`` records runs.
        temperature: LLM sampling temperature (0.0 -- 2.0).
        llm_profile: Named profile in ``LLMConfig`` for API key / base URL.

    Returns:
        A class decorator that sets agent metadata on the target class.

    Example:
        >>> from typing import Protocol
        >>> from pico_agent import agent, AgentCapability, AgentType
        >>> @agent(
        ...     name="summarizer",
        ...     capability=AgentCapability.SMART,
        ...     system_prompt="Summarize the following text.",
        ...     agent_type=AgentType.ONE_SHOT,
        ... )
        ... class Summarizer(Protocol):
        ...     def summarize(self, text: str) -> str: ...
    """

    def decorator(cls_or_proto: Type) -> Type:
        final_desc = description
        if not final_desc and cls_or_proto.__doc__:
            final_desc = cls_or_proto.__doc__.strip().split("\n")[0]

        default_config = AgentConfig(
            name=name,
            capability=capability,
            system_prompt=system_prompt,
            description=final_desc,
            user_prompt_template=user_prompt_template,
            agent_type=agent_type,
            max_iterations=max_iterations,
            tools=tools or [],
            agents=agents or [],
            tags=tags or [],
            tracing_enabled=tracing_enabled,
            temperature=temperature,
            llm_profile=llm_profile,
        )

        setattr(cls_or_proto, AGENT_META_KEY, default_config)
        setattr(cls_or_proto, IS_AGENT_INTERFACE, True)
        return cls_or_proto

    return decorator
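The docstring-fallback logic for `description` can be exercised standalone. The class and variable names here are illustrative:

```python
# Replicates the fallback in @agent: when description is empty, the first
# line of the decorated class's docstring becomes the description.
class Summarizer:
    """Summarizes text.

    More detail that should not end up in the description.
    """

description = ""
if not description and Summarizer.__doc__:
    description = Summarizer.__doc__.strip().split("\n")[0]

assert description == "Summarizes text."
```

This is why a one-line summary at the top of an agent's docstring doubles as its registry description.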

tool(name, description)

Declare a class as a pico-agent tool.

Attaches a ToolConfig instance (TOOL_META_KEY) to the decorated class so that ToolScanner can discover and register it. The class must implement one of __call__, run, execute, or invoke.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Unique tool identifier shown to the LLM. | *required* |
| `description` | `str` | Human-readable description the LLM uses to decide when to invoke this tool. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Callable[[Type], Type]` | A class decorator that sets tool metadata on the target class. |

Example

>>> from pico_ioc import component
>>> from pico_agent import tool
>>> @tool(name="calculator", description="Evaluate math expressions")
... @component
... class Calculator:
...     def run(self, expression: str) -> str:
...         return str(eval(expression))

Source code in src/pico_agent/decorators.py
def tool(name: str, description: str) -> Callable[[Type], Type]:
    """Declare a class as a pico-agent tool.

    Attaches a ``ToolConfig`` instance (``TOOL_META_KEY``) to the decorated
    class so that ``ToolScanner`` can discover and register it.  The class
    must implement one of ``__call__``, ``run``, ``execute``, or ``invoke``.

    Args:
        name: Unique tool identifier shown to the LLM.
        description: Human-readable description the LLM uses to decide when
            to invoke this tool.

    Returns:
        A class decorator that sets tool metadata on the target class.

    Example:
        >>> from pico_ioc import component
        >>> from pico_agent import tool
        >>> @tool(name="calculator", description="Evaluate math expressions")
        ... @component
        ... class Calculator:
        ...     def run(self, expression: str) -> str:
        ...         return str(eval(expression))
    """

    def decorator(cls: Type) -> Type:
        config = ToolConfig(name=name, description=description)
        setattr(cls, TOOL_META_KEY, config)
        return cls

    return decorator
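A tool class only needs to expose one of the four accepted entry points. As a rough sketch of how discovery might dispatch to whichever one exists, under the documented precedence — `find_entrypoint` is a hypothetical helper for illustration, not part of the pico_agent API:

```python
# Hypothetical helper: locate a tool's callable entry point, trying the
# method names the documentation lists, in order.
CALL_METHODS = ("__call__", "run", "execute", "invoke")

def find_entrypoint(tool_obj):
    for attr in CALL_METHODS:
        fn = getattr(tool_obj, attr, None)
        if callable(fn):
            return fn
    raise TypeError(f"tool must implement one of {CALL_METHODS}")

class Calculator:
    def run(self, expression: str) -> str:
        return str(eval(expression))

result = find_entrypoint(Calculator())("2 + 3")  # dispatches to Calculator.run
```

The actual dispatch logic lives in `ToolScanner`; this sketch only shows the shape of the contract.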

configure_logging(level=logging.INFO, handler=None)

Configure logging for the pico_agent library.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| level | int | Logging level (default: INFO). | INFO |
| handler | Optional[Handler] | Custom handler. If None, uses StreamHandler to stderr. | None |
Source code in src/pico_agent/logging.py
def configure_logging(level: int = logging.INFO, handler: Optional[logging.Handler] = None) -> None:
    """Configure logging for the pico_agent library.

    Args:
        level: Logging level (default: INFO)
        handler: Custom handler. If None, uses StreamHandler to stderr.
    """
    root_logger = logging.getLogger("pico_agent")
    root_logger.setLevel(level)

    if not root_logger.handlers:
        if handler is None:
            handler = logging.StreamHandler(sys.stderr)
        handler.setFormatter(logging.Formatter(DEFAULT_FORMAT))
        root_logger.addHandler(handler)

get_logger(name)

Get a logger with the pico_agent namespace.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Logger name. If not prefixed with 'pico_agent', it will be added. | required |

Returns:

| Type | Description |
|---|---|
| Logger | A configured Logger instance. |

Source code in src/pico_agent/logging.py
def get_logger(name: str) -> logging.Logger:
    """Get a logger with the pico_agent namespace.

    Args:
        name: Logger name. If not prefixed with 'pico_agent', it will be added.

    Returns:
        A configured Logger instance.
    """
    if not name.startswith("pico_agent"):
        name = f"pico_agent.{name}"
    return logging.getLogger(name)
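The namespacing rule above means every library logger lives under the single `pico_agent` root, so a handler attached once by `configure_logging` applies everywhere. A minimal sketch reproducing that rule (the function body mirrors the source shown above):

```python
import logging

def get_logger(name: str) -> logging.Logger:
    # Force names under the "pico_agent" hierarchy, as in the source above.
    if not name.startswith("pico_agent"):
        name = f"pico_agent.{name}"
    return logging.getLogger(name)

log = get_logger("router")
print(log.name)                              # pico_agent.router
print(get_logger("pico_agent.tools").name)   # already prefixed: unchanged
```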

Configuration

pico_agent.config

Configuration dataclasses and enumerations for pico-agent.

Defines the core configuration types used throughout the framework: AgentType, AgentCapability, AgentConfig, ToolConfig, and LLMConfig.

AgentType

Bases: str, Enum

Execution strategy for an agent.

Determines how the agent processes requests and interacts with tools.

Attributes:

| Name | Description |
|---|---|
| ONE_SHOT | Single LLM call with no tool loop (default). |
| REACT | Iterative ReAct tool loop via LangGraph, up to max_iterations rounds. |
| WORKFLOW | Custom workflow execution (e.g., map-reduce). |

Source code in src/pico_agent/config.py
class AgentType(str, Enum):
    """Execution strategy for an agent.

    Determines how the agent processes requests and interacts with tools.

    Attributes:
        ONE_SHOT: Single LLM call with no tool loop (default).
        REACT: Iterative ReAct tool loop via LangGraph, up to
            ``max_iterations`` rounds.
        WORKFLOW: Custom workflow execution (e.g., map-reduce).
    """

    ONE_SHOT = "one_shot"
    REACT = "react"
    WORKFLOW = "workflow"

AgentCapability

Abstract capability labels mapped to concrete models by ModelRouter.

Use these constants in the @agent decorator to declare what kind of model an agent needs. The ModelRouter translates these labels into provider-specific model names at runtime, allowing you to swap models globally without touching agent definitions.

Attributes:

| Name | Description |
|---|---|
| FAST | Optimised for low latency (default model: gpt-5-mini). |
| SMART | Balanced quality and cost (default model: gpt-5.1). |
| REASONING | Advanced reasoning tasks (default model: gemini-3-pro). |
| VISION | Vision / multimodal support (default model: gpt-4o). |
| CODING | Code generation (default model: claude-3-5-sonnet). |

Example

>>> from pico_agent import agent, AgentCapability
>>> @agent(name="fast_bot", capability=AgentCapability.FAST)
... class FastBot(Protocol):
...     def run(self, q: str) -> str: ...

Source code in src/pico_agent/config.py
class AgentCapability:
    """Abstract capability labels mapped to concrete models by ``ModelRouter``.

    Use these constants in the ``@agent`` decorator to declare what kind of
    model an agent needs.  The ``ModelRouter`` translates these labels into
    provider-specific model names at runtime, allowing you to swap models
    globally without touching agent definitions.

    Attributes:
        FAST: Optimised for low latency (default model: ``gpt-5-mini``).
        SMART: Balanced quality and cost (default model: ``gpt-5.1``).
        REASONING: Advanced reasoning tasks (default model: ``gemini-3-pro``).
        VISION: Vision / multimodal support (default model: ``gpt-4o``).
        CODING: Code generation (default model: ``claude-3-5-sonnet``).

    Example:
        >>> from pico_agent import agent, AgentCapability
        >>> @agent(name="fast_bot", capability=AgentCapability.FAST)
        ... class FastBot(Protocol):
        ...     def run(self, q: str) -> str: ...
    """

    FAST = "fast"
    SMART = "smart"
    REASONING = "reasoning"
    VISION = "vision"
    CODING = "coding"

AgentConfig dataclass

Complete configuration for a single agent.

Instances are created automatically by the @agent decorator and stored in LocalAgentRegistry. The AgentConfigService merges local, remote (central), and runtime overrides to produce the final effective config.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Unique agent identifier (required). | required |
| system_prompt | str | System-level prompt sent to the LLM. | '' |
| user_prompt_template | str | Template for the user message. Use {input} or any key matching the method signature. | '{input}' |
| description | str | Human-readable description; used as the AgentAsTool description when the agent is exposed as a tool. | '' |
| capability | str | AgentCapability constant that the ModelRouter resolves to a concrete model name. | SMART |
| enabled | bool | Whether the agent is active. Disabled agents raise AgentDisabledError. | True |
| agent_type | AgentType | Execution strategy (ONE_SHOT, REACT, or WORKFLOW). | ONE_SHOT |
| max_iterations | int | Maximum ReAct loop iterations (only relevant for REACT agents). | 5 |
| tools | List[str] | List of tool names to attach to this agent. | list() |
| agents | List[str] | List of child agent names that will be wrapped as AgentAsTool instances. | list() |
| tags | List[str] | Tags used for dynamic tool lookup via ToolRegistry. | list() |
| tracing_enabled | bool | Whether TraceService records runs for this agent. | True |
| temperature | float | LLM sampling temperature (0.0 -- 2.0). | 0.7 |
| max_tokens | Optional[int] | Maximum tokens in the LLM response, or None for the provider default. | None |
| llm_profile | Optional[str] | Named API-key / base-URL profile in LLMConfig. | None |
| workflow_config | Dict[str, Any] | Extra parameters for WORKFLOW agents (e.g., {"type": "map_reduce", "splitter": "...", "reducer": "..."}). | dict() |
Source code in src/pico_agent/config.py
@dataclass
class AgentConfig:
    """Complete configuration for a single agent.

    Instances are created automatically by the ``@agent`` decorator and stored
    in ``LocalAgentRegistry``.  The ``AgentConfigService`` merges local, remote
    (central), and runtime overrides to produce the final effective config.

    Args:
        name: Unique agent identifier (required).
        system_prompt: System-level prompt sent to the LLM.
        user_prompt_template: Template for the user message.  Use ``{input}``
            or any key matching the method signature.
        description: Human-readable description; used as ``AgentAsTool``
            description when the agent is exposed as a tool.
        capability: ``AgentCapability`` constant that the ``ModelRouter``
            resolves to a concrete model name.
        enabled: Whether the agent is active.  Disabled agents raise
            ``AgentDisabledError``.
        agent_type: Execution strategy (``ONE_SHOT``, ``REACT``, or
            ``WORKFLOW``).
        max_iterations: Maximum ReAct loop iterations (only relevant for
            ``REACT`` agents).
        tools: List of tool names to attach to this agent.
        agents: List of child agent names that will be wrapped as
            ``AgentAsTool`` instances.
        tags: Tags used for dynamic tool lookup via ``ToolRegistry``.
        tracing_enabled: Whether ``TraceService`` records runs for this agent.
        temperature: LLM sampling temperature (0.0 -- 2.0).
        max_tokens: Maximum tokens in the LLM response, or ``None`` for the
            provider default.
        llm_profile: Named API-key / base-URL profile in ``LLMConfig``.
        workflow_config: Extra parameters for ``WORKFLOW`` agents (e.g.,
            ``{"type": "map_reduce", "splitter": "...", "reducer": "..."}``).
    """

    name: str
    system_prompt: str = ""
    user_prompt_template: str = "{input}"
    description: str = ""
    capability: str = AgentCapability.SMART
    enabled: bool = True
    agent_type: AgentType = AgentType.ONE_SHOT
    max_iterations: int = 5
    tools: List[str] = field(default_factory=list)
    agents: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)
    tracing_enabled: bool = True
    temperature: float = 0.7
    max_tokens: Optional[int] = None
    llm_profile: Optional[str] = None
    workflow_config: Dict[str, Any] = field(default_factory=dict)

ToolConfig dataclass

Metadata for a tool, created by the @tool decorator.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Unique tool identifier shown to the LLM for tool selection. | required |
| description | str | Human-readable description the LLM uses to decide when to invoke this tool. | required |
Source code in src/pico_agent/config.py
@dataclass
class ToolConfig:
    """Metadata for a tool, created by the ``@tool`` decorator.

    Args:
        name: Unique tool identifier shown to the LLM for tool selection.
        description: Human-readable description the LLM uses to decide when
            to invoke this tool.
    """

    name: str
    description: str

LLMConfig dataclass

Centralised API-key and base-URL store for all LLM providers.

AgentLocator registers a default (empty) LLMConfig singleton via @provides. To populate it with your credentials, use @configure on a component method that receives LLMConfig as a parameter. Do not register your own LLMConfig with @factory + @provides -- that would conflict with the singleton already provided by AgentInfrastructureFactory.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| api_keys | Dict[str, str] | Mapping of provider name (or profile name) to API key. Standard keys: "openai", "anthropic", "google", "azure", "deepseek", "qwen". | dict() |
| base_urls | Dict[str, str] | Mapping of provider name (or profile name) to base URL override. | dict() |

Example

>>> from pico_ioc import component, configure
>>> from pico_agent import LLMConfig
>>> @component
... class AppConfig:
...     @configure
...     def setup(self, llm: LLMConfig):
...         llm.api_keys["openai"] = "sk-..."

Source code in src/pico_agent/config.py
@dataclass
class LLMConfig:
    """Centralised API-key and base-URL store for all LLM providers.

    ``AgentLocator`` registers a default (empty) ``LLMConfig`` singleton via
    ``@provides``.  To populate it with your credentials, use ``@configure``
    on a component method that receives ``LLMConfig`` as a parameter.  Do
    **not** register your own ``LLMConfig`` with ``@factory`` + ``@provides``
    -- that would conflict with the singleton already provided by
    ``AgentInfrastructureFactory``.

    Args:
        api_keys: Mapping of provider name (or profile name) to API key.
            Standard keys: ``"openai"``, ``"anthropic"``, ``"google"``,
            ``"azure"``, ``"deepseek"``, ``"qwen"``.
        base_urls: Mapping of provider name (or profile name) to base URL
            override.

    Example:
        >>> from pico_ioc import component, configure
        >>> from pico_agent import LLMConfig
        >>> @component
        ... class AppConfig:
        ...     @configure
        ...     def setup(self, llm: LLMConfig):
        ...         llm.api_keys["openai"] = "sk-..."
    """

    api_keys: Dict[str, str] = field(default_factory=dict)
    base_urls: Dict[str, str] = field(default_factory=dict)
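Since `api_keys` accepts either provider names or profile names, a named `llm_profile` can coexist with a provider-level default. The lookup below is only a guess at how resolution could work (profile first, provider as fallback); `resolve_key` and the `"openai-eu"` profile are hypothetical, not library behaviour:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LLMConfig:   # mirrors the dataclass above
    api_keys: Dict[str, str] = field(default_factory=dict)
    base_urls: Dict[str, str] = field(default_factory=dict)

cfg = LLMConfig()
cfg.api_keys["openai"] = "sk-default"              # provider-level key
cfg.api_keys["openai-eu"] = "sk-eu"                # hypothetical named profile
cfg.base_urls["openai-eu"] = "https://eu.example.com/v1"

def resolve_key(cfg: LLMConfig, profile: str, provider: str) -> str:
    # Illustrative lookup: prefer the named profile, fall back to the provider.
    return cfg.api_keys.get(profile) or cfg.api_keys[provider]
```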

Decorators

pico_agent.decorators

Decorators for declaring agents and tools.

Provides the @agent and @tool class decorators that attach metadata used by AgentScanner and ToolScanner for automatic discovery.

AGENT_META_KEY = '_pico_agent_meta' module-attribute

str: Attribute name where AgentConfig metadata is stored on a decorated class.

TOOL_META_KEY = '_pico_tool_meta' module-attribute

str: Attribute name where ToolConfig metadata is stored on a decorated class.

IS_AGENT_INTERFACE = '_pico_is_agent_interface' module-attribute

str: Boolean flag attribute set on classes decorated with @agent.
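These three attribute names are what the scanners look for at discovery time. A sketch of that lookup using plain `getattr` — `DemoAgent` and `is_agent` are illustrative stand-ins; `AgentScanner` performs the real discovery:

```python
# Attribute names as documented above.
AGENT_META_KEY = "_pico_agent_meta"
IS_AGENT_INTERFACE = "_pico_is_agent_interface"

class DemoAgent:
    pass

# Simulate what the @agent decorator attaches to a class.
setattr(DemoAgent, AGENT_META_KEY, {"name": "demo"})
setattr(DemoAgent, IS_AGENT_INTERFACE, True)

def is_agent(cls) -> bool:
    # A scanner can identify agent classes by the flag attribute alone.
    return bool(getattr(cls, IS_AGENT_INTERFACE, False))

assert is_agent(DemoAgent)
assert not is_agent(str)
```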

agent(name, capability=AgentCapability.SMART, system_prompt='', description='', user_prompt_template='{input}', agent_type=AgentType.ONE_SHOT, max_iterations=5, tools=None, agents=None, tags=None, tracing_enabled=True, temperature=0.7, llm_profile=None)

Declare a Protocol class as a pico-agent.

Attaches an AgentConfig instance (AGENT_META_KEY) and the IS_AGENT_INTERFACE flag to the decorated class so that AgentScanner can discover and register it automatically.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Unique agent identifier (required). | required |
| capability | str | AgentCapability constant used by ModelRouter to select a concrete model. | SMART |
| system_prompt | str | System-level prompt sent to the LLM. | '' |
| description | str | Human-readable description. If empty, falls back to the first line of the class docstring. | '' |
| user_prompt_template | str | Template for the user message. Placeholders (e.g., {input}) are filled from method arguments. | '{input}' |
| agent_type | AgentType | Execution strategy -- ONE_SHOT, REACT, or WORKFLOW. | ONE_SHOT |
| max_iterations | int | Maximum ReAct loop iterations (REACT only). | 5 |
| tools | Optional[List[str]] | Tool names to attach to this agent. | None |
| agents | Optional[List[str]] | Child agent names wrapped as AgentAsTool. | None |
| tags | Optional[List[str]] | Tags for dynamic tool lookup via ToolRegistry. | None |
| tracing_enabled | bool | Whether TraceService records runs. | True |
| temperature | float | LLM sampling temperature (0.0 -- 2.0). | 0.7 |
| llm_profile | Optional[str] | Named profile in LLMConfig for API key / base URL. | None |

Returns:

| Type | Description |
|---|---|
| Callable[[Type], Type] | A class decorator that sets agent metadata on the target class. |

Example

>>> from typing import Protocol
>>> from pico_agent import agent, AgentCapability, AgentType
>>> @agent(
...     name="summarizer",
...     capability=AgentCapability.SMART,
...     system_prompt="Summarize the following text.",
...     agent_type=AgentType.ONE_SHOT,
... )
... class Summarizer(Protocol):
...     def summarize(self, text: str) -> str: ...

Source code in src/pico_agent/decorators.py
def agent(
    name: str,
    capability: str = AgentCapability.SMART,
    system_prompt: str = "",
    description: str = "",
    user_prompt_template: str = "{input}",
    agent_type: AgentType = AgentType.ONE_SHOT,
    max_iterations: int = 5,
    tools: Optional[List[str]] = None,
    agents: Optional[List[str]] = None,
    tags: Optional[List[str]] = None,
    tracing_enabled: bool = True,
    temperature: float = 0.7,
    llm_profile: Optional[str] = None,
) -> Callable[[Type], Type]:
    """Declare a Protocol class as a pico-agent.

    Attaches an ``AgentConfig`` instance (``AGENT_META_KEY``) and the
    ``IS_AGENT_INTERFACE`` flag to the decorated class so that
    ``AgentScanner`` can discover and register it automatically.

    Args:
        name: Unique agent identifier (required).
        capability: ``AgentCapability`` constant used by ``ModelRouter`` to
            select a concrete model.
        system_prompt: System-level prompt sent to the LLM.
        description: Human-readable description.  If empty, falls back to the
            first line of the class docstring.
        user_prompt_template: Template for the user message.  Placeholders
            (e.g., ``{input}``) are filled from method arguments.
        agent_type: Execution strategy -- ``ONE_SHOT``, ``REACT``, or
            ``WORKFLOW``.
        max_iterations: Maximum ReAct loop iterations (``REACT`` only).
        tools: Tool names to attach to this agent.
        agents: Child agent names wrapped as ``AgentAsTool``.
        tags: Tags for dynamic tool lookup via ``ToolRegistry``.
        tracing_enabled: Whether ``TraceService`` records runs.
        temperature: LLM sampling temperature (0.0 -- 2.0).
        llm_profile: Named profile in ``LLMConfig`` for API key / base URL.

    Returns:
        A class decorator that sets agent metadata on the target class.

    Example:
        >>> from typing import Protocol
        >>> from pico_agent import agent, AgentCapability, AgentType
        >>> @agent(
        ...     name="summarizer",
        ...     capability=AgentCapability.SMART,
        ...     system_prompt="Summarize the following text.",
        ...     agent_type=AgentType.ONE_SHOT,
        ... )
        ... class Summarizer(Protocol):
        ...     def summarize(self, text: str) -> str: ...
    """

    def decorator(cls_or_proto: Type) -> Type:
        final_desc = description
        if not final_desc and cls_or_proto.__doc__:
            final_desc = cls_or_proto.__doc__.strip().split("\n")[0]

        default_config = AgentConfig(
            name=name,
            capability=capability,
            system_prompt=system_prompt,
            description=final_desc,
            user_prompt_template=user_prompt_template,
            agent_type=agent_type,
            max_iterations=max_iterations,
            tools=tools or [],
            agents=agents or [],
            tags=tags or [],
            tracing_enabled=tracing_enabled,
            temperature=temperature,
            llm_profile=llm_profile,
        )

        setattr(cls_or_proto, AGENT_META_KEY, default_config)
        setattr(cls_or_proto, IS_AGENT_INTERFACE, True)
        return cls_or_proto

    return decorator
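The `user_prompt_template` parameter is a plain placeholder template filled from the agent method's arguments. A minimal sketch of that rendering step, assuming standard `str.format` semantics — `render_user_prompt` is a hypothetical helper, not a pico_agent function:

```python
def render_user_prompt(template: str, **kwargs) -> str:
    # Placeholders like {input} or {text} are filled from keyword arguments
    # matching the agent method's signature.
    return template.format(**kwargs)

default = render_user_prompt("{input}", input="What is 2 + 2?")
custom = render_user_prompt("Summarize in one line:\n{text}", text="Long article...")
```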

tool(name, description)

Declare a class as a pico-agent tool.

Attaches a ToolConfig instance (TOOL_META_KEY) to the decorated class so that ToolScanner can discover and register it. The class must implement one of __call__, run, execute, or invoke.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Unique tool identifier shown to the LLM. | required |
| description | str | Human-readable description the LLM uses to decide when to invoke this tool. | required |

Returns:

| Type | Description |
|---|---|
| Callable[[Type], Type] | A class decorator that sets tool metadata on the target class. |

Example

>>> from pico_ioc import component
>>> from pico_agent import tool
>>> @tool(name="calculator", description="Evaluate math expressions")
... @component
... class Calculator:
...     def run(self, expression: str) -> str:
...         return str(eval(expression))

Source code in src/pico_agent/decorators.py
def tool(name: str, description: str) -> Callable[[Type], Type]:
    """Declare a class as a pico-agent tool.

    Attaches a ``ToolConfig`` instance (``TOOL_META_KEY``) to the decorated
    class so that ``ToolScanner`` can discover and register it.  The class
    must implement one of ``__call__``, ``run``, ``execute``, or ``invoke``.

    Args:
        name: Unique tool identifier shown to the LLM.
        description: Human-readable description the LLM uses to decide when
            to invoke this tool.

    Returns:
        A class decorator that sets tool metadata on the target class.

    Example:
        >>> from pico_ioc import component
        >>> from pico_agent import tool
        >>> @tool(name="calculator", description="Evaluate math expressions")
        ... @component
        ... class Calculator:
        ...     def run(self, expression: str) -> str:
        ...         return str(eval(expression))
    """

    def decorator(cls: Type) -> Type:
        config = ToolConfig(name=name, description=description)
        setattr(cls, TOOL_META_KEY, config)
        return cls

    return decorator

Interfaces

pico_agent.interfaces

Protocol interfaces that define the contracts for agents, LLMs, and configuration clients.

All core abstractions are expressed as typing.Protocol classes so that any conforming implementation can be used without explicit inheritance.

Agent

Bases: Protocol

Standard interface for an agent produced by pico-agent.

Agent Protocol classes decorated with @agent do not need to inherit from this Protocol; it exists as a reference contract.

Source code in src/pico_agent/interfaces.py
class Agent(Protocol):
    """Standard interface for an agent produced by pico-agent.

    Agent Protocol classes decorated with ``@agent`` do not need to
    inherit from this Protocol; it exists as a reference contract.
    """

    def run(self, input: str) -> str:
        """Execute the agent synchronously.

        Args:
            input: The user message / task description.

        Returns:
            The agent's text response.
        """
        ...

    async def arun(self, input: str) -> str:
        """Execute the agent asynchronously.

        Args:
            input: The user message / task description.

        Returns:
            The agent's text response.
        """
        ...

    def run_structured(self, input: str, schema: Type[T]) -> T:
        """Execute the agent and parse the response into a Pydantic model.

        Args:
            input: The user message / task description.
            schema: A ``pydantic.BaseModel`` subclass that defines the
                expected response structure.

        Returns:
            An instance of *schema* populated from the LLM response.
        """
        ...

    async def arun_structured(self, input: str, schema: Type[T]) -> T:
        """Async variant of ``run_structured``.

        Args:
            input: The user message / task description.
            schema: A ``pydantic.BaseModel`` subclass for structured output.

        Returns:
            An instance of *schema* populated from the LLM response.
        """
        ...

run(input)

Execute the agent synchronously.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | str | The user message / task description. | required |

Returns:

| Type | Description |
|---|---|
| str | The agent's text response. |

Source code in src/pico_agent/interfaces.py
def run(self, input: str) -> str:
    """Execute the agent synchronously.

    Args:
        input: The user message / task description.

    Returns:
        The agent's text response.
    """
    ...

arun(input) async

Execute the agent asynchronously.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | str | The user message / task description. | required |

Returns:

| Type | Description |
|---|---|
| str | The agent's text response. |

Source code in src/pico_agent/interfaces.py
async def arun(self, input: str) -> str:
    """Execute the agent asynchronously.

    Args:
        input: The user message / task description.

    Returns:
        The agent's text response.
    """
    ...

run_structured(input, schema)

Execute the agent and parse the response into a Pydantic model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | str | The user message / task description. | required |
| schema | Type[T] | A pydantic.BaseModel subclass that defines the expected response structure. | required |

Returns:

| Type | Description |
|---|---|
| T | An instance of schema populated from the LLM response. |

Source code in src/pico_agent/interfaces.py
def run_structured(self, input: str, schema: Type[T]) -> T:
    """Execute the agent and parse the response into a Pydantic model.

    Args:
        input: The user message / task description.
        schema: A ``pydantic.BaseModel`` subclass that defines the
            expected response structure.

    Returns:
        An instance of *schema* populated from the LLM response.
    """
    ...

arun_structured(input, schema) async

Async variant of run_structured.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | str | The user message / task description. | required |
| schema | Type[T] | A pydantic.BaseModel subclass for structured output. | required |

Returns:

| Type | Description |
|---|---|
| T | An instance of schema populated from the LLM response. |

Source code in src/pico_agent/interfaces.py
async def arun_structured(self, input: str, schema: Type[T]) -> T:
    """Async variant of ``run_structured``.

    Args:
        input: The user message / task description.
        schema: A ``pydantic.BaseModel`` subclass for structured output.

    Returns:
        An instance of *schema* populated from the LLM response.
    """
    ...
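Because `Agent` is a `typing.Protocol`, conformance is structural: any class with the right method shapes satisfies the contract without inheriting from it. A sketch with a one-method slice of the protocol — `EchoAgent` is a hypothetical stand-in, and the real `Agent` also declares `arun`, `run_structured`, and `arun_structured`:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Agent(Protocol):
    def run(self, input: str) -> str: ...

class EchoAgent:
    # Note: no inheritance from Agent; the shape alone is what matters.
    def run(self, input: str) -> str:
        return f"echo: {input}"

assert isinstance(EchoAgent(), Agent)   # structural check via runtime_checkable
```

`runtime_checkable` only verifies that the method names exist, not their signatures; static checkers like mypy do the full structural check.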

CentralConfigClient

Bases: Protocol

Protocol for retrieving and persisting agent configuration remotely.

The default implementation (NoOpCentralClient) returns None for all lookups, meaning only local and runtime config is used. Provide a custom implementation (e.g., backed by a database or API) to enable central configuration management.

Source code in src/pico_agent/interfaces.py
class CentralConfigClient(Protocol):
    """Protocol for retrieving and persisting agent configuration remotely.

    The default implementation (``NoOpCentralClient``) returns ``None`` for
    all lookups, meaning only local and runtime config is used.  Provide a
    custom implementation (e.g., backed by a database or API) to enable
    central configuration management.
    """

    def get_agent_config(self, name: str) -> Optional[Any]:
        """Fetch the remote configuration for an agent.

        Args:
            name: The agent's unique identifier.

        Returns:
            An ``AgentConfig`` if one exists remotely, otherwise ``None``.
        """
        ...

    def upsert_agent_config(self, config: Any) -> None:
        """Create or update the remote configuration for an agent.

        Args:
            config: The ``AgentConfig`` to persist.
        """
        ...

get_agent_config(name)

Fetch the remote configuration for an agent.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The agent's unique identifier. | required |

Returns:

| Type | Description |
|---|---|
| Optional[Any] | An AgentConfig if one exists remotely, otherwise None. |

Source code in src/pico_agent/interfaces.py
def get_agent_config(self, name: str) -> Optional[Any]:
    """Fetch the remote configuration for an agent.

    Args:
        name: The agent's unique identifier.

    Returns:
        An ``AgentConfig`` if one exists remotely, otherwise ``None``.
    """
    ...

upsert_agent_config(config)

Create or update the remote configuration for an agent.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | Any | The AgentConfig to persist. | required |
Source code in src/pico_agent/interfaces.py
def upsert_agent_config(self, config: Any) -> None:
    """Create or update the remote configuration for an agent.

    Args:
        config: The ``AgentConfig`` to persist.
    """
    ...
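To illustrate what a custom implementation must provide, here is a dict-backed sketch of the protocol — `InMemoryCentralClient` and `Cfg` are hypothetical stand-ins for a database- or API-backed client and for `AgentConfig`:

```python
from typing import Any, Dict, Optional

class InMemoryCentralClient:
    """Hypothetical CentralConfigClient backed by a plain dict."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}

    def get_agent_config(self, name: str) -> Optional[Any]:
        # Mirrors the protocol: None when nothing is stored remotely.
        return self._store.get(name)

    def upsert_agent_config(self, config: Any) -> None:
        # Assumes config exposes a `name` attribute, like AgentConfig.
        self._store[config.name] = config

class Cfg:                       # minimal stand-in for AgentConfig
    def __init__(self, name: str) -> None:
        self.name = name

client = InMemoryCentralClient()
client.upsert_agent_config(Cfg("summarizer"))
```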

LLM

Bases: Protocol

Protocol for a language-model adapter used by agent proxies.

LangChainAdapter is the built-in implementation.

Source code in src/pico_agent/interfaces.py
class LLM(Protocol):
    """Protocol for a language-model adapter used by agent proxies.

    ``LangChainAdapter`` is the built-in implementation.
    """

    def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
        """Send messages to the LLM and return the text response.

        Args:
            messages: List of message dicts with ``"role"`` and ``"content"``
                keys.
            tools: LangChain-compatible tool instances bound to the model.

        Returns:
            The LLM's text response.
        """
        ...

    def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
        """Send messages and parse the response into a structured schema.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            output_schema: A ``pydantic.BaseModel`` subclass for structured
                output.

        Returns:
            An instance of *output_schema*.
        """
        ...

    def invoke_agent_loop(
        self,
        messages: List[Dict[str, str]],
        tools: List[Any],
        max_iterations: int,
        output_schema: Optional[Type[Any]] = None,
    ) -> Any:
        """Run a ReAct-style tool loop via LangGraph.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            max_iterations: Maximum number of reasoning iterations.
            output_schema: Optional Pydantic model for structured final
                output.

        Returns:
            The final text response, or an instance of *output_schema* if
            provided.
        """
        ...

invoke(messages, tools)

Send messages to the LLM and return the text response.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[Dict[str, str]] | List of message dicts with "role" and "content" keys. | required |
| tools | List[Any] | LangChain-compatible tool instances bound to the model. | required |

Returns:

| Type | Description |
|---|---|
| str | The LLM's text response. |

Source code in src/pico_agent/interfaces.py
def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
    """Send messages to the LLM and return the text response.

    Args:
        messages: List of message dicts with ``"role"`` and ``"content"``
            keys.
        tools: LangChain-compatible tool instances bound to the model.

    Returns:
        The LLM's text response.
    """
    ...

invoke_structured(messages, tools, output_schema)

Send messages and parse the response into a structured schema.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[Dict[str, str]] | List of message dicts. | required |
| tools | List[Any] | LangChain-compatible tool instances. | required |
| output_schema | Type[Any] | A pydantic.BaseModel subclass for structured output. | required |

Returns:

| Type | Description |
|---|---|
| Any | An instance of output_schema. |

Source code in src/pico_agent/interfaces.py
def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
    """Send messages and parse the response into a structured schema.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        output_schema: A ``pydantic.BaseModel`` subclass for structured
            output.

    Returns:
        An instance of *output_schema*.
    """
    ...

invoke_agent_loop(messages, tools, max_iterations, output_schema=None)

Run a ReAct-style tool loop via LangGraph.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[Dict[str, str]] | List of message dicts. | required |
| tools | List[Any] | LangChain-compatible tool instances. | required |
| max_iterations | int | Maximum number of reasoning iterations. | required |
| output_schema | Optional[Type[Any]] | Optional Pydantic model for structured final output. | None |

Returns:

| Type | Description |
|---|---|
| Any | The final text response, or an instance of output_schema if provided. |

Source code in src/pico_agent/interfaces.py
def invoke_agent_loop(
    self,
    messages: List[Dict[str, str]],
    tools: List[Any],
    max_iterations: int,
    output_schema: Optional[Type[Any]] = None,
) -> Any:
    """Run a ReAct-style tool loop via LangGraph.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        max_iterations: Maximum number of reasoning iterations.
        output_schema: Optional Pydantic model for structured final
            output.

    Returns:
        The final text response, or an instance of *output_schema* if
        provided.
    """
    ...
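Because the `LLM` contract is structural, a scripted fake is enough to exercise agent code in tests. The sketch below implements only `invoke`, returning canned replies and recording the role/content message dicts it receives — `ScriptedLLM` is a hypothetical test double, not a pico_agent class:

```python
from typing import Any, Dict, List

class ScriptedLLM:
    """Hypothetical LLM stand-in: canned replies, recorded inputs."""

    def __init__(self, replies: List[str]) -> None:
        self._replies = iter(replies)
        self.seen: List[List[Dict[str, str]]] = []

    def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
        # Record what the agent sent; note the "role"/"content" dict shape.
        self.seen.append(messages)
        return next(self._replies)

llm = ScriptedLLM(["Paris"])
answer = llm.invoke(
    [
        {"role": "system", "content": "Answer briefly."},
        {"role": "user", "content": "Capital of France?"},
    ],
    tools=[],
)
```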

LLMFactory

Bases: Protocol

Protocol for creating LLM instances from model parameters.

LangChainLLMFactory is the built-in implementation that supports OpenAI, Azure, Anthropic, Google, DeepSeek, and Qwen.

Source code in src/pico_agent/interfaces.py
class LLMFactory(Protocol):
    """Protocol for creating ``LLM`` instances from model parameters.

    ``LangChainLLMFactory`` is the built-in implementation that supports
    OpenAI, Azure, Anthropic, Google, DeepSeek, and Qwen.
    """

    def create(
        self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
    ) -> LLM:
        """Create an ``LLM`` instance for the given model.

        Args:
            model_name: Model identifier, optionally prefixed with a provider
                (e.g., ``"openai:gpt-5-mini"``).
            temperature: Sampling temperature (0.0 -- 2.0).
            max_tokens: Maximum response tokens, or ``None`` for the provider
                default.
            llm_profile: Named profile in ``LLMConfig`` for API key / base
                URL selection.

        Returns:
            A configured ``LLM`` instance ready for invocation.
        """
        ...

create(model_name, temperature, max_tokens, llm_profile=None)

Create an LLM instance for the given model.

Parameters:

Name Type Description Default
model_name str

Model identifier, optionally prefixed with a provider (e.g., "openai:gpt-5-mini").

required
temperature float

Sampling temperature (0.0 -- 2.0).

required
max_tokens Optional[int]

Maximum response tokens, or None for the provider default.

required
llm_profile Optional[str]

Named profile in LLMConfig for API key / base URL selection.

None

Returns:

Type Description
LLM

A configured LLM instance ready for invocation.

Source code in src/pico_agent/interfaces.py
def create(
    self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
) -> LLM:
    """Create an ``LLM`` instance for the given model.

    Args:
        model_name: Model identifier, optionally prefixed with a provider
            (e.g., ``"openai:gpt-5-mini"``).
        temperature: Sampling temperature (0.0 -- 2.0).
        max_tokens: Maximum response tokens, or ``None`` for the provider
            default.
        llm_profile: Named profile in ``LLMConfig`` for API key / base
            URL selection.

    Returns:
        A configured ``LLM`` instance ready for invocation.
    """
    ...

Exceptions

pico_agent.exceptions

Exception hierarchy for pico-agent.

All pico-agent exceptions inherit from AgentError, making it easy to catch any framework error with a single except AgentError clause.

AgentError

Bases: Exception

Base exception for all pico-agent errors.

Source code in src/pico_agent/exceptions.py
class AgentError(Exception):
    """Base exception for all pico-agent errors."""

    pass

AgentDisabledError

Bases: AgentError

Raised when an agent is invoked but its configuration has enabled=False.

The error message follows the pattern:

Agent '&lt;name&gt;' is disabled via configuration.

Parameters:

Name Type Description Default
agent_name str

The name of the disabled agent.

required
Source code in src/pico_agent/exceptions.py
class AgentDisabledError(AgentError):
    """Raised when an agent is invoked but its configuration has ``enabled=False``.

    The error message follows the pattern:

        ``Agent '<name>' is disabled via configuration.``

    Args:
        agent_name: The name of the disabled agent.
    """

    def __init__(self, agent_name: str):
        super().__init__(f"Agent '{agent_name}' is disabled via configuration.")

AgentConfigurationError

Bases: AgentError

Raised for missing or invalid agent / provider configuration.

Common causes include missing API keys in LLMConfig or unknown provider names.

Source code in src/pico_agent/exceptions.py
class AgentConfigurationError(AgentError):
    """Raised for missing or invalid agent / provider configuration.

    Common causes include missing API keys in ``LLMConfig`` or unknown
    provider names.
    """

    pass

AgentLifecycleError

Bases: AgentError

Raised when an operation violates the agent system lifecycle.

For example, attempting to use the system before it has reached the READY phase.

Source code in src/pico_agent/exceptions.py
class AgentLifecycleError(AgentError):
    """Raised when an operation violates the agent system lifecycle.

    For example, attempting to use the system before it has reached the
    ``READY`` phase.
    """

    pass
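Because every framework error subclasses `AgentError`, a single handler covers the whole hierarchy. The snippet below re-declares two of the classes exactly as shown above to demonstrate the catch pattern:

```python
class AgentError(Exception):
    """Base exception for all pico-agent errors."""


class AgentDisabledError(AgentError):
    """Raised when an agent's configuration has enabled=False."""

    def __init__(self, agent_name: str):
        super().__init__(f"Agent '{agent_name}' is disabled via configuration.")


try:
    raise AgentDisabledError("fast_bot")
except AgentError as exc:  # one clause catches any framework error
    msg = str(exc)
```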

Bootstrap

pico_agent.bootstrap

Bootstrap helper that wraps pico_ioc.init() with pico-agent defaults.

The init() function automatically prepends the pico_agent module, loads plugin modules from the pico_agent.plugins entry point group, and harvests any PICO_SCANNERS declarations from user modules.

init(*args, **kwargs)

Initialise a pico-ioc container with pico-agent infrastructure.

Wraps pico_ioc.init() and:

  1. Prepends the pico_agent module to the module list.
  2. Loads plugin modules from the pico_agent.plugins entry point group (disable with PICO_AGENT_AUTO_PLUGINS=false).
  3. Harvests PICO_SCANNERS from all modules and adds them as custom scanners.

All positional and keyword arguments are forwarded to pico_ioc.init().

Returns:

Type Description
PicoContainer

A fully configured PicoContainer.

Example

from pico_agent import init
container = init(modules=["myapp"])

Source code in src/pico_agent/bootstrap.py
def init(*args: Any, **kwargs: Any) -> "PicoContainer":
    """Initialise a pico-ioc container with pico-agent infrastructure.

    Wraps ``pico_ioc.init()`` and:

    1. Prepends the ``pico_agent`` module to the module list.
    2. Loads plugin modules from the ``pico_agent.plugins`` entry point group
       (disable with ``PICO_AGENT_AUTO_PLUGINS=false``).
    3. Harvests ``PICO_SCANNERS`` from all modules and adds them as custom
       scanners.

    All positional and keyword arguments are forwarded to
    ``pico_ioc.init()``.

    Returns:
        A fully configured ``PicoContainer``.

    Example:
        >>> from pico_agent import init
        >>> container = init(modules=["myapp"])
    """
    bound = _IOC_INIT_SIG.bind(*args, **kwargs)
    bound.apply_defaults()

    raw = _to_module_list(bound.arguments["modules"])
    raw_with_agent = [pico_agent] + list(raw)
    base_modules = _normalize_modules(raw_with_agent)

    auto_flag = os.getenv("PICO_AGENT_AUTO_PLUGINS", "true").lower()
    if auto_flag not in ("0", "false", "no"):
        plugin_modules = _load_plugin_modules()
        all_modules = _normalize_modules(list(base_modules) + plugin_modules)
    else:
        all_modules = base_modules

    bound.arguments["modules"] = all_modules

    harvested = _harvest_scanners(all_modules)
    if harvested:
        existing = bound.arguments.get("custom_scanners") or []
        bound.arguments["custom_scanners"] = list(existing) + harvested

    return _ioc_init(*bound.args, **bound.kwargs)
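The plugin auto-loading switch accepts "0", "false", or "no" (case-insensitive); anything else keeps plugins on. The check, isolated from the source above:

```python
import os


def auto_plugins_enabled() -> bool:
    # Same test as in init(): only 0/false/no disable plugin loading
    flag = os.getenv("PICO_AGENT_AUTO_PLUGINS", "true").lower()
    return flag not in ("0", "false", "no")


os.environ["PICO_AGENT_AUTO_PLUGINS"] = "False"
disabled = auto_plugins_enabled()

os.environ["PICO_AGENT_AUTO_PLUGINS"] = "true"
enabled = auto_plugins_enabled()
```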

Providers

pico_agent.providers

LangChain-based LLM adapter and multi-provider factory.

LangChainAdapter implements the LLM protocol by delegating to a LangChain BaseChatModel. LangChainLLMFactory implements the LLMFactory protocol and creates adapters for OpenAI, Azure, Anthropic, Google/Gemini, DeepSeek, and Qwen providers.

LangChainAdapter

Bases: LLM

Adapts a LangChain BaseChatModel to the pico-agent LLM protocol.

Handles message conversion, tool binding, structured output, and the ReAct agent loop via LangGraph. Optionally records traces through TraceService.

Parameters:

Name Type Description Default
chat_model BaseChatModel

A LangChain chat model instance.

required
tracer Any

Optional TraceService for recording LLM invocations.

None
model_name str

Human-readable model identifier used in trace names.

''
Source code in src/pico_agent/providers.py
class LangChainAdapter(LLM):
    """Adapts a LangChain ``BaseChatModel`` to the pico-agent ``LLM`` protocol.

    Handles message conversion, tool binding, structured output, and the
    ReAct agent loop via LangGraph.  Optionally records traces through
    ``TraceService``.

    Args:
        chat_model: A LangChain chat model instance.
        tracer: Optional ``TraceService`` for recording LLM invocations.
        model_name: Human-readable model identifier used in trace names.
    """

    def __init__(self, chat_model: BaseChatModel, tracer: Any = None, model_name: str = ""):
        self.model = chat_model
        self.tracer = tracer
        self.model_name = model_name

    def _convert_messages(self, messages: List[Dict[str, str]]) -> List[BaseMessage]:
        """Convert pico-agent message dicts to LangChain ``BaseMessage`` objects.

        Args:
            messages: List of dicts with ``"role"`` and ``"content"`` keys.

        Returns:
            List of ``SystemMessage``, ``HumanMessage``, or ``AIMessage``.
        """
        lc_messages = []
        for msg in messages:
            if msg["role"] == "system":
                lc_messages.append(SystemMessage(content=msg["content"]))
            elif msg["role"] == "user":
                lc_messages.append(HumanMessage(content=msg["content"]))
            elif msg["role"] == "assistant":
                lc_messages.append(AIMessage(content=msg["content"]))
        return lc_messages

    def _trace(self, method_name, inputs, func):
        run_id = None
        if self.tracer:
            run_id = self.tracer.start_run(
                name=f"LLM: {self.model_name}",
                run_type="llm",
                inputs={"messages": inputs},
                extra={"method": method_name},
            )
        try:
            result = func()
            if self.tracer and run_id:
                self.tracer.end_run(run_id, outputs=str(result))
            return result
        except Exception as e:
            if self.tracer and run_id:
                self.tracer.end_run(run_id, error=e)
            raise

    def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
        """Send messages to the LLM and return the text response.

        Args:
            messages: List of message dicts with ``"role"`` and ``"content"``.
            tools: LangChain-compatible tool instances to bind.

        Returns:
            The model's text response.
        """

        def _exec():
            lc_messages = self._convert_messages(messages)
            model_with_tools = self.model
            if tools:
                model_with_tools = self.model.bind_tools(tools)
            response = model_with_tools.invoke(lc_messages)
            return str(response.content)

        return self._trace("invoke", messages, _exec)

    def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
        """Send messages and parse the response into a Pydantic model.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            output_schema: A ``pydantic.BaseModel`` subclass.

        Returns:
            An instance of *output_schema* populated from the LLM response.
        """

        def _exec():
            lc_messages = self._convert_messages(messages)
            structured_model = self.model.with_structured_output(output_schema)
            return structured_model.invoke(lc_messages)

        return self._trace("invoke_structured", messages, _exec)

    def invoke_agent_loop(
        self,
        messages: List[Dict[str, str]],
        tools: List[Any],
        max_iterations: int,
        output_schema: Optional[Type[Any]] = None,
    ) -> Any:
        """Run a ReAct-style tool loop via LangGraph.

        Args:
            messages: List of message dicts.
            tools: LangChain-compatible tool instances.
            max_iterations: Maximum reasoning iterations (``recursion_limit``).
            output_schema: Optional Pydantic model for structured final output.

        Returns:
            The final text response, or an *output_schema* instance if provided.
        """

        def _exec():
            from langgraph.prebuilt import create_react_agent

            lc_messages = self._convert_messages(messages)
            agent_executor = create_react_agent(self.model, tools=tools)
            inputs = {"messages": lc_messages}
            result = agent_executor.invoke(inputs, config={"recursion_limit": max_iterations})
            final_message = result["messages"][-1]

            if output_schema:
                structured_model = self.model.with_structured_output(output_schema)
                return structured_model.invoke([HumanMessage(content=str(final_message.content))])
            return str(final_message.content)

        return self._trace("invoke_agent_loop", messages, _exec)
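The role-to-class mapping used by `_convert_messages` can be sketched without LangChain installed (the class names below are plain strings standing in for the real message objects). Roles other than `system`, `user`, and `assistant` are silently dropped, matching the source above:

```python
ROLE_MAP = {"system": "SystemMessage", "user": "HumanMessage", "assistant": "AIMessage"}


def convert(messages):
    # Mirrors _convert_messages: unknown roles are skipped, order is preserved
    return [(ROLE_MAP[m["role"]], m["content"]) for m in messages if m["role"] in ROLE_MAP]


out = convert([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "hi"},
    {"role": "tool", "content": "ignored"},
])
```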

invoke(messages, tools)

Send messages to the LLM and return the text response.

Parameters:

Name Type Description Default
messages List[Dict[str, str]]

List of message dicts with "role" and "content".

required
tools List[Any]

LangChain-compatible tool instances to bind.

required

Returns:

Type Description
str

The model's text response.

Source code in src/pico_agent/providers.py
def invoke(self, messages: List[Dict[str, str]], tools: List[Any]) -> str:
    """Send messages to the LLM and return the text response.

    Args:
        messages: List of message dicts with ``"role"`` and ``"content"``.
        tools: LangChain-compatible tool instances to bind.

    Returns:
        The model's text response.
    """

    def _exec():
        lc_messages = self._convert_messages(messages)
        model_with_tools = self.model
        if tools:
            model_with_tools = self.model.bind_tools(tools)
        response = model_with_tools.invoke(lc_messages)
        return str(response.content)

    return self._trace("invoke", messages, _exec)

invoke_structured(messages, tools, output_schema)

Send messages and parse the response into a Pydantic model.

Parameters:

Name Type Description Default
messages List[Dict[str, str]]

List of message dicts.

required
tools List[Any]

LangChain-compatible tool instances.

required
output_schema Type[Any]

A pydantic.BaseModel subclass.

required

Returns:

Type Description
Any

An instance of output_schema populated from the LLM response.

Source code in src/pico_agent/providers.py
def invoke_structured(self, messages: List[Dict[str, str]], tools: List[Any], output_schema: Type[Any]) -> Any:
    """Send messages and parse the response into a Pydantic model.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        output_schema: A ``pydantic.BaseModel`` subclass.

    Returns:
        An instance of *output_schema* populated from the LLM response.
    """

    def _exec():
        lc_messages = self._convert_messages(messages)
        structured_model = self.model.with_structured_output(output_schema)
        return structured_model.invoke(lc_messages)

    return self._trace("invoke_structured", messages, _exec)

invoke_agent_loop(messages, tools, max_iterations, output_schema=None)

Run a ReAct-style tool loop via LangGraph.

Parameters:

Name Type Description Default
messages List[Dict[str, str]]

List of message dicts.

required
tools List[Any]

LangChain-compatible tool instances.

required
max_iterations int

Maximum reasoning iterations (recursion_limit).

required
output_schema Optional[Type[Any]]

Optional Pydantic model for structured final output.

None

Returns:

Type Description
Any

The final text response, or an output_schema instance if provided.

Source code in src/pico_agent/providers.py
def invoke_agent_loop(
    self,
    messages: List[Dict[str, str]],
    tools: List[Any],
    max_iterations: int,
    output_schema: Optional[Type[Any]] = None,
) -> Any:
    """Run a ReAct-style tool loop via LangGraph.

    Args:
        messages: List of message dicts.
        tools: LangChain-compatible tool instances.
        max_iterations: Maximum reasoning iterations (``recursion_limit``).
        output_schema: Optional Pydantic model for structured final output.

    Returns:
        The final text response, or an *output_schema* instance if provided.
    """

    def _exec():
        from langgraph.prebuilt import create_react_agent

        lc_messages = self._convert_messages(messages)
        agent_executor = create_react_agent(self.model, tools=tools)
        inputs = {"messages": lc_messages}
        result = agent_executor.invoke(inputs, config={"recursion_limit": max_iterations})
        final_message = result["messages"][-1]

        if output_schema:
            structured_model = self.model.with_structured_output(output_schema)
            return structured_model.invoke([HumanMessage(content=str(final_message.content))])
        return str(final_message.content)

    return self._trace("invoke_agent_loop", messages, _exec)

LangChainLLMFactory

Bases: LLMFactory

Multi-provider LLM factory backed by LangChain chat model classes.

Supports OpenAI, Azure, Anthropic (Claude), Google (Gemini), DeepSeek, and Qwen. Provider detection is automatic based on the model name, or explicit via the "provider:model" syntax (e.g., "openai:gpt-5-mini").

Parameters:

Name Type Description Default
config LLMConfig

LLMConfig containing API keys and base URLs.

required
container Any

Optional PicoContainer used to resolve TraceService.

None

Raises:

Type Description
AgentConfigurationError

If the required API key is missing.

ImportError

If the required LangChain provider package is not installed.

ValueError

If the provider name is unknown.

Source code in src/pico_agent/providers.py
class LangChainLLMFactory(LLMFactory):
    """Multi-provider LLM factory backed by LangChain chat model classes.

    Supports OpenAI, Azure, Anthropic (Claude), Google (Gemini), DeepSeek,
    and Qwen.  Provider detection is automatic based on the model name, or
    explicit via the ``"provider:model"`` syntax (e.g., ``"openai:gpt-5-mini"``).

    Args:
        config: ``LLMConfig`` containing API keys and base URLs.
        container: Optional ``PicoContainer`` used to resolve ``TraceService``.

    Raises:
        AgentConfigurationError: If the required API key is missing.
        ImportError: If the required LangChain provider package is not
            installed.
        ValueError: If the provider name is unknown.
    """

    def __init__(self, config: LLMConfig, container: Any = None):
        self.config = config
        self.container = container

    def create(
        self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
    ) -> LLM:
        """Create a ``LangChainAdapter`` for the given model.

        The provider is detected from the model name or specified explicitly
        using the ``"provider:model"`` syntax.

        Args:
            model_name: Model identifier, optionally prefixed with provider
                (e.g., ``"openai:gpt-5-mini"``).
            temperature: Sampling temperature.
            max_tokens: Maximum response tokens, or ``None``.
            llm_profile: Named profile for API-key / base-URL selection.

        Returns:
            A configured ``LangChainAdapter`` instance.

        Raises:
            AgentConfigurationError: If the API key for the detected provider
                is missing.  Message pattern:
                ``"API Key not found for provider '<provider>' (Profile: '<profile>'). Please configure it via LLMConfig."``
            ImportError: If the LangChain package for the provider is not
                installed.  Message pattern:
                ``"Please install 'pico-agent[<extra>]' to use <provider>."``
            ValueError: If the provider name is not recognised.  Message:
                ``"Unknown LLM Provider: <provider>"``
        """
        final_provider = None
        real_model_name = model_name

        if ":" in model_name:
            parts = model_name.split(":", 1)
            final_provider = parts[0]
            real_model_name = parts[1]

        if not final_provider:
            final_provider = self._detect_provider(real_model_name)

        chat_model = self.create_chat_model(final_provider, real_model_name, llm_profile)

        if temperature is not None:
            try:
                chat_model.temperature = temperature
            except AttributeError:
                pass

        if max_tokens is not None:
            try:
                chat_model.max_tokens = max_tokens
            except AttributeError:
                pass

        tracer = None
        if self.container:
            try:
                from .tracing import TraceService

                if self.container.has(TraceService):
                    tracer = self.container.get(TraceService)
            except ImportError:
                pass

        return LangChainAdapter(chat_model, tracer, real_model_name)

    def _get_api_key(self, provider: str, profile: Optional[str]) -> Optional[str]:
        if profile and profile in self.config.api_keys:
            return self.config.api_keys[profile]
        return self.config.api_keys.get(provider)

    def _get_base_url(self, provider: str, default: Optional[str], profile: Optional[str]) -> Optional[str]:
        if profile and profile in self.config.base_urls:
            return self.config.base_urls[profile]
        return self.config.base_urls.get(provider, default)

    def _detect_provider(self, model_name: str) -> str:
        """Auto-detect the LLM provider from a model name.

        Detection rules (first match wins):

        - Contains ``"gemini"`` -> ``"gemini"``
        - Contains ``"claude"`` or ``"anthropic"`` -> ``"claude"``
        - Contains ``"deepseek"`` -> ``"deepseek"``
        - Contains ``"qwen"`` -> ``"qwen"``
        - Contains ``"azure"`` -> ``"azure"``
        - Otherwise -> ``"openai"``

        Args:
            model_name: The model identifier to inspect.

        Returns:
            A provider key string.
        """
        name_lower = model_name.lower()
        if "gemini" in name_lower:
            return "gemini"
        elif "claude" in name_lower or "anthropic" in name_lower:
            return "claude"
        elif "deepseek" in name_lower:
            return "deepseek"
        elif "qwen" in name_lower:
            return "qwen"
        elif "azure" in name_lower:
            return "azure"
        return "openai"

    def _require_key(self, provider_name: str, profile: Optional[str]) -> str:
        """Get API key or raise configuration error."""
        key = self._get_api_key(provider_name, profile)
        if not key:
            raise AgentConfigurationError(
                f"API Key not found for provider '{provider_name}' (Profile: '{profile}'). "
                "Please configure it via LLMConfig."
            )
        return key

    def _create_openai(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            from langchain_openai import ChatOpenAI

            api_key = self._require_key("openai", profile)
            return ChatOpenAI(model=model_name, api_key=api_key, request_timeout=timeout)
        except ImportError:
            raise ImportError("Please install 'pico-agent[openai]' to use this provider.")

    def _create_azure(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            import os

            from langchain_openai import AzureChatOpenAI

            api_key = self._require_key("azure", profile)
            return AzureChatOpenAI(
                azure_deployment=model_name,
                openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
                api_key=api_key,
                request_timeout=timeout,
            )
        except ImportError:
            raise ImportError("Please install 'pico-agent[openai]' to use Azure OpenAI.")

    def _create_gemini(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            from langchain_google_genai import ChatGoogleGenerativeAI

            api_key = self._require_key("google", profile)
            return ChatGoogleGenerativeAI(
                model=model_name,
                google_api_key=api_key,
                temperature=0.0,
                request_timeout=timeout,
            )
        except ImportError:
            raise ImportError("Please install 'pico-agent[google]' to use Gemini.")

    def _create_anthropic(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            from langchain_anthropic import ChatAnthropic

            api_key = self._require_key("anthropic", profile)
            base_url = self._get_base_url("anthropic", None, profile)
            return ChatAnthropic(
                model=model_name, api_key=api_key, base_url=base_url, temperature=0.0, default_request_timeout=timeout
            )
        except ImportError:
            raise ImportError("Please install 'pico-agent[anthropic]' to use Claude.")

    def _create_deepseek(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            from langchain_openai import ChatOpenAI

            base_url = self._get_base_url("deepseek", "https://api.deepseek.com/v1", profile)
            api_key = self._require_key("deepseek", profile)
            return ChatOpenAI(
                model=model_name,
                base_url=base_url,
                api_key=api_key,
                temperature=0.0,
                request_timeout=timeout,
            )
        except ImportError:
            raise ImportError("Please install 'pico-agent[openai]' to use DeepSeek.")

    def _create_qwen(self, model_name: str, profile: Optional[str], timeout: int) -> BaseChatModel:
        try:
            from langchain_openai import ChatOpenAI

            base_url = self._get_base_url("qwen", "https://dashscope.aliyuncs.com/compatible-mode/v1", profile)
            api_key = self._require_key("qwen", profile)
            return ChatOpenAI(
                model=model_name,
                base_url=base_url,
                api_key=api_key,
                temperature=0.0,
                request_timeout=timeout,
            )
        except ImportError:
            raise ImportError("Please install 'pico-agent[openai]' to use Qwen.")

    def create_chat_model(self, provider: str, model_name: str, profile: Optional[str]) -> BaseChatModel:
        """Create a LangChain ``BaseChatModel`` for the specified provider.

        Args:
            provider: Provider key (``"openai"``, ``"azure"``, ``"gemini"``,
                ``"google"``, ``"claude"``, ``"anthropic"``, ``"deepseek"``,
                ``"qwen"``).
            model_name: The provider-specific model name.
            profile: Optional named profile for API key / base URL.

        Returns:
            A configured ``BaseChatModel`` instance.

        Raises:
            ValueError: If *provider* is not recognised.
        """
        provider_lower = provider.lower()
        timeout = 60

        providers = {
            "openai": self._create_openai,
            "azure": self._create_azure,
            "gemini": self._create_gemini,
            "google": self._create_gemini,
            "claude": self._create_anthropic,
            "anthropic": self._create_anthropic,
            "deepseek": self._create_deepseek,
            "qwen": self._create_qwen,
        }

        creator = providers.get(provider_lower)
        if not creator:
            raise ValueError(f"Unknown LLM Provider: {provider}")

        return creator(model_name, profile, timeout)
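Taken together, the explicit `"provider:model"` prefix and the fallback detection behave like this standalone sketch (function names mirror the private helpers above):

```python
def detect_provider(model_name: str) -> str:
    # First match wins, same order as _detect_provider above
    name = model_name.lower()
    if "gemini" in name:
        return "gemini"
    if "claude" in name or "anthropic" in name:
        return "claude"
    if "deepseek" in name:
        return "deepseek"
    if "qwen" in name:
        return "qwen"
    if "azure" in name:
        return "azure"
    return "openai"


def split_model(model_name: str):
    # An explicit "provider:model" prefix overrides auto-detection
    if ":" in model_name:
        provider, real_name = model_name.split(":", 1)
        return provider, real_name
    return detect_provider(model_name), model_name
```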

create(model_name, temperature, max_tokens, llm_profile=None)

Create a LangChainAdapter for the given model.

The provider is detected from the model name or specified explicitly using the "provider:model" syntax.

Parameters:

Name Type Description Default
model_name str

Model identifier, optionally prefixed with provider (e.g., "openai:gpt-5-mini").

required
temperature float

Sampling temperature.

required
max_tokens Optional[int]

Maximum response tokens, or None.

required
llm_profile Optional[str]

Named profile for API-key / base-URL selection.

None

Returns:

Type Description
LLM

A configured LangChainAdapter instance.

Raises:

Type Description
AgentConfigurationError

If the API key for the detected provider is missing. Message pattern: "API Key not found for provider '<provider>' (Profile: '<profile>'). Please configure it via LLMConfig."

ImportError

If the LangChain package for the provider is not installed. Message pattern: "Please install 'pico-agent[<extra>]' to use <provider>."

ValueError

If the provider name is not recognised. Message: "Unknown LLM Provider: <provider>"

Source code in src/pico_agent/providers.py
def create(
    self, model_name: str, temperature: float, max_tokens: Optional[int], llm_profile: Optional[str] = None
) -> LLM:
    """Create a ``LangChainAdapter`` for the given model.

    The provider is detected from the model name or specified explicitly
    using the ``"provider:model"`` syntax.

    Args:
        model_name: Model identifier, optionally prefixed with provider
            (e.g., ``"openai:gpt-5-mini"``).
        temperature: Sampling temperature.
        max_tokens: Maximum response tokens, or ``None``.
        llm_profile: Named profile for API-key / base-URL selection.

    Returns:
        A configured ``LangChainAdapter`` instance.

    Raises:
        AgentConfigurationError: If the API key for the detected provider
            is missing.  Message pattern:
            ``"API Key not found for provider '<provider>' (Profile: '<profile>'). Please configure it via LLMConfig."``
        ImportError: If the LangChain package for the provider is not
            installed.  Message pattern:
            ``"Please install 'pico-agent[<extra>]' to use <provider>."``
        ValueError: If the provider name is not recognised.  Message:
            ``"Unknown LLM Provider: <provider>"``
    """
    final_provider = None
    real_model_name = model_name

    if ":" in model_name:
        parts = model_name.split(":", 1)
        final_provider = parts[0]
        real_model_name = parts[1]

    if not final_provider:
        final_provider = self._detect_provider(real_model_name)

    chat_model = self.create_chat_model(final_provider, real_model_name, llm_profile)

    if temperature is not None:
        try:
            chat_model.temperature = temperature
        except AttributeError:
            pass

    if max_tokens is not None:
        try:
            chat_model.max_tokens = max_tokens
        except AttributeError:
            pass

    tracer = None
    if self.container:
        try:
            from .tracing import TraceService

            if self.container.has(TraceService):
                tracer = self.container.get(TraceService)
        except ImportError:
            pass

    return LangChainAdapter(chat_model, tracer, real_model_name)

create_chat_model(provider, model_name, profile)

Create a LangChain BaseChatModel for the specified provider.

Parameters:

Name Type Description Default
provider str

Provider key ("openai", "azure", "gemini", "google", "claude", "anthropic", "deepseek", "qwen").

required
model_name str

The provider-specific model name.

required
profile Optional[str]

Optional named profile for API key / base URL.

required

Returns:

Type Description
BaseChatModel

A configured BaseChatModel instance.

Raises:

Type Description
ValueError

If provider is not recognised.

Source code in src/pico_agent/providers.py
def create_chat_model(self, provider: str, model_name: str, profile: Optional[str]) -> BaseChatModel:
    """Create a LangChain ``BaseChatModel`` for the specified provider.

    Args:
        provider: Provider key (``"openai"``, ``"azure"``, ``"gemini"``,
            ``"google"``, ``"claude"``, ``"anthropic"``, ``"deepseek"``,
            ``"qwen"``).
        model_name: The provider-specific model name.
        profile: Optional named profile for API key / base URL.

    Returns:
        A configured ``BaseChatModel`` instance.

    Raises:
        ValueError: If *provider* is not recognised.
    """
    provider_lower = provider.lower()
    timeout = 60

    providers = {
        "openai": self._create_openai,
        "azure": self._create_azure,
        "gemini": self._create_gemini,
        "google": self._create_gemini,
        "claude": self._create_anthropic,
        "anthropic": self._create_anthropic,
        "deepseek": self._create_deepseek,
        "qwen": self._create_qwen,
    }

    creator = providers.get(provider_lower)
    if not creator:
        raise ValueError(f"Unknown LLM Provider: {provider}")

    return creator(model_name, profile, timeout)
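The dispatch table above can be sketched in isolation. This standalone example (the factory callables are placeholders, not the real `_create_*` methods) shows how alias keys share one factory and unknown keys raise `ValueError`:

```python
def make(label):
    # Placeholder factory; the real code returns a configured BaseChatModel.
    return lambda model_name: f"{label}:{model_name}"

providers = {
    "openai": make("openai"),
    "gemini": make("gemini"),
    "google": make("gemini"),        # alias for gemini
    "claude": make("anthropic"),
    "anthropic": make("anthropic"),  # alias for claude
}

def create_chat_model(provider: str, model_name: str) -> str:
    # Lower-case lookup, so "Google" and "google" resolve identically.
    creator = providers.get(provider.lower())
    if not creator:
        raise ValueError(f"Unknown LLM Provider: {provider}")
    return creator(model_name)

print(create_chat_model("Google", "gemini-3-pro"))  # gemini:gemini-3-pro
```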

Messages

pico_agent.messages

Shared message builder for agent execution.

Converts AgentConfig system / user prompt templates and an input context dictionary into the [{"role": ..., "content": ...}] message list expected by the LLM protocol.

build_messages(config, context)

Build an LLM message list from agent config and input context.

Template placeholders in config.system_prompt and config.user_prompt_template are filled using context keys. If a system-prompt placeholder cannot be resolved, the raw template is kept as-is; if a user-prompt placeholder cannot be resolved, the space-joined context values are used instead.

Parameters:

Name Type Description Default
config AgentConfig

The agent's AgentConfig.

required
context Dict[str, Any]

Mapping of parameter names to their string values, typically derived from the method signature of the invoked agent method.

required

Returns:

Type Description
List[Dict[str, str]]

A list of message dicts with "role" and "content" keys, starting with an optional "system" message followed by a "user" message.

Source code in src/pico_agent/messages.py
def build_messages(config: AgentConfig, context: Dict[str, Any]) -> List[Dict[str, str]]:
    """Build an LLM message list from agent config and input context.

    Template placeholders in ``config.system_prompt`` and
    ``config.user_prompt_template`` are filled using *context* keys.  If a
    system-prompt placeholder cannot be resolved, the raw template is kept
    as-is; if a user-prompt placeholder cannot be resolved, the space-joined
    context values are used instead.

    Args:
        config: The agent's ``AgentConfig``.
        context: Mapping of parameter names to their string values, typically
            derived from the method signature of the invoked agent method.

    Returns:
        A list of message dicts with ``"role"`` and ``"content"`` keys,
        starting with an optional ``"system"`` message followed by a
        ``"user"`` message.
    """
    messages = []
    if config.system_prompt:
        try:
            sys_content = config.system_prompt.format(**context)
        except KeyError:
            sys_content = config.system_prompt
        messages.append({"role": "system", "content": sys_content})

    user_content = " ".join(str(v) for v in context.values())
    if config.user_prompt_template:
        try:
            user_content = config.user_prompt_template.format(**context)
        except KeyError:
            pass

    messages.append({"role": "user", "content": user_content})
    return messages
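The fallback rule `build_messages` relies on is plain `str.format` behaviour: a placeholder with no matching context key raises `KeyError`. A minimal illustration (template and keys are invented):

```python
template = "Summarize {doc} in {lang}"
context = {"doc": "report.txt"}  # no "lang" key -> format() raises KeyError

try:
    text = template.format(**context)
except KeyError:
    # System-prompt fallback: keep the raw template untouched.
    text = template

print(text)  # Summarize {doc} in {lang}

# With a complete context, placeholders are filled normally.
print(template.format(doc="report.txt", lang="English"))
```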

Tools

pico_agent.tools

Tool wrappers for pico-agent.

ToolWrapper adapts pico-agent @tool-decorated classes to the LangChain tool interface. AgentAsTool wraps child agents so they can be invoked as tools by a parent agent during a ReAct loop.

ToolWrapper

Adapts a pico-agent @tool-decorated instance to the LangChain tool interface.

Exposes name, description, args_schema, and __call__ so the instance can be passed directly to LangChain tool-binding APIs.

Parameters:

Name Type Description Default
instance Any

An instance of a @tool-decorated class.

required
config ToolConfig

The ToolConfig extracted from the class metadata.

required

Raises:

Type Description
ValueError

If the instance does not implement __call__, run, execute, or invoke. Message: "Tool <name> must implement __call__, run, execute, or invoke."

Source code in src/pico_agent/tools.py
class ToolWrapper:
    """Adapts a pico-agent ``@tool``-decorated instance to the LangChain tool interface.

    Exposes ``name``, ``description``, ``args_schema``, and ``__call__`` so
    the instance can be passed directly to LangChain tool-binding APIs.

    Args:
        instance: An instance of a ``@tool``-decorated class.
        config: The ``ToolConfig`` extracted from the class metadata.

    Raises:
        ValueError: If the instance does not implement ``__call__``, ``run``,
            ``execute``, or ``invoke``.  Message:
            ``"Tool <name> must implement __call__, run, execute, or invoke."``
    """

    def __init__(self, instance: Any, config: ToolConfig):
        self.instance = instance
        self.name = config.name
        self.description = config.description
        self.func = self._resolve_function(instance)
        self.args_schema = _create_schema_from_sig(self.name, self.func)

    def _resolve_function(self, instance: Any) -> Any:
        if hasattr(instance, "__call__"):
            return instance.__call__

        for method in ["run", "execute", "invoke"]:
            if hasattr(instance, method):
                return getattr(instance, method)

        raise ValueError(f"Tool {self.name} must implement __call__, run, execute, or invoke.")

    def __call__(self, **kwargs):
        return self.func(**kwargs)
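The resolution order is easy to demonstrate standalone. This sketch (`Runner` and `Adder` are hypothetical tool classes) mirrors `_resolve_function`: `__call__` wins, then `run`, `execute`, `invoke`, otherwise `ValueError`:

```python
def resolve_function(instance, name="demo"):
    # Instances are callable only if their *class* defines __call__,
    # so hasattr() is a safe test here.
    if hasattr(instance, "__call__"):
        return instance.__call__
    for method in ("run", "execute", "invoke"):
        if hasattr(instance, method):
            return getattr(instance, method)
    raise ValueError(f"Tool {name} must implement __call__, run, execute, or invoke.")

class Runner:
    def run(self, x: int) -> int:
        return x * 2

class Adder:
    def __call__(self, x: int) -> int:
        return x + 1

print(resolve_function(Runner())(3))  # 6  (falls through to .run)
print(resolve_function(Adder())(3))   # 4  (__call__ wins)
```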

AgentAsTool

Wraps a DynamicAgentProxy as a LangChain-compatible tool.

This allows a parent agent to invoke a child agent through the LLM's tool-calling mechanism. The tool's args_schema is derived from the child agent's Protocol method signature.

Parameters:

Name Type Description Default
agent_proxy Any

A DynamicAgentProxy for the child agent.

required
method_name str

The protocol method to invoke (default: "invoke").

'invoke'
description str

Optional description override. If empty, the child agent's AgentConfig.description is used.

''
Source code in src/pico_agent/tools.py
class AgentAsTool:
    """Wraps a ``DynamicAgentProxy`` as a LangChain-compatible tool.

    This allows a parent agent to invoke a child agent through the LLM's
    tool-calling mechanism.  The tool's ``args_schema`` is derived from the
    child agent's Protocol method signature.

    Args:
        agent_proxy: A ``DynamicAgentProxy`` for the child agent.
        method_name: The protocol method to invoke (default: ``"invoke"``).
        description: Optional description override.  If empty, the child
            agent's ``AgentConfig.description`` is used.
    """

    def __init__(self, agent_proxy: Any, method_name: str = "invoke", description: str = ""):
        self.proxy = agent_proxy
        self.method_name = method_name
        self._func = getattr(agent_proxy, method_name)
        self.name = getattr(agent_proxy, "agent_name", "agent_tool")

        if description:
            self.description = description
        else:
            config_service = getattr(agent_proxy, "config_service", None)
            if config_service:
                try:
                    cfg = config_service.get_config(self.name)
                    self.description = cfg.description or f"Agent {self.name}"
                except (ValueError, KeyError) as e:
                    logger.debug("Could not get config for agent %s: %s", self.name, e)
                    self.description = f"Agent {self.name}"
            else:
                self.description = f"Agent {self.name}"

        protocol_cls = self.proxy.protocol_cls
        real_method = getattr(protocol_cls, self.method_name)
        self.args_schema = _create_schema_from_sig(self.name, real_method)

    def __call__(self, **kwargs):
        return self._func(**kwargs)

Router

pico_agent.router

Capability-to-model routing.

ModelRouter translates abstract AgentCapability labels into concrete provider-specific model names, enabling global model changes without modifying individual agent definitions.

ModelRouter

Maps AgentCapability labels to concrete LLM model names.

Default mappings:

Capability                      Default model
AgentCapability.FAST            gpt-5-mini
AgentCapability.SMART           gpt-5.1
AgentCapability.REASONING       gemini-3-pro
AgentCapability.VISION          gpt-4o
AgentCapability.CODING          claude-3-5-sonnet

Use update_mapping() to change a mapping at runtime.

Source code in src/pico_agent/router.py
@component(scope="singleton")
class ModelRouter:
    """Maps ``AgentCapability`` labels to concrete LLM model names.

    Default mappings:

    ==============================  ========================
    Capability                      Default model
    ==============================  ========================
    ``AgentCapability.FAST``        ``gpt-5-mini``
    ``AgentCapability.SMART``       ``gpt-5.1``
    ``AgentCapability.REASONING``   ``gemini-3-pro``
    ``AgentCapability.VISION``      ``gpt-4o``
    ``AgentCapability.CODING``      ``claude-3-5-sonnet``
    ==============================  ========================

    Use ``update_mapping()`` to change a mapping at runtime.
    """

    def __init__(self):
        self._capability_map: Dict[str, str] = {
            AgentCapability.FAST: "gpt-5-mini",
            AgentCapability.SMART: "gpt-5.1",
            AgentCapability.REASONING: "gemini-3-pro",
            AgentCapability.VISION: "gpt-4o",
            AgentCapability.CODING: "claude-3-5-sonnet",
        }

    def resolve_model(self, capability: str, runtime_override: Optional[str] = None) -> str:
        """Resolve a capability label to a model name.

        If a *runtime_override* is provided it takes precedence over the
        capability mapping.

        Args:
            capability: An ``AgentCapability`` constant (e.g., ``"smart"``).
            runtime_override: Explicit model name that bypasses the mapping.

        Returns:
            The model name string.
        """
        if runtime_override:
            return runtime_override

        return self._capability_map.get(capability, "gpt-5.1")

    def update_mapping(self, capability: str, model: str) -> None:
        """Change the model associated with a capability.

        Args:
            capability: The ``AgentCapability`` constant to update.
            model: The new model name.
        """
        self._capability_map[capability] = model
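The resolution chain is: explicit runtime override, then the capability map, then the "gpt-5.1" default. A standalone sketch of that chain (the override and replacement model names here are invented placeholders):

```python
from typing import Optional

capability_map = {
    "fast": "gpt-5-mini",
    "smart": "gpt-5.1",
    "reasoning": "gemini-3-pro",
}

def resolve_model(capability: str, runtime_override: Optional[str] = None) -> str:
    if runtime_override:
        return runtime_override  # explicit override always wins
    return capability_map.get(capability, "gpt-5.1")  # map, then default

print(resolve_model("fast"))                     # gpt-5-mini
print(resolve_model("fast", "my-custom-model"))  # my-custom-model
print(resolve_model("no-such-capability"))       # gpt-5.1

capability_map["fast"] = "gpt-5-nano"  # update_mapping() equivalent (placeholder name)
print(resolve_model("fast"))           # gpt-5-nano
```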

resolve_model(capability, runtime_override=None)

Resolve a capability label to a model name.

If a runtime_override is provided it takes precedence over the capability mapping.

Parameters:

Name Type Description Default
capability str

An AgentCapability constant (e.g., "smart").

required
runtime_override Optional[str]

Explicit model name that bypasses the mapping.

None

Returns:

Type Description
str

The model name string.

Source code in src/pico_agent/router.py
def resolve_model(self, capability: str, runtime_override: Optional[str] = None) -> str:
    """Resolve a capability label to a model name.

    If a *runtime_override* is provided it takes precedence over the
    capability mapping.

    Args:
        capability: An ``AgentCapability`` constant (e.g., ``"smart"``).
        runtime_override: Explicit model name that bypasses the mapping.

    Returns:
        The model name string.
    """
    if runtime_override:
        return runtime_override

    return self._capability_map.get(capability, "gpt-5.1")

update_mapping(capability, model)

Change the model associated with a capability.

Parameters:

Name Type Description Default
capability str

The AgentCapability constant to update.

required
model str

The new model name.

required
Source code in src/pico_agent/router.py
def update_mapping(self, capability: str, model: str) -> None:
    """Change the model associated with a capability.

    Args:
        capability: The ``AgentCapability`` constant to update.
        model: The new model name.
    """
    self._capability_map[capability] = model

Registry

pico_agent.registry

Registries for tools and agent configurations.

Provides ToolRegistry (tool storage with tag-based lookup), LocalAgentRegistry (stores configs discovered by AgentScanner), and AgentConfigService (merges central, local, and runtime config).

ToolRegistry

Central registry that stores tool classes/instances and supports tag-based lookup.

Tools are registered by ToolScanner during auto-discovery or manually via register(). At execution time, DynamicAgentProxy and VirtualAgentRunner query this registry to resolve tool dependencies.

Source code in src/pico_agent/registry.py
@component
class ToolRegistry:
    """Central registry that stores tool classes/instances and supports tag-based lookup.

    Tools are registered by ``ToolScanner`` during auto-discovery or manually
    via ``register()``.  At execution time, ``DynamicAgentProxy`` and
    ``VirtualAgentRunner`` query this registry to resolve tool dependencies.
    """

    def __init__(self):
        self._tools: Dict[str, Any] = {}
        self._tag_map: Dict[str, List[str]] = {}

    def register(self, name: str, tool_cls_or_instance: Any, tags: Optional[List[str]] = None) -> None:
        """Register a tool by name with optional tags.

        Args:
            name: Unique tool identifier.
            tool_cls_or_instance: The tool class or an already-instantiated
                tool object.
            tags: Optional list of tags for dynamic tool lookup.  Tools
                tagged ``"global"`` are attached to every agent automatically.
        """
        tags = tags or []
        self._tools[name] = tool_cls_or_instance
        for tag in tags:
            if tag not in self._tag_map:
                self._tag_map[tag] = []
            self._tag_map[tag].append(name)

    def get_tool(self, name: str) -> Optional[Any]:
        """Retrieve a tool by name.

        Args:
            name: The tool identifier.

        Returns:
            The tool class or instance, or ``None`` if not found.
        """
        return self._tools.get(name)

    def get_tool_names_by_tag(self, tag: str) -> List[str]:
        """Return all tool names associated with the given tag.

        Args:
            tag: The tag to search for.

        Returns:
            List of matching tool names (may be empty).
        """
        return self._tag_map.get(tag, [])

    def get_dynamic_tools(self, agent_tags: List[str]) -> List[Any]:
        """Collect tool instances matching any of the given tags, plus ``"global"`` tools.

        Duplicates are excluded.

        Args:
            agent_tags: Tags from the agent's ``AgentConfig.tags``.

        Returns:
            De-duplicated list of tool instances.
        """
        found_tools = []
        for tag in agent_tags:
            tool_names = self._tag_map.get(tag, [])
            for name in tool_names:
                t = self._tools.get(name)
                if t and t not in found_tools:
                    found_tools.append(t)

        global_names = self._tag_map.get("global", [])
        for name in global_names:
            t = self._tools.get(name)
            if t and t not in found_tools:
                found_tools.append(t)
        return found_tools
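The tag-based lookup can be sketched standalone (tool names and tags are invented). Tools matching any agent tag are collected first, then "global" tools, with duplicates excluded:

```python
tools = {"search": "SearchTool", "calc": "CalcTool", "audit": "AuditTool"}
tag_map = {"math": ["calc"], "web": ["search"], "global": ["audit", "calc"]}

def get_dynamic_tools(agent_tags):
    # Agent tags first, then the implicit "global" tag; skip duplicates.
    found = []
    for tag in list(agent_tags) + ["global"]:
        for name in tag_map.get(tag, []):
            tool = tools.get(name)
            if tool is not None and tool not in found:
                found.append(tool)
    return found

print(get_dynamic_tools(["math"]))  # ['CalcTool', 'AuditTool']
print(get_dynamic_tools([]))        # ['AuditTool', 'CalcTool'] -- global only
```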

register(name, tool_cls_or_instance, tags=None)

Register a tool by name with optional tags.

Parameters:

Name Type Description Default
name str

Unique tool identifier.

required
tool_cls_or_instance Any

The tool class or an already-instantiated tool object.

required
tags Optional[List[str]]

Optional list of tags for dynamic tool lookup. Tools tagged "global" are attached to every agent automatically.

None
Source code in src/pico_agent/registry.py
def register(self, name: str, tool_cls_or_instance: Any, tags: Optional[List[str]] = None) -> None:
    """Register a tool by name with optional tags.

    Args:
        name: Unique tool identifier.
        tool_cls_or_instance: The tool class or an already-instantiated
            tool object.
        tags: Optional list of tags for dynamic tool lookup.  Tools
            tagged ``"global"`` are attached to every agent automatically.
    """
    tags = tags or []
    self._tools[name] = tool_cls_or_instance
    for tag in tags:
        if tag not in self._tag_map:
            self._tag_map[tag] = []
        self._tag_map[tag].append(name)

get_tool(name)

Retrieve a tool by name.

Parameters:

Name Type Description Default
name str

The tool identifier.

required

Returns:

Type Description
Optional[Any]

The tool class or instance, or None if not found.

Source code in src/pico_agent/registry.py
def get_tool(self, name: str) -> Optional[Any]:
    """Retrieve a tool by name.

    Args:
        name: The tool identifier.

    Returns:
        The tool class or instance, or ``None`` if not found.
    """
    return self._tools.get(name)

get_tool_names_by_tag(tag)

Return all tool names associated with the given tag.

Parameters:

Name Type Description Default
tag str

The tag to search for.

required

Returns:

Type Description
List[str]

List of matching tool names (may be empty).

Source code in src/pico_agent/registry.py
def get_tool_names_by_tag(self, tag: str) -> List[str]:
    """Return all tool names associated with the given tag.

    Args:
        tag: The tag to search for.

    Returns:
        List of matching tool names (may be empty).
    """
    return self._tag_map.get(tag, [])

get_dynamic_tools(agent_tags)

Collect tool instances matching any of the given tags, plus "global" tools.

Duplicates are excluded.

Parameters:

Name Type Description Default
agent_tags List[str]

Tags from the agent's AgentConfig.tags.

required

Returns:

Type Description
List[Any]

De-duplicated list of tool instances.

Source code in src/pico_agent/registry.py
def get_dynamic_tools(self, agent_tags: List[str]) -> List[Any]:
    """Collect tool instances matching any of the given tags, plus ``"global"`` tools.

    Duplicates are excluded.

    Args:
        agent_tags: Tags from the agent's ``AgentConfig.tags``.

    Returns:
        De-duplicated list of tool instances.
    """
    found_tools = []
    for tag in agent_tags:
        tool_names = self._tag_map.get(tag, [])
        for name in tool_names:
            t = self._tools.get(name)
            if t and t not in found_tools:
                found_tools.append(t)

    global_names = self._tag_map.get("global", [])
    for name in global_names:
        t = self._tools.get(name)
        if t and t not in found_tools:
            found_tools.append(t)
    return found_tools

LocalAgentRegistry

In-memory store of agent Protocol classes and their AgentConfig metadata.

Populated by AgentScanner during auto-discovery.

Source code in src/pico_agent/registry.py
@component
class LocalAgentRegistry:
    """In-memory store of agent Protocol classes and their ``AgentConfig`` metadata.

    Populated by ``AgentScanner`` during auto-discovery.
    """

    def __init__(self):
        self._configs: Dict[str, AgentConfig] = {}
        self._protocols: Dict[str, Type] = {}

    def register(self, name: str, protocol: Type, config: AgentConfig) -> None:
        """Register an agent protocol and its configuration.

        Args:
            name: Unique agent identifier.
            protocol: The Protocol class decorated with ``@agent``.
            config: The ``AgentConfig`` extracted from the decorator.
        """
        self._configs[name] = config
        self._protocols[name] = protocol

    def get_config(self, name: str) -> Optional[AgentConfig]:
        """Retrieve the locally registered ``AgentConfig``.

        Args:
            name: Agent identifier.

        Returns:
            The ``AgentConfig``, or ``None`` if not registered.
        """
        return self._configs.get(name)

    def get_protocol(self, name: str) -> Optional[Type]:
        """Retrieve the Protocol class for an agent.

        Args:
            name: Agent identifier.

        Returns:
            The Protocol class, or ``None`` if not registered.
        """
        return self._protocols.get(name)

register(name, protocol, config)

Register an agent protocol and its configuration.

Parameters:

Name Type Description Default
name str

Unique agent identifier.

required
protocol Type

The Protocol class decorated with @agent.

required
config AgentConfig

The AgentConfig extracted from the decorator.

required
Source code in src/pico_agent/registry.py
def register(self, name: str, protocol: Type, config: AgentConfig) -> None:
    """Register an agent protocol and its configuration.

    Args:
        name: Unique agent identifier.
        protocol: The Protocol class decorated with ``@agent``.
        config: The ``AgentConfig`` extracted from the decorator.
    """
    self._configs[name] = config
    self._protocols[name] = protocol

get_config(name)

Retrieve the locally registered AgentConfig.

Parameters:

Name Type Description Default
name str

Agent identifier.

required

Returns:

Type Description
Optional[AgentConfig]

The AgentConfig, or None if not registered.

Source code in src/pico_agent/registry.py
def get_config(self, name: str) -> Optional[AgentConfig]:
    """Retrieve the locally registered ``AgentConfig``.

    Args:
        name: Agent identifier.

    Returns:
        The ``AgentConfig``, or ``None`` if not registered.
    """
    return self._configs.get(name)

get_protocol(name)

Retrieve the Protocol class for an agent.

Parameters:

Name Type Description Default
name str

Agent identifier.

required

Returns:

Type Description
Optional[Type]

The Protocol class, or None if not registered.

Source code in src/pico_agent/registry.py
def get_protocol(self, name: str) -> Optional[Type]:
    """Retrieve the Protocol class for an agent.

    Args:
        name: Agent identifier.

    Returns:
        The Protocol class, or ``None`` if not registered.
    """
    return self._protocols.get(name)

AgentConfigService

Merges central, local, and runtime configuration for agents.

Configuration priority (highest wins): runtime > central > local. Central config (from CentralConfigClient) takes precedence over the local config discovered by AgentScanner, and runtime overrides set via update_agent_config() are applied on top of whichever base config is found.

Parameters:

Name Type Description Default
central_client CentralConfigClient

Remote configuration client.

required
local_registry LocalAgentRegistry

Registry populated by AgentScanner.

required
Source code in src/pico_agent/registry.py
@component
class AgentConfigService:
    """Merges central, local, and runtime configuration for agents.

    Configuration priority (highest wins): **runtime > central > local**.
    Central config (from ``CentralConfigClient``) takes precedence over the
    local config discovered by ``AgentScanner``, and runtime overrides set
    via ``update_agent_config()`` are applied on top of whichever base
    config is found.

    Args:
        central_client: Remote configuration client.
        local_registry: Registry populated by ``AgentScanner``.
    """

    def __init__(self, central_client: CentralConfigClient, local_registry: LocalAgentRegistry):
        self.central_client = central_client
        self.local_registry = local_registry
        self.auto_register = True
        self._runtime_overrides: Dict[str, Dict[str, Any]] = {}

    def get_config(self, name: str) -> AgentConfig:
        """Return the effective ``AgentConfig`` for the named agent.

        Merges remote, local, and runtime sources.

        Args:
            name: Agent identifier.

        Returns:
            The merged ``AgentConfig``.

        Raises:
            ValueError: If no configuration exists for the given name.
                Message: ``"No configuration found for agent: <name>"``.
        """
        remote_config = self.central_client.get_agent_config(name)
        local_config = self.local_registry.get_config(name)

        base_config = remote_config or local_config
        runtime_data = self._runtime_overrides.get(name)

        if base_config:
            if runtime_data:
                return replace(base_config, **runtime_data)
            return base_config

        elif runtime_data:
            config_data = runtime_data.copy()
            if "name" not in config_data:
                config_data["name"] = name
            return AgentConfig(**config_data)

        raise ValueError(f"No configuration found for agent: {name}")

    def update_agent_config(self, name: str, **kwargs):
        """Apply runtime overrides to an agent's configuration.

        Overrides are merged on each ``get_config()`` call.

        Args:
            name: Agent identifier.
            **kwargs: Fields of ``AgentConfig`` to override.
        """
        if name not in self._runtime_overrides:
            self._runtime_overrides[name] = {}
        self._runtime_overrides[name].update(kwargs)

    def reset_agent_config(self, name: str):
        """Remove all runtime overrides for an agent.

        Args:
            name: Agent identifier.
        """
        if name in self._runtime_overrides:
            del self._runtime_overrides[name]
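The merge order is worth seeing concretely. A standalone sketch with a hypothetical minimal config dataclass (not the real `AgentConfig`):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class MiniConfig:
    name: str
    model: str = "gpt-5.1"
    temperature: float = 0.0

local = MiniConfig(name="bot", model="gpt-5-mini", temperature=0.2)
central: Optional[MiniConfig] = MiniConfig(name="bot", model="gemini-3-pro")

base = central or local                     # central wins over local when present
effective = replace(base, temperature=0.7)  # runtime override wins over both

print(effective.model, effective.temperature)  # gemini-3-pro 0.7
```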

get_config(name)

Return the effective AgentConfig for the named agent.

Merges remote, local, and runtime sources.

Parameters:

Name Type Description Default
name str

Agent identifier.

required

Returns:

Type Description
AgentConfig

The merged AgentConfig.

Raises:

Type Description
ValueError

If no configuration exists for the given name. Message: "No configuration found for agent: <name>".

Source code in src/pico_agent/registry.py
def get_config(self, name: str) -> AgentConfig:
    """Return the effective ``AgentConfig`` for the named agent.

    Merges remote, local, and runtime sources.

    Args:
        name: Agent identifier.

    Returns:
        The merged ``AgentConfig``.

    Raises:
        ValueError: If no configuration exists for the given name.
            Message: ``"No configuration found for agent: <name>"``.
    """
    remote_config = self.central_client.get_agent_config(name)
    local_config = self.local_registry.get_config(name)

    base_config = remote_config or local_config
    runtime_data = self._runtime_overrides.get(name)

    if base_config:
        if runtime_data:
            return replace(base_config, **runtime_data)
        return base_config

    elif runtime_data:
        config_data = runtime_data.copy()
        if "name" not in config_data:
            config_data["name"] = name
        return AgentConfig(**config_data)

    raise ValueError(f"No configuration found for agent: {name}")

update_agent_config(name, **kwargs)

Apply runtime overrides to an agent's configuration.

Overrides are merged on each get_config() call.

Parameters:

Name Type Description Default
name str

Agent identifier.

required
**kwargs

Fields of AgentConfig to override.

{}
Source code in src/pico_agent/registry.py
def update_agent_config(self, name: str, **kwargs):
    """Apply runtime overrides to an agent's configuration.

    Overrides are merged on each ``get_config()`` call.

    Args:
        name: Agent identifier.
        **kwargs: Fields of ``AgentConfig`` to override.
    """
    if name not in self._runtime_overrides:
        self._runtime_overrides[name] = {}
    self._runtime_overrides[name].update(kwargs)

reset_agent_config(name)

Remove all runtime overrides for an agent.

Parameters:

Name Type Description Default
name str

Agent identifier.

required
Source code in src/pico_agent/registry.py
def reset_agent_config(self, name: str):
    """Remove all runtime overrides for an agent.

    Args:
        name: Agent identifier.
    """
    if name in self._runtime_overrides:
        del self._runtime_overrides[name]

Lifecycle

pico_agent.lifecycle

Agent system lifecycle management.

AgentSystem tracks the framework's lifecycle phases and publishes LifecycleEvent notifications via pico-ioc's EventBus.

LifecyclePhase

Bases: str, Enum

Phases of the pico-agent system lifecycle.

Attributes:

Name Type Description
INITIALIZING

Container is being built.

SCANNING

Agents and tools are being discovered.

READY

Container is fully configured.

RUNNING

System is accepting requests.

SHUTTING_DOWN

Graceful shutdown in progress.

STOPPED

System has stopped.

Source code in src/pico_agent/lifecycle.py
class LifecyclePhase(str, Enum):
    """Phases of the pico-agent system lifecycle.

    Attributes:
        INITIALIZING: Container is being built.
        SCANNING: Agents and tools are being discovered.
        READY: Container is fully configured.
        RUNNING: System is accepting requests.
        SHUTTING_DOWN: Graceful shutdown in progress.
        STOPPED: System has stopped.
    """

    INITIALIZING = "initializing"
    SCANNING = "scanning"
    READY = "ready"
    RUNNING = "running"
    SHUTTING_DOWN = "shutting_down"
    STOPPED = "stopped"

LifecycleEvent dataclass

Bases: Event

Event published when the system transitions between lifecycle phases.

Parameters:

Name Type Description Default
phase LifecyclePhase

The new LifecyclePhase.

required
detail str

Optional human-readable detail string.

''
Source code in src/pico_agent/lifecycle.py
@dataclass
class LifecycleEvent(Event):
    """Event published when the system transitions between lifecycle phases.

    Args:
        phase: The new ``LifecyclePhase``.
        detail: Optional human-readable detail string.
    """

    phase: LifecyclePhase
    detail: str = ""

AgentSystem

Lifecycle coordinator that publishes phase transitions via EventBus.

Transitions are published as LifecycleEvent instances. The system moves through: INITIALIZING -> READY -> RUNNING -> SHUTTING_DOWN -> STOPPED.

Source code in src/pico_agent/lifecycle.py
@component(scope="singleton")
class AgentSystem:
    """Lifecycle coordinator that publishes phase transitions via ``EventBus``.

    Transitions are published as ``LifecycleEvent`` instances.  The system
    moves through: ``INITIALIZING`` -> ``READY`` -> ``RUNNING`` ->
    ``SHUTTING_DOWN`` -> ``STOPPED``.
    """

    def __init__(self):
        self._phase = LifecyclePhase.INITIALIZING
        self._event_bus: EventBus | None = None

    @property
    def phase(self) -> LifecyclePhase:
        """The current lifecycle phase."""
        return self._phase

    def _transition(self, phase: LifecyclePhase, detail: str = ""):
        self._phase = phase
        if self._event_bus:
            self._event_bus.publish_sync(LifecycleEvent(phase=phase, detail=detail))

    @configure
    def _on_ready(self, container: PicoContainer):
        if container.has(EventBus):
            self._event_bus = container.get(EventBus)
        self._transition(LifecyclePhase.READY, "Container configured")
        self._transition(LifecyclePhase.RUNNING)

    @cleanup
    def _on_shutdown(self):
        self._transition(LifecyclePhase.SHUTTING_DOWN)
        self._transition(LifecyclePhase.STOPPED)

phase property

The current lifecycle phase.


Tracing

pico_agent.tracing

Observability tracing for agent, tool, and LLM invocations.

TraceService collects hierarchical TraceRun records. Parent-child relationships are maintained automatically via the run_context ContextVar. Traces are recorded by DynamicAgentProxy and LangChainAdapter.

run_context = ContextVar('run_context', default=None) module-attribute

ContextVar tracking the current trace run ID for parent-child hierarchy.

This is used for trace hierarchy only -- it is not used for DI scoping.

TraceRun dataclass

A single trace record for an agent, tool, or LLM invocation.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `id` | `str` | Unique run identifier (UUID). |
| `name` | `str` | Human-readable name (e.g., agent name or `"LLM: gpt-5"`). |
| `run_type` | `str` | Category string -- `"agent"`, `"llm"`, or `"tool"`. |
| `inputs` | `Dict[str, Any]` | Input data (e.g., messages, arguments). |
| `parent_id` | `Optional[str]` | ID of the parent run, or `None` for root runs. |
| `start_time` | `float` | Unix timestamp when the run started. |
| `end_time` | `Optional[float]` | Unix timestamp when the run ended (set by `end_run`). |
| `outputs` | `Optional[Dict[str, Any]]` | Output data (set by `end_run`). |
| `error` | `Optional[str]` | Error message if the run failed (set by `end_run`). |
| `extra` | `Dict[str, Any]` | Arbitrary metadata (e.g., `{"runtime_model": "gpt-4"}`). |

Source code in src/pico_agent/tracing.py
@dataclass
class TraceRun:
    """A single trace record for an agent, tool, or LLM invocation.

    Attributes:
        id: Unique run identifier (UUID).
        name: Human-readable name (e.g., agent name or ``"LLM: gpt-5"``).
        run_type: Category string -- ``"agent"``, ``"llm"``, or ``"tool"``.
        inputs: Input data (e.g., messages, arguments).
        parent_id: ID of the parent run, or ``None`` for root runs.
        start_time: Unix timestamp when the run started.
        end_time: Unix timestamp when the run ended (set by ``end_run``).
        outputs: Output data (set by ``end_run``).
        error: Error message if the run failed (set by ``end_run``).
        extra: Arbitrary metadata (e.g., ``{"runtime_model": "gpt-4"}``).
    """

    id: str
    name: str
    run_type: str
    inputs: Dict[str, Any]
    parent_id: Optional[str] = None
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None
    outputs: Optional[Dict[str, Any]] = None
    error: Optional[str] = None
    extra: Dict[str, Any] = field(default_factory=dict)
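A `TraceRun` is plain dataclass data, so serialization is just `dataclasses.asdict` (which is what `get_traces()` uses). A standalone look-alike record (not the library class) shows the shape:

```python
import time
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Run:  # standalone look-alike of TraceRun
    id: str
    name: str
    run_type: str
    inputs: Dict[str, Any]
    parent_id: Optional[str] = None
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None

run = Run(id="r1", name="LLM: gpt-5", run_type="llm", inputs={"q": "hi"})
run.end_time = run.start_time + 0.5          # duration falls out of the timestamps
record = asdict(run)                         # per-run dict, as get_traces() returns
```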

TraceService

Singleton service that collects hierarchical trace runs.

Traces are stored in memory and can be retrieved via get_traces(). On container shutdown (@cleanup), all traces are flushed.

Source code in src/pico_agent/tracing.py
@component(scope="singleton")
class TraceService:
    """Singleton service that collects hierarchical trace runs.

    Traces are stored in memory and can be retrieved via ``get_traces()``.
    On container shutdown (``@cleanup``), all traces are flushed.
    """

    def __init__(self):
        self.traces: List[TraceRun] = []

    def start_run(self, name: str, run_type: str, inputs: Dict[str, Any], extra: Optional[Dict[str, Any]] = None) -> str:
        """Begin a new trace run.

        Automatically sets the parent ID from ``run_context`` and updates
        the context var to the new run ID.

        Args:
            name: Human-readable run name.
            run_type: Category (``"agent"``, ``"llm"``, ``"tool"``).
            inputs: Input data to record.
            extra: Optional metadata dict.

        Returns:
            The unique run ID (UUID string).
        """
        parent_id = run_context.get()
        run_id = str(uuid.uuid4())

        run = TraceRun(id=run_id, name=name, run_type=run_type, inputs=inputs, parent_id=parent_id, extra=extra or {})

        self.traces.append(run)
        run_context.set(run_id)
        return run_id

    def end_run(self, run_id: str, outputs: Any = None, error: Optional[Exception] = None):
        """Complete a trace run, recording outputs or an error.

        Restores ``run_context`` to the parent run's ID.

        Args:
            run_id: The ID returned by ``start_run()``.
            outputs: Output data -- can be a string, dict, Pydantic model,
                or any object (converted via ``str()``).
            error: Exception instance if the run failed.
        """
        for run in reversed(self.traces):
            if run.id == run_id:
                run.end_time = time.time()
                if error:
                    run.error = str(error)
                else:
                    if isinstance(outputs, (str, int, float, bool)):
                        run.outputs = {"output": outputs}
                    elif hasattr(outputs, "dict"):
                        run.outputs = outputs.dict()
                    elif isinstance(outputs, dict):
                        run.outputs = outputs
                    else:
                        run.outputs = {"output": str(outputs)}

                run_context.set(run.parent_id)
                self._persist(run)
                break

    def _persist(self, run: TraceRun):
        pass

    @cleanup
    def _on_shutdown(self):
        logger.debug("TraceService: flushing %d traces", len(self.traces))
        self.traces.clear()

    def get_traces(self) -> List[Dict[str, Any]]:
        """Return all recorded traces as a list of dictionaries.

        Returns:
            List of dicts, each representing a ``TraceRun``.
        """
        return [asdict(t) for t in self.traces]
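The output-normalization branches in `end_run` always produce a dict, whatever the caller passes. The rule can be isolated as a standalone helper mirroring the logic above (a sketch, not a library function):

```python
from typing import Any, Dict

def normalize_outputs(outputs: Any) -> Dict[str, Any]:
    """Mirror end_run's normalization: every result becomes a dict."""
    # Scalars are wrapped under a single "output" key.
    if isinstance(outputs, (str, int, float, bool)):
        return {"output": outputs}
    # Pydantic v1-style models expose .dict(); plain dicts have no such
    # attribute, so this branch never captures them.
    if hasattr(outputs, "dict"):
        return outputs.dict()
    if isinstance(outputs, dict):
        return outputs
    # Anything else is stringified.
    return {"output": str(outputs)}
```

This keeps every `TraceRun.outputs` JSON-friendly regardless of what an agent or tool returns.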

start_run(name, run_type, inputs, extra=None)

Begin a new trace run.

Automatically sets the parent ID from run_context and updates the context var to the new run ID.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str` | Human-readable run name. | *required* |
| `run_type` | `str` | Category (`"agent"`, `"llm"`, `"tool"`). | *required* |
| `inputs` | `Dict[str, Any]` | Input data to record. | *required* |
| `extra` | `Dict[str, Any]` | Optional metadata dict. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `str` | The unique run ID (UUID string). |

Source code in src/pico_agent/tracing.py
def start_run(self, name: str, run_type: str, inputs: Dict[str, Any], extra: Optional[Dict[str, Any]] = None) -> str:
    """Begin a new trace run.

    Automatically sets the parent ID from ``run_context`` and updates
    the context var to the new run ID.

    Args:
        name: Human-readable run name.
        run_type: Category (``"agent"``, ``"llm"``, ``"tool"``).
        inputs: Input data to record.
        extra: Optional metadata dict.

    Returns:
        The unique run ID (UUID string).
    """
    parent_id = run_context.get()
    run_id = str(uuid.uuid4())

    run = TraceRun(id=run_id, name=name, run_type=run_type, inputs=inputs, parent_id=parent_id, extra=extra or {})

    self.traces.append(run)
    run_context.set(run_id)
    return run_id

end_run(run_id, outputs=None, error=None)

Complete a trace run, recording outputs or an error.

Restores run_context to the parent run's ID.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `run_id` | `str` | The ID returned by `start_run()`. | *required* |
| `outputs` | `Any` | Output data -- can be a string, dict, Pydantic model, or any object (converted via `str()`). | `None` |
| `error` | `Exception` | Exception instance if the run failed. | `None` |
Source code in src/pico_agent/tracing.py
def end_run(self, run_id: str, outputs: Any = None, error: Optional[Exception] = None):
    """Complete a trace run, recording outputs or an error.

    Restores ``run_context`` to the parent run's ID.

    Args:
        run_id: The ID returned by ``start_run()``.
        outputs: Output data -- can be a string, dict, Pydantic model,
            or any object (converted via ``str()``).
        error: Exception instance if the run failed.
    """
    for run in reversed(self.traces):
        if run.id == run_id:
            run.end_time = time.time()
            if error:
                run.error = str(error)
            else:
                if isinstance(outputs, (str, int, float, bool)):
                    run.outputs = {"output": outputs}
                elif hasattr(outputs, "dict"):
                    run.outputs = outputs.dict()
                elif isinstance(outputs, dict):
                    run.outputs = outputs
                else:
                    run.outputs = {"output": str(outputs)}

            run_context.set(run.parent_id)
            self._persist(run)
            break

get_traces()

Return all recorded traces as a list of dictionaries.

Returns:

| Type | Description |
|------|-------------|
| `List[Dict[str, Any]]` | List of dicts, each representing a `TraceRun`. |

Source code in src/pico_agent/tracing.py
def get_traces(self) -> List[Dict[str, Any]]:
    """Return all recorded traces as a list of dictionaries.

    Returns:
        List of dicts, each representing a ``TraceRun``.
    """
    return [asdict(t) for t in self.traces]

Scheduler

pico_agent.scheduler

Concurrency scheduler for async agent operations.

PlatformScheduler uses an asyncio.Semaphore to limit the number of concurrent LLM calls, preventing resource exhaustion during map-reduce workflows and parallel agent invocations.

PlatformScheduler

Asyncio-based concurrency limiter for parallel agent operations.

The concurrency limit is read from the PICO_AGENT_MAX_CONCURRENCY environment variable (default: 10).

Example

>>> async with scheduler.semaphore:
...     result = await some_llm_call()

Source code in src/pico_agent/scheduler.py
@component(scope="singleton")
class PlatformScheduler:
    """Asyncio-based concurrency limiter for parallel agent operations.

    The concurrency limit is read from the ``PICO_AGENT_MAX_CONCURRENCY``
    environment variable (default: ``10``).

    Example:
        >>> async with scheduler.semaphore:
        ...     result = await some_llm_call()
    """

    def __init__(self):
        self.limit = int(os.getenv("PICO_AGENT_MAX_CONCURRENCY", "10"))
        self._semaphore = asyncio.Semaphore(self.limit)

    async def acquire(self):
        """Acquire a concurrency slot (blocks if the limit is reached)."""
        await self._semaphore.acquire()

    def release(self):
        """Release a concurrency slot."""
        self._semaphore.release()

    @property
    def semaphore(self):
        """The underlying ``asyncio.Semaphore`` for use with ``async with``."""
        return self._semaphore

semaphore property

The underlying asyncio.Semaphore for use with async with.

acquire() async

Acquire a concurrency slot (blocks if the limit is reached).

Source code in src/pico_agent/scheduler.py
async def acquire(self):
    """Acquire a concurrency slot (blocks if the limit is reached)."""
    await self._semaphore.acquire()

release()

Release a concurrency slot.

Source code in src/pico_agent/scheduler.py
def release(self):
    """Release a concurrency slot."""
    self._semaphore.release()
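The semaphore pattern `PlatformScheduler` wraps can be exercised directly with stdlib asyncio. This sketch (hypothetical `fake_llm_call`, a plain `asyncio.Semaphore` standing in for `scheduler.semaphore`) fans out ten tasks while capping concurrency at three:

```python
import asyncio
from typing import List, Tuple

async def fake_llm_call(i: int, sem: asyncio.Semaphore, gauge: List[int]) -> int:
    # "async with sem" mirrors "async with scheduler.semaphore".
    async with sem:
        gauge[0] += 1                       # tasks currently in flight
        gauge[1] = max(gauge[1], gauge[0])  # peak concurrency observed
        await asyncio.sleep(0.01)           # simulate the LLM round-trip
        gauge[0] -= 1
        return i * 2

async def main() -> Tuple[List[int], int]:
    sem = asyncio.Semaphore(3)  # cf. PICO_AGENT_MAX_CONCURRENCY
    gauge = [0, 0]              # [in_flight, max_seen]
    results = await asyncio.gather(*(fake_llm_call(i, sem, gauge) for i in range(10)))
    return results, gauge[1]

results, max_seen = asyncio.run(main())
```

`asyncio.gather` preserves submission order, so results line up with inputs even though completion order is throttled by the semaphore.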