The fastest path from a single agent to full multi-provider orchestration. Zero boilerplate. Full control. Built-in tracking.
LazyBridge is a Python framework for building agentic systems without accumulating orchestration debt.
It starts with a single idea: the same interface should work at every level of complexity — a one-line LLM call, a tool-using agent, or a nested multi-agent pipeline. Functions become tools. Agents become tools. Sessions become tools. The system grows by composition, not by rewriting the architecture.
The goal is not abstraction for its own sake. The goal is to keep control as complexity increases.
LazyBridge stays provider-agnostic and treats tools, agents, and sessions as first-class primitives.
The API remains readable by both human developers and AI coding assistants.
Start simple. Scale into real orchestration. The grammar of the code never changes.
Example: an orchestrator dispatches a task to a composed pipeline. Three research agents run in parallel (vertical, fan-out), then the merged output flows through a writer and editor in sequence (horizontal chain).
Both the inner parallel session and the outer chain session expose .as_tool() —
the orchestrator calls the whole pipeline as a single function and gets a typed result back.
One API. Any scale. Five minutes to your first pipeline.
.text() — you get back a string.
No SDK setup, no message arrays, no parsing. One line.
```python
from lazybridge import LazyAgent

# "anthropic" · "openai" · "google" · "deepseek" — same code, swap one string
ai = LazyAgent("anthropic")

# .text() returns a plain string — no response object to unwrap
result = ai.text("Summarize the state of open-source LLMs")
```
.loop() keeps calling tools until the model signals it's done. No manual dispatch loop.
```python
from lazybridge import LazyAgent, LazyTool

# A plain Python function — no decorator, no boilerplate
def search(query: str, max_results: int = 5) -> str:
    """Search the web and return results."""
    ...

# Type hints + docstring → JSON schema, auto-generated
search_tool = LazyTool.from_function(search)

ai = LazyAgent("anthropic")

# .loop() runs the tool-call cycle until the model stops calling tools
result = ai.loop("Find recent AI papers and summarize them", tools=[search_tool])
```
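For contrast, here is the kind of manual dispatch cycle that `.loop()` automates. This is a generic, stdlib-only sketch with a stubbed model and hypothetical message shapes, not any provider's real SDK:

```python
# Generic sketch (hypothetical message shapes, no real provider SDK):
# the manual tool-dispatch cycle that .loop() automates.

def run_tool_loop(model, prompt, tools):
    """Call the model repeatedly until it stops requesting tools."""
    registry = {t.__name__: t for t in tools}
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = model(messages)
        if "tool_call" not in reply:      # model signalled it's done
            return reply["content"]
        name, args = reply["tool_call"]   # dispatch the requested tool
        messages.append({"role": "tool", "name": name,
                         "content": registry[name](**args)})

# Stub model: requests one search, then answers from the tool result.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("search", {"query": "AI papers"})}
    return {"content": "Summary based on: " + messages[-1]["content"]}

def search(query: str) -> str:
    return f"results for {query}"

answer = run_tool_loop(fake_model, "Find recent AI papers", [search])
# answer == "Summary based on: results for AI papers"
```

Every line of this loop is boilerplate that grows with each provider and each tool; that is the code the one-call API removes.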
LazyAgent becomes a tool via .as_tool().
The orchestrator decides when to invoke each one — and they can run on different providers.
No glue code needed.
```python
from lazybridge import LazyAgent

# Two specialist agents — different providers, same interface
researcher = LazyAgent("anthropic", name="researcher")
analyst = LazyAgent("openai", name="analyst")

orchestrator = LazyAgent("anthropic")

# .as_tool() wraps each agent — the orchestrator picks who to call and when
result = orchestrator.loop(
    "Research and analyse the EV market",
    tools=[researcher.as_tool(), analyst.as_tool()]
)
```
LazySession groups agents together.
.as_tool() wraps the entire session — parallel, chain, or mixed — into a single tool
that any outer orchestrator can call like a function.
```python
from lazybridge import LazyAgent, LazySession

inner_sess = LazySession()  # parallel research session; its agents are registered as in the parallel example
sess = LazySession()

# Inner parallel session + one chained editor → wrapped as a single tool
pipeline = sess.as_tool(
    "research_and_edit",
    "Research a topic in parallel, then edit into a report",
    mode="chain",  # steps run in sequence
    participants=[
        inner_sess.as_tool("research", "...", mode="parallel"),  # runs in parallel
        LazyAgent("anthropic", name="editor", session=sess),
    ]
)

# From the outside: just a function call — hides all internal complexity
result = pipeline.run({"task": "Analyse open-source LLM trends"})
```
```python
from lazybridge import LazyAgent, LazySession

# Step 1 — three agents search in parallel (mixed providers)
research_sess = LazySession()
LazyAgent("anthropic", name="tech", session=research_sess, native_tools=["web_search"])
LazyAgent("anthropic", name="market", session=research_sess, native_tools=["web_search"])
LazyAgent("openai", name="opinion", session=research_sess, native_tools=["web_search"])

# Wrap the whole parallel session as a single reusable tool
research_tool = research_sess.as_tool("research", "...", mode="parallel")

# Step 2 — chain: research_tool → writer → editor (typed output)
# BlogPost is a Pydantic model describing the final output shape
outer_sess = LazySession()
pipeline = outer_sess.as_tool(
    "full_pipeline",
    "Parallel research → write → edit",
    mode="chain",
    participants=[
        research_tool,  # inner session
        LazyAgent("anthropic", name="writer", session=outer_sess),
        LazyAgent("openai", name="editor", session=outer_sess, output_schema=BlogPost),
    ]
)

# result is a typed BlogPost — structured output, web search, parallel + chain, ~40 lines
post = pipeline.run({"task": "Open-source LLMs in 2025"})
```
With LazyBridge you go from a single agent to a multi-agent pipeline without changing your mental model. One grammar at every scale.
Less infrastructure code, more logic. Type hints and docstrings become tool schemas automatically.
Same pattern on OpenAI, Anthropic, Google, DeepSeek. Change one string to switch providers.
Stateful conversations with Memory. Session event logs with LazySession. Typed outputs with Pydantic. All built-in, all composable.
"From single-agent to multi-agent orchestration in the same mental model."
```python
from lazybridge import LazyAgent, LazySession, Event

sess = LazySession(db="pipeline.db", tracking="basic")

# Same code. Different provider. One string to change.
agent = LazyAgent("anthropic", session=sess)  # or: "openai" | "google" | "deepseek"
result = agent.text("Summarize the state of open-source LLMs")

# Built-in event log — no external stack needed
calls = sess.events.get(event_type=Event.TOOL_CALL)
```
The abstraction is concrete: a Python function becomes a tool with
LazyTool.from_function(...), an agent becomes a tool with
agent.as_tool(), and a session can be packaged as one
callable unit.
Once everything shares the same interface, the outer orchestrator can call a function-backed tool, an agent-backed tool, or a pipeline-backed tool in exactly the same loop.
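That uniformity can be pictured with a minimal stdlib sketch. The names below (`Tool`, `make_function_tool`, `dispatch`, and the stubbed agent and pipeline) are hypothetical illustrations of the idea, not LazyBridge internals: anything exposing a name and a callable is dispatchable, whatever stands behind it.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: one Tool shape, three different backings.
@dataclass
class Tool:
    name: str
    call: Callable[[str], str]

def make_function_tool(fn):            # function-backed
    return Tool(fn.__name__, fn)

def make_agent_tool(name, respond):    # agent-backed (agent stubbed as a callable)
    return Tool(name, respond)

def make_pipeline_tool(name, steps):   # pipeline-backed: each step feeds the next
    def run(task):
        for step in steps:
            task = step(task)
        return task
    return Tool(name, run)

# The orchestrator's loop never needs to know which kind it holds.
def dispatch(tool: Tool, task: str) -> str:
    return tool.call(task)

def shout(task: str) -> str:
    return task.upper()

tools = [
    make_function_tool(shout),
    make_agent_tool("analyst", lambda t: f"analysis({t})"),
    make_pipeline_tool("review", [lambda t: t + " drafted", lambda t: t + " edited"]),
]
results = [dispatch(t, "report") for t in tools]
# results == ["REPORT", "analysis(report)", "report drafted edited"]
```

The orchestrator's loop stays one code path; only the tool constructors differ.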
Type hints become JSON schema. Docstring becomes description. Zero boilerplate.
agent.as_tool() wraps any agent. An orchestrator calls it by name.
sess.as_tool(mode="parallel") or "chain" — N agents become one callable.
An entire nested pipeline — parallel research + chain editorial — is one tool. An outer orchestrator calls it without knowing anything inside.
```python
from lazybridge import LazyAgent, LazyTool

def search_web(query: str, max_results: int = 5) -> str:
    """Search the web. Returns top results as text."""
    ...

# Type hints → JSON schema. Docstring → description. Zero config.
search_tool = LazyTool.from_function(search_web)

agent = LazyAgent("anthropic")
result = agent.loop("What happened in AI this week?", tools=[search_tool])
```
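How can a plain function yield a schema? A plausible mechanism, sketched here with only the standard library, reads parameter types with `typing.get_type_hints` and defaults with `inspect.signature`. This is a guess at the general technique, not LazyBridge's actual code:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python hints to JSON-schema type names (illustrative only)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(fn):
    """Build a JSON-schema-like dict from type hints and the docstring."""
    hints = get_type_hints(fn)
    properties, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:   # no default → required
            required.append(name)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def search_web(query: str, max_results: int = 5) -> str:
    """Search the web. Returns top results as text."""
    ...

schema = function_to_schema(search_web)
# schema["parameters"]["required"] == ["query"]; max_results is optional
```

Any tool-schema generator of this kind inherits a constraint worth knowing: parameters without hints fall back to a default type, so well-hinted signatures produce better schemas.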
```python
from lazybridge import LazyAgent

analyst = LazyAgent(
    "anthropic",
    name="analyst",
    system="You are a data analyst. Be concise and quantitative.",
)

# Any agent becomes a named, callable tool for an orchestrator
analyst_tool = analyst.as_tool(
    description="Analyse data and return key insights.",
)

orchestrator = LazyAgent("anthropic")
result = orchestrator.loop("Analyse this dataset: ...", tools=[analyst_tool])
```
```python
from lazybridge import LazyAgent, LazySession

inner_sess = LazySession()
pipeline_tool = inner_sess.as_tool(
    "research_and_summarise",
    "Research a topic and return a concise summary.",
    mode="chain",
    participants=[
        LazyAgent("anthropic", name="researcher", session=inner_sess),
        LazyAgent("openai", name="summariser", session=inner_sess),
    ],
)

# researcher → summariser, wired automatically. One callable surface.
orchestrator = LazyAgent("anthropic")
result = orchestrator.loop("Cover these topics: ...", tools=[pipeline_tool])
```
```python
from lazybridge import LazyAgent, LazySession, LazyTool

# search_web and analyst are defined as in the previous two examples

# 1. Python function → tool
search_tool = LazyTool.from_function(search_web)

# 2. Agent → tool
analyst_tool = analyst.as_tool(description="Analyse data and return insights.")

# 3. Pipeline → tool
inner_sess = LazySession()
pipeline_tool = inner_sess.as_tool(
    "research_and_summarise",
    "Research a topic and summarise it.",
    mode="chain",
    participants=[
        LazyAgent("anthropic", name="researcher", session=inner_sess),
        LazyAgent("openai", name="summariser", session=inner_sess),
    ],
)

# Same interface. Any depth. One orchestrator.
orchestrator = LazyAgent("anthropic", system="You coordinate research tasks.")
result = orchestrator.loop(
    "Prepare a full report on open-source LLMs.",
    tools=[search_tool, analyst_tool, pipeline_tool],
)
```
LazyBridge keeps code readable as orchestration grows. Compact APIs, predictable structure, less boilerplate to paste and adapt.
Keep architectural control. The framework is invisible — your system is the product.
AI assistants generate more coherent, less fragile code. Consistent patterns mean fewer hallucinated APIs.
Comprehensive documentation written for engineers. Covers every class, method, and pattern — from a first pipeline to advanced multi-agent orchestration. Start with the Quickstart and build from there.
Read the Quickstart →

A structured reference index built for coding assistants and LLMs. Every class, pattern, and rule is machine-readable and unambiguous. Also available as a native Claude Code skill — install it once and Claude Code understands LazyBridge natively, without pasting docs or explaining the API.
Browse the AI reference →

LazyRouter — send to different agents based on outcome
output_schema=MyModel — Pydantic at any pipeline depth
Topology
```python
from pydantic import BaseModel
from lazybridge import LazyAgent, LazySession
from lazybridge.core.types import NativeTool

class BlogPost(BaseModel):
    title: str
    body: str
    tags: list[str]

# Layer 1 — parallel research
rs = LazySession(tracking="basic", console=True)
LazyAgent("anthropic", name="tech", session=rs, native_tools=[NativeTool.WEB_SEARCH])
LazyAgent("anthropic", name="market", session=rs, native_tools=[NativeTool.WEB_SEARCH])
LazyAgent("openai", name="opinion", session=rs, native_tools=[NativeTool.WEB_SEARCH])
research_tool = rs.as_tool("research", "...", mode="parallel")

# Layer 2 — chain editorial
outer = LazySession(tracking="basic", console=True)
pipeline = outer.as_tool(
    "blog_pipeline",
    "Research → write → edit",
    mode="chain",
    participants=[
        research_tool,
        LazyAgent("anthropic", name="writer", session=outer),
        LazyAgent("openai", name="editor", session=outer, output_schema=BlogPost),
    ]
)

post = pipeline.run({"task": "Open-source LLMs in 2025"})
# post is a BlogPost instance. Typed. Done.
```