Use the Blaxel SDK to develop and run a custom agent in Python.
To get started, initialize a new agent app with the Blaxel CLI:

```shell
bl create-agent-app myagent
```

This scaffolds a repository ready for development. The server entrypoint is `main.py`. While you typically won't need to modify this file, you can add specific logic there if needed. Your main work will focus on the `agent.py` file. Blaxel's development paradigm lets you leverage its hosting capabilities without modifying your agent's core logic.
Your agent server must listen on the host and port specified by two environment variables: `BL_SERVER_HOST` (for the host) and `BL_SERVER_PORT` (for the port). Binding to this host+port combination is required for Blaxel to serve your agent.
Connect your agent to a model API configured on Blaxel with the `bl_model` helper:

```python
from blaxel.{FRAMEWORK_NAME} import bl_model

model = await bl_model("Model-name-on-Blaxel")
```

The returned model is in the format of the framework `FRAMEWORK_NAME` specified in the import.
Available frameworks:
- `langgraph`
- `crewai`
- `llamaindex`
- `openai`
- `pydantic`
- `googleadk`
For example, to connect to a model named `my-model` in a LlamaIndex agent:

```python
from blaxel.llamaindex import bl_model

model = await bl_model("my-model")
```
Similarly, retrieve tool servers with `bl_tools`:

```python
from blaxel.{FRAMEWORK_NAME} import bl_tools

tools = await bl_tools(["Tool-Server-name-on-Blaxel"])
```

Available frameworks: `langgraph` (LangGraph/LangChain), `llamaindex` (LlamaIndex), `crewai` (CrewAI), `openai` (OpenAI Agents), `pydantic` (PydanticAI Agents) and `googleadk` (Google ADK).
You can develop agents by mixing tools defined locally in your agents and tools defined as remote servers. Using separate tools prevents monolithic designs, which makes maintenance easier in the long run. Let's look at a practical example combining remote and local tools. The code below uses two tools:

- `blaxel-search`: a remote tool server on Blaxel providing web search functionality (learn how to create your own MCP servers here)
- `weather`: a local tool that accepts a city parameter and returns a mock weather response (always "sunny")
```python
from typing import AsyncGenerator

from blaxel.langgraph import bl_model, bl_tools
from langchain.tools import tool
from langchain_core.messages import AIMessageChunk
from langgraph.prebuilt import create_react_agent


@tool
def weather(city: str) -> str:
    """Get the weather in a given city"""
    return f"The weather in {city} is sunny"


async def agent(input: str) -> AsyncGenerator[str, None]:
    prompt = "You are a helpful assistant that can answer questions and help with tasks."

    # Load tools dynamically from Blaxel, and add a tool defined locally:
    tools = await bl_tools(["blaxel-search"]) + [weather]

    # Load the model API dynamically from Blaxel:
    model = await bl_model("gpt-4o-mini")

    agent = create_react_agent(model=model, tools=tools, prompt=prompt)
    messages = {"messages": [("user", input)]}

    async for chunk in agent.astream(messages, stream_mode=["updates", "messages"]):
        type_, stream_chunk = chunk
        # Stream the response from the agent, filtering out responses from tools
        if type_ == "messages" and len(stream_chunk) > 0 and isinstance(stream_chunk[0], AIMessageChunk):
            msg = stream_chunk[0]
            if msg.content and not msg.tool_calls:
                yield msg.content
        # Show that a call has been made to a tool, useful if you want to
        # display the tool call in your interface
        if type_ == "updates" and "tools" in stream_chunk:
            for msg in stream_chunk["tools"]["messages"]:
                yield f"Tool call: {msg.name}\n"
```
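Since `agent` is an async generator, its caller (for example, the endpoint in `main.py`) iterates over the streamed chunks. Below is a minimal, self-contained sketch of that consumption pattern; the stub generator stands in for the real agent above, which requires Blaxel credentials to run.

```python
import asyncio
from typing import AsyncGenerator


# Stub standing in for the real agent() above, which needs Blaxel access.
async def agent(input: str) -> AsyncGenerator[str, None]:
    yield "The weather in Paris"
    yield " is sunny"


async def main() -> str:
    parts = []
    # Consume the stream chunk by chunk, as an HTTP endpoint would
    # when forwarding a streamed response to the client.
    async for chunk in agent("What is the weather in Paris?"):
        parts.append(chunk)
    return "".join(parts)


answer = asyncio.run(main())
print(answer)  # The weather in Paris is sunny
```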
Your agent can also call another agent deployed on Blaxel via `bl_agent().run()`, rather than combining all functionality into a single codebase:

```python
from blaxel.core.agents import bl_agent

first_agent_response = await bl_agent("first_agent").run(input)
second_agent_response = await bl_agent("second_agent").run(first_agent_response)
```
You can customize the deployment using the `blaxel.toml` file at the root of your directory. Read the file structure section below for more details.
```python
import logging

import blaxel.core

logger = logging.getLogger(__name__)
logger.info("Hello, world!")
```
```
pyproject.toml          # Mandatory. This is the standard pyproject.toml file; it defines dependencies.
blaxel.toml             # This file lists configurations dedicated to Blaxel to customize the deployment. It is not mandatory.
.blaxel/                # This folder allows you to define custom resources using the Blaxel API specifications. These resources will be deployed along with your agent.
└── blaxel-search.yaml  # Here, blaxel-search is a sandbox web search tool we provide so you can develop your first agent. It has a low rate limit, so we recommend you use a dedicated MCP server for production.
src/
├── main.py             # This file is the standard entrypoint of the project. It is used to start the server and create an endpoint bound with the agent.py file.
└── agent.py            # This file is the main file of your agent. It is loaded from main.py. In the template, all the agent logic is implemented here.
```
The only mandatory parameter in `blaxel.toml` is `type`, so Blaxel knows which kind of entity to deploy. The other parameters are not mandatory but allow you to customize the deployment:
```toml
name = "my-agent"
workspace = "my-workspace"
type = "agent"

agents = []
functions = ["blaxel-search"]
models = ["gpt-4o-mini"]

[env]
DEFAULT_CITY = "San Francisco"

[runtime]
timeout = 900
memory = 1024

[[triggers]]
id = "trigger-async-my-agent"
type = "http-async"
[triggers.configuration]
path = "agents/my-agent/async" # This will create this endpoint on the following base URL: https://run.blaxel.ai/{YOUR-WORKSPACE}
retry = 1

[[triggers]]
id = "trigger-my-agent"
type = "http"
[triggers.configuration]
path = "agents/my-agent/sync"
retry = 1
authenticationType = "public"
```
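A variable declared in the `[env]` section above (like `DEFAULT_CITY`) can then be read from your agent code. This sketch assumes the variables are exposed as process environment variables; `os.environ` here is plain Python, not a Blaxel-specific API, and the fallback value is only for local runs where the variable may be unset.

```python
import os

# DEFAULT_CITY comes from the [env] section of blaxel.toml at deployment;
# the fallback is only for local runs where it may be unset.
default_city = os.environ.get("DEFAULT_CITY", "San Francisco")
print(f"Using default city: {default_city}")
```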
- The `name`, `workspace`, and `type` fields are optional and serve as default values. Any `bl` command run in the folder will use these defaults rather than prompting you for input.
- The `agents`, `functions`, and `models` fields are also optional. They specify which resources to deploy with the agent. These resources are preloaded during build, eliminating runtime dependencies on the Blaxel control plane and dramatically improving performance.
- The `[env]` section defines environment variables that the agent can access via the SDK. Note that these are NOT secrets.
- The `[runtime]` section allows you to override agent deployment parameters: the timeout (in s) or the memory (in MB) to allocate.
- The `[[triggers]]` and `[triggers.configuration]` sections define ways to send requests to the agent. You can create both synchronous and asynchronous trigger endpoints, and make them either private (default) or public. A private synchronous HTTP endpoint is always created by default, even if you don't define any trigger here.

You can also add an `[entrypoint]` section to specify how Blaxel is going to start your server:
```toml
...

[entrypoint]
prod = "python src/main.py"
dev = "fastapi dev"

...
```
- `prod`: the command that will be used to serve your agent in production. Example: `python src/main.py`
- `dev`: same as `prod` but in dev mode; it will be used with the command `--hotreload`. Example: `fastapi dev`

The `entrypoint` section is optional. If not specified, Blaxel will automatically detect the agent's content and configure your agent's startup settings.
If your deployment fails to start with errors such as:

```
Default STARTUP TCP probe failed 1 time consecutively for container "agent" on port 80. The instance was not started.
Connection failed with status DEADLINE_EXCEEDED.
```

make sure your agent server binds to the host and port set in the environment variables `BL_SERVER_HOST` & `BL_SERVER_PORT`. Blaxel automatically injects these variables during deployment.