ell package¶
Note
This page is coming soon; in the meantime, be sure to read the source code.
Automatically generated API reference for ell.
ell is a Python library for language model programming (LMP). It provides a simple and intuitive interface for working with large language models.
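For orientation, here is a minimal sketch of the core workflow; the model name is illustrative, and an OpenAI API key is assumed to be configured in the environment:
```python
import ell

ell.init(verbose=True)  # optional: enable verbose logging

@ell.simple(model="gpt-4o-mini")
def greet(name: str) -> str:
    '''You are a friendly assistant.'''  # docstring becomes the system prompt
    return f"Say hello to {name}."       # return value becomes the user prompt

print(greet("world"))  # prints the model's text-only reply
```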
- pydantic model ell.Config¶
Bases:
BaseModel
Configuration class for ELL.
JSON schema:
```json
{ "title": "Config", "type": "object", "properties": { "registry": { "additionalProperties": { "$ref": "#/$defs/_Model" }, "description": "A dictionary mapping model names to their configurations.", "title": "Registry", "type": "object" }, "verbose": { "default": false, "description": "If True, enables verbose logging.", "title": "Verbose", "type": "boolean" }, "wrapped_logging": { "default": true, "description": "If True, enables wrapped logging for better readability.", "title": "Wrapped Logging", "type": "boolean" }, "override_wrapped_logging_width": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "description": "If set, overrides the default width for wrapped logging.", "title": "Override Wrapped Logging Width" }, "store": { "default": null, "description": "An optional Store instance for persistence.", "title": "Store", "type": "null" }, "autocommit": { "default": false, "description": "If True, enables automatic committing of changes to the store.", "title": "Autocommit", "type": "boolean" }, "lazy_versioning": { "default": true, "description": "If True, enables lazy versioning for improved performance.", "title": "Lazy Versioning", "type": "boolean" }, "default_api_params": { "description": "Default parameters for language models.", "title": "Default Api Params", "type": "object" }, "default_client": { "default": null, "title": "Default Client" }, "autocommit_model": { "default": "gpt-4o-mini", "description": "When set, changes the default autocommit model from GPT 4o mini.", "title": "Autocommit Model", "type": "string" }, "providers": { "default": null, "title": "Providers" } }, "$defs": { "_Model": { "properties": { "name": { "title": "Name", "type": "string" }, "default_client": { "anyOf": [ {}, { "type": "null" } ], "default": null, "title": "Default Client" }, "supports_streaming": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Supports Streaming" } }, "required": [ "name" ], "title": "_Model", "type": "object" } } }
```
- Config:
arbitrary_types_allowed: bool = True
protected_namespaces: tuple = ('protect_',)
- Fields:
autocommit (bool)
autocommit_model (str)
default_api_params (Dict[str, Any])
default_client (openai.OpenAI | None)
lazy_versioning (bool)
override_wrapped_logging_width (int | None)
providers (Dict[Type, ell.provider.Provider])
registry (Dict[str, ell.configurator._Model])
store (None)
verbose (bool)
wrapped_logging (bool)
- field autocommit: bool = False¶
If True, enables automatic committing of changes to the store.
- field autocommit_model: str = 'gpt-4o-mini'¶
The model used for autocommit operations. Defaults to GPT-4o mini ('gpt-4o-mini'); set this to use a different model.
- field default_api_params: Dict[str, Any] [Optional]¶
Default parameters for language models.
- field default_client: OpenAI | None = None¶
The default OpenAI client used when a specific model client is not found.
- field lazy_versioning: bool = True¶
If True, enables lazy versioning for improved performance.
- field override_wrapped_logging_width: int | None = None¶
If set, overrides the default width for wrapped logging.
- field providers: Dict[Type, Provider] [Optional]¶
A dictionary mapping client types to provider classes.
- field registry: Dict[str, _Model] [Optional]¶
A dictionary mapping model names to their configurations.
- field store: None = None¶
An optional Store instance for persistence.
- field verbose: bool = False¶
If True, enables verbose logging.
- field wrapped_logging: bool = True¶
If True, enables wrapped logging for better readability.
- get_client_for(model_name: str) → Tuple[OpenAI | None, bool]¶
Get the OpenAI client for a specific model name.
- Parameters:
model_name (str) – The name of the model to get the client for.
- Returns:
A tuple of the OpenAI client for the specified model (or None if not found) and a flag indicating whether the default client was used as a fallback.
- Return type:
Tuple[Optional[openai.Client], bool]
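For illustration, a sketch of looking up a client, assuming the global Config instance is exposed as ell.config and the model name was previously registered:
```python
import ell

client, used_fallback = ell.config.get_client_for("gpt-4o-mini")
if client is None:
    print("No client registered for this model")
elif used_fallback:
    print("Using the default client as a fallback")
```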
- get_provider_for(client: Type[Any] | Any) → Provider | None¶
Get the provider instance for a specific client instance.
- Parameters:
client (Any) – The client instance to get the provider for.
- Returns:
The provider instance for the specified client, or None if not found.
- Return type:
Optional[Provider]
- model_registry_override(overrides: Dict[str, _Model])¶
Temporarily override the model registry with new model configurations.
- Parameters:
overrides (Dict[str, _Model]) – A dictionary mapping model names to _Model configurations that temporarily replace the corresponding registry entries.
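The wording suggests a scoped, temporary override; assuming it behaves as a context manager (an assumption, not confirmed by this page), usage would look roughly like:
```python
from ell.configurator import _Model  # private class, per the field types above

# Hypothetical alias: route "my-alias" to gpt-4o-mini only inside the block
with ell.config.model_registry_override({"my-alias": _Model(name="gpt-4o-mini")}):
    ...  # calls resolving "my-alias" use the overridden registry here
```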
- register_model(name: str, default_client: OpenAI | Any | None = None, supports_streaming: bool | None = None) → None¶
Register a model with its configuration.
- register_provider(provider: Provider, client_type: Type[Any]) → None¶
Register a provider class for a specific client type.
- Parameters:
provider (Provider) – The provider instance to register.
client_type (Type[Any]) – The client type to associate with the provider.
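A combined sketch of the two registration hooks against the global config; the local server URL is a placeholder, and MyProvider/MyClientType are hypothetical names:
```python
import ell
import openai

# Route a custom model name to a dedicated client (e.g., a local OpenAI-compatible server)
local_client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
ell.config.register_model("my-local-model", default_client=local_client, supports_streaming=True)

# A custom Provider instance would be registered against the client type it serves:
# ell.config.register_provider(MyProvider(), MyClientType)
```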
- pydantic model ell.ContentBlock¶
Bases:
BaseModel
JSON schema:
```json
{ "title": "ContentBlock", "type": "object", "properties": { "text": { "anyOf": [ { "properties": { "content": { "title": "Content", "type": "string" }, "__origin_trace__": { "title": " Origin Trace ", "type": "string" }, "__lstr": { "title": " Lstr", "type": "boolean" } }, "required": [ "content", "__origin_trace__", "__lstr" ], "type": "object" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Text" }, "image": { "default": null, "title": "Image" }, "audio": { "anyOf": [ { "items": { "type": "number" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Audio" }, "tool_call": { "default": null, "title": "Tool Call" }, "parsed": { "anyOf": [ { "$ref": "#/$defs/BaseModel" }, { "type": "null" } ], "default": null }, "tool_result": { "default": null, "title": "Tool Result" } }, "$defs": { "BaseModel": { "properties": {}, "title": "BaseModel", "type": "object" } } }
```
- Config:
arbitrary_types_allowed: bool = True
- Fields:
audio (numpy.ndarray | List[float] | None)
image (ell.types.message.ImageContent | None)
parsed (pydantic.main.BaseModel | None)
text (ell.types._lstr._lstr | str | None)
tool_call (ell.types.message.ToolCall | None)
tool_result (ell.types.message.ToolResult | None)
- Validators:
check_single_non_null
» all fields
- field audio: ndarray | List[float] | None = None¶
- Validated by:
check_single_non_null
- field image: ImageContent | None = None¶
- Validated by:
check_single_non_null
- field parsed: BaseModel | None = None¶
- Validated by:
check_single_non_null
- field text: _lstr | str | None = None¶
- Validated by:
check_single_non_null
- field tool_call: ToolCall | None = None¶
- Validated by:
check_single_non_null
- field tool_result: ToolResult | None = None¶
- Validated by:
check_single_non_null
- validator check_single_non_null » all fields¶
- classmethod coerce(content: ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel) → ContentBlock¶
Coerce various types of content into a ContentBlock.
This method provides a flexible way to create ContentBlock instances from different types of input.
Args:
    content: The content to be coerced into a ContentBlock. Can be one of the following types:
        - str: converted to a text ContentBlock.
        - ToolCall: converted to a tool_call ContentBlock.
        - ToolResult: converted to a tool_result ContentBlock.
        - BaseModel: converted to a parsed ContentBlock.
        - ContentBlock: returned as-is.
        - Image: converted to an image ContentBlock.
        - np.ndarray: converted to an image ContentBlock.
        - PILImage.Image: converted to an image ContentBlock.
Returns:
    ContentBlock: A new ContentBlock instance containing the coerced content.
Raises:
    ValueError: If the content cannot be coerced into a valid ContentBlock.
Examples:
```python
>>> ContentBlock.coerce("Hello, world!")
ContentBlock(text="Hello, world!")
>>> tool_call = ToolCall(...)
>>> ContentBlock.coerce(tool_call)
ContentBlock(tool_call=tool_call)
>>> tool_result = ToolResult(...)
>>> ContentBlock.coerce(tool_result)
ContentBlock(tool_result=tool_result)
>>> class MyModel(BaseModel):
...     field: str
>>> model_instance = MyModel(field="value")
>>> ContentBlock.coerce(model_instance)
ContentBlock(parsed=model_instance)
>>> from PIL import Image as PILImage
>>> img = PILImage.new('RGB', (100, 100))
>>> ContentBlock.coerce(img)
ContentBlock(image=ImageContent(image=<PIL.Image.Image object>))
>>> import numpy as np
>>> arr = np.random.rand(100, 100, 3)
>>> ContentBlock.coerce(arr)
ContentBlock(image=ImageContent(image=<PIL.Image.Image object>))
>>> image = Image(url="https://example.com/image.jpg")
>>> ContentBlock.coerce(image)
ContentBlock(image=ImageContent(url="https://example.com/image.jpg"))
```
Notes:
- This method is particularly useful when working with heterogeneous content types and you want to ensure they are all properly encapsulated in ContentBlock instances.
- The method performs type checking and appropriate conversions to ensure the resulting ContentBlock is valid according to the model's constraints.
- For image content, Image objects, PIL Image objects, and numpy arrays are supported, with automatic conversion to the appropriate format.
- As a last resort, the method will attempt to create an image from the input before raising a ValueError.
- serialize_parsed(value: BaseModel | None, _info)¶
- property content¶
Returns the single non-null content value of this block (text, image, audio, tool_call, parsed, or tool_result).
- property type¶
Returns the name of the populated field (e.g., 'text', 'image', 'tool_call'), indicating the kind of content this block holds.
- pydantic model ell.Message¶
Bases:
BaseModel
JSON schema:
```json
{ "title": "Message", "type": "object", "properties": { "role": { "title": "Role", "type": "string" }, "content": { "default": null, "title": "Content" } }, "required": [ "role" ] }
```
- Fields:
content (List[ell.types.message.ContentBlock])
role (str)
- field content: List[ContentBlock] [Required]¶
- field role: str [Required]¶
- call_tools_and_collect_as_message(parallel=False, max_workers=None)¶
Execute all tool calls in this message, optionally in parallel, and collect their results into a single Message.
- classmethod model_validate_json(json_str: str) → Message¶
Custom validation to handle deserialization from a JSON string.
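A round-trip sketch, assuming Message and the role helpers are importable from the top-level ell package as documented on this page:
```python
import ell
from ell import Message

msg = ell.user("Hello!")
restored = Message.model_validate_json(msg.model_dump_json())
assert restored.role == "user"
assert restored.text_only == "Hello!"
```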
- serialize_content(content: List[ContentBlock])¶
Serialize content blocks to a format suitable for JSON.
- property audios: List[ndarray | List[float]]¶
Returns a list of all audio content.
Example
```python
>>> audio1 = np.array([0.1, 0.2, 0.3])
>>> audio2 = np.array([0.4, 0.5, 0.6])
>>> message = Message(role="user", content=["Text", audio1, "More text", audio2])
>>> len(message.audios)
2
```
- property images: List[ImageContent]¶
Returns a list of all image content.
Example
```python
>>> from PIL import Image as PILImage
>>> image1 = Image(url="https://example.com/image.jpg")
>>> image2 = Image(image=PILImage.new('RGB', (200, 200)))
>>> message = Message(role="user", content=["Text", image1, "More text", image2])
>>> len(message.images)
2
>>> isinstance(message.images[0], Image)
True
>>> message.images[0].url
'https://example.com/image.jpg'
>>> isinstance(message.images[1].image, PILImage.Image)
True
```
- property parsed: BaseModel | List[BaseModel]¶
Returns a list of all parsed content.
Example
```python
>>> class CustomModel(BaseModel):
...     value: int
>>> parsed_content = CustomModel(value=42)
>>> message = Message(role="user", content=["Text", ContentBlock(parsed=parsed_content)])
>>> len(message.parsed)
1
```
- property text: str¶
Returns all text content, replacing non-text content with their representations.
Example
```python
>>> message = Message(role="user", content=["Hello", PILImage.new('RGB', (100, 100)), "World"])
>>> message.text
'Hello\n<PilImage>\nWorld'
```
- property text_only: str¶
Returns only the text content, ignoring non-text content.
Example
```python
>>> message = Message(role="user", content=["Hello", PILImage.new('RGB', (100, 100)), "World"])
>>> message.text_only
'Hello\nWorld'
```
- property tool_calls: List[ToolCall]¶
Returns a list of all tool calls.
Example
```python
>>> tool_call = ToolCall(tool=lambda x: x, params=BaseModel())
>>> message = Message(role="user", content=["Text", tool_call])
>>> len(message.tool_calls)
1
```
- property tool_results: List[ToolResult]¶
Returns a list of all tool results.
Example
```python
>>> tool_result = ToolResult(tool_call_id="123", result=[ContentBlock(text="Result")])
>>> message = Message(role="user", content=["Text", tool_result])
>>> len(message.tool_results)
1
```
- ell.assistant(content: ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel | List[ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel]) → Message¶
Create an assistant message with the given content.
Args:
    content: The content of the assistant message.
Returns:
    Message: A Message object with role set to 'assistant' and the provided content.
- ell.complex(model: str, client: Any | None = None, tools: List[Callable] | None = None, exempt_from_tracking=False, post_callback: Callable | None = None, **api_params)¶
A sophisticated language model programming decorator for complex LLM interactions.
This decorator transforms a function into a Language Model Program (LMP) capable of handling multi-turn conversations, tool usage, and various output formats. It’s designed for advanced use cases where full control over the LLM’s capabilities is needed.
- Parameters:
model (str) – The name or identifier of the language model to use.
client (Optional[openai.Client]) – An optional OpenAI client instance. If not provided, a default client will be used.
tools (Optional[List[Callable]]) – A list of tool functions that can be used by the LLM. Only available for certain models.
response_format (Optional[Dict[str, Any]]) – The response format for the LLM. Only available for certain models.
n (Optional[int]) – The number of responses to generate for the LLM. Only available for certain models.
temperature (Optional[float]) – The temperature parameter for controlling the randomness of the LLM.
max_tokens (Optional[int]) – The maximum number of tokens to generate for the LLM.
top_p (Optional[float]) – The top-p sampling parameter for controlling the diversity of the LLM.
frequency_penalty (Optional[float]) – The frequency penalty parameter for controlling the LLM’s repetition.
presence_penalty (Optional[float]) – The presence penalty parameter for controlling the LLM’s relevance.
stop (Optional[List[str]]) – The stop sequence for the LLM.
exempt_from_tracking (bool) – If True, the LMP usage won’t be tracked. Default is False.
post_callback (Optional[Callable]) – An optional function to process the LLM’s output before returning.
api_params (Any) – Additional keyword arguments to pass to the underlying API call.
- Returns:
A decorator that can be applied to a function, transforming it into a complex LMP.
- Return type:
Callable
Functionality:
- Advanced LMP Creation:
Supports multi-turn conversations and stateful interactions.
Enables tool usage within the LLM context.
Allows for various output formats, including structured data and function calls.
- Flexible Input Handling:
Can process both single prompts and conversation histories.
Supports multimodal inputs (text, images, etc.) in the prompt.
- Comprehensive Integration:
Integrates with ell’s tracking system for monitoring LMP versions, usage, and performance.
Supports various language models and API configurations.
- Output Processing:
Can return raw LLM outputs or process them through a post-callback function.
Supports returning multiple message types (e.g., text, function calls, tool results).
Usage Modes and Examples:
Basic Prompt:
@ell.complex(model="gpt-4") def generate_story(prompt: str) -> List[Message]: '''You are a creative story writer''' # System prompt return [ ell.user(f"Write a short story based on this prompt: {prompt}") ] story : ell.Message = generate_story("A robot discovers emotions") print(story.text) # Access the text content of the last message
Multi-turn Conversation:
@ell.complex(model="gpt-4") def chat_bot(message_history: List[Message]) -> List[Message]: return [ ell.system("You are a helpful assistant."), ] + message_history conversation = [ ell.user("Hello, who are you?"), ell.assistant("I'm an AI assistant. How can I help you today?"), ell.user("Can you explain quantum computing?") ] response : ell.Message = chat_bot(conversation) print(response.text) # Print the assistant's response
Tool Usage:
```python
@ell.tool()
def get_weather(location: str) -> str:
    # Implementation to fetch weather
    return f"The weather in {location} is sunny."

@ell.complex(model="gpt-4", tools=[get_weather])
def weather_assistant(message_history: List[Message]) -> List[Message]:
    return [
        ell.system("You are a weather assistant. Use the get_weather tool when needed."),
    ] + message_history

conversation = [
    ell.user("What's the weather like in New York?")
]
response: ell.Message = weather_assistant(conversation)

if response.tool_calls:
    tool_results = response.call_tools_and_collect_as_message()
    print("Tool results:", tool_results.text)
    # Continue the conversation with tool results
    final_response = weather_assistant(conversation + [response, tool_results])
    print("Final response:", final_response.text)
```
Structured Output:
```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int

@ell.complex(model="gpt-4", response_format=PersonInfo)
def extract_person_info(text: str) -> List[Message]:
    return [
        ell.system("Extract person information from the given text."),
        ell.user(text)
    ]

text = "John Doe is a 30-year-old software engineer."
result: ell.Message = extract_person_info(text)
person_info = result.parsed
print(f"Name: {person_info.name}, Age: {person_info.age}")
```
Multimodal Input:
@ell.complex(model="gpt-4-vision-preview") def describe_image(image: PIL.Image.Image) -> List[Message]: return [ ell.system("Describe the contents of the image in detail."), ell.user([ ContentBlock(text="What do you see in this image?"), ContentBlock(image=image) ]) ] image = PIL.Image.open("example.jpg") description = describe_image(image) print(description.text)
Parallel Tool Execution:
@ell.complex(model="gpt-4", tools=[tool1, tool2, tool3]) def parallel_assistant(message_history: List[Message]) -> List[Message]: return [ ell.system("You can use multiple tools in parallel."), ] + message_history response = parallel_assistant([ell.user("Perform tasks A, B, and C simultaneously.")]) if response.tool_calls: tool_results : ell.Message = response.call_tools_and_collect_as_message(parallel=True, max_workers=3) print("Parallel tool results:", tool_results.text)
Helper Functions for Output Processing:
response.text: Get the full text content of the last message.
response.text_only: Get only the text content, excluding non-text elements.
response.tool_calls: Access the list of tool calls in the message.
response.tool_results: Access the list of tool results in the message.
response.parsed: Access structured (parsed) data outputs.
response.call_tools_and_collect_as_message(): Execute tool calls and collect results.
Message(role="user", content=[…]).to_openai_message(): Convert to OpenAI API format.
Notes:
The decorated function should return a list of Message objects.
For tool usage, ensure that tools are properly decorated with @ell.tool().
When using structured outputs, specify the response_format in the decorator.
The complex decorator supports all features of simpler decorators like @ell.simple.
Use helper functions and properties to easily access and process different types of outputs.
See Also:
ell.simple: For simpler text-only LMP interactions.
ell.tool: For defining tools that can be used within complex LMPs.
ell.studio: For visualizing and analyzing LMP executions.
- ell.get_store() → None¶
Return the store currently configured on the global ell configuration, if any.
- ell.init(store: None | str = None, verbose: bool = False, autocommit: bool = True, lazy_versioning: bool = True, default_api_params: Dict[str, Any] | None = None, default_client: Any | None = None, autocommit_model: str = 'gpt-4o-mini') → None¶
Initialize the ELL configuration with various settings.
- Parameters:
verbose (bool) – Set verbosity of ELL operations.
store (Union[Store, str], optional) – Set the store for ELL. Can be a Store instance or a string path for SQLiteStore.
autocommit (bool) – Set autocommit for the store operations.
lazy_versioning (bool) – Enable or disable lazy versioning.
default_api_params (Dict[str, Any], optional) – Set default parameters for language models.
default_client (openai.Client, optional) – Set the default OpenAI client.
autocommit_model (str) – Set the model used for autocommitting.
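A typical initialization sketch; the store path is illustrative and, per the parameter description above, a string is interpreted as a SQLiteStore path:
```python
import ell

ell.init(
    store="./logdir",              # string path -> SQLiteStore for persistence
    verbose=True,                  # verbose logging
    autocommit=True,               # auto-commit changes to the store
    autocommit_model="gpt-4o-mini",
)
```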
- ell.register_provider(provider: Provider, client_type: Type[Any]) → None¶
Register a provider for a specific client type on the global ell configuration.
- ell.set_store(*args, **kwargs) → None¶
Set the store on the global ell configuration.
- ell.simple(model: str, client: Any | None = None, exempt_from_tracking=False, **api_params)¶
The fundamental unit of language model programming in ell.
This decorator simplifies the process of creating Language Model Programs (LMPs) that return text-only outputs from language models, while supporting multimodal inputs. It wraps the more complex ‘complex’ decorator, providing a streamlined interface for common use cases.
- Parameters:
model (str) – The name or identifier of the language model to use.
client (Optional[openai.Client]) – An optional OpenAI client instance. If not provided, a default client will be used.
exempt_from_tracking (bool) – If True, the LMP usage won’t be tracked. Default is False.
api_params (Any) – Additional keyword arguments to pass to the underlying API call.
Usage: The decorated function can return either a single prompt or a list of ell.Message objects:
@ell.simple(model="gpt-4", temperature=0.7) def summarize_text(text: str) -> str: '''You are an expert at summarizing text.''' # System prompt return f"Please summarize the following text:\n\n{text}" # User prompt @ell.simple(model="gpt-4", temperature=0.7) def describe_image(image : PIL.Image.Image) -> List[ell.Message]: '''Describe the contents of an image.''' # unused because we're returning a list of Messages return [ # helper function for ell.Message(text="...", role="system") ell.system("You are an AI trained to describe images."), # helper function for ell.Message(content="...", role="user") ell.user(["Describe this image in detail.", image]), ] image_description = describe_image(PIL.Image.open("https://example.com/image.jpg")) print(image_description) # Output will be a string text-only description of the image summary = summarize_text("Long text to summarize...") print(summary) # Output will be a text-only summary
Notes:
This decorator is designed for text-only model outputs, but supports multimodal inputs.
It simplifies complex responses from language models to text-only format, regardless of the model’s capability for structured outputs, function calling, or multimodal outputs.
For preserving complex model outputs (e.g., structured data, function calls, or multimodal outputs), use the @ell.complex decorator instead. @ell.complex returns a Message object (role='assistant').
The decorated function can return a string or a list of ell.Message objects for more complex prompts, including multimodal inputs.
If called with n > 1 in api_params, the wrapped LMP will return a list of strings for the n parallel outputs of the model instead of just one string. Otherwise, it will return a single string.
You can pass LM API parameters either in the decorator or when calling the decorated function. Parameters passed during the function call will override those set in the decorator.
Example of passing LM API params:
@ell.simple(model="gpt-4", temperature=0.7) def generate_story(prompt: str) -> str: return f"Write a short story based on this prompt: {prompt}" # Using default parameters story1 = generate_story("A day in the life of a time traveler") # Overriding parameters during function call story2 = generate_story("An AI's first day of consciousness", api_params={"temperature": 0.9, "max_tokens": 500})
See Also:
ell.complex(): For LMPs that preserve full structure of model responses, including multimodal outputs.
ell.tool(): For defining tools that can be used within complex LMPs.
ell.studio: For visualizing and analyzing LMP executions.
- ell.system(content: ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel | List[ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel]) → Message¶
Create a system message with the given content.
Args:
    content: The content of the system message.
Returns:
    Message: A Message object with role set to 'system' and the provided content.
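Together with ell.user and ell.assistant, this helper composes conversations; content may be a plain string or a mixed list that is coerced into ContentBlocks (sketch; the image is a placeholder):
```python
import ell
from PIL import Image as PILImage

image = PILImage.new('RGB', (100, 100))  # placeholder image

conversation = [
    ell.system("You are a concise assistant."),
    ell.user("What is ell?"),
    ell.assistant("A library for language model programming."),
    ell.user(["Describe this image:", image]),  # mixed content list
]
```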
- ell.tool(*, exempt_from_tracking: bool = False, **tool_kwargs)¶
Defines a tool for use in language model programs (LMPs) that support tool use.
This decorator wraps a function, adding metadata and handling for tool invocations. It automatically extracts the tool’s description and parameters from the function’s docstring and type annotations, creating a structured representation for LMs to use.
- Parameters:
exempt_from_tracking (bool) – If True, the tool usage won’t be tracked. Default is False.
tool_kwargs – Additional keyword arguments for tool configuration.
- Returns:
A wrapped version of the original function, usable as a tool by LMs.
- Return type:
Callable
Requirements:
Function must have fully typed arguments (Pydantic-serializable).
Return value must be one of: str, JSON-serializable object, Pydantic model, or List[ContentBlock].
All parameters must have type annotations.
Complex types should be Pydantic models.
Function should have a descriptive docstring.
Can only be used in LMPs decorated with @ell.complex.
Functionality:
- Metadata Extraction:
Uses function docstring as tool description.
Extracts parameter info from type annotations and docstring.
Creates a Pydantic model for parameter validation and schema generation.
- Integration with LMs:
Can be passed to @ell.complex decorators.
Provides structured tool information to LMs.
- Invocation Handling:
Manages tracking, logging, and result processing.
Wraps results in appropriate types (e.g., _lstr) for tracking.
Usage Modes:
- Normal Function Call:
Behaves like a regular Python function.
Example: result = my_tool(arg1="value", arg2=123)
- LMP Tool Call:
Used within LMPs or with explicit _tool_call_id.
Returns a ToolResult object.
Example: result = my_tool(arg1="value", arg2=123, _tool_call_id="unique_id")
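A sketch showing both invocation modes side by side; the _tool_call_id value is illustrative:
```python
import ell

@ell.tool()
def add(a: int, b: int) -> str:
    '''Add two integers and return the sum as a string.'''
    return str(a + b)

plain = add(a=1, b=2)                             # normal call: plain return value
as_tool = add(a=1, b=2, _tool_call_id="call_1")   # LMP-style call: returns a ToolResult
```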
Result Coercion:
String → ContentBlock(text=result)
Pydantic BaseModel → ContentBlock(parsed=result)
List[ContentBlock] → Used as-is
Other types → ContentBlock(text=json.dumps(result))
Example:
```python
from pydantic import Field

@ell.tool()
def create_claim_draft(
    claim_details: str,
    claim_type: str,
    claim_amount: float,
    claim_date: str = Field(description="Date format: YYYY-MM-DD")
) -> str:
    '''Create a claim draft. Returns the created claim ID.'''
    return "12345"

# For use in a complex LMP:
@ell.complex(model="gpt-4", tools=[create_claim_draft], temperature=0.1)
def insurance_chatbot(message_history: List[Message]) -> List[Message]:
    ...  # Chatbot implementation

message_history = [
    ell.user("I crashed my car into a tree."),
    ell.assistant("I'm sorry to hear that. Can you provide more details?"),
    ell.user("The car is totaled and I need to file a claim. Happened on 2024-08-01. total value is like $5000")
]
x = insurance_chatbot(message_history)
print(x)
'''ell.Message(content=[
    ContentBlock(tool_call(
        tool_call_id="asdas4e",
        tool_fn=create_claim_draft,
        input=create_claim_draftParams({
            claim_details="The car is totaled and I need to file a claim. Happened on 2024-08-01. total value is like $5000",
            claim_type="car",
            claim_amount=5000,
            claim_date="2024-08-01"
        })
    ))
], role='assistant')'''

if x.tool_calls:
    next_user_message = x.call_tools_and_collect_as_message()  # This actually calls create_claim_draft
    print(next_user_message)
    '''ell.Message(content=[
        ContentBlock(tool_result=ToolResult(
            tool_call_id="asdas4e",
            result=[ContentBlock(text="12345")]
        ))
    ], role='user')'''
    y = insurance_chatbot(message_history + [x, next_user_message])
    print(y)
    '''ell.Message("I've filed that for you!", role='assistant')'''
```
Note:
- Tools are integrated into LMP calls via the 'tools' parameter in @ell.complex.
- LMs receive structured tool information, enabling understanding and usage within the conversation context.
- ell.user(content: ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel | List[ContentBlock | str | ToolCall | ToolResult | ImageContent | ndarray | Image | BaseModel]) → Message¶
Create a user message with the given content.
Args:
    content: The content of the user message.
Returns:
    Message: A Message object with role set to 'user' and the provided content.