@ell.complex¶
While @ell.simple provides a straightforward way to work with language models that return text, modern language models are increasingly capable of handling and generating multimodal content, structured outputs, and complex interactions. This is where @ell.complex comes into play.
The @ell.complex decorator is designed to handle sophisticated interactions with language models, including multimodal inputs/outputs, structured data, and tool usage. It extends @ell.simple’s capabilities to address the evolving nature of language models, which can now process images, generate structured data, make function calls, and engage in multi-turn conversations. By returning rich Message objects instead of simple strings, @ell.complex enables more nuanced and powerful interactions, overcoming the limitations of traditional string-based interfaces in these advanced scenarios.
Note
Messages in ell are not the same as the dictionary messages used in the OpenAI API. ell’s Message API provides a more intuitive and flexible way to construct and manipulate messages. You can read more about ell’s Message API and type coercion in the Messages page.
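To make the distinction concrete, here is a rough stdlib-only sketch of the coercion idea: a plain string is promoted to a role-tagged message of content blocks, which is roughly the convenience that constructors like ell.user() provide. The Message and ContentBlock classes and the coerce_user helper below are simplified stand-ins, not ell's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ContentBlock:
    text: str

@dataclass
class Message:
    role: str
    content: List[ContentBlock]

def coerce_user(value: Union[str, List[str]]) -> Message:
    """Promote a plain string (or list of strings) into a user Message,
    mirroring the convenience of ell.user(). Hypothetical helper for illustration."""
    if isinstance(value, str):
        value = [value]
    return Message(role="user", content=[ContentBlock(text=v) for v in value])

msg = coerce_user("Hello!")
print(msg.role)             # user
print(msg.content[0].text)  # Hello!
```

The payoff is that callers can pass bare strings while downstream code always sees a uniform, structured message type.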
Usage¶
The basic usage of @ell.complex is similar to @ell.simple, but with enhanced capabilities:
import ell
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    title: str = Field(description="The title of the movie")
    rating: int = Field(description="The rating of the movie out of 10")
    summary: str = Field(description="A brief summary of the movie")

@ell.complex(model="gpt-4o-2024-08-06", response_format=MovieReview)
def generate_movie_review(movie: str):
    """You are a movie review generator. Given the name of a movie, you need to return a structured review."""
    return f"Generate a review for the movie {movie}"

review_message = generate_movie_review("The Matrix")
review = review_message.parsed
print(f"Movie: {review.title}, Rating: {review.rating}/10")
print(f"Summary: {review.summary}")
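Under the hood, a structured output amounts to the model emitting JSON that is validated against the declared schema, which is what the .parsed property exposes. The sketch below illustrates that idea with a stdlib dataclass in place of the Pydantic model; the parse_structured helper and the sample JSON are illustrative, not ell's actual parsing code.

```python
import json
from dataclasses import dataclass

@dataclass
class MovieReview:
    title: str
    rating: int
    summary: str

def parse_structured(raw: str) -> MovieReview:
    """Parse the model's JSON text into the declared schema,
    roughly what accessing .parsed on a response Message does."""
    return MovieReview(**json.loads(raw))

# Sample model output, for illustration only
raw = '{"title": "The Matrix", "rating": 9, "summary": "A hacker learns reality is simulated."}'
review = parse_structured(raw)
print(f"Movie: {review.title}, Rating: {review.rating}/10")
```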
Key Features¶
1. Structured Outputs¶
@ell.complex allows for structured outputs using Pydantic models:
@ell.complex(model="gpt-4o-2024-08-06", response_format=MovieReview)
def generate_movie_review(movie: str) -> MovieReview:
    """You are a movie review generator. Given the name of a movie, you need to return a structured review."""
    return f"Generate a review for the movie {movie}"

review_message = generate_movie_review("Inception")
review = review_message.parsed
print(f"Rating: {review.rating}/10")
2. Multimodal Interactions¶
@ell.complex can handle various types of inputs and outputs, including text and images:
from PIL import Image

@ell.complex(model="gpt-5-omni")
def describe_and_generate(prompt: str):
    return [
        ell.system("You can describe images and generate new ones based on text prompts."),
        ell.user(prompt)
    ]

result = describe_and_generate("A serene lake at sunset")
print(result.text)  # Prints the description
if result.images:
    result.images[0].show()  # Displays the generated image
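The reason one response can yield both a description and an image is that a Message holds a list of typed content blocks, and accessors like .text and .images filter that list. Here is a simplified stdlib sketch of that shape; the classes and properties below are stand-ins for illustration, not ell's real ones, and bytes stand in for a PIL image.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentBlock:
    text: Optional[str] = None
    image: Optional[bytes] = None  # stand-in for a PIL.Image.Image

class Message:
    def __init__(self, role: str, content: List[ContentBlock]):
        self.role = role
        self.content = content

    @property
    def text(self) -> str:
        """Join the text blocks, analogous to result.text."""
        return "\n".join(b.text for b in self.content if b.text is not None)

    @property
    def images(self) -> List[bytes]:
        """Collect the image blocks, analogous to result.images."""
        return [b.image for b in self.content if b.image is not None]

msg = Message("assistant", [
    ContentBlock(text="A serene lake at sunset."),
    ContentBlock(image=b"\x89PNG"),  # placeholder bytes, not a real image
])
print(msg.text)         # A serene lake at sunset.
print(len(msg.images))  # 1
```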
3. Chat-based Use Cases¶
@ell.complex is particularly useful for chat-based applications where you need to maintain conversation history:
from typing import List

from ell import Message

@ell.complex(model="gpt-4o", temperature=0.7)
def chat_bot(message_history: List[Message]) -> List[Message]:
    return [
        ell.system("You are a friendly chatbot. Engage in casual conversation."),
    ] + message_history

message_history = []
while True:
    user_input = input("You: ")
    message_history.append(ell.user(user_input))
    response = chat_bot(message_history)
    print("Bot:", response.text)
    message_history.append(response)
4. Tool Usage¶
@ell.complex supports tool usage, allowing language models to make function calls:
@ell.tool()
def get_weather(location: str = Field(description="The full name of a city and country, e.g. San Francisco, CA, USA")):
    """Get the current weather for a given location."""
    # Simulated weather API call
    return f"The weather in {location} is sunny."

@ell.complex(model="gpt-4-turbo", tools=[get_weather])
def travel_planner(destination: str):
    """Plan a trip based on the destination and current weather."""
    return [
        ell.system("You are a travel planner. Use the weather tool to provide relevant advice."),
        ell.user(f"Plan a trip to {destination}")
    ]

result = travel_planner("Paris")
print(result.text)  # Prints travel advice
if result.tool_calls:
    # Execute the tool calls so their results can be passed back to the language model
    result_message = result.call_tools_and_collect_as_message()
    print("Weather info:", result_message.tool_results[0].text)  # Raw text of the tool result
    print("Message to be sent to the LLM:", result_message.text)  # Representation of the message to be sent to the LLM
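Conceptually, executing tool calls is a dispatch step: each call names a registered tool, the matching function is invoked with the call's arguments, and the raw results are collected for the next model turn. The sketch below shows that dispatch loop with stdlib code only; the ToolCall class and call_tools_and_collect helper are simplified stand-ins for what call_tools_and_collect_as_message does, not ell's implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class ToolCall:
    name: str
    arguments: Dict[str, Any]

def get_weather(location: str) -> str:
    """Simulated weather tool."""
    return f"The weather in {location} is sunny."

# Registry mapping tool names to callables, as built from the tools=[...] list
TOOLS: Dict[str, Callable[..., str]] = {"get_weather": get_weather}

def call_tools_and_collect(tool_calls: List[ToolCall]) -> List[str]:
    """Dispatch each tool call to its registered function and collect the raw text results."""
    return [TOOLS[call.name](**call.arguments) for call in tool_calls]

results = call_tools_and_collect([ToolCall("get_weather", {"location": "Paris"})])
print(results[0])  # The weather in Paris is sunny.
```

In the real library those results are wrapped back into a Message so the conversation can continue with the tool outputs in context.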
Reference¶
- ell.complex(model: str, client: Any | None = None, tools: List[Callable] | None = None, exempt_from_tracking=False, post_callback: Callable | None = None, **api_params)¶
A sophisticated language model programming decorator for complex LLM interactions.
This decorator transforms a function into a Language Model Program (LMP) capable of handling multi-turn conversations, tool usage, and various output formats. It’s designed for advanced use cases where full control over the LLM’s capabilities is needed.
- Parameters:
model (str) – The name or identifier of the language model to use.
client (Optional[openai.Client]) – An optional OpenAI client instance. If not provided, a default client will be used.
tools (Optional[List[Callable]]) – A list of tool functions that can be used by the LLM. Only available for certain models.
response_format (Optional[Dict[str, Any]]) – The response format for the LLM. Only available for certain models.
n (Optional[int]) – The number of responses to generate for the LLM. Only available for certain models.
temperature (Optional[float]) – The temperature parameter for controlling the randomness of the LLM.
max_tokens (Optional[int]) – The maximum number of tokens to generate for the LLM.
top_p (Optional[float]) – The top-p sampling parameter for controlling the diversity of the LLM.
frequency_penalty (Optional[float]) – The frequency penalty parameter for controlling the LLM’s repetition.
presence_penalty (Optional[float]) – The presence penalty parameter for encouraging the LLM to introduce new topics.
stop (Optional[List[str]]) – The stop sequence for the LLM.
exempt_from_tracking (bool) – If True, the LMP usage won’t be tracked. Default is False.
post_callback (Optional[Callable]) – An optional function to process the LLM’s output before returning.
api_params (Any) – Additional keyword arguments to pass to the underlying API call.
- Returns:
A decorator that can be applied to a function, transforming it into a complex LMP.
- Return type:
Callable
Functionality:
- Advanced LMP Creation:
Supports multi-turn conversations and stateful interactions.
Enables tool usage within the LLM context.
Allows for various output formats, including structured data and function calls.
- Flexible Input Handling:
Can process both single prompts and conversation histories.
Supports multimodal inputs (text, images, etc.) in the prompt.
- Comprehensive Integration:
Integrates with ell’s tracking system for monitoring LMP versions, usage, and performance.
Supports various language models and API configurations.
- Output Processing:
Can return raw LLM outputs or process them through a post-callback function.
Supports returning multiple message types (e.g., text, function calls, tool results).
Usage Modes and Examples:
Basic Prompt:
@ell.complex(model="gpt-4")
def generate_story(prompt: str) -> List[Message]:
    '''You are a creative story writer'''  # System prompt
    return [
        ell.user(f"Write a short story based on this prompt: {prompt}")
    ]

story : ell.Message = generate_story("A robot discovers emotions")
print(story.text)  # Access the text content of the last message
Multi-turn Conversation:
@ell.complex(model="gpt-4")
def chat_bot(message_history: List[Message]) -> List[Message]:
    return [
        ell.system("You are a helpful assistant."),
    ] + message_history

conversation = [
    ell.user("Hello, who are you?"),
    ell.assistant("I'm an AI assistant. How can I help you today?"),
    ell.user("Can you explain quantum computing?")
]
response : ell.Message = chat_bot(conversation)
print(response.text)  # Print the assistant's response
Tool Usage:
@ell.tool()
def get_weather(location: str) -> str:
    # Implementation to fetch weather
    return f"The weather in {location} is sunny."

@ell.complex(model="gpt-4", tools=[get_weather])
def weather_assistant(message_history: List[Message]) -> List[Message]:
    return [
        ell.system("You are a weather assistant. Use the get_weather tool when needed."),
    ] + message_history

conversation = [
    ell.user("What's the weather like in New York?")
]
response : ell.Message = weather_assistant(conversation)
if response.tool_calls:
    tool_results = response.call_tools_and_collect_as_message()
    print("Tool results:", tool_results.text)
    # Continue the conversation with tool results
    final_response = weather_assistant(conversation + [response, tool_results])
    print("Final response:", final_response.text)
Structured Output:
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int

@ell.complex(model="gpt-4", response_format=PersonInfo)
def extract_person_info(text: str) -> List[Message]:
    return [
        ell.system("Extract person information from the given text."),
        ell.user(text)
    ]

text = "John Doe is a 30-year-old software engineer."
result : ell.Message = extract_person_info(text)
person_info = result.parsed
print(f"Name: {person_info.name}, Age: {person_info.age}")
Multimodal Input:
@ell.complex(model="gpt-4-vision-preview")
def describe_image(image: PIL.Image.Image) -> List[Message]:
    return [
        ell.system("Describe the contents of the image in detail."),
        ell.user([
            ContentBlock(text="What do you see in this image?"),
            ContentBlock(image=image)
        ])
    ]

image = PIL.Image.open("example.jpg")
description = describe_image(image)
print(description.text)
Parallel Tool Execution:
@ell.complex(model="gpt-4", tools=[tool1, tool2, tool3])
def parallel_assistant(message_history: List[Message]) -> List[Message]:
    return [
        ell.system("You can use multiple tools in parallel."),
    ] + message_history

response = parallel_assistant([ell.user("Perform tasks A, B, and C simultaneously.")])
if response.tool_calls:
    tool_results : ell.Message = response.call_tools_and_collect_as_message(parallel=True, max_workers=3)
    print("Parallel tool results:", tool_results.text)
Helper Functions for Output Processing:
response.text: Get the full text content of the last message.
response.text_only: Get only the text content, excluding non-text elements.
response.tool_calls: Access the list of tool calls in the message.
response.tool_results: Access the list of tool results in the message.
response.structured: Access structured data outputs.
response.call_tools_and_collect_as_message(): Execute tool calls and collect results.
Message(role="user", content=[…]).to_openai_message(): Convert to OpenAI API format.
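The last helper bridges ell's block-based messages and the plain role/content dicts the OpenAI chat API expects. As a simplified illustration of that conversion (assuming text-only content; the real method also handles images and tool payloads, and these classes are stand-ins rather than ell's own):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContentBlock:
    text: str

def to_openai_message(role: str, content: List[ContentBlock]) -> dict:
    """Flatten text content blocks into the {'role', 'content'} dict
    shape used by the OpenAI chat completions API. Illustrative sketch only."""
    return {"role": role, "content": "".join(block.text for block in content)}

openai_msg = to_openai_message("user", [ContentBlock(text="Hello, "), ContentBlock(text="world!")])
print(openai_msg)  # {'role': 'user', 'content': 'Hello, world!'}
```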
Notes:
The decorated function should return a list of Message objects.
For tool usage, ensure that tools are properly decorated with @ell.tool().
When using structured outputs, specify the response_format in the decorator.
The complex decorator supports all features of simpler decorators like @ell.simple.
Use helper functions and properties to easily access and process different types of outputs.
See Also:
ell.simple: For simpler text-only LMP interactions.
ell.tool: For defining tools that can be used within complex LMPs.
ell.studio: For visualizing and analyzing LMP executions.