Function Calling & Structured Output
What You'll Build Today
Up until now, we have treated Large Language Models (LLMs) like very smart chatbots. You send text in, and you get text out. But real-world software needs more than just paragraphs of text—it needs structured data. It needs dates, numbers, boolean flags, and lists.
Today, we are going to build an Intelligent Email Parser.
Imagine you receive hundreds of emails about meetings. Currently, a human has to read them and manually type the details into a calendar. You will build an AI system that takes a messy, rambling email and automatically converts it into a clean, structured JSON object containing the meeting date, a list of attendees, and specific action items.
Here is what you will learn:
* OpenAI Tool Calling: How to connect the "brain" of the LLM to the "hands" of your code.
* Pydantic: Why we need a strict set of rules (a schema) to ensure the AI doesn't give us broken data.
* Structured Data Extraction: How to turn unstructured text (natural language) into database-ready formats.
* The "Tools" Parameter: The specific API syntax that allows the LLM to prepare arguments for Python functions.
The Problem
Let's say you want to extract a meeting date from a user's message so you can schedule it in a database.
You might try to write code like this using standard string manipulation or even a basic LLM prompt:
# The "Old School" way - and why it hurts
user_email = "Hey team, let's meet next Tuesday at 2 PM to discuss the roadmap."
# Attempt 1: String splitting (The naive approach)
# This is brittle: "next Tuesday" isn't a fixed format, so any change in wording breaks the split logic
try:
    date_part = user_email.split("at")[0].split("meet")[1]
    print(f"Extracted date: {date_part}")
except IndexError:
    print("Code crashed: formatting didn't match exactly.")
# Attempt 2: Asking the LLM for JSON via a standard prompt
from openai import OpenAI
client = OpenAI()
prompt = f"""
Extract the date and time from this text: "{user_email}"
Return it as JSON format like {{ "date": "..." }}.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}]
)
content = response.choices[0].message.content
print(f"Raw AI Output: {content}")
# THE PAIN:
# The AI might output: 'Here is your JSON: { "date": "2023-10-10 14:00" }'
# Or wrap it in markdown fences: ```json { "date": "..." } ```
# Or use single quotes: { 'date': ... } (which breaks JSON parsing)
import json
try:
    data = json.loads(content)  # This often crashes because of extra text
    print("Success!")
except json.JSONDecodeError:
    print("FAILURE: The AI added extra text or formatting that broke the parser.")
Why is this painful?
The failure modes are unpredictable: string manipulation breaks the moment the wording changes, and even when you ask politely for JSON, the model may wrap it in prose, markdown fences, or single quotes, any of which makes json.loads throw. There has to be a way to force the LLM to speak "Code" instead of "English."
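You can reproduce the failure without spending an API call by feeding json.loads the kind of wrapped reply described above (the reply text here is a made-up example):

```python
import json

# A typical "helpful" model reply: valid JSON buried inside prose
content = 'Here is your JSON: { "date": "2023-10-10 14:00" }'

try:
    data = json.loads(content)
    print("Success!")
except json.JSONDecodeError as e:
    # The leading prose makes the whole string invalid JSON
    print(f"FAILURE: {e}")
```

Even one character of extra text before the opening brace is enough to break the parser.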
Let's Build It
We are going to use Function Calling (also known as Tool Use). This feature allows us to describe a Python function to the LLM. The LLM won't execute the code itself; instead, it will generate the perfect arguments for us to execute that code.
Step 1: Setup and Pydantic
We need a library called pydantic. This library allows us to define "Data Models"—essentially blueprints for what our data must look like.
If you haven't installed the libraries yet:
pip install openai pydantic
Now, let's define the structure of the data we want to extract from the email.
from pydantic import BaseModel, Field
from typing import List
# This class defines the "Shape" of the data we want.
# It acts as a contract between us and the AI.
class MeetingDetails(BaseModel):
    meeting_topic: str = Field(description="The main subject of the meeting")
    date_time: str = Field(description="The date and time of the meeting in ISO format (YYYY-MM-DD HH:MM)")
    attendees: List[str] = Field(description="List of names of people invited")
    priority: str = Field(description="Priority level: High, Medium, or Low based on urgency")

print("Schema defined successfully.")
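To see why this contract matters, try handing the model some data that violates it: Pydantic refuses anything that doesn't match the blueprint. This sketch restates the class so it runs standalone; the meeting values are made-up examples.

```python
from pydantic import BaseModel, Field, ValidationError
from typing import List

class MeetingDetails(BaseModel):
    meeting_topic: str = Field(description="The main subject of the meeting")
    date_time: str = Field(description="The date and time of the meeting in ISO format (YYYY-MM-DD HH:MM)")
    attendees: List[str] = Field(description="List of names of people invited")
    priority: str = Field(description="Priority level: High, Medium, or Low based on urgency")

# Data matching the contract is accepted...
meeting = MeetingDetails(
    meeting_topic="Q4 roadmap",
    date_time="2023-10-27 15:00",
    attendees=["Sarah", "Mike"],
    priority="High",
)
print(f"Accepted: {meeting.meeting_topic}")

# ...but incomplete data is rejected loudly
try:
    MeetingDetails(meeting_topic="Q4 roadmap")
except ValidationError as e:
    print(f"Rejected: {len(e.errors())} missing fields")
```

This is exactly the failure behavior we want: bad data crashes at the boundary instead of silently landing in the database.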
Step 2: Converting Pydantic to an OpenAI Schema
OpenAI expects a very specific JSON format to understand tools. Pydantic can help us generate this, but we need to format it into the dictionary structure OpenAI's API expects.
We will create a tool definition.
# We define the tool structure manually to ensure clarity on what OpenAI sees
tools = [
    {
        "type": "function",
        "function": {
            "name": "extract_meeting_details",
            "description": "Extracts structured meeting information from an email",
            "parameters": {
                "type": "object",
                "properties": {
                    "meeting_topic": {"type": "string", "description": "The main subject"},
                    "date_time": {"type": "string", "description": "ISO format date time"},
                    "attendees": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "List of names"
                    },
                    "priority": {
                        "type": "string",
                        "enum": ["High", "Medium", "Low"],
                        "description": "Urgency level"
                    }
                },
                "required": ["meeting_topic", "date_time", "attendees", "priority"]
            }
        }
    }
]
print("Tool schema ready to send to OpenAI.")
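Writing the dictionary by hand is fine for learning, but it can drift out of sync with the Pydantic class. With Pydantic v2, model_json_schema() generates the parameters block for you. A sketch (it restates the class, using a Literal type so the priority enum is generated too; Pydantic adds extra keys like title, which the API accepts in practice):

```python
from pydantic import BaseModel, Field
from typing import List, Literal

class MeetingDetails(BaseModel):
    meeting_topic: str = Field(description="The main subject")
    date_time: str = Field(description="ISO format date time")
    attendees: List[str] = Field(description="List of names")
    priority: Literal["High", "Medium", "Low"] = Field(description="Urgency level")

# model_json_schema() emits the same kind of JSON Schema dict we wrote by hand
schema = MeetingDetails.model_json_schema()

tools = [
    {
        "type": "function",
        "function": {
            "name": "extract_meeting_details",
            "description": "Extracts structured meeting information from an email",
            "parameters": schema,
        },
    }
]

print(schema["required"])
```

With one source of truth, the schema the model sees and the model class you validate against can never disagree.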
Step 3: The API Call
Now we send the email to the LLM. We pass our tools list to the API. We also use tool_choice to force the AI to use the tool.
from openai import OpenAI
import json
client = OpenAI()
email_content = """
Hi everyone,
I'm really worried about the Q4 timeline. We need to sit down and fix the roadmap.
Let's meet this Friday at 3 PM. I need Sarah, Mike, and the design lead there.
This is super urgent!
Thanks,
Boss
"""
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Today is Wednesday, Oct 25th, 2023."},
        {"role": "user", "content": email_content}
    ],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "extract_meeting_details"}}
)
# Let's inspect what we got back
print("--- Raw API Response ---")
print(response.choices[0].message)
Step 4: Extracting the Arguments
The response from OpenAI won't be in message.content (which will be None when the model calls a tool). It will be in message.tool_calls.
The LLM has generated a JSON string inside the arguments field. We need to parse it.
# Access the tool call
tool_call = response.choices[0].message.tool_calls[0]
# The arguments come back as a JSON string
function_args = json.loads(tool_call.function.arguments)
print("\n--- Parsed Data (Dictionary) ---")
print(f"Topic: {function_args.get('meeting_topic')}")
print(f"Date: {function_args.get('date_time')}")
print(f"Attendees: {function_args.get('attendees')}")
print(f"Priority: {function_args.get('priority')}")
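Note that the dictionary from json.loads is still unchecked: if the model ever omits a field or returns the wrong type, we want to fail loudly rather than store bad data. Running the parsed arguments back through the Pydantic model closes that gap. In this sketch, raw_arguments is a hardcoded stand-in for tool_call.function.arguments, and model_validate_json is Pydantic v2 API:

```python
import json
from pydantic import BaseModel, ValidationError
from typing import List

class MeetingDetails(BaseModel):
    meeting_topic: str
    date_time: str
    attendees: List[str]
    priority: str

# Stand-in for tool_call.function.arguments (always a JSON string)
raw_arguments = (
    '{"meeting_topic": "Q4 roadmap", "date_time": "2023-10-27 15:00", '
    '"attendees": ["Sarah", "Mike"], "priority": "High"}'
)

try:
    # Parse and validate in one step
    details = MeetingDetails.model_validate_json(raw_arguments)
    print(f"Validated meeting: {details.meeting_topic} at {details.date_time}")
except ValidationError as e:
    print(f"Model returned malformed arguments: {e}")
```

Now anything downstream of this point can trust the data's shape.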
Step 5: Putting It All Together
Let's wrap this into a clean function that takes text and returns a dictionary.
def parse_email_to_json(email_text):
    # 1. Define the tool (same as above)
    my_tools = [
        {
            "type": "function",
            "function": {
                "name": "save_meeting",
                "description": "Saves meeting details to database",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "topic": {"type": "string"},
                        "date": {"type": "string", "description": "YYYY-MM-DD HH:MM"},
                        "participants": {"type": "array", "items": {"type": "string"}},
                        "action_items": {"type": "array", "items": {"type": "string"}}
                    },
                    "required": ["topic", "date", "participants", "action_items"]
                }
            }
        }
    ]

    # 2. Call OpenAI
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are an admin assistant. Current year is 2024."},
            {"role": "user", "content": email_text}
        ],
        tools=my_tools,
        tool_choice={"type": "function", "function": {"name": "save_meeting"}}
    )

    # 3. Parse arguments
    tool_call = completion.choices[0].message.tool_calls[0]
    data = json.loads(tool_call.function.arguments)
    return data
# Test it
messy_email = """
Guys, the server crashed again. We need a post-mortem.
Can we do tomorrow morning at 9am?
Invite the devops team and Alice.
We need to:
Review logs
Check firewall settings
"""
result = parse_email_to_json(messy_email)
print("\n--- Final Structured Output ---")
print(json.dumps(result, indent=2))
Why this works: The LLM did the hard work of understanding "tomorrow morning" and extracting the list of action items, but it returned the results in a format our code can immediately use.
Now You Try
Here are three ways to extend the email parser.
1. Modify the parameters schema to include a new field called sentiment. It should be an enum allowing only: "Positive", "Negative", or "Neutral". Pass an angry email to the function and see if it correctly flags the sentiment.
2. Create a completely new schema for parsing an invoice. It should extract:
   * vendor_name (string)
   * total_amount (number/float)
   * invoice_date (string)
   * is_paid (boolean)
   Test it with a text string like: "Received bill from AWS for $450.20 dated Jan 12th. Not paid yet."
3. In the participants field of your email parser, update the description to say: "Extract names. If no specific names are mentioned, return an empty list." Test it with an email that says "Let's meet tomorrow" but names no one, to ensure your code handles empty lists gracefully.
Challenge Project: The Weather Assistant
In this challenge, you will build the logic for a chatbot that knows when to check the weather.
The Concept: The LLM cannot check the weather. It doesn't have internet access to real-time data. However, you can give it a "tool" called get_weather. When the user asks "What's the weather in London?", the LLM should not answer the question. Instead, it should output the arguments to call your weather function.
Your task:
* Define a tool named get_weather with two parameters: location (string) and unit (enum: "celsius", "fahrenheit").
* When the AI wants to call the function, print that fact and show the arguments it generated (e.g., {"location": "Paris", "unit": "celsius"}).
An example conversation should look like this:
User: Hi there!
AI: Hello! How can I help you today?
User: What is the weather in New York?
AI wants to call function: get_weather
Arguments: {'location': 'New York', 'unit': 'fahrenheit'}
Hint:
Unlike the email parser, do not set tool_choice to force a function. Set tool_choice="auto" (or leave it out, as it is the default). This allows the LLM to decide whether to chat or use the tool.
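The routing logic you'll need looks roughly like this. To keep the sketch runnable without an API key, the replies are plain dicts that mimic the shape of response.choices[0].message (the real SDK returns objects with the same attribute names, accessed with dots rather than brackets):

```python
import json

def handle_message(message):
    """Decide whether the assistant replied in text or asked for a tool call."""
    if message.get("tool_calls"):
        call = message["tool_calls"][0]
        args = json.loads(call["function"]["arguments"])
        return f"AI wants to call function: {call['function']['name']}\nArguments: {args}"
    return f"AI: {message['content']}"

# Small talk: the model answers directly and tool_calls is empty
chat_reply = {"content": "Hello! How can I help you today?", "tool_calls": None}
print(handle_message(chat_reply))

# Weather question: content is None, tool_calls carries the arguments
tool_reply = {
    "content": None,
    "tool_calls": [
        {"function": {"name": "get_weather",
                      "arguments": '{"location": "New York", "unit": "fahrenheit"}'}}
    ],
}
print(handle_message(tool_reply))
```

The key point: with tool_choice="auto", your code must check which branch the model took on every turn, because either one is possible.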
What You Learned
Today you bridged the gap between "text" and "data."
* Function Calling: You learned that OpenAI models can be programmed to output JSON arguments instead of conversation.
* Schemas: You learned that defining strict parameters (like enum or required fields) forces the AI to be precise.
* Extraction: You built a system that turns messy human language into clean, database-ready variables.
Why This Matters: This is the foundation of AI Agents. An Agent is simply an LLM that has access to many tools (calculator, calendar, email sender) and decides which ones to use and when. You have just built the mechanism that allows the AI to press buttons in your software.
Tomorrow: We are going to tackle speed. Waiting for the full AI response can feel slow. Tomorrow, we learn Streaming, so your applications feel instant and responsive.