In my previous post, I wrote about using OpenAI’s Functions API to force their large language models (LLMs) to return structured, machine-readable responses. The natural next step for me was to see what it’s like to actually call the functions the model wants called.
To try this out, I made a module that emulates a portion of the API that might be exposed by a calendar service, and asked the model to help me with some simple scheduling tasks. It gave me a better idea of how OpenAI’s plugin system (probably) works under the hood, and I was impressed by how close these simple API parameters come to what people have taken to calling “Agents” in the AI space.
Using the Agent
Let’s see what this thing does before looking at the code.
I wrote my code in Elixir and used a Livebook to interact with it, so there’s no fancy web demo, but you should be able to get the idea. I added some IO.puts calls to show what’s happening when you invoke the agent with a prompt:
history = CalendarAgent.run("Can you tell me what I have scheduled for tomorrow?")
FUNCTION CALL: get_current_date(%{})
FUNCTION CALL: get_scheduled_events(%{"date" => "2023-07-21"})

Tomorrow, you have the following events scheduled:

Project standup at 10:00 AM for 30 minutes.
Pair Programming with Sue at 10:30 AM for 60 minutes.
Focus time: writing a blog post at 1:30 PM for 120 minutes.

Please let me know if there's anything else I can help with!
The model understood that “tomorrow” was a piece of info it didn’t have, but could get by calling the get_current_date() function, so that’s the first thing it does. Once it has the date, it makes a second function call to fetch that day’s schedule. With that info, it’s ready to provide a response.
The history value now contains the full conversation, so I can continue by running the agent with the history and a new instruction:
history = CalendarAgent.run(
history,
"""
Can you schedule me for a 2hr lunch with Jamie
immediately after my pairing session with Sue?
"""
)
FUNCTION CALL: schedule_event(%{"datetime" => "2023-07-21T11:30:00", "duration_minutes" => 120, "title" => "Lunch with Jamie"})

I have scheduled a 2-hour lunch with Jamie for tomorrow at 11:30 AM. It will start immediately after your pairing session with Sue. If you need any further assistance, feel free to ask!
Because we passed in the conversation history, the model understood we were still talking about “tomorrow” and knew when to schedule the appointment. It seems to have gotten the details correct, but we also still have the history output, so we could conceivably ask it to make adjustments if we don’t like the outcome.
The Code
The code consists of a couple of pieces:

- The core module, OpenAIAgent, that knows how to interact with OpenAI’s API and call functions as requested
- The plugin module, CalendarAgent.Functions, that exposes a certain set of functionality to the agent
- I defined an OpenAIAgent.Functions behaviour and allowed new agents to be created by use-ing OpenAIAgent and providing a module whose functions should be exposed (see below). It felt a little silly, but it drives home how pluggable this agent logic is. All you need to create a new agent is a new set of functions and a specification (see the sketch just after this list).
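To illustrate that pluggability, here’s a minimal sketch of what a second agent could look like. The WeatherAgent and WeatherAgent.Functions modules are hypothetical, invented just for this example:

defmodule WeatherAgent do
  use OpenAIAgent, functions_module: WeatherAgent.Functions
end

defmodule WeatherAgent.Functions do
  @behaviour OpenAIAgent.Functions

  # One callable function, specified in the same OpenAI function-spec
  # format used by CalendarAgent.Functions below.
  @function_spec [
    %{
      name: "get_forecast",
      description: "Get the weather forecast for a city.",
      parameters: %{
        type: "object",
        properties: %{
          city: %{type: "string", description: "The city to fetch a forecast for"}
        },
        required: ["city"]
      }
    }
  ]

  def spec, do: @function_spec

  # Stubbed implementation, mirroring the calendar module's approach.
  def get_forecast(%{"city" => _} = params) do
    IO.puts("FUNCTION CALL: get_forecast(#{inspect(params)})")
    %{conditions: "sunny", high_f: 75}
  end
end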
I’ll start with the high-level code and work up to the agent itself.
CalendarAgent Module
This is what we interact with to have a chat that has access to the calendar functions, and there’s nothing to it. It just says, “I’m an agent and these are my functions.”
defmodule CalendarAgent do
use OpenAIAgent, functions_module: CalendarAgent.Functions
end
OpenAIAgent.Functions Behaviour
The key to function calling with the OpenAI API is passing a JSON array that defines all the functions the model is allowed to call. So the one callback we need, spec/0, provides that specification.
defmodule OpenAIAgent.Functions do
@doc """
The specification for callable functions provided by this module.
Every function specified must be a public function in this module,
and the specification should follow the format required by OpenAI:
https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions
"""
@callback spec() :: list(map())
end
The Calendar-Specific Functions
To test out this flow, I made a module that exposes a partial calendar app API. But instead of actually doing anything, it logs the function name and args and returns stub responses.
It consists of the spec function returning the JSON specification of the module’s public API, as well as the implementation of each of those functions.
defmodule CalendarAgent.Functions do
@behaviour OpenAIAgent.Functions
@function_spec [
%{
name: "get_current_date",
description: "Get the current date in the user's default timezone.",
parameters: %{
type: "object",
properties: %{},
required: []
}
},
%{
name: "get_scheduled_events",
description:
"Get the events on the user's calendar for the specified day. Uses the user's default timezone.",
parameters: %{
type: "object",
properties: %{
date: %{
type: "string",
description:
"The date on which to fetch events from the calendar, formatted yyyy-MM-dd"
}
},
required: [
"date"
]
}
},
%{
name: "schedule_event",
description:
"Schedule an event at the given date and time, in the user's default timezone.",
parameters: %{
type: "object",
properties: %{
datetime: %{
type: "string",
description: "The date and time to schedule the event, formatted yyyy-MM-ddTHH:mm:ss"
},
duration_minutes: %{
type: "integer",
description: "The length of the event, in minutes"
},
title: %{
type: "string",
description: "The name of the event, as it will be displayed on the calendar"
}
},
required: [
"datetime",
"duration_minutes",
"title"
]
}
}
]
def spec, do: @function_spec
def get_current_date(%{} = params) do
IO.puts("FUNCTION CALL: get_current_date(#{inspect(params)})")
Date.to_iso8601(Date.utc_today())
end
def get_scheduled_events(%{"date" => _} = params) do
IO.puts("FUNCTION CALL: get_scheduled_events(#{inspect(params)})")
[
%{datetime: "2023-07-20T10:00:00", duration_minutes: 30, title: "Project standup"},
%{
datetime: "2023-07-20T10:30:00",
duration_minutes: 60,
title: "Pair Programming with Sue"
},
%{
datetime: "2023-07-20T13:30:00",
duration_minutes: 120,
title: "Focus time: writing a blog post"
}
]
end
def schedule_event(
%{"datetime" => _datetime, "duration_minutes" => _duration, "title" => _title} = params
) do
IO.puts("FUNCTION CALL: schedule_event(#{inspect(params)})")
"OK"
end
end
Agent Logic
The real work is done here. The module consists of a __using__ macro that allows the functions module to be passed in and set as a module attribute. Then it defines run/1 and run/2 functions to interact with the OpenAI API with or without a list of previous messages.
The core of that logic is a call to Enum.reduce_while/3 that accumulates chat history. On a particular iteration, if the model asks to call a function, we call that function, add the response to the history, and move on to the next iteration. If the model is done calling functions and responds with a text reply, the reduce halts. The complete history is returned so we can continue the chat.
Error handling code is left out so you can more clearly see the flow (I’ll sketch what it could look like after the code), and the final response is sent to the console for simplicity.
defmodule OpenAIAgent do
defmacro __using__(opts) do
quote bind_quoted: [opts: opts] do
functions_module =
opts[:functions_module] || raise "The functions_module: option must be supplied!"
model = opts[:model] || "gpt-3.5-turbo-0613"
@functions_module functions_module
@model model
def run(history, prompt) when is_list(history) and is_binary(prompt) do
run(history ++ [%{role: "user", content: prompt}])
end
def run(prompt) when is_binary(prompt) do
run([], prompt)
end
def run(history) when is_list(history) do
[nil]
|> Stream.cycle()
|> Enum.reduce_while(history, fn nil, messages ->
{:ok, %{choices: [%{"message" => response}]}} =
OpenAI.chat_completion(
model: @model,
functions: @functions_module.spec(),
stream: false,
messages: messages
)
if response["function_call"] do
%{"name" => function_name, "arguments" => raw_args} = response["function_call"]
function_response =
apply(
@functions_module,
String.to_existing_atom(function_name),
[Jason.decode!(raw_args)]
)
{:cont,
messages ++
[
response,
%{
role: "function",
name: function_name,
content: Jason.encode!(function_response)
}
]}
else
IO.puts(response["content"])
{:halt, messages ++ [response]}
end
end)
end
end
end
end
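Since the error handling is elided above, here’s a minimal sketch of one way to add some back in. This hypothetical chat_completion_or_error helper (my name, not part of the code I actually ran) could live inside the __using__ block alongside run/1:

# Wrap the API call so a failure becomes a value run/1 can handle,
# rather than crashing on the happy-path pattern match.
defp chat_completion_or_error(messages) do
  case OpenAI.chat_completion(
         model: @model,
         functions: @functions_module.spec(),
         stream: false,
         messages: messages
       ) do
    {:ok, %{choices: [%{"message" => response}]}} -> {:ok, response}
    {:error, reason} -> {:error, reason}
  end
end

run/1 could then match on the result inside the reduce, halting with something like {:halt, {:error, reason, messages}} when the API call fails.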
Chat History
The returned history tells us everything we might need to know about what happened in the chat, and allows us to resume the chat at any time. Looking at it, we can see the user prompts, each function call, the function responses, and the model’s responses to the user.
[
%{content: "Can you tell me what I have scheduled for tomorrow?", role: "user"},
%{
"content" => nil,
"function_call" => %{"arguments" => "{}", "name" => "get_current_date"},
"role" => "assistant"
},
%{content: "\"2023-07-19\"", name: "get_current_date", role: "function"},
%{
"content" => nil,
"function_call" => %{
"arguments" => "{\n \"date\": \"2023-07-20\"\n}",
"name" => "get_scheduled_events"
},
"role" => "assistant"
},
%{
content: "[{\"datetime\":\"2023-07-20T10:00:00\",\"duration_minutes\":30,\"title\":\"Project standup\"},{\"datetime\":\"2023-07-20T10:30:00\",\"duration_minutes\":60,\"title\":\"Pair Programming with Sue\"},{\"datetime\":\"2023-07-20T13:30:00\",\"duration_minutes\":120,\"title\":\"Focus time: writing a blog post\"}]",
name: "get_scheduled_events",
role: "function"
},
%{
"content" => "On tomorrow, which is July 20th, you have the following events scheduled:\n\n1. Project standup from 10:00 AM to 10:30 AM (30 minutes)\n2. Pair Programming with Sue from 10:30 AM to 11:30 AM (1 hour)\n3. Focus time: writing a blog post from 1:30 PM to 3:30 PM (2 hours)",
"role" => "assistant"
},
%{
content: "Can you schedule me for a 2hr lunch with Jamie\nimmediately after my pairing session with Sue?\n",
role: "user"
},
%{
"content" => nil,
"function_call" => %{
"arguments" => "{\n \"datetime\": \"2023-07-20T11:30:00\",\n \"duration_minutes\": 120,\n \"title\": \"Lunch with Jamie\"\n}",
"name" => "schedule_event"
},
"role" => "assistant"
},
%{content: "\"ok\"", name: "schedule_event", role: "function"},
%{
"content" => "Sure! I have scheduled a 2-hour lunch with Jamie immediately after your pairing session with Sue. The lunch will start at 11:30 AM and end at 1:30 PM.",
"role" => "assistant"
},
%{content: "That's all for now, thank you!", role: "user"},
%{
"content" => "You're welcome! If you have any more questions, feel free to ask. Have a great day!",
"role" => "assistant"
}
]
Room for Improvement?
Overall, I’m really impressed by how this works, though I did run into an interesting problem on some of my attempts:
When asked to schedule the meeting without first asking about the schedule (“Can you schedule me for a 2hr lunch with Jamie tomorrow, immediately after my pairing session with Sue?”), GPT-3.5 sometimes tried to use a Python interpreter as a function, passing in Python code to execute:
{
"function_call": {
"name": "python",
"arguments": "import datetime\n\ncurrent_date = datetime.datetime.now().date()\ncurrent_date"
},
...
}
It’s a bit disheartening to see our smart assistant making up its own way to do something that’s pretty obviously accomplished by one of the functions I provided in the functions parameter of the API call. It’s not too hard to catch this, though turning it into a pleasant user experience might be a little tougher.
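Catching it could be as simple as validating the requested name against the spec before dispatching. Here’s a hedged sketch of how the apply/3 call in run/1 might be guarded; the allowed-names check and error message are my own additions:

# Only dispatch function names that appear in the module's spec,
# and feed an error back to the model for anything else (like "python").
allowed_names = Enum.map(@functions_module.spec(), & &1.name)

function_response =
  if function_name in allowed_names do
    apply(
      @functions_module,
      String.to_existing_atom(function_name),
      [Jason.decode!(raw_args)]
    )
  else
    %{error: "Unknown function: #{function_name}"}
  end

Since the result is encoded and appended as a role: "function" message either way, the model gets a chance to retry with a function that actually exists.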
I didn’t get any of these made-up function calls from GPT-4, so maybe this is the kind of use case where it significantly exceeds its predecessors. One thing GPT-4 did on some invocations, but not all, was to confirm with the user once it had chosen the start and end times for the meeting. That’s cool in some ways, but the inconsistency could present difficulties when trying to add this kind of functionality to an interface other than chat.
In the end, I’m glad to have this in my toolbox, and even more excited to explore how (or whether) similar functionality might be added to some of the popular open source models.