# Tools & Function Calling
One of the powerful features of modern Large Language Models is their ability to use "tools" or "call functions." This allows LLMs to interact with external systems, APIs, or data sources to gather information or perform actions, making them much more capable and grounded in real-world data. QueryMT provides robust support for defining and using tools.
## Key Concepts
- `querymt::chat::Tool`: A struct representing a tool that the LLM can use. It primarily describes a function.
  - `tool_type`: Currently, this is typically `"function"`.
  - `function`: A `querymt::chat::FunctionTool` detailing the function.
  - Source: `crates/querymt/src/chat/mod.rs`
- `querymt::chat::FunctionTool`: Describes a specific function the LLM can call.
  - `name`: The name of the function.
  - `description`: A natural-language description of what the function does, its parameters, and when to use it. This is crucial for the LLM to understand the tool's purpose.
  - `parameters`: A `serde_json::Value` defining the expected input arguments for the function, typically in JSON Schema format.
  - Source: `crates/querymt/src/chat/mod.rs`
- `querymt::ToolCall`: When an LLM decides to use a tool, its response will include one or more `ToolCall` objects.
  - `id`: A unique ID for this specific tool call instance.
  - `call_type`: Usually `"function"`.
  - `function`: A `querymt::FunctionCall`.
  - Source: `crates/querymt/src/lib.rs`
- `querymt::FunctionCall`: Details of the function the LLM wants to invoke (see the parsing sketch after this list).
  - `name`: The name of the function to call.
  - `arguments`: A string containing the arguments for the function, typically as a JSON object.
  - Source: `crates/querymt/src/lib.rs`
- `querymt::chat::ToolChoice`: An enum that allows you to specify how the LLM should use the provided tools.
  - `Auto`: The model can choose to call a tool or not (default).
  - `Any`: The model must call at least one of the available tools.
  - `Tool(name)`: The model must call the specific tool with the given name.
  - `None`: The model is forbidden from calling any tools.
  - Source: `crates/querymt/src/chat/mod.rs`
- `querymt::tool_decorator::CallFunctionTool`: A trait that your host-side Rust code must implement for each function you want to make available to the LLM.
  - `descriptor()`: Returns the `Tool` definition (schema) for this function.
  - `call(&self, args: Value)`: The actual Rust async function that gets executed when the LLM calls this tool. It receives parsed JSON arguments and should return a string result.
  - Source: `crates/querymt/src/tool_decorator.rs`
- `querymt::tool_decorator::ToolEnabledProvider`: A decorator struct that wraps an `LLMProvider`. When you register tools using `LLMBuilder::add_tool()`, the builder automatically wraps the base provider with `ToolEnabledProvider`. This wrapper manages the registered tools and handles the two-way communication:
  - It passes the tool descriptors to the LLM during a `chat_with_tools` call.
  - If the LLM responds with a `ToolCall`, `ToolEnabledProvider` can dispatch the call to the appropriate `CallFunctionTool` implementation via its `call_tool` method.
  - Source: `crates/querymt/src/tool_decorator.rs`
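Because `FunctionCall.arguments` arrives as a JSON *string* rather than a parsed value, host code typically deserializes it before use. The sketch below is illustrative only: it assumes the field layout described above (`ToolCall.function.arguments`), and the `WeatherArgs` shape is hypothetical — the real shape is whatever your tool's JSON Schema declares.

```rust
use querymt::ToolCall;
use serde::Deserialize;

// Hypothetical argument shape for illustration; match this to the
// JSON Schema you declared in the tool's `parameters`.
#[derive(Deserialize)]
struct WeatherArgs {
    location: String,
}

// `FunctionCall.arguments` is a JSON string, so it must be parsed
// before your Rust function can use it.
fn parse_weather_args(call: &ToolCall) -> anyhow::Result<WeatherArgs> {
    Ok(serde_json::from_str(&call.function.arguments)?)
}
```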
## Workflow
1. **Define Tools:**
   - Implement the `CallFunctionTool` trait for each Rust function you want to expose.
   - In the `descriptor()` method, accurately describe the function's purpose and parameters using `Tool` and `FunctionTool`.
2. **Register Tools:**
   - When building your `LLMProvider` using `LLMBuilder`, use the `add_tool()` method to register instances of your `CallFunctionTool` implementations.
3. **Chat with Tools:**
   - Use the `chat_with_tools()` method on the `LLMProvider`. The `ToolEnabledProvider` (if tools were added) will automatically pass the descriptors of registered tools to the LLM (a sketch of the full loop follows this list).
   - You can use `ToolChoice` to guide the LLM's tool usage.
4. **LLM Decides to Call a Tool:**
   - The LLM, based on the conversation and tool descriptions, might decide to call one or more tools. Its response (via `ChatResponse::tool_calls()`) will contain `ToolCall` objects.
5. **Application Executes Tool:**
   - Your application receives the `ToolCall`s.
   - The `LLMProvider` itself (if it's a `ToolEnabledProvider`) can handle the dispatch via its `call_tool(name, args)` method. This involves:
     - Parsing the `arguments` string (usually JSON) into the expected types for your Rust function.
     - Calling the actual Rust function logic.
6. **Return Tool Result to LLM:**
   - For each tool call that was executed, create a corresponding `ToolCall` struct that contains the result. The result string is placed into the `function.arguments` field.
   - Construct a new `ChatMessage` using the builder: `ChatMessage::user().tool_result(vec_of_result_tool_calls).build()`.
   - Send this message (along with the conversation history) back to the LLM using `chat_with_tools()`.
7. **LLM Continues:**
   - The LLM uses the tool's output to formulate its final response or decide on further actions.
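Here is a minimal sketch of the whole loop. Several details are assumptions rather than verbatim QueryMT API, so check the crate for exact signatures: that `chat_with_tools` takes the message slice plus an optional tool list (passing `None` so the `ToolEnabledProvider` supplies the registered descriptors), that `ChatMessage::user().content(...)` and `ChatResponse::text()` exist as shown, and that `ToolCall`/`FunctionCall` can be constructed from the public fields listed under Key Concepts.

```rust
use querymt::{FunctionCall, ToolCall};
use querymt::chat::ChatMessage;
use querymt::tool_decorator::ToolEnabledProvider;

// One round trip: ask, let the model request tools, execute them
// host-side, feed the results back, and return the final answer.
// The concrete `ToolEnabledProvider` parameter type is an assumption.
async fn ask_with_tools(
    llm: &ToolEnabledProvider,
    prompt: &str,
) -> anyhow::Result<String> {
    let mut messages = vec![ChatMessage::user().content(prompt).build()];

    // First pass: the wrapper forwards the registered tool descriptors.
    let response = llm.chat_with_tools(&messages, None).await?;

    let Some(calls) = response.tool_calls() else {
        // No tool needed; the model answered directly.
        return Ok(response.text().unwrap_or_default());
    };

    // Execute each requested call and echo it back with the result
    // placed in `function.arguments`, as the workflow above describes.
    let mut results = Vec::with_capacity(calls.len());
    for call in &calls {
        let args: serde_json::Value = serde_json::from_str(&call.function.arguments)?;
        let output = llm.call_tool(&call.function.name, args).await?;
        results.push(ToolCall {
            id: call.id.clone(),
            call_type: "function".into(),
            function: FunctionCall {
                name: call.function.name.clone(),
                arguments: output,
            },
        });
    }
    messages.push(ChatMessage::user().tool_result(results).build());

    // Second pass: the model folds the tool output into its answer.
    let final_response = llm.chat_with_tools(&messages, None).await?;
    Ok(final_response.text().unwrap_or_default())
}
```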
## Example (Conceptual `CallFunctionTool` Implementation)
```rust
use querymt::tool_decorator::CallFunctionTool;
use querymt::chat::Tool;
use querymt::builder::FunctionBuilder;
use async_trait::async_trait;
use serde_json::{Value, json};

struct GetWeatherTool;

#[async_trait]
impl CallFunctionTool for GetWeatherTool {
    fn descriptor(&self) -> Tool {
        FunctionBuilder::new("get_current_weather")
            .description("Get the current weather in a given location")
            .json_schema(json!({
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                },
                "required": ["location"]
            }))
            .build()
    }

    async fn call(&self, args: Value) -> anyhow::Result<String> {
        let location = args.get("location").and_then(Value::as_str).unwrap_or_default();
        // In a real scenario, call a weather API here.
        Ok(json!({ "weather": format!("Sunny in {}", location) }).to_string())
    }
}

// To use it:
// let builder = LLMBuilder::new().provider("some_provider").add_tool(GetWeatherTool);
// let llm = builder.build(&registry)?;
// ... then use llm.chat_with_tools(...) ...
```
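Because `call()` is an ordinary async method, the tool can also be exercised directly, with no model in the loop. The test below assumes tokio as the async runtime; it uses only the trait implementation defined above.

```rust
// Direct host-side invocation of GetWeatherTool; handy for
// unit-testing the tool in isolation.
#[tokio::test]
async fn weather_tool_returns_json() -> anyhow::Result<()> {
    let tool = GetWeatherTool;
    let out = tool
        .call(serde_json::json!({ "location": "San Francisco, CA" }))
        .await?;
    assert!(out.contains("Sunny in San Francisco, CA"));
    Ok(())
}
```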
Tool usage significantly enhances the capabilities of LLMs, allowing them to perform complex tasks that require external information or actions. QueryMT's tool-calling support provides a structured way to integrate these tools into your LLM applications.