Chat Interactions

QueryMT provides comprehensive support for chat-based interactions with Large Language Models, enabling you to build conversational AI applications.

Key Components

  • querymt::chat::ChatMessage: Represents a single message in a conversation. Key attributes include:

    • role: Indicates who sent the message (querymt::chat::ChatRole::User or querymt::chat::ChatRole::Assistant).
    • message_type: Specifies the nature of the content (e.g., querymt::chat::MessageType::Text, Image, ToolUse, ToolResult).
    • content: The primary text content of the message.
    • Source: crates/querymt/src/chat/mod.rs
  • querymt::chat::ChatResponse: A trait representing the LLM's response to a chat request. It provides methods to access:

    • text(): The textual content of the LLM's reply.
    • tool_calls(): A list of querymt::ToolCall objects if the LLM decided to use one or more tools.
    • thinking(): Optional "thoughts" or reasoning steps from the model, if supported and enabled.
    • usage(): Optional token usage information (querymt::Usage).
    • Source: crates/querymt/src/chat/mod.rs
  • querymt::chat::BasicChatProvider: A trait that LLM providers implement to support fundamental chat functionality. It has a single method:

    • chat(&self, messages: &[ChatMessage]): Sends a list of messages to the LLM and returns a ChatResponse (see the sketch after this list).
    • Source: crates/querymt/src/chat/mod.rs
  • querymt::chat::ToolChatProvider: Extends BasicChatProvider to include support for tools (function calling). It has one primary method:

    • chat_with_tools(&self, messages: &[ChatMessage], tools: Option<&[Tool]>): Sends messages along with a list of available tools the LLM can use.
    • Source: crates/querymt/src/chat/mod.rs
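
Putting these components together, here is a minimal single-turn sketch using BasicChatProvider. It assumes that ChatMessage::user() returns a ChatMessageBuilder with content(...) and build() methods (the same builder pattern used in the tool-calling example later on this page) and that text() and usage() return Options; consult crates/querymt/src/chat/mod.rs for the exact signatures.

use querymt::chat::{ChatMessage, ChatResponse};
use querymt::LLMProvider;

async fn ask_once(llm_provider: &dyn LLMProvider, question: &str) -> anyhow::Result<String> {
    // Build a one-message history. ChatMessage::user() is assumed to return
    // a ChatMessageBuilder, mirroring the ChatMessage::assistant() usage in
    // the tool-calling example below.
    let messages = vec![ChatMessage::user().content(question.to_string()).build()];

    // BasicChatProvider::chat sends the history and yields a ChatResponse.
    let response = llm_provider.chat(&messages).await?;

    // Report token usage when the provider supplies it (assumes Usage: Debug).
    if let Some(usage) = response.usage() {
        println!("token usage: {:?}", usage);
    }

    Ok(response.text().unwrap_or_default())
}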

How It Works

  1. Construct Messages: Your application assembles a sequence of ChatMessage objects representing the conversation history using the querymt::chat::ChatMessageBuilder. The history typically alternates between User and Assistant messages.
  2. Initiate Chat: You call the chat or chat_with_tools method on an LLMProvider instance, passing the message history and, optionally, a list of available tools.
  3. Provider Interaction: The LLMProvider (or its underlying implementation like HTTPLLMProvider) formats the request according to the specific LLM's API, sends it, and receives the raw response.
  4. Parse Response: The provider parses the raw response into an object implementing ChatResponse.
  5. Handle Response: Your application processes the ChatResponse:
    • If text() returns a value, it is the LLM's textual reply.
    • If tool_calls() is present, the LLM wants to execute one or more functions. Your application needs to:
      • Execute these functions.
      • Send the results back to the LLM as new ChatMessages (typically with MessageType::ToolResult).
      • Continue the chat loop.

Example Flow (Conceptual)

use querymt::chat::{ChatMessage, ChatResponse, Tool};
use querymt::{FunctionCall, LLMProvider, ToolCall};
use serde_json::Value;

// Assuming `llm_provider` is an LLMProvider with its tools registered
// and `my_tools` is a slice of Tool values describing them.

async fn handle_tool_calling_loop(
    llm_provider: &dyn LLMProvider,
    initial_messages: Vec<ChatMessage>,
    my_tools: &[Tool],
) -> anyhow::Result<String> {
    let mut messages = initial_messages;

    loop {
        let response = llm_provider.chat_with_tools(&messages, Some(my_tools)).await?;

        // Add the assistant's response to the history. It might contain text and/or tool calls.
        messages.push(
            ChatMessage::assistant()
                .content(response.text().unwrap_or_default())
                .tool_use(response.tool_calls().unwrap_or_default()) // This will be empty if no tools were called
                .build(),
        );

        if let Some(tool_calls) = response.tool_calls() {
            if tool_calls.is_empty() {
                // No tool calls, so the text response is the final answer.
                return Ok(response.text().unwrap_or_default());
            }

            // The model wants to call one or more tools.
            let mut tool_results = Vec::new();
            for call in tool_calls {
                println!("LLM wants to call tool: {} with args: {}", call.function.name, call.function.arguments);

                // In a real app, you would dispatch to your tool execution logic here.
                // The `LLMProvider` trait has a `call_tool` helper for this.
                let args: Value = serde_json::from_str(&call.function.arguments)?;
                let result_str = llm_provider.call_tool(&call.function.name, args).await?;

                // Create a ToolCall struct containing the result.
                // Note: The result is placed in the `arguments` field for transport.
                let result_call = ToolCall {
                    id: call.id,
                    call_type: "function".to_string(),
                    function: FunctionCall {
                        name: call.function.name,
                        arguments: result_str,
                    },
                };
                tool_results.push(result_call);
            }

            // Add the tool results back to the conversation history.
            messages.push(ChatMessage::user().tool_result(tool_results).build());
            // Loop again to let the model process the tool results.
        } else {
            // No tool calls in the response, so the text is the final answer.
            return Ok(response.text().unwrap_or_default());
        }
    }
}
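
A design note on the loop above: the assistant's reply, including its tool calls, is appended to the history before the tool results are sent back. Most chat APIs require the assistant message that requested a tool to precede the matching ToolResult message, so preserving this ordering keeps the transcript valid across iterations.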

QueryMT's chat system is designed to be flexible, supporting simple Q&A, complex multi-turn dialogues, and sophisticated interactions involving external tools. The Tool and ToolChoice mechanisms provide fine-grained control over how LLMs can utilize functions.
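
As a closing illustration, the sketch below shows the general shape of a tool description. The field names (name, description, parameters) are hypothetical, following the JSON-schema convention most function-calling APIs use; consult crates/querymt/src/chat/mod.rs for the actual Tool and ToolChoice definitions before relying on this shape.

use querymt::chat::Tool;
use serde_json::json;

// HYPOTHETICAL field names, shown only to illustrate what a function-calling
// tool description typically carries; the real `Tool` struct may differ.
fn weather_tool() -> Tool {
    Tool {
        name: "get_weather".to_string(),
        description: "Look up the current weather for a given city".to_string(),
        // JSON Schema describing the arguments the model should supply.
        parameters: json!({
            "type": "object",
            "properties": {
                "city": { "type": "string", "description": "City name" }
            },
            "required": ["city"]
        }),
    }
}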