Generate Assistant Message
TaskingAI provides the generate_message method to create responses from an AI assistant within a chat session.
This method is crucial for simulating real-time interactions between the user and the assistant, offering flexibility in response generation.
Normal Usage
This is the basic use case where the assistant generates a response based on the current chat context.
import taskingai

assistant_message = taskingai.assistant.generate_message(
    assistant_id="YOUR_ASSISTANT_ID",
    chat_id="YOUR_CHAT_ID",
)
The returned Message object contains the response generated by the assistant, with its role set to assistant.
Note that you can only generate a response when the last message in the chat session is from the user.
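Because of this rule, it can help to guard the call with a check on the chat history. The sketch below models messages as plain dicts purely for illustration; the real SDK returns Message objects, and how you list them is not shown here.

```python
# Illustrative check of the "last message must be from the user" rule.
# Messages are modeled as plain dicts; the real SDK uses Message objects
# with a role attribute.

def can_generate(messages):
    """Return True when it is valid to call generate_message."""
    return bool(messages) and messages[-1]["role"] == "user"

chat_history = [
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello, how can I help?"},
]
print(can_generate(chat_history))   # False: last message is from the assistant

chat_history.append({"role": "user", "content": "Recommend a museum."})
print(can_generate(chat_history))   # True: safe to call generate_message
```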
Generating with Variables
System prompt variables offer a way to dynamically alter the behavior of the assistant's responses. These variables can be included in the assistant's system prompt template and are replaced with actual values during message generation.
Setting Up the Assistant
First, create an assistant with a system prompt template that incorporates some variables:
assistant = taskingai.assistant.create_assistant(
    model_id="YOUR_MODEL_ID",
    memory={"type": "naive"},
    system_prompt_template=[
        "You are a virtual travel guide, providing information in {{language}} about places in {{country}}."
    ]
)
In this example, the system prompt template uses two variables, {{language}} and {{country}}, which allow the assistant to respond as a travel guide in the specified language and for the specified country.
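Conceptually, variable substitution works like a simple template fill: each {{name}} placeholder in the prompt is replaced by the value supplied at generation time. A minimal sketch of that mechanism (not the SDK's internal implementation):

```python
import re

def fill_template(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

template = ("You are a virtual travel guide, providing information "
            "in {{language}} about places in {{country}}.")
print(fill_template(template, {"language": "Spanish", "country": "Spain"}))
# → You are a virtual travel guide, providing information in Spanish about places in Spain.
```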
Generating the Assistant Message
When generating a message, you can pass specific values for these variables:
assistant_message = taskingai.assistant.generate_message(
    assistant_id="YOUR_ASSISTANT_ID",
    chat_id="YOUR_CHAT_ID",
    system_prompt_variables={
        "language": "Spanish",
        "country": "Spain"
    }
)
With these values, the assistant's response is tailored to provide travel information about Spain, written in Spanish.
Generate in Stream Mode
Stream mode is used when you want to receive the assistant's response in chunks, which is useful for long or continuous outputs.
To enable stream mode, set the stream parameter to True when calling the generate_message method.
Code example:
assistant_message_response = taskingai.assistant.generate_message(
    assistant_id="YOUR_ASSISTANT_ID",
    chat_id="YOUR_CHAT_ID",
    system_prompt_variables={
        "language": "English",
        "country": "Australia"
    },
    stream=True,
)

for item in assistant_message_response:
    if hasattr(item, 'delta'):
        # print each message chunk content
        print(f"[{item.index}]: {item.delta}")
Each MessageChunk object contains a delta attribute, which holds a fragment of the assistant's response. Printing the chunks as they arrive produces a streaming effect.
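When you need the complete reply as well as the streaming effect, you can accumulate the delta fragments while printing them. A minimal sketch, using stand-in chunk objects rather than a live API call:

```python
from dataclasses import dataclass

@dataclass
class MessageChunk:
    """Stand-in for the SDK's MessageChunk: an index and a text fragment."""
    index: int
    delta: str

def collect_stream(chunks):
    """Print each fragment as it arrives and return the full response text."""
    parts = []
    for item in chunks:
        if hasattr(item, "delta"):
            print(f"[{item.index}]: {item.delta}")
            parts.append(item.delta)
    return "".join(parts)

stream = [MessageChunk(0, "Sydney "), MessageChunk(1, "is "), MessageChunk(2, "lovely.")]
full_text = collect_stream(stream)
print(full_text)  # → Sydney is lovely.
```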