Async Python

TaskingAI provides asynchronous support for its APIs by utilizing Python's asyncio library, which allows for concurrent execution of code.

Async methods in TaskingAI are straightforward to use: compared with the synchronous API, you only need to prefix the method name with a_ and invoke it with the await keyword.

Currently, async methods are supported for all endpoints, for example:

  • a_chat_completion: Asynchronous chat completion, with optional streaming.
  • a_create_chat: Asynchronous chat creation.

In the following example, we demonstrate the usage of TaskingAI's async API to perform different tasks. By executing asynchronously, we can handle multiple tasks concurrently, significantly improving performance and response times.

Example Usage 1

For a simple example, we can retrieve chat information either synchronously or asynchronously:

# Sync API
from taskingai.assistant import get_chat
chat1 = get_chat(assistant_id="YOUR_ASSISTANT_ID", chat_id="YOUR_CHAT_ID")

# Async API
from taskingai.assistant import a_get_chat
chat2 = await a_get_chat(assistant_id="YOUR_ASSISTANT_ID", chat_id="YOUR_CHAT_ID")

Note that async API calls require an event loop to run, which is typically provided by asyncio.run in a standard Python script. In web and server environments, the event loop is usually managed by the framework (e.g., FastAPI, AIOHTTP).

For example:

import asyncio
from taskingai.assistant import a_get_chat

async def test_async_get_chat():
    chat = await a_get_chat(assistant_id="YOUR_ASSISTANT_ID", chat_id="YOUR_CHAT_ID")

asyncio.run(test_async_get_chat())
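
The concurrency benefit described earlier comes from running several awaitables at once, for example with asyncio.gather. The sketch below uses a placeholder coroutine (with asyncio.sleep simulating network latency) in place of a real TaskingAI call such as a_get_chat, so it can run without an API key; substitute your own async API calls in practice.

```python
import asyncio

async def fetch_chat(chat_id: str) -> dict:
    # Placeholder for an async TaskingAI call such as a_get_chat;
    # the sleep stands in for network latency.
    await asyncio.sleep(0.1)
    return {"chat_id": chat_id}

async def main() -> list:
    # All three "requests" run concurrently, so the total wall time
    # is roughly one latency period rather than three.
    return await asyncio.gather(*(fetch_chat(f"chat_{i}") for i in range(3)))

results = asyncio.run(main())
print(results)
```

Because asyncio.gather preserves argument order, the results come back in the same order the coroutines were submitted, regardless of which finished first.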

Example Usage 2

Here's another example of using the async API to start a streaming chat completion task:

from taskingai.inference import a_chat_completion, UserMessage, SystemMessage, ChatCompletionChunk

# Start an async chat completion task with streaming
chat_completion = await a_chat_completion(
    model_id="YOUR_MODEL_ID",
    messages=[
        SystemMessage("You are a professional assistant."),
        UserMessage("Count from 1 to 50"),
    ],
    stream=True
)

# Asynchronously process the stream of responses
async for chunk in chat_completion:
    if isinstance(chunk, ChatCompletionChunk):
        print(chunk.delta, end="", flush=True)
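
Instead of printing each delta as it arrives, you can accumulate the chunks into the full response text. The sketch below substitutes a hypothetical async generator (fake_stream) for the async iterator that a_chat_completion returns with stream=True, so the accumulation pattern itself can run standalone.

```python
import asyncio

async def fake_stream():
    # Stands in for the async iterator returned by
    # a_chat_completion(..., stream=True); each yielded string
    # plays the role of a chunk's delta text.
    for delta in ["1, ", "2, ", "3"]:
        await asyncio.sleep(0)  # yield control, as real I/O would
        yield delta

async def collect(stream) -> str:
    # Accumulate every delta, then join into the complete reply.
    parts = []
    async for delta in stream:
        parts.append(delta)
    return "".join(parts)

full_text = asyncio.run(collect(fake_stream()))
print(full_text)  # 1, 2, 3
```

The same async for loop works unchanged on the real stream; only the source of the chunks differs.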