Async Python

TaskingAI provides asynchronous support for its APIs by utilizing Python's asyncio library, which allows for concurrent execution of code.

Async methods in TaskingAI are straightforward to use: compared with the synchronous API, you only need to prepend a_ to the method name and use the await keyword in front of the call.

Currently, async methods are supported for all endpoints, for example:

  • a_chat_completion: Asynchronous chat completion (with optional streaming).
  • a_create_assistant: Asynchronous assistant creation.
  • a_delete_action: Asynchronous action deletion.

In the following example, we demonstrate the usage of TaskingAI's async API to perform different tasks. By executing asynchronously, we can handle multiple tasks concurrently, significantly improving performance and response times.
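As a sketch of that concurrency benefit, several async calls can be awaited at once with asyncio.gather. The coroutine below is a stand-in for a real TaskingAI call such as a_get_assistant (the sleep simulates network latency); the pattern, not the function, is the point:

```python
import asyncio
import time

# Stand-in for an async API call (e.g. a_get_assistant); the 0.1 s
# sleep simulates network latency.
async def fake_api_call(name: str) -> str:
    await asyncio.sleep(0.1)
    return f"result for {name}"

async def main() -> list[str]:
    start = time.perf_counter()
    # Launch three "API calls" concurrently; total wall time stays close
    # to a single call's latency rather than three times it.
    results = await asyncio.gather(
        fake_api_call("assistant-1"),
        fake_api_call("assistant-2"),
        fake_api_call("assistant-3"),
    )
    print(f"3 calls in {time.perf_counter() - start:.2f}s")
    return results

results = asyncio.run(main())
```

With real a_* methods the structure is identical: build the coroutines, then await them together with asyncio.gather.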

Example Usage 1

For a simple example, we can retrieve an assistant's information either synchronously or asynchronously:

# Sync API
from taskingai.assistant import get_assistant
assistant1 = get_assistant(assistant_id="YOUR_ASSISTANT_ID")

# Async API
from taskingai.assistant import a_get_assistant
assistant2 = await a_get_assistant(assistant_id="YOUR_ASSISTANT_ID")

Note that Async API calls require an event loop to run, which is typically provided by asyncio.run in a standard Python script. For web and server environments, the event loop is usually handled by the framework (e.g., FastAPI, AIOHTTP).

For example:

import asyncio

async def test_async_get_assistant():
    assistant = await a_get_assistant(assistant_id="YOUR_ASSISTANT_ID")
    print(assistant)

asyncio.run(test_async_get_assistant())

Example Usage 2

Here's another example of using the async API to start a streaming chat completion task:

# Start an async chat completion task with streaming
from taskingai.inference import a_chat_completion
from taskingai.inference import UserMessage, SystemMessage, ChatCompletionChunk

chat_completion = await a_chat_completion(
    model_id="YOUR_MODEL_ID",
    messages=[
        SystemMessage("You are a professional assistant."),
        UserMessage("Count from 1 to 50"),
    ],
    stream=True,
)

# Asynchronously process the stream of responses
async for chunk in chat_completion:
    if isinstance(chunk, ChatCompletionChunk):
        print(chunk.delta, end="", flush=True)
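
Because async for is only valid inside a coroutine, the streaming loop above must itself run under an event loop in a standard script. A minimal sketch of that pattern, using a hypothetical async generator in place of the real a_chat_completion stream:

```python
import asyncio

# Hypothetical stand-in for the chunk stream returned by a_chat_completion
# with stream=True; each yielded item plays the role of a text delta.
async def fake_stream():
    for piece in ["1 ", "2 ", "3"]:
        await asyncio.sleep(0)  # yield control, as a real network read would
        yield piece

async def main() -> str:
    text = ""
    # Consume the stream chunk by chunk, mirroring the async for loop above
    async for delta in fake_stream():
        print(delta, end="", flush=True)
        text += delta
    return text

full_text = asyncio.run(main())
```

Swapping the mock generator for the object returned by a_chat_completion gives the same structure with real chunks.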