Configurations

Stream

stream is a Boolean parameter that determines whether the model returns results incrementally, chunk by chunk, or waits until the entire completion has finished and returns the result as a whole.

# Request a streaming chat completion; chunks arrive as they are generated.
response = client.chat.completions.create(
    model="YOUR_TASKINGAI_MODEL_ID",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
    stream=True,
)

# Iterate over the stream and print each chunk as it arrives.
for chunk in response:
    print(chunk)
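
For comparison, here is a minimal sketch of the non-streaming case, assuming the same client setup as above. With stream=False (the default), the call blocks until the completion is finished and returns a single response object.

# With stream=False (the default), the call returns one complete response.
response = client.chat.completions.create(
    model="YOUR_TASKINGAI_MODEL_ID",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
    stream=False,
)

# The full assistant reply is available at once.
print(response.choices[0].message.content)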

Other Configurations

Other chat completion configurations, such as temperature, max_tokens, top_p, and stop, are currently not supported in the OpenAI-compatible API. Invocations instead default to the pre-set configurations of the TaskingAI model.
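
For illustration, the sketch below passes temperature and max_tokens in the request. Based on the note above, these values are expected to be superseded by the TaskingAI model's pre-set configuration rather than applied per call; the parameter values shown are placeholders, not recommendations.

# Sketch: standard OpenAI-style sampling parameters can be included in the
# request, but per the note above their values are not applied; the
# TaskingAI model's pre-set configuration is used instead.
response = client.chat.completions.create(
    model="YOUR_TASKINGAI_MODEL_ID",
    messages=[
        {"role": "user", "content": "Tell me a short joke."},
    ],
    temperature=0.2,  # superseded by the model's pre-set configuration
    max_tokens=64,    # superseded by the model's pre-set configuration
)

print(response.choices[0].message.content)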