LocalAI
If your TaskingAI server is deployed locally with Docker, and the target model is also running in your local environment, LOCALAI_HOST should start with http://host.docker.internal:port instead of http://localhost:port. Replace port with your actual port number.
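As a quick sanity check, the sketch below probes both address forms and reports which one reaches LocalAI from where your TaskingAI server runs. It assumes LocalAI listens on port 8090 and exposes the standard OpenAI-compatible /v1/models route; adjust the port to match your deployment.

```python
import requests

# Assumed candidate base URLs; adjust the port to match your deployment.
CANDIDATES = [
    "http://localhost:8090",             # works when TaskingAI runs directly on the host
    "http://host.docker.internal:8090",  # use this when TaskingAI runs inside Docker
]

def reachable(base_url: str) -> bool:
    """Return True if LocalAI's OpenAI-compatible /v1/models endpoint responds."""
    try:
        return requests.get(f"{base_url}/v1/models", timeout=5).ok
    except requests.RequestException:
        return False

for url in CANDIDATES:
    print(f"{url} -> {'reachable' if reachable(url) else 'unreachable'}")
```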
LocalAI is a free, open-source alternative to OpenAI. It acts as a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing.
Prerequisites
To integrate a model running on LocalAI with TaskingAI, you need a valid LocalAI service first. To get started, please visit LocalAI's website, or follow the simple instructions in the Quick Start section below.
Required credentials:
- LOCALAI_HOST: Your LocalAI host URL.
Supported Models:
Wildcard
- Model schema id: localai/wildcard
Quick Start
Deploy the LocalAI service to your local environment
- Download and install Docker.
- Start the service with the docker command:
docker run -ti -p 8090:8090 --gpus all localai/localai:v2.9.0-cublas-cuda11-core <model_name>
In this way, a model will be running on your localhost at port 8090, and you can access it by sending a POST request to http://localhost:8090 (see the example request below). For a detailed model list, please check LocalAI Models.
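For example, the following request exercises LocalAI's OpenAI-compatible chat completions endpoint. This is a minimal sketch assuming the port mapping from the command above; replace <model_name> with the model you launched the container with.

```python
import requests

LOCALAI_HOST = "http://localhost:8090"  # the port mapped in the docker run command

payload = {
    "model": "<model_name>",  # the same model name used when starting the container
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "temperature": 0.7,
}

# LocalAI mirrors OpenAI's POST /v1/chat/completions route.
resp = requests.post(f"{LOCALAI_HOST}/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```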
Integrate LocalAI with TaskingAI
Now that you have a running LocalAI service with your desired model, you can integrate it into TaskingAI by creating a new model with the following steps:
- Visit your local TaskingAI service page. By default, it is running at http://localhost:8090.
- Log in and navigate to the Model management page.
- Start creating a new model by clicking the Create Model button.
- Select LocalAI as the provider, and wildcard as the model.
- Use the LocalAI service's address http://localhost:8090 as LOCALAI_HOST.
- Input the model name and provider_model_id. The provider_model_id is the name of your desired model on the LocalAI service (the sketch below shows one way to list the available names).
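If you are not sure which name to enter as provider_model_id, one option is to query LocalAI's OpenAI-compatible model listing endpoint and pick from the returned ids. This is a minimal sketch assuming the service from the Quick Start above is still running on port 8090.

```python
import requests

LOCALAI_HOST = "http://localhost:8090"  # the same value entered as LOCALAI_HOST

# LocalAI mirrors OpenAI's GET /v1/models; each entry's "id" is a usable
# provider_model_id for the TaskingAI model you are creating.
resp = requests.get(f"{LOCALAI_HOST}/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```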