
Configure Providers

AQBot connects to any number of AI providers simultaneously. Each provider has its own API keys, model list, and parameter defaults.

Supported Providers

AQBot includes first-class support for the following providers. Any service that exposes an OpenAI-compatible API also works out of the box.

| Provider | Example Models |
| --- | --- |
| OpenAI | GPT-4o, GPT-4, o3, o4-mini |
| Anthropic | Claude 4 Sonnet, Claude 4 Opus, Claude 3.5 Sonnet |
| Google | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 |
| DeepSeek | DeepSeek V3, DeepSeek R1 |
| Alibaba Cloud | Qwen series |
| Zhipu AI | GLM series |
| xAI | Grok series |
| Any OpenAI-compatible API | Ollama, vLLM, LiteLLM, third-party relays, etc. |

Adding a Provider

  1. Go to Settings → Providers.
  2. Click the + button at the bottom left.
  3. Fill in the provider details:
| Field | Description |
| --- | --- |
| Name | A display name for the sidebar (e.g. OpenAI) |
| Type | Provider type; determines the default base URL and API behavior |
| Icon | Optional icon for visual identification |
| API Key | The secret key from your provider's dashboard |
| Base URL | API endpoint (pre-filled for built-in types, changeable for relays) |
| API Path | Request path; defaults to /v1/chat/completions |

TIP

For third-party relay services, keep the type set to OpenAI (or the matching upstream type) and change the Base URL to the relay's endpoint.
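The Base URL and API Path combine into the full request endpoint, which is why a relay only needs a different Base URL. A minimal sketch of that joining rule (the helper name is illustrative, not AQBot's actual code):

```python
def build_endpoint(base_url: str, api_path: str = "/v1/chat/completions") -> str:
    """Join a provider's Base URL and API Path into one request URL."""
    return base_url.rstrip("/") + "/" + api_path.lstrip("/")

# A relay keeps the OpenAI-style path but swaps in its own Base URL:
relay_url = build_endpoint("https://relay.example.com")
# A local server works the same way:
local_url = build_endpoint("http://localhost:11434")
```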


Multi-Key Rotation

AQBot supports multiple API keys per provider for load distribution and rate-limit avoidance.

Adding Keys

In the provider detail panel, click Add Key to add additional API keys. Each key shows its prefix, creation date, and last-used timestamp.

How Rotation Works

AQBot rotates through enabled keys automatically using a round-robin index. When a request completes, the rotation index advances to the next key. If a key is disabled or fails validation, it is skipped.
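The rotation logic can be pictured as a cursor that walks the key list and skips anything disabled. A simplified sketch (all names are hypothetical, and it advances the index on selection rather than on request completion):

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    secret: str
    enabled: bool = True

@dataclass
class KeyRotator:
    """Round-robin over a provider's keys, skipping disabled ones."""
    keys: list
    index: int = 0

    def next_key(self) -> str:
        # Try each slot at most once so a fully disabled list raises.
        for _ in range(len(self.keys)):
            key = self.keys[self.index]
            self.index = (self.index + 1) % len(self.keys)  # advance the cursor
            if key.enabled:
                return key.secret
        raise RuntimeError("no enabled API keys for this provider")
```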

Validating Keys

Click the validate button next to a key to verify it against the provider's API. The validation result and any error message are recorded for reference.
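One common way to validate a key is a cheap authenticated call such as listing models; the sketch below assumes that probe and an (ok, error) result shape, neither of which is confirmed to be what AQBot does internally:

```python
import json
import urllib.request

def validate_key(base_url: str, api_key: str, opener=urllib.request.urlopen):
    """Probe the provider with the key; return (ok, error_message)."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with opener(req) as resp:
            json.load(resp)  # a parseable model list means the key works
        return True, ""
    except Exception as exc:
        return False, str(exc)  # recorded alongside the key for reference
```

The `opener` parameter is injected so the check can be exercised without a live provider.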


Model Management

Fetching Models

Click Fetch Models in the provider detail panel to pull the full list of available models from the provider's API. The discovered models are added to your local list automatically.
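Providers with OpenAI-style APIs return the model list as a JSON object with a `data` array of `{"id": ...}` entries. A sketch of merging that response into a local list (the merge rule, keep order and skip duplicates, is an assumption):

```python
def merge_fetched_models(local, response):
    """Append newly discovered model IDs, preserving order and skipping duplicates."""
    fetched = [m["id"] for m in response.get("data", [])]
    return local + [mid for mid in fetched if mid not in local]

models = merge_fetched_models(
    ["gpt-4o"],
    {"object": "list", "data": [{"id": "gpt-4o"}, {"id": "o4-mini"}]},
)
# models == ["gpt-4o", "o4-mini"]
```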

Adding Models Manually

If a model is not returned by the API (e.g. a fine-tuned model or a new release), you can add it by typing the model ID directly.

Per-Model Parameters

Each model can have its own default parameter overrides. Open the model's settings to configure:

| Parameter | Description |
| --- | --- |
| Temperature | Controls randomness (0 = deterministic, higher = more creative) |
| Max Tokens | Maximum number of tokens in the response |
| Top P | Nucleus sampling threshold |
| Frequency Penalty | Reduces repetition of token sequences |
| Presence Penalty | Encourages the model to introduce new topics |
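Conceptually, per-model overrides are layered on top of the provider's defaults when a request payload is built. A sketch under that assumption (the function and the default values are illustrative):

```python
# Illustrative provider-level defaults, not AQBot's actual values.
PROVIDER_DEFAULTS = {"temperature": 1.0, "max_tokens": 4096}

def build_payload(model, messages, overrides=None):
    """Apply a model's own parameter overrides on top of the provider defaults."""
    payload = {"model": model, "messages": messages, **PROVIDER_DEFAULTS}
    payload.update(overrides or {})
    return payload
```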

Model Capabilities

AQBot tracks capabilities per model — such as Vision, Function Calling, and Reasoning — and displays them as tags. These capabilities affect which features are available during a conversation.


Custom & Local Endpoints

AQBot works with any endpoint that implements the OpenAI chat completions API.

Ollama (Local Models)

  1. Install and start Ollama.
  2. In AQBot, create a new provider with type OpenAI.
  3. Set the Base URL to http://localhost:11434.
  4. Click Fetch Models to discover the models you have pulled locally.
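With that configuration, requests go to Ollama's OpenAI-compatible endpoint on its default port. A sketch of the resulting request (the model name is just an example of something pulled with `ollama pull`; no API key is needed because Ollama ignores it):

```python
import json
import urllib.request

# Base URL from step 3 plus the default API Path.
endpoint = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama3.2",  # any model you have pulled locally
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send the request once Ollama is running.
```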

vLLM / TGI

Point the Base URL to the address of your inference server (e.g. http://localhost:8000) and fetch or add models as usual.

API Relay Services

For relay or aggregator services (e.g. OpenRouter, one-api), set the type to OpenAI, enter the relay's base URL, and provide the relay's API key.


Default Model Settings

Default Assistant Model

Go to Settings → Default Model to choose the provider and model that new conversations use by default. You can always override the model on a per-conversation basis from the model selector.

Topic Naming Model

AQBot can automatically generate a title for each conversation. In the default model settings, you can assign a separate, lightweight model for topic naming to save cost and latency. You can also configure a custom prompt and a context window size for title generation.
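The flow can be pictured as: trim the conversation to the configured context size, prepend the naming prompt, and send the result to the lightweight model. A sketch with assumed names and a character-based window (AQBot's actual trimming unit is not specified here):

```python
def build_title_request(messages, prompt="Summarize this conversation in a short title.", max_chars=2000):
    """Keep leading messages until the context budget is spent, then prepend the prompt."""
    context, used = [], 0
    for msg in messages:
        text = f'{msg["role"]}: {msg["content"]}'
        if used + len(text) > max_chars:
            break  # context window exhausted
        context.append(text)
        used += len(text)
    return prompt + "\n\n" + "\n".join(context)
```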


Next Steps

  • MCP Servers — connect external tools to extend AI capabilities
  • API Gateway — expose your providers as a local API server