You can configure Copilot CLI to use your own LLM provider instead of GitHub-hosted models, a setup also known as BYOK (Bring Your Own Key). This lets you connect to OpenAI-compatible endpoints (including locally running models such as Ollama), Azure OpenAI, or Anthropic.
## Prerequisites
- Copilot CLI is installed. See "Installing GitHub Copilot CLI."
- You have an API key from a supported LLM provider, or you have a local model running (such as Ollama).
## Supported providers
Copilot CLI supports three provider types:
| Provider type | Compatible services |
|---|---|
| `openai` | OpenAI, Ollama, vLLM, Foundry Local, and any other endpoint compatible with the OpenAI Chat Completions API. This is the default provider type. |
| `azure` | Azure OpenAI Service. |
| `anthropic` | Anthropic (Claude models). |
For additional examples, run `copilot help providers` in your terminal.
## Model requirements
Models must support tool calling (also called function calling) and streaming. If a model does not support either capability, Copilot CLI returns an error. For best results, use a model with a context window of at least 128k tokens.
## Configuring your provider
You configure your model provider by setting environment variables before starting Copilot CLI.
| Environment variable | Required | Description |
|---|---|---|
| `COPILOT_PROVIDER_BASE_URL` | Yes | The base URL of your model provider's API endpoint. |
| `COPILOT_PROVIDER_TYPE` | No | The provider type: `openai` (default), `azure`, or `anthropic`. |
| `COPILOT_PROVIDER_API_KEY` | No | Your API key for the provider. Not required for providers that do not use authentication, such as a local Ollama instance. |
| `COPILOT_MODEL` | Yes | The model identifier to use. You can also set this with the `--model` command-line flag. |
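Before launching, you can sanity-check that the two required variables are set. A minimal sketch (the `copilot_env_check` function name is my own, not part of Copilot CLI):

```shell
# Hypothetical pre-flight check: verify the two required variables
# are set before launching Copilot CLI.
copilot_env_check() {
  if [ -z "$COPILOT_PROVIDER_BASE_URL" ]; then
    echo "error: COPILOT_PROVIDER_BASE_URL is not set" >&2
    return 1
  fi
  if [ -z "$COPILOT_MODEL" ]; then
    echo "error: COPILOT_MODEL is not set (or pass --model)" >&2
    return 1
  fi
  echo "provider config looks complete"
}
```

Run `copilot_env_check && copilot` to launch only when the configuration is complete.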
## Connecting to an OpenAI-compatible endpoint
Use the following steps if you are connecting to OpenAI, Ollama, vLLM, Foundry Local, or any other endpoint that is compatible with the OpenAI Chat Completions API.
1. Set environment variables for your provider. For example, for a local Ollama instance:

   ```shell
   export COPILOT_PROVIDER_BASE_URL=http://localhost:11434
   export COPILOT_MODEL=YOUR-MODEL-NAME
   ```

   Replace `YOUR-MODEL-NAME` with the name of the model you have pulled in Ollama (for example, `llama3.2`).

2. For a remote OpenAI endpoint, also set your API key:

   ```shell
   export COPILOT_PROVIDER_BASE_URL=https://api.openai.com
   export COPILOT_PROVIDER_API_KEY=YOUR-OPENAI-API-KEY
   export COPILOT_MODEL=YOUR-MODEL-NAME
   ```

   Replace `YOUR-OPENAI-API-KEY` with your OpenAI API key and `YOUR-MODEL-NAME` with the model you want to use (for example, `gpt-4o`).

3. Start Copilot CLI:

   ```shell
   copilot
   ```
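To keep these exports reusable, you could generate them from a small helper. A sketch under the assumptions above (the `copilot_provider_env` function is my own invention, not part of Copilot CLI; the Ollama branch assumes the default local port 11434):

```shell
# Hypothetical helper: print the export lines for a given target.
# Usage: copilot_provider_env ollama MODEL
#        copilot_provider_env openai API-KEY MODEL
copilot_provider_env() {
  case "$1" in
    ollama)
      echo "export COPILOT_PROVIDER_BASE_URL=http://localhost:11434"
      echo "export COPILOT_MODEL=$2"
      ;;
    openai)
      echo "export COPILOT_PROVIDER_BASE_URL=https://api.openai.com"
      echo "export COPILOT_PROVIDER_API_KEY=$2"
      echo "export COPILOT_MODEL=$3"
      ;;
    *)
      echo "usage: copilot_provider_env {ollama|openai} ..." >&2
      return 1
      ;;
  esac
}
```

Apply the output to the current shell with `eval "$(copilot_provider_env ollama llama3.2)"` before running `copilot`.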
## Connecting to Azure OpenAI
1. Set the environment variables for Azure OpenAI:

   ```shell
   export COPILOT_PROVIDER_BASE_URL=https://YOUR-RESOURCE-NAME.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT-NAME
   export COPILOT_PROVIDER_TYPE=azure
   export COPILOT_PROVIDER_API_KEY=YOUR-AZURE-API-KEY
   export COPILOT_MODEL=YOUR-DEPLOYMENT-NAME
   ```

   Replace the following placeholders:

   - `YOUR-RESOURCE-NAME`: your Azure OpenAI resource name
   - `YOUR-DEPLOYMENT-NAME`: the name of your model deployment
   - `YOUR-AZURE-API-KEY`: your Azure OpenAI API key

2. Start Copilot CLI:

   ```shell
   copilot
   ```
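Since the Azure base URL embeds both the resource name and the deployment name, it can help to build it from those two values rather than edit the string by hand. A minimal sketch (the `azure_base_url` helper is hypothetical, following the URL pattern shown in the step above):

```shell
# Hypothetical helper: build the Azure OpenAI base URL from a
# resource name ($1) and a deployment name ($2).
azure_base_url() {
  echo "https://$1.openai.azure.com/openai/deployments/$2"
}
```

For example, `export COPILOT_PROVIDER_BASE_URL="$(azure_base_url my-resource gpt-4o-deploy)"`.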
## Connecting to Anthropic
1. Set the environment variables for Anthropic:

   ```shell
   export COPILOT_PROVIDER_TYPE=anthropic
   export COPILOT_PROVIDER_API_KEY=YOUR-ANTHROPIC-API-KEY
   export COPILOT_MODEL=YOUR-MODEL-NAME
   ```

   Replace `YOUR-ANTHROPIC-API-KEY` with your Anthropic API key and `YOUR-MODEL-NAME` with the Claude model you want to use (for example, `claude-opus-4-5`).

2. Start Copilot CLI:

   ```shell
   copilot
   ```
## Running in offline mode
You can run Copilot CLI in offline mode to prevent it from contacting GitHub's servers. This is designed for isolated environments where the CLI should communicate only with your local or on-premises model provider.
> [!IMPORTANT]
> Offline mode only guarantees full network isolation if your provider is also local or within the same isolated environment. If `COPILOT_PROVIDER_BASE_URL` points to a remote endpoint, your prompts and code context are still sent over the network to that provider.
1. Configure your provider environment variables as described in "Configuring your provider" above.

2. Set the offline mode environment variable:

   ```shell
   export COPILOT_OFFLINE=true
   ```

3. Start Copilot CLI:

   ```shell
   copilot
   ```
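The offline steps above can be bundled into one function for a fully local setup. A minimal sketch, assuming a local Ollama instance on the default port (the `copilot_local_offline` name and the `llama3.2` default are my own choices):

```shell
# Hypothetical one-shot setup for a fully local, offline session:
# point Copilot CLI at local Ollama and enable offline mode.
# Optional argument: model name (defaults to llama3.2).
copilot_local_offline() {
  export COPILOT_PROVIDER_BASE_URL=http://localhost:11434
  export COPILOT_MODEL="${1:-llama3.2}"
  export COPILOT_OFFLINE=true
}
```

After running `copilot_local_offline` (optionally with a model name), start the CLI with `copilot`.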