AnthropicLLMConfig

Configuration for the Anthropic LLM provider.

Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:69

Remarks

Provide either apiKey (direct API access) or proxyUrl (server-side proxy). At least one must be set; if both are provided, proxyUrl takes precedence.

Anthropic’s API differs from OpenAI’s in that max_tokens is required for every request. This config defaults it to 1024 if not explicitly set.

Example

// Direct API access
const config: AnthropicLLMConfig = {
  apiKey: 'sk-ant-...',
  model: 'claude-haiku-4-5',
  maxTokens: 2048,
  systemPrompt: 'You are a helpful voice assistant.',
};

// Via server-side proxy (recommended for browser apps)
const proxyConfig: AnthropicLLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/anthropic',
  model: 'claude-sonnet-4-6',
};
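The proxy variant assumes a server endpoint that forwards requests to Anthropic and injects the real API key. A minimal sketch of what such an endpoint could look like (Node 18+, no framework; the route path, port, and the ANTHROPIC_API_KEY variable name are assumptions for illustration, not part of CompositeVoice):

```typescript
// Sketch of a same-origin Anthropic proxy. The browser never sees the
// real key; the SDK's dummy 'proxy' key is simply discarded server-side.
import { createServer, type IncomingMessage, type ServerResponse } from "node:http";

const ANTHROPIC_URL = "https://api.anthropic.com/v1/messages";

// Headers the proxy injects before forwarding the request upstream.
export function buildProxyHeaders(apiKey: string): Record<string, string> {
  return {
    "content-type": "application/json",
    "x-api-key": apiKey,
    "anthropic-version": "2023-06-01",
  };
}

export function startProxy(port = 3000) {
  const server = createServer(async (req: IncomingMessage, res: ServerResponse) => {
    if (req.method !== "POST" || req.url !== "/api/proxy/anthropic") {
      res.writeHead(404).end();
      return;
    }
    // Buffer the incoming request body and forward it verbatim.
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const upstream = await fetch(ANTHROPIC_URL, {
      method: "POST",
      headers: buildProxyHeaders(process.env.ANTHROPIC_API_KEY ?? ""),
      body: Buffer.concat(chunks),
    });
    res.writeHead(upstream.status, { "content-type": "application/json" });
    res.end(await upstream.text());
  });
  server.listen(port);
  return server;
}
```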

See

LLMProviderConfig for inherited base properties (temperature, topP, systemPrompt, etc.).

Extends

LLMProviderConfig

Properties

| Property | Type | Default value | Description | Overrides | Inherited from | Defined in |
|---|---|---|---|---|---|---|
| apiKey? | string | undefined | Anthropic API key. Required when connecting directly to Anthropic. Omit when using proxyUrl; the proxy server supplies the key. | LLMProviderConfig.apiKey | - | src/providers/llm/anthropic/AnthropicLLM.ts:77 |
| baseURL? | string | undefined (SDK default: 'https://api.anthropic.com') | Base URL for the Anthropic API. For custom endpoints only; use proxyUrl for the CompositeVoice proxy pattern. When neither is set, the SDK defaults to https://api.anthropic.com. | - | - | src/providers/llm/anthropic/AnthropicLLM.ts:125 |
| debug? | boolean | false | Whether to enable debug logging for this provider. When true, the provider emits detailed internal logs. This is separate from the SDK-level LoggingConfig. | - | LLMProviderConfig.debug | src/core/types/providers.ts:86 |
| endpoint? | string | undefined | Custom endpoint URL to override the provider's default API endpoint. Useful for self-hosted instances, proxy servers, or development environments. | - | LLMProviderConfig.endpoint | src/core/types/providers.ts:75 |
| maxRetries? | number | 3 | Maximum number of retries for failed API requests. | - | - | src/providers/llm/anthropic/AnthropicLLM.ts:131 |
| maxTokens? | number | 1024 | Maximum tokens to generate per response. Anthropic's Messages API requires this field on every request; the provider defaults to 1024 if not set in config or per-call options. | LLMProviderConfig.maxTokens | - | src/providers/llm/anthropic/AnthropicLLM.ts:115 |
| model | string | 'claude-haiku-4-5' | Anthropic model identifier. Fastest: 'claude-haiku-4-5' (default); balanced: 'claude-sonnet-4-6'; most capable: 'claude-opus-4-6'. | LLMProviderConfig.model | - | src/providers/llm/anthropic/AnthropicLLM.ts:105 |
| proxyUrl? | string | undefined | URL of the CompositeVoice proxy server's Anthropic endpoint. When set, the Anthropic SDK sends requests to this URL instead of https://api.anthropic.com, allowing browsers to reach Anthropic through a same-origin proxy that injects the real API key server-side. A dummy API key ('proxy') is used with the SDK. Example: proxyUrl: 'http://localhost:3000/api/proxy/anthropic' | - | - | src/providers/llm/anthropic/AnthropicLLM.ts:94 |
| stopSequences? | string[] | undefined | Sequences that cause the LLM to stop generating. When the model generates any of these sequences, generation halts. Useful for controlling response boundaries. | - | LLMProviderConfig.stopSequences | src/core/types/providers.ts:627 |
| stream? | boolean | undefined | Whether to stream the LLM response token by token. When true, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications, as it allows TTS to begin synthesizing before the full response is generated. | - | LLMProviderConfig.stream | src/core/types/providers.ts:618 |
| systemPrompt? | string | undefined | System prompt providing instructions and context to the LLM. Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational. | - | LLMProviderConfig.systemPrompt | src/core/types/providers.ts:608 |
| temperature? | number | undefined | Temperature for controlling generation randomness. Values from 0 (deterministic) to 2 (highly creative). Lower values produce more focused responses; higher values increase variety. | - | LLMProviderConfig.temperature | src/core/types/providers.ts:580 |
| timeout? | number | undefined | Request timeout in milliseconds. Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to 0 for no timeout. | - | LLMProviderConfig.timeout | src/core/types/providers.ts:95 |
| topP? | number | undefined | Top-P (nucleus) sampling parameter. Limits token selection to the smallest set whose cumulative probability exceeds this value. Values from 0 to 1. Often used as an alternative to temperature. | - | LLMProviderConfig.topP | src/core/types/providers.ts:599 |
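Putting the apiKey/proxyUrl precedence and the maxTokens default together, the provider's option resolution can be sketched roughly as follows. This is an illustrative sketch of the documented behavior; resolveClientOptions and AnthropicLLMConfigLike are made-up names, not library exports:

```typescript
// Subset of AnthropicLLMConfig relevant to client-option resolution.
interface AnthropicLLMConfigLike {
  apiKey?: string;
  proxyUrl?: string;
  baseURL?: string;
  maxTokens?: number;
}

function resolveClientOptions(config: AnthropicLLMConfigLike) {
  // At least one of apiKey or proxyUrl must be set.
  if (!config.apiKey && !config.proxyUrl) {
    throw new Error("AnthropicLLMConfig requires apiKey or proxyUrl");
  }
  return {
    // proxyUrl takes precedence; a dummy key satisfies the SDK, and the
    // proxy injects the real key server-side.
    apiKey: config.proxyUrl ? "proxy" : config.apiKey!,
    // When undefined, the SDK falls back to https://api.anthropic.com.
    baseURL: config.proxyUrl ?? config.baseURL,
    // Anthropic's Messages API requires max_tokens on every request.
    maxTokens: config.maxTokens ?? 1024,
  };
}
```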

© 2026 CompositeVoice. All rights reserved.
