AnthropicLLM
Anthropic LLM provider for Claude models.
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:191
Remarks
Uses the official @anthropic-ai/sdk for the Messages API with full support for streaming (messages.stream) and non-streaming (messages.create) responses. The provider handles the Anthropic-specific message format automatically:
- System messages are extracted from the message array and passed as the top-level system parameter (Anthropic does not accept role: 'system' in the messages array).
- Streaming yields text from content_block_delta events with type: 'text_delta'.
- Abort support is provided via the options.signal parameter, which is forwarded to the Anthropic SDK to cancel in-flight HTTP requests.
Examples
```typescript
import { AnthropicLLM } from 'composite-voice';

const llm = new AnthropicLLM({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-haiku-4-5',
  maxTokens: 512,
  systemPrompt: 'You are a concise voice assistant.',
});
await llm.initialize();

const stream = await llm.generate('What is the speed of light?');
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
await llm.dispose();
```
```typescript
import { AnthropicLLM } from 'composite-voice';

const llm = new AnthropicLLM({
  proxyUrl: 'http://localhost:3000/api/proxy/anthropic',
  model: 'claude-sonnet-4-6',
});
await llm.initialize();

const stream = await llm.generate('Tell me a joke.');
for await (const chunk of stream) {
  document.getElementById('output')!.textContent += chunk;
}
```
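The abort behaviour described in the remarks can be sketched as follows. This is a hedged illustration: fakeStream stands in for the real SDK stream, and only the AbortSignal forwarding described above is assumed.

```typescript
// Stand-in for an SDK stream that honours an AbortSignal, as the remarks describe.
async function* fakeStream(signal: AbortSignal): AsyncIterable<string> {
  for (const chunk of ['The ', 'speed ', 'of ', 'light ', 'is ', 'c.']) {
    if (signal.aborted) throw new Error('AbortError'); // SDK cancels the in-flight request
    yield chunk;
  }
}

async function demo(): Promise<string> {
  const controller = new AbortController();
  // Real usage would be: const stream = await llm.generate(prompt, { signal: controller.signal });
  let text = '';
  try {
    for await (const chunk of fakeStream(controller.signal)) {
      text += chunk;
      if (text.length >= 13) controller.abort(); // stop once we have enough text
    }
  } catch {
    // The AbortError surfaces here when the signal fires mid-stream.
  }
  return text;
}
```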
See
- AnthropicLLMConfig for configuration options.
- BaseLLMProvider for the abstract base class.
- OpenAILLM for the OpenAI alternative.
Extends
- BaseLLMProvider
Implements
- ToolAwareLLMProvider
Constructors
Constructor
new AnthropicLLM(config, logger?): AnthropicLLM;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:208
Creates a new Anthropic LLM provider instance.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | AnthropicLLMConfig | Anthropic provider configuration. Must include at least model and either apiKey or proxyUrl. |
| logger? | Logger | Optional custom logger instance. If omitted, a default logger is created by the base class. |
Returns
AnthropicLLM
Remarks
The constructor normalizes the configuration by applying defaults: maxTokens defaults to 1024, stream defaults to true, and model defaults to 'claude-haiku-4-5'.
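The defaulting can be pictured as a shallow fallback merge. This is a sketch of the behaviour described above, not the constructor's actual code:

```typescript
// Defaults stated in the remarks: maxTokens 1024, stream true, model 'claude-haiku-4-5'.
type Cfg = { model?: string; maxTokens?: number; stream?: boolean };

const DEFAULTS: Required<Cfg> = { model: 'claude-haiku-4-5', maxTokens: 1024, stream: true };

function applyDefaults(config: Cfg): Required<Cfg> {
  return {
    model: config.model ?? DEFAULTS.model,
    maxTokens: config.maxTokens ?? DEFAULTS.maxTokens,
    stream: config.stream ?? DEFAULTS.stream,
  };
}
```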
Overrides
Properties
| Property | Modifier | Type | Default value | Description | Overrides | Inherited from | Defined in |
|---|---|---|---|---|---|---|---|
| config | public | AnthropicLLMConfig | undefined | LLM-specific provider configuration. | ToolAwareLLMProvider.config, BaseLLMProvider.config | - | src/providers/llm/anthropic/AnthropicLLM.ts:192 |
| initialized | protected | boolean | false | Tracks whether initialize has completed successfully. | - | BaseLLMProvider.initialized | src/providers/base/BaseProvider.ts:97 |
| logger | protected | Logger | undefined | Scoped logger instance for this provider. | - | BaseLLMProvider.logger | src/providers/base/BaseProvider.ts:94 |
| roles | readonly | readonly ProviderRole[] | undefined | LLM providers cover the 'llm' pipeline role by default. | - | ToolAwareLLMProvider.roles, BaseLLMProvider.roles | src/providers/base/BaseLLMProvider.ts:75 |
| type | readonly | ProviderType | undefined | Communication transport this provider uses ('rest' or 'websocket'). | - | ToolAwareLLMProvider.type, BaseLLMProvider.type | src/providers/base/BaseProvider.ts:74 |
Methods
assertReady()
protected assertReady(): void;
Defined in: src/providers/base/BaseProvider.ts:255
Guard that throws if the provider has not been initialized.
Returns
void
Remarks
Call at the start of any method that requires the provider to be ready.
Throws
Error Thrown with a descriptive message when initialized is false.
Inherited from
dispose()
dispose(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:154
Clean up resources and dispose of the provider.
Returns
Promise<void>
Remarks
Delegates to the subclass hook onDispose and resets the initialized flag. If the provider is not initialized, the call is a no-op.
Throws
Re-throws any error raised by onDispose.
Implementation of
Inherited from
generate()
generate(prompt, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:307
Generate an LLM response from a single text prompt.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s text prompt. |
| options? | LLMGenerationOptions | Optional generation overrides (temperature, maxTokens, signal, etc.). |
Returns
Promise<AsyncIterable<string, any, any>>
An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally; otherwise, a single chunk containing the full response is yielded.
Remarks
Convenience wrapper that converts the prompt to a message array (prepending the system prompt if configured) and delegates to generateFromMessages.
Throws
Error Thrown if the provider has not been initialized or the client is unavailable.
Throws
AbortError Thrown if the provided options.signal is aborted before or during generation.
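Because both modes return the same async-iterable shape, callers consume the result identically whether streaming is on or off. A minimal sketch with stand-in iterables (not the SDK):

```typescript
// Collect an async iterable of text chunks into one string.
// Works the same whether streaming yields many small chunks or one full chunk.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let full = '';
  for await (const chunk of stream) full += chunk;
  return full;
}

// Stand-ins for the two modes:
async function* streamed(): AsyncIterable<string> { yield 'Hello, '; yield 'world!'; }
async function* buffered(): AsyncIterable<string> { yield 'Hello, world!'; }
```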
Implementation of
Overrides
generateFromMessages()
generateFromMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:351
Generate an LLM response from a multi-turn conversation.
Parameters
| Parameter | Type | Description |
|---|---|---|
| messages | LLMMessage[] | Array of conversation messages (system, user, assistant). System messages are extracted and concatenated into the top-level system parameter. |
| options? | LLMGenerationOptions | Optional generation overrides (temperature, maxTokens, signal, etc.). |
Returns
Promise<AsyncIterable<string, any, any>>
An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally from content_block_delta events; otherwise, a single chunk containing the full response is yielded.
Remarks
This is the primary generation method. It extracts system messages from the array and passes them as Anthropic’s top-level system parameter (since Anthropic does not accept role: 'system' inline). Remaining messages are converted to the Anthropic MessageParam format.
Dispatches to either the streaming (messages.stream) or non-streaming (messages.create) code path based on config.stream.
Throws
Error Thrown if the provider has not been initialized or the client is unavailable.
Throws
AbortError Thrown if the provided options.signal is aborted before or during generation.
Example
```typescript
const messages: LLMMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize the theory of relativity.' },
];
const stream = await anthropicLLM.generateFromMessages(messages);
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Implementation of
ToolAwareLLMProvider.generateFromMessages
Overrides
BaseLLMProvider.generateFromMessages
generateWithTools()
generateWithTools(messages, options?): Promise<AsyncIterable<LLMStreamChunk, any, any>>;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:556
Generate a response with tool use support.
Parameters
| Parameter | Type |
|---|---|
| messages | LLMMessage[] |
| options? | LLMGenerationOptions & { tools?: LLMToolDefinition[]; } |
Returns
Promise<AsyncIterable<LLMStreamChunk, any, any>>
Remarks
Returns an async iterable of LLMStreamChunk — text chunks go to TTS, tool_call chunks go to the tool executor. The done chunk signals the stop reason so the caller knows whether to send tool results and re-call.
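A hedged sketch of a consumer loop for this dispatch. The chunk field names below (text, name, input, stopReason) are illustrative assumptions, not the library's exact LLMStreamChunk shape:

```typescript
// Illustrative chunk union mirroring the three kinds described above.
type StreamChunk =
  | { type: 'text'; text: string }                       // forward to TTS
  | { type: 'tool_call'; name: string; input: unknown }  // forward to the tool executor
  | { type: 'done'; stopReason: string };                // e.g. 'tool_use' => send results, re-call

async function consume(stream: AsyncIterable<StreamChunk>) {
  let text = '';
  const toolCalls: { name: string; input: unknown }[] = [];
  let stopReason = '';
  for await (const chunk of stream) {
    if (chunk.type === 'text') text += chunk.text;
    else if (chunk.type === 'tool_call') toolCalls.push({ name: chunk.name, input: chunk.input });
    else stopReason = chunk.stopReason;
  }
  return { text, toolCalls, stopReason };
}
```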
Implementation of
ToolAwareLLMProvider.generateWithTools
getConfig()
getConfig(): LLMProviderConfig;
Defined in: src/providers/base/BaseLLMProvider.ts:192
Get a shallow copy of the current LLM configuration.
Returns
A new LLMProviderConfig object.
Inherited from
initialize()
initialize(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:127
Initialize the provider, making it ready for use.
Returns
Promise<void>
Remarks
Calls the subclass hook onInitialize. If the provider has already been initialized the call is a no-op.
Throws
ProviderInitializationError Thrown when onInitialize rejects. The original error is wrapped with the provider class name for diagnostics.
Implementation of
ToolAwareLLMProvider.initialize
Inherited from
isReady()
isReady(): boolean;
Defined in: src/providers/base/BaseProvider.ts:178
Check whether the provider has been initialized and is ready.
Returns
boolean
true when initialize has completed successfully and dispose has not yet been called.
Implementation of
Inherited from
mergeOptions()
protected mergeOptions(options?): LLMGenerationOptions;
Defined in: src/providers/base/BaseLLMProvider.ts:170
Merge per-call generation options with the provider’s config defaults.
Parameters
| Parameter | Type | Description |
|---|---|---|
| options? | LLMGenerationOptions | Optional per-call overrides. |
Returns
A merged LLMGenerationOptions object.
Remarks
Values supplied in options take precedence over values in config. Only defined values are included in the result, allowing providers to distinguish “not set” from explicit values.
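The merge rule can be sketched as follows; per-call values win, but undefined per-call values are skipped so "not set" never clobbers a config default. The types here are simplified stand-ins:

```typescript
// Simplified option shape for illustration.
type GenOpts = { temperature?: number; maxTokens?: number };

function mergeOptions(config: GenOpts, options?: GenOpts): GenOpts {
  const merged: GenOpts = { ...config };
  for (const [key, value] of Object.entries(options ?? {})) {
    if (value !== undefined) (merged as Record<string, unknown>)[key] = value; // skip "not set"
  }
  return merged;
}
```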
Inherited from
onConfigUpdate()
protected onConfigUpdate(_config): void;
Defined in: src/providers/base/BaseProvider.ts:242
Hook called after updateConfig merges new values.
Parameters
| Parameter | Type | Description |
|---|---|---|
| _config | Partial<BaseProviderConfig> | The partial configuration that was merged. |
Returns
void
Remarks
The default implementation is a no-op. Override in subclasses to react to runtime configuration changes (e.g. reconnect with a new API key).
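A hypothetical subclass reacting to a runtime API-key change might look like this; the class and field names are illustrative, not the library's:

```typescript
type ProviderConfig = { apiKey?: string };

class ExampleProvider {
  private apiKey = 'initial-key';

  // Override of the hook: react to changed values after the merge.
  protected onConfigUpdate(config: Partial<ProviderConfig>): void {
    if (config.apiKey !== undefined) {
      this.apiKey = config.apiKey; // e.g. recreate the SDK client with the new key
    }
  }

  updateConfig(config: Partial<ProviderConfig>): void {
    this.onConfigUpdate(config); // base class merges, then invokes this hook
  }

  currentKey(): string {
    return this.apiKey;
  }
}
```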
Inherited from
BaseLLMProvider.onConfigUpdate
onDispose()
protected onDispose(): Promise<void>;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:282
Dispose of the Anthropic client and release resources.
Returns
Promise<void>
Remarks
Nullifies the client reference so that it can be garbage-collected. Called automatically by BaseLLMProvider.dispose.
Overrides
onInitialize()
protected onInitialize(): Promise<void>;
Defined in: src/providers/llm/anthropic/AnthropicLLM.ts:230
Initialize the Anthropic client.
Returns
Promise<void>
Remarks
Dynamically imports the @anthropic-ai/sdk peer dependency, resolves the base URL (preferring proxyUrl over baseURL), and creates the SDK client instance. Called automatically by BaseLLMProvider.initialize.
Throws
ProviderInitializationError Thrown if neither apiKey nor proxyUrl is configured, or if the @anthropic-ai/sdk package cannot be found (peer dependency not installed).
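The base-URL preference described above (proxyUrl wins over baseURL) reduces to a one-line fallback; a sketch, with a simplified config shape:

```typescript
// proxyUrl, when set, takes precedence over a direct baseURL.
function resolveBaseURL(cfg: { proxyUrl?: string; baseURL?: string }): string | undefined {
  return cfg.proxyUrl ?? cfg.baseURL;
}
```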
Overrides
promptToMessages()
protected promptToMessages(prompt): LLMMessage[];
Defined in: src/providers/base/BaseLLMProvider.ts:141
Convert a plain-text prompt into an LLMMessage array.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s input text. |
Returns
A messages array suitable for generateFromMessages.
Remarks
If the provider’s config includes a systemPrompt, it is prepended as a system message. The prompt itself becomes a user message.
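The conversion described above can be sketched as follows; the LLMMessage shape follows the roles listed in these docs:

```typescript
type LLMMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function promptToMessages(prompt: string, systemPrompt?: string): LLMMessage[] {
  const messages: LLMMessage[] = [];
  if (systemPrompt) messages.push({ role: 'system', content: systemPrompt }); // prepended if configured
  messages.push({ role: 'user', content: prompt }); // the prompt becomes a user message
  return messages;
}
```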
Inherited from
BaseLLMProvider.promptToMessages
updateConfig()
updateConfig(config): void;
Defined in: src/providers/base/BaseProvider.ts:201
Merge partial configuration updates into the current config.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | Partial<BaseProviderConfig> | A partial configuration object whose keys will overwrite existing values. |
Returns
void
Remarks
After merging, the subclass hook onConfigUpdate is called so providers can react to changed values at runtime.