OpenAICompatibleLLM

Base LLM provider for any service that speaks the OpenAI chat completions format.

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:174

Remarks

This class implements the Template Method pattern. The base class provides the full generation pipeline — SDK initialization, streaming, non-streaming, abort handling, and proxy support — while subclasses customize behavior by overriding a small number of hooks:

  1. providerName — Display name used in log messages and error reports.
  2. buildClientOptions() — Return an object of provider-specific options merged into the new OpenAI() constructor call (e.g., organization for OpenAI).

Subclasses typically also define their own config interface that extends OpenAICompatibleLLMConfig to add provider-specific fields.

The openai npm package is a peer dependency and is dynamically imported during initialization. It does not need to be bundled unless one of the OpenAI-compatible providers is used.

Example

import { OpenAICompatibleLLM } from 'composite-voice';
import type { OpenAICompatibleLLMConfig } from 'composite-voice';

interface MyProviderConfig extends OpenAICompatibleLLMConfig {
  customField?: string;
}

class MyProviderLLM extends OpenAICompatibleLLM {
  declare public config: MyProviderConfig;
  protected override readonly providerName = 'MyProviderLLM';

  constructor(config: MyProviderConfig) {
    super({
      ...config,
      baseURL: config.baseURL ?? 'https://api.myprovider.com/v1',
      model: config.model ?? 'my-default-model',
    });
  }

  protected override buildClientOptions(): Record<string, unknown> {
    return { customHeader: this.config.customField };
  }
}

Extends

BaseLLMProvider

Constructors

Constructor

new OpenAICompatibleLLM(config, logger?): OpenAICompatibleLLM;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:198

Creates a new OpenAI-compatible LLM provider instance.

Parameters

config (OpenAICompatibleLLMConfig): Provider configuration. Must include at least model and either apiKey or proxyUrl.

logger? (Logger): Optional custom logger instance. If omitted, a default logger is created by the base class.

Returns

OpenAICompatibleLLM

Overrides

BaseLLMProvider.constructor
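
Example

Both documented credential shapes work; this sketch uses a placeholder proxy URL:

import { OpenAICompatibleLLM } from 'composite-voice';

// Direct API key (typical server-side usage):
const direct = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });

// Via a proxy that holds the key server-side (placeholder URL):
const proxied = new OpenAICompatibleLLM({
  proxyUrl: 'https://example.com/llm-proxy',
  model: 'gpt-4',
});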

Properties

config
  Modifier: public
  Type: OpenAICompatibleLLMConfig
  Description: LLM-specific provider configuration.
  Overrides: BaseLLMProvider.config
  Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:175

initialized
  Modifier: protected
  Type: boolean
  Default value: false
  Description: Tracks whether initialize has completed successfully.
  Inherited from: BaseLLMProvider.initialized
  Defined in: src/providers/base/BaseProvider.ts:97

logger
  Modifier: protected
  Type: Logger
  Description: Scoped logger instance for this provider.
  Inherited from: BaseLLMProvider.logger
  Defined in: src/providers/base/BaseProvider.ts:94

providerName
  Modifier: readonly
  Type: string
  Default value: 'OpenAICompatibleLLM'
  Description: Display name used in log messages and errors. Override this property in subclasses to provide a meaningful name (e.g., 'GroqLLM', 'GeminiLLM'); the name appears in all log output and in ProviderInitializationError messages.
  Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:188

roles
  Modifier: readonly
  Type: readonly ProviderRole[]
  Description: LLM providers cover the 'llm' pipeline role by default.
  Inherited from: BaseLLMProvider.roles
  Defined in: src/providers/base/BaseLLMProvider.ts:75

type
  Modifier: readonly
  Type: ProviderType
  Description: Communication transport this provider uses ('rest' or 'websocket').
  Inherited from: BaseLLMProvider.type
  Defined in: src/providers/base/BaseProvider.ts:74

Methods

assertReady()

protected assertReady(): void;

Defined in: src/providers/base/BaseProvider.ts:255

Guard that throws if the provider has not been initialized.

Returns

void

Remarks

Call at the start of any method that requires the provider to be ready.

Throws

Error: Thrown with a descriptive message when initialized is false.

Inherited from

BaseLLMProvider.assertReady
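
Example

A hypothetical subclass method illustrates the intended call pattern (GuardedLLM and describe are illustrative, not part of the library):

import { OpenAICompatibleLLM } from 'composite-voice';

class GuardedLLM extends OpenAICompatibleLLM {
  /** Hypothetical accessor that must not run before initialize(). */
  public describe(): string {
    this.assertReady(); // throws if initialized is still false
    return `ready, model=${this.config.model}`;
  }
}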


buildClientOptions()

protected buildClientOptions(): Record<string, unknown>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:221

Build the options object passed to new OpenAI(...).

Returns

Record<string, unknown>

An object of additional options to pass to the OpenAI SDK constructor.

Remarks

Override in subclasses to inject provider-specific SDK constructor options. The returned object is spread into the OpenAI client constructor after the base options (apiKey, baseURL, maxRetries, timeout, dangerouslyAllowBrowser).

Example

// In a subclass (e.g., OpenAILLM):
protected override buildClientOptions(): Record<string, unknown> {
  return { organization: this.config.organizationId };
}

dispose()

dispose(): Promise<void>;

Defined in: src/providers/base/BaseProvider.ts:154

Clean up resources and dispose of the provider.

Returns

Promise<void>

Remarks

Delegates to the subclass hook onDispose and resets the initialized flag. If the provider is not initialized, the call is a no-op.

Throws

Re-throws any error raised by onDispose.

Inherited from

BaseLLMProvider.dispose
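
Example

A typical lifecycle, relying on the documented no-op behavior of repeated dispose calls:

const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });

await provider.initialize();
console.log(provider.isReady()); // true

await provider.dispose();
console.log(provider.isReady()); // false

await provider.dispose(); // already disposed: a no-op per the remarks above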


generate()

generate(prompt, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:324

Generate an LLM response from a single text prompt.

Parameters

prompt (string): The user’s text prompt.

options? (LLMGenerationOptions): Optional generation overrides (temperature, maxTokens, signal, etc.).

Returns

Promise<AsyncIterable<string, any, any>>

An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally; otherwise, a single chunk containing the full response is yielded.

Remarks

Convenience wrapper that converts the prompt to a message array (prepending the system prompt if configured) and delegates to generateFromMessages.

Throws

Error: Thrown if the provider has not been initialized or the client is unavailable.

AbortError: Thrown if the provided options.signal is aborted before or during generation.

Example

const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });
await provider.initialize();

const stream = await provider.generate('Explain quantum computing briefly.');
for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Overrides

BaseLLMProvider.generate


generateFromMessages()

generateFromMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:371

Generate an LLM response from a multi-turn conversation.

Parameters

messages (LLMMessage[]): Array of conversation messages (system, user, assistant).

options? (LLMGenerationOptions): Optional generation overrides (temperature, maxTokens, signal, etc.).

Returns

Promise<AsyncIterable<string, any, any>>

An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally; otherwise, a single chunk containing the full response is yielded.

Remarks

This is the primary generation method. It merges per-call options with the provider config defaults, converts the messages to OpenAI’s ChatCompletionMessageParam format, and dispatches to either the streaming or non-streaming code path based on config.stream.

The returned async iterable respects the options.signal abort signal. When aborted, iteration stops and an AbortError is thrown. This is used by the CompositeVoice eager/preflight pipeline to cancel speculative generations.

Throws

Error: Thrown if the provider has not been initialized or the client is unavailable.

AbortError: Thrown if the provided options.signal is aborted before or during generation.

Example

const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });
await provider.initialize();

const messages: LLMMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
];

const stream = await provider.generateFromMessages(messages);
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
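
Continuing the example, a pending generation can be cancelled through options.signal; per the remarks above, an AbortError surfaces when the signal fires. The one-second cutoff here is arbitrary:

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 1000); // arbitrary cutoff

try {
  const stream = await provider.generateFromMessages(messages, {
    signal: controller.signal,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
} catch (err) {
  if ((err as Error).name === 'AbortError') {
    console.log('\nGeneration aborted.');
  } else {
    throw err;
  }
} finally {
  clearTimeout(timer);
}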

Overrides

BaseLLMProvider.generateFromMessages


getConfig()

getConfig(): LLMProviderConfig;

Defined in: src/providers/base/BaseLLMProvider.ts:192

Get a shallow copy of the current LLM configuration.

Returns

LLMProviderConfig

A new LLMProviderConfig object.

Inherited from

BaseLLMProvider.getConfig
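
Example

Because the copy is shallow, mutating the returned object does not change the provider's live configuration (a sketch; model is the config field documented for the constructor):

const snapshot = provider.getConfig();
snapshot.model = 'scratch-model'; // mutates only the copy
console.log(provider.getConfig().model); // unchanged on the provider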


initialize()

initialize(): Promise<void>;

Defined in: src/providers/base/BaseProvider.ts:127

Initialize the provider, making it ready for use.

Returns

Promise<void>

Remarks

Calls the subclass hook onInitialize. If the provider has already been initialized the call is a no-op.

Throws

ProviderInitializationError: Thrown when onInitialize rejects. The original error is wrapped with the provider class name for diagnostics.

Inherited from

BaseLLMProvider.initialize
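
Example

Initialization failures surface as ProviderInitializationError, so callers can catch and report them:

import { OpenAICompatibleLLM } from 'composite-voice';

const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });

try {
  await provider.initialize();
} catch (err) {
  // Wrapped by the base class with the provider name for diagnostics,
  // e.g. when the 'openai' peer dependency is not installed.
  console.error('Initialization failed:', err);
}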


isReady()

isReady(): boolean;

Defined in: src/providers/base/BaseProvider.ts:178

Check whether the provider has been initialized and is ready.

Returns

boolean

true when initialize has completed successfully and dispose has not yet been called.

Inherited from

BaseLLMProvider.isReady


mergeOptions()

protected mergeOptions(options?): LLMGenerationOptions;

Defined in: src/providers/base/BaseLLMProvider.ts:170

Merge per-call generation options with the provider’s config defaults.

Parameters

options? (LLMGenerationOptions): Optional per-call overrides.

Returns

LLMGenerationOptions

A merged LLMGenerationOptions object.

Remarks

Values supplied in options take precedence over values in config. Only defined values are included in the result, allowing providers to distinguish “not set” from explicit values.

Inherited from

BaseLLMProvider.mergeOptions
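
Example

The described semantics could look roughly like the sketch below; this is an illustration of the merge rules, not the actual implementation:

// Per-call values win, and undefined values are dropped so "not set" stays
// distinguishable from an explicitly provided value.
function mergeDefined<T extends object>(defaults: T, overrides?: Partial<T>): T {
  const merged: Record<string, unknown> = { ...defaults };
  for (const [key, value] of Object.entries(overrides ?? {})) {
    if (value !== undefined) merged[key] = value;
  }
  return merged as T;
}

mergeDefined({ temperature: 0.7, maxTokens: 256 }, { temperature: 0.2, maxTokens: undefined });
// => { temperature: 0.2, maxTokens: 256 }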


onConfigUpdate()

protected onConfigUpdate(_config): void;

Defined in: src/providers/base/BaseProvider.ts:242

Hook called after updateConfig merges new values.

Parameters

_config (Partial<BaseProviderConfig>): The partial configuration that was merged.

Returns

void

Remarks

The default implementation is a no-op. Override in subclasses to react to runtime configuration changes (e.g. reconnect with a new API key).

Inherited from

BaseLLMProvider.onConfigUpdate
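
Example

A subclass might override the hook to react to runtime changes; ReactiveLLM and its reconnect flag are hypothetical:

import { OpenAICompatibleLLM } from 'composite-voice';
import type { OpenAICompatibleLLMConfig } from 'composite-voice';

class ReactiveLLM extends OpenAICompatibleLLM {
  private needsReconnect = false;

  protected override onConfigUpdate(config: Partial<OpenAICompatibleLLMConfig>): void {
    if (config.apiKey !== undefined) {
      // Rebuild the SDK client before the next call (hypothetical handling).
      this.needsReconnect = true;
    }
  }
}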


onDispose()

protected onDispose(): Promise<void>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:288

Dispose of the OpenAI client and release resources.

Returns

Promise<void>

Remarks

Nullifies the client reference so that it can be garbage-collected. Called automatically by BaseLLMProvider.dispose.

Overrides

BaseLLMProvider.onDispose


onInitialize()

protected onInitialize(): Promise<void>;

Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:237

Initialize the OpenAI-compatible client.

Returns

Promise<void>

Remarks

Dynamically imports the openai peer dependency, resolves the base URL (preferring proxyUrl over baseURL), and creates the SDK client instance. Called automatically by BaseLLMProvider.initialize.

Throws

ProviderInitializationError: Thrown if neither apiKey nor proxyUrl is configured, or if the openai package cannot be found (peer dependency not installed).

Overrides

BaseLLMProvider.onInitialize
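
Example

The dynamic-import step might look roughly like this; the shape and error wording are assumptions, not the library's actual code:

// Lazily load the 'openai' peer dependency and fail with a clear message
// when it is not installed.
async function loadOpenAIClient(): Promise<unknown> {
  try {
    const mod = await import('openai');
    return mod.default; // the OpenAI client class
  } catch {
    throw new Error("Peer dependency 'openai' not found. Install it with: npm install openai");
  }
}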


promptToMessages()

protected promptToMessages(prompt): LLMMessage[];

Defined in: src/providers/base/BaseLLMProvider.ts:141

Convert a plain-text prompt into an LLMMessage array.

Parameters

prompt (string): The user’s input text.

Returns

LLMMessage[]

A messages array suitable for generateFromMessages.

Remarks

If the provider’s config includes a systemPrompt, it is prepended as a system message. The prompt itself becomes a user message.

Inherited from

BaseLLMProvider.promptToMessages
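
Example

With a systemPrompt configured, the conversion described above would produce the following (illustrative; assumes LLMMessage is exported alongside the config types):

import type { LLMMessage } from 'composite-voice';

// config.systemPrompt = 'You are a helpful assistant.'
// promptToMessages('What is the capital of France?') returns:
const expected: LLMMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
];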


updateConfig()

updateConfig(config): void;

Defined in: src/providers/base/BaseProvider.ts:201

Merge partial configuration updates into the current config.

Parameters

config (Partial<BaseProviderConfig>): A partial configuration object whose keys will overwrite existing values.

Returns

void

Remarks

After merging, the subclass hook onConfigUpdate is called so providers can react to changed values at runtime.

Inherited from

BaseLLMProvider.updateConfig
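
Example

Rotating an API key at runtime; after the merge, onConfigUpdate fires so subclasses can react (e.g. reconnect with the new key, per that hook's documentation):

const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });
await provider.initialize();

provider.updateConfig({ apiKey: 'sk-new-...' });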
