# ADR-003: BYOK Provider Pattern
Status: Accepted
Date: 2026-05-09
## Context
Embercore orchestrates LLM calls across multiple agents (Athena, Apollo, Hermes, Hephaestus). We needed to decide how users provide LLM access:
- Bundled API key — Ship a shared key. Simple UX, but creates billing liability, rate-limit contention, and security risk.
- BYOK (Bring Your Own Key) — Users supply their own API keys via environment variables. More setup, but zero liability and full user control.
- Self-hosted only — Require local models. Limits capability and audience.
## Decision
We implement a BYOK provider pattern with a unified interface, env-based factory, and custom base URL support.
### Provider interface (`internal/provider/provider.go`)
```go
// Provider abstracts an LLM backend behind a two-method surface.
type Provider interface {
	// Complete sends the message history to the backing LLM and
	// returns its response.
	Complete(ctx context.Context, messages []Message, opts Options) (*Response, error)
	// Name reports which backend this provider wraps (e.g. "anthropic").
	Name() string
}
```
All agents accept a `Provider` via the `WithProvider` functional option (see ADR-008). Agents are provider-agnostic — they call `Complete()` without knowing which LLM backend is in use.
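To make this concrete, here is a minimal sketch of the wiring, assuming a hypothetical `Agent` type; only `Provider`, `Complete()`, and the `WithProvider` option name come from this ADR and ADR-008:

```go
import "context"

// Agent is an illustrative consumer; the real agents (Athena, Apollo,
// Hermes, Hephaestus) live elsewhere in the codebase.
type Agent struct {
	provider Provider
}

// WithProvider is the functional option described in ADR-008.
func WithProvider(p Provider) func(*Agent) {
	return func(a *Agent) { a.provider = p }
}

// Run shows that the agent talks only to the interface and never
// knows which LLM backend answers.
func (a *Agent) Run(ctx context.Context, msgs []Message) (*Response, error) {
	return a.provider.Complete(ctx, msgs, Options{})
}
```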
### Factory pattern (`internal/provider/factory.go`)
`NewFromEnv()` constructs the appropriate provider from environment variables (a sketch of the dispatch follows the table):
| Variable | Purpose | Default |
|---|---|---|
| `EMBERCORE_PROVIDER` | Select backend (`anthropic` or `openai`) | `anthropic` |
| `ANTHROPIC_API_KEY` | Anthropic API authentication | — |
| `OPENAI_API_KEY` | OpenAI API authentication | — |
| `EMBERCORE_MODEL` | Override the default model | Provider-specific |
| `OPENAI_BASE_URL` | Custom endpoint for OpenAI-compatible APIs | OpenAI default |
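A sketch of the dispatch logic, assuming `NewAnthropic`/`NewOpenAI` constructor names and an error return (neither is specified by this ADR); `EMBERCORE_MODEL` and `OPENAI_BASE_URL` handling is elided to keep it short:

```go
import (
	"fmt"
	"os"
)

// NewFromEnv selects a backend from EMBERCORE_PROVIDER, defaulting to
// Anthropic. Constructor names here are assumptions, not the actual
// source; model and base-URL overrides would be threaded through as
// options at the marked call sites.
func NewFromEnv() (Provider, error) {
	backend := os.Getenv("EMBERCORE_PROVIDER")
	if backend == "" {
		backend = "anthropic" // documented default
	}
	switch backend {
	case "anthropic":
		return NewAnthropic(os.Getenv("ANTHROPIC_API_KEY"))
	case "openai":
		return NewOpenAI(os.Getenv("OPENAI_API_KEY"))
	default:
		return nil, fmt.Errorf("unknown EMBERCORE_PROVIDER %q", backend)
	}
}
```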
### Custom base URL support
The OpenAI provider accepts `WithBaseURL(url)` (`internal/provider/openai.go`), enabling the following (see the sketch after this list):
- Ollama — Local open-source models via OpenAI-compatible API
- LM Studio — Local model serving with OpenAI-compatible endpoints
- Azure OpenAI — Enterprise deployments with custom endpoints
- Any OpenAI-compatible proxy — vLLM, LocalAI, etc.
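For instance, targeting a local Ollama server might look like the sketch below; `NewOpenAI` is an assumed constructor name and `WithBaseURL` is assumed to be a functional option passed to it, since this ADR names the option but not the constructor:

```go
import (
	"fmt"
	"log"
)

// Ollama serves an OpenAI-compatible API at /v1 and does not validate
// the key, so any non-empty placeholder works.
func exampleOllama() {
	p, err := NewOpenAI("ollama", WithBaseURL("http://localhost:11434/v1"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(p.Name()) // agents still see only the Provider interface
}
```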
The Anthropic provider also supports `ANTHROPIC_BASE_URL` for proxied or self-hosted Anthropic-compatible endpoints.
### Default models
| Provider | Default Model |
|---|---|
| Anthropic | `claude-sonnet-4-20250514` |
| OpenAI | None built in; set via `EMBERCORE_MODEL` |
## Consequences
Benefits:
- Zero billing liability for the project — users pay their own LLM costs
- Users can choose the best provider/model for their use case
- Local model support (Ollama, LM Studio) enables fully offline operation
- Adding a new provider only requires implementing the `Provider` interface (see the sketch after this list)
- Environment variables are the standard secret-management pattern for CLI tools
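As a sketch of that first point, a new backend compiles as soon as it implements the two methods; everything below except the `Provider` interface itself is hypothetical:

```go
import "context"

// EchoProvider is a hypothetical stub that shows the full surface
// area a new backend must cover.
type EchoProvider struct{}

func (EchoProvider) Name() string { return "echo" }

// Complete would call the backend's HTTP API in a real provider; the
// stub returns an empty response.
func (EchoProvider) Complete(ctx context.Context, messages []Message, opts Options) (*Response, error) {
	return &Response{}, nil
}
```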
Trade-offs:
- Higher onboarding friction — users must obtain and configure API keys before first use
- Provider-specific behavior differences (token limits, function calling support) must be handled gracefully
- No centralized usage analytics or cost tracking across the user base
Related decisions:
- ADR-008, which defines the functional-option pattern (`WithProvider`) that agents use to accept a `Provider`