Composable Elixir library for LLM interactions built on Req
Open source on GitHub →

Add the dependency to your mix.exs:

defp deps do
  [
    {:req_llm, "~> 1.6.0"}
  ]
end
Core ecosystem relationship
ReqLLM is a composable Elixir library for LLM interactions built on Req and Finch. It provides a unified, idiomatic Elixir interface that standardizes requests and responses across LLM providers — eliminating the need to learn and maintain separate client code for each API.
ReqLLM serves as the universal LLM client layer for the Jido ecosystem. It abstracts away provider-specific API differences so that higher-level packages can interact with any supported AI model through a single, consistent interface.
ReqLLM
High-level functions: generate_text/3, stream_text/3, generate_object/4, generate_image/3, model resolution, key management, and provider lookup.
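As a minimal sketch of the high-level API: assuming ReqLLM.generate_text/3 takes a model spec, a prompt, and an options list (the "provider:model" spec string and the :temperature option name here are illustrative assumptions, not confirmed API details), a call might look like:

```elixir
# Illustrative sketch only — the model-spec format and the
# :temperature option name are assumptions, not confirmed details.
{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-3-5-haiku",
    "Summarize the BEAM scheduler in one sentence.",
    temperature: 0.2
  )
```

The same shape would apply to stream_text/3 and generate_object/4, with the extra arity on generate_object/4 presumably carrying the output schema.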
Three-component architecture: Streaming orchestrates flow, StreamServer manages state and SSE events, StreamResponse provides lazy token streams with early cancellation.
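Under stated assumptions about the StreamResponse shape (the tokens/1 accessor below is hypothetical), consuming a lazy token stream with early cancellation might look like:

```elixir
# Hypothetical sketch — the StreamResponse accessor is an assumption.
{:ok, stream_response} =
  ReqLLM.stream_text("openai:gpt-4o-mini", "Write a long story.")

# Take only the first 20 tokens; because the stream is lazy,
# early cancellation lets the underlying connection be shut down
# once the consumer stops demanding tokens.
stream_response
|> ReqLLM.StreamResponse.tokens()   # hypothetical accessor
|> Enum.take(20)
```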
Behaviour-based provider architecture with 15+ concrete implementations: Anthropic, OpenAI, Google, Groq, OpenRouter, xAI, Amazon Bedrock, Cerebras, and more.
Function calling framework with NimbleOptions-compatible parameter schemas, automatic JSON Schema conversion, and callback execution.
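A sketch of what a tool definition could look like, given the NimbleOptions-compatible schemas described above; the keyword keys (:name, :parameter_schema, :callback) are hypothetical, not the library's confirmed field names:

```elixir
# Hypothetical tool definition — all keys here are assumptions.
weather_tool = [
  name: "get_weather",
  description: "Look up current weather for a city",
  # NimbleOptions-compatible parameter schema; ReqLLM converts
  # such schemas to JSON Schema before sending to the provider.
  parameter_schema: [
    city: [type: :string, required: true, doc: "City name"]
  ],
  # Callback executed when the model invokes the tool.
  callback: fn %{city: city} -> {:ok, "Sunny in #{city}"} end
]
```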
Component-based billing calculator with per-million-token cost computation and [:req_llm, :token_usage] telemetry events.
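The [:req_llm, :token_usage] telemetry events can be observed with the standard :telemetry.attach/4 from the Erlang telemetry library; the handler id below is arbitrary, and the measurement/metadata keys are whatever ReqLLM emits:

```elixir
# Attach a handler to ReqLLM's token-usage telemetry event.
:telemetry.attach(
  "log-llm-token-usage",            # arbitrary handler id
  [:req_llm, :token_usage],         # event name from the docs above
  fn _event, measurements, metadata, _config ->
    # Inspect whatever usage/cost data the event carries.
    IO.inspect({measurements, metadata}, label: "LLM token usage")
  end,
  nil
)
```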