# ol.llx

> Unified LLM API and agent runtime for Clojure, ClojureScript, and ClojureDart.

![doc](https://img.shields.io/badge/doc-outskirtslabs-orange.svg)
![status: experimental](https://img.shields.io/badge/status-experimental-orange.svg)
![Clojars Project](https://img.shields.io/clojars/v/com.outskirtslabs/llx.svg)

## ol.llx.ai

`ol.llx.ai` targets JVM Clojure, ClojureScript, and ClojureDart. Under the hood it is built around the API families that cover most of the current LLM ecosystem: OpenAI Completions, OpenAI Responses, Anthropic Messages, and Google Generative AI.

It absorbs provider quirks around streaming, tool calls, reasoning traces, and model behavior to present a unified API without relying on vendor SDKs or Java wrappers.

## ol.llx.agent

`ol.llx.agent` sits on top of `ol.llx.ai` and handles the multi-turn agent loop.

Feature highlights:

* tools run concurrently and can be aborted mid-execution
* the steering queue can interrupt an active run, skip pending tools, and re-prompt
* the follow-up queue can defer messages until the current turn finishes
* event subscriptions expose structured turn, message, and tool lifecycle events
* agent state can be snapshotted and rehydrated by the application
* custom message roles allow app-specific history that is filtered before LLM submission

Project status: **[Experimental](https://docs.outskirtslabs.com/open-source-vital-signs#experimental)**.

## Installation

While the project is in the experimental phase, only git dependency usage is supported.

```clojure
;; deps.edn
{:deps {io.github.outskirtslabs/llx {:git/sha "3427a11a5ddff01d21b6c6fe24374cac270ecf62"}}}
```

## Quick Start

```clojure
(require '[ol.llx.ai :as ai]
         '[ol.llx.agent :as agent]
         '[promesa.exec.csp :as sp])

(def read-tool
  {:name         "read_file"
   :description  "Read the contents of a file at the given path."
   :input-schema [:map [:path :string]]
   :execute      (fn [_id {:keys [path]} _abort _on-update]
                   {:content [{:type :text :text (slurp path)}]})})

(def a (agent/create-agent {:model         (ai/get-model :anthropic "claude-sonnet-4-6")
                            :system-prompt "You are a helpful assistant."
                            :tools         [read-tool]}))

;; subscribe to events
(let [ch (agent/subscribe a)]
  (future
    (loop []
      (when-let [ev (sp/take! ch)]
        (println (:type ev))
        (recur)))))

(agent/prompt a [{:role :user :content "read README.adoc" :timestamp 1}])

(agent/close a)
```

`ol.llx.ai` exposes unified entrypoints at `ol.llx.ai/complete` and `ol.llx.ai/stream`, with `complete*` and `stream*` available when you need provider-specific escape hatches.
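As a rough sketch of how those entrypoints might be used (the exact shape of the request map is an assumption for illustration, not taken from the API reference):

```clojure
(require '[ol.llx.ai :as ai])

;; One-shot completion via the unified entrypoint.
;; NOTE: the request-map keys shown here (:model, :messages) are an
;; illustrative assumption; consult the API reference for the
;; authoritative shape.
(ai/complete {:model    (ai/get-model :anthropic "claude-sonnet-4-6")
              :messages [{:role :user
                          :content "Summarize this repo in one line."}]})

;; `ai/stream` takes a similar request but delivers incremental chunks,
;; while `complete*` and `stream*` drop down to provider-specific
;; parameters when the unified abstraction is not enough.
```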

## Documentation

* [Docs](https://docs.outskirtslabs.com/ol.llx/next/)
* [API Reference](https://docs.outskirtslabs.com/ol.llx/next/api)
* [Example: Minimal Coding Agent](examples/minimal-coding-agent/README.md)
* [Support via GitHub Issues](https://github.com/outskirtslabs/llx/issues)

## License

Copyright (C) 2026 Casey Link

Distributed under the [EUPL-1.2](https://spdx.org/licenses/EUPL-1.2.html).
