# API Reference
Full programmatic API for the Uneven class — methods, events and TypeScript types.
## Constructor

```typescript
import { Uneven } from 'uneven-ai'

const ai = new Uneven(config: UnevenConfig)
```
## Methods
### init()

Initializes the Uneven AI engine: loads the native Rust binding, starts the event loop, and indexes the knowledge base. Must be called before any other method.

```typescript
ai.init(): Promise<void>
```
### watch()

Starts the terminal watcher and autonomous agent. Spawns the configured process and reads its stdout/stderr in real time. The returned promise resolves only after `stop()` is called.

```typescript
ai.watch(): Promise<void>
```
### ask(query)

Queries the knowledge base and returns an LLM response grounded in your indexed project context.

```typescript
ai.ask(query: string): Promise<string>

// Example
const answer = await ai.ask('How does the auth service work?')
```
### pentest()

Runs the security analysis engine. Behavior depends on `pentester.mode` in the config.

```typescript
ai.pentest(): Promise<void>
```
### index()

Re-indexes all configured knowledge sources. Use when you've added new files, updated the database, or changed the config.

```typescript
ai.index(): Promise<void>
```
### stop()

Stops all watchers, releases the Rust native binding, and shuts down gracefully.

```typescript
ai.stop(): Promise<void>
```
### isInitialized() / isRunning()

```typescript
ai.isInitialized(): boolean
ai.isRunning(): boolean

if (!ai.isInitialized()) await ai.init()
if (!ai.isRunning()) await ai.watch()
```
### getLogger()

Returns the logger instance for writing custom entries to `.uneven-ai/log.md`.

```typescript
const logger = ai.getLogger()

await logger.info('Deployment started')

await logger.securityFinding(
  'high',
  'SQL Injection',
  'src/db.ts',
  28,
  'String concatenation in query',
  'Use parameterized queries'
)
```
### getConfig()

Returns the resolved `UnevenConfig` object currently in use.

```typescript
ai.getConfig(): UnevenConfig
```
## Events

All events are emitted during `watch()` and `pentest()`.
```typescript
ai.on('error-detected', (e) => {
  // e.file, e.line, e.column, e.message, e.stackTrace
  console.log(`Error in ${e.file}:${e.line}`)
})

ai.on('fix-applied', (e) => {
  // e.file, e.line, e.before, e.after, e.diff
  console.log(`Fixed ${e.file}`)
})

ai.on('pentest-finding', (e) => {
  // e.severity, e.type, e.file, e.description, e.recommendation
  console.log(`[${e.severity}] ${e.type}`)
})

ai.on('indexed', (e) => {
  // e.totalDocuments, e.totalChunks, e.diskSize
  console.log(`Indexed ${e.totalDocuments} documents`)
})

// Additional event types
ai.on('warning', (e) => { /* non-critical warnings */ })
ai.on('info', (e) => { /* informational messages */ })
```
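The payload shapes can be inferred from the comments in the handlers above. The interfaces below are a sketch: the field names come from those examples, while the exact types are assumptions, not the library's published type definitions.

```typescript
// Sketch of the event payload shapes. Field names come from the handler
// comments above; the types are assumptions, not the real typings.
interface ErrorDetectedEvent {
  file: string
  line: number
  column: number
  message: string
  stackTrace?: string
}

interface FixAppliedEvent {
  file: string
  line: number
  before: string
  after: string
  diff: string
}

interface PentestFindingEvent {
  severity: string
  type: string
  file: string
  description: string
  recommendation: string
}

interface IndexedEvent {
  totalDocuments: number
  totalChunks: number
  diskSize: number
}
```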
## Retrieval-Augmented Fix (RAF)
When Uneven AI detects an error, it runs RAF before falling back to pattern-match heuristics. RAF does a semantic search in your knowledge base to ground the fix suggestion in real project context.
1. Build an optimized query from the error code, message, language, and surrounding context.
2. Run a semantic search in the vector store (similarity threshold: 0.60).
3. If relevant documents are found, prompt the LLM with the KB chunks as context.
4. Parse EXPLANATION / FIX / AFTER_CODE from the response.
5. Fallback: context-only suggestion (no LLM, confidence ≤ 0.70).
6. Final fallback: per-language pattern-match heuristics.
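The fallback cascade above can be sketched as a single function. `suggestFix`, `FixSuggestion`, and the stubbed return values are illustrative names for this sketch, not the engine's actual internal API; the real implementation calls its vector store and LLM provider where the stubs sit.

```typescript
// Minimal sketch of the RAF fallback cascade (steps 2–6 above).
// suggestFix and its inputs are illustrative, not the real API.
interface FixSuggestion {
  explanation: string
  fix: string
  confidence: number
  groundedInKB: boolean
  kbSources: string[]
}

const SIMILARITY_THRESHOLD = 0.6

function suggestFix(
  hits: { source: string; score: number }[],
  llmAvailable: boolean
): FixSuggestion {
  // Step 2: keep only semantically relevant KB hits
  const relevant = hits.filter((h) => h.score >= SIMILARITY_THRESHOLD)

  if (relevant.length > 0 && llmAvailable) {
    // Steps 3–4: LLM prompt with KB chunks as context (stubbed here)
    return {
      explanation: 'LLM-grounded fix',
      fix: '/* AFTER_CODE from LLM */',
      confidence: 0.9,
      groundedInKB: true,
      kbSources: relevant.map((h) => h.source),
    }
  }

  if (relevant.length > 0) {
    // Step 5: context-only suggestion, confidence capped at 0.70
    return {
      explanation: 'Context-only suggestion',
      fix: '/* nearest KB snippet */',
      confidence: Math.min(0.7, relevant[0].score),
      groundedInKB: true,
      kbSources: relevant.map((h) => h.source),
    }
  }

  // Step 6: per-language pattern-match heuristics, not KB-grounded
  return {
    explanation: 'Heuristic fix',
    fix: '/* pattern-based rewrite */',
    confidence: 0.5,
    groundedInKB: false,
    kbSources: [],
  }
}
```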
Fix suggestions expose `groundedInKB: boolean` and `kbSources: string[]` fields identifying which source documents informed the fix.
## Session State Protocol
On every `uneven-ai start`, Uneven AI writes a machine-readable snapshot to `.uneven-ai/session.json`. LLM agents integrating with Uneven AI must read this file before writing any files in the project.
```json
{
  "sessionId": "uuid-...",
  "mode": "development",
  "autoFix": true,
  "knowledge": { "staleness": "fresh" },
  "writeLock": { "active": false, "timeoutMs": 30000 },
  "pendingDiffs": []
}
```

## Write Lock Protocol
Prevents race conditions between external LLM agent writes and Uneven AI's auto-fix engine. The lock auto-releases after 30 seconds. Emergency release: `uneven-ai reset`.
```typescript
import { SessionManager } from 'uneven-ai/core/session'

const session = new SessionManager()

await session.withLock('my-agent', async () => {
  // write files here — Uneven AI won't apply fixes during the lock
})
```
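An external agent might combine the session snapshot with the lock check like this. `SessionSnapshot` and `safeToWrite` are illustrative names based on the `session.json` shape shown above; they are not part of the published uneven-ai API.

```typescript
// Sketch: decide whether it is safe to write files, given the raw
// contents of .uneven-ai/session.json. The interface mirrors the
// example snapshot above; the helper name is illustrative only.
interface SessionSnapshot {
  sessionId: string
  mode: string
  autoFix: boolean
  knowledge: { staleness: string }
  writeLock: { active: boolean; timeoutMs: number }
  pendingDiffs: unknown[]
}

function safeToWrite(raw: string): boolean {
  const s: SessionSnapshot = JSON.parse(raw)
  // Respect an active write lock: Uneven AI may be applying fixes.
  return !s.writeLock.active
}
```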
## LLM Agent Integration
Uneven AI is designed to work alongside other LLM coding agents. Division of responsibility:
| Task | Owner |
|---|---|
| New features, new files, requested logic | LLM agent |
| Runtime and compile-time error detection | Uneven AI |
| Surgical corrections to existing files | Uneven AI |
| Security vulnerability patching | Uneven AI |
| Diff approval / rejection | User |
## Performance
| Metric | Value |
|---|---|
| Vector store memory | ~100 MB per 1M embeddings |
| LLM RAM (local model) | 4–8 GB |
| Indexing speed | ~100 files/second |
| Query latency (local) | 50–500 ms |
| Query latency (API provider) | 100–1000 ms |
| Embedding dimensions | 1024 (L2 normalized) |
| RAF similarity threshold | 0.60 |
## Full example
```typescript
import { Uneven } from 'uneven-ai'
import config from './uneven-ai.config'

const ai = new Uneven(config)

ai.on('error-detected', (e) => {
  console.log(`✗ ${e.file}:${e.line} — ${e.message}`)
})

ai.on('fix-applied', (e) => {
  console.log(`✓ Fixed ${e.file}`)
})

ai.on('pentest-finding', (e) => {
  console.log(`[${e.severity.toUpperCase()}] ${e.description}`)
})

// graceful shutdown — register before watch(), which blocks until stop()
process.on('SIGINT', async () => {
  await ai.stop()
  process.exit(0)
})

await ai.init()
await ai.watch()
```