Thunk
Sovereign Personal AI
A cognitive architecture that runs entirely on your hardware. Your data stays yours. Your AI answers to you.
What is Thunk?
Thunk is the flagship application built on the Thinking System framework. It's a personal AI that combines local-first execution, verifiable behaviours, and a peer-to-peer fabric that belongs to everyone.
Unlike cloud-based AI services, Thunk runs on your own hardware. Your conversations, data, and cognitive processes remain under your exclusive control. There's no silent data collection, no training on your conversations, no third-party access.
Core Principles
Sovereignty
Autonomy over your processes and data. No external party can compel behaviour that contradicts your configured policies.
Transparency
All cognitive processes are introspectable and auditable. Every persistence decision is recorded with full provenance.
Modularity
Loosely coupled, independently verifiable components. Extend through Flows, WASM modules, and custom adapters.
Emergence
Complex behaviour from simple components. Intelligence emerges from composition, not monolithic models.
Visualising Thunk
Thunk is built on the Thinking System—four interlocking Rust crates that form a complete cognitive pipeline. STEEL, SLEET, STELE, and ESTEL handle everything from kernel-level resource management to human-facing output, all running locally on your hardware.
STEEL is the kernel: process isolation, resource scheduling, and the inter-process messaging bus. Above it, SLEET provides a gas-metered bytecode VM with ephemeral execution theatres—sandboxed contexts for untrusted reasoning that are destroyed upon completion.
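As a rough illustration of the gas-metering idea, here is a minimal sketch in Rust. The names (`Theatre`, `Instruction`, `Halt`) are illustrative assumptions, not SLEET's actual API; the point is only the mechanism: every instruction debits a budget, and a theatre that exhausts its budget halts, with the theatre consumed either way.

```rust
// Hypothetical sketch of a gas-metered execution theatre.
// These types are stand-ins, not SLEET's real interface.
#[derive(Debug, PartialEq)]
enum Halt {
    Completed(i64),
    OutOfGas,
}

enum Instruction {
    Push(i64),
    Add,
}

struct Theatre {
    gas: u64,
    stack: Vec<i64>,
}

impl Theatre {
    fn new(gas: u64) -> Self {
        Theatre { gas, stack: Vec::new() }
    }

    // Execute until the program ends or the gas budget runs out.
    // `self` is taken by value, so the theatre is dropped on return:
    // ephemeral by construction, mirroring the destroy-on-completion model.
    fn run(mut self, program: &[Instruction]) -> Halt {
        for instr in program {
            if self.gas == 0 {
                return Halt::OutOfGas;
            }
            self.gas -= 1; // every instruction costs one unit of gas
            match instr {
                Instruction::Push(v) => self.stack.push(*v),
                Instruction::Add => {
                    let b = self.stack.pop().unwrap_or(0);
                    let a = self.stack.pop().unwrap_or(0);
                    self.stack.push(a + b);
                }
            }
        }
        Halt::Completed(self.stack.pop().unwrap_or(0))
    }
}
```

Because untrusted reasoning cannot exceed its budget, a runaway computation fails closed rather than monopolising the host.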
STELE implements the cognitive core: multi-tier memory (episodic, semantic, short-term), the declarative flow engine, and the policy-governed Scribes that mediate between ephemeral computation and durable state. Bitemporal storage tracks not just what you know, but when you learned it and how.
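The bitemporal idea can be sketched as follows. The `Fact` and `Memory` types and their fields are illustrative assumptions, not STELE's schema: each record carries both when the fact was true and when the system learned it, so "as-of" queries can reconstruct what was believed at any past moment.

```rust
// Illustrative bitemporal record; field names are assumptions.
#[derive(Debug, Clone)]
struct Fact {
    content: String,
    valid_from: u64,            // when the fact became true in the world
    recorded_at: u64,           // when the system learned it
    superseded_at: Option<u64>, // when it was retracted; None if still current
    provenance: String,         // which Scribe committed it
}

struct Memory {
    facts: Vec<Fact>,
}

impl Memory {
    // "As-of" query: what did the system believe at transaction time `t`?
    // A fact counts if it was recorded by `t` and not yet superseded.
    fn as_of(&self, t: u64) -> Vec<&Fact> {
        self.facts
            .iter()
            .filter(|f| f.recorded_at <= t && f.superseded_at.map_or(true, |end| end > t))
            .collect()
    }
}
```

Superseded facts are never deleted, only end-dated, which is what makes every persistence decision auditable after the fact.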
ESTEL handles the human–AI interface: intent capture, adaptive visualisation, and bidirectional transformation between human-readable and machine-processable representations.
The visualisation below traces information flow through this architecture. Each node is a real symbol from the codebase. Hover to see its documentation; trace the attention weights connecting layers.
Capabilities
Autonomous Code Synthesis
Generate, validate, and self-heal code from natural language specifications. The system iterates until tests pass.
Structured Memory
Multi-tier memory with episodic, semantic, and short-term layers. All data carries bitemporal timestamps and provenance.
Declarative Workflows
Define reasoning patterns as Flows—graphs of semantic blocks transpiled to deterministic bytecode.
Policy-Governed Persistence
Only trusted Scribes can commit to long-term storage, each evaluating requests against programmable policies.
Peer-to-Peer Network
Optional participation in decentralised compute pools. Access distributed resources while contributing spare cycles.
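The policy-governed persistence described above can be sketched like this. The `Scribe`, `Policy`, and `CommitRequest` names are hypothetical stand-ins, not the real crate API: a Scribe evaluates each commit request against its configured policy, and only permitted writes reach durable storage.

```rust
// Hypothetical Scribe policy gate; all names here are illustrative.
struct CommitRequest {
    key: String,
    value: String,
    origin: String, // which execution context produced the data
}

trait Policy {
    fn permits(&self, req: &CommitRequest) -> bool;
}

// Example policy: only accept commits from a whitelist of trusted origins.
struct TrustedOrigins(Vec<String>);

impl Policy for TrustedOrigins {
    fn permits(&self, req: &CommitRequest) -> bool {
        self.0.iter().any(|o| o == &req.origin)
    }
}

struct Scribe<P: Policy> {
    policy: P,
    store: Vec<(String, String)>, // stand-in for durable storage
}

impl<P: Policy> Scribe<P> {
    // A commit lands in long-term storage only if the policy permits it;
    // rejected requests are dropped and the caller learns the outcome.
    fn commit(&mut self, req: CommitRequest) -> bool {
        if self.policy.permits(&req) {
            self.store.push((req.key, req.value));
            true
        } else {
            false
        }
    }
}
```

Because policies are values the owner configures, swapping in a stricter department-specific rule means constructing a Scribe with a different `Policy` implementation, not modifying the storage layer.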
Use Cases
Personal Assistant
Deploy on your personal hardware for a truly private assistant. Manage schedules, finances, health records—all under your exclusive control.
Enterprise
Each department deploys Scribes with specific governance policies. Cross-functional collaboration without compromising data sovereignty.
Development
Accelerate implementation with the generate-validate-heal loop. Natural language to working code with full provenance.
Research
Access decentralised compute for large-scale experiments. Publish results with cryptographic provenance.
System Requirements
- Recommended: 8+ cores, 32 GB RAM, 100 GB SSD, NVIDIA GPU
- Platforms: macOS 12+, Linux (Ubuntu 22.04+, Fedora 38+), Windows via WSL2
- Architectures: x86_64 and aarch64 (Apple Silicon)