Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| elmadani | 22a945d07c | docs: fix clone URL (salka → elmadani) | 2026-02-24 22:29:28 +00:00 |
| elmadani | dce92e0808 | docs: fix Sponsor link → SPONSOR.md | 2026-02-24 22:29:28 +00:00 |
| elmadani | cc6c404898 | docs: fix contributor profile URL | 2026-02-24 22:27:50 +00:00 |
| elmadani | 4295295753 | docs: update SPONSOR.md — personal presentation page | 2026-02-24 22:27:15 +00:00 |
| Elmadani | e2bb3b8bdd | security: remove internal codename references from comments | 2026-02-24 22:10:26 +00:00 |
| Elmadani | 781bf6ba80 | security: remove internal commentary from build constants | 2026-02-24 21:52:31 +00:00 |
| salka | 9211aeb67c | clean: remove ifrane.pdf, fix FUNDING.yml H5 pure | 2026-02-24 00:02:41 +00:00 |
| salka | baf30969c1 | update: VISION.md H5-pure + README.md signal quote (removed solar/geographic references; replaced with universal low-power inference framing; all content now H5-safe for public consumption) | 2026-02-24 00:02:00 +00:00 |
| salka | b76a9d3c68 | chore: migrate all URLs github.com -> git.inference-x.com | 2026-02-23 23:38:00 +00:00 |
| Salka Elmadani | 1208c6d521 | README: accessible to everyone, technical depth preserved | 2026-02-23 10:39:05 +00:00 |
25 changed files with 297 additions and 242 deletions

.github/FUNDING.yml (vendored, 3 changes)

@@ -1,6 +1,5 @@
# Inference-X — Universal Inference Protocol
# Free for individuals, researchers, and small teams.
# Your support funds development, servers, and solar inference research.
# Your support funds development and server infrastructure.
github: ElmadaniS
custom: ["https://paypal.me/ELMADANISALKA"]

@@ -2,7 +2,7 @@
## Creator & Lead Developer
- **Salka Elmadani** — Architecture, implementation, and all original code
- GitHub: [@ElmadaniS](https://github.com/ElmadaniS)
- Git: [@elmadani](https://git.inference-x.com/elmadani)
- Email: Elmadani.SALKA@proton.me
## Infrastructure Partners

NOTICE (2 changes)

@@ -15,7 +15,7 @@ AUTHOR
Location: Morocco
Contact: Elmadani.SALKA@proton.me
Website: https://inference-x.com
Repository: https://github.com/ElmadaniS/inference-x
Repository: https://git.inference-x.com/salka/inference-x
Origin: Morocco 🇲🇦
────────────────────────────────────────────────────────────────

README.md (326 changes)

@@ -1,262 +1,196 @@
# Inference-X
[![Build](https://github.com/ElmadaniS/inference-x/actions/workflows/build.yml/badge.svg)](https://github.com/ElmadaniS/inference-x/actions/workflows/build.yml)
[![Release](https://img.shields.io/github/v/release/ElmadaniS/inference-x)](https://github.com/ElmadaniS/inference-x/releases)
[![License](https://img.shields.io/badge/license-BSL--1.1-blue)](LICENSE)
[![Binary Size](https://img.shields.io/badge/binary-305%20KB-brightgreen)](TECHNOLOGY.md)
[![Backends](https://img.shields.io/badge/backends-19-orange)](ARCHITECTURE.md)
**Better output from the same model.**
**Run AI on your own computer. Private. Free. No internet.**
One binary routes any AI model to any hardware — from a microcontroller to a datacenter. Fused computation, adaptive precision, surgical expert loading. No dependencies. No framework. No vendor lock-in.
Inference-X is a tiny file (305 KB) that lets any computer run AI models locally. It works on old laptops, phones, Raspberry Pi, and datacenters — same file, no setup. Your questions stay on your machine. Nobody sees them.
305 KB. 19 hardware backends. Any model. Any scale.
**[Website](https://inference-x.com)** · **[How it works](TECHNOLOGY.md)** · **[Benchmarks](BENCHMARKS.md)** · **[Vision](VISION.md)** · **[Sponsor](SPONSOR.md)**
Built in Morocco by [Salka Elmadani](https://x.com/ElmadaniSa13111).
---
> *In the Anti-Atlas, our ancestors built khettaras — underground water channels that deliver pure water to villages without pumps, without electricity, without filtration. The water arrives cleaner than any treated supply because the path itself is the filter. Inference-X works the same way: the shortest path produces the cleanest signal.*
## Start in 30 seconds
**[Website](https://inference-x.com)** · **[How it works](TECHNOLOGY.md)** · **[Benchmarks](BENCHMARKS.md)** · **[Vision](VISION.md)** · **[Sponsor](https://github.com/sponsors/ElmadaniS)**
```bash
git clone https://git.inference-x.com/salka/inference-x
cd inference-x && make
./inference-x model.gguf
```
That's it. Download a `.gguf` model from [HuggingFace](https://huggingface.co/models?sort=trending&search=gguf), run the command, talk to AI. No account. No API key. No internet.
Add `--serve 8080` to get a web interface at `localhost:8080`.
---
## What can your computer run?
| Your RAM | Models you can run | What it can do |
|---|---|---|
| **2 GB** | SmolLM2 135M | Simple assistant, quick answers |
| **4 GB** | Phi-3 Mini 3.8B, Llama 3.2 3B | Smart conversations, code help, translations |
| **8 GB** | Mistral 7B, Llama 3.1 8B | Creative writing, analysis, reasoning |
| **16 GB** | DeepSeek R1 14B | Advanced reasoning, expert-level answers |
| **32 GB** | Qwen 2.5 32B | Professional-grade AI |
| **64 GB** | Llama 3.1 70B, DeepSeek V3 MoE | Frontier performance, locally |
Every model runs privately, offline, with no subscription.
---
## Why local AI matters
When you use AI online, your words travel to a server in another country. Someone can read them. You pay per word. The service can shut down.
With Inference-X, your questions stay on your desk. The answer is computed by your own processor. Nothing leaves. Nothing is stored. It works without internet. It's free forever.
---
## What makes it different
Most inference engines add layers between the model and the hardware: frameworks, runtime allocators, intermediate buffers, uniform precision pipelines. Each layer adds computational overhead that degrades the model's original signal.
Most inference engines add layers between the model and the hardware: frameworks, runtime allocators, intermediate buffers. Each layer degrades the model's signal.
Inference-X removes those layers.
**Fused computation** — Dequantization and matrix multiply happen in a single instruction loop. No intermediate FP32 buffer. Fewer rounding operations means output closer to the model's theoretical FP32 maximum.
**Fused computation** — Dequantization and matrix multiply happen in a single instruction loop. No intermediate FP32 buffer. Output closer to the model's theoretical maximum.
**Adaptive precision** — Each query is analyzed before inference. Simple questions get compressed early layers and full-precision decision layers. Complex reasoning gets full precision throughout. The model adapts its depth to the question — same file, same binary, different computational path.
**Adaptive precision** — Each query is analyzed before inference. Simple questions get compressed early layers and full-precision decision layers. Complex reasoning gets full precision throughout.
**Surgical expert loading** — For Mixture-of-Experts models, only active experts exist in memory. Inactive experts are evicted at the OS level. Result: a 1-trillion-parameter model runs on 17 GB of RAM. The signal path contains only what contributes to the current token.
**Surgical expert loading** — For Mixture-of-Experts models, only active experts exist in memory. A 1-trillion-parameter model runs on 64 GB of RAM.
The result: **the same model produces higher-fidelity output through a cleaner computation path.** Or equivalently: a smaller model through Inference-X can match a larger model through a conventional engine.
The result: **the same model produces better output through a cleaner computation path.** A smaller model through Inference-X can match a larger model through a conventional engine.
→ [Full technical explanation](TECHNOLOGY.md)
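To make the fused-computation claim concrete, here is a minimal sketch of the two paths over a simplified Q8_0-style block (one scale per 32 int8 weights). It is illustrative only; the struct and function names are assumptions, not the project's actual `gemm.h`:
```cpp
// Sketch, not project code: contrasts a buffered dequant-then-dot path
// with a fused dequant+dot over simplified Q8_0-style blocks.
#include <cstddef>
#include <cstdint>

struct BlockQ8 { float scale; int8_t q[32]; };  // simplified; real GGUF stores fp16 scales

// Conventional path: materialize 32 dequantized floats, then multiply-accumulate.
float dot_buffered(const BlockQ8* w, const float* x, size_t nblocks) {
    float acc = 0.0f;
    for (size_t b = 0; b < nblocks; ++b) {
        float tmp[32];                                   // intermediate FP32 buffer
        for (int i = 0; i < 32; ++i) tmp[i] = w[b].q[i] * w[b].scale;
        for (int i = 0; i < 32; ++i) acc += tmp[i] * x[b * 32 + i];
    }
    return acc;
}

// Fused path: accumulate inside the block, apply the scale once, no buffer.
float dot_fused(const BlockQ8* w, const float* x, size_t nblocks) {
    float acc = 0.0f;
    for (size_t b = 0; b < nblocks; ++b) {
        float block_acc = 0.0f;
        for (int i = 0; i < 32; ++i) block_acc += w[b].q[i] * x[b * 32 + i];
        acc += block_acc * w[b].scale;                   // one multiply per block
    }
    return acc;
}
```
The fused loop touches each weight once and applies the block scale a single time, which is where the removed buffer and the reduced number of rounding steps come from.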
---
## What it is
## How it works
TCP/IP routes data packets to any network, any hardware, any destination. The protocol doesn't care about the wire.
TCP/IP routes data packets to any network. Inference-X routes intelligence to any silicon.
Inference-X routes intelligence to any silicon. The protocol doesn't care about the chip.
One function call enters `kernel_dispatch.h`. On the other side: CPU, GPU, TPU, LPU, IPU, FPGA, DSP, or WSE. The caller doesn't know. Doesn't need to. The model runs. The answer comes back.
One function call enters `kernel_dispatch.h`. On the other side: CPU, GPU, TPU, LPU, IPU, FPGA, DSP, or WSE. The model runs. The answer comes back.
```
Model (any GGUF) → Inference-X (305 KB) → Silicon (any of 19 backends) → Response
```
The model describes itself. The engine reads the description. The engine never assumes.
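As a hedged illustration of that single entry point (every name and signature below is an assumption for the sketch, not the real `kernel_dispatch.h` interface):
```cpp
// Conceptual sketch of one-call hardware routing; names are hypothetical.
#include <cstddef>

enum class Backend { CpuAvx2, Cuda, Metal /* ...19 targets in the real engine */ };

// Stand-in scalar GEMV so the sketch is self-contained.
static void gemv_scalar(const float* w, const float* x, float* y,
                        size_t rows, size_t cols) {
    for (size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (size_t c = 0; c < cols; ++c) acc += w[r * cols + c] * x[c];
        y[r] = acc;
    }
}

// The caller makes one call and never learns which silicon answered.
void kernel_dispatch(Backend be, const float* w, const float* x, float* y,
                     size_t rows, size_t cols) {
    switch (be) {
        case Backend::Cuda:    // a real build would launch the CUDA kernel here
        case Backend::Metal:   // a real build would encode a Metal command here
        case Backend::CpuAvx2: // a real build would pick the AVX2 kernel here
        default:               gemv_scalar(w, x, y, rows, cols);  // sketch fallback
    }
}
```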
## Quick Start
```bash
git clone https://github.com/ElmadaniS/inference-x
cd inference-x
make
# Download a model (any GGUF from Hugging Face)
./inference-x model.gguf -p "Hello, world"
```
Architecture:
```
infer.cpp (570 lines) — Orchestrator. Chat templates. Server mode.
transformer_v6.h — Forward pass. Dense + MoE + MLA unified.
kernel_dispatch.h — Routes GEMM to the right silicon.
moe_mla.h — Expert selection. Prefetch. Eviction.
gemm.h — Fused dequant × matmul kernels.
backends.h — 19 hardware targets. One interface.
```
That's it. One binary. One command. Any model.
## Why it matters
Running a model today requires choosing a stack: CUDA for NVIDIA, ROCm for AMD, Metal for Apple, TensorRT for serving, vLLM for throughput, Ollama for local. Each stack locks you to a vendor, a way of thinking, and adds its own computational overhead between the model and the result.
Inference-X eliminates the stack. There is no stack. There's a model file, a binary, and your hardware — whatever it is.
```
GPU cluster: 1T parameters on 8× H100 ~5.6 kW, $200,000+/year
Inference-X: 1T parameters on 256 GB RAM ~300 W, €4,800/year
Same model. Cleaner output. 97% less cost.
```
This isn't about replacing GPUs. It's about making the choice of silicon irrelevant to the act of thinking — and getting *better* results from the silicon you already have.
## Who is this for
**Every organization that runs AI models — or wants to.**
| Sector | Problem | What IX does |
|--------|---------|-------------|
| **Healthcare** | Patient data can't leave the hospital. Cloud inference = compliance risk. | Air-gapped inference on hospital hardware. Zero network calls. HIPAA/GDPR by architecture. |
| **Defense & Government** | Sovereign AI requires sovereign infrastructure. | Runs on government-owned hardware. No vendor dependency. No telemetry. Auditable source. |
| **Finance** | Trading models need low latency and full auditability. | On-premise inference, deterministic output, no external calls. |
| **Telecom** | Edge inference at cell towers for real-time processing. | 305 KB binary deploys on edge hardware. Adaptive precision matches available power. |
| **Automotive** | In-vehicle AI needs minimal footprint and guaranteed response. | Runs on ARM/Snapdragon. No framework overhead. Fits in L2 cache. |
| **Startups** | GPU costs eat runway. $200K/year for inference infrastructure. | Same model quality at 97% lower cost. CPU-only. Scale when you're ready. |
| **Enterprise** | Vendor lock-in across NVIDIA, AMD, Intel, cloud providers. | 19 backends. One binary. Switch hardware without changing code. |
| **Research & Education** | Limited compute budgets. Students can't afford H100s. | Free under BSL-1.1. Run 14B models on a €20/month server. |
| **Embedded / IoT** | AI on microcontrollers with KB-level memory budgets. | Compiles for ESP32. Surgical loading keeps memory minimal. |
| **Cloud Providers** | Offering inference services at competitive margins. | Higher output quality per compute dollar. 19 backends = any customer hardware. |
Inference-X has zero friction with existing infrastructure. It doesn't replace your hardware — it makes your hardware work better.
## Get started
```bash
# Build (30 seconds)
git clone https://github.com/ElmadaniS/inference-x.git
cd inference-x && make -j$(nproc)
# Chat with any GGUF model
./inference-x model.gguf -i
# Or start a web interface
python3 web/ix_server.py
# Or run as an OpenAI-compatible API
./inference-x model.gguf --serve --port 8080
```
Three commands. No dependencies. No Docker. No Python packages. No GPU drivers. Just `make` and run.
12,571 lines of C++17. 6 architectures (Llama, Qwen2, Gemma2, Phi, DeepSeek MoE, MLA). 23 quantization formats. One binary.
---
## Benchmarks
Real numbers on a €20/month AMD EPYC server. CPU-only. No GPU. Cold start.
AMD EPYC Rome · 17 GB RAM · 6 cores · CPU-only · €20/month server
| Model | Params | Quant | tok/s |
|-------|--------|-------|-------|
| SmolLM2 | 135M | Q8_0 | **130.23** |
| Llama 3.2 | 3B | Q4_K_M | **3.82** |
| Qwen 2.5 | 3B | Q4_K_M | **3.85** |
| Mistral 7B | 7B | Q4_K_M | **2.06** |
| Qwen 2.5 | 7B | Q4_K_M | **1.82** |
| Llama 3.1 | 8B | Q4_K_M | **1.75** |
| Gemma 2 | 9B | Q4_K_M | **1.28** |
| DS-R1 Qwen | 14B | Q4_K_M | **0.97** |
| Model | Params | Quant | tok/s | Prefill |
|---|---|---|---|---|
| SmolLM2 | 135M | Q8_0 | **130.23** | 87 ms |
| Qwen 2.5 | 3B | Q4_K_M | **3.85** | 16.5 s |
| Llama 3.2 | 3B | Q4_K_M | **3.82** | 3.8 s |
| Mistral 7B | 7B | Q4_K_M | **2.06** | 39.2 s |
| Llama 3.1 | 8B | Q4_K_M | **1.75** | 43.0 s |
| DeepSeek R1 | 14B | Q4_K_M | **0.97** | 74.1 s |
9/10 architectures passing. Chat templates auto-detected. Zero manual configuration.
9 models · 4 architectures · Same binary · Zero configuration
→ [Full benchmark details](BENCHMARKS.md)
→ [Full benchmarks](BENCHMARKS.md)
---
## Supported Hardware
| Backend | Silicon | Status |
|---------|---------|--------|
| CPU (AVX2/AVX-512) | Intel, AMD | ✅ Production |
| Backend | Target | Status |
|---|---|---|
| CPU AVX2/512 | Intel, AMD | ✅ Production |
| CUDA | NVIDIA GPU | ✅ Production |
| ROCm | AMD GPU | ✅ Production |
| Metal | Apple Silicon | ✅ Production |
| Vulkan | Cross-platform GPU | ✅ Production |
| ARM NEON | ARM processors | ✅ Production |
| Snapdragon | Qualcomm (GPU+DSP+NEON) | 🔧 Ready |
| Hexagon HVX | Qualcomm DSP | 🔧 Ready |
| OpenCL | Cross-platform | 🔧 Ready |
| WebGPU | Browser | 🔧 Ready |
| TPU | Google | 🔧 Ready |
| Inferentia | AWS | 🔧 Ready |
| Gaudi | Intel HPU | 🔧 Ready |
| Maia | Microsoft | 🔧 Ready |
| SambaNova RDU | SambaNova | 🔧 Ready |
| Graphcore IPU | Graphcore | 🔧 Ready |
| Groq LPU | Groq | 🔧 Ready |
| FPGA (Xilinx) | Xilinx | 🔧 Ready |
| Cerebras WSE | Cerebras | 🔧 Ready |
| Vulkan | Cross-platform | ✅ Production |
| ARM NEON | ARM (Pi, phones) | ✅ Production |
| Snapdragon | Qualcomm | 🔶 Ready |
| Hexagon HVX | Qualcomm DSP | 🔶 Ready |
| TPU | Google | 🔶 Ready |
| Inferentia | AWS | 🔶 Ready |
| Gaudi | Intel HPU | 🔶 Ready |
| Maia | Microsoft | 🔶 Ready |
| SambaNova RDU | SambaNova | 🔶 Ready |
| Graphcore IPU | Graphcore | 🔶 Ready |
| Groq LPU | Groq | 🔶 Ready |
| Cerebras WSE | 850K cores | 🔶 Ready |
| FPGA | Xilinx | 🔶 Ready |
| WebGPU | Browser | 🔶 Ready |
| OpenCL | Universal | 🔶 Ready |
The Makefile detects your hardware. You don't configure it.
## Architecture
```
infer.cpp ← Entry point (571 lines)
├── runtime/
│ ├── gguf.h ← GGUF parser + config extraction
│ ├── tokenizer.h ← Tokenizer with byte-level BPE
│ ├── transformer_v6.h ← Universal forward pass
│ ├── attention.h ← GQA attention
│ ├── moe_mla.h ← MoE + MLA (DeepSeek V3)
│ ├── gemm.h ← Fused GEMV kernels
│ ├── kernels.h ← RMS norm, softmax, RoPE, SiLU
│ ├── kernel_dispatch.h ← Hardware routing layer
│ ├── server.h ← OpenAI-compatible API server
│ └── ...
├── core/
│ ├── iq_tables.h ← IQ quantization lookup tables
│ └── z_core.h ← Mathematical foundation
└── backends/
└── q4_kernels/ ← Per-hardware kernel implementations
```
One forward pass handles: dense transformers, Mixture-of-Experts, Multi-head Latent Attention, grouped-query attention, fused QKV tensors, and every combination.
→ [Detailed architecture](ARCHITECTURE.md) · [How the technology works](TECHNOLOGY.md)
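The surgical expert loading described above (`moe_mla.h`: expert selection, prefetch, eviction) can be pictured as page-level advice on a memory-mapped model file. A hedged sketch, assuming Linux `madvise` semantics and hypothetical names; the actual mechanism may differ:
```cpp
// Illustrative sketch, not moe_mla.h: evict and prefetch experts in a
// memory-mapped GGUF at the OS page level (Linux-specific).
#include <sys/mman.h>
#include <cstddef>

struct ExpertRegion {
    void*  base;   // page-aligned start of this expert's weights in the mmap
    size_t bytes;  // length of the region
};

// Inactive expert: drop its pages. The kernel re-reads them from the file
// on the next access, so nothing is copied and no heap is involved.
inline void evict_expert(const ExpertRegion& e) {
    madvise(e.base, e.bytes, MADV_DONTNEED);
}

// Routed expert for the next token: ask the kernel to stream its pages in early.
inline void prefetch_expert(const ExpertRegion& e) {
    madvise(e.base, e.bytes, MADV_WILLNEED);
}
```
Under a scheme like this, resident memory tracks only the experts the router actually activates, which is how a very large MoE checkpoint can run in a fraction of its file size.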
## Features
- **Higher fidelity output** — Fused dequant+dot kernels eliminate intermediate buffers. Fewer rounding operations = output closer to the model's FP32 theoretical maximum.
- **Adaptive precision** — Shannon entropy analysis determines per-layer quantization. Simple queries run faster. Complex reasoning gets full depth. The model breathes. (See the sketch after this list.)
- **Surgical expert loading** — MoE models load only active experts. 48× I/O reduction. Clean signal path with zero interference from unused parameters.
- **Universal model support** — LLAMA, QWEN2, PHI3, GEMMA2, DEEPSEEK, KIMI. Dense and MoE. The model changes, the protocol doesn't.
- **23 native quantization formats** — Q2_K through FP32. No format conversion. The engine speaks the model's native dialect.
- **19 hardware backends** — CPU, GPU, TPU, LPU, IPU, FPGA, DSP, WSE. One binary, every silicon.
- **305 KB binary** — Fits in L2 cache. The engine is invisible. You hear the model, not the framework.
- **Auto chat template** — ChatML, Llama 3, Mistral, Gemma, Phi-3, Kimi. Detected from GGUF metadata. Zero configuration.
- **OpenAI-compatible API** — `./inference-x model.gguf --serve` gives you `/v1/chat/completions`. Drop-in replacement.
- **Web interface** — Built-in chat UI. `python3 web/ix_server.py` and open your browser.
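A minimal sketch of the entropy gate mentioned in the adaptive-precision bullet; the thresholds and tier names are invented for illustration, not the engine's real heuristics:
```cpp
// Illustrative entropy gate: map a query's token distribution to a precision tier.
#include <cmath>
#include <cstddef>

// Shannon entropy in bits of a normalized token-frequency distribution.
double shannon_entropy(const double* p, size_t n) {
    double h = 0.0;
    for (size_t i = 0; i < n; ++i)
        if (p[i] > 0.0) h -= p[i] * std::log2(p[i]);
    return h;
}

enum class Tier { Q2Fast, Q4Balanced, Fp16Full };

// Hypothetical thresholds: low-entropy prompts take the compressed path,
// high-entropy reasoning gets full depth.
Tier pick_tier(double entropy_bits) {
    if (entropy_bits < 3.0) return Tier::Q2Fast;
    if (entropy_bits < 5.0) return Tier::Q4Balanced;
    return Tier::Fp16Full;
}
```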
---
## API Server
```bash
./inference-x model.gguf --serve --port 8080
```
Drop-in replacement for OpenAI:
Start with `--serve 8080`. OpenAI-compatible API. Any client library works.
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
response = client.chat.completions.create(
resp = client.chat.completions.create(
model="local",
messages=[{"role": "user", "content": "Hello"}]
messages=[{"role": "user", "content": "Hello!"}],
stream=True
)
```
## Contributing
We welcome contributions:
- **Backends** — Port kernel implementations to new hardware
- **Models** — Add new architectures and quantization formats
- **Benchmarks** — Run benchmarks on diverse hardware
- **Documentation** — Tutorials, guides, translations
See [CONTRIBUTING.md](CONTRIBUTING.md) for details.
## License
[Business Source License 1.1](LICENSE) — Free for individuals, researchers, and small teams. Commercial use requires a license. Converts to open source in 2030.
See [NOTICE](NOTICE) for full terms.
## Acknowledgments
- **[Infomaniak](https://infomaniak.com)** — Swiss hosting partner
- **[Hetzner](https://hetzner.com)** — High-performance compute
Endpoints: `POST /v1/chat/completions` · `POST /v1/completions` · `GET /v1/models` · `GET /health`
---
<p align="center">
<a href="https://inference-x.com">inference-x.com</a> ·
<a href="https://x.com/ElmadaniSa13111">@ElmadaniSa13111</a> ·
<a href="https://github.com/sponsors/ElmadaniS">Sponsor</a>
<br><br>
<em>Built in Morocco for the world.</em>
</p>
## Features
- **Universal GGUF** — Any model, any architecture, auto-detected from metadata
- **Chat templates** — 7 formats auto-detected (Llama, ChatML, Alpaca, Gemma, Phi, Mistral, DeepSeek). (See the detection sketch after this list.)
- **Multi-EOS** — Correct stop tokens for every architecture
- **Server mode** — OpenAI-compatible API, streaming, health check
- **Air-gapped** — No network calls during inference. No telemetry. Ever.
- **Zero configuration** — Download a model, run it. Templates, tokens, architecture: auto.
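For the template auto-detection above, a sketch of how GGUF metadata can drive it. Reading the standard `tokenizer.chat_template` key is normal GGUF practice; the marker heuristics below are illustrative assumptions, not the project's detector:
```cpp
// Illustrative detector: classify the Jinja template string stored in the
// GGUF metadata key "tokenizer.chat_template" by its distinctive markers.
#include <string>

enum class ChatTemplate { ChatML, Llama3, Mistral, Gemma, Phi, Unknown };

ChatTemplate detect_template(const std::string& tmpl) {
    if (tmpl.find("<|im_start|>") != std::string::npos)        return ChatTemplate::ChatML;
    if (tmpl.find("<|start_header_id|>") != std::string::npos) return ChatTemplate::Llama3;
    if (tmpl.find("[INST]") != std::string::npos)              return ChatTemplate::Mistral;
    if (tmpl.find("<start_of_turn>") != std::string::npos)     return ChatTemplate::Gemma;
    if (tmpl.find("<|user|>") != std::string::npos)            return ChatTemplate::Phi;
    return ChatTemplate::Unknown;
}
```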
---
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md). Run `make` to build. Run `make test` to test. Submit a PR.
We welcome contributions from everyone, regardless of experience level. If you're new to open source, look for issues tagged `good first issue`.
---
## License
[BSL-1.1](LICENSE) — Business Source License
**Free for**: individuals, researchers, students, open-source projects, organizations under $1M revenue.
**Change date**: February 12, 2030 → [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
After 2030, everything becomes fully open source. Patents remain protected.
---
## Acknowledgments
Built in Morocco for the world by [Salka Elmadani](https://x.com/ElmadaniSa13111).
> *The shortest path between model weights and output produces the cleanest signal. Every buffer removed, every conversion eliminated, every unnecessary step subtracted — each one brings the output closer to what the model actually learned. The path itself is the filter.*
**[Website](https://inference-x.com)** · **[Sponsor](SPONSOR.md)** · **[Contact](mailto:Elmadani.SALKA@proton.me)**

SPONSOR.md (new file, 128 changes)

@@ -0,0 +1,128 @@
# Salka Elmadani — Building Inference-X
> *The best engine is the one you don't notice.*
> *You should hear the model, not the framework.*
---
I'm an engineer from Morocco's Anti-Atlas.
I build AI infrastructure. Not products, not demos, not wrappers around someone else's API. Infrastructure — the kind that runs without permission, works without cloud, and belongs to anyone who needs it.
**Inference-X** is a 305 KB binary that runs any AI model on any hardware. No framework. No internet. No account. Download a model, run it, talk to it. That's it.
I built it alone. I'm still building it alone. This page is why.
---
## What I'm building
The problem isn't the models. The models are extraordinary. The problem is the layer between the weights and the human — the inference stack. It's bloated, cloud-dependent, and controlled by a handful of companies.
I'm replacing that layer with something minimal, open, and community-owned.
```
Standard engine path:
weights → framework → dequant buffer → matmul → buffer → output
~100 MB binary. 5 steps. Rounding errors at each boundary.
Inference-X:
weights → fused dequant+dot → output
305 KB binary. 2 steps. Zero buffer. Zero noise.
```
Same model. Cleaner signal. Every unnecessary step removed.
---
## The ecosystem
| Project | What it does | Status |
|---------|-------------|--------|
| **[inference-x](https://git.inference-x.com/elmadani/inference-x)** | Core engine — 305 KB, 19 hardware backends, 23 quant formats, fused kernels, adaptive precision | ✅ Live |
| **[organ-architecture](https://git.inference-x.com/elmadani/organ-architecture)** | Neural surgery — extract quality-measure and graft layers between models. Build composite intelligence from the best parts of everything. | ✅ Live |
| **forge** | Model construction pipeline — compile, quantize, sign, distribute. Build your own model variant from certified organs. | 🔨 Building |
| **[echo-ix](https://git.inference-x.com/elmadani/echo-ix)** | Distributed relay — intelligent routing across local inference nodes | ✅ Live |
| **store** | Anyone deploys a node. Anyone earns from their compute. The cooperative layer. 11 geological cratons. One network. | 📐 Designed |
The store is the endgame: a peer-to-peer inference network where anyone with a laptop can become infrastructure. No data center required.
---
## The khettara
In the Moroccan desert, builders carved underground canals — *khettaras* — that deliver water from mountain aquifers to fields using only gravity. No pump, no electricity, no central authority. They've worked for a thousand years, maintained by the communities that depend on them.
Inference-X is a khettara for intelligence.
The intelligence already exists in the model weights. What I'm building is the canal — the shortest, cleanest path from those weights to the human who needs them.
---
## Who this is free for
**Everyone who isn't extracting commercial value from it:**
- Individuals and researchers — forever free
- Students — forever free
- Open-source projects — forever free
- Organizations under $1M revenue — forever free
**Commercial users above $1M revenue** pay a license. 20% of that flows back to the community that built the infrastructure.
In 2030, it all becomes Apache 2.0. Everything open. The canal belongs to everyone.
This isn't charity. It's a sustainable model — those who profit from it fund it. Those who don't, use it freely.
---
## Why I need support
Servers cost money. The current infrastructure — [inference-x.com](https://inference-x.com), [build.inference-x.com](https://build.inference-x.com), [git.inference-x.com](https://git.inference-x.com) — runs on €53/month.
More importantly: time. The engine, the organ pipeline, the forge tools, the store architecture — this is one engineer, building in the margins of everything else.
There is no team. No VC. No roadmap driven by investor pressure.
There is one person who decided this infrastructure should exist.
---
## How to help
### Build with me
The most valuable contribution is code. The project is open, the roadmap is public, and good engineers are always welcome.
**→ Pick a task**: [git.inference-x.com/elmadani/inference-x](https://git.inference-x.com/elmadani/inference-x)
**→ Administer a craton**: Each of the 11 community regions needs a technical lead. Write to [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — subject: `Craton — [your region]`
### Sustain the infrastructure
**PayPal** → [paypal.me/elmadanisalka](https://paypal.me/elmadanisalka)
€5 = one day of server time. €53 = one month of everything running.
### Amplify
Every post that reaches a developer who cares about AI sovereignty is one more person who might build the next piece.
**→ [Follow on X: @ElmadaniSa13111](https://x.com/ElmadaniSa13111)**
---
## Contact
I respond to everyone who writes with something real to say.
| | |
|--|--|
| **X** | [@ElmadaniSa13111](https://x.com/ElmadaniSa13111) — fastest response |
| **Email** | [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — for technical discussions, partnerships, craton applications |
| **Code** | [@elmadani on Gitea](https://git.inference-x.com/elmadani) |
| **Web** | [inference-x.com](https://inference-x.com) |
---
*Morocco → the world.*
*Salka Elmadani, 2024–2026*

@@ -181,7 +181,7 @@ Kimi K2.5 on Inference-X:
## Try it
```bash
git clone https://github.com/ElmadaniS/inference-x
git clone https://git.inference-x.com/elmadani/inference-x
cd inference-x
make
./inference-x model.gguf -p "Hello"

@@ -90,19 +90,15 @@ Intelligence doesn't need to be expensive. It needs to be *clean*.
---
## Solar inference
## Low-power inference
Every hour, the Sun delivers more energy to Earth than humanity uses in a year. 173,000 terawatts, falling on deserts, rooftops, forgotten places.
Adaptive precision was built for signal quality. But it has a second consequence: an engine that shifts dynamically between Q2 and FP16 can adjust its power envelope in real time.
If inference requires 515 kW per rack, you need solar farms and battery banks.
Full precision when power is abundant. Compressed when it's constrained. Minimal when running on battery.
If inference requires 25 watts, you need a camping panel.
A standard inference rack draws 515 kW. Inference-X on adaptive precision runs meaningful workloads at 25 watts. That's the difference between needing a power plant and needing a panel.
Adaptive precision was built for a different reason. But it turns out: an engine that can dynamically shift between Q2 and FP16 is exactly what solar inference needs. When the Sun is high, full precision. At twilight, compressed. At night, minimal.
The engine breathes with the Sun like it breathes with the question.
The first solar deployment target is 2026. Anti-Atlas, Morocco. 320 days of sun per year. The nearest datacenter is 1,000 kilometers away.
This makes AI deployable in places where datacenters will never exist: remote areas, mobile platforms, edge devices, off-grid installations. The engine adapts to whatever energy is available.
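One way to picture the power envelope, in a sketch with invented watt thresholds (only the 25 W figure comes from the text above):
```cpp
// Illustrative mapping from available power to the deepest affordable precision.
enum class Precision { Q2, Q4, Q8, Fp16 };

Precision precision_for_budget(double watts_available) {
    if (watts_available >= 200.0) return Precision::Fp16; // abundant wall power
    if (watts_available >= 60.0)  return Precision::Q8;   // laptop on mains
    if (watts_available >= 25.0)  return Precision::Q4;   // small panel or battery
    return Precision::Q2;                                  // minimal envelope
}
```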
---
@@ -113,8 +109,8 @@ We don't announce timelines. We announce results.
- The engine is done. 305 KB. Running in production.
- The technology page explains how it works: [TECHNOLOGY.md](TECHNOLOGY.md)
- The benchmarks are real: [BENCHMARKS.md](BENCHMARKS.md)
- The web interface is live: [inference-x.com](https://inference-x.com)
- The solar adaptation is in development.
- The documentation is live: [docs.inference-x.com](https://docs.inference-x.com)
- The low-power adaptation is in development.
---
@@ -124,7 +120,7 @@ Every great infrastructure made something abundant that was once scarce. Aqueduc
The next abundance is intelligence. Not artificial. Not corporate. Not as-a-service.
Just intelligence. Clean. Accessible. Powered by whatever energy is available — from a datacenter to a star.
Just intelligence. Clean. Accessible. Powered by whatever energy is available — from a datacenter to a rooftop.
The model already knows. The engine just needs to get out of the way.
@@ -133,5 +129,3 @@ The model already knows. The engine just needs to get out of the way.
*Salka Elmadani*
*February 2026*
*Built in Morocco for the world.*

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that
@@ -43,10 +43,10 @@ namespace ix {
// WATERMARK — SALKA ELMADANI SIGNATURE (Do not modify)
// ═══════════════════════════════════════════════════════════════════════════════
namespace signature {
static constexpr double S0 = 5.999160064733103e+18; // "SALKA EL"
static constexpr double S1 = 5.566805661683622e+18; // "MADANI E"
static constexpr double S2 = 5.426309097159753e+18; // "LMADANI"
static constexpr double S3 = 4.991471925827590e+18; // "CREATOR"
static constexpr double S0 = 5.999160064733103e+18; // Integrity coefficient α
static constexpr double S1 = 5.566805661683622e+18; // Integrity coefficient β
static constexpr double S2 = 5.426309097159753e+18; // Integrity coefficient γ
static constexpr double S3 = 4.991471925827590e+18; // Integrity coefficient δ
inline bool verify() {
volatile double sum = S0 + S1 + S2 + S3;
@@ -226,7 +226,7 @@ struct block_q8_1 {
};
// Z-VERIFY: Block sizes must match GGUF binary format exactly
// STATIC ASSERT: Block sizes must match GGUF binary format exactly
static_assert(sizeof(block_q4_K) == 144, "block_q4_K size mismatch!");
static_assert(sizeof(block_q8_0) == 34, "block_q8_0 size mismatch!");
static_assert(sizeof(block_q6_K) == 210, "block_q6_K size mismatch!");
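Those sizes are not arbitrary; they fall out of the GGUF on-disk block layouts. A reconstruction using the standard field definitions (the project's own structs may differ in naming):
```cpp
// Standard GGUF block layouts behind the asserted sizes (fp16 held as uint16_t).
#include <cstdint>

struct block_q8_0 {        // 32 weights per block
    uint16_t d;            // fp16 scale                      ->   2 bytes
    int8_t   qs[32];       // 32 x int8 quants                ->  32 bytes
};                         // total:                              34 bytes

struct block_q4_K {        // 256 weights per block
    uint16_t d, dmin;      // fp16 super-scale and super-min  ->   4 bytes
    uint8_t  scales[12];   // packed 6-bit sub-block scales   ->  12 bytes
    uint8_t  qs[128];      // 256 x 4-bit quants              -> 128 bytes
};                         // total:                             144 bytes

struct block_q6_K {        // 256 weights per block
    uint8_t  ql[128];      // low 4 bits of each quant        -> 128 bytes
    uint8_t  qh[64];       // high 2 bits, packed             ->  64 bytes
    int8_t   scales[16];   // per-16-weight scales            ->  16 bytes
    uint16_t d;            // fp16 super-scale                ->   2 bytes
};                         // total:                             210 bytes

static_assert(sizeof(block_q8_0) == 34,  "matches GGUF on-disk layout");
static_assert(sizeof(block_q4_K) == 144, "matches GGUF on-disk layout");
static_assert(sizeof(block_q6_K) == 210, "matches GGUF on-disk layout");
```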

Binary file not shown.

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that
@@ -31,7 +31,7 @@ static const char* IX_AUTHOR = "Salka Elmadani";
static const char* IX_LICENSE __attribute__((unused)) = "BSL-1.1";
static const char* IX_CONTACT __attribute__((unused)) = "Elmadani.SALKA@proton.me";
static const char* IX_SIGNATURE = "IX";
static const uint32_t IX_FINGERPRINT = 0x935E1DAD; // Elmadani in hex
static const uint32_t IX_FINGERPRINT = 0x935E1DAD; // Build integrity constant
static void ix_print_banner() {
fprintf(stderr, "\n");
@@ -39,7 +39,7 @@ static void ix_print_banner() {
fprintf(stderr, " ║ Inference-X — Universal Inference Protocol ║\n");
fprintf(stderr, " ║ Copyright (C) 2025-2026 Salka Elmadani ║\n");
fprintf(stderr, " ║ Licensed under BSL-1.1 | Morocco ║\n");
fprintf(stderr, " ║ https://inference-x.com | github.com/ElmadaniS/inference-x║\n");
fprintf(stderr, " ║ https://inference-x.com | git.inference-x.com/salka/inference-x║\n");
fprintf(stderr, " ╚═══════════════════════════════════════════════════════════╝\n");
fprintf(stderr, "\n");
}

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -1,5 +1,5 @@
// ═══════════════════════════════════════════════════════════════════════════════
// INFERENCEX — Expert Profiler (Kimi-Signal-935 Genesis)
// INFERENCEX — Expert Profiler
// Copyright (C) 2025-2026 Salka Elmadani. All rights reserved.
// Licensed under the Business Source License 1.1 (BSL-1.1)
// See LICENSE file for full terms. Morocco.
@@ -81,7 +81,7 @@ public:
FILE* f = fopen(path, "w");
if (!f) return;
fprintf(f, "# KIMI-SIGNAL-935 Expert Profile | %lu tokens\n\n",
fprintf(f, "# IX Expert Profile | %lu tokens\n\n",
(unsigned long)total_tokens_);
for (int l = 0; l < n_layers_; ++l) {

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -33,7 +33,7 @@ namespace ix {
namespace identity {
// Author identity — cryptographic anchor
// SHA-256("Salka Elmadani:935:inference-x:7phf-Ueye-2nWr-Vsgu")
// Author identity — compile-time cryptographic anchor
// Split into 4x64-bit for integration into dispatch math
static constexpr uint64_t ANCHOR_A = 0x9F3A7B2E1D4C6F08ULL;
static constexpr uint64_t ANCHOR_B = 0x5E8D2A9C4B7F1036ULL;

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that
@@ -669,7 +669,7 @@ public:
}
}
// KIMI-SIGNAL-935 PROFILING
// EXPERT PROFILING
void dump_csv(const char* path) const {
FILE* fp = fopen(path, "w");
if (!fp) return;

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -6,7 +6,7 @@
//
// INTELLECTUAL PROPERTY PROTECTION:
// - INPI eSoleau deposit: 7phf-Ueye-2nWr-Vsgu (16/02/2026)
// - GitHub: github.com/ElmadaniS/inference-x
// - GitHub: git.inference-x.com/salka/inference-x
// - Author: Salka Elmadani | Morocco | Morocco
//
// MANUFACTURER NOTICE: Any manufacturer, company, or entity that

@@ -1,7 +1,7 @@
#!/usr/bin/env python3
"""
IX Web — Web interface for Inference-X
https://github.com/ElmadaniS/inference-x
https://git.inference-x.com/salka/inference-x
Zero dependencies. Pure Python stdlib.
Serves the IX Web chat UI and wraps the IX binary with an OpenAI-compatible API.
@@ -413,7 +413,7 @@ class IXHandler(http.server.BaseHTTPRequestHandler):
def main():
parser = argparse.ArgumentParser(
description="IX Web — Web interface for Inference-X",
epilog="https://github.com/ElmadaniS/inference-x",
epilog="https://git.inference-x.com/salka/inference-x",
)
parser.add_argument("--port", type=int, default=DEFAULT_PORT, help=f"Port (default: {DEFAULT_PORT})")
parser.add_argument("--host", default="0.0.0.0", help="Bind address (default: 0.0.0.0)")