From 531f21f2419f5c0b2aafd3e9f8a1b138e6dea7cc Mon Sep 17 00:00:00 2001
From: elmadani
Date: Tue, 24 Feb 2026 22:27:17 +0000
Subject: [PATCH] =?UTF-8?q?docs:=20update=20SPONSOR.md=20=E2=80=94=20perso?=
 =?UTF-8?q?nal=20presentation=20page?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 SPONSOR.md | 128 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)
 create mode 100644 SPONSOR.md

diff --git a/SPONSOR.md b/SPONSOR.md
new file mode 100644
index 0000000..315b4a1
--- /dev/null
+++ b/SPONSOR.md
@@ -0,0 +1,128 @@
+# Salka Elmadani — Building Inference-X
+
+> *The best engine is the one you don't notice.*
+> *You should hear the model, not the framework.*
+
+---
+
+I'm an engineer from Morocco's Anti-Atlas.
+
+I build AI infrastructure. Not products, not demos, not wrappers around someone else's API. Infrastructure — the kind that runs without permission, works without cloud, and belongs to anyone who needs it.
+
+**Inference-X** is a 305 KB binary that runs any AI model on any hardware. No framework. No internet. No account. Download a model, run it, talk to it. That's it.
+
+I built it alone. I'm still building it alone. This page is why.
+
+---
+
+## What I'm building
+
+The problem isn't the models. The models are extraordinary. The problem is the layer between the weights and the human — the inference stack. It's bloated, cloud-dependent, and controlled by a handful of companies.
+
+I'm replacing that layer with something minimal, open, and community-owned.
+
+```
+Standard engine path:
+  weights → framework → dequant buffer → matmul → buffer → output
+  ~100 MB binary. 5 steps. Rounding errors at each boundary.
+
+Inference-X:
+  weights → fused dequant+dot → output
+  305 KB binary. 2 steps. Zero buffer. Zero noise.
+```
+
+Same model. Cleaner signal. Every unnecessary step removed.
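To make "fused dequant+dot" concrete, here is a minimal C sketch, assuming a simple per-row int8 quantization with one float scale. The format and function names are illustrative only, not Inference-X's actual kernels or quant layouts. The two-step path materializes a dequantized buffer and then dots it; the fused path accumulates directly over the quantized weights and applies the scale once at the end, so the buffer and one rounding boundary disappear.

```c
#include <stdint.h>

/* Two-step path: dequantize into a temporary buffer, then dot.
   Extra pass, extra memory traffic, extra float rounding boundary.
   Illustrative sketch; assumes n <= 1024. */
float dot_two_step(const int8_t *q, float scale, const float *x, int n) {
    float buf[1024];                   /* intermediate dequant buffer */
    for (int i = 0; i < n; i++)
        buf[i] = scale * (float)q[i];  /* materialize dequantized weights */
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += buf[i] * x[i];
    return acc;
}

/* Fused path: accumulate q[i] * x[i] directly, scale once at the end.
   One pass, no intermediate buffer. */
float dot_fused(const int8_t *q, float scale, const float *x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += (float)q[i] * x[i];
    return scale * acc;
}
```

Both paths compute the same mathematical result; the fused one simply never writes the dequantized weights to memory, which is where the "zero buffer" claim above comes from.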
+
+---
+
+## The ecosystem
+
+| Project | What it does | Status |
+|---------|-------------|--------|
+| **[inference-x](https://git.inference-x.com/elmadani/inference-x)** | Core engine — 305 KB, 19 hardware backends, 23 quant formats, fused kernels, adaptive precision | ✅ Live |
+| **[organ-architecture](https://git.inference-x.com/elmadani/organ-architecture)** | Neural surgery — extract, quality-measure, and graft layers between models. Build composite intelligence from the best parts of everything. | ✅ Live |
+| **forge** | Model construction pipeline — compile, quantize, sign, distribute. Build your own model variant from certified organs. | 🔨 Building |
+| **[echo-ix](https://git.inference-x.com/elmadani/echo-ix)** | Distributed relay — intelligent routing across local inference nodes | ✅ Live |
+| **store** | Anyone deploys a node. Anyone earns from their compute. The cooperative layer. 11 geological cratons. One network. | 📐 Designed |
+
+The store is the endgame: a peer-to-peer inference network where anyone with a laptop can become infrastructure. No data center required.
+
+---
+
+## The khettara
+
+In the Moroccan desert, builders carved underground canals — *khettaras* — that deliver water from mountain aquifers to fields using only gravity. No pump, no electricity, no central authority. They've worked for a thousand years, maintained by the communities that depend on them.
+
+Inference-X is a khettara for intelligence.
+
+The intelligence already exists in the model weights. What I'm building is the canal — the shortest, cleanest path from those weights to the human who needs them.
+
+---
+
+## Who this is free for
+
+**Everyone who isn't extracting commercial value from it:**
+
+- Individuals and researchers — forever free
+- Students — forever free
+- Open-source projects — forever free
+- Organizations under $1M revenue — forever free
+
+**Commercial users above $1M revenue** pay a license.
20% of that flows back to the community that built the infrastructure.
+
+In 2030, it all becomes Apache 2.0. Everything open. The canal belongs to everyone.
+
+This isn't charity. It's a sustainable model — those who profit from it fund it. Those who don't, use it freely.
+
+---
+
+## Why I need support
+
+Servers cost money. The current infrastructure — [inference-x.com](https://inference-x.com), [build.inference-x.com](https://build.inference-x.com), [git.inference-x.com](https://git.inference-x.com) — runs on €53/month.
+
+More importantly: time. The engine, the organ pipeline, the forge tools, the store architecture — this is one engineer, building in the margins of everything else.
+
+There is no team. No VC. No roadmap driven by investor pressure.
+
+There is one person who decided this infrastructure should exist.
+
+---
+
+## How to help
+
+### Build with me
+
+The most valuable contribution is code. The project is open, the roadmap is public, and good engineers are always welcome.
+
+**→ Pick a task**: [git.inference-x.com/elmadani/inference-x](https://git.inference-x.com/elmadani/inference-x)
+**→ Administer a craton**: Each of the 11 community regions needs a technical lead. Write to [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — subject: `Craton — [your region]`
+
+### Sustain the infrastructure
+
+**PayPal** → [paypal.me/elmadanisalka](https://paypal.me/elmadanisalka)
+
+€5 ≈ three days of server time. €53 = one month of everything running.
+
+### Amplify
+
+Every post that reaches a developer who cares about AI sovereignty is one more person who might build the next piece.
+
+**→ [Follow on X: @ElmadaniSa13111](https://x.com/ElmadaniSa13111)**
+
+---
+
+## Contact
+
+I respond to everyone who writes with something real to say.
+
+| | |
+|--|--|
+| **X** | [@ElmadaniSa13111](https://x.com/ElmadaniSa13111) — fastest response |
+| **Email** | [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — for technical discussions, partnerships, craton applications |
+| **Code** | [@elmadani on Gitea](https://git.inference-x.com/elmadani) |
+| **Web** | [inference-x.com](https://inference-x.com) |
+
+---
+
+*Morocco → the world.*
+*Salka Elmadani, 2024–2026*