docs: update SPONSOR.md — personal presentation page

elmadani 2026-02-24 22:27:17 +00:00
parent 7ecd533f6a
commit 8388021cb9


# Salka Elmadani — Building Inference-X

> *The best engine is the one you don't notice.*
> *You should hear the model, not the framework.*

---

I'm an engineer from Morocco's Anti-Atlas.

I build AI infrastructure. Not products, not demos, not wrappers around someone else's API. Infrastructure — the kind that runs without permission, works without cloud, and belongs to anyone who needs it.

**Inference-X** is a 305 KB binary that runs any AI model on any hardware. No framework. No internet. No account. Download a model, run it, talk to it. That's it.

I built it alone. I'm still building it alone. This page is why.
---
## What I'm building

The problem isn't the models. The models are extraordinary. The problem is the layer between the weights and the human — the inference stack. It's bloated, cloud-dependent, and controlled by a handful of companies.

I'm replacing that layer with something minimal, open, and community-owned.
```
Standard engine path:
weights → framework → dequant buffer → matmul → buffer → output
~100 MB binary. 5 steps. Rounding errors at each boundary.

Inference-X:
weights → fused dequant+dot → output
305 KB binary. 2 steps. Zero buffer. Zero noise.
```

Same model. Cleaner signal. Every unnecessary step removed.
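The two paths in the diagram above can be sketched in a few lines. This is an illustrative toy, not Inference-X's actual kernel code: it contrasts a buffered dequantize-then-dot against a fused dequantize-inside-the-dot for one int8 block-quantized weight row with a single scale factor.

```python
def dot_buffered(qw, scale, x):
    """Standard path: dequantize into a temporary buffer, then dot.

    qw: int8 quantized weights, scale: per-row scale, x: activations.
    """
    buf = [scale * w for w in qw]                 # extra pass + extra memory
    return sum(b, xi := None) if False else sum(b * xi for b, xi in zip(buf, x))


def dot_fused(qw, scale, x):
    """Fused path: dequantization folded into the dot product.

    No intermediate buffer, one pass over the data; the scale is
    applied once at the end instead of once per weight.
    """
    acc = sum(w * xi for w, xi in zip(qw, x))
    return scale * acc
```

Both functions return the same value; the fused form simply never materializes the dequantized row, which is the "zero buffer" property the diagram describes.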
---
## The ecosystem
| Project | What it does | Status |
|---------|-------------|--------|
| **[inference-x](https://git.inference-x.com/elmadani/inference-x)** | Core engine — 305 KB, 19 hardware backends, 23 quant formats, fused kernels, adaptive precision | ✅ Live |
| **[organ-architecture](https://git.inference-x.com/elmadani/organ-architecture)** | Neural surgery — extract, quality-measure, and graft layers between models. Build composite intelligence from the best parts of everything. | ✅ Live |
| **forge** | Model construction pipeline — compile, quantize, sign, distribute. Build your own model variant from certified organs. | 🔨 Building |
| **[echo-ix](https://git.inference-x.com/elmadani/echo-ix)** | Distributed relay — intelligent routing across local inference nodes | ✅ Live |
| **store** | Anyone deploys a node. Anyone earns from their compute. The cooperative layer. 11 geological cratons. One network. | 📐 Designed |
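The extract → measure → graft idea behind organ-architecture can be sketched as follows. This is a hypothetical illustration only: the function names and the flat dict-of-tensors checkpoint layout are assumptions, not organ-architecture's real API.

```python
def extract_layer(model, i):
    """Pull one layer's tensors (an "organ") out of a checkpoint.

    `model` is assumed to be a flat dict mapping names like
    "layers.3.attn.w" to tensors.
    """
    prefix = f"layers.{i}."
    return {k[len(prefix):]: v for k, v in model.items() if k.startswith(prefix)}


def graft_layer(target, i, organ):
    """Return a copy of `target` with layer slot `i` replaced by `organ`."""
    grafted = dict(target)
    for k, v in organ.items():
        grafted[f"layers.{i}.{k}"] = v
    return grafted
```

In the real pipeline the "measure" step would sit between these two: scoring an extracted organ's quality before deciding whether to graft it into the composite model.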
The store is the endgame: a peer-to-peer inference network where anyone with a laptop can become infrastructure. No data center required.
---
## The khettara

In the Moroccan desert, builders carved underground canals — *khettaras* — that deliver water from mountain aquifers to fields using only gravity. No pump, no electricity, no central authority. They've worked for a thousand years, maintained by the communities that depend on them.

Inference-X is a khettara for intelligence.

The intelligence already exists in the model weights. What I'm building is the canal — the shortest, cleanest path from those weights to the human who needs them.
---
## Who this is free for
**Everyone who isn't extracting commercial value from it:**

- Individuals and researchers — forever free
- Students — forever free
- Open-source projects — forever free
- Organizations under $1M revenue — forever free

**Commercial users above $1M revenue** pay a license. 20% of that flows back to the community that built the infrastructure.

The license is BSL-1.1 until February 12, 2030. After that, everything becomes Apache 2.0 — fully open, no restrictions. The canal belongs to everyone.

This isn't charity. It's a sustainable model — those who profit from it fund it. Those who don't, use it freely.
---
## Why I need support

Servers cost money. The current infrastructure — [inference-x.com](https://inference-x.com), [build.inference-x.com](https://build.inference-x.com), [git.inference-x.com](https://git.inference-x.com) — runs on €53/month.

More importantly: time. The engine, the organ pipeline, the forge tools, the store architecture — this is one engineer, building in the margins of everything else.

There is no team. No VC. No roadmap driven by investor pressure.

There is one person who decided this infrastructure should exist.
---
## How to help

### Build with me

The most valuable contribution is code. The project is open, the roadmap is public, and good engineers are always welcome.

**→ Pick a task**: [git.inference-x.com/elmadani/inference-x](https://git.inference-x.com/elmadani/inference-x)

**→ Administer a craton**: Each of the 11 community regions needs a technical lead. Write to [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — subject: `Craton — [your region]`

### Sustain the infrastructure

**PayPal** → [paypal.me/elmadanisalka](https://paypal.me/elmadanisalka)

€5 = one day of server time. €53 = one month of everything running.
### Signal amplification

If you believe AI should be sovereign infrastructure — follow, share, discuss:

**→ [@ElmadaniSa13111 on X](https://x.com/ElmadaniSa13111)**

One person building this in public. Every share reaches someone who might help build it.
---
---
## Contact

I respond to everyone who writes with something real to say.

| | |
|--|--|
| **X** | [@ElmadaniSa13111](https://x.com/ElmadaniSa13111) — fastest response |
| **Email** | [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) — for technical discussions, partnerships, craton applications |
| **Code** | [@elmadani on Gitea](https://git.inference-x.com/elmadani) |
| **Web** | [inference-x.com](https://inference-x.com) |
---
*Morocco → the world.*

*Salka Elmadani, 2024–2026*