
Salka Elmadani — Building Inference-X

The best engine is the one you don't notice.
You should hear the model, not the framework.


I build AI infrastructure. Not products, not demos, not wrappers around someone else's API. Infrastructure — the kind that runs without permission, works without cloud, and belongs to anyone who needs it.

Inference-X is a 305 KB binary that runs any AI model on any hardware. No framework. No internet. No account. Download a model, run it, talk to it. That's it.

I built it alone. I'm still building it alone. This page is why.


What I'm building

The problem isn't the models. The models are extraordinary. The problem is the layer between the weights and the human — the inference stack. It's bloated, cloud-dependent, and controlled by a handful of companies.

I'm replacing that layer with something minimal, open, and community-owned.

Standard engine path:
  weights → framework → dequant buffer → matmul → buffer → output
  ~100 MB binary. 5 steps. Rounding errors at each boundary.

Inference-X:
  weights → fused dequant+dot → output
  305 KB binary. 2 steps. Zero buffer. Zero noise.

Same model. Cleaner signal. Every unnecessary step removed.


The ecosystem

| Project | What it does | Status |
| --- | --- | --- |
| inference-x | Core engine — 305 KB, 19 hardware backends, 23 quant formats, fused kernels, adaptive precision | Live |
| forge | Model construction pipeline — compile, quantize, sign, distribute. Build your own model variant from certified organs. | 🔨 Building |
| echo-ix | Distributed relay — intelligent routing across local inference nodes | Live |
| store | The cooperative layer: anyone deploys a node, anyone earns from their compute. 11 geological cratons. One network. | 📐 Designed |

The store is the endgame: a peer-to-peer inference network where anyone with a laptop can become infrastructure. No data center required.


The intelligence already exists in the model weights. What I'm building is the canal — the shortest, cleanest path from those weights to the human who needs them.


Who this is free for

Everyone who isn't extracting commercial value from it:

  • Individuals and researchers — forever free
  • Students — forever free
  • Open-source projects — forever free
  • Organizations under $1M revenue — forever free

Commercial users above $1M revenue pay a license. 20% of that flows back to the community that built the infrastructure.

In 2030, it all becomes Apache 2.0. Everything open. The canal belongs to everyone.

This isn't charity. It's a sustainable model — those who profit from it fund it. Those who don't, use it freely.


Why I need support

Servers cost money. The current infrastructure — inference-x.com, build.inference-x.com, git.inference-x.com — runs on €53/month.

More importantly: time. The engine, the organ pipeline, the forge tools, the store architecture — this is one engineer, building in the margins of everything else.

There is no team. No VC. No roadmap driven by investor pressure.

There is one person who decided this infrastructure should exist.


How to help

Build with me

The most valuable contribution is code. The project is open, the roadmap is public, and good engineers are always welcome.

→ Pick a task: git.inference-x.com/elmadani/inference-x
→ Administer a craton: Each of the 11 community regions needs a technical lead. Write to Elmadani.SALKA@proton.me — subject: Craton — [your region]

Sustain the infrastructure

PayPal: paypal.me/elmadanisalka

€5 covers roughly three days of server time. €53 = one month of everything running.

Amplify

Every post that reaches a developer who cares about AI sovereignty is one more person who might build the next piece.

Follow on X: @ElmadaniSa13111


Contact

I respond to everyone who writes with something real to say.

X: @ElmadaniSa13111 — fastest response
Email: Elmadani.SALKA@proton.me — for technical discussions, partnerships, craton applications
Code: @elmadani on Gitea
Web: inference-x.com

Morocco → the world.
Salka Elmadani, 2024–2026