Update profile — signal quality, adaptive precision, fused computation
commit 64727ec35d · parent dc1ad70a4e · README.md
## Salka Elmadani
**Building [Inference-X](https://inference-x.com)** — better output from the same model.
Universal AI inference engine. Fused computation, adaptive precision, surgical expert loading. 305 KB, 19 backends, zero dependencies. Built in Morocco.
### What I'm working on
| Project | Description |
|---------|-------------|
| [**Inference-X**](https://github.com/ElmadaniS/inference-x) | Universal inference engine — 305 KB binary, fused dequant+dot kernels, Shannon entropy adaptive precision, 23 quantization formats, 19 hardware backends |
| **Solar Inference** | Deploying AI on direct solar current. Adaptive precision + 25W power budget = inference powered by a star. Anti-Atlas, 2026. |
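
The "Shannon entropy adaptive precision" in the Inference-X row can be illustrated with a small sketch. Everything below is hypothetical: the function names, the 16-bin histogram, and the bit-width thresholds are assumptions for illustration, not the engine's actual code. The idea is to estimate the entropy of a weight block's magnitude distribution and give high-entropy (information-dense) blocks more bits.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch (not Inference-X source): Shannon entropy of a
// weight block's magnitude distribution, via a coarse 16-bin histogram.
double block_entropy(const std::vector<float>& w) {
    std::array<std::size_t, 16> hist{};
    float max_abs = 1e-12f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    for (float v : w)
        ++hist[static_cast<std::size_t>(std::fabs(v) / max_abs * 15.0f)];
    double h = 0.0;
    for (std::size_t c : hist) {
        if (c == 0) continue;
        double p = static_cast<double>(c) / w.size();
        h -= p * std::log2(p);  // Shannon entropy in bits (max 4 for 16 bins)
    }
    return h;
}

// Map entropy to a bit-width: peaky, low-entropy blocks tolerate
// aggressive quantization; spread-out, high-entropy blocks keep more bits.
// These thresholds are made up for illustration.
int choose_bits(double entropy) {
    if (entropy < 1.5) return 2;
    if (entropy < 2.5) return 4;
    return 8;
}
```

A constant block lands entirely in one bin (entropy 0, quantized to 2 bits), while weights spread evenly across bins approach the 4-bit ceiling and keep 8 bits.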
### Philosophy
> *The best inference engine is the one you don't notice. You should hear the model, not the framework.*
### How it works
The same model produces higher-fidelity output through Inference-X because the computation path is cleaner: fused kernels eliminate intermediate buffers, adaptive precision allocates depth where it matters, and surgical expert loading keeps only active parameters in memory.
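
The fused-kernel idea can be sketched in a few lines of C++. This is illustrative only; the function name, int8 symmetric quantization, and per-block scale are assumptions, not Inference-X code. A naive path would first dequantize the whole weight block into a temporary float buffer and then run a dot product over it; fusing does both in one pass, so the intermediate buffer never exists.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch, not the Inference-X kernel: dequantize and
// dot-product in a single pass over int8 symmetric-quantized weights.
float fused_dequant_dot(const std::vector<std::int8_t>& qw,  // quantized weights
                        float scale,                         // per-block scale
                        const std::vector<float>& x) {       // activations
    float acc = 0.0f;
    for (std::size_t i = 0; i < qw.size(); ++i)
        acc += static_cast<float>(qw[i]) * x[i];  // dequant folded into the dot
    return acc * scale;  // shared scale applied once, at the end
}
```

Factoring the scale out of the loop also saves one multiply per element compared with dequantizing each weight individually.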
→ [Full technical explanation](https://github.com/ElmadaniS/inference-x/blob/master/TECHNOLOGY.md)
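
Surgical expert loading, the third ingredient above, can be sketched as a lazy cache keyed by expert id: only the experts the router actually selects ever become resident. The struct and its members are hypothetical illustrations, not the engine's API.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Hypothetical sketch, not the Inference-X API: a lazy cache so that
// only router-selected experts of a mixture-of-experts layer are resident.
struct ExpertCache {
    std::unordered_map<int, std::vector<float>> resident;  // expert id -> weights

    // Return the weights for expert `id`, loading them on first use only.
    // A real engine would mmap or stream the expert's slice from disk;
    // here a zero-filled stand-in buffer marks it "loaded".
    const std::vector<float>& get(int id, std::size_t n_params) {
        auto it = resident.find(id);
        if (it == resident.end())
            it = resident.emplace(id, std::vector<float>(n_params)).first;
        return it->second;
    }

    std::size_t resident_count() const { return resident.size(); }
};
```

With, say, 64 experts and a top-2 router, only the experts actually routed to are paged in; the rest never touch memory.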
### Links
🌐 [inference-x.com](https://inference-x.com) · 📧 [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) · 𝕏 [@ElmadaniSa13111](https://x.com/ElmadaniSa13111)
---