## Salka Elmadani

**Building [Inference-X](https://inference-x.com)**: better output from the same model.

Universal AI inference engine. Fused computation, adaptive precision, surgical expert loading. 305 KB, 19 backends, zero dependencies. Built in Morocco.

### What I'm working on

| Project | Description |
|---------|-------------|
| [**Inference-X**](https://github.com/ElmadaniS/inference-x) | Universal inference engine: 305 KB binary, fused dequant+dot kernels, Shannon-entropy adaptive precision, 23 quantization formats, 19 hardware backends |
| **Solar Inference** | Deploying AI on direct solar current. Adaptive precision + a 25 W power budget = inference powered by a star. Anti-Atlas, 2026. |

### Philosophy

> *The best inference engine is the one you don't notice. You should hear the model, not the framework.*

### How it works

The same model produces higher-fidelity output through Inference-X because the computation path is cleaner: fused kernels eliminate intermediate buffers, adaptive precision allocates depth where it matters, and surgical expert loading keeps only the active parameters in memory.

→ [Full technical explanation](https://github.com/ElmadaniS/inference-x/blob/master/TECHNOLOGY.md)

### Links

🌐 [inference-x.com](https://inference-x.com) · 📧 [Elmadani.SALKA@proton.me](mailto:Elmadani.SALKA@proton.me) · 𝕏 [@ElmadaniSa13111](https://x.com/ElmadaniSa13111)

---

*Morocco 🇲🇦*