Better output from the same model. Fused computation, adaptive precision, surgical expert loading. 305 KB, 19 backends, zero dependencies. https://inference-x.com
Contributors
Creator & Lead Developer
- Salka Elmadani — Architecture, implementation, and all original code
- GitHub: @ElmadaniS
- Email: Elmadani.SALKA@proton.me
Infrastructure Partners
- Infomaniak — Development servers and Swiss hosting
- Hetzner — High-performance compute for benchmarking
Community Contributors
- Your name here — submit a PR!
Inference-X was built from first principles. No code was derived from existing inference frameworks.
Licensed under BSL-1.1 — see LICENSE and NOTICE files.