# Contributors

## Creator & Lead Developer

- **Salka Elmadani** — Architecture, implementation, and all original code
  - GitHub: [@ElmadaniS](https://github.com/ElmadaniS)
  - Email: Elmadani.SALKA@proton.me

## Infrastructure Partners

- **[Infomaniak](https://infomaniak.com)** — Development servers and Swiss hosting
- **[Hetzner](https://hetzner.com)** — High-performance compute for benchmarking

## Community Contributors

*Your name here — submit a PR!*

---

*Inference-X was built from first principles. No code was derived from existing inference frameworks.*

*Licensed under BSL-1.1 — see LICENSE and NOTICE files.*