Contributing to Inference-X
Thank you for your interest in Inference-X! We welcome contributions from everyone — whether you're fixing a typo, optimizing a kernel, or porting to new hardware.
How to contribute
- Fork the repository
- Create a branch for your change (`git checkout -b feature/my-improvement`)
- Make your changes — keep commits focused and descriptive
- Test — make sure `make` succeeds and basic inference works
- Submit a pull request with a clear description of what and why
What we're looking for
High-impact contributions
- Backend performance — Faster GEMM kernels for existing platforms
- New backends — RISC-V, custom ASICs, new accelerators
- Model architectures — Support for new transformer variants
- Quantization — New formats, better quality at lower bits
Always welcome
- Bug fixes
- Documentation improvements
- Test scripts and benchmarks on diverse hardware
- Translations of documentation
- Examples and tutorials
Good first issues
- Run benchmarks on your hardware and share results
- Test with models we haven't tried
- Improve error messages
- Add code comments
Code style
- C++17, no external dependencies
- One function does one thing
- Comments explain why, not what
- No frameworks, no build tools beyond Make
Communication
- Issues — Bug reports and feature requests
- Pull Requests — Code contributions
- Email — Elmadani.SALKA@proton.me for private matters
License
By contributing, you agree that your contributions will be licensed under BSL-1.1 (transitioning to Apache 2.0 in 2030).
Every contribution makes AI more accessible. Thank you for being part of this.