Salka Elmadani ec36668cf5 Inference-X v1.0 — Universal AI Inference Engine
Better output from the same model. Fused computation, adaptive precision,
surgical expert loading. 305 KB, 19 backends, zero dependencies.

https://inference-x.com
2026-02-23 07:10:47 +00:00


Contributing to Inference-X

Thank you for your interest in Inference-X! We welcome contributions from everyone — whether you're fixing a typo, optimizing a kernel, or porting to new hardware.

How to contribute

  1. Fork the repository
  2. Create a branch for your change (git checkout -b feature/my-improvement)
  3. Make your changes — keep commits focused and descriptive
  4. Test — make sure the project builds with make and basic inference works
  5. Submit a pull request with a clear description of what and why

What we're looking for

High-impact contributions

  • Backend performance — Faster GEMM kernels for existing platforms
  • New backends — RISC-V, custom ASICs, new accelerators
  • Model architectures — Support for new transformer variants
  • Quantization — New formats, better quality at lower bits
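To give a feel for the quantization work above, here is a minimal sketch of per-tensor symmetric int8 quantization. This is a hypothetical illustration of the problem shape, not Inference-X's actual format; the `QuantizedTensor`, `quantize`, and `dequantize_at` names are invented for this example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: per-tensor symmetric int8 quantization.
// Real contributions would target Inference-X's own formats.
struct QuantizedTensor {
    std::vector<int8_t> values;
    float scale;  // dequantized = value * scale
};

QuantizedTensor quantize(const std::vector<float>& weights) {
    // One scale for the whole tensor, chosen so the largest
    // magnitude maps to the int8 extreme (+/-127).
    float max_abs = 0.0f;
    for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));

    QuantizedTensor q;
    q.scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    q.values.reserve(weights.size());
    for (float w : weights)
        q.values.push_back(static_cast<int8_t>(std::lround(w / q.scale)));
    return q;
}

float dequantize_at(const QuantizedTensor& q, std::size_t i) {
    return q.values[i] * q.scale;
}
```

"Better quality at lower bits" typically means improving on this baseline: per-channel or per-block scales, asymmetric zero points, or sub-8-bit packing.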

Always welcome

  • Bug fixes
  • Documentation improvements
  • Test scripts and benchmarks on diverse hardware
  • Translations of documentation
  • Examples and tutorials

Good first issues

  • Run benchmarks on your hardware and share results
  • Test with models we haven't tried
  • Improve error messages
  • Add code comments

Code style

  • C++17, no external dependencies
  • One function does one thing
  • Comments explain why, not what
  • No frameworks, no build tools beyond Make
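As a sketch of what these rules look like in practice (the function below is a made-up example, not engine code): one small function doing one thing, plain C++17 with no dependencies, and a comment explaining why rather than restating what.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Accumulate in double rather than float: long dot products lose
// low-order bits quickly in single precision, which shows up as
// degraded logits. The wider accumulator is cheap on modern CPUs.
float dot(const std::vector<float>& a, const std::vector<float>& b) {
    assert(a.size() == b.size());
    double acc = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        acc += static_cast<double>(a[i]) * b[i];
    return static_cast<float>(acc);
}
```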

Communication

  • Issues — Bug reports and feature requests
  • Pull Requests — Code contributions
  • Email — Elmadani.SALKA@proton.me for private matters

License

By contributing, you agree that your contributions will be licensed under BSL-1.1 (transitioning to Apache 2.0 in 2030).


Every contribution makes AI more accessible. Thank you for being part of this.