chore: migrate all URLs github.com -> git.inference-x.com

Author: salka, 2026-02-23 23:37:59 +00:00
parent 64727ec35d
commit 2292af5e5c
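A minimal sketch of the kind of bulk rewrite this commit describes. The old/new owner paths (`ElmadaniS` on github.com, `salka` on git.inference-x.com) come from the diff below; the file walk and in-place rewrite are assumptions, since the actual command used is not recorded in the commit.

```python
from pathlib import Path

# Mapping taken from the commit diff; everything else here is illustrative.
OLD = "github.com/ElmadaniS"
NEW = "git.inference-x.com/salka"

def migrate_urls(root: Path) -> int:
    """Rewrite OLD -> NEW in every Markdown file under root; return files changed."""
    changed = 0
    for path in root.rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            changed += 1
    return changed
```

A run over a checkout would typically be followed by `git diff` to review the rewrites before committing.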


@@ -8,7 +8,7 @@ Universal AI inference engine. Fused computation, adaptive precision, surgical e
 | Project | Description |
 |---------|-------------|
-| [**Inference-X**](https://github.com/ElmadaniS/inference-x) | Universal inference engine — 305 KB binary, fused dequant+dot kernels, Shannon entropy adaptive precision, 23 quantization formats, 19 hardware backends |
+| [**Inference-X**](https://git.inference-x.com/salka/inference-x) | Universal inference engine — 305 KB binary, fused dequant+dot kernels, Shannon entropy adaptive precision, 23 quantization formats, 19 hardware backends |
 | **Solar Inference** | Deploying AI on direct solar current. Adaptive precision + 25W power budget = inference powered by a star. Anti-Atlas, 2026. |
 ### Philosophy
@@ -19,7 +19,7 @@ Universal AI inference engine. Fused computation, adaptive precision, surgical e
 The same model produces higher-fidelity output through Inference-X because the computation path is cleaner: fused kernels eliminate intermediate buffers, adaptive precision allocates depth where it matters, and surgical expert loading keeps only active parameters in memory.
-→ [Full technical explanation](https://github.com/ElmadaniS/inference-x/blob/master/TECHNOLOGY.md)
+→ [Full technical explanation](https://git.inference-x.com/salka/inference-x/blob/master/TECHNOLOGY.md)
 ### Links
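The "fused dequant+dot kernels eliminate intermediate buffers" claim in the diffed README can be illustrated with a minimal sketch. The int8-plus-scale scheme and the function names below are assumptions for illustration, not Inference-X's actual kernels or quantization formats.

```python
import numpy as np

def naive_dot(q, scale, x):
    # Unfused: materialize a full dequantized weight buffer, then dot.
    # Two passes over the weights plus an extra float32 buffer the size of q.
    w = q.astype(np.float32) * scale
    return float(w @ x)

def fused_dot(q, scale, x):
    # Fused: dequantize each weight inside the accumulation loop.
    # One pass, no intermediate buffer; this is the structural idea, not a
    # performance-tuned kernel (real kernels would vectorize this loop).
    acc = 0.0
    for qi, xi in zip(q, x):
        acc += (float(qi) * scale) * float(xi)
    return acc

q = np.array([3, -2, 7, 1], dtype=np.int8)            # quantized weights
scale = 0.05                                          # per-tensor scale
x = np.array([1.0, 2.0, 0.5, -1.0], dtype=np.float32) # activations
print(naive_dot(q, scale, x), fused_dot(q, scale, x)) # identical results
```

Both paths compute the same dot product; the fused variant just never allocates the dequantized copy of the weights.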