From 2292af5e5cb688265338e201b5a3e324aad932ee Mon Sep 17 00:00:00 2001
From: salka
Date: Mon, 23 Feb 2026 23:37:59 +0000
Subject: [PATCH] chore: migrate all URLs github.com -> git.inference-x.com

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index c6d615d..8c27930 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ Universal AI inference engine. Fused computation, adaptive precision, surgical e
 
 | Project | Description |
 |---------|-------------|
-| [**Inference-X**](https://github.com/ElmadaniS/inference-x) | Universal inference engine — 305 KB binary, fused dequant+dot kernels, Shannon entropy adaptive precision, 23 quantization formats, 19 hardware backends |
+| [**Inference-X**](https://git.inference-x.com/salka/inference-x) | Universal inference engine — 305 KB binary, fused dequant+dot kernels, Shannon entropy adaptive precision, 23 quantization formats, 19 hardware backends |
 | **Solar Inference** | Deploying AI on direct solar current. Adaptive precision + 25W power budget = inference powered by a star. Anti-Atlas, 2026. |
 
 ### Philosophy
@@ -19,7 +19,7 @@ Universal AI inference engine. Fused computation, adaptive precision, surgical e
 
 The same model produces higher-fidelity output through Inference-X because the computation path is cleaner: fused kernels eliminate intermediate buffers, adaptive precision allocates depth where it matters, and surgical expert loading keeps only active parameters in memory.
 
-→ [Full technical explanation](https://github.com/ElmadaniS/inference-x/blob/master/TECHNOLOGY.md)
+→ [Full technical explanation](https://git.inference-x.com/salka/inference-x/blob/master/TECHNOLOGY.md)
 
 ### Links
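
The rewrite both hunks perform is a plain host/owner substitution, `github.com/ElmadaniS` -> `git.inference-x.com/salka`. A minimal sketch of that substitution with `sed` is shown below; the actual commit may well have been edited by hand, so treat the command as a hypothetical reproduction, not the author's tooling:

```shell
# Hypothetical sketch: apply the same URL migration this patch makes.
# github.com/ElmadaniS -> git.inference-x.com/salka, as in both hunks.
old='[**Inference-X**](https://github.com/ElmadaniS/inference-x)'
printf '%s\n' "$old" |
  sed 's|github\.com/ElmadaniS|git.inference-x.com/salka|g'
# prints: [**Inference-X**](https://git.inference-x.com/salka/inference-x)
```

Using `|` as the `sed` delimiter avoids escaping the slashes inside the URLs; running the same expression over `README.md` with `sed -i` would produce exactly the two-line change the diffstat reports.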