docs: professional README — Feb 2026
parent 101afdfbd1
commit ba81ad4b87
README.md: @@ -1,48 +1,38 @@
# IX Forge

**Community fine-tuning platform. Collective intelligence.**

The Forge is where the community contributes training data, improves models,
and shares fine-tuned adapters (LoRA/QLoRA format). Open. Transparent. Community-owned.

## Deploy

```bash
pip install fastapi uvicorn
python ix_forge.py
# → API at http://localhost:7937
```
## API

```
POST /contribute     — Submit training data (Q&A pairs)
GET  /datasets       — Browse approved datasets
POST /submit-adapter — Submit a trained LoRA adapter
GET  /adapters       — Browse community adapters
GET  /leaderboard    — Top contributors by training pairs
GET  /health         — Service status
```
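As a sketch of how a client might call these endpoints, here is a stdlib-only example that assembles a `POST /contribute` request against the local deployment on port 7937. The helper names (`build_contribution`, `contribute_request`) are illustrative assumptions, not part of IX Forge itself.

```python
import json
import urllib.request

def build_contribution(pairs, domain="science", language="en", contributor="anonymous"):
    """Assemble a /contribute payload in the documented Q&A format."""
    return {
        "type": "qa",
        "language": language,
        "domain": domain,
        "contributor": contributor,
        "pairs": pairs,
    }

def contribute_request(base_url, payload):
    """Build (but do not send) a JSON POST request for /contribute."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/contribute",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    payload = build_contribution(
        [{"q": "What is LoRA?", "a": "A low-rank fine-tuning method."}]
    )
    req = contribute_request("http://localhost:7937", payload)
    # urllib.request.urlopen(req) would submit it once the server is running
    print(req.full_url)
```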
## Contribution Format

```json
{
  "type": "qa",
  "language": "en",
  "domain": "science",
  "contributor": "your_handle",
  "pairs": [
    {"q": "What is...", "a": "It is..."}
  ]
}
```
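Because the format is plain JSON, a contribution can be sanity-checked locally before submission. Below is a minimal validator sketch; the function name and the exact rules are illustrative assumptions, not part of the IX Forge API.

```python
import json

# Top-level fields from the contribution format example above
REQUIRED_FIELDS = {"type", "language", "domain", "contributor", "pairs"}

def validate_contribution(raw: str) -> list:
    """Return a list of problems found in a contribution JSON string (empty = valid)."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if doc.get("type") != "qa":
        problems.append("type must be 'qa'")
    pairs = doc.get("pairs", [])
    if not isinstance(pairs, list) or not pairs:
        problems.append("pairs must be a non-empty list")
    else:
        for i, pair in enumerate(pairs):
            if not isinstance(pair, dict) or not {"q", "a"} <= pair.keys():
                problems.append(f"pairs[{i}] must contain 'q' and 'a'")
    return problems
```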
## Supported Domains

`science`, `code`, `math`, `multilingual`, `reasoning`, `creative`, `general`
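A client might check the `domain` field against this list before submitting. This snippet is an illustrative sketch (the helper name and normalization rule are assumptions, not IX Forge code):

```python
# Domain labels from the Supported Domains list above
SUPPORTED_DOMAINS = frozenset({
    "science", "code", "math", "multilingual",
    "reasoning", "creative", "general",
})

def check_domain(domain: str) -> str:
    """Normalize a domain label and reject anything outside the supported set."""
    normalized = domain.strip().lower()
    if normalized not in SUPPORTED_DOMAINS:
        raise ValueError(
            f"unsupported domain {domain!r}; choose one of {sorted(SUPPORTED_DOMAINS)}"
        )
    return normalized
```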
Built in Morocco for the world. 🇲🇦

## License

[Business Source License 1.1](LICENSE)

**Author:** Salka Elmadani