AMD-AIG-AIMA / Instella
Fully Open Language Models with Stellar Performance
☆247 · Updated 3 weeks ago
Alternatives and similar repositories for Instella
Users interested in Instella are comparing it to the repositories listed below.
- ☆187 · Updated last year
- Pivotal Token Search · ☆121 · Updated last month
- Lightweight inference server for OpenVINO · ☆202 · Updated this week
- Docs for GGUF quantization (unofficial) · ☆251 · Updated last month
- A companion toolkit to pico-train for quantifying, comparing, and visualizing how language models evolve during training. · ☆107 · Updated 4 months ago
- Official repository for "NoLiMa: Long-Context Evaluation Beyond Literal Matching" · ☆144 · Updated last month
- ☆407 · Updated this week
- Sparse inferencing for transformer-based LLMs · ☆197 · Updated 2 weeks ago
- ☆95 · Updated 7 months ago
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model · ☆238 · Updated 3 months ago
- No-code CLI designed for accelerating ONNX workflows · ☆208 · Updated 2 months ago
- InferX is an Inference Function-as-a-Service platform · ☆129 · Updated last week
- 1.58-bit LLM on Apple Silicon using MLX · ☆221 · Updated last year
- See Through Your Models · ☆400 · Updated last month
- Editor with LLM generation tree exploration · ☆73 · Updated 6 months ago
- Run LLM Agents on Ryzen AI PCs in Minutes · ☆529 · Updated last week
- ☆403 · Updated this week
- ☆311 · Updated this week
- ☆197 · Updated 3 months ago
- GRadient-INformed MoE · ☆264 · Updated 11 months ago
- Live-bending a foundation model's output at the neural network level. · ☆266 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ☆289 · Updated last week
- ☆230 · Updated last month
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. · ☆319 · Updated 10 months ago
- A platform to self-host AI on easy mode · ☆159 · Updated 2 weeks ago
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… · ☆49 · Updated 3 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. · ☆102 · Updated last month
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… · ☆1,098 · Updated this week
- Train, tune, and infer the Bamba model · ☆131 · Updated 2 months ago
- ☆260 · Updated 2 months ago