AMD-AIG-AIMA / Instella
Fully Open Language Models with Stellar Performance
☆227 · Updated 3 weeks ago
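For quick experimentation, the Instella checkpoints can typically be loaded through the Hugging Face `transformers` API. The snippet below is a minimal sketch only; the model ID `amd/Instella-3B` and the need for `trust_remote_code` are assumptions, so check the repository's model card for the exact identifiers and loading flags.

```python
# Minimal sketch: generating text with an Instella checkpoint via transformers.
# The model ID "amd/Instella-3B" and trust_remote_code usage are assumptions;
# consult the Instella model card for the exact values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Instella-3B"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("AMD Instella is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```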
Alternatives and similar repositories for Instella:
Users interested in Instella are comparing it to the libraries listed below.
- ☆186 · Updated 8 months ago
- Lightweight Inference server for OpenVINO ☆160 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆331 · Updated last month
- Moxin is a family of fully open-source and reproducible LLMs ☆87 · Updated this week
- ☆56 · Updated last week
- GRadient-INformed MoE ☆262 · Updated 7 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆108 · Updated 2 months ago
- Local LLM Server with NPU Acceleration ☆164 · Updated this week
- AI Tensor Engine for ROCm ☆180 · Updated this week
- Modular, open-source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high-… (a minimal LiteLLM usage sketch follows this list) ☆93 · Updated 2 months ago
- Turns devices into a scalable LLM platform ☆128 · Updated this week
- Live-bending a foundation model’s output at the neural network level. ☆241 · Updated 3 weeks ago
- PyTorch implementation of models from the Zamba2 series. ☆179 · Updated 3 months ago
- 1.58-bit LLM on Apple Silicon using MLX ☆200 · Updated 11 months ago
- Work with LLMs in a local environment using containers ☆218 · Updated this week
- Kolosal AI is an open-source and lightweight alternative to LM Studio for running LLMs 100% offline on your device. ☆211 · Updated this week
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆132 · Updated 2 weeks ago
- A pure-Rust LLM inference engine (supporting any LLM-based MLLM such as Spark-TTS), powered by the Candle framework ☆100 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆88 · Updated last week
- Neo AI integrates into the Linux terminal and can execute system commands and provide helpful information. ☆103 · Updated last week
- GNOME Shell extension for accurate OFFLINE speech-to-text input on Linux using whisper.cpp. Input text from speech anywhere. ☆75 · Updated 2 weeks ago
- Kyutai with an "eye" ☆188 · Updated last month
- ☆94 · Updated 3 months ago
- ☆208 · Updated 3 months ago
- TPI-LLM: Serving 70b-scale LLMs Efficiently on Low-resource Edge Devices ☆177 · Updated 5 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆723 · Updated last month
- Clue-inspired puzzles for testing LLM deduction abilities ☆33 · Updated last month
- This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs. ☆38 · Updated this week
- Editor with LLM generation tree exploration ☆66 · Updated 2 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆94 · Updated this week
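As referenced at the LLMOps-stack entry above, LiteLLM's core idea is a single completion call that fronts many provider APIs. The sketch below is a minimal illustration of the public `litellm.completion` interface; the model names and the API key environment variable are assumptions, and the routing and cost-control layers of that stack are not shown.

```python
# Minimal sketch of LiteLLM's unified API (referenced from the LLMOps stack entry above).
# Model names and the API key environment variable are assumptions; swap in whatever
# providers your deployment actually routes to.
import os
import litellm

os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder credential

messages = [{"role": "user", "content": "Summarize what an LLMOps stack does in one sentence."}]

# The same call shape works across providers; only the model string changes.
for model in ("openai/gpt-4o-mini", "ollama/llama3"):
    try:
        response = litellm.completion(model=model, messages=messages)
        print(model, "->", response.choices[0].message.content)
    except Exception as exc:  # e.g. missing credentials or an unreachable local server
        print(model, "failed:", exc)
```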