pierrel55 / llama_st
Load and run Llama from safetensors files in C
☆11 · Updated last year
Alternatives and similar repositories for llama_st
Users interested in llama_st are also comparing it to the libraries listed below.
- Tiny Llama model trained to play chess ☆27 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- SwiftLet is a lightweight Python framework for running open-source Large Language Models (LLMs) locally using safetensors ☆28 · Updated 3 months ago
- 1.58-bit LLaMa model ☆83 · Updated last year
- entropix style sampling + GUI ☆27 · Updated last year
- Train your own small bitnet model ☆75 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 7 months ago
- ☆62 · Updated 3 months ago
- ☆105 · Updated 4 months ago
- Yet another frontend for LLM, written using .NET and WinUI 3 ☆10 · Updated last month
- ☆136 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- ☆51 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- ☆118 · Updated 4 months ago
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆97 · Updated 5 months ago
- Efficient non-uniform quantization with GPTQ for GGUF ☆52 · Updated last month
- Running Microsoft's BitNet via Electron, React & Astro ☆46 · Updated last month
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Inference of RWKV v7 in pure C. ☆41 · Updated 3 weeks ago
- QLoRA finetuning, 5x faster with 60% less memory ☆21 · Updated last year
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- Run ollama & gguf easily with a single command ☆52 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆44 · Updated last week
- ☆19 · Updated last month
- Lightweight C inference for Qwen3 GGUF. Multiturn prefix caching & batch processing. ☆17 · Updated 2 months ago
- FMS Model Optimizer is a framework for developing reduced-precision neural network models. ☆20 · Updated last week
- ☆17 · Updated 10 months ago