☆469 · Updated Nov 25, 2025
Alternatives and similar repositories for tokasaurus
Users interested in tokasaurus are comparing it to the libraries listed below.
- Storing long contexts in tiny caches with self-study ☆249 · Updated Dec 5, 2025
- Simple, flexible configuration in pure Python! ☆32 · Updated Jul 1, 2025
- A Model Context Protocol server that executes commands in the current WezTerm session ☆33 · Updated May 28, 2025
- ☆39 · Updated Aug 4, 2025
- Graph model execution API for Candle ☆17 · Updated Jul 27, 2025
- new optimizer ☆20 · Updated Aug 4, 2024
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆82 · Updated this week
- kernels, of the mega variety ☆690 · Updated this week
- Official PyTorch implementation of CD-MOE ☆12 · Updated Mar 13, 2026
- ☆11 · Updated Mar 24, 2025
- Muon is Scalable for LLM Training ☆1,446 · Updated Aug 3, 2025
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆413 · Updated Mar 3, 2026
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026
- Merliot Device Hub ☆166 · Updated Jun 11, 2025
- ☆238 · Updated Jan 5, 2026
- ☆14 · Updated Dec 21, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated Mar 15, 2026
- Everything about the SmolLM and SmolVLM family of models ☆3,675 · Updated Jan 13, 2026
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- Tile primitives for speedy kernels ☆3,232 · Updated this week
- ☆72 · Updated Jun 20, 2025
- ☆40 · Updated Aug 20, 2025
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆630 · Updated Mar 23, 2025
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs ☆16 · Updated Aug 12, 2025
- A simple, performant and scalable Jax LLM! ☆2,170 · Updated this week
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆615 · Updated Nov 24, 2025
- PyTorch native quantization and sparsity for training and inference ☆2,739 · Updated this week
- nyc is so back ☆21 · Updated Jun 27, 2025
- Entropy Based Sampling and Parallel CoT Decoding ☆3,434 · Updated Nov 13, 2024
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week
- Repository for Skill Set Optimization ☆14 · Updated Jul 26, 2024
- Source code for the collaborative reasoner research project at Meta FAIR. ☆112 · Updated Apr 17, 2025
- Our library for RL environments + evals ☆3,918 · Updated this week
- A tool for benchmarking LLMs on Modal ☆50 · Updated Aug 29, 2025
- 🕸 GlotCC Dataset and Pipeline (NeurIPS 2024) ☆20 · Updated Apr 6, 2025
- Minimalist ML framework for Rust ☆19 · Updated Dec 4, 2025
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆102 · Updated Jul 19, 2025
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,956 · Updated this week
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆118 · Updated May 19, 2025