Alternatives and similar repositories for tokasaurus (☆468, updated Nov 25, 2025)

Users interested in tokasaurus are comparing it to the libraries listed below.
- Storing long contexts in tiny caches with self-study (☆243, updated Dec 5, 2025)
- A simple no-install web UI for Ollama and OAI-Compatible APIs! (☆31, updated Jan 30, 2025)
- New optimizer (☆20, updated Aug 4, 2024)
- A Model Context Protocol server that executes commands in the current WezTerm session (☆33, updated May 28, 2025)
- A tool for benchmarking LLMs on Modal (☆48, updated Aug 29, 2025)
- ☆37 (updated Aug 4, 2025)
- Minimalistic large language model 3D-parallelism training (☆2,579, updated Feb 19, 2026)
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs (☆16, updated Aug 12, 2025)
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆403, updated this week)
- PyTorch-native quantization and sparsity for training and inference (☆2,707, updated this week)
- FlashInfer: Kernel Library for LLM Serving (☆5,009, updated Feb 23, 2026)
- Muon is Scalable for LLM Training (☆1,440, updated Aug 3, 2025)
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference (☆608, updated Nov 24, 2025)
- Merliot Device Hub (☆166, updated Jun 11, 2025)
- Everything about the SmolLM and SmolVLM family of models (☆3,636, updated Jan 13, 2026)
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. (☆102, updated Jul 19, 2025)
- Easy-to-use, open-source stealer (☆22, updated Jul 24, 2023)
- Tile primitives for speedy kernels (☆3,183, updated this week)
- SGLang is a high-performance serving framework for large language models and multimodal models. (☆23,905, updated this week)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆2,787, updated this week)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. (☆2,903, updated this week)
- A simple, performant, and scalable Jax LLM! (☆2,148, updated this week)
- Entropy Based Sampling and Parallel CoT Decoding (☆3,434, updated Nov 13, 2024)
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (☆112, updated May 19, 2025)
- Ask questions to your PDF (☆11, updated Jun 11, 2023)
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… (☆79, updated Feb 22, 2026)
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. (☆26, updated Feb 11, 2026)
- Easy and Efficient Quantization for Transformers (☆206, updated Jan 28, 2026)
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… (☆629, updated Mar 23, 2025)
- Our library for RL environments + evals (☆3,869, updated this week)
- Build your own visual reasoning model (☆419, updated Jan 13, 2026)
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python (☆6,184, updated Aug 22, 2025)
- Kernels, of the mega variety (☆679, updated Jan 29, 2026)
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs (☆3,728, updated May 21, 2025)
- Pivotal Token Search (☆145, updated Dec 20, 2025)
- ☆18 (updated Dec 9, 2025)
- Detect how uv was installed and get upgrade instructions (☆29, updated Jul 23, 2025)
- Official PyTorch implementation of CD-MOE (☆12, updated Mar 29, 2025)
- A thin Cython wrapper around llama.cpp, whisper.cpp, and stable-diffusion.cpp (☆16, updated Feb 10, 2026)