ScalingIntelligence / tokasaurus
☆ 466 · Updated 2 months ago
Alternatives and similar repositories for tokasaurus
Users interested in tokasaurus are comparing it to the repositories listed below.
- Storing long contexts in tiny caches with self-study (☆ 236, updated 2 months ago)
- Pytorch script hot swap: Change code without unloading your LLM from VRAM (☆ 125, updated 9 months ago)
- (no description) (☆ 219, updated last year)
- Simple & Scalable Pretraining for Neural Architecture Research (☆ 307, updated 2 months ago)
- MoE training for Me and You and maybe other people (☆ 335, updated last month)
- Reverse Engineering Gemma 3n: Google's New Edge-Optimized Language Model (☆ 262, updated 8 months ago)
- GRPO training code which scales to 32xH100s for long-horizon terminal/coding tasks. Base agent is now the top Qwen3 agent on Stanford's T… (☆ 344, updated 5 months ago)
- Pivotal Token Search (☆ 144, updated last month)
- (no description) (☆ 237, updated last month)
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. (☆ 347, updated last year)
- An implementation of bucketMul LLM inference (☆ 224, updated last year)
- Simple high-throughput inference library (☆ 155, updated 8 months ago)
- LLM Inference on consumer devices (☆ 129, updated 10 months ago)
- SIMD quantization kernels (☆ 94, updated 5 months ago)
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? (☆ 259, updated last month)
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP (☆ 141, updated 4 months ago)
- rl from zero pretrain, can it be done? yes. (☆ 286, updated 4 months ago)
- Samples of good AI-generated CUDA kernels (☆ 99, updated 8 months ago)
- FlexAttention-based, minimal vllm-style inference engine for fast Gemma 2 inference. (☆ 334, updated 3 months ago)
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (☆ 273, updated this week)
- Train your own SOTA deductive reasoning model (☆ 107, updated 11 months ago)
- (no description) (☆ 258, updated 11 months ago)
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research (☆ 287, updated this week)
- PyTorch implementation of models from the Zamba2 series. (☆ 186, updated last year)
- Felafax is building AI infra for non-NVIDIA GPUs (☆ 570, updated last year)
- Code for training & evaluating Contextual Document Embedding models (☆ 202, updated 8 months ago)
- A character-level language diffusion model trained on Tiny Shakespeare (☆ 849, updated 3 weeks ago)
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (☆ 112, updated 8 months ago)
- 👷 Build compute kernels (☆ 215, updated last week)
- (no description) (☆ 214, updated 2 weeks ago)