snowflakedb / ArcticTraining
ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs).
☆200 · Updated last week
Alternatives and similar repositories for ArcticTraining
Users interested in ArcticTraining are comparing it to the libraries listed below.
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆210 · Updated last week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆141 · Updated last year
- Load compute kernels from the Hub ☆244 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆327 · Updated 3 months ago
- Efficient LLM Inference over Long Sequences ☆389 · Updated 2 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆270 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆260 · Updated last month
- An extension of the nanoGPT repository for training small MoE models. ☆178 · Updated 5 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆177 · Updated 2 months ago
- ☆211 · Updated 6 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆124 · Updated 8 months ago
- PyTorch building blocks for the OLMo ecosystem ☆274 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆61 · Updated 10 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆210 · Updated 3 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆137 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 10 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆222 · Updated last week
- ☆217 · Updated 7 months ago
- The HELMET Benchmark ☆165 · Updated last week
- A family of compressed models obtained via pruning and knowledge distillation ☆348 · Updated 9 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆132 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the sketch after this list). Conceptually, spars… ☆344 · Updated 8 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speedup with better task performance… ☆151 · Updated 4 months ago
- A pipeline for LLM knowledge distillation ☆108 · Updated 4 months ago
- Reproducible, flexible LLM evaluations ☆237 · Updated last month
- KV cache compression for high-throughput LLM inference (see the eviction sketch after this list) ☆134 · Updated 6 months ago
- Easy and Efficient Quantization for Transformers ☆201 · Updated 2 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆213 · Updated 3 weeks ago
- Complex Function Calling Benchmark. ☆124 · Updated 7 months ago
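
The memory-layer item above describes a trainable key-value lookup that adds parameters without adding proportional compute. Below is a minimal PyTorch sketch of that idea: only the top-k of many memory slots contribute to each token's output. The `MemoryLayer` class, slot count, and `k` here are illustrative assumptions, not the API of the listed repository; real implementations typically use a product-key decomposition so that even the scoring step avoids touching every slot.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Sketch of a memory layer: a large learned key-value table queried sparsely."""

    def __init__(self, d_model: int, num_slots: int = 65536, k: int = 32):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * d_model ** -0.5)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * d_model ** -0.5)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = x @ self.keys.T                  # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)   # (batch, seq, k)
        top_values = self.values[top_idx]         # (batch, seq, k, d_model)
        # Only k value rows per token contribute to the output, so the
        # parameter count can grow far faster than the per-token compute.
        return (weights.unsqueeze(-1) * top_values).sum(dim=-2)

layer = MemoryLayer(d_model=256)
print(layer(torch.randn(2, 8, 256)).shape)  # torch.Size([2, 8, 256])
```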
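
The KV-cache-compression item above is about fitting longer contexts or larger batches into a fixed memory budget. The sketch below shows one common family of such techniques, attention-score-based eviction (in the spirit of heavy-hitter methods such as H2O): keep the cached positions that received the most cumulative attention, plus a recency window. The function name and parameters are illustrative assumptions, not the listed repository's API.

```python
import torch

def evict_kv(keys, values, attn_weights, budget: int, recent: int = 8):
    """keys/values: (seq, d). attn_weights: (num_queries, seq) attention rows."""
    seq_len = keys.size(0)
    if seq_len <= budget:
        return keys, values
    # Cumulative attention mass each cached position has received.
    scores = attn_weights.sum(dim=0)
    scores[-recent:] = float("inf")  # always keep the most recent tokens
    keep = scores.topk(budget).indices.sort().values  # preserve original order
    return keys[keep], values[keep]

k, v = torch.randn(128, 64), torch.randn(128, 64)
w = torch.rand(16, 128)
k2, v2 = evict_kv(k, v, w, budget=32)
print(k2.shape)  # torch.Size([32, 64])
```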