wdlctc / headinfer
☆39 · Updated last month
Alternatives and similar repositories for headinfer:
Users interested in headinfer are comparing it to the repositories listed below.
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆47 · Updated 8 months ago
- Work in progress. ☆50 · Updated last week
- QuIP quantization ☆52 · Updated last year
- ☆47 · Updated last week
- ☆53 · Updated 9 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆111 · Updated 3 months ago
- A collection of tricks and tools to speed up transformer models ☆145 · Updated this week
- Train, tune, and infer Bamba model ☆87 · Updated 2 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆56 · Updated 4 months ago
- ☆60 · Updated last week
- ☆112 · Updated this week
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆44 · Updated 8 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆125 · Updated 3 months ago
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ☆79 · Updated 2 weeks ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆42 · Updated last week
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆27 · Updated last week
- RWKV-7: Surpassing GPT ☆82 · Updated 4 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆148 · Updated 2 months ago
- A repository for research on medium sized language models. ☆76 · Updated 10 months ago
- ☆36 · Updated last month
- LLM Inference on consumer devices ☆95 · Updated last week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- ☆37 · Updated 5 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTo… ☆53 · Updated this week
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Fast LLM training codebase with dynamic strategy choosing [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆36 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 10 months ago
- Code and data releases for the paper — DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory ☆38 · Updated last month
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 3 months ago