SusCom-Lab / ZSMerge
☆16 · Updated 3 weeks ago
Alternatives and similar repositories for ZSMerge
Users interested in ZSMerge are comparing it to the libraries listed below.
- Make triton easier · ☆47 · Updated 11 months ago
- FlexAttention w/ FlashAttention3 Support · ☆26 · Updated 7 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". · ☆21 · Updated 6 months ago
- RWKV-7: Surpassing GPT · ☆84 · Updated 6 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Updated 9 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated · ☆32 · Updated 9 months ago
- Cascade Speculative Drafting · ☆29 · Updated last year
- Training hybrid models for dummies. · ☆21 · Updated 4 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models · ☆22 · Updated 5 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. · ☆24 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore · ☆27 · Updated 7 months ago
- MPI Code Generation through Domain-Specific Language Models · ☆13 · Updated 5 months ago
- DPO, but faster 🚀 · ☆42 · Updated 5 months ago
- Data preparation code for CrystalCoder 7B LLM · ☆44 · Updated last year
- Using FlexAttention to compute attention with different masking patterns · ☆43 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆126 · Updated 5 months ago
- A collection of reproducible inference engine benchmarks · ☆30 · Updated 3 weeks ago
- This repository contains code for the MicroAdam paper. · ☆18 · Updated 5 months ago
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" · ☆27 · Updated 3 weeks ago
- Implementation of Hyena Hierarchy in JAX · ☆10 · Updated 2 years ago
- QuIP quantization · ☆52 · Updated last year
- Linear Attention Sequence Parallelism (LASP) · ☆82 · Updated 11 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… · ☆44 · Updated 2 months ago