NolanoOrg / SpectraSuite
☆46 · Updated 9 months ago
Alternatives and similar repositories for SpectraSuite:
Users interested in SpectraSuite are comparing it to the libraries listed below.
- QuIP quantization ☆52 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- Work in progress. ☆58 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT ☆83 · Updated 5 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆48 · Updated 9 months ago
- A toolkit for fine-tuning, running inference with, and evaluating GreenBitAI's LLMs. ☆83 · Updated last month
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆44 · Updated last month
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆91 · Updated 2 weeks ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated last year
- Here we will test various linear attention designs. ☆60 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆45 · Updated 2 weeks ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 5 months ago
- Latent Large Language Models ☆18 · Updated 8 months ago
- Code for data-aware compression of DeepSeek models ☆21 · Updated 3 weeks ago
- ☆125 · Updated last year
- ☆16 · Updated last year
- ☆50 · Updated 6 months ago
- The code repository for the CURLoRA research paper. Stable LLM continual fine-tuning and catastrophic forgetting mitigation. ☆43 · Updated 8 months ago
- ☆131 · Updated last month
- ☆33 · Updated 10 months ago
- ☆78 · Updated 8 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆148 · Updated 3 weeks ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year