sebulo / LoQT
☆80 · Updated 5 months ago
Alternatives and similar repositories for LoQT:
Users interested in LoQT are comparing it to the libraries listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 5 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- RWKV-7: Surpassing GPT ☆84 · Updated 5 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆91 · Updated 2 weeks ago
- ☆78 · Updated 8 months ago
- ☆94 · Updated 3 months ago
- ☆68 · Updated 9 months ago
- This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024). ☆104 · Updated 10 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 6 months ago
- This is the official repository for Inheritune. ☆111 · Updated 3 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆231 · Updated 3 months ago
- ☆25 · Updated 3 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆199 · Updated 9 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- Work in progress. ☆58 · Updated last month
- ☆131 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆60 · Updated 3 months ago
- This repository contains code for the MicroAdam paper. ☆18 · Updated 4 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 2 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer params, and less computation. Dramatic speed-up with better task performance… ☆148 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- ☆198 · Updated 5 months ago
- PyTorch implementation of models from the Zamba2 series. ☆180 · Updated 3 months ago
- ☆33 · Updated 10 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- ☆49 · Updated last year
- ☆50 · Updated 11 months ago
- ☆54 · Updated last month
- Token Omission Via Attention ☆126 · Updated 6 months ago