nanomaoli / llm_reproducibility
☆23 · Updated last week
Alternatives and similar repositories for llm_reproducibility
Users interested in llm_reproducibility are comparing it to the libraries listed below.
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 9 months ago
- ☆58 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs ☆69 · Updated this week
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last month
- The official implementation of paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆46 · Updated 8 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated 11 months ago
- ☆114 · Updated 3 weeks ago
- ☆76 · Updated 4 months ago
- ☆36 · Updated last week
- Fast and memory-efficient exact attention ☆68 · Updated 3 months ago
- ☆14 · Updated last month
- ☆51 · Updated 3 months ago
- ☆32 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- Cascade Speculative Drafting ☆29 · Updated last year
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆28 · Updated 4 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆131 · Updated last week
- ☆28 · Updated 4 months ago
- Pytorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆39 · Updated last year
- ☆21 · Updated last month
- Stick-breaking attention ☆57 · Updated last week
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆70 · Updated last week
- ☆47 · Updated last week
- Linear Attention Sequence Parallelism (LASP) ☆84 · Updated last year
- Awesome Triton Resources ☆31 · Updated last month
- Kinetics: Rethinking Test-Time Scaling Laws ☆29 · Updated this week
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆166 · Updated this week
- 16-fold memory access reduction with nearly no loss ☆99 · Updated 2 months ago
- ☆36 · Updated this week