lucidrains / CALM-pytorch
Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition" by Google DeepMind
☆177 · Updated 10 months ago
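CALM's core idea is to compose a frozen "anchor" LLM with a frozen "augmenting" LLM by learning cross-attention between their intermediate layer representations, so only the small bridging layers are trained. Below is a minimal, illustrative sketch of one such composition block in plain PyTorch; it is not the CALM-pytorch API, and the `ComposeBlock` name and dimensions are hypothetical:

```python
import torch
from torch import nn

class ComposeBlock(nn.Module):
    """One cross-attention bridge: anchor hidden states attend to
    (projected) augmenting-model hidden states, with a residual add."""
    def __init__(self, anchor_dim, aug_dim, heads=8):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)   # map augmenting dim -> anchor dim
        self.attn = nn.MultiheadAttention(anchor_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(anchor_dim)

    def forward(self, anchor_hidden, aug_hidden):
        # anchor_hidden: (batch, seq, anchor_dim); aug_hidden: (batch, seq, aug_dim)
        kv = self.proj(aug_hidden)
        attended, _ = self.attn(anchor_hidden, kv, kv)
        return self.norm(anchor_hidden + attended)   # residual cross-attention

# Toy usage with random stand-ins for the two models' layer activations.
anchor_h = torch.randn(2, 16, 512)   # e.g. from a large anchor model
aug_h = torch.randn(2, 16, 256)      # e.g. from a smaller augmenting model
block = ComposeBlock(anchor_dim=512, aug_dim=256)
out = block(anchor_h, aug_h)
print(out.shape)  # torch.Size([2, 16, 512])
```

In the paper's setup, several such blocks are inserted at chosen layer pairs of the two frozen models, and only the projection and attention parameters are optimized.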
Alternatives and similar repositories for CALM-pytorch
Users interested in CALM-pytorch are comparing it to the repositories listed below.
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆144 · Updated 9 months ago
- This is the official repository for Inheritune. ☆111 · Updated 5 months ago
- Recurrent Memory Transformer ☆150 · Updated last year
- Scaling Data-Constrained Language Models ☆337 · Updated 2 weeks ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆177 · Updated 3 weeks ago
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆162 · Updated 2 months ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆201 · Updated 9 months ago
- X-LoRA: Mixture of LoRA Experts ☆231 · Updated 11 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆194 · Updated 11 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆174 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆223 · Updated 7 months ago
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆76 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆203 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆206 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆427 · Updated last year