IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆225 · Updated last year
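For orientation, the sketch below illustrates the sparse mixture-of-experts idea that ModuleFormer builds on: each token is routed to a small subset of experts, and the selected experts' outputs are combined using the router's weights. This is a minimal, generic PyTorch illustration, not code from the repository; the class name `MoEFeedForward`, the dimensions, and the top-2 routing are assumptions made for the example, and ModuleFormer's stick-breaking attention heads and attention-expert routing are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Sparse mixture-of-experts feedforward layer with top-k token routing.

    A generic sketch of the SMoE pattern; ModuleFormer additionally routes
    among stick-breaking attention-head experts, which is omitted here.
    """

    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x):
        # x: (batch, seq, d_model) -> flatten so routing is per token
        tokens = x.reshape(-1, x.size(-1))
        gate_logits = self.router(tokens)                       # (n_tokens, n_experts)
        weights, expert_idx = gate_logits.topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # which (token, slot) pairs were routed to expert e
            token_ids, slot = (expert_idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = MoEFeedForward()
    y = layer(torch.randn(2, 16, 512))
    print(y.shape)  # torch.Size([2, 16, 512])
```

Because only k of the n_experts MLPs run per token, parameter count can grow without a proportional increase in per-token compute, which is how MoLM-style models scale to billions of parameters while keeping inference cost closer to a smaller dense model.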
Alternatives and similar repositories for ModuleFormer
Users interested in ModuleFormer are comparing it to the repositories listed below
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆206 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆271 · Updated last year
- ☆96 · Updated 2 years ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆138 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 2 months ago
- ☆127 · Updated 11 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 9 months ago
- ☆202 · Updated 9 months ago
- Pre-training code for Amber 7B LLM ☆167 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆150 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated 11 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 11 months ago
- ☆128 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆88 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆215 · Updated last month
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- ☆122 · Updated 6 months ago
- Evaluating LLMs with fewer examples ☆161 · Updated last year
- A repository for transformer critique learning and generation ☆90 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year