IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆226 · Updated 4 months ago
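The MoLM checkpoints are published as Hugging Face models that ship their own ModuleFormer modeling code, so loading them generally requires `trust_remote_code=True`. Below is a minimal loading sketch; the checkpoint ID `ibm/MoLM-350M-4B` and the prompt are illustrative assumptions, so check the repository's model list for the actual released IDs.

```python
# Hedged sketch: load a MoLM checkpoint with Hugging Face transformers.
# The checkpoint ID "ibm/MoLM-350M-4B" is an assumption; substitute an ID
# actually listed in the ModuleFormer repo. trust_remote_code=True pulls in
# the custom ModuleFormer architecture code shipped alongside the checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm/MoLM-350M-4B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Mixture-of-experts language models route each token to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the architecture is sparsely activated, only a subset of the attention and feedforward experts runs for each token, which keeps inference cost well below what the 4B- and 8B-parameter totals suggest.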
Alternatives and similar repositories for ModuleFormer
Users interested in ModuleFormer are comparing it to the libraries listed below.
- Code repository for the c-BTM paper · ☆108 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning · ☆202 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) · ☆205 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs · ☆204 · Updated last year
- Pre-training code for Amber 7B LLM · ☆170 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆118 · Updated last year
- Experiments on speculative sampling with Llama models · ☆127 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite · ☆93 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters · ☆280 · Updated last year
- ☆95 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs · ☆119 · Updated last year
- ☆203 · Updated last year
- Data preparation code for Amber 7B LLM · ☆94 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification · ☆112 · Updated last year
- Scaling Data-Constrained Language Models · ☆342 · Updated 7 months ago
- Functional Benchmarks and the Reasoning Gap · ☆89 · Updated last year
- ☆207 · Updated 3 weeks ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind · ☆179 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" · ☆316 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs · ☆73 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation · ☆222 · Updated 2 years ago
- batched loras · ☆349 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆280 · Updated 2 years ago
- Public Inflection Benchmarks · ☆68 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) · ☆198 · Updated last year
- Evaluating LLMs with fewer examples · ☆169 · Updated last year
- A repository for transformer critique learning and generation · ☆89 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- ☆130 · Updated last year