IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆215 · Updated 9 months ago
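The feedforward-expert half of this design is a sparse mixture-of-experts layer. Below is a minimal PyTorch sketch of such a layer with top-k routing; it illustrates the general technique, not ModuleFormer's actual implementation, and every class name and hyperparameter in it is illustrative. Stick-breaking attention, the other expert type, is a separate attention mechanism not shown here.

```python
# Minimal sketch of a mixture-of-experts feedforward layer with top-k routing,
# in the spirit of ModuleFormer's feedforward experts. Illustrative only:
# names and hyperparameters are assumptions, not ModuleFormer's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # scores each token against each expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); each token is routed to its top-k experts
        logits = self.router(x)                         # (B, S, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # top-k expert choices per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Toy forward pass
layer = MoEFeedForward(d_model=64, d_hidden=256)
y = layer(torch.randn(2, 10, 64))  # -> (2, 10, 64)
```

Only the selected experts run for each token, so compute per token stays roughly constant as the expert count grows; that is the usual motivation for MoE layers like this one.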
Alternatives and similar repositories for ModuleFormer:
Users interested in ModuleFormer are comparing it to the libraries listed below.
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024)☆204 · Updated 7 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline"☆114 · Updated 7 months ago
- Code repository for the c-BTM paper☆105 · Updated last year
- ☆94 · Updated last year
- ☆115 · Updated 3 months ago
- Functional Benchmarks and the Reasoning Gap☆82 · Updated 3 months ago
- A simple unified framework for evaluating LLMs☆164 · Updated 3 weeks ago
- Just a bunch of benchmark logs for different LLMs☆116 · Updated 5 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters☆247 · Updated 6 months ago
- batched loras☆336 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models"☆263 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification☆100 · Updated last month
- Experiments on speculative sampling with Llama models☆122 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore"☆153 · Updated last month
- Simple next-token-prediction for RLHF☆222 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning☆193 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs☆184 · Updated 5 months ago
- ☆150 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024)☆126 · Updated 2 months ago
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation"☆217 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss (a minimal SLERP sketch follows this list)☆115 · Updated last year
- ☆113 · Updated 2 months ago
- The dataset and code for the paper "TheoremQA: A Theorem-driven Question Answering Dataset"☆155 · Updated 8 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization"☆80 · Updated 10 months ago
- Pre-training code for the Amber 7B LLM☆160 · Updated 8 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file.☆154 · Updated 2 months ago
- ☆108 · Updated 3 months ago
- This is the official repository for Inheritune.☆108 · Updated 3 months ago
- Scaling Data-Constrained Language Models☆330 · Updated 3 months ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization"☆175 · Updated last month
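As a companion to the spherical-merge entry above, here is a minimal PyTorch sketch of SLERP (spherical linear interpolation) applied to two weight tensors, the core operation behind spherical model merging. It is an illustration under simplifying assumptions (whole-tensor interpolation with a single global factor `t`), not that repository's implementation.

```python
# Minimal SLERP sketch for merging two weight tensors. Illustrative only:
# real merge tools support per-layer factors and many more edge cases.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit sphere
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Merging two hypothetical checkpoints tensor-by-tensor at t = 0.5:
# merged = {name: slerp(sd_a[name], sd_b[name], 0.5) for name in sd_a}
```

Interpolating along the sphere rather than the chord keeps the merged weights at a comparable norm to the parents, which is the usual argument for SLERP over plain averaging when merging language models.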