IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆217 · updated 11 months ago
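
The stick-breaking attention heads replace the usual softmax over key positions with a stick-breaking allocation: each past position claims a sigmoid-gated fraction of whatever attention mass the more recent positions have left over, so recency is built into the weighting and no positional embedding is required. Below is a minimal single-head sketch of that weighting in PyTorch; it is an illustrative reimplementation under stated assumptions (naive O(n²) computation, strictly-causal masking, hypothetical function name), not the repository's code.

```python
import torch

def stick_breaking_attention(q, k, v):
    """Single-head, strictly-causal stick-breaking attention (naive O(n^2) sketch).

    q, k, v: (seq_len, d) tensors.  Key position j < i gets weight
    beta[i, j] * prod_{j < t < i} (1 - beta[i, t]): the most recent
    positions break off their share of the probability "stick" first.
    """
    seq_len, d = q.shape
    beta = torch.sigmoid(q @ k.T / d ** 0.5)            # (seq_len, seq_len)

    # Strictly-causal mask: position i may look at j < i only.
    causal = torch.ones(seq_len, seq_len).tril(-1).bool()
    log_rest = torch.where(causal, torch.log1p(-beta), torch.zeros_like(beta))

    # Suffix sum of log(1 - beta) over the keys strictly between j and i.
    between = log_rest.flip(-1).cumsum(-1).flip(-1) - log_rest

    weights = torch.where(causal, beta * between.exp(), torch.zeros_like(beta))
    return weights @ v                                   # (seq_len, d)

# Tiny smoke test on random tensors (row 0 has no past, so its output is zero).
q = k = v = torch.randn(8, 16)
out = stick_breaking_attention(q, k, v)
```

Because the weights depend only on content and relative recency, these heads need no positional embeddings; in the full architecture both expert types (attention heads and feedforward blocks) are sparsely activated per token, which is the mixture-of-experts part of the design.
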
Alternatives and similar repositories for ModuleFormer:
Users interested in ModuleFormer are comparing it to the repositories listed below
- Code repository for the c-BTM paper ☆106 · updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · updated 10 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · updated 7 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · updated 8 months ago
- Simple next-token-prediction for RLHF ☆222 · updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆272 · updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · updated 9 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆125 · updated 3 months ago
- Experiments on speculative sampling with Llama models ☆125 · updated last year
- Self-Alignment with Principle-Following Reward Models ☆156 · updated last year
- Evaluating LLMs with CommonGen-Lite ☆89 · updated last year
- Scaling Data-Constrained Language Models ☆335 · updated 6 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆196 · updated last week
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆186 · updated 4 months ago
- Token Omission Via Attention ☆124 · updated 5 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file ☆166 · updated 3 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆85 · updated last week
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆111 · updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆299 · updated last year
- Functional Benchmarks and the Reasoning Gap ☆84 · updated 6 months ago
- batched loras ☆340 · updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆131 · updated 4 months ago