IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆222 · Updated last year
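The description above mentions a mixture-of-experts design with two expert types. As a rough orientation only, below is a minimal PyTorch sketch of token-level top-k routing over feedforward experts. It is not the ModuleFormer implementation: the class name `MoEFeedForward`, the dimensions, and the routing scheme are illustrative assumptions, and the stick-breaking attention experts are omitted entirely.

```python
# Minimal sketch of a top-k routed mixture-of-experts feedforward block.
# NOT the official IBM ModuleFormer code; names, sizes, and routing are
# illustrative assumptions, and stick-breaking attention is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)      # token -> expert logits
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                # x: (batch, seq, d_model)
        logits = self.router(x)                          # (batch, seq, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick k experts per token
        weights = F.softmax(weights, dim=-1)             # normalize the k weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = MoEFeedForward()
    y = layer(torch.randn(2, 16, 512))
    print(y.shape)  # torch.Size([2, 16, 512])
```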
Alternatives and similar repositories for ModuleFormer
Users that are interested in ModuleFormer are comparing it to the libraries listed below
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆192 · Updated 10 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆198 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆219 · Updated last year
- DSIR large-scale data selection framework for language model training ☆251 · Updated last year
- Scaling Data-Constrained Language Models ☆337 · Updated this week
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆110 · Updated last year
- ☆126 · Updated last year
- ☆150 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 11 months ago
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆130 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆113 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last month
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 4 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆227 · Updated 7 months ago
- A repository for transformer critique learning and generation ☆90 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated last year
- The dataset and code for the paper "TheoremQA: A Theorem-driven Question Answering Dataset" ☆157 · Updated last year
- ☆95 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆379 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆261 · Updated 11 months ago
- ☆270 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆304 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆289 · Updated 4 months ago