IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
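For orientation, below is a minimal sketch of the feedforward-expert half of such a design: a generic top-k-routed mixture-of-experts FFN in PyTorch. The layer sizes, router, and routing loop are illustrative assumptions, not ModuleFormer's actual implementation (which additionally replaces dense attention with stick-breaking attention heads).

```python
# Generic top-k routed mixture-of-experts FFN (illustrative sketch, not ModuleFormer's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=1024, d_hidden=4096, num_experts=32, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # token -> expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (num_tokens, d_model); each token is routed to its top_k experts.
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = MoEFeedForward()
    tokens = torch.randn(8, 1024)
    print(layer(tokens).shape)  # torch.Size([8, 1024])
```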
☆220 · Updated last year
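The released MoLM checkpoints can be loaded through Hugging Face transformers with `trust_remote_code`, since ModuleFormer is a custom architecture shipped with the checkpoint. A hedged sketch follows; the model ID used here is an assumed name based on the MoLM naming scheme, so consult the repo's model cards for the exact identifiers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "ibm/MoLM-700M-4B" is an assumption; check the repo's model cards for exact IDs.
# trust_remote_code is required because ModuleFormer is a custom architecture.
model_id = "ibm/MoLM-700M-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Mixture-of-experts language models scale by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```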
Alternatives and similar repositories for ModuleFormer:
Users interested in ModuleFormer are comparing it to the repositories listed below
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 8 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆196 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆205 · Updated 11 months ago
- DSIR large-scale data selection framework for language model training ☆246 · Updated last year
- ☆94 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- Pre-training code for Amber 7B LLM ☆166 · Updated 11 months ago
- The official repo for "LLoCO: Learning Long Contexts Offline" ☆116 · Updated 10 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- ☆255 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆89 · Updated last year
- ☆197 · Updated 4 months ago
- Simple next-token-prediction for RLHF ☆225 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆133 · Updated 5 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆120 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆160 · Updated last year
- ☆412 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 7 months ago
- Batched LoRAs ☆341 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆218 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆221 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆274 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- ☆92 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆187 · Updated 2 weeks ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆142 · Updated 7 months ago