IBM / ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆222 · Updated last year
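The description above outlines ModuleFormer's mixture-of-experts design at a high level. As a rough illustration of the general idea, the sketch below implements a generic top-k routed MoE feedforward block in PyTorch; it is not ModuleFormer's code, it omits the stick-breaking attention experts and any load-balancing losses, and all names and dimensions (`MoEFeedForward`, `d_model`, `n_experts`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Generic top-k routed mixture-of-experts feedforward block (illustrative only)."""

    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (batch, seq, d_model)
        scores = self.router(x)                  # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e          # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Shape check: output matches input; each token mixes the outputs of its top-k experts.
layer = MoEFeedForward()
print(layer(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```

Per the description above, ModuleFormer applies this kind of sparse expert selection not only to feedforward blocks but also to attention heads (its stick-breaking attention experts), which the sketch does not attempt to reproduce.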
Alternatives and similar repositories for ModuleFormer
Users interested in ModuleFormer are comparing it to the libraries listed below
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆203 · Updated last year
- Scaling Data-Constrained Language Models ☆335 · Updated 9 months ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- batched loras ☆343 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆260 · Updated 11 months ago
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆129 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆137 · Updated 7 months ago
- ☆126 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆197 · Updated 2 years ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆177 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 9 months ago
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆112 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆198 · Updated 6 months ago
- ☆264 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆257 · Updated last year
- ☆123 · Updated 8 months ago
- ☆95 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆224 · Updated 7 months ago
- Functional Benchmarks and the Reasoning Gap ☆87 · Updated 8 months ago