IBM / ModuleFormer
ModuleFormer is an MoE-based architecture with two types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based language models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
☆223 · Updated last year
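For readers new to the architecture: the feedforward-expert half of such an MoE layer follows the standard sparse routing pattern, where a learned router scores each token and dispatches it to a small number of experts whose outputs are combined using the routing weights. The PyTorch sketch below illustrates that generic top-k routing idea under assumed names and sizes; it is not ModuleFormer's actual implementation, which additionally uses stick-breaking attention heads as a second expert type.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Minimal top-k mixture-of-experts feedforward layer.

    Illustrative sketch only, not ModuleFormer's code; class, parameter,
    and size names here are hypothetical."""

    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, k=2):
        super().__init__()
        self.k = k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoEFeedForward()
y = layer(torch.randn(2, 16, 512))  # -> (2, 16, 512)
```

Real implementations typically gather tokens per expert for batched matmuls and add a load-balancing loss so routing does not collapse onto a few experts; the loop above is written for clarity, not speed.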
Alternatives and similar repositories for ModuleFormer
Users interested in ModuleFormer are comparing it to the repositories listed below.
- Code repository for the c-BTM paper ☆107 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆198 · Updated 11 months ago
- ☆95 · Updated 2 years ago
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 11 months ago
- ☆124 · Updated 9 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆106 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆267 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆227 · Updated 5 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆113 · Updated last year
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆133 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆219 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆140 · Updated 8 months ago
- ☆199 · Updated 7 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 9 months ago
- Scaling Data-Constrained Language Models ☆338 · Updated 3 weeks ago
- ☆134 · Updated last year
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- A repository for transformer critique learning and generation ☆90 · Updated last year
- Batched LoRAs ☆344 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆231 · Updated 8 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆100 · Updated last year