ajtejankar / mixtral-vis-moe
Visualize expert firing frequencies across sentences in the Mixtral MoE model
☆18 · Updated last year
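As background for the visualization, per-token expert assignments in Mixtral can be read out of the router logits that the Hugging Face transformers implementation exposes. The sketch below is not code from this repository; it assumes the mistralai/Mixtral-8x7B-v0.1 checkpoint and the `output_router_logits` flag, and simply counts how often each expert is selected for a single sentence:

```python
# Minimal sketch (not mixtral-vis-moe code): count expert activations for a sentence
# using the router logits returned by transformers' Mixtral implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumption: any Mixtral-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

sentence = "Mixture-of-experts models route each token to a few experts."
inputs = tokenizer(sentence, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, output_router_logits=True)

# out.router_logits is a tuple with one (num_tokens, num_experts) tensor per MoE layer.
num_experts = model.config.num_local_experts   # 8 for Mixtral-8x7B
top_k = model.config.num_experts_per_tok       # 2 experts fire per token
counts = torch.zeros(num_experts, dtype=torch.long)
for layer_logits in out.router_logits:
    chosen = layer_logits.topk(top_k, dim=-1).indices  # experts selected for each token
    counts += torch.bincount(chosen.flatten().cpu(), minlength=num_experts)

print("Expert firing counts summed over all layers:", counts.tolist())
```

Aggregating such counts per sentence (and per layer) yields the kind of firing-frequency statistic that a tool like this would plot.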
Alternatives and similar repositories for mixtral-vis-moe
Users interested in mixtral-vis-moe are comparing it to the libraries listed below.
- Experiments on speculative sampling with Llama models · ☆127 · Updated 2 years ago
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆118 · Updated last year
- Lightweight toolkit to train and fine-tune 1.58-bit language models · ☆103 · Updated 7 months ago
- Data preparation code for Amber 7B LLM · ☆94 · Updated last year
- PB-LLM: Partially Binarized Large Language Models · ☆157 · Updated 2 years ago
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training · ☆101 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆59 · Updated 2 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ☆111 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs · ☆119 · Updated last year
- Small, simple agent task environments for training and evaluation · ☆19 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) · ☆80 · Updated last year
- Simple high-throughput inference library · ☆152 · Updated 7 months ago
- A repository for research on medium-sized language models · ☆77 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆226 · Updated 3 months ago
- An implementation of Self-Extend to expand the context window via grouped attention · ☆119 · Updated last year
- LM engine is a library for pretraining/finetuning LLMs · ☆77 · Updated last week
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models · ☆88 · Updated last month
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research · ☆272 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs · ☆73 · Updated last year
- Streamline on-policy/off-policy distillation workflows in a few lines of code · ☆81 · Updated this week