hydrallm / llama-moe-v1
☆95 · Jul 26, 2023 · Updated 2 years ago
Alternatives and similar repositories for llama-moe-v1
Users that are interested in llama-moe-v1 are comparing it to the libraries listed below
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆83 · Sep 10, 2023 · Updated 2 years ago
- Token-level adaptation of LoRA matrices for downstream task generalization. ☆15 · Apr 14, 2024 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Sep 11, 2023 · Updated 2 years ago
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Feb 11, 2024 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Sep 26, 2023 · Updated 2 years ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Jul 12, 2023 · Updated 2 years ago
- ☆11 · Jul 30, 2016 · Updated 9 years ago
- ☆14 · Feb 7, 2024 · Updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated last year
- Distributes tasks to a network of GPUs efficiently and uploads the results. ☆16 · Aug 13, 2024 · Updated last year
- Direct preference optimization with only one model copy :) ☆14 · Oct 2, 2023 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆14 · Oct 27, 2021 · Updated 4 years ago
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,657 · Mar 8, 2024 · Updated last year
- The intermediate goal of the project is to train a GPT-like architecture to learn to summarise reddit posts from human preferences, as th… ☆12 · Jul 14, 2021 · Updated 4 years ago
- ACL 2023 short paper: Balancing Lexical and Semantic Quality in Abstractive Summarization ☆16 · Dec 18, 2023 · Updated 2 years ago
- Using modal.com to process FineWeb-edu data ☆20 · Apr 5, 2025 · Updated 10 months ago
- ☆17 · Dec 9, 2022 · Updated 3 years ago
- A library for squeakily cleaning and filtering language datasets. ☆50 · Jul 10, 2023 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · Nov 3, 2023 · Updated 2 years ago
- PyTorch reimplementation of the paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization" ☆16 · Oct 17, 2021 · Updated 4 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Sep 18, 2025 · Updated 4 months ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆114 · May 2, 2022 · Updated 3 years ago
- ☆21 · Oct 6, 2023 · Updated 2 years ago
- Official implementation of Vector-ICL: In-context Learning with Continuous Vector Representations (ICLR 2025) ☆21 · Jun 2, 2025 · Updated 8 months ago
- Code release for Type-Aware Bi-Encoders for Open-Domain Entity Retrieval ☆19 · Sep 24, 2022 · Updated 3 years ago
- RAG Agent for the ARC AGI Challenge ☆20 · Jul 1, 2024 · Updated last year
- ☆22 · Aug 27, 2023 · Updated 2 years ago
- ☆23 · Jul 10, 2023 · Updated 2 years ago
- The simplest repository for training medium-sized BackpackLM for cs224n ☆25 · Aug 13, 2023 · Updated 2 years ago
- Finetune Malaysian LLM for Malaysian context embedding task. ☆23 · Apr 27, 2024 · Updated last year
- Dialogue generation models (GPT-2 and Meena) of Pingpong, ScatterLab. ☆21 · Nov 15, 2021 · Updated 4 years ago
- A Collection of Pydantic Models to Abstract IRL ☆36 · Dec 10, 2025 · Updated 2 months ago
- ☆273 · Oct 31, 2023 · Updated 2 years ago
- Helpers and such for working with Lambda Cloud ☆52 · Nov 7, 2023 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Sep 20, 2024 · Updated last year
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆160 · Jan 1, 2025 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Jun 11, 2025 · Updated 8 months ago
- Korean LLM leaderboard and model performance/safety management ☆22 · Sep 26, 2023 · Updated 2 years ago