Alternatives and similar repositories for llama-moe-v1 (☆95, updated Jul 26, 2023)

Users interested in llama-moe-v1 are comparing it to the libraries listed below.
- ☆415 (updated Nov 2, 2023)
- Token-level adaptation of LoRA matrices for downstream task generalization (☆15, updated Apr 14, 2024)
- QLoRA for Masked Language Modeling (☆23, updated Sep 11, 2023)
- A repository of projects and datasets under active development by Alignment Lab AI (☆22, updated Dec 22, 2023)
- Code repository for the c-BTM paper (☆108, updated Sep 26, 2023)
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format (☆27, updated Jul 12, 2023)
- ☆11 (updated Jul 30, 2016)
- ☆14 (updated Feb 7, 2024)
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" (☆91, updated Feb 27, 2024)
- Direct preference optimization with only one model copy :) (☆14, updated Oct 2, 2023)
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning (☆14, updated Oct 27, 2021)
- A family of open-sourced Mixture-of-Experts (MoE) large language models (☆1,664, updated Mar 8, 2024)
- ACL 2023 short paper: Balancing Lexical and Semantic Quality in Abstractive Summarization (☆16, updated Dec 18, 2023)
- ☆17 (updated Dec 9, 2022)
- Using modal.com to process FineWeb-edu data (☆20, updated Apr 5, 2025)
- ☆14 (updated Nov 20, 2022)
- Code for the NAACL 2022 Findings long paper "AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NL…" (☆18, updated May 4, 2022)
- A library for squeakily cleaning and filtering language datasets (☆50, updated Jul 10, 2023)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆281, updated Nov 3, 2023)
- PyTorch reimplementation of the paper "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization" (☆16, updated Oct 17, 2021)
- ModuleFormer, a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… (☆226, updated Sep 18, 2025)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,002, updated Dec 6, 2024)
- Customizable implementation of the self-instruct paper (☆1,050, updated Mar 7, 2024)
- PyTorch implementation of MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) (☆114, updated May 2, 2022)
- ☆21 (updated Oct 6, 2023)
- Official implementation of "Vector-ICL: In-context Learning with Continuous Vector Representations" (ICLR 2025) (☆21, updated Jun 2, 2025)
- Code release for Type-Aware Bi-Encoders for Open-Domain Entity Retrieval (☆19, updated Sep 24, 2022)
- ☆22 (updated Aug 27, 2023)
- ☆23 (updated Jul 10, 2023)
- Fine-tune a Malaysian LLM for a Malaysian-context embedding task (☆23, updated Apr 27, 2024)
- The simplest repository for training a medium-sized BackpackLM for CS224N (☆25, updated Aug 13, 2023)
- Data collator for UL2 and U-PaLM (☆29, updated Aug 20, 2023)
- Merge Transformers language models by use of gradient parameters (☆214, updated Aug 8, 2024)
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP 2024) (☆145, updated Sep 20, 2024)
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE) (☆159, updated Jan 1, 2025)
- Korean-language LLM leaderboard and model performance/safety management (☆22, updated Sep 26, 2023)
- Lite Self-Training (☆30, updated Jul 25, 2023)
- ☆24 (updated Oct 8, 2024)
- 🚀 Template Haystack search application with Streamlit (☆27, updated Jan 16, 2025)