hydrallm / llama-moe-v1
☆95 · Updated last year
Alternatives and similar repositories for llama-moe-v1
Users interested in llama-moe-v1 are comparing it to the libraries listed below.
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆129 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆123 · Updated 8 months ago
- ☆92 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆197 · Updated 2 years ago
- A bagel, with everything. ☆321 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆222 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer. ☆154 · Updated last year
- A repository for transformer critique learning and generation ☆90 · Updated last year
- ☆87 · Updated last year
- Token-level adaptation of LoRA matrices for downstream task generalization. ☆14 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆203 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 9 months ago
- Low-Rank adapter extraction for fine-tuned transformer models ☆173 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach. ☆168 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 7 months ago
- ☆73 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆112 · Updated last year