fanqiwan / FuseAI
FuseAI Project
☆497 · Updated this week
Alternatives and similar repositories for FuseAI:
Users interested in FuseAI are comparing it to the repositories listed below.
- [ACL 2024] Progressive LLaMA with Block Expansion ☆496 · Updated 8 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆531 · Updated last month
- ☆489 · Updated 2 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆450 · Updated 10 months ago
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆587 · Updated last week
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆238 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆536 · Updated last month
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆347 · Updated 4 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆637 · Updated 7 months ago
- Codebase for Merging Language Models (ICML 2024) ☆793 · Updated 8 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆581 · Updated 10 months ago
- Official repository for ORPO ☆432 · Updated 7 months ago
- ☆251 · Updated 6 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆387 · Updated 9 months ago
- Generative Representational Instruction Tuning ☆588 · Updated last week
- ☆250 · Updated last year
- RewardBench: the first evaluation tool for reward models ☆493 · Updated this week
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆610 · Updated 6 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆378 · Updated 6 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆431 · Updated 5 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆694 · Updated 4 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆803 · Updated 2 months ago
- ☆304 · Updated 7 months ago
- An Open Source Toolkit for LLM Distillation ☆442 · Updated 3 weeks ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆383 · Updated 3 months ago
- A series of technical reports on Slow Thinking with LLMs ☆359 · Updated this week
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆293 · Updated 4 months ago
- Official repository of NEFTune: Noisy Embeddings Improves Instruction Finetuning ☆388 · Updated 8 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆540 · Updated 10 months ago