jb-01 / LoRA-TLE
Token-level adaptation of LoRA matrices for downstream task generalization.
☆14 · Updated last year
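The tagline above is terse, so here is a minimal, hypothetical sketch of what token-level adaptation of LoRA matrices can look like: a frozen base weight plus a small bank of low-rank `(B, A)` pairs, blended with per-token weights so the effective weight matrix varies token by token. The expert count, shapes, and mixing weights here are illustrative assumptions, not LoRA-TLE's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts, seq = 16, 4, 3, 5  # model dim, LoRA rank, LoRA pairs, tokens

# Frozen base weight and a bank of LoRA pairs; each update is B[e] @ A[e].
W = rng.normal(size=(d, d))
A = rng.normal(size=(n_experts, r, d)) * 0.01
B = rng.normal(size=(n_experts, d, r)) * 0.01

def token_level_lora(x, weights):
    """x: (seq, d) token activations; weights: (seq, n_experts) per-token mix.

    Each token gets its own blend of the low-rank updates, so the effective
    weight matrix differs per token while W itself stays frozen.
    """
    base = x @ W.T
    # Each expert's low-rank update applied to every token: (n_experts, seq, d)
    delta = np.einsum("edr,erk,sk->esd", B, A, x)
    # Blend the expert updates with each token's mixing weights.
    return base + np.einsum("se,esd->sd", weights, delta)

x = rng.normal(size=(seq, d))
w = rng.random(size=(seq, n_experts))
w /= w.sum(axis=1, keepdims=True)  # normalize per-token mixing weights
y = token_level_lora(x, w)
print(y.shape)  # (5, 16)
```

For each token `s`, this is equivalent to applying the effective matrix `W + Σ_e w[s,e] · B[e] @ A[e]`; the `einsum` form just batches that over the sequence.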
Alternatives and similar repositories for LoRA-TLE
Users interested in LoRA-TLE are comparing it to the libraries listed below.
- A repository for transformer critique learning and generation ☆89 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆78 · Updated last year
- ☆61 · Updated last year
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆84 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆75 · Updated 6 months ago
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆43 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated 11 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated 2 weeks ago
- ☆97 · Updated 11 months ago
- ☆38 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 3 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆202 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆123 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆43 · Updated last year
- ☆72 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆117 · Updated 6 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆95 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆142 · Updated 7 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 8 months ago
- ☆34 · Updated 11 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆100 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆81 · Updated 9 months ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 8 months ago