jb-01 / LoRA-TLE
Token-level adaptation of LoRA matrices for downstream task generalization.
☆14 · Updated last year
Alternatives and similar repositories for LoRA-TLE
Users interested in LoRA-TLE are comparing it to the repositories listed below.
- A repository for transformer critique learning and generation ☆89 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Updated 2 years ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆77 · Updated last year
- ☆129 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 6 months ago
- Public Inflection Benchmarks ☆68 · Updated last year
- ☆100 · Updated last year
- ☆75 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- Evaluating LLMs with fewer examples ☆170 · Updated last year
- A package to generate summaries of long-form text and evaluate the coherence of these summaries. Official package for our ICLR 2024 paper… ☆128 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆125 · Updated 10 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 3 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- ☆159 · Updated 2 years ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆45 · Updated 2 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- A repository to perform self-instruct with a model on the HF Hub ☆32 · Updated 2 years ago
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 10 months ago