HazyResearch / TART
TART: A plug-and-play Transformer module for task-agnostic reasoning
☆ 200 · Updated 2 years ago
Alternatives and similar repositories for TART
Users interested in TART are comparing it to the repositories listed below.
- Code repository for the c-BTM paper · ☆ 107 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆ 225 · Updated last year
- A repository for transformer critique learning and generation · ☆ 90 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆ 94 · Updated 2 years ago
- ☆ 159 · Updated 2 years ago
- Scripts for generating synthetic finetuning data for reducing sycophancy · ☆ 115 · Updated 2 years ago
- ☆ 172 · Updated 2 years ago
- ☆ 135 · Updated last year
- ☆ 94 · Updated 8 months ago
- Reverse Instructions to generate instruction tuning data with corpus examples · ☆ 215 · Updated last year
- Simple next-token-prediction for RLHF · ☆ 227 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆ 116 · Updated 2 months ago
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆ 107 · Updated last year
- ☆ 295 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions · ☆ 70 · Updated 2 years ago
- ☆ 96 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models · ☆ 165 · Updated 4 months ago
- The repo for the paper "Shepherd: A Critic for Language Model Generation" · ☆ 219 · Updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" · ☆ 88 · Updated last year
- Exploring finetuning of public checkpoints on filtered 8K sequences from the Pile · ☆ 116 · Updated 2 years ago
- Scaling Data-Constrained Language Models · ☆ 341 · Updated 2 months ago
- Learning to Compress Prompts with Gist Tokens (https://arxiv.org/abs/2304.08467) · ☆ 291 · Updated 7 months ago
- ☆ 180 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs · ☆ 199 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) · ☆ 137 · Updated last year
- ☆ 150 · Updated last year
- ☆ 127 · Updated 11 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆ 206 · Updated last year
- ☆ 69 · Updated last year
- Official implementation of InstructZero, the first framework to optimize bad prompts for ChatGPT (API LLMs) and finally obtain good prompts… · ☆ 196 · Updated last year