HazyResearch / TART
TART: A plug-and-play Transformer module for task-agnostic reasoning
☆196 · Updated last year
Alternatives and similar repositories for TART
Users interested in TART are comparing it to the libraries listed below.
- Code repository for the c-BTM paper ☆106 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- ☆95 · Updated last year
- ☆159 · Updated 2 years ago
- ☆94 · Updated 5 months ago
- Reverse Instructions to generate instruction-tuning data with corpus examples ☆211 · Updated last year
- ☆178 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆134 · Updated last year
- Simple next-token-prediction for RLHF ☆226 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆220 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations ☆204 · Updated this week
- ☆412 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- ☆133 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆492 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆114 · Updated 2 years ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆201 · Updated 3 weeks ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 8 months ago
- ☆68 · Updated 9 months ago
- Reimplementation of the task-generation part of the Alpaca paper ☆118 · Updated 2 years ago
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆108 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆285 · Updated 3 months ago