HazyResearch / TART
TART: A plug-and-play Transformer module for task-agnostic reasoning
☆201 · Updated 2 years ago
Alternatives and similar repositories for TART
Users interested in TART are comparing it to the libraries listed below.
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆224 · Updated 2 weeks ago
- A repository for transformer critique learning and generation ☆90 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆215 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- ☆134 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- ☆69 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 3 months ago
- ☆180 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated 2 years ago
- ☆127 · Updated last year
- ☆159 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 3 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- ☆96 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated 2 months ago
- Self-Alignment with Principle-Following Reward Models ☆166 · Updated 2 weeks ago
- ☆173 · Updated 2 years ago
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark ☆115 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆116 · Updated 2 years ago
- Pre-training code for the Amber 7B LLM ☆168 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆89 · Updated last year
- ☆95 · Updated 9 months ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆295 · Updated 7 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆178 · Updated last year