facebookresearch / LIGHT
LIGHT is a platform for text-situated dialogue research. We originally hosted LIGHT as a live game with dialogue models in a grounded setting. This repo contains all of the code needed to get the LIGHT game running, as well as reproducible code for the research projects that shaped LIGHT along the way.
☆69 · Updated last year
Alternatives and similar repositories for LIGHT:
Users interested in LIGHT are comparing it to the libraries listed below.
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆94 · Updated 2 years ago
- Advanced Reasoning Benchmark Dataset for LLMs · ☆45 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions · ☆69 · Updated 2 years ago
- ☆27 · Updated 3 weeks ago
- ☆44 · Updated 5 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following · ☆79 · Updated 7 months ago
- ☆120 · Updated 6 months ago
- ☆48 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆43 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format · ☆27 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models · ☆118 · Updated last year
- ☆32 · Updated last year
- Implementation of "SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models" · ☆27 · Updated 2 months ago
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L… · ☆43 · Updated last year
- Evaluating LLMs with CommonGen-Lite · ☆89 · Updated last year
- Multi-Domain Expert Learning · ☆67 · Updated last year
- ☆37 · Updated last year
- For experiments involving InstructGPT. Currently used for documenting open research questions. · ☆71 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆42 · Updated 5 months ago
- ☆24 · Updated last year
- [NeurIPS 2023] Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer · ☆42 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers · ☆126 · Updated last year
- Based on the Tree of Thoughts paper · ☆48 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆114 · Updated 7 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile · ☆115 · Updated 2 years ago
- Evaluating tool-augmented LLMs in conversational settings · ☆84 · Updated 10 months ago
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations · ☆12 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit · ☆63 · Updated last year
- Developing tools to automatically analyze datasets · ☆74 · Updated 5 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. · ☆73 · Updated 6 months ago