elicit / fave-dataset
Paper dataset for "Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers"
☆14 · Updated 10 months ago
Alternatives and similar repositories for fave-dataset
Users interested in fave-dataset are comparing it to the repositories listed below.
- A repository for research on medium-sized language models ☆78 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 9 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Training hybrid models for dummies ☆25 · Updated 7 months ago
- Aioli: A unified optimization framework for language model data mixing ☆26 · Updated 7 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated this week
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated last year
- ☆34 · Updated last month
- ☆38 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆71 · Updated 4 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Latent Large Language Models ☆18 · Updated last year
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆40 · Updated last month
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- ☆23 · Updated 3 weeks ago
- ☆35 · Updated 2 years ago
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆25 · Updated 3 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆56 · Updated 3 weeks ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 4 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆116 · Updated 3 weeks ago
- ☆26 · Updated last year
- ☆29 · Updated 3 weeks ago
- ☆54 · Updated 9 months ago