Eureka6174 / LearnNLPlan
Learning to Program with Natural Language
☆6 · Updated last year
Alternatives and similar repositories for LearnNLPlan
Users interested in LearnNLPlan are comparing it to the libraries listed below.
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 7 months ago
- ☆33 · Updated 2 years ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Track the progress of LLM context utilisation ☆55 · Updated 3 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- ☆35 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆64 · Updated 2 years ago
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated 11 months ago
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 2 weeks ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- ☆74 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated 2 years ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- ☆29 · Updated last week
- Based on the Tree of Thoughts paper ☆48 · Updated last year
- [NeurIPS 2023] PyTorch code for Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind ☆66 · Updated last year
- Official repo of Rephrase-and-Respond: data, code, and evaluation ☆103 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 2 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆223 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- Backtracing: Retrieving the Cause of the Query, EACL 2024 Long Paper, Findings ☆89 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆54 · Updated last week