Glavin001 / Data2AITextbook
Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs)
★25 · Updated last year
Alternatives and similar repositories for Data2AITextbook
Users interested in Data2AITextbook are comparing it to the libraries listed below.
- GPT-4 Level Conversational QA Trained In a Few Hours ★64 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ★38 · Updated last year
- entropix-style sampling + GUI ★27 · Updated 10 months ago
- ★20 · Updated last year
- LLM reads a paper and produces a working prototype ★57 · Updated 5 months ago
- ★28 · Updated 5 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ★34 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ★70 · Updated 2 years ago
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ★43 · Updated 11 months ago
- ★54 · Updated 10 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ★65 · Updated last year
- ★85 · Updated 2 years ago
- Using multiple LLMs for ensemble forecasting ★16 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ★55 · Updated 7 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ★170 · Updated last year
- Scale your RAG pipeline using Ragswift: a scalable centralized embeddings management platform ★38 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ★45 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ★110 · Updated 9 months ago
- Generate high-quality textual or multi-modal datasets with agents ★18 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ★40 · Updated 10 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ★92 · Updated 7 months ago
- EMNLP 2024, "Re-reading improves reasoning in large language models": simply repeating the question to get bidirectional understanding for… ★27 · Updated 9 months ago
- ★35 · Updated 2 years ago
- NeurIPS 2023, "Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer" ★43 · Updated last year
- Let's create synthetic textbooks together :) ★75 · Updated last year
- The Next Generation Multi-Modality Superintelligence ★70 · Updated last year
- SCREWS: A Modular Framework for Reasoning with Revisions ★27 · Updated last year
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ★49 · Updated last year
- ★51 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ★64 · Updated 2 years ago