Glavin001 / Data2AITextbook
Automatically convert unstructured data into a high-quality "textbook" format, optimized for fine-tuning Large Language Models (LLMs)
★25 · Updated last year
Related projects
Alternatives and complementary repositories for Data2AITextbook
- LLM reads a paper and produces a working prototype ★36 · Updated 2 weeks ago
- ★41 · Updated 2 weeks ago
- SCREWS: A Modular Framework for Reasoning with Revisions ★26 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ★33 · Updated 8 months ago
- Using multiple LLMs for ensemble forecasting ★16 · Updated 10 months ago
- Data preparation code for the CrystalCoder 7B LLM ★42 · Updated 6 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ★83 · Updated 2 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ★41 · Updated 8 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all Large Language Models ★69 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ★33 · Updated last month
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ★48 · Updated 4 months ago
- entropix-style sampling + GUI ★25 · Updated 3 weeks ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ★21 · Updated last week
- Fast approximate inference on a single GPU with sparsity-aware offloading ★38 · Updated 10 months ago
- ★20 · Updated last year
- Official code for the ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ★42 · Updated last year
- GPT-4-Level Conversational QA Trained in a Few Hours ★55 · Updated 3 months ago
- ★24 · Updated last year
- Set of scripts to fine-tune LLMs ★36 · Updated 7 months ago
- ★53 · Updated 5 months ago
- ★28 · Updated 5 months ago
- ★35 · Updated 3 weeks ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ★20 · Updated 9 months ago
- Demonstration that fine-tuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ★63 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ★42 · Updated 2 weeks ago
- ★41 · Updated 2 months ago
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ★68 · Updated last month
- ★33 · Updated 6 months ago
- ★73 · Updated 10 months ago
- ★45 · Updated 2 months ago