Glavin001 / Data2AITextbook
Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs)
☆26 · Updated last year
Alternatives and similar repositories for Data2AITextbook:
Users interested in Data2AITextbook are comparing it to the libraries listed below.
- Using multiple LLMs for ensemble forecasting · ☆16 · Updated last year
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… · ☆22 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite · ☆33 · Updated 10 months ago
- Universal text classifier for generative models · ☆22 · Updated 6 months ago
- ☆48 · Updated 2 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions · ☆27 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM · ☆44 · Updated 8 months ago
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… · ☆48 · Updated 6 months ago
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) · ☆74 · Updated 3 months ago
- entropix-style sampling + GUI · ☆25 · Updated 3 months ago
- ☆20 · Updated last year
- A collection of notebooks for the Hugging Face blog series (https://huggingface.co/blog) · ☆42 · Updated 5 months ago
- An LLM reads a paper and produces a working prototype · ☆48 · Updated last month
- Fast approximate inference on a single GPU with sparsity-aware offloading · ☆38 · Updated last year
- Using open-source LLMs to build synthetic datasets for direct preference optimization · ☆52 · Updated 11 months ago
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit · ☆63 · Updated last year
- Pre-training code for the CrystalCoder 7B LLM · ☆55 · Updated 8 months ago
- LLMs as Collaboratively Edited Knowledge Bases · ☆43 · Updated 11 months ago
- ☆31 · Updated 7 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all Large Language Models · ☆69 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" · ☆89 · Updated last week
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks · ☆31 · Updated 8 months ago
- ☆24 · Updated last year
- Code for the paper "PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles" · ☆20 · Updated last month
- Data preparation code for the Amber 7B LLM · ☆84 · Updated 8 months ago
- Minimal, clean code implementation of RAG with MLX using GGUF model weights · ☆46 · Updated 9 months ago
- ☆49 · Updated 8 months ago
- Scale your RAG pipeline using Ragswift: a scalable, centralized embeddings management platform · ☆37 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data · ☆29 · Updated 4 months ago
- The Next Generation Multi-Modality Superintelligence · ☆70 · Updated 4 months ago