Agora-Lab-AI / The-Distiller
Generate high-quality textual or multi-modal datasets with agents
☆18 · Updated last year
Alternatives and similar repositories for The-Distiller:
Users interested in The-Distiller are comparing it to the repositories listed below.
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- entropix-style sampling + GUI ☆26 · Updated 6 months ago
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆26 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 5 months ago
- An LLM reads a paper and produces a working prototype ☆55 · Updated 3 weeks ago
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 8 months ago
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ☆43 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆38 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- ☆20 · Updated last year
- ☆22 · Updated last year
- 5X faster, 60% less memory QLoRA finetuning ☆21 · Updated 11 months ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆64 · Updated last year
- Merge LLMs that are split into parts ☆26 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- ☆63 · Updated last month
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 3 months ago
- ☆37 · Updated 2 years ago
- ☆73 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- ☆48 · Updated 6 months ago