iantbutler01 / ditty
A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem.
☆16 · Updated 8 months ago
Alternatives and similar repositories for ditty
Users interested in ditty are comparing it to the libraries listed below.
- Training hybrid models for dummies. ☆25 · Updated 6 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated 11 months ago
- Latent Large Language Models ☆18 · Updated 10 months ago
- ☆44 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 8 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 10 months ago
- ☆35 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- Submission to the Inverse Scaling Prize ☆23 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 5 months ago
- ☆28 · Updated 10 months ago
- Very minimal (and stateless) agent framework ☆44 · Updated 6 months ago
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset ☆16 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- Implementation of Spectral State Space Models ☆16 · Updated last year
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆32 · Updated 2 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated 2 weeks ago
- Efficiently computing & storing token n-grams from large corpora ☆24 · Updated 9 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! ☆41 · Updated last year
- GitHub repo for Peifeng's internship project ☆13 · Updated last year
- A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆22 · Updated 9 months ago
- Understanding how features learned by neural networks evolve throughout training ☆36 · Updated 8 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated last year