iantbutler01 / ditty
A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem.
☆16 · Updated 4 months ago
Alternatives and similar repositories for ditty:
Users interested in ditty are comparing it to the libraries listed below.
- Implementation of Spectral State Space Models ☆16 · Updated last year
- Training hybrid models for dummies. ☆20 · Updated 2 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 4 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Latent Large Language Models ☆17 · Updated 6 months ago
- ☆15 · Updated 5 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 8 months ago
- RWKV model implementation ☆37 · Updated last year
- A new way to generate large quantities of high quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆22 · Updated 5 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆28 · Updated last week
- Minimum Description Length probing for neural network representations ☆19 · Updated last month
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- GitHub repo for Peifeng's internship project ☆13 · Updated last year
- Generate interleaved text and image content in a structured format you can pass directly to downstream APIs. ☆27 · Updated 5 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 9 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 8 months ago
- ☆31 · Updated 9 months ago
- Submission to the Inverse Scaling Prize ☆23 · Updated last year
- ☆21 · Updated 3 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆25 · Updated 4 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆16 · Updated last year
- Rust bindings for CTranslate2 ☆14 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆32 · Updated 5 months ago