Birch-san / booru-embed
[WIP] Transformer to embed Danbooru labelsets
☆13 · Updated last year
Alternatives and similar repositories for booru-embed
Users interested in booru-embed are comparing it to the libraries listed below.
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- ☆49 · Updated last year
- ☆38 · Updated last year
- ☆63 · Updated 11 months ago
- QLoRA with enhanced multi-GPU support ☆37 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated last year
- Latent Large Language Models ☆18 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆38 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- ☆22 · Updated last year
- ☆27 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Implementation of https://arxiv.org/pdf/2312.09299 ☆21 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- ☆54 · Updated 9 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- Training hybrid models for dummies. ☆25 · Updated 7 months ago
- Lego for GRPO ☆28 · Updated 2 months ago
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 11 months ago
- An introduction to LLM sampling ☆79 · Updated 8 months ago
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago