donaldafeith / Pytorch_Merge
Merge LLMs that are split into parts
☆26 · Updated last year
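The repository's stated purpose is recombining a model checkpoint that was saved as several pieces into a single file. Below is a minimal sketch of what that typically involves, assuming each piece is a plain state-dict fragment written with torch.save; the shard file pattern and the merge_shards helper are illustrative assumptions, not Pytorch_Merge's actual API.

```python
# Minimal sketch: merge a checkpoint saved as several state-dict
# shards back into one file. Assumes each shard is a flat dict of
# tensors (as in Hugging Face-style sharded checkpoints); the file
# names below are hypothetical, not Pytorch_Merge's actual layout.
import glob
import torch

def merge_shards(pattern: str, out_path: str) -> None:
    merged = {}
    for shard_path in sorted(glob.glob(pattern)):
        shard = torch.load(shard_path, map_location="cpu")
        merged.update(shard)  # keys are disjoint across shards
    torch.save(merged, out_path)

merge_shards("pytorch_model-*.bin", "pytorch_model.bin")
```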
Alternatives and similar repositories for Pytorch_Merge
Users interested in Pytorch_Merge are comparing it to the libraries listed below.
- ☆74 · Updated last year
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated last year
- ☆27 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Image-diffusion block-merging technique applied to transformer-based language models (see the sketch after this list). ☆54 · Updated 2 years ago
- ☆52 · Updated 8 months ago
- ☆35 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆63 · Updated 9 months ago
- ☆22 · Updated last year
- ☆49 · Updated last year
- ☆23 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 10 months ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆57 · Updated 3 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 5 months ago
- ☆32 · Updated 2 years ago
- Experiments with generating opensource language model assistants ☆97 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆106 · Updated 7 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- ☆20 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated last month
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
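One entry above applies the image-diffusion block-merging technique to language models: instead of one global interpolation ratio between two checkpoints, each transformer block gets its own ratio. A minimal sketch of the general idea, assuming two models with identical state-dict layouts and floating-point weights; the key pattern and the block_merge helper are illustrative assumptions, not the listed repository's code.

```python
# Minimal sketch of block-wise weight merging: per-block interpolation
# ratios between two state dicts, as popularized by per-block merges
# of image diffusion models. Assumes floating-point tensors and
# Llama-style key names ("model.layers.<n>....") for illustration.
import re
import torch

def block_merge(state_a, state_b, block_ratios, default_ratio=0.5):
    merged = {}
    for key, tensor_a in state_a.items():
        tensor_b = state_b[key]
        match = re.search(r"layers\.(\d+)\.", key)
        ratio = (block_ratios.get(int(match.group(1)), default_ratio)
                 if match else default_ratio)
        merged[key] = (1.0 - ratio) * tensor_a + ratio * tensor_b
    return merged
```

Passing, for example, block_ratios={0: 0.0, 31: 1.0} keeps the first block from model A and the last block from model B, with default_ratio governing every block in between.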