Manuel030 / alpaca-opt
Yet another LLM
☆10 · Updated 2 years ago
Alternatives and similar repositories for alpaca-opt
Users interested in alpaca-opt are comparing it to the repositories listed below.
- ☆40 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks · ☆31 · Updated last year
- Experimental sampler to make LLMs more creative · ☆31 · Updated last year
- The Next Generation Multi-Modality Superintelligence · ☆70 · Updated 10 months ago
- ☆74 · Updated last year
- ☆63 · Updated 10 months ago
- ☆22 · Updated last year
- GPT-2 small trained on phi-like data · ☆66 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia · ☆41 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… · ☆40 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. · ☆33 · Updated last year
- An OpenAI API-compatible LLM inference server based on ExLlamaV2. · ☆25 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with a LLaMA implementation. · ☆71 · Updated 2 years ago
- Merge LLMs that are split into parts · ☆27 · Updated this week
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models · ☆69 · Updated last year
- ☆15 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo search with PPO and/or DPO · ☆30 · Updated last week
- entropix-style sampling + GUI · ☆26 · Updated 8 months ago
- RWKV-7: Surpassing GPT · ☆94 · Updated 8 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ · ☆103 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. · ☆56 · Updated 10 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆28 · Updated 2 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models · ☆81 · Updated 2 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs · ☆110 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit · ☆63 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… · ☆168 · Updated last year
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" · ☆16 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆55 · Updated 5 months ago
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… · ☆12 · Updated last year
- QLoRA with Enhanced Multi-GPU Support · ☆37 · Updated last year