deep-diver / PingPong
Manage histories of LLM-applied applications.
☆91 · Updated 2 years ago
Alternatives and similar repositories for PingPong
Users interested in PingPong are comparing it to the libraries listed below.
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- ☆37 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 7 months ago
- Evolve LLM training instructions from English into any language. ☆119 · Updated 2 years ago
- 1-Click is all you need. ☆63 · Updated last year
- HuggingChat-like UI in Gradio ☆70 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Use OpenAI with HuggingChat by emulating the text_generation_inference_server ☆45 · Updated 2 years ago
- ☆32 · Updated last year
- 🎨 Imagine what Picasso could have done with AI. Self-host your StableDiffusion API. ☆50 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- Alpaca-LoRA implementation for Hugging Face using DeepSpeed and FullyShardedDataParallel ☆24 · Updated 2 years ago
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated 2 years ago
- Weekly visualization report of open LLM model performance based on 4 metrics. ☆86 · Updated 2 years ago
- QLoRA with enhanced multi-GPU support ☆37 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Efficient fine-tuning for ko-llm models ☆184 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆141 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- Official code for the ACL 2023 (short, Findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L… ☆45 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or other RLHF techniques, always keeping a data-first app… ☆170 · Updated last year
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated 2 years ago
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- hllama is a library that aims to provide a set of utility tools for large language models. ☆10 · Updated last year
- ☆37 · Updated last year