AUGMXNT / llm-experiments
Experiments w/ ChatGPT, LangChain, local LLMs
☆24 · Updated last year
Alternatives and similar repositories for llm-experiments:
Users who are interested in llm-experiments are comparing it to the libraries listed below.
- ☆26 · Updated 2 years ago
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆75 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models☆69 · Updated last year
- Merge LLMs that are split into parts☆26 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ☆51 · Updated last year
- ☆33 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat☆101 · Updated last year
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights to work like Stanford Alpaca (see the LoRA sketch after this list)☆50 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…)☆147 · Updated last year
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ☆64 · Updated last year
- Trace LLM calls (and others) and visualize them in WandB, as interactive SVG or using a streaming local webapp☆14 · Updated last month
- Reimplementation of the task generation part from the Alpaca paper☆119 · Updated last year
- Python examples using the bigcode/tiny_starcoder_py 159M model to generate code (see the generation sketch after this list)☆44 · Updated last year
- Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform☆37 · Updated last year
- Code for finetuning RedPajama-Chat-3B using LoRA☆13 · Updated last year
- ☆37 · Updated last year
- Sentence Embedding as a Service☆15 · Updated last year
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model☆40 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading☆38 · Updated last year
- DiffusionWithAutoscaler☆29 · Updated 11 months ago
- Training and Inference Notebooks for the RedPajama (OpenLlama) models☆18 · Updated last year
- Integrate an LLM copilot within your Keras model development workflow☆28 · Updated last year
- A Google Colab notebook for fine-tuning Alpaca-LoRA (within 3 hours on a 40GB A100 GPU)☆38 · Updated last year
- BIG: Back In the Game of Creative AI☆27 · Updated 2 years ago
- Image Diffusion block-merging technique applied to transformer-based language models☆54 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it saw in pre-training extends the model's context limit☆63 · Updated last year
- ☆24 · Updated last year
- Instruct-tune LLaMA on consumer hardware☆73 · Updated last year
- Adversarial Training and SFT for Bot Safety Models☆39 · Updated last year
- An OpenAI Completions API-compatible server for NLP transformers models☆64 · Updated last year
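Several of the repositories above (the single-4090 LLaMA trainer, alpaca-lora, the RedPajama LoRA finetuner) center on LoRA fine-tuning. Below is a minimal sketch of the common pattern using the Hugging Face peft library; the base model name and every hyperparameter are illustrative assumptions, not values taken from any listed project.

```python
# Minimal LoRA sketch (illustrative, not from any repository listed above).
# Assumes `transformers` and `peft` are installed; the model name and all
# hyperparameters are placeholder assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed base model

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)  # wraps the base model; only adapters train
model.print_trainable_parameters()      # typically well under 1% of total weights

# After training, the adapters can be folded back into the base weights,
# which is what the "merge LoRA weights" entries above refer to:
merged = model.merge_and_unload()
```

Merging removes the adapter indirection, so the result can be saved and served like an ordinary checkpoint.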
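For the tiny_starcoder_py entry, here is a hedged sketch of code generation with the standard transformers text-generation pipeline; the prompt and generation settings are assumptions chosen for illustration.

```python
# Minimal generation sketch for bigcode/tiny_starcoder_py (159M), using the
# standard transformers pipeline API. Prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="bigcode/tiny_starcoder_py")

prompt = "def fibonacci(n):"
result = generator(prompt, max_new_tokens=48, do_sample=False)
print(result[0]["generated_text"])  # prompt plus the model's completion
```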