pickaxeproject / llama2
Our Process for Llama2 Finetuning
☆16 · Updated last year
Alternatives and similar repositories for llama2:
Users interested in llama2 are comparing it to the repositories listed below.
- Large language model for mastering data analysis using pandas ☆47 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- ☆48 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆43 · Updated last year
- ☆12 · Updated last month
- ☆75 · Updated last year
- ☆30 · Updated 9 months ago
- Text to Python Objects via an LLM Function Call ☆57 · Updated last year
- ☆20 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 5 months ago
- Python examples using the bigcode/tiny_starcoder_py 159M model to generate code ☆44 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated last year
- Writing Blog Posts with Generative Feedback Loops! ☆47 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated 3 weeks ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- ☆24 · Updated last year
- A framework for evaluating function calls made by LLMs ☆37 · Updated 9 months ago
- ☆33 · Updated 2 years ago
- Conduct consumer interviews with synthetic focus groups using LLMs and LangChain ☆43 · Updated last year
- An LLM reads a paper and produces a working prototype ☆52 · Updated 2 weeks ago
- Streamlit app for recommending eval functions using prompt diffs ☆27 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Testing speed and accuracy of RAG with and without a Cross Encoder Reranker. ☆48 · Updated last year
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platforms like AWS Lambda. By Prithivi Da, PRs welc… ☆22 · Updated last year
- ☆73 · Updated last year
- Repository of the code base for the KT Generation process that we worked on at the Google Cloud and Searce GenAI Hackathon. ☆74 · Updated last year
- Minimal LLM scripts for 24GB VRAM GPUs: training, inference, whatever ☆38 · Updated last month
- Source code for the paper "Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints" ☆28 · Updated 2 years ago