togethercomputer / Llama-2-7B-32K-Instruct
☆84 · Updated last year
Alternatives and similar repositories for Llama-2-7B-32K-Instruct:
Users interested in Llama-2-7B-32K-Instruct are comparing it to the libraries listed below.
- ☆74 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆88 · Updated 10 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 9 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆98 · Updated 4 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆163 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆100 · Updated last month
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆74 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆66 · Updated 3 months ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆63 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆82 · Updated 3 weeks ago
- A set of utilities for running few-shot prompting experiments on large language models ☆116 · Updated last year
- HuggingChat-like UI in Gradio ☆69 · Updated last year
- Weekly visualization report of Open LLM model performance based on 4 metrics. ☆86 · Updated last year
- Track the progress of LLM context utilisation ☆53 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 4 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated 8 months ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- Based on the Tree of Thoughts paper ☆46 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- ☆38 · Updated last year
- Here is a Google Colab Notebook for fine-tuning Alpaca Lora (within 3 hours with a 40GB A100 GPU) ☆38 · Updated last year
- Learning to Program with Natural Language ☆4 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆117 · Updated 6 months ago
- ☆51 · Updated 6 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆115 · Updated last year
- ☆37 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- Data preparation code for Amber 7B LLM ☆84 · Updated 8 months ago