Open-Assistant / oasst-model-eval
Evaluation of the Open-Assistant language models
☆29 · Updated 4 months ago
Alternatives and similar repositories for oasst-model-eval
Users interested in oasst-model-eval are comparing it to the libraries listed below.
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated 2 years ago
- ☆457 · Updated 2 years ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- batched loras ☆347 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- ☆415 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆202 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 6 months ago
- ☆95 · Updated 2 years ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 2 months ago
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆150 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆293 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆357 · Updated 2 years ago
- A bagel, with everything. ☆325 · Updated last year
- React app implementing OpenAI and Google APIs to re-create behavior of the toolformer paper. ☆233 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · Updated 2 years ago
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆371 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances with system messages written by GPT-4. Covering vario… ☆164 · Updated 2 years ago
- Minimal code to train a Large Language Model (LLM). ☆172 · Updated 3 years ago