sanjeevanahilan / nanoChatGPT
A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick
☆289 · Updated last year
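The description above references the Gumbel-Softmax trick, which lets reward gradients flow through discrete token samples during RLHF. Below is a minimal PyTorch sketch of straight-through Gumbel-Softmax sampling; the function and variable names are illustrative and not taken from the nanoChatGPT code.

```python
import torch
import torch.nn.functional as F

def sample_token_differentiable(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-through Gumbel-Softmax sample over the vocabulary.

    Forward pass: a hard one-hot token. Backward pass: gradients flow
    through the soft relaxation, so a reward computed on sampled tokens
    can update the generator's parameters.
    """
    return F.gumbel_softmax(logits, tau=tau, hard=True)

# Illustrative usage with nanoGPT's padded GPT-2 vocabulary size.
vocab_size = 50304
logits = torch.randn(1, vocab_size, requires_grad=True)
one_hot = sample_token_differentiable(logits)  # (1, vocab_size), one-hot
token_id = one_hot.argmax(dim=-1)              # discrete id for decoding
```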
Alternatives and similar repositories for nanoChatGPT
Users interested in nanoChatGPT are comparing it to the libraries listed below.
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- ☆457 · Updated last year
- A repository for research on medium-sized language models. ☆495 · Updated 3 weeks ago
- JAX implementation of the Llama 2 model. ☆217 · Updated last year
- Implementation of the Recurrent Memory Transformer (NeurIPS 2022) in PyTorch. ☆408 · Updated 4 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript. ☆581 · Updated 11 months ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF). ☆173 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile. ☆114 · Updated 2 years ago
- Experiments with generating open-source language model assistants. ☆97 · Updated 2 years ago
- Simple next-token prediction for RLHF. ☆226 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs. ☆188 · Updated 9 months ago
- Fast & simple repository for pre-training and fine-tuning T5-style models. ☆1,004 · Updated 9 months ago
- A bagel, with everything. ☆320 · Updated last year
- A minimal example of aligning language models with RLHF, similar to ChatGPT. ☆218 · Updated last year
- Train very large language models in JAX. ☆204 · Updated last year
- Scaling Data-Constrained Language Models. ☆334 · Updated 8 months ago
- Batched LoRAs. ☆342 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT. ☆212 · Updated 9 months ago
- Inference code for LLaMA models in JAX. ☆117 · Updated last year
- Language Modeling with the H3 State Space Model. ☆518 · Updated last year
- The GeoV model is a large language model designed by Georges Harik, using Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- Reimplementation of the task-generation part of the Alpaca paper. ☆118 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers. ☆421 · Updated last year
- Minimal code to train a Large Language Model (LLM). ☆168 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks. ☆208 · Updated last year
- An interactive exploration of Transformer programming. ☆264 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning. ☆196 · Updated last year
- ☆92 · Updated last year
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation. ☆370 · Updated last year