sanjeevanahilan / nanoChatGPT
A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick
☆289 · Updated last year
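The repository's description names the Gumbel-Softmax trick, which relaxes discrete token sampling into a differentiable operation so gradients can flow through the sampling step during RLHF training. A minimal sketch of the trick itself (not nanoChatGPT's actual code; `gumbel_softmax` and its signature here are illustrative assumptions):

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Draw a differentiable 'soft' one-hot sample from categorical logits.

    Each logit is perturbed with Gumbel(0, 1) noise, then a temperature-
    scaled softmax turns the perturbed logits into a probability vector.
    As tau -> 0 the output approaches a hard one-hot sample.
    """
    # Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1),
    # clamped away from 0 and 1 to avoid log(0).
    gumbels = []
    for _ in logits:
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
        gumbels.append(-math.log(-math.log(u)))
    scaled = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed, scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sample = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
```

In a real training loop the same idea is typically applied with an autograd framework (e.g. `torch.nn.functional.gumbel_softmax`) so the soft sample can be backpropagated through.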
Alternatives and similar repositories for nanoChatGPT:
Users interested in nanoChatGPT are comparing it to the repositories listed below.
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated last year
- ☆456 · Updated last year
- ☆94 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆115 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆207 · Updated 6 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆992 · Updated 6 months ago
- Simple next-token-prediction for RLHF ☆222 · Updated last year
- ☆92 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆405 · Updated last month
- Scaling Data-Constrained Language Models ☆333 · Updated 5 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆342 · Updated 6 months ago
- Language Modeling with the H3 State Space Model ☆516 · Updated last year
- ☆412 · Updated last year
- A minimal example of aligning language models with RLHF, similar to ChatGPT ☆217 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆630 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆222 · Updated 3 weeks ago
- Inference code for Mistral and Mixtral hacked up into the original Llama implementation ☆371 · Updated last year
- Train very large language models in JAX. ☆203 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆171 · Updated last year
- Code repository for the c-BTM paper ☆105 · Updated last year
- An interactive exploration of Transformer programming. ☆258 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆566 · Updated 7 months ago
- ☆160 · Updated last year