RobertRiachi / nanoPALM
☆143 · Updated last year
Alternatives and similar repositories for nanoPALM:
Users interested in nanoPALM are comparing it to the libraries listed below.
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi…☆342 · Updated 6 months ago
- Helpers and such for working with Lambda Cloud☆51 · Updated last year
- ☆92 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).…☆121 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes.☆82 · Updated last year
- An interactive exploration of Transformer programming.☆258 · Updated last year
- ☆412 · Updated last year
- ☆208 · Updated 7 months ago
- Simple Transformer in Jax☆136 · Updated 7 months ago
- Drive a browser with Cohere☆72 · Updated last year
- A puzzle to learn about prompting☆124 · Updated last year
- git extension for {collaborative, communal, continual} model development☆207 · Updated 3 months ago
- Simple embedding -> text model trained on a small subset of Wikipedia sentences.☆153 · Updated last year
- Language Modeling with the H3 State Space Model☆516 · Updated last year
- A really tiny autograd engine☆89 · Updated 10 months ago
- Functional local implementations of main model parallelism approaches☆95 · Updated last year
- a small code base for training large models☆286 · Updated 2 months ago
- Command-line script for running inference with models such as MPT-7B-Chat☆101 · Updated last year
- ☆153 · Updated last year
- Full finetuning of large language models without large memory requirements☆93 · Updated last year
- Automatic gradient descent☆207 · Updated last year
- Run GGML models with Kubernetes.☆174 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user…☆164 · Updated this week
- [Added T5 support to TRLX] A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)☆47 · Updated 2 years ago
- Train very large language models in Jax.☆202 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning☆195 · Updated last year
- Prompt programming with FMs.☆440 · Updated 6 months ago
- Simplex Random Feature attention, in PyTorch☆73 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference…☆202 · Updated last month
- Fast bare-bones BPE for modern tokenizer training☆146 · Updated 3 months ago