hitorilabs / navi
compute, storage, and networking infra at home
☆64 · Updated last year
Alternatives and similar repositories for navi
Users interested in navi are comparing it to the repositories listed below.
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated 2 years ago
- Simple Transformer in Jax ☆139 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Simple embedding -> text model trained on a small subset of Wikipedia sentences ☆157 · Updated 2 years ago
- A rough implementation of nanoGPT trained on a dataset of 30,000 unique Twitter usernames ☆24 · Updated last year
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings ☆63 · Updated last year
- papers.day ☆91 · Updated last year
- A highly efficient compression algorithm for the N1 implant (Neuralink's compression challenge) ☆46 · Updated last year
- A really tiny autograd engine ☆95 · Updated 4 months ago
- ☆62 · Updated last year
- Stream of my favorite papers and links ☆43 · Updated 6 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- ☆22 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- ☆40 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆83 · Updated 2 years ago
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- ☆19 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- Various handy scripts to quickly set up new Linux and Windows sandboxes, containers, and WSL ☆40 · Updated last week
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus ☆58 · Updated last year
- An introduction to LLM sampling ☆79 · Updated 9 months ago
- ☆46 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- ☆28 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated 2 weeks ago
- ☆49 · Updated last year
- ☆96 · Updated last year