yacineMTB / just-large-models
Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip.
☆44 · Updated last year
Related projects
Alternatives and complementary repositories for just-large-models
- Simple Transformer in JAX ☆119 · Updated 5 months ago
- ☆57 · Updated 11 months ago
- look how they massacred my boy ☆58 · Updated last month
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… ☆23 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention (see the sketch after this list) ☆118 · Updated 10 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆81 · Updated last year
- Stream of my favorite papers and links ☆36 · Updated 2 months ago
- A really tiny autograd engine ☆87 · Updated 7 months ago
- ☆22 · Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆60 · Updated 6 months ago
- papers.day ☆79 · Updated 11 months ago
- ☆36 · Updated 3 months ago
- ☆20 · Updated 3 weeks ago
- inference code for mixtral-8x7b-32kseqlen ☆98 · Updated 11 months ago
- Simplex Random Feature attention, in PyTorch ☆71 · Updated last year
- ☆49 · Updated 8 months ago
- A repository of prompts and Python scripts for intelligent transformation of raw text into diverse formats. ☆29 · Updated last year
- ☆48 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 2 months ago
- An introduction to LLM Sampling ☆65 · Updated 2 weeks ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆56 · Updated 2 weeks ago
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated 10 months ago
- ☆27 · Updated 4 months ago
- ☆74 · Updated 3 weeks ago
- Verbosity control for AI agents ☆59 · Updated 6 months ago
- A rough implementation of nanoGPT trained on a dataset of 30,000 unique Twitter usernames ☆26 · Updated 7 months ago
- Simple embedding -> text model trained on a small subset of Wikipedia sentences. ☆152 · Updated last year
- Synthetic data derived by templating, few-shot prompting, transformations on public domain corpora, and Monte Carlo tree search. ☆22 · Updated last month
- ☆104 · Updated 8 months ago
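
The Self-Extend entry above names a concrete mechanism: remapping relative positions via grouped attention so a pretrained model can attend beyond its training context window. Below is a minimal sketch of that position remapping, based on the general idea from the Self-Extend paper, not on the linked repo's code; `group_size` and `neighbor_window` are illustrative parameter names, not that project's API.

```python
# A minimal sketch of Self-Extend-style grouped-position remapping.
# Assumption: names and defaults here are illustrative, not the repo's.
import torch

def self_extend_rel_positions(seq_len: int, group_size: int = 8,
                              neighbor_window: int = 512) -> torch.Tensor:
    """Remapped (seq_len, seq_len) relative-position matrix for causal attention.

    Token pairs closer than `neighbor_window` keep their exact relative
    distance; more distant pairs fall back to a coarser floor-divided
    ("grouped") distance, so no relative position exceeds the range the
    model saw during pretraining.
    """
    q = torch.arange(seq_len).unsqueeze(1)   # query positions (column vector)
    k = torch.arange(seq_len).unsqueeze(0)   # key positions (row vector)
    rel = q - k                              # exact relative distances

    # Coarse distance, shifted so the two regimes line up at the window edge.
    grouped = q // group_size - k // group_size
    grouped = grouped + (neighbor_window - neighbor_window // group_size)

    # Exact positions inside the neighbor window, grouped positions outside.
    return torch.where(rel < neighbor_window, rel, grouped)
```

These remapped indices would then feed whatever relative encoding the model uses (e.g. RoPE angles computed from the remapped index rather than the raw token index), which is what lets the context grow without finetuning.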