austinsilveria / tricksy
Fast approximate inference on a single GPU with sparsity-aware offloading
☆38 · Updated last year
Alternatives and similar repositories for tricksy:
Users interested in tricksy are comparing it to the repositories listed below.
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated 11 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆69 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 10 months ago
- Scripts to create your own MoE models using MLX ☆86 · Updated 11 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆21 · Updated 2 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated 9 months ago
- Video + code lecture on building nanoGPT from scratch ☆65 · Updated 8 months ago
- ☆48 · Updated 3 months ago
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 5 months ago
- entropix-style sampling + GUI ☆25 · Updated 3 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- ☆65 · Updated 8 months ago
- ☆52 · Updated 8 months ago
- Modified Stanford Alpaca trainer for training Replit's code model ☆40 · Updated last year
- GPT-4-level conversational QA trained in a few hours ☆58 · Updated 5 months ago
- ☆45 · Updated last week
- ☆20 · Updated last year
- ☆24 · Updated last year
- Latent Large Language Models