austinsilveria / tricksy
Fast approximate inference on a single GPU with sparsity-aware offloading
☆39 · Updated last year
Alternatives and similar repositories for tricksy
Users interested in tricksy are comparing it to the libraries listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- An implementation of Self-Extend to expand the context window via grouped attention ☆119 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- ☆86 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 10 months ago
- ☆55 · Updated last year
- ☆68 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- ☆136 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated last month
- ☆74 · Updated 2 years ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆66 · Updated last year
- Entropix-style sampling + GUI ☆27 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 7 months ago
- ☆117 · Updated 11 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆104 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last month
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 2 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆100 · Updated 6 months ago
- ☆51 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year