kolinko / effort
An implementation of bucketMul LLM inference
☆217 · Updated 10 months ago
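For orientation, here is a minimal NumPy sketch of the general idea behind an effort-controlled matrix-vector product: keep only the largest |W_ij * x_j| products and skip the rest. This is an illustrative assumption, not the repository's actual bucketed data layout or kernels; the function name `approx_matvec` and the `effort` parameter are hypothetical.

```python
import numpy as np

def approx_matvec(W, x, effort=0.25):
    """Approximate W @ x by computing only the largest |W_ij * x_j| products.

    Simplified illustration only (hypothetical helper): the real project
    reportedly pre-sorts weights into buckets at load time so the selection
    below does not require a full pass over W at inference time.
    """
    # Magnitude of every candidate product.
    scores = np.abs(W) * np.abs(x)[None, :]
    # Keep the top `effort` fraction of products, skip the rest.
    k = max(1, int(effort * W.size))
    keep = np.argpartition(scores.ravel(), -k)[-k:]
    mask = np.zeros(W.size, dtype=bool)
    mask[keep] = True
    return (W * mask.reshape(W.shape)) @ x

# Example: skip ~75% of the multiplications and check the relative error.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
print(np.linalg.norm(approx_matvec(W, x, effort=0.25) - W @ x) / np.linalg.norm(W @ x))
```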
Alternatives and similar repositories for effort:
Users interested in effort are comparing it to the libraries listed below.
- PyTorch script hot swap: Change code without unloading your LLM from VRAM ☆123 · Updated 2 weeks ago
- Mistral 7B playing DOOM ☆131 · Updated 9 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆349 · Updated 11 months ago
- GGUF implementation in C as a library and a CLI tools program ☆269 · Updated 3 months ago
- Visualize the intermediate output of Mistral 7B ☆360 · Updated 3 months ago
- Fast parallel LLM inference for MLX ☆186 · Updated 9 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Inference of Mamba models in pure C ☆188 · Updated last year
- Run GGML models with Kubernetes. ☆173 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- ☆241 · Updated last year
- Run PaliGemma in real time ☆131 · Updated 11 months ago
- Live-bending a foundation model’s output at the neural network level. ☆247 · Updated 3 weeks ago
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆261 · Updated last week
- ☆112 · Updated 3 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆237 · Updated 11 months ago
- A small code base for training large models ☆294 · Updated last week
- Hierarchical Navigable Small Worlds ☆96 · Updated 3 weeks ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆283 · Updated this week
- Implement recursion using English as the programming language and an LLM as the runtime. ☆232 · Updated 2 years ago
- WebGPU LLM inference tuned by hand ☆149 · Updated last year
- Dead Simple LLM Abliteration ☆212 · Updated 2 months ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA finetuning ☆166 · Updated last year
- Tiny inference-only implementation of LLaMA ☆93 · Updated last year
- TypeScript generator for llama.cpp grammars directly from TypeScript interfaces ☆135 · Updated 9 months ago
- LLM-based code completion engine ☆185 · Updated 3 months ago
- Throwaway GPT inference ☆139 · Updated 11 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆607 · Updated last month
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆96 · Updated 7 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆204 · Updated 5 months ago