kolinko / effort
An implementation of bucketMul LLM inference
☆224 · Updated last year
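For context, bucketMul is effort's approximate matrix-multiplication scheme: rather than computing every weight-by-activation product, it spends a tunable fraction of the full work ("effort") on the terms expected to matter most. The NumPy sketch below illustrates that general idea only; the function name, the top-k selection heuristic, and the `effort` parameter are illustrative assumptions, not the repo's actual API or kernel.

```python
import numpy as np

def approx_matvec(W, x, effort=0.25):
    """Approximate W @ x using only the largest-magnitude products.

    Illustrative sketch of a bucketMul-style approximation (assumed
    behaviour, not the effort repo's implementation): score each term
    by |w_ij * x_j| and keep only the top `effort` fraction per row.
    """
    scores = np.abs(W * x)                        # contribution magnitude of each term
    k = max(1, int(effort * W.shape[1]))          # number of terms kept per output row
    # indices of the k largest contributions in each row
    top = np.argpartition(-scores, k - 1, axis=1)[:, :k]
    rows = np.arange(W.shape[0])[:, None]
    return np.sum(W[rows, top] * x[top], axis=1)

# quick sanity check: at effort=1.0 the result matches the exact product
W = np.random.randn(16, 64)
x = np.random.randn(64)
print(np.allclose(approx_matvec(W, x, effort=1.0), W @ x))
```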
Alternatives and similar repositories for effort
Users interested in effort are comparing it to the repositories listed below.
- Visualize the intermediate output of Mistral 7B ☆384 · Updated last year
- Mistral7B playing DOOM ☆139 · Updated last year
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆360 · Updated last year
- WebGPU LLM inference tuned by hand ☆150 · Updated 2 years ago
- ☆466 · Updated 2 months ago
- ☆251 · Updated last year
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆125 · Updated 9 months ago
- Fast parallel LLM inference for MLX ☆246 · Updated last year
- a small code base for training large models ☆322 · Updated 9 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆218 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆303 · Updated 5 months ago
- run paligemma in real time ☆133 · Updated last year
- Run GGML models with Kubernetes. ☆175 · Updated 2 years ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆288 · Updated 3 weeks ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆240 · Updated 2 years ago
- a curated list of data for reasoning ai ☆141 · Updated last year
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆297 · Updated last month
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated 2 years ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆450 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆363 · Updated last year
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆141 · Updated last year
- Felafax is building AI infra for non-NVIDIA GPUs ☆570 · Updated last year
- Tiny inference-only implementation of LLaMA ☆92 · Updated last year
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- ☆115 · Updated last year
- Live-bending a foundation model’s output at neural network level. ☆273 · Updated 10 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆628 · Updated 10 months ago
- ☆258 · Updated 11 months ago
- Autograd to GPT-2 completely from scratch ☆126 · Updated 6 months ago
- Applying the ideas of Deepseek R1 to computer use ☆221 · Updated last year