kolinko / effort
An implementation of bucketMul LLM inference
☆223 · Updated last year
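The one-line description above refers to bucketMul, effort's approximate matrix-vector multiplication that lets you dial how many weights are actually loaded and multiplied at inference time. As a rough sketch of the general idea only (the names, the `effort` parameter, and the magnitude-based selection here are illustrative assumptions, not the repo's actual API or bucketing scheme):

```python
import numpy as np

def bucket_matvec(W, x, effort=0.5):
    """Approximate W @ x using only the largest-magnitude weights per row.

    effort: fraction of weights actually multiplied (1.0 == exact result).
    Hypothetical sketch -- the real bucketMul also accounts for input
    magnitudes and organizes weights into GPU-friendly buckets.
    """
    rows, cols = W.shape
    k = max(1, int(effort * cols))
    # Precomputable offline: per-row column indices sorted by |weight|, descending.
    order = np.argsort(-np.abs(W), axis=1)[:, :k]
    top = np.take_along_axis(W, order, axis=1)  # (rows, k) selected weights
    xs = x[order]                               # (rows, k) matching inputs
    return (top * xs).sum(axis=1)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 256))
x = rng.standard_normal(256)
approx = bucket_matvec(W, x, effort=0.25)  # ~25% of the multiplies
exact = W @ x
```

The appeal of the approach is that `effort` becomes a runtime speed/quality knob rather than a fixed quantization decision made ahead of time.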
Alternatives and similar repositories for effort
Users interested in effort are comparing it to the libraries listed below.
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆353 · Updated last year
- ☆401 · Updated this week
- Mistral7B playing DOOM ☆135 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆286 · Updated 3 weeks ago
- ☆116 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆206 · Updated last year
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆126 · Updated 4 months ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- a small code base for training large models ☆309 · Updated 3 months ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- run paligemma in real time ☆131 · Updated last year
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆290 · Updated last week
- Tiny inference-only implementation of LLaMA ☆93 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆221 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆283 · Updated 7 months ago
- a curated list of data for reasoning ai ☆137 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆210 · Updated 9 months ago
- ☆89 · Updated 10 months ago
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆622 · Updated 5 months ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆357 · Updated last year
- ☆221 · Updated 5 months ago
- Run GGML models with Kubernetes. ☆174 · Updated last year
- Live-bending a foundation model's output at neural network level. ☆266 · Updated 4 months ago
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆140 · Updated last year
- Implement recursion using English as the programming language and an LLM as the runtime. ☆239 · Updated 2 years ago
- ☆249 · Updated last year
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago