furiousteabag / vram-calculator
Transformer GPU VRAM estimator
☆67, updated last year
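A back-of-envelope estimate of the kind this calculator performs can be sketched as follows. This is a generic sketch of the standard weights-plus-KV-cache accounting, not the repository's actual formula; the function name, parameters, and the example configuration are all illustrative, and activation memory and framework overhead are ignored.

```python
def estimate_inference_vram_gb(
    num_params: float,        # total model parameters, e.g. 7e9
    n_layers: int,            # number of transformer layers
    n_kv_heads: int,          # key/value heads (equals attention heads without GQA)
    head_dim: int,            # dimension per head
    seq_len: int,             # context length to cache
    batch_size: int = 1,
    bytes_per_param: int = 2,  # fp16/bf16 weights
    bytes_per_kv: int = 2,     # fp16/bf16 KV cache
) -> float:
    """Rough inference VRAM estimate in GiB: weights + KV cache only."""
    weights = num_params * bytes_per_param
    # The KV cache stores one key and one value vector per layer, per token
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * batch_size * bytes_per_kv
    return (weights + kv_cache) / 1024**3

# Illustrative example: a Llama-2-7B-like config with a 4096-token context
print(round(estimate_inference_vram_gb(7e9, 32, 32, 128, 4096), 1))  # ~15.0 GiB
```

Training estimates add optimizer state and gradients (e.g. roughly 16 bytes per parameter for Adam in mixed precision), which is why training VRAM dwarfs inference VRAM for the same model.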
Alternatives and similar repositories for vram-calculator
Users interested in vram-calculator are comparing it to the repositories listed below.
- ☆68 (updated last year)
- ☆115 (updated 10 months ago)
- Google TPU optimizations for transformer models (☆124, updated 10 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆53, updated 2 years ago)
- GRDN.AI app for garden optimization (☆69, updated 3 weeks ago)
- Train, tune, and infer the Bamba model (☆137, updated 6 months ago)
- Aana SDK is a powerful framework for building AI-enabled multimodal applications (☆53, updated 3 months ago)
- Simple high-throughput inference library (☆151, updated 6 months ago)
- An OpenAI Completions API-compatible server for NLP transformer models (☆65, updated 2 years ago)
- ☆101 (updated last year)
- Pivotal Token Search (☆132, updated last week)
- Public reports detailing responses of Large Language Models to sets of prompts (☆32, updated 11 months ago)
- Self-host LLMs with vLLM and BentoML (☆161, updated 2 weeks ago)
- ☆66 (updated 8 months ago)
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆59, updated last month)
- Lightweight toolkit package to train and fine-tune 1.58-bit language models (☆103, updated 6 months ago)
- Inference code for mixtral-8x7b-32kseqlen (☆104, updated last year)
- ☆164 (updated 4 months ago)
- smolbox of recipes (☆28, updated 7 months ago)
- A collection of benchmark logs for different LLMs (☆119, updated last year)
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models (☆139, updated last year)
- Train your own SOTA deductive reasoning model (☆107, updated 9 months ago)
- papers.day (☆91, updated last year)
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs (☆91, updated last year)
- A collection of all available inference solutions for LLMs (☆93, updated 9 months ago)
- Simple examples using Argilla tools to build AI (☆56, updated last year)
- Inference of Mamba models in pure C (☆194, updated last year)
- DevQualityEval: an evaluation benchmark 📈 and framework to compare and evolve the quality of LLM code generation (☆182, updated 6 months ago)
- 1.58-bit LLM on Apple Silicon using MLX (☆226, updated last year)
- Public repository containing METR's DVC pipeline for eval data analysis (☆143, updated 8 months ago)