furiousteabag / vram-calculator
Transformer GPU VRAM estimator
☆66 · Updated last year
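The repository estimates how much GPU memory a transformer needs. As a rough illustration of the kind of arithmetic such a calculator performs (this is not the repository's actual code; the function name and formulas are a standard back-of-the-envelope sketch), inference VRAM is dominated by the weights plus the KV cache:

```python
# Hypothetical back-of-the-envelope VRAM estimate for transformer inference.
# Not the repository's actual code; these are standard rough approximations
# that ignore activations, framework overhead, and fragmentation.

def estimate_inference_vram_gb(
    n_params: float,          # total parameter count
    n_layers: int,            # number of transformer layers
    hidden_size: int,         # model (hidden) dimension
    seq_len: int,             # tokens kept in context
    batch_size: int = 1,
    bytes_per_param: int = 2, # 2 for fp16/bf16, 4 for fp32
) -> float:
    weights = n_params * bytes_per_param
    # KV cache: 2 tensors (K and V) per layer, each [batch, seq, hidden]
    kv_cache = 2 * n_layers * batch_size * seq_len * hidden_size * bytes_per_param
    return (weights + kv_cache) / 1024**3

# Example: a 7B model (32 layers, hidden size 4096) at 4k context in fp16
print(round(estimate_inference_vram_gb(7e9, 32, 4096, 4096), 1))  # → 15.0
```

About 13 GiB of that is the fp16 weights and 2 GiB is the KV cache; real usage will be somewhat higher once activations and runtime overhead are included.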
Alternatives and similar repositories for vram-calculator
Users interested in vram-calculator are comparing it to the repositories listed below.
- ☆116 · Updated 7 months ago
- Aana SDK is a powerful framework for building AI-enabled multimodal applications. ☆52 · Updated 3 weeks ago
- Pivotal Token Search ☆124 · Updated 2 months ago
- Train, tune, and infer Bamba model ☆131 · Updated 3 months ago
- Public reports detailing responses to sets of prompts by Large Language Models. ☆31 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated last year
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 4 months ago
- ☆67 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆88 · Updated 3 months ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆88 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- An OpenAI Completions API-compatible server for NLP transformer models ☆65 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Lego for GRPO ☆29 · Updated 3 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Simple high-throughput inference library ☆127 · Updated 4 months ago
- GRDN.AI app for garden optimization ☆70 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- ☆74 · Updated 2 years ago
- This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs. ☆76 · Updated this week
- Your buddy in the (L)LM space. ☆64 · Updated 11 months ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆89 · Updated last week
- Self-host LLMs with vLLM and BentoML ☆145 · Updated this week
- ☆63 · Updated 5 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- Small, simple agent task environments for training and evaluation ☆18 · Updated 10 months ago
- vLLM adapter for a TGIS-compatible gRPC server ☆39 · Updated this week
- Smart proxy for LLM APIs that enables model-specific parameter control, automatic mode switching (like Qwen3's /think and /no_think), and… ☆50 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆291 · Updated 3 weeks ago