huggingface / optimum-nvidia
☆1,026 · Updated 11 months ago
Alternatives and similar repositories for optimum-nvidia
Users interested in optimum-nvidia are comparing it to the libraries listed below.
- Accelerate your Hugging Face Transformers by 7.6-9x. Native to Hugging Face and PyTorch. ☆686 · Updated last year
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization, https://arxiv.org/pdf/2401.06118.p… ☆1,313 · Updated 5 months ago
- Training LLMs with QLoRA + FSDP ☆1,537 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Updated last year
- ☆446 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆905 · Updated last month
- A simple, performant and scalable JAX LLM! ☆2,089 · Updated this week
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,432 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,173 · Updated last year
- Train models contrastively in PyTorch ☆771 · Updated 9 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the gradient-projection sketch after this list) ☆1,639 · Updated last year
- ☆551 · Updated last year
- A PyTorch quantization backend for optimum (see the quantization sketch after this list) ☆1,021 · Updated last month
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (see the multi-adapter sketch after this list) ☆1,893 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,314 · Updated 10 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,134 · Updated last year
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,329 · Updated last year
- Official inference library for pre-processing of Mistral models ☆846 · Updated last week
- PyTorch-native quantization and sparsity for training and inference ☆2,617 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Implementation of the training framework proposed in the Self-Rewarding Language Model paper from Meta AI ☆1,407 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆326 · Updated 3 months ago
- ☆576 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆941 · Updated 2 months ago
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- ☆866 · Updated 2 years ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,811 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,088 · Updated 6 months ago
- RayLLM - LLMs on Ray (archived; see the README for more info) ☆1,264 · Updated 10 months ago
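Several of the quantization entries above (the additive-quantization paper AQLM, HQQ, the optimum backend, and PyTorch-native torchao) build on the same primitive: replacing float weights with low-bit integers plus a per-channel scale. Below is a minimal round-to-nearest sketch of that primitive in plain PyTorch; the function names are hypothetical, and it does not reproduce any of these libraries' actual APIs.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-output-channel round-to-nearest int8 quantization.

    Illustrative helper only; the listed libraries use more sophisticated
    codebooks and fused inference kernels.
    """
    # One scale per output row: the largest |w| in a row maps to 127.
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)
print("max abs error:", (w - dequantize(q, scale)).abs().max().item())
```

Per-channel scales keep the rounding error bounded by each row's dynamic range; the libraries above differ mainly in how they choose the codebook (additive codes in AQLM, a half-quadratic solver in HQQ) and in their inference kernels.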
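The GaLore entry refers to projecting gradients into a low-rank subspace so optimizer state is stored as r×n instead of m×n. Here is a toy sketch of that idea, assuming plain SGD with momentum rather than Adam, and recomputing the SVD on every step (the paper refreshes the projection basis only periodically):

```python
import torch

def gradient_low_rank_basis(grad: torch.Tensor, rank: int) -> torch.Tensor:
    # GaLore refreshes this basis only every few hundred steps;
    # it is recomputed on every call here purely for brevity.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank]                       # (m, r) projection basis

m, n, r, lr = 1024, 1024, 8, 1e-3
W = torch.randn(m, n)                        # a weight matrix
grad = torch.randn(m, n)                     # stand-in for its gradient
P = gradient_low_rank_basis(grad, r)

# Optimizer state lives in the projected (r, n) space, not (m, n):
momentum = torch.zeros(r, n)
momentum = 0.9 * momentum + P.T @ grad       # update state in low rank
W -= lr * (P @ momentum)                     # project the step back up
```

The memory saving comes from the state tensor: r×n floats instead of m×n, which is substantial when r is a few dozen and m is in the thousands.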
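S-LoRA and the "serving multiple LoRA-finetuned LLMs as one" entry both exploit the fact that the expensive base-model GEMM can be shared across requests while each request applies only its own small adapter. A minimal batched sketch follows, with made-up shapes and none of the paging or custom kernels that are those projects' actual contributions:

```python
import torch

d, k, r, num_adapters = 512, 512, 8, 3       # illustrative sizes
W = torch.randn(k, d)                        # shared base weight
A = torch.randn(num_adapters, r, d)          # per-adapter down-projections
B = torch.randn(num_adapters, k, r) * 0.01   # per-adapter up-projections

x = torch.randn(4, d)                        # one token each for 4 requests
adapter_ids = torch.tensor([0, 2, 1, 0])     # each request picks its adapter

base = x @ W.T                                        # one shared GEMM
h = torch.einsum('bd,brd->br', x, A[adapter_ids])     # x @ A_i^T per request
lora = torch.einsum('br,bkr->bk', h, B[adapter_ids])  # then @ B_i^T
y = base + lora                                       # adapter-specific output
```

Because the adapters are rank-r, the extra per-request work is O(r(d + k)) on top of the O(dk) base GEMM, which is why thousands of adapters can share a single deployment.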