nectere-sdk / congenial-goggles
☆10 · Updated 6 months ago
Alternatives and similar repositories for congenial-goggles
Users interested in congenial-goggles are comparing it to the repositories listed below.
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆846 · Updated 2 weeks ago
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.p…) · ☆1,272 · Updated 2 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 · ☆928 · Updated last year
- Training LLMs with QLoRA + FSDP (see the sketch after this list) · ☆1,494 · Updated 8 months ago
- Accelerate your Hugging Face Transformers models 7.6-9x; native to Hugging Face and PyTorch · ☆685 · Updated 11 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' · ☆239 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… · ☆2,169 · Updated 9 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models · ☆241 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining · ☆702 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s · ☆715 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" · ☆374 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆697 · Updated 11 months ago
- A bagel, with everything. · ☆323 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ☆897 · Updated 2 months ago
- Serving multiple LoRA-finetuned LLMs as one · ☆1,075 · Updated last year
- Inference code for Persimmon-8B · ☆415 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning · ☆659 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications · ☆436 · Updated 6 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars · ☆983 · Updated last year
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs · ☆828 · Updated 2 months ago
- FP16xINT4 LLM inference kernel achieving near-ideal ~4x speedups at batch sizes up to 16-32 tokens · ☆863 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ☆1,260 · Updated 4 months ago
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,521 · Updated last year
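
Several entries above center on 4-bit quantized fine-tuning. As a rough illustration of the pattern behind the "Training LLMs with QLoRA + FSDP" entry, here is a minimal QLoRA sketch using the widely available transformers/peft/bitsandbytes stack; it covers only the quantize-and-adapt half (FSDP sharding is a separate concern), the checkpoint name is just a placeholder, and none of this is any linked repository's own code.

```python
# Minimal QLoRA-style setup: 4-bit NF4 base weights + trainable LoRA adapters.
# Illustrative sketch only; the model id below is a placeholder example.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint

# Load the frozen base model quantized to 4-bit NF4 (the QLoRA storage format).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach small LoRA adapter matrices; only these train, in higher precision.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The point of the pattern is memory: the base weights sit in compact 4-bit storage while gradients and optimizer state exist only for the tiny adapters, which is what makes single-GPU (or, with FSDP, multi-GPU sharded) fine-tuning of larger models practical.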