☆71 · Jul 11, 2024 · Updated last year
Alternatives and similar repositories for LTE
Users interested in LTE are comparing it to the libraries listed below.
- Source Code for the JAIR Paper "Does CLIP Know my Face?" (Demo: https://huggingface.co/spaces/AIML-TUDA/does-clip-know-my-face) · ☆16 · Jul 9, 2024 · Updated last year
- Linear Attention Sequence Parallelism (LASP) · ☆88 · Jun 4, 2024 · Updated last year
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation (ICML 2024) · ☆23 · Jun 26, 2024 · Updated last year
- ☆33 · Jul 8, 2024 · Updated last year
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆148 · Apr 8, 2025 · Updated last year
- A repository for research on medium-sized language models. · ☆78 · May 23, 2024 · Updated last year
- ☆11 · Feb 3, 2025 · Updated last year
- ☆48 · Aug 29, 2024 · Updated last year
- ☆12 · Feb 11, 2026 · Updated 2 months ago
- ☆51 · Oct 29, 2023 · Updated 2 years ago
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024). · ☆106 · Jul 1, 2024 · Updated last year
- ☆22 · Apr 24, 2025 · Updated 11 months ago
- Byte-sized text games for code generation tasks on virtual environments · ☆20 · Jul 8, 2024 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆472 · Apr 21, 2024 · Updated last year
- Code for the MicroAdam paper. · ☆21 · Dec 14, 2024 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" · ☆45 · Oct 1, 2025 · Updated 6 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆33 · Nov 4, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ☆1,683 · Oct 28, 2024 · Updated last year
- ☆14 · Mar 31, 2024 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆281 · Nov 3, 2023 · Updated 2 years ago
- ☆232 · Jun 24, 2024 · Updated last year
- Download playlists from YouTube to Google Drive · ☆15 · Oct 22, 2020 · Updated 5 years ago
- Code for merging large language models · ☆35 · Aug 7, 2024 · Updated last year
- ☆17 · Jan 4, 2021 · Updated 5 years ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models · ☆25 · Nov 25, 2024 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆164 · Apr 13, 2025 · Updated last year
- ☆23 · Nov 26, 2024 · Updated last year
- LLMem: GPU Memory Estimation for Fine-Tuning Pre-Trained LLMs · ☆29 · May 31, 2025 · Updated 10 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients · ☆205 · Jul 17, 2024 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" (ICML 2024) · ☆53 · Dec 22, 2025 · Updated 3 months ago
- Efficient construction of the BWT using string compression · ☆22 · Mar 17, 2026 · Updated 3 weeks ago
- ☆17 · Apr 7, 2025 · Updated last year
- Official implementation of the paper "Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation" · ☆21 · Dec 10, 2024 · Updated last year
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti…" · ☆65 · Apr 15, 2024 · Updated last year
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" · ☆25 · Sep 13, 2024 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆85 · Mar 5, 2024 · Updated 2 years ago
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" · ☆186 · Apr 19, 2024 · Updated last year
- PyTorch code for the paper "Resource-Adaptive Federated Learning with All-In-One Neural Composition" (NeurIPS 2022) · ☆19 · Dec 4, 2022 · Updated 3 years ago
- ☆16 · Feb 5, 2025 · Updated last year