☆71 · Updated Jul 11, 2024
Alternatives and similar repositories for LTE
Users that are interested in LTE are comparing it to the libraries listed below.
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated Jun 4, 2024
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation (ICML 2024) ☆23 · Updated Jun 26, 2024
- ☆33 · Updated Jul 8, 2024
- A repository for research on medium-sized language models ☆78 · Updated May 23, 2024
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆39 · Updated Nov 1, 2024
- ☆11 · Updated Feb 3, 2025
- ☆48 · Updated Aug 29, 2024
- ☆51 · Updated Oct 29, 2023
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) ☆107 · Updated Jul 1, 2024
- ☆22 · Updated Apr 24, 2025
- Byte-sized text games for code generation tasks on virtual environments ☆20 · Updated Jul 8, 2024
- Repository of the RANLP 2023 paper "Exploring the Landscape of Natural Language Processing Research" ☆13 · Updated Oct 20, 2024
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Updated Apr 21, 2024
- Code for the MicroAdam paper ☆21 · Updated Dec 14, 2024
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆45 · Updated Oct 1, 2025
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆33 · Updated Nov 4, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,690 · Updated Oct 28, 2024
- ☆14 · Updated Mar 31, 2024
- ☆16 · Updated Feb 23, 2025
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆280 · Updated Nov 3, 2023
- ☆233 · Updated Jun 24, 2024
- Code for merging large language models ☆36 · Updated Aug 7, 2024
- ☆17 · Updated Jan 4, 2021
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆25 · Updated Nov 25, 2024
- ☆23 · Updated Nov 26, 2024
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆167 · Updated Apr 13, 2025
- [ICLR '24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆106 · Updated Jun 20, 2025
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆205 · Updated Jul 17, 2024
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" (ICML 2024) ☆53 · Updated Dec 22, 2025
- ☆17 · Updated Apr 7, 2025
- Official implementation of the paper "Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation" ☆21 · Updated Dec 10, 2024
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti…" ☆65 · Updated Apr 15, 2024
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆25 · Updated Sep 13, 2024
- DeiT implementation for Q-ViT ☆25 · Updated Apr 21, 2025
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆86 · Updated Mar 5, 2024
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆189 · Updated Apr 19, 2024
- PyTorch code for the paper "Resource-Adaptive Federated Learning with All-In-One Neural Composition" (NeurIPS 2022) ☆19 · Updated Dec 4, 2022
- ☆18 · Updated Aug 19, 2024
- ☆16 · Updated Feb 5, 2025