wolfecameron / GIST
Repository for "GIST: Distributed Training for Large-Scale Graph Convolutional Networks"
☆15 · Updated 3 years ago
Alternatives and similar repositories for GIST
Users interested in GIST are comparing it to the libraries listed below:
- [NeurIPS 2022] DreamShard: Generalizable Embedding Table Placement for Recommender Systems ☆29 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆35 · Updated last year
- ☆26 · Updated 2 years ago
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆66 · Updated 2 years ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆40 · Updated 2 years ago
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" ☆33 · Updated 2 years ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- ☆27 · Updated last year
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆36 · Updated last year
- [ACL 2025 Main] Repository for the paper "500xCompressor: Generalized Prompt Compression for Large Language Models" ☆56 · Updated 7 months ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆39 · Updated 3 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL 2023) ☆57 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- KV Cache Steering for Inducing Reasoning in Small Language Models ☆46 · Updated 6 months ago
- [NAACL 2025] Official Implementation of "HMT: Hierarchical Memory Transformer for Long Context Language Processing" ☆80 · Updated 2 weeks ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆86 · Updated 6 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- Low-Rank Llama Custom Training ☆23 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch… ☆58 · Updated last week