wolfecameron / GIST
Repository for "GIST: Distributed training for large-scale graph convolutional networks"
☆15 · Updated 2 years ago
Alternatives and similar repositories for GIST
Users interested in GIST are comparing it to the libraries listed below.
- [NeurIPS 2022] DreamShard: Generalizable Embedding Table Placement for Recommender Systems ☆29 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆86 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆39 · Updated 3 years ago
- ☆51 · Updated last year
- Experimental scripts for researching data adaptive learning rate scheduling. ☆23 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 11 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆47 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 11 months ago
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" ☆33 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search" ☆65 · Updated last year
- ☆10 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆36 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- ☆76 · Updated last week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- ☆26 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆49 · Updated 2 years ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆36 · Updated last year
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 7 months ago
- Official Repository for Task-Circuit Quantization ☆23 · Updated 3 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated last month
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆61 · Updated 10 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year