facebookresearch / fastgen
Simple high-throughput inference library
☆147 · Updated 5 months ago
Alternatives and similar repositories for fastgen
Users interested in fastgen are comparing it to the libraries listed below.
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- Samples of good AI-generated CUDA kernels ☆91 · Updated 4 months ago
- Make Triton easier ☆48 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆92 · Updated 5 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆41 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 10 months ago
- Inference of Mamba models in pure C ☆192 · Updated last year
- A collection of reproducible inference engine benchmarks ☆37 · Updated 6 months ago
- ☆50 · Updated last year
- Storing long contexts in tiny caches with self-study ☆205 · Updated last week
- 👷 Build compute kernels ☆163 · Updated last week
- Efficient non-uniform quantization with GPTQ for GGUF ☆52 · Updated last month
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆147 · Updated last week
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training ☆88 · Updated 3 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆248 · Updated 8 months ago
- ☆52 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- ☆39 · Updated last year
- ☆51 · Updated last year
- ☆60 · Updated 4 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- ☆18 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆218 · Updated 9 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆133 · Updated last month
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month