determined-ai / determined-examples
Example ML projects that use the Determined library.
☆32 · Updated last year
Alternatives and similar repositories for determined-examples
Users interested in determined-examples are comparing it to the libraries listed below.
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆71 · Updated 10 months ago
- Intel Gaudi's Megatron-DeepSpeed for training large language models ☆18 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆89 · Updated this week
- LM engine is a library for pretraining/finetuning LLMs ☆113 · Updated this week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- Distributed preprocessing and data loading for language datasets ☆40 · Updated last year
- ☆26 · Updated 2 years ago
- MLPerf™ logging library ☆38 · Updated last month
- ☆53 · Updated last year
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 3 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆228 · Updated this week
- ☆125 · Updated last year
- Make triton easier ☆50 · Updated last year
- Easy and Efficient Quantization for Transformers ☆205 · Updated 2 weeks ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆170 · Updated last month
- A place to store reusable transformer components of my own creation or found on the interwebs ☆72 · Updated this week
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- Torch Distributed Experimental ☆117 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆63 · Updated 4 months ago
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models. ☆13 · Updated last week
- ☆124 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- ☆96 · Updated 2 weeks ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆92 · Updated 2 years ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated 10 months ago