tunib-ai/oslo
OSLO: Open Source framework for Large-scale model Optimization
☆306 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for oslo
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated last year
- Large-scale language modeling tutorials with PyTorch ☆287 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆778 · Updated last year
- Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets ☆127 · Updated 2 years ago
- A performance library for machine learning applications ☆180 · Updated last year
- FriendliAI Model Hub ☆89 · Updated 2 years ago
- Data processing system for polyglot ☆90 · Updated last year
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆249 · Updated 2 years ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆361 · Updated 7 months ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training ☆111 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆179 · Updated last year
- Prune a model while finetuning or training ☆394 · Updated 2 years ago
- My collection of machine learning papers ☆269 · Updated last year
- KoCLIP: Korean port of OpenAI CLIP, in Flax ☆142 · Updated last year
- Polyglot: Large Language Models of Well-balanced Competence in Multi-languages ☆475 · Updated last year
- Official PyTorch implementation of "Large-scale Bilingual Language-Image Contrastive Learning" (ICLRW 2022) ☆95 · Updated 2 years ago
- Lightweight and Parallel Deep Learning Framework ☆263 · Updated last year
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax (Equinox framework) ☆185 · Updated 2 years ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆309 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆225 · Updated 2 months ago
- Korean Math Word Problems ☆57 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆234 · Updated last year
- [Google Meet] MLLM Arxiv Casual Talk ☆55 · Updated last year
- The code and models for "An Empirical Study of Tokenization Strategies for Various Korean NLP Tasks" (AACL-IJCNLP 2020) ☆116 · Updated 4 years ago
- Scalable PaLM implementation in PyTorch ☆192 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆507 · Updated last year
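The last entry refers to Attention with Linear Biases (ALiBi), which replaces positional embeddings by adding a distance-proportional penalty to each attention score. As a minimal sketch (pure NumPy, not the repository's code), the per-head bias for head h uses slope m_h = 2^(-8(h+1)/H) and penalizes the score between query i and key j by -m_h · (i - j):

```python
import numpy as np

def alibi_bias(num_heads: int, seq_len: int) -> np.ndarray:
    """Per-head linear attention biases as in ALiBi (Press et al., ICLR 2022).

    Head h uses slope m_h = 2 ** (-8 * (h + 1) / num_heads); the bias added
    to the attention score between query i and key j (j <= i) is -m_h * (i - j).
    """
    slopes = np.array([2.0 ** (-8.0 * (h + 1) / num_heads)
                       for h in range(num_heads)])
    # distance[i, j] = i - j; clamp to 0 so future (j > i) positions,
    # which a causal mask removes anyway, get no bias here.
    distance = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]
    return -slopes[:, None, None] * np.maximum(distance, 0)

bias = alibi_bias(num_heads=8, seq_len=4)
# bias has shape (8, 4, 4); it is added to the raw attention scores
# before the softmax, penalizing distant key positions linearly.
```

This bias is added once per forward pass and involves no learned parameters, which is what lets ALiBi-trained models extrapolate to sequence lengths longer than those seen during training.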