megvii-research / megfile
Megvii FILE Library - Working with files in Python the same way as with the standard library
☆167 · Updated this week
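megfile's pitch is that the same calls work on local paths and remote URIs alike. A minimal sketch using its documented top-level helpers (the s3:// bucket and key below are hypothetical placeholders):

```python
# Minimal sketch: megfile mirrors familiar file operations across storage backends.
from megfile import smart_exists, smart_open

path = "s3://my-bucket/data.txt"  # hypothetical remote path; a local path works the same way

if smart_exists(path):                # analogous to os.path.exists
    with smart_open(path, "r") as f:  # analogous to the built-in open
        print(f.read())
```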
Alternatives and similar repositories for megfile
Users interested in megfile are comparing it to the libraries listed below.
- mllm-npu: training multimodal large language models on Ascend NPUs ☆95 · Updated last year
- To pioneer training long-context multi-modal transformer models ☆66 · Updated 5 months ago
- Useful dotfiles, including vim, zsh, tmux and vscode ☆19 · Updated 2 weeks ago
- FireFlyer Record file format, writer and reader for DL training samples. ☆238 · Updated 3 years ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆205 · Updated 2 years ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆598 · Updated this week
- An industrial extension library for PyTorch to accelerate large-scale model training ☆56 · Updated 4 months ago
- Datasets, Transforms and Models specific to Computer Vision ☆90 · Updated 2 years ago
- A hyperparameter manager for deep learning experiments. ☆97 · Updated 3 years ago
- TVMScript kernel for deformable attention ☆25 · Updated 4 years ago
- Large-scale image dataset visualization tool. ☆121 · Updated last month
- ☆441 · Updated 4 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 11 months ago
- JittorInfer is a high-performance C++ inference framework designed for large language models on Huawei's Ascend AI processor. ☆77 · Updated 2 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆471 · Updated last week
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,504 · Updated this week
- A converter from MegEngine to other frameworks ☆70 · Updated 2 years ago
- ☆79 · Updated 2 years ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 10 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Megatron's multi-modal data loader ☆302 · Updated last week
- A model compression and acceleration toolbox based on PyTorch. ☆333 · Updated last year
- ☆61 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆619 · Updated 2 weeks ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆256 · Updated last month
- ☆71 · Updated last week
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 5 months ago
- High performance inference engine for diffusion models ☆102 · Updated 4 months ago
- ☆36 · Updated 2 years ago
- A parallelized VAE that avoids OOM for high-resolution image generation ☆84 · Updated 5 months ago