megvii-research / megfile
Megvii FILE Library - Working with Files in Python same as the standard library
☆164 · Updated 2 weeks ago
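A minimal usage sketch of the tagline above: local and remote paths handled through one interface. It assumes megfile exposes the smart_open / smart_exists / smart_glob helpers described in its README; the bucket path is a hypothetical placeholder.

```python
# Hedged sketch: assumes megfile provides smart_open / smart_exists / smart_glob
# (check the repo README); the s3:// path below is purely illustrative.
from megfile import smart_open, smart_exists, smart_glob

path = "s3://my-bucket/data/sample.txt"  # hypothetical remote path

# The same calls work for local filesystem paths and remote object storage.
if smart_exists(path):
    with smart_open(path, "r") as f:      # mirrors the builtin open()
        print(f.read())

for name in smart_glob("s3://my-bucket/data/*.txt"):  # mirrors glob.glob()
    print(name)
```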
Alternatives and similar repositories for megfile
Users interested in megfile are comparing it to the libraries listed below.
- useful dotfiles including vim, zsh, tmux and vscode ☆18 · Updated 3 months ago
- To pioneer training long-context multi-modal transformer models ☆64 · Updated 4 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆94 · Updated last year
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆584 · Updated this week
- ☆439 · Updated 4 months ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆205 · Updated 2 years ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 10 months ago
- FireFlyer Record file format, writer and reader for DL training samples. ☆237 · Updated 3 years ago
- TVMScript kernel for deformable attention ☆25 · Updated 4 years ago
- Large-scale image dataset visualization tool. ☆121 · Updated last month
- Datasets, Transforms and Models specific to Computer Vision ☆90 · Updated 2 years ago
- A parallel VAE that avoids OOM for high-resolution image generation ☆84 · Updated 4 months ago
- An industrial extension library of PyTorch to accelerate large-scale model training ☆55 · Updated 4 months ago
- High performance inference engine for diffusion models ☆100 · Updated 3 months ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,432 · Updated this week
- A hyperparameter manager for deep learning experiments. ☆96 · Updated 3 years ago
- ☆66 · Updated 2 weeks ago
- Converter from MegEngine to other frameworks ☆69 · Updated 2 years ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 · Updated 9 months ago
- Megatron's multi-modal data loader ☆292 · Updated this week
- ☆61 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆252 · Updated 4 months ago
- flex-block-attn: an efficient block sparse attention computation library ☆94 · Updated 3 weeks ago
- ☆187 · Updated 11 months ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- A lightweight and highly efficient training framework for accelerating diffusion tasks. ☆50 · Updated last year
- OneFlow Serving ☆20 · Updated 8 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆406 · Updated 10 months ago
- ☆79 · Updated 2 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆614 · Updated this week