Oneflow-Inc / conda-env
☆12 · Updated 2 years ago
Alternatives and similar repositories for conda-env
Users interested in conda-env are comparing it to the libraries listed below.
- OneFlow Serving ☆20 · Updated 2 months ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- A TVM-like CUDA/C code generator. ☆9 · Updated 3 years ago
- ☆18 · Updated last year
- TensorRT LLM Benchmark Configuration ☆13 · Updated 11 months ago
- A practical way of learning Swizzle ☆20 · Updated 4 months ago
- Study of cutlass ☆21 · Updated 7 months ago
- ☆23 · Updated 2 years ago
- ☆16 · Updated last year
- A toolkit for developers to simplify the transformation of nn.Module instances; it now corresponds to PyTorch's torch.fx. ☆13 · Updated 2 years ago
- Multiple GEMM operators constructed with cutlass to support LLM inference. ☆18 · Updated 8 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- Quantized Attention on GPU ☆44 · Updated 7 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- ☆96 · Updated 9 months ago
- ☆11 · Updated last year
- Benchmark tests supporting the TiledCUDA library. ☆16 · Updated 7 months ago
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 3 years ago
- ☆14 · Updated this week
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated last week
- ☆11 · Updated last year
- ☆39 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆110 · Updated 9 months ago
- ☆19 · Updated 8 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- Framework to reduce autotune overhead to zero for well-known deployments ☆77 · Updated last week