conda-forge / miniforge
A conda-forge distribution.
☆ 7,681 · Updated last week
Alternatives and similar repositories for miniforge
Users interested in miniforge are comparing it to the libraries listed below.
- The Fast Cross-Platform Package Manager ☆ 7,390 · Updated this week
- JupyterLab desktop application, based on Electron. ☆ 4,008 · Updated 5 months ago
- Development repository for the Triton language and compiler ☆ 15,687 · Updated this week
- A system-level, binary package and environment manager running on all major operating systems and platforms. ☆ 6,908 · Updated this week
- Fast and memory-efficient exact attention ☆ 17,572 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆ 9,527 · Updated this week
- A Fast, Extensible Progress Bar for Python and CLI ☆ 29,883 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆ 12,008 · Updated 5 months ago
- Official repository for Spyder - The Scientific Python Development Environment ☆ 8,740 · Updated this week
- Ongoing research training transformer models at scale ☆ 12,428 · Updated last week
- NumPy aware dynamic Python compiler using LLVM ☆ 10,437 · Updated last week
- The Sphinx documentation generator ☆ 7,098 · Updated this week
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆ 7,745 · Updated this week
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆ 90,331 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆ 8,771 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆ 8,943 · Updated last month
- Install and Run Python Applications in Isolated Environments ☆ 11,640 · Updated this week
- NumPy & SciPy for GPU ☆ 10,233 · Updated this week
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. ☆ 37,301 · Updated this week
- An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management. ☆ 5,574 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 48,531 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆ 18,598 · Updated this week
- GPU & Accelerator process monitoring for AMD, Apple, Huawei, Intel, NVIDIA and Qualcomm ☆ 9,145 · Updated last month
- Build and run Docker containers leveraging NVIDIA GPUs ☆ 17,376 · Updated last year
- SGLang is a fast serving framework for large language models and vision language models. ☆ 14,667 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. ☆ 9,320 · Updated 7 months ago
- Transformer related optimization, including BERT, GPT ☆ 6,173 · Updated last year
- SciPy library main repository ☆ 13,692 · Updated last week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆ 32,386 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆ 10,586 · Updated this week