FL33TW00D / coremlprofiler
Profile your CoreML models directly from Python
⭐30 · Updated 4 months ago
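At its simplest, what a profiler like coremlprofiler reports starts from repeated timed inference calls. The snippet below is a minimal, hypothetical latency-timing harness in pure Python, not the coremlprofiler API: `predict` stands in for any model's inference callable (for a Core ML model this might be `MLModel.predict` from coremltools), and the warmup runs, sample counts, and returned keys are illustrative choices.

```python
import statistics
import time

def profile_latency(predict, inputs, warmup=3, runs=20):
    """Time repeated calls to `predict(inputs)` and report latency stats in ms."""
    for _ in range(warmup):
        # Warmup absorbs one-time costs (caching, compilation, ANE load).
        predict(inputs)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        predict(inputs)
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(samples),
        "median_ms": statistics.median(samples),
        "mean_ms": statistics.fmean(samples),
    }

# Usage with a dummy callable standing in for a real model's predict:
stats = profile_latency(lambda x: sum(x), list(range(1000)), warmup=1, runs=5)
```

Real profilers go further, attributing cost per layer and per compute unit (CPU/GPU/ANE), but the warmup-then-sample pattern above is the common core.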
Alternatives and similar repositories for coremlprofiler
Users interested in coremlprofiler are comparing it to the libraries listed below.
- Find out why your CoreML model isn't running on the Neural Engine! ⭐30 · Updated last year
- Tool for visually profiling Core ML models, compatible with both package and compiled versions, including reasons for unsupported operation… ⭐37 · Updated last year
- See the device (CPU/GPU/ANE) and estimated cost for every layer in your CoreML model. ⭐25 · Updated 3 months ago
- Python bindings for symphonia/opus: read various audio formats from Python and write Opus files. ⭐77 · Updated 3 weeks ago
- ModernBERT model optimized for the Apple Neural Engine. ⭐30 · Updated last year
- MLX support for the Open Neural Network Exchange (ONNX). ⭐63 · Updated last year
- A simple, hackable text-to-speech system in PyTorch and MLX. ⭐185 · Updated 6 months ago
- Implementation of "Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis" in MLX. ⭐23 · Updated last year
- Rust crate for some audio utilities. ⭐27 · Updated 10 months ago
- ⭐27 · Updated last year
- MLX image models for Apple Silicon machines. ⭐91 · Updated 2 months ago
- Supporting code for "LLMs for your iPhone: Whole-Tensor 4-Bit Quantization". ⭐11 · Updated last year
- Open-source reproducible benchmarks from Argmax. ⭐77 · Updated 2 weeks ago
- Experiments with BitNet inference on CPU. ⭐55 · Updated last year
- ⭐58 · Updated 2 years ago
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ⭐121 · Updated last year
- Proof of concept for running moshi/hibiki using WebRTC. ⭐20 · Updated 11 months ago
- Implementation of E2-TTS, "Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS", in MLX. ⭐21 · Updated last year
- Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS (E2 TTS) in MLX. ⭐29 · Updated last year
- Thin wrapper around GGML to make life easier. ⭐42 · Updated 3 months ago
- C API for MLX. ⭐172 · Updated this week
- ⭐129 · Updated 7 months ago
- Simple high-throughput inference library. ⭐155 · Updated 8 months ago
- 🤗 Optimum ONNX: Export your model to ONNX and run inference with ONNX Runtime. ⭐114 · Updated last week
- ⭐20 · Updated 2 weeks ago
- FlashAttention (Metal port). ⭐579 · Updated last year
- Training Models Daily. ⭐17 · Updated 2 years ago
- ⭐50 · Updated 3 months ago
- SmolVLM2 Demo. ⭐185 · Updated 10 months ago
- Inference of Mamba and Mamba2 models in pure C. ⭐196 · Updated 2 weeks ago