sachinraja13 / DINO_DETR_MLX
MLX version of DINO DETR
☆13 · Updated 7 months ago
Alternatives and similar repositories for DINO_DETR_MLX
Users interested in DINO_DETR_MLX are comparing it to the repositories listed below.
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆172 · Updated last year
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆114 · Updated 7 months ago
- Swift implementation of Flux.1 using mlx-swift. ☆95 · Updated 3 weeks ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ☆320 · Updated 4 months ago
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆187 · Updated this week
- SmolVLM2 Demo ☆170 · Updated 4 months ago
- ☆110 · Updated last month
- Phi-3.5 for Mac: locally-run vision and language models for Apple Silicon. ☆270 · Updated 10 months ago
- ☆367 · Updated 10 months ago
- ☆183 · Updated 4 months ago
- Fast parallel LLM inference for MLX. ☆204 · Updated last year
- MLX image models for Apple Silicon machines. ☆82 · Updated 3 months ago
- MLX Model Manager unifies loading and inference for LLMs and VLMs. ☆98 · Updated 6 months ago
- An implementation of the CSM (Conversational Speech Model) for Apple Silicon using MLX. ☆369 · Updated 2 months ago
- MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video. ☆58 · Updated last year
- Start a server from the MLX library. ☆189 · Updated last year
- SiLLM simplifies training and running large language models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆275 · Updated last month
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆66 · Updated 8 months ago
- A simple UI / web frontend for mlx-lm using Streamlit. ☆259 · Updated last month
- Large language model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆451 · Updated 6 months ago
- ☆296 · Updated 3 months ago
- The easiest way to run the fastest MLX-based LLMs locally. ☆295 · Updated 9 months ago
- Run embeddings in MLX. ☆90 · Updated 10 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆83 · Updated 3 months ago
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆62 · Updated last year
- For inferring and serving local LLMs using the MLX framework. ☆107 · Updated last year
- MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. I… ☆495 · Updated 3 weeks ago
- Train large language models on MLX. ☆138 · Updated this week
- ☆75 · Updated 8 months ago
- Experimenting with conversational AI in iOS, macOS, and visionOS apps. ☆95 · Updated last month