huggingface / local-gemma
Gemma 2 optimized for your local machine.
☆376 · Updated last year
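For context, local-gemma targets the same workflow as running Gemma 2 through the plain Hugging Face transformers API. The sketch below shows that baseline flow; the model id, dtype, and generation settings are illustrative choices, not local-gemma's own presets or API.

```python
# Minimal sketch: running Gemma 2 locally with plain transformers.
# Settings here are illustrative; local-gemma layers presets
# (memory/speed trade-offs) on top of a flow like this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # place layers on available GPU/CPU automatically
    torch_dtype=torch.bfloat16,  # half precision to fit on local hardware
)

inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```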
Alternatives and similar repositories for local-gemma
Users interested in local-gemma are comparing it to the libraries listed below.
- ☆162 · Updated 2 months ago
- ☆209 · Updated 8 months ago
- ☆207 · Updated last year
- ☆102 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- Official inference library for pre-processing of Mistral models ☆801 · Updated last week
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- A simple tool that lets you explore different possible paths that an LLM might sample. ☆190 · Updated 5 months ago
- ☆170 · Updated last year
- Fast parallel LLM inference for MLX ☆220 · Updated last year
- ☆447 · Updated last year
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆273 · Updated last year
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆404 · Updated 2 weeks ago
- Maybe the new state-of-the-art vision model? We'll see 🤷♂️ ☆165 · Updated last year
- ☆136 · Updated last month
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- ☆264 · Updated 3 months ago
- Start a server from the MLX library. ☆192 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 2 months ago
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆242 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆275 · Updated last year
- ☆116 · Updated 10 months ago
- Let's build better datasets, together! ☆262 · Updated 9 months ago
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 4 months ago
- GRadient-INformed MoE ☆264 · Updated last year
- FastMLX is a high-performance, production-ready API to host MLX models. ☆331 · Updated 7 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp ☆160 · Updated 5 months ago
- A Lightweight Library for AI Observability ☆251 · Updated 7 months ago
- ☆210 · Updated 3 months ago