huggingface / local-gemma
Gemma 2 optimized for your local machine.
☆376 · Updated last year
Alternatives and similar repositories for local-gemma
Users interested in local-gemma are comparing it to the libraries listed below.
- ☆102 · Updated last year
- ☆208 · Updated 6 months ago
- ☆161 · Updated 3 weeks ago
- ☆206 · Updated last year
- A simple tool that lets you explore different possible paths that an LLM might sample ☆185 · Updated 3 months ago
- Official inference library for pre-processing of Mistral models ☆784 · Updated this week
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- Unattended Lightweight Text Classifiers with LLM Embeddings ☆184 · Updated 11 months ago
- ☆170 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 10 months ago
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models ☆398 · Updated 2 weeks ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆387 · Updated this week
- Simple UI for debugging correlations of text embeddings ☆289 · Updated 3 months ago
- ☆262 · Updated 2 months ago
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆241 · Updated last year
- LLM inference in C/C++ ☆101 · Updated this week
- 1.58-bit LLM on Apple Silicon using MLX ☆221 · Updated last year
- GRadient-INformed MoE ☆264 · Updated 11 months ago
- Video+code lecture on building nanoGPT from scratch ☆69 · Updated last year
- FRP Fork ☆177 · Updated 4 months ago
- ☆446 · Updated last year
- A flexible, adaptive classification system for dynamic text classification ☆424 · Updated this week
- ☆134 · Updated last week
- Build datasets using natural language ☆518 · Updated 3 months ago
- ☆155 · Updated 4 months ago
- Solving data for LLMs - create quality synthetic datasets! ☆151 · Updated 7 months ago
- A compact LLM pretrained in 9 days by using high-quality data ☆322 · Updated 4 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- Tool to download models from the Hugging Face Hub and convert them to GGML/GGUF for llama.cpp ☆158 · Updated 4 months ago
- ☆116 · Updated 8 months ago