apple / corenet
CoreNet: A library for training deep neural networks
☆7,013 · Updated 3 weeks ago
Alternatives and similar repositories for corenet
Users interested in corenet are comparing it to the libraries listed below.
- MLX: An array framework for Apple silicon ☆20,757 · Updated this week
- PyTorch native post-training library ☆5,217 · Updated this week
- Examples in the MLX framework ☆7,444 · Updated 3 weeks ago
- ☆8,623 · Updated 7 months ago
- Lightweight, standalone C++ inference engine for Google's Gemma models ☆6,445 · Updated this week
- Large World Model -- Modeling Text and Video with Millions of Tokens of Context ☆7,277 · Updated 7 months ago
- ☆2,952 · Updated 8 months ago
- Tiny vision language model ☆8,019 · Updated last week
- An Extensible Deep Learning Library ☆2,057 · Updated last week
- Official inference library for Mistral models ☆10,262 · Updated 2 months ago
- The official PyTorch implementation of Google's Gemma models ☆5,459 · Updated 2 months ago
- llama3 implementation one matrix multiplication at a time ☆14,982 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale ☆12,174 · Updated this week
- Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, RAG. We als… ☆17,395 · Updated this week
- A Lightweight Recommendation System ☆8,851 · Updated last year
- Gemma open-weight LLM library, from Google DeepMind ☆3,345 · Updated this week
- Utilities intended for use with Llama models ☆7,036 · Updated 3 weeks ago
- A PyTorch native platform for training generative AI models ☆3,838 · Updated this week
- The official Meta Llama 3 GitHub site ☆28,740 · Updated 4 months ago
- ☆4,083 · Updated 11 months ago
- High-speed Large Language Model Serving for Local Deployment ☆8,213 · Updated 3 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens ☆8,504 · Updated last year
- A minimal GPU design in Verilog to learn how GPUs work from the ground up ☆8,394 · Updated 9 months ago
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python ☆5,960 · Updated last month
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization ☆9,662 · Updated 10 months ago
- Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 & Gemma 3 LLMs 2x faster with 70% less memory! 🦥 ☆39,558 · Updated this week
- Examples using MLX Swift ☆1,807 · Updated 2 weeks ago
- ☆8,404 · Updated 11 months ago
- A simple, performant and scalable JAX LLM! ☆1,734 · Updated this week
- DataComp for Language Models ☆1,300 · Updated 2 months ago