moxin-org / Moxin-LLM
Moxin is a family of fully open-source and reproducible LLMs
☆86 · Updated this week
Alternatives and similar repositories for Moxin-LLM:
Users who are interested in Moxin-LLM are comparing it to the libraries listed below.
- Easy-to-use, high-performance knowledge distillation for LLMs ☆59 · Updated last week
- Simple examples using Argilla tools to build AI ☆52 · Updated 5 months ago
- ☆84 · Updated 3 months ago
- ☆66 · Updated 10 months ago
- ☆24 · Updated 2 months ago
- Experimental code for StructuredRAG: JSON Response Formatting with Large Language Models ☆105 · Updated last week
- AnyModal, a flexible multimodal language model framework for PyTorch ☆90 · Updated 3 months ago
- ☆99 · Updated 7 months ago
- klmbr: a prompt pre-processing technique to break through the entropy barrier when generating text with LLMs ☆71 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆42 · Updated 10 months ago
- Run multiple resource-heavy large models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆55 · Updated last month
- ☆117 · Updated 7 months ago
- ☆39 · Updated last year
- Guaranteed structured output from any language model via hierarchical state machines ☆124 · Updated this week
- ☆112 · Updated 4 months ago
- ☆153 · Updated 9 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆171 · Updated 11 months ago
- ☆130 · Updated 2 weeks ago
- Who needs o1 anyway? Adds CoT to any OpenAI-compatible endpoint ☆41 · Updated 7 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 2 months ago
- Testing LLM reasoning abilities with family-relationship quizzes ☆62 · Updated 2 months ago
- A pipeline-parallel training script for LLMs ☆137 · Updated 3 weeks ago
- Distributed inference for MLX LLMs ☆87 · Updated 8 months ago
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆91 · Updated 2 weeks ago
- ☆53 · Updated 10 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 7 months ago
- ☆97 · Updated this week
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 5 months ago
- GPT-4-level conversational QA trained in a few hours ☆59 · Updated 8 months ago
- Train your own SOTA deductive reasoning model ☆86 · Updated last month