deepshard / mixtral-8x7b-Inference
Eh, simple and works.
☆27 · Updated last year
Alternatives and similar repositories for mixtral-8x7b-Inference
Users interested in mixtral-8x7b-Inference are comparing it to the repositories listed below.
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 3 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- A collection of benchmark logs for different LLMs ☆119 · Updated 9 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- ☆22 · Updated last year
- ☆48 · Updated last year
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- ☆114 · Updated 4 months ago
- Train your own SOTA deductive reasoning model ☆92 · Updated 2 months ago
- ☆87 · Updated last year
- A fast, local, and secure approach to training LLMs on coding tasks using GRPO with WebAssembly and interpreter feedback ☆23 · Updated last month
- ☆61 · Updated last year
- ☆28 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated 8 months ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and Hugging Face ☆26 · Updated 10 months ago
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Updated last year
- tiny_fnc_engine is a minimal Python library that provides a flexible engine for calling functions extracted from an LLM ☆38 · Updated 8 months ago
- A repository of prompts and Python scripts for intelligent transformation of raw text into diverse formats ☆30 · Updated last year
- Modified Stanford Alpaca trainer for training Replit's code model ☆40 · Updated last year
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated last year
- Entropix-style sampling + GUI ☆26 · Updated 6 months ago
- Chat Markup Language conversation library ☆55 · Updated last year