mistralai-sf24 / hackathon
☆447 · Updated last year
Alternatives and similar repositories for hackathon
Users interested in hackathon are comparing it to the libraries listed below.
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆684 · Updated 10 months ago
- Automatically evaluate your LLMs in Google Colab ☆646 · Updated last year
- Official inference library for pre-processing of Mistral models ☆755 · Updated this week
- Train models contrastively in PyTorch ☆727 · Updated 3 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated 11 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆714 · Updated last year
- ☆864 · Updated last year
- An LLMOps pipeline that fine-tunes a small LLM as a fallback for outages of the main LLM service ☆307 · Updated 3 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆888 · Updated 2 months ago
- Inference code for Mistral and Mixtral hacked into the original Llama implementation ☆371 · Updated last year
- Website hosting the Open Foundation Models Cheat Sheet ☆267 · Updated 2 months ago
- A bagel, with everything. ☆322 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 8 months ago
- ☆415 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆984 · Updated 11 months ago
- ☆546 · Updated 10 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Batched LoRAs ☆343 · Updated last year
- Open-source LLM toolkit for building trustworthy LLM applications: TigerArmor (AI safety), TigerRAG (embeddings, RAG), TigerTune (fine-tuning) ☆396 · Updated last year
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆282 · Updated 4 months ago
- An independent implementation of 'Layer-Selective Rank Reduction' ☆239 · Updated last year
- Data cleaning and curation for unstructured text ☆327 · Updated 11 months ago
- ☆986 · Updated 5 months ago
- Generate synthetic data using OpenAI, Mistral AI, or Anthropic models ☆221 · Updated last year
- A comprehensive deep dive into the world of tokens ☆224 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ☆1,392 · Updated last year
- Evaluation suite for LLMs ☆352 · Updated 3 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆447 · Updated 9 months ago
- GPT-2 from scratch in MLX ☆391 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆262 · Updated last year