mistralai-sf24 / hackathon
☆447 · Updated last year
Alternatives and similar repositories for hackathon
Users who are interested in hackathon are comparing it to the libraries listed below.
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆687 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 11 months ago
- Official library for pre-processing of Mistral models ☆794 · Updated last week
- Automatically evaluate your LLMs in Google Colab ☆661 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆717 · Updated last year
- ☆1,004 · Updated 8 months ago
- ☆866 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆289 · Updated 7 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- A bagel, with everything. ☆325 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆986 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆720 · Updated last year
- Data cleaning and curation for unstructured text ☆328 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆912 · Updated 5 months ago
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated 10 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 4 months ago
- ☆416 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆505 · Updated last year
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆170 · Updated last year
- Train Models Contrastively in PyTorch ☆750 · Updated 6 months ago
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 2 months ago
- Fast bare-bones BPE for modern tokenizer training ☆165 · Updated 3 months ago
- Inference code for Persimmon-8B ☆414 · Updated 2 years ago
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆625 · Updated 4 months ago
- Visualize the intermediate output of Mistral 7B ☆371 · Updated 8 months ago
- ☆570 · Updated last year
- Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning) ☆399 · Updated last year