MDK8888 / GPTFast
Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch.
☆685 · Updated last year
Alternatives and similar repositories for GPTFast
Users interested in GPTFast are comparing it to the repositories listed below.
- ☆446 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,527 · Updated 11 months ago
- ☆1,007 · Updated 8 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" https://arxiv.org/pdf/2401.06118.p… ☆1,296 · Updated 2 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆714 · Updated 2 years ago
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆989 · Updated 5 months ago
- Extend existing LLMs far beyond their original training length with constant memory usage, without retraining ☆722 · Updated last year
- Official library for pre-processing of Mistral models ☆803 · Updated 2 weeks ago
- Visualize the intermediate output of Mistral 7B ☆375 · Updated 9 months ago
- Train models contrastively in PyTorch ☆753 · Updated 6 months ago
- The repository for the code of the UltraFastBERT paper ☆518 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆660 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,069 · Updated 8 months ago
- ☆572 · Updated last year
- A small code base for training large models ☆309 · Updated 5 months ago
- Inference code for Persimmon-8B ☆413 · Updated 2 years ago
- Batched LoRAs ☆346 · Updated 2 years ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆329 · Updated 11 months ago
- Scale LLM Engine public repository ☆813 · Updated last week
- Inference Llama 2 in one file of pure Python ☆422 · Updated last year
- ☆415 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- ☆865 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆505 · Updated 2 years ago
- A bagel, with everything. ☆324 · Updated last year
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆626 · Updated last week
- Open-source LLM toolkit for building trustworthy LLM applications: TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning) ☆397 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆221 · Updated last year