Narsil / fast_gpt2
☆155 · Updated 2 years ago
Alternatives and similar repositories for fast_gpt2
Users interested in fast_gpt2 are comparing it to the libraries listed below.
- ☆143 · Updated 2 years ago
- Understanding large language models ☆117 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Simple embedding -> text model trained on a small subset of Wikipedia sentences. ☆152 · Updated last year
- Drop-in replacement for OpenAI, but with open models. ☆152 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated 2 years ago
- ☆137 · Updated last year
- ☆92 · Updated last year
- Tiny inference-only implementation of LLaMA ☆93 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Unofficial Python bindings for the Rust llm library. 🐍❤️🦀 ☆75 · Updated last year
- [WIP] A 🔥 interface for running code in the cloud ☆85 · Updated 2 years ago
- Reimplementation of the task generation part of the Alpaca paper ☆119 · Updated 2 years ago
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆66 · Updated last year
- [Added T5 support to TRLX] A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆47 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- ☆129 · Updated last year
- Modified Stanford Alpaca trainer for training Replit's code model ☆41 · Updated 2 years ago
- Run GGML models with Kubernetes. ☆173 · Updated last year
- Adaptive human-in-the-loop evaluation of language and embedding models. ☆309 · Updated 2 years ago
- Smol but mighty language model ☆62 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- ☆198 · Updated last year
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆38 · Updated last year
- ☆26 · Updated 7 months ago
- SoTA Transformers with C backend for fast inference on your CPU. ☆309 · Updated last year
- ☆39 · Updated 2 years ago
- A miniature version of Modal ☆20 · Updated last year