zphang / transformers
Code and models for BERT on STILTs
☆52 · Updated 2 years ago
Alternatives and similar repositories for transformers
Users interested in transformers are comparing it to the libraries listed below.
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Pre-training code for Amber 7B LLM ☆167 · Updated last year
- a Fine-tuned LLaMA that is Good at Arithmetic Tasks ☆178 · Updated last year
- Due to restriction of LLaMA, we try to reimplement BLOOM-LoRA (much less restricted BLOOM license here https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- Official repository for LongChat and LongEval ☆527 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆538 · Updated 11 months ago
- Inspired by Google's C4, a series of colossal clean data cleaning scripts focused on CommonCrawl data processing, including Chinese… ☆129 · Updated 2 years ago
- Implementation of the paper LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens ☆149 · Updated last year
- ☆271 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Context (ICLR 2024) ☆206 · Updated last year
- ☆96 · Updated 2 years ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆229 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆398 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- Fast Inference Solutions for BLOOM ☆564 · Updated 10 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆470 · Updated last year
- ☆104 · Updated 2 years ago
- Reverse Instructions to generate instruction-tuning data with corpus examples ☆215 · Updated last year
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- Open Source WizardCoder Dataset ☆160 · Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆314 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- ☆459 · Updated last year