LowinLi / transformers-stream-generator
A text generation method that returns a generator, streaming out each token in real time during inference, built on Hugging Face Transformers.
☆96 · Updated last year
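The core idea above, yielding each token as the sampling loop produces it rather than returning the finished sequence, can be sketched in plain Python. This is a minimal illustration of the pattern only, not the library's actual API (which hooks into Transformers' `generate`); the toy `toy_next_token` model and its tiny vocabulary are assumptions made for the example.

```python
from typing import Iterator, List

def toy_next_token(context: List[str]) -> str:
    # Stand-in for a real forward pass + sampling step (assumption for illustration).
    vocab = ["Hello", ",", " world", "!", "<eos>"]
    return vocab[min(len(context), len(vocab) - 1)]

def stream_generate(prompt: List[str], max_new_tokens: int = 16) -> Iterator[str]:
    """Yield tokens one at a time instead of returning the full string at the end."""
    context = list(prompt)
    for _ in range(max_new_tokens):
        token = toy_next_token(context)
        if token == "<eos>":
            break
        context.append(token)
        yield token  # the caller sees each token in real time

# Usage: consume tokens as they are produced, e.g. to print incrementally.
text = "".join(stream_generate([]))  # → "Hello, world!"
```

Because `stream_generate` is a generator, the caller can render or forward each token immediately (e.g. to a chat UI), which is the behavior the library adds on top of Transformers' batch-style generation.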
Alternatives and similar repositories for transformers-stream-generator
Users interested in transformers-stream-generator are comparing it to the libraries listed below.
- A unified tokenization tool for images, Chinese, and English. ☆153 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism. ☆224 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation. ☆73 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆132 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus. ☆318 · Updated 2 years ago
- Official repository for LongChat and LongEval. ☆533 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2. ☆69 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper. ☆110 · Updated 2 years ago
- Mixture-of-Experts (MoE) language model. ☆194 · Updated last year
- Imitate OpenAI with local models. ☆89 · Updated last year
- Inspired by Google's C4, a series of colossal clean data-cleaning scripts focused on CommonCrawl data processing. Including Chinese… ☆135 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆140 · Updated last year
- Open efforts to implement ChatGPT-like models and beyond. ☆107 · Updated last year
- Model compression for big models. ☆167 · Updated 2 years ago
- LongQLoRA: Extend Context Length of LLMs Efficiently. ☆168 · Updated 2 years ago
- Evaluating LLMs' multi-round chat capability by assessing conversations generated by two LLM instances. ☆160 · Updated 7 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year
- Multi-language enhanced LLaMA. ☆303 · Updated 2 years ago
- Rectified Rotary Position Embeddings. ☆386 · Updated last year
- NTK-scaled version of the ALiBi position encoding in the Transformer. ☆69 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens". ☆151 · Updated last year
- An experiment on dynamic NTK-scaling RoPE. ☆64 · Updated 2 years ago
- A fine-tuned LLaMA that is good at arithmetic tasks. ☆178 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- Naive Bayes-based Context Extension. ☆326 · Updated last year
- Retrieves Parquet files from Hugging Face, then identifies and quantifies junky data, duplication, contamination, and biased content in datase… ☆53 · Updated 2 years ago
- Aims to provide an intuitive, concrete, and standardized evaluation of today's mainstream LLMs. ☆95 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year