LowinLi / transformers-stream-generator
A text-generation method that returns a generator, streaming out each token in real time during inference, built on Hugging Face Transformers.
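The core idea is to wrap the decode loop in a Python generator so that each newly sampled token is yielded to the caller immediately, instead of returning the full sequence only after generation finishes. The sketch below illustrates that pattern in isolation; `stream_generate` and the stub `toy_next_token` are illustrative stand-ins, not this library's actual API.

```python
from typing import Callable, Iterator, List

def stream_generate(
    input_ids: List[int],
    next_token: Callable[[List[int]], int],  # maps context -> next token id
    eos_token_id: int,
    max_new_tokens: int = 20,
) -> Iterator[int]:
    """Yield token ids one at a time instead of returning them as a batch."""
    context = list(input_ids)
    for _ in range(max_new_tokens):
        tok = next_token(context)
        context.append(tok)
        yield tok  # the caller sees this token immediately
        if tok == eos_token_id:
            break

# Toy "model" standing in for a real forward pass: counts up, then emits EOS.
def toy_next_token(context: List[int]) -> int:
    return context[-1] + 1 if context[-1] < 4 else 99  # 99 = EOS

tokens = list(stream_generate([1], toy_next_token, eos_token_id=99))
print(tokens)  # → [2, 3, 4, 99]
```

In the real library the generator is hooked into the model's `generate` call; newer versions of Hugging Face Transformers offer built-in streaming for the same purpose via `TextStreamer` / `TextIteratorStreamer`.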
☆97 · Updated last year
Alternatives and similar repositories for transformers-stream-generator
Users interested in transformers-stream-generator are comparing it to the libraries listed below:
- A unified tokenization tool for images, Chinese, and English. ☆151 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism. ☆224 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation". ☆73 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2. ☆69 · Updated 2 years ago
- Imitate OpenAI with local models. ☆89 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆137 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆132 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus. ☆313 · Updated 2 years ago
- Mixture-of-Experts (MoE) language model. ☆189 · Updated 11 months ago
- A fine-tuned LLaMA that is good at arithmetic tasks. ☆177 · Updated last year
- The complete training code of an open-source, high-performance LLaMA-style model, covering the full pipeline from pre-training to RLHF. ☆67 · Updated 2 years ago
- LongQLoRA: extend the context length of LLMs efficiently. ☆166 · Updated last year
- XVERSE-65B: a multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- A more efficient GLM implementation! ☆54 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆98 · Updated last year