LowinLi / transformers-stream-generator
A text-generation method that returns a generator, streaming out each token in real time during inference; built on Hugging Face Transformers.
☆95 · Updated last year
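The idea the repository implements can be illustrated with a plain Python generator: instead of waiting for the full completion, each token is yielded to the caller as soon as it is produced. The sketch below is conceptual only; `stream_tokens` is a hypothetical stand-in and not the library's actual API.

```python
# Conceptual sketch of token-by-token streaming via a generator.
# `stream_tokens` is a hypothetical stand-in for a model's decode loop,
# not the transformers-stream-generator API itself.
from typing import Iterator

def stream_tokens(prompt: str) -> Iterator[str]:
    # A real implementation would run one model decoding step per
    # iteration and yield each newly sampled token immediately.
    for token in prompt.split():  # stand-in for per-step generation
        yield token

# The caller consumes tokens incrementally instead of blocking
# until the whole sequence has been generated.
for tok in stream_tokens("streaming tokens in real time"):
    print(tok, end=" ", flush=True)
print()
```

Consuming the generator in a loop is what makes real-time display possible: the UI can render each token the moment it is yielded.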
Alternatives and similar repositories for transformers-stream-generator
Users that are interested in transformers-stream-generator are comparing it to the libraries listed below
- A unified tokenization tool for images, Chinese, and English. ☆152 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 transformers and 🚀 DeepSpeed pipeline parallelism. ☆223 · Updated last year
- Fast encoding detection and conversion for large numbers of text files, to assist data cleaning for the MNBVC corpus project. ☆61 · Updated 8 months ago
- Code for Scaling Laws of RoPE-based Extrapolation. ☆73 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆131 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2. ☆69 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆139 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus. ☆313 · Updated 2 years ago
- Official repository for LongChat and LongEval. ☆521 · Updated last year
- A fine-tuned LLaMA that is good at arithmetic tasks. ☆178 · Updated last year
- Imitate OpenAI with local models. ☆87 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆136 · Updated 7 months ago
- Another ChatGLM2 implementation for GPTQ quantization. ☆54 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper. ☆110 · Updated 2 years ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models. ☆40 · Updated last year
- Complete training code for an open-source high-performance Llama model, covering the full process from pre-training to RLHF. ☆66 · Updated 2 years ago
- Text deduplication. ☆74 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆97 · Updated last year
- Inspired by Google C4, a series of colossal clean data-cleaning scripts focused on CommonCrawl data processing. Including Chinese… ☆127 · Updated 2 years ago
- ☆172 · Updated 2 years ago
- NTK-scaled version of ALiBi position encoding in Transformer. ☆68 · Updated last year
- Code for the Piccolo embedding model from SenseTime. ☆131 · Updated last year
- Mixture-of-Experts (MoE) Language Model. ☆189 · Updated 10 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently. ☆166 · Updated last year
- Model compression for big models. ☆163 · Updated 2 years ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs. ☆250 · Updated 7 months ago
- Rectified Rotary Position Embeddings. ☆374 · Updated last year
- Chinese large language model base generated through incremental pre-training on Chinese datasets. ☆236 · Updated 2 years ago
- Open efforts to implement ChatGPT-like models and beyond. ☆108 · Updated 11 months ago
- Naive Bayes-based Context Extension. ☆326 · Updated 7 months ago