chainyo / transformers-pipeline-onnx
How to export Hugging Face's 🤗 NLP Transformers models to ONNX and use the exported model with the appropriate Transformers pipeline.
☆24 · Updated 3 years ago
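The export-and-pipeline flow the repository describes can be sketched roughly as follows. This is a minimal illustration using the `optimum` ONNX Runtime integration rather than the repository's own code; the checkpoint name and task are placeholder assumptions.

```python
# Minimal sketch (assumptions noted): export a Hugging Face model to ONNX and
# run it through the standard Transformers pipeline API. This illustrates the
# general idea, not this repository's exact implementation.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint

# export=True converts the PyTorch weights to ONNX and loads them with ONNX Runtime
onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ONNX-backed model drops into the usual pipeline call
classifier = pipeline("text-classification", model=onnx_model, tokenizer=tokenizer)
print(classifier("Exporting to ONNX keeps the pipeline interface unchanged."))
```

Because the exported model keeps the standard model interface, only the loading step changes; downstream pipeline code can stay as it is.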
Alternatives and similar repositories for transformers-pipeline-onnx
Users interested in transformers-pipeline-onnx are comparing it to the libraries listed below.
- A simple implementation of how to leverage a language model for prompt-based learning ☆44 · Updated 3 years ago
- ☆52 · Updated 4 years ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆47 · Updated last year
- Instruction fine-tuning of the BLOOM model ☆24 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- (NBCE) Naive Bayes-based Context Extension on ChatGLM-6B ☆14 · Updated 2 years ago
- TensorRT ☆11 · Updated 4 years ago
- Chinese instruction datasets for fine-tuning LLMs ☆26 · Updated 2 years ago
- Tool for converting LLMs from uni-directional to bi-directional by removing the causal mask for tasks like classification and sentence embedd… ☆59 · Updated 5 months ago
- Let ChatGPT (Large Language Models) Serve As Data Annotator and Zero-shot/few-shot Information Extractor ☆32 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- OpenConcepts, a Chinese concept knowledge graph ☆45 · Updated 3 years ago
- Sentence-Transformers information retrieval example for Chinese ☆29 · Updated last year
- sentence-transformers to ONNX, making SBERT model inference faster ☆164 · Updated 3 years ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Chinese AllenNLP tutorial (continuously updated) ☆14 · Updated 6 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆83 · Updated 6 months ago
- SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples ☆75 · Updated 2 years ago
- Dual Cross Encoder for Dense Retrieval ☆16 · Updated 2 years ago
- ☆35 · Updated last month
- Winning solution for the iFLYTEK low-resource multilingual text translation challenge ☆28 · Updated last year
- The multilingual variant of GLM, a general language model trained with an autoregressive blank infilling objective ☆62 · Updated 2 years ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- huggingface ChineseBert Tokenizer ☆15 · Updated 3 years ago
- Long-text similarity model ☆21 · Updated last year
- Benchmark of KgCLUE with different models and methods ☆27 · Updated 3 years ago
- Official code for TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations (EMNLP 2021) ☆42 · Updated 10 months ago
- Review of the paper "A Frustratingly Easy Approach for Joint Entity and Relation Extraction" ☆31 · Updated 4 years ago
- GLM (General Language Model) ☆24 · Updated 3 years ago
- Using TensorRT and Triton Server to build a BERT model as a service ☆13 · Updated 3 years ago