padeoe / hf-mirror-site
A Hugging Face mirror site.
☆289 · Updated last year
Alternatives and similar repositories for hf-mirror-site
Users interested in hf-mirror-site are comparing it to the libraries listed below:
- Hugging Face mirror download ☆580 · Updated 2 months ago
- hf-mirror-cli: uses mirrors within China, works out of the box with no configuration, for fast downloads of models from Hugging Face ☆136 · Updated 3 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆441 · Updated 7 months ago
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆1,076 · Updated this week
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models ☆464 · Updated 4 months ago
- Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs) ☆606 · Updated last year
- ☆310 · Updated 5 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆765 · Updated 6 months ago
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆650 · Updated 9 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO questions as a dataset to evaluate large language models. ☆655 · Updated 5 months ago
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆262 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models. ☆337 · Updated last month
- ☆107 · Updated 5 months ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,738 · Updated last year
- ☆231 · Updated 3 months ago
- Yuan 2.0 Large Language Model ☆685 · Updated 10 months ago
- ☆166 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆254 · Updated last week
- ☆349 · Updated 10 months ago
- Phi-3 Chinese post-training model repository ☆321 · Updated 6 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 8 months ago
- WanJuan 1.0 multimodal corpus ☆561 · Updated last year
- Chinese test question bank for large models (community version) ☆83 · Updated 2 years ago
- Llama3-Chinese is a large model built on Meta-Llama-3-8B, trained with DORA + LORA+ methods on 500k high-quality Chinese multi-turn SFT samples + 100k English multi-turn SFT samples + 2,000 single-turn self-cognition samples. ☆295 · Updated last year
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA) ☆447 · Updated last year
- 360zhinao ☆289 · Updated 3 weeks ago
- ☆224 · Updated last year
- Deploy your own OpenAI API 🤩, based on Flask and Transformers (using the Baichuan2-13B-Chat-4bits model, runnable on a single Tesla T4 GPU); implements the OpenAI Chat, Models, and Completions endpoints, including streaming res… ☆93 · Updated last year
- Huozi general-purpose large language model ☆388 · Updated 8 months ago
- A purer tokenizer with a higher compression ratio ☆481 · Updated 6 months ago