padeoe / hf-mirror-site
A Hugging Face mirror site.
☆275 · Updated last year
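In practice, a mirror like this is used by pointing the official Hugging Face tooling at the mirror's endpoint instead of huggingface.co. A minimal sketch, assuming the hosted instance of this project is served at https://hf-mirror.com and that the `huggingface_hub` client honors the `HF_ENDPOINT` environment variable (the repo id `gpt2` and target directory are purely illustrative):

```python
import os

# Redirect huggingface_hub to the mirror. HF_ENDPOINT must be set before
# huggingface_hub is imported, because the endpoint is read at import time.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # assumed mirror URL

from huggingface_hub import snapshot_download

# Fetch a full model snapshot through the mirror; repo id and local
# directory are illustrative placeholders.
path = snapshot_download(repo_id="gpt2", local_dir="./gpt2")
print(f"Downloaded to {path}")
```

The same redirection should also apply to `huggingface-cli download`, since the CLI goes through the same client library.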
Alternatives and similar repositories for hf-mirror-site:
Users interested in hf-mirror-site are comparing it to the repositories listed below.
- huggingface mirror download ☆567 · Updated 2 weeks ago
- hf-mirror-cli: uses a China-based mirror, works out of the box with no configuration, for fast downloads of Hugging Face models ☆129 · Updated last month
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆726 · Updated this week
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models ☆440 · Updated 5 months ago
- ☆310 · Updated 3 months ago
- ☆159 · Updated this week
- ☆349 · Updated 8 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆371 · Updated 7 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆240 · Updated 3 weeks ago
- A repository of Chinese post-trained Phi3 models ☆320 · Updated 4 months ago
- 360zhinao ☆291 · Updated 2 months ago
- Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs) ☆603 · Updated 11 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆750 · Updated 3 months ago
- Z-Bench 1.0 by 真格基金 (ZhenFund): a layperson's Chinese test set for large language models. Z-Bench is an LLM prompt dataset for non-technical users, developed by an enthusiastic AI-focused team… ☆491 · Updated last year
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆648 · Updated 7 months ago
- GAOKAO-Bench is an evaluation framework that uses GAOKAO (Chinese college entrance exam) questions as a dataset to evaluate large language models ☆625 · Updated 2 months ago
- A community-curated Chinese test question bank for large language models ☆76 · Updated last year
- Alpaca Chinese Dataset: a Chinese instruction fine-tuning dataset ☆193 · Updated 5 months ago
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch, with support for plugging into LangChain to load a local knowledge base for retrieval-augmented generation (RAG) ☆540 · Updated 8 months ago
- FlagEval is an evaluation toolkit for large AI foundation models ☆328 · Updated 8 months ago
- [EMNLP'24] CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models ☆458 · Updated 2 months ago
- ☆103 · Updated 3 months ago
- ☆220 · Updated last year
- A purer tokenizer with a higher compression ratio ☆472 · Updated 4 months ago
- A vLLM inference and deployment demo for Qwen (通义千问) ☆554 · Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆260 · Updated 10 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated 3 months ago
- Mixture-of-Experts (MoE) Language Model ☆185 · Updated 6 months ago
- Llama3-Chinese is a large model built on Meta-Llama-3-8B as the base, trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples ☆294 · Updated 11 months ago
- GLM Series Edge Models ☆131 · Updated last month