multimodal-art-projection / MAP-NEO
☆964 · Updated 8 months ago
Alternatives and similar repositories for MAP-NEO
Users that are interested in MAP-NEO are comparing it to the libraries listed below
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆654 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data …☆782 · Updated 7 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs…☆572 · Updated last month
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆995 · Updated 10 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆414 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆771 · Updated last year
- ☆748 · Updated last month
- LongBench v2 and LongBench (ACL '25 & '24) ☆997 · Updated 9 months ago
- Train a 1B LLM with 1T tokens from scratch as a personal project ☆740 · Updated 5 months ago
- Large Reasoning Models ☆805 · Updated 10 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆923 · Updated 8 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆790 · Updated 10 months ago
- ☆963 · Updated 9 months ago
- Yuan 2.0 Large Language Model ☆688 · Updated last year
- Unify Efficient Fine-tuning of RAG Retrieval, including Embedding, ColBERT, ReRanker. ☆1,047 · Updated 3 months ago
- Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs) ☆610 · Updated last year
- A live reading list for LLM data synthesis (updated to July 2025). ☆387 · Updated last month
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆510 · Updated last year
- ☆548 · Updated 9 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆688 · Updated 9 months ago
- ☆234 · Updated last year
- WanJuan 1.0 multimodal corpus ☆567 · Updated 2 years ago
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆507 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆571 · Updated 10 months ago
- ☆354 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,522 · Updated 4 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated last year
- SOTA open-source math LLM ☆333 · Updated last year
- A repository used by individuals to experiment with and reproduce the pre-training process of an LLM. ☆474 · Updated 5 months ago