multimodal-art-projection / MAP-NEO
☆960 · Updated 6 months ago
Alternatives and similar repositories for MAP-NEO
Users interested in MAP-NEO are comparing it to the repositories listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data …☆756 · Updated 5 months ago
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆651 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs and models, mainly for evaluation of LLMs …☆560 · Updated last week
- ☆737 · Updated 2 months ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆769 · Updated last year
- Train a 1B LLM on 1T tokens from scratch, by an individual ☆716 · Updated 3 months ago
- Large Reasoning Models ☆805 · Updated 8 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆981 · Updated 8 months ago
- Unified, efficient fine-tuning for RAG retrieval, including Embedding, ColBERT, and ReRanker models ☆1,018 · Updated last month
- LongBench v2 and LongBench (ACL 2025 & 2024) ☆944 · Updated 7 months ago
- ☆231 · Updated last year
- A multi-dimensional Chinese alignment benchmark for large language models (ACL 2024) ☆407 · Updated last year
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆504 · Updated 7 months ago
- ☆955 · Updated 7 months ago
- Yuan 2.0 Large Language Model ☆689 · Updated last year
- Chinese Mixtral Mixture-of-Experts LLMs (Chinese Mixtral MoE LLMs) ☆608 · Updated last year
- A live reading list for LLM data synthesis (updated to July 2025) ☆360 · Updated this week
- The official code for "Aurora: Activating Chinese Chat Capability for Mixtral-8x7B Sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- CMMLU: Measuring Massive Multitask Language Understanding in Chinese ☆779 · Updated 8 months ago
- O1 Replication Journey ☆1,998 · Updated 7 months ago
- ☆1,358 · Updated 9 months ago
- A curated collection of open-source SFT datasets, updated continuously ☆535 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆566 · Updated 8 months ago
- ☆351 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion ☆509 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models ☆339 · Updated 4 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,516 · Updated 2 months ago
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,340 · Updated 2 years ago
- A repository for individuals to experiment with and reproduce the LLM pre-training process ☆463 · Updated 3 months ago
- A user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. "面壁小钢炮" focuses on achi…☆275 · Updated last month