multimodal-art-projection / MAP-NEO
☆969 · Updated 9 months ago
Alternatives and similar repositories for MAP-NEO
Users interested in MAP-NEO are comparing it to the repositories listed below.
- Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B) ☆656 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆996 · Updated 11 months ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmark, demos, leaderboard, papers, docs and models, mainly for Evaluation on LLMs… ☆582 · Updated last week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data… ☆793 · Updated 8 months ago
- ☆751 · Updated 3 months ago
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project ☆757 · Updated 7 months ago
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆424 · Updated last month
- Large Reasoning Models ☆807 · Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,028 · Updated 10 months ago
- ☆235 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆773 · Updated last year
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆513 · Updated 11 months ago
- ☆966 · Updated 10 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆931 · Updated 9 months ago
- Chinese Mixtral mixture-of-experts large models (Chinese Mixtral MoE LLMs) ☆609 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆795 · Updated 11 months ago
- Unify Efficient Fine-tuning of RAG Retrieval, including Embedding, ColBERT, ReRanker ☆1,063 · Updated 4 months ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- O1 Replication Journey ☆2,002 · Updated 10 months ago
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,471 · Updated 2 years ago
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆346 · Updated last year
- A purer tokenizer with a higher compression ratio ☆486 · Updated last year
- ☆1,348 · Updated last year
- ☆551 · Updated 11 months ago
- ☆330 · Updated last year
- A live reading list for LLM data synthesis (updated to July 2025) ☆418 · Updated 3 months ago
- A curated collection of open-source SFT datasets, updated over time ☆562 · Updated 2 years ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models ☆446 · Updated last year
- ☆181 · Updated 2 years ago