multimodal-art-projection / MAP-NEO
☆977 · Updated 11 months ago
Alternatives and similar repositories for MAP-NEO
Users interested in MAP-NEO are comparing it to the repositories listed below.
- A Chinese Mixtral-8x7B model (Chinese-Mixtral-8x7B) ☆655 · Updated last year
- ☆761 · Updated last month
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆825 · Updated 10 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,004 · Updated last year
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs… ☆606 · Updated 2 months ago
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆773 · Updated 2 years ago
- Unified efficient fine-tuning of RAG retrieval, including embedding, ColBERT, and reranker models ☆1,080 · Updated 6 months ago
- Large Reasoning Models ☆807 · Updated last year
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆786 · Updated 9 months ago
- LongBench v2 and LongBench (ACL 2025 & 2024) ☆1,078 · Updated last year
- ☆235 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆801 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆421 · Updated 3 months ago
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆517 · Updated last year
- Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs) ☆609 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆944 · Updated 11 months ago
- ☆971 · Updated last year
- O1 Replication Journey ☆2,001 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,533 · Updated 8 months ago
- GAOKAO-Bench is an evaluation framework that utilizes GAOKAO questions as a dataset to evaluate large language models. ☆701 · Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- ☆552 · Updated last year
- Efficient AI Inference & Serving ☆480 · Updated 2 years ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆417 · Updated last year
- Accelerate inference without tears ☆372 · Updated last week
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆304 · Updated last year
- ☆362 · Updated last year
- ☆1,345 · Updated last year