xverse-ai / XVERSE-MoE-A36B
XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc.
☆38 Updated last year
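For quick orientation, here is a minimal sketch of loading the model with Hugging Face `transformers`. The hub ID `xverse/XVERSE-MoE-A36B` and the `trust_remote_code=True` requirement are assumptions based on XVERSE's other releases, not confirmed by this listing:

```python
# Minimal sketch: run XVERSE-MoE-A36B for text generation.
# Assumption: the checkpoint is published on the Hugging Face Hub as
# "xverse/XVERSE-MoE-A36B" and ships custom modeling code, so
# trust_remote_code=True is needed (as with other XVERSE releases).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xverse/XVERSE-MoE-A36B"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # take bf16/fp16 from the checkpoint config
    device_map="auto",    # shard across available GPUs (requires accelerate)
    trust_remote_code=True,
)

prompt = "Introduce yourself in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that as a mixture-of-experts model, only the ~36B activated parameters are used per token, but the full expert set still has to fit in memory when loading this way.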
Alternatives and similar repositories for XVERSE-MoE-A36B
Users interested in XVERSE-MoE-A36B are comparing it to the libraries listed below
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP 2025] ☆33 Updated 5 months ago
- ☆67 Updated 10 months ago
- ☆21 Updated last year
- ☆100 Updated 5 months ago
- The simplest reproduction of R1-style results on a small model, illustrating the essential nature of O1-like models and DeepSeek R1: "Think is all you need." Experiments support the claim that, for strong reasoning ability, the content of the thinking process is the core of AGI/ASI. ☆45 Updated 11 months ago
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆135 Updated last year
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆57 Updated 2 months ago
- ☆19 Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [COLM 2024] ☆32 Updated last year
- The open-source code of MetaStone-S1 ☆105 Updated 6 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 Updated last year
- Code for the paper “Harnessing Webpage UIs for Text-Rich Visual Understanding” ☆53 Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆120 Updated 8 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆23 Updated 11 months ago
- Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quant, Unsloth. Also possible to train LoRA over… ☆229 Updated last week
- ☆194 Updated last month
- ☆96 Updated last year
- ☆18 Updated 9 months ago
- Beyond Real: Imaginary Extension of Rotary Position Embeddings for Long-Context LLMs ☆33 Updated last month
- Implementation of Mind Evolution (“Evolving Deeper LLM Thinking”) from DeepMind ☆59 Updated 8 months ago
- Fathom-DeepResearch: Unlocking Long Horizon Information Retrieval and Synthesis for SLMs ☆52 Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 Updated last year
- ☆88 Updated 8 months ago
- ☆72 Updated 2 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆58 Updated last year
- Multi-Layer Key-Value sharing experiments on Pythia models ☆34 Updated last year
- Official implementation of APB (ACL 2025 main, oral) and Spava ☆32 Updated this week
- XmodelLM ☆38 Updated last year
- Tencent Hunyuan 7B (Hunyuan-7B for short) is one of Tencent Hunyuan's dense large language models ☆71 Updated 5 months ago