xverse-ai / XVERSE-MoE-A36B
XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc.
☆38 · Updated last year
Alternatives and similar repositories for XVERSE-MoE-A36B
Users interested in XVERSE-MoE-A36B are comparing it to the libraries listed below.
- The simplest reproduction of R1-style results on a small model, illustrating the essential nature of o1-class models and DeepSeek R1: "Think is all you need." Experiments support the claim that, for strong reasoning ability, the content of the "think" process is the core of AGI/ASI. ☆44 · Updated 7 months ago
- ☆67 · Updated 5 months ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆126 · Updated 10 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper. ☆33 · Updated last year
- Fast LLM training codebase with dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆172 · Updated this week
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆59 · Updated 10 months ago
- The open-source code of MetaStone-S1. ☆108 · Updated last month
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆35 · Updated last year
- ☆20 · Updated last year
- Official Implementation of APB (ACL 2025 main, Oral) ☆31 · Updated 6 months ago
- Code for the paper "Test-Time Reinforcement Learning for GUI Grounding via Region Consistency" ☆42 · Updated last month
- ☆34 · Updated 7 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆22 · Updated 7 months ago
- ☆45 · Updated this week
- ☆18 · Updated 4 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- ☆95 · Updated 9 months ago
- GLM Series Edge Models ☆149 · Updated 3 months ago
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆31 · Updated 2 weeks ago
- XmodelLM ☆39 · Updated 9 months ago
- ☆87 · Updated 3 months ago
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted by NeurIPS 2024. ☆57 · Updated 10 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆61 · Updated 6 months ago
- ☆97 · Updated last month
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆26 · Updated this week
- ☆14 · Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆53 · Updated 9 months ago