xverse-ai / XVERSE-MoE-A36B
XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc.
☆37 · Updated 4 months ago
Alternatives and similar repositories for XVERSE-MoE-A36B:
Users interested in XVERSE-MoE-A36B are comparing it to the repositories listed below.
- Representing Rule-based Chatbots with Transformers☆19 · Updated 6 months ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler)☆36 · Updated last year
- ☆27 · Updated last month
- ☆28 · Updated last week
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆36 · Updated 8 months ago
- Code for paper: "Executing Arithmetic: Fine-Tuning Large Language Models as Turing Machines"☆11 · Updated 3 months ago
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model☆23 · Updated last year
- ☆27 · Updated 5 months ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2☆110 · Updated 2 months ago
- An open-source LLM based on an MoE structure.☆57 · Updated 6 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs☆40 · Updated 7 months ago
- Official repository for ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters"☆17 · Updated 3 weeks ago
- A Simple MLLM Surpassed QwenVL-Max with OpenSource Data Only in 14B LLM.☆36 · Updated 4 months ago
- ☆60 · Updated 3 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs☆72 · Updated 3 months ago
- ☆20 · Updated 7 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video…☆30 · Updated 7 months ago
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models☆36 · Updated 2 months ago
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models☆28 · Updated 10 months ago
- FuseAI Project☆80 · Updated this week
- ☆36 · Updated 4 months ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024☆52 · Updated 2 months ago
- ☆33 · Updated 6 months ago
- Code Implementation, Evaluations, Documentation, Links and Resources for Min P paper☆20 · Updated 3 weeks ago
- Official Repository for Paper "BaichuanSEED: Sharing the Potential of ExtensivE Data Collection and Deduplication by Introducing a Compet…☆18 · Updated 5 months ago
- OLA-VLM: Elevating Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024☆47 · Updated this week
- ☆35 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆128 · Updated 7 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper☆29 · Updated 8 months ago