GPT-Alternatives / gpt_alternatives
☆74 · Updated 2 years ago
Alternatives and similar repositories for gpt_alternatives
Users interested in gpt_alternatives are comparing it to the libraries listed below.
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆209 · Updated 2 years ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆38 · Updated last year
- AI Alignment: A Comprehensive Survey ☆136 · Updated 2 years ago
- SOTA Math Opensource LLM ☆333 · Updated 2 years ago
- A Fine-tuned LLaMA that is Good at Arithmetic Tasks ☆178 · Updated 2 years ago
- ☆125 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆40 · Updated 2 years ago
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆81 · Updated 2 years ago
- ☆104 · Updated last year
- ☆147 · Updated last year
- ☆83 · Updated last year
- An open-source chatbot built with ExpertPrompting which achieves 96% of ChatGPT's capability ☆300 · Updated 2 years ago
- Reformatted Alignment ☆111 · Updated last year
- The official GitHub page for the survey paper "A Survey on Data Augmentation in Large Model Era" ☆132 · Updated last year
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆91 · Updated 2 years ago
- ☆36 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆194 · Updated last year
- Code and Data for Our NeurIPS 2024 paper "AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback" ☆34 · Updated last year
- Counting-Stars (★) ☆83 · Updated 2 months ago
- Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models" ☆103 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024) ☆86 · Updated last year
- ☆162 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated 2 years ago
- ☆76 · Updated last year
- ☆41 · Updated last year