bojone / NBCE
Naive Bayes-based Context Extension
☆327 · Updated last year
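NBCE extends a model's usable context by feeding each context chunk to the model independently and combining the per-context next-token distributions under a Naive Bayes (conditional independence) assumption. Below is a minimal NumPy sketch of that combination; the function name `nbce_logits`, the min-entropy pooling choice, and the default `beta` are illustrative assumptions, not the repository's exact API.

```python
import numpy as np

def nbce_logits(ctx_logits, uncond_logits, beta=0.25):
    """Naive Bayes combination of next-token logits from several
    independent contexts (illustrative sketch of the NBCE idea).

    ctx_logits:    (n_contexts, vocab) raw logits, one row per context
    uncond_logits: (vocab,) raw logits with no context (the prior)
    beta:          interpolation weight; hypothetical default
    """
    # Normalize each row to log-probabilities.
    logp = ctx_logits - np.logaddexp.reduce(ctx_logits, axis=-1, keepdims=True)
    logp0 = uncond_logits - np.logaddexp.reduce(uncond_logits)
    # Pool by picking the most confident context (lowest entropy).
    entropy = -(np.exp(logp) * logp).sum(-1)
    pooled = logp[entropy.argmin()]
    # Naive Bayes combination: (beta + 1) * pooled - beta * prior.
    return (beta + 1) * pooled - beta * logp0
```

At each decoding step the combined logits replace the model's own, so a context that is confident about the next token dominates, while the unconditional prior is down-weighted.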
Alternatives and similar repositories for NBCE
Users interested in NBCE are comparing it to the repositories listed below.
- ☆459 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆418 · Updated last year
- ☆282 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆284 · Updated 2 years ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆117 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- ☆129 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- ☆184 · Updated 2 years ago
- Measuring Massive Multitask Chinese Understanding ☆89 · Updated last year
- Chinese instruction tuning datasets ☆141 · Updated last year
- Analysis of the Chinese cognitive capabilities of language models ☆235 · Updated 2 years ago
- ☆147 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆137 · Updated 9 months ago
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆266 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆587 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆415 · Updated 7 months ago
- ☆313 · Updated 2 years ago
- ☆321 · Updated last year
- ☆147 · Updated last year
- ☆99 · Updated 2 years ago
- A Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark ☆103 · Updated 2 years ago
- ☆334 · Updated last year
- ☆70 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- Implementation of Chinese ChatGPT ☆288 · Updated 2 years ago
- TencentLLMEval is a comprehensive and extensive benchmark for evaluation of large models that includes task trees, standards, … ☆41 · Updated 10 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆67 · Updated 2 years ago