This repository aims to use LLaMA to reproduce and enhance Stanford Alpaca.
☆98 · Apr 5, 2023 · Updated 3 years ago
Alternatives and similar repositories for efficient_alpaca
Users interested in efficient_alpaca are comparing it to the repositories listed below.
- The framework to prune LLMs to any size and any config. ☆96 · Mar 1, 2024 · Updated 2 years ago
- Evaluating the faithfulness of long-context language models ☆30 · Oct 21, 2024 · Updated last year
- This is the codebase for pre-training, compressing, extending, and distilling LLMs with Megatron-LM. ☆12 · Mar 11, 2024 · Updated 2 years ago
- ☆95 · Oct 8, 2023 · Updated 2 years ago
- Code and data for "Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change" (EMNLP 2022) ☆18 · Dec 8, 2022 · Updated 3 years ago
- CMD: a framework for Context-aware Model self-Detoxification (EMNLP 2024 Long Paper) ☆17 · Feb 10, 2025 · Updated last year
- Diffusion Model Improvement Method ☆35 · Sep 4, 2023 · Updated 2 years ago
- [COLING 2025] NesTools: A Dataset for Evaluating Nested Tool Learning Abilities of Large Language Models ☆18 · Jan 18, 2025 · Updated last year
- Code and data for "Timo: Towards Better Temporal Reasoning for Language Models" (COLM 2024) ☆26 · Oct 23, 2024 · Updated last year
- ☆187 · Jul 22, 2024 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 6 months ago
- ☆18 · Mar 10, 2023 · Updated 3 years ago
- Official Implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Dec 13, 2024 · Updated last year
- ChatGLM-6B-Slim: ChatGLM-6B with its 20K image tokens pruned away; identical performance with a smaller GPU memory footprint. ☆124 · Apr 5, 2023 · Updated 3 years ago
- ☆29 · May 24, 2025 · Updated 11 months ago
- Calculate the probability of a paper being accepted by EMNLP 2023 based on the score distribution of ACL 2023. ☆14 · Sep 7, 2023 · Updated 2 years ago
- ✒️ ChatGPT as a writing partner. ☆14 · Mar 6, 2023 · Updated 3 years ago
- Using conversational games to evaluate powerful LLMs ☆18 · Sep 3, 2023 · Updated 2 years ago
- 🪞 A powerful toolkit for almost all the Information Extraction tasks. ☆124 · Apr 21, 2025 · Updated last year
- The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Jan 25, 2025 · Updated last year
- Official Implementation of "Probing Language Models for Pre-training Data Detection" ☆20 · Dec 4, 2024 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Oct 27, 2024 · Updated last year
- Graduate thesis template for Soochow University - Soochow University Thesis TeX Template ☆20 · Feb 27, 2026 · Updated 2 months ago
- LLaMA Tuning with the Stanford Alpaca Dataset using DeepSpeed and Transformers ☆49 · Mar 15, 2023 · Updated 3 years ago
- ☆20 · May 14, 2025 · Updated 11 months ago
- ☆19 · Oct 13, 2025 · Updated 6 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Sep 29, 2024 · Updated last year
- ☆18 · Sep 26, 2020 · Updated 5 years ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" ☆20 · Jul 23, 2021 · Updated 4 years ago
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆49 · Mar 1, 2025 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,606 · Aug 30, 2023 · Updated 2 years ago
- ☆38 · Mar 6, 2026 · Updated last month
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆84 · Jan 24, 2024 · Updated 2 years ago
- ☆14 · Apr 16, 2024 · Updated 2 years ago
- Code for Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model, IJCAI 2020 ☆12 · Nov 26, 2020 · Updated 5 years ago
- Statistical discontinuous constituent parsing ☆11 · Feb 15, 2018 · Updated 8 years ago
- ☆96 · Dec 6, 2024 · Updated last year
- ☆12 · Nov 5, 2024 · Updated last year
- ☆152 · Jan 31, 2024 · Updated 2 years ago