Shawn-Guo-CN / Lossless_Text_Compression_with_Transformer
This repo demonstrates the concept of lossless text compression with Transformers as the encoder and decoder.
☆14 Updated 9 months ago
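The core idea can be illustrated with a minimal sketch (not this repository's actual code): lossless compression with a learned model works because the encoder and decoder share the same next-token probability model, so arithmetic coding of those predictions is exactly invertible. In the sketch below, a fixed toy distribution and the hypothetical names `model`, `encode`, and `decode` stand in for a Transformer's predictive distribution; exact rational arithmetic keeps the round trip lossless.

```python
# Minimal sketch of model-based lossless compression (assumed names, toy model).
# A Transformer would supply p(next token | context); here a fixed distribution
# stands in for it, but the encode/decode symmetry is the same.
from fractions import Fraction

ALPHABET = ["a", "b", "c", "<eos>"]  # toy vocabulary
PROBS = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]

def model(context):
    """Stand-in for a Transformer's next-token distribution (context ignored here)."""
    return dict(zip(ALPHABET, PROBS))

def encode(tokens):
    """Narrow an exact interval [low, high) by each token's probability slice."""
    low, high = Fraction(0), Fraction(1)
    for i, tok in enumerate(tokens):
        dist = model(tokens[:i])
        width = high - low
        cum = Fraction(0)
        for sym in ALPHABET:
            if sym == tok:
                high = low + width * (cum + dist[sym])
                low = low + width * cum
                break
            cum += dist[sym]
    return (low + high) / 2  # any rational inside the final interval identifies the message

def decode(code):
    """Replay the same model to recover the token sequence exactly."""
    tokens, low, high = [], Fraction(0), Fraction(1)
    while not tokens or tokens[-1] != "<eos>":
        dist = model(tokens)
        width = high - low
        cum = Fraction(0)
        for sym in ALPHABET:
            lo = low + width * cum
            hi = lo + width * dist[sym]
            if lo <= code < hi:
                tokens.append(sym)
                low, high = lo, hi
                break
            cum += dist[sym]
    return tokens

msg = list("abacab") + ["<eos>"]
assert decode(encode(msg)) == msg  # round trip is exact, i.e. lossless
```

In a real setting the toy `model` would be replaced by a Transformer conditioned on the decoded prefix, and the final interval would be emitted as a bit stream rather than a single `Fraction`.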
Alternatives and similar repositories for Lossless_Text_Compression_with_Transformer:
Users interested in Lossless_Text_Compression_with_Transformer are comparing it to the libraries listed below.
- ☆47 Updated 10 months ago
- Efficient retrieval head analysis with Triton flash attention that supports top-k probability ☆12 Updated 8 months ago
- Analyzing LLM Alignment via Token Distribution Shift ☆15 Updated last year
- Information on NLP PhD applications around the world ☆36 Updated 5 months ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆18 Updated 3 weeks ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆38 Updated 3 months ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆36 Updated last year
- The code and data for the paper JiuZhang3.0 ☆40 Updated 8 months ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆30 Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 Updated 7 months ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning" ☆63 Updated last year
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 Updated 6 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆69 Updated last year
- ☆28 Updated last month
- ☆14 Updated 3 months ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 Updated last year
- Run the tokenizer in parallel to achieve a substantial speedup ☆15 Updated 10 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆58 Updated 3 months ago
- ☆16 Updated last year
- Code for our EMNLP-2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 Updated last year
- ☆12 Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆53 Updated 6 months ago
- ☆16 Updated last year
- ☆14 Updated last year
- Explore what LLMs are really learning during SFT ☆28 Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆31 Updated 5 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 Updated 6 months ago