youkaichao / fast_bpe_tokenizer
Fast BPE tokenizer: simple to understand, easy to use
☆25 · Updated last year
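For context on what the repository implements: BPE builds a vocabulary by repeatedly merging the most frequent adjacent symbol pair in a corpus. Below is a minimal sketch of that classic merge loop in Python; it illustrates the general algorithm and is not code from fast_bpe_tokenizer.

```python
from collections import Counter

def pair_counts(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def apply_merge(words, pair):
    """Replace every occurrence of `pair` with the concatenated symbol."""
    merged = Counter()
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] += freq
    return merged

def train_bpe(corpus, num_merges):
    # Start from single characters; keys are tuples of symbols, values are word counts.
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # greedy: merge the most frequent pair
        words = apply_merge(words, best)
        merges.append(best)
    return merges

print(train_bpe("low lower lowest low low", 3))
# e.g. [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```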
Alternatives and similar repositories for fast_bpe_tokenizer
Users interested in fast_bpe_tokenizer are comparing it to the libraries listed below.
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 4 months ago
- Longitudinal Evaluation of LLMs via Data Compression ☆32 · Updated 11 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- Low-bit optimizers for PyTorch ☆128 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆55 · Updated 3 weeks ago
- ☆106 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆84 · Updated 6 months ago
- ☆16 · Updated last year
- Experiments on speculative sampling with Llama models ☆126 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆110 · Updated 2 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 11 months ago
- Evaluating LLMs with Dynamic Data ☆87 · Updated 3 weeks ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention… ☆101 · Updated 11 months ago
- ☆16 · Updated last year
- ☆78 · Updated 4 months ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆37 · Updated last year
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆76 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated last year
- Unofficial implementation of AlpaGasus ☆91 · Updated last year
- FuseAI Project ☆86 · Updated 3 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆142 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- ☆147 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- Official implementation of paper "Autonomous Data Selection with Language Models for Mathematical Texts" (As Huggingface Daily Papers: ht… ☆81 · Updated 6 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year