bigcode-project / bigcode-tokenizer
☆15 · Updated last year
Alternatives and similar repositories for bigcode-tokenizer
Users interested in bigcode-tokenizer are comparing it to the libraries listed below.
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated 2 months ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- minimal pytorch implementation of bm25 (with sparse tensors) ☆104 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆37 · Updated 3 weeks ago
- 🚀🤗 A collection of templates for Hugging Face Spaces ☆35 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆49 · Updated 5 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆62 · Updated 2 months ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆83 · Updated last week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Code for SaGe subword tokenizer (EACL 2023) ☆25 · Updated 8 months ago
- ☆57 · Updated 10 months ago
- 🤝 Trade any tensors over the network ☆30 · Updated last year
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- BPE modification that implements removing of the intermediate tokens during tokenizer training. ☆24 · Updated 8 months ago
- ☆19 · Updated 2 years ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- ☆37 · Updated 2 years ago
- Demonstration that finetuning RoPE model on larger sequences than the pre-trained model adapts the model context limit ☆63 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆66 · Updated 2 years ago
- Pre-train Static Word Embeddings ☆85 · Updated 2 months ago
- Developing tools to automatically analyze datasets ☆74 · Updated 9 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated last week
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- ☆23 · Updated 2 years ago