Gleghorn-Lab / Mixture-of-Experts-Sentence-Similarity
☆16 · Updated 10 months ago
Alternatives and similar repositories for Mixture-of-Experts-Sentence-Similarity
Users who are interested in Mixture-of-Experts-Sentence-Similarity are comparing it to the repositories listed below.
- Pre-trained Language Model for Scientific Text ☆45 · Updated last year
- {DeepL, Google, WMT-Best, davinci-003, turbo, gpt-4} × {En-De, En-Cs, En-Ru, En-Zh, De-Fr, En-Ja, Uk-En, Uk-Cs, En-Hr, En-Ha, En-Is} ☆14 · Updated 2 years ago
- Efficient retrieval head analysis with Triton flash attention that supports top-k probability ☆13 · Updated last year
- ☆28 · Updated 3 months ago
- Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024] ☆98 · Updated 2 years ago
- [ACL 2024] ProtLLM: An Interleaved Protein-Language LLM with Protein-as-Word Pre-Training ☆50 · Updated last year
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- Download, parse, and filter PubMed data, ready for The-Pile ☆23 · Updated 4 years ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆56 · Updated 11 months ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆18 · Updated last year
- Data collator for UL2 and U-PaLM ☆29 · Updated 2 years ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- Scaling Sparse Fine-Tuning to Large Language Models ☆18 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆29 · Updated last year
- ☆20 · Updated last year
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice routing… ☆28 · Updated 8 months ago
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆48 · Updated 6 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- BioCoder: A Benchmark for Bioinformatics Code Generation with Large Language Models https://arxiv.org/abs/2308.16458 ☆53 · Updated 5 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- Implementation of Denoising Diffusion for protein design, but using the new Equiformer (successor to SE3 Transformers) with some addition… ☆57 · Updated 3 years ago
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning https://arxiv.org/abs/2210.16536 ☆40 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024) ☆86 · Updated last year
- ☆52 · Updated last year
- ACL 2022 (Findings): A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings ☆18 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago