jason9693 / FROZEN
☆14 · Updated 3 years ago
Alternatives and similar repositories for FROZEN
Users interested in FROZEN are comparing it to the libraries listed below.
- ☆13 · Updated 3 years ago
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks", ACL 2023 Findings · ☆16 · Updated 2 years ago
- The official code repository for MetricMT, a reward optimization method for NMT with learned metrics · ☆25 · Updated 4 years ago
- Official code repository of the paper "Learning Associative Inference Using Fast Weight Memory" by Schlag et al. · ☆28 · Updated 4 years ago
- ☆29 · Updated 3 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) · ☆33 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch · ☆76 · Updated 2 years ago
- NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-based Simulation (ACL-IJCNLP 2021) · ☆36 · Updated 4 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" · ☆48 · Updated 3 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training · ☆17 · Updated last year
- Calculating the expected time for training an LLM · ☆38 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue · ☆32 · Updated 2 years ago
- MetricEval: A framework that conceptualizes and operationalizes four main components of metric evaluation, in terms of reliability and va… · ☆11 · Updated last year
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention · ☆49 · Updated 5 years ago
- A PyTorch implementation of Luna: Linear Unified Nested Attention · ☆41 · Updated 4 years ago
- Code for the paper "Query-Key Normalization for Transformers" · ☆49 · Updated 4 years ago
- [Findings of ACL 2023] The official implementation of "On the Difference of BERT-style and CLIP-style Text Encoders" · ☆14 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … · ☆11 · Updated 2 years ago
- All-in-one repository for Fine-tuning & Pretraining (Large) Language Models · ☆15 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- exBERT on Transformers 🤗 · ☆10 · Updated 4 years ago
- DEMix Layers for Modular Language Modeling · ☆54 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch · ☆46 · Updated 4 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX · ☆81 · Updated 3 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning · ☆99 · Updated 2 years ago
- Pretraining summarization models using a corpus of nonsense · ☆13 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing · ☆50 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- Code for a text augmentation method leveraging large-scale language models · ☆62 · Updated 3 years ago