yoavg / pdf-tab-renamer
Chrome extension for renaming tabs showing paper PDFs from common providers
☆95 · Updated 10 months ago
Alternatives and similar repositories for pdf-tab-renamer
Users interested in pdf-tab-renamer are comparing it to the repositories listed below
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated last month
- ☆142 · Updated 2 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- lossily compress representation vectors using product quantization ☆59 · Updated last week
- Experiments for efforts to train a new and improved t5 ☆75 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Models" ☆56 · Updated last month
- ☆111 · Updated 8 months ago
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- Storing long contexts in tiny caches with self-study ☆210 · Updated 3 weeks ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆193 · Updated last year
- A reading list of relevant papers and projects on foundation model annotation ☆28 · Updated 8 months ago
- ☆80 · Updated this week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- An introduction to LLM Sampling ☆79 · Updated 10 months ago
- Alice in Wonderland code base for experiments and raw experiments data ☆131 · Updated last month
- ☆23 · Updated 4 months ago
- ☆91 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 3 months ago
- ☆69 · Updated last year
- code for training & evaluating Contextual Document Embedding models ☆199 · Updated 5 months ago
- ☆21 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- ☆68 · Updated 11 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆57 · Updated last month
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆117 · Updated 3 months ago
- Open source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- ☆36 · Updated 6 months ago
- ☆125 · Updated 10 months ago