bremen79 / precise
Portfolio REgret for Confidence SEquences
☆19 · Updated 5 months ago
Alternatives and similar repositories for precise
Users interested in precise are comparing it to the libraries listed below.
- Code for minimum-entropy coupling. ☆32 · Updated 11 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 4 months ago
- ☆60 · Updated 3 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆43 · Updated this week
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year
- Implementations of growing and pruning in neural networks ☆22 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆36 · Updated last year
- ☆26 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆31 · Updated last year
- Because we don't want a Jupyter notebook mess... ☆62 · Updated last week
- ☆32 · Updated last year
- Generative cellular automaton-like learning environments for RL. ☆19 · Updated 4 months ago
- PyTorch implementation for "Long Horizon Temperature Scaling", ICML 2023 ☆20 · Updated 2 years ago
- Code for "Counterfactual Token Generation in Large Language Models", arXiv 2024. ☆26 · Updated 6 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆19 · Updated last week
- Code for the paper "Function-Space Learning Rates" ☆20 · Updated last month
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated last year
- ☆53 · Updated 8 months ago
- Personal implementation of ASIF by Antonio Norelli ☆25 · Updated last year
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆33 · Updated 2 weeks ago
- Concept Learning Dynamics ☆14 · Updated 7 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆37 · Updated 2 years ago
- ☆33 · Updated 8 months ago
- Understanding how features learned by neural networks evolve throughout training ☆34 · Updated 7 months ago
- The Energy Transformer block, in JAX ☆56 · Updated last year
- Efficient scaling laws and collaborative pretraining. ☆16 · Updated 4 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 2 months ago
- ☆19 · Updated 2 weeks ago