SamsungSAILMontreal / AnyMolGenCritic
☆20 · Updated 7 months ago
Alternatives and similar repositories for AnyMolGenCritic
Users interested in AnyMolGenCritic are comparing it to the libraries listed below.
- Bare-bones implementations of some generative models in Jax: diffusion, normalizing flows, consistency models, flow matching, (beta)-VAEs… ☆137 · Updated last year
- This repository contains the data and scripts necessary to reproduce the results presented in the paper: "Scalable molecular simulation… ☆48 · Updated last year
- The Forward-Forward Algorithm for Drug Discovery ☆34 · Updated 2 years ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆51 · Updated last year
- Your favourite classical machine learning algos on the GPU/TPU ☆20 · Updated 10 months ago
- TLDRs for ML in Drug Discovery papers ☆71 · Updated 2 years ago
- ☆37 · Updated this week
- Simple Scalable Discrete Diffusion for text in PyTorch ☆37 · Updated last year
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- Unofficial implementation of GotenNet, new SOTA 3d equivariant transformer, in Pytorch ☆67 · Updated 7 months ago
- Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems [ICML'25] ☆107 · Updated last month
- Code for "Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching" ☆126 · Updated 3 months ago
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 6 months ago
- A simple implementation of Bayesian Flow Networks (BFN) ☆240 · Updated last year
- Framework enabling modular interchange of language agents, environments, and optimizers ☆113 · Updated last week
- A Fast, Simplified Model for Molecular Generation with Improved Physical Quality ☆22 · Updated last month
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable ☆173 · Updated 2 years ago
- Graph neural networks in JAX. ☆68 · Updated last year
- This repository contains the official code for Energy Transformer, an efficient Energy-based Transformer variant for graph classificatio… ☆25 · Updated last year
- RITA is a family of autoregressive protein models, developed by LightOn in collaboration with the OATML group at Oxford and the Debora Ma… ☆98 · Updated 2 years ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆74 · Updated 4 months ago
- ☆56 · Updated 11 months ago
- Lagrangian formulation of Doob's h-transform allowing for efficient rare event sampling ☆52 · Updated 7 months ago
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆103 · Updated last month
- A language agent gym with challenging scientific tasks ☆210 · Updated last week
- Generative Flow Networks - GFlowNet ☆292 · Updated this week
- Making folding experiments more accessible. ☆86 · Updated 4 months ago
- The Superposition of Diffusion Models Using the Itô Density Estimator ☆51 · Updated 7 months ago
- ☆61 · Updated last year
- σ-GPT: A New Approach to Autoregressive Models ☆69 · Updated last year