Boltz-1 on Modal

Gist link to modal input script here

Like everyone else, I was excited to see the release of Boltz-1 - a cofolding model in the same cohort as AlphaFold 3, with MIT-licensed code and weights. While I don't think cofolding models are quite ready for "prime time", at least we'll all be able to explore their strengths and weaknesses more easily now that we can run the code on data related to commercial projects.

I got OOMs on an AWS instance, so instead here is a way to run Boltz-1 on Modal, which makes it easy to ramp up the GPU type and discover the required hardware more quickly. The example job required at least an A100, so if you don't have one of those around, this Modal job could also be the primary way to use Boltz-1.
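
On Modal, the GPU type is a single parameter on the function decorator, so stepping through hardware tiers until the OOMs stop is a one-string change. A minimal sketch (the app name and function signature are placeholders; a fuller sketch of the whole flow follows the next paragraph):

```python
import modal

app = modal.App("boltz1")

# Change this string ("T4", "L4", "A10G", "A100", "H100", ...) and re-run
# `modal run` to try a bigger GPU if the job runs out of memory.
@app.function(gpu="A100", timeout=3600)
def predict(yaml_text: str, msa_text: str) -> bytes:
    ...
```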

This Modal script downloads the weights into the Modal image at build time, so they only need to be downloaded once. uv is used to install Boltz, and it seemed to work out of the box. The whole script works by reading a sequence alignment (an a3m file) and an input yaml file from your local machine, then passing those as raw text to the remote GPU-enabled machine, where the boltz predict command is executed. The results are packaged into a tar.gz file which gets returned to you locally.
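
Here is a rough sketch of that flow. It is not the exact gist script: the image-build commands, the weight-download step, and the boltz predict flags are assumptions on my part, and the real version is in the gist linked above.

```python
import pathlib

import modal

app = modal.App("boltz1")

# Build the image once: install Boltz with uv. The real script also bakes the
# model weights into the image at this stage; the exact download command is
# omitted here (see the gist), so this sketch would fetch weights on first run.
image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("uv")
    .run_commands("uv pip install --system boltz")
)


@app.function(image=image, gpu="A100", timeout=3600)
def predict(yaml_text: str, msa_text: str) -> bytes:
    """Run boltz predict remotely and return the results as a tar.gz blob."""
    import io
    import subprocess
    import tarfile

    work = pathlib.Path("/root/job")
    work.mkdir(parents=True, exist_ok=True)

    # Recreate the input files on the remote machine from the raw text we were
    # handed. The msa: path inside the yaml must point at seq.a3m in this directory.
    (work / "input.yaml").write_text(yaml_text)
    (work / "seq.a3m").write_text(msa_text)

    # Run from the job directory so relative paths inside the yaml resolve there.
    # --out_dir is an assumption about the CLI flags; check `boltz predict --help`.
    subprocess.run(
        ["boltz", "predict", str(work / "input.yaml"), "--out_dir", str(work)],
        cwd=str(work),
        check=True,
    )

    # Package everything the run produced and ship it back as bytes.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(work, arcname="boltz_results")
    return buf.getvalue()


@app.local_entrypoint()
def main(yaml_path: str = "ligand.yaml", msa_path: str = "seq.a3m"):
    tarball = predict.remote(
        pathlib.Path(yaml_path).read_text(),
        pathlib.Path(msa_path).read_text(),
    )
    pathlib.Path("boltz_results.tar.gz").write_bytes(tarball)
```

Running it is then something like `modal run boltz1_modal.py --yaml-path ligand.yaml --msa-path seq.a3m` (use whatever filename you saved the script under), and the tarball lands in your working directory.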

The example used in the script comes from the examples directory in the Boltz-1 GitHub repo. The file paths in the input yaml have to be changed slightly, so look in the gist for the ligand.yaml file.
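
For orientation, the input yaml pairs a protein chain (with its MSA) and a ligand, and the change from the repo version is that the msa: path has to match wherever the remote function writes the a3m file. This is only an illustrative sketch of the file's shape, with a placeholder sequence and SMILES, not the gist's actual contents:

```yaml
version: 1
sequences:
  - protein:
      id: A
      sequence: MVTPEGNVSLVDESLLVGVTDEDRAV...  # placeholder, truncated sequence
      msa: ./seq.a3m  # must match where the remote function writes the a3m
  - ligand:
      id: B
      smiles: "N[C@@H](Cc1ccc(O)cc1)C(=O)O"  # placeholder SMILES
```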