boileroom provides a unified Python interface to run protein prediction models (structure prediction and embeddings) on serverless (Modal) or local (Apptainer) GPU infrastructure. You call the same methods regardless of backend — the models and their outputs are the primary interface.
boileroom powers BAGEL’s oracle inference. When you create an oracle like bg.oracles.ESMFold(use_modal=True) in BAGEL, boileroom handles model loading, GPU execution, and dependency isolation behind the scenes. See the BAGEL Oracles documentation for how oracles integrate with energy terms.

Architecture

User Code → Model (ESMFold / ESM2 / Chai1 / Boltz2) → Backend (Modal / Apptainer) → GPU Execution

Quick start

from boileroom import ESMFold

model = ESMFold(backend="modal")
result = model.fold("MKTVRQERLKSIVRI")

# Access the predicted structure
print(result.atom_array)  # list of Biotite AtomArray objects

Available models

| Model   | Type                 | Method     | Description                                            |
| ------- | -------------------- | ---------- | ------------------------------------------------------ |
| ESMFold | Structure prediction | `.fold()`  | Meta's fast single-sequence structure prediction       |
| ESM2    | Embeddings           | `.embed()` | Protein language model embeddings (6 model sizes)      |
| Chai1   | Structure prediction | `.fold()`  | Diffusion-based structure prediction                   |
| Boltz2  | Structure prediction | `.fold()`  | Diffusion-based structure prediction with MSA support  |

Import patterns

from boileroom import ESMFold, ESM2, Chai1, Boltz2

Constructor

All models share the same constructor signature:
Model(backend="modal", device=None, config=None)
backend (str, default "modal")
    Backend to use for execution. Supported values: "modal" (serverless GPU) or "apptainer" (local GPU via container). See Backends.

device (str | None, default None)
    GPU device identifier (e.g., "cuda:0", "cpu"). If None, defaults to "cuda:0" when a GPU is available.

config (dict | None, default None)
    Model-specific configuration overrides merged with defaults. Each model has its own config keys — see the model's page or Configuration.
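Conceptually, this merge behaves like a dictionary update in which user-supplied keys win over defaults. A minimal sketch — the keys below are hypothetical placeholders, not boileroom's actual config schema:

```python
# Hypothetical illustration of config merging; the keys are made up
# and do not reflect boileroom's real config schema.
defaults = {"num_recycles": 3, "output_pdb": False}   # model defaults
overrides = {"num_recycles": 1}                       # user-supplied config
config = {**defaults, **overrides}                    # user keys win
print(config)  # {'num_recycles': 1, 'output_pdb': False}
```

Unrecognized keys, nested structures, or validation behavior may differ per model; consult each model's page for the authoritative key list.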

Context manager

All models can be used as context managers to ensure the backend is properly shut down:
with ESMFold(backend="modal") as model:
    result = model.fold("MKTVRQERLKSIVRI")
# Backend is automatically stopped
Without a context manager, the backend is cleaned up when the model instance is garbage collected or when the Python process exits.
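This guarantee follows Python's standard context-manager protocol. A minimal sketch of the pattern, using stand-in classes rather than boileroom's actual implementation:

```python
# Stand-in classes illustrating the context-manager pattern described
# above; this is NOT boileroom's real API.
class Backend:
    def __init__(self):
        self.running = True

    def stop(self):
        self.running = False


class Model:
    def __init__(self):
        self.backend = Backend()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs whether or not the with-body raised, so the backend
        # is always shut down.
        self.backend.stop()
        return False  # do not suppress exceptions


with Model() as model:
    pass  # use the model
assert model.backend.running is False
```

Because `__exit__` runs even when the body raises, an exception during folding still triggers backend shutdown.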
