dcbench.common package

Submodules

dcbench.common.artifact module

class Artifact(artifact_id, **kwargs)[source]

Bases: abc.ABC

A pointer to a unit of data (e.g. a CSV file) that is stored locally on disk and/or in a remote GCS bucket.

In DCBench, each artifact is identified by a unique artifact ID. The only state that the Artifact object must maintain is this ID (self.id). The object does not hold the actual data in memory, making it lightweight.

Artifact is an abstract base class. Different types of artifacts (e.g. a CSV file vs. a PyTorch model) have corresponding subclasses of Artifact (e.g. CSVArtifact, ModelArtifact).

Tip

The vast majority of users should not call the Artifact constructor directly. Instead, they should either create a new artifact by calling from_data() or load an existing artifact from a YAML file.

The class provides utilities for accessing and managing a unit of data:

Parameters

artifact_id (str) – The unique artifact ID.

Return type

None

id

The unique artifact ID.

Type

str

classmethod from_data(data, artifact_id=None)[source]

Create a new artifact object from raw data and save the artifact to disk in the local directory specified in the config file at config.local_dir.

Tip

When called on the abstract base class Artifact, this method will infer which artifact subclass to use. If you know exactly which artifact class you’d like to use (e.g. DataPanelArtifact), you should call this classmethod on that subclass.

Parameters
  • data (Union[mk.DataPanel, pd.DataFrame, Model]) – The raw data that will be saved to disk.

  • artifact_id (str, optional) – The ID to assign to the new artifact. Defaults to None, in which case a UUID is generated and used.

Returns

A new artifact pointing to the data that was saved to disk.

Return type

Artifact
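For illustration, the ID-defaulting and save behavior described above can be sketched in plain Python. This is an illustrative re-implementation, not dcbench's actual code; the name from_data_sketch and its arguments are hypothetical stand-ins for Artifact.from_data():

```python
import os
import tempfile
import uuid

def from_data_sketch(data, artifact_id=None, local_dir=".", ext="csv"):
    # Default the artifact ID to a generated UUID, as the docs describe.
    if artifact_id is None:
        artifact_id = uuid.uuid4().hex
    # Save the data under the local directory (config.local_dir in dcbench).
    path = os.path.join(local_dir, f"{artifact_id}.{ext}")
    with open(path, "w") as f:
        f.write(data)  # a real CSVArtifact would serialize a DataFrame here
    # The returned "artifact" is just a lightweight pointer: only the ID.
    return artifact_id

with tempfile.TemporaryDirectory() as local_dir:
    aid = from_data_sketch("a,b\n1,2\n", artifact_id="my_table", local_dir=local_dir)
    assert aid == "my_table"
    assert os.path.exists(os.path.join(local_dir, "my_table.csv"))
```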

property local_path: str

The local path to the artifact in the local directory specified in the config file at config.local_dir.

property remote_url: str

The URL of the artifact in the remote GCS bucket specified in the config file at config.public_bucket_name.

property is_downloaded: bool

Checks if artifact is downloaded to local directory specified in the config file at config.local_dir.

Returns

True if artifact is downloaded, False otherwise.

Return type

bool

property is_uploaded: bool

Checks if artifact is uploaded to GCS bucket specified in the config file at config.public_bucket_name.

Returns

True if artifact is uploaded, False otherwise.

Return type

bool

upload(force=False, bucket=None)[source]

Uploads the artifact to a GCS bucket at self.path, which by default is just the artifact ID with the default extension.

Parameters
  • force (bool, optional) – Force upload even if artifact is already uploaded. Defaults to False.

  • bucket (storage.Bucket, optional) – The GCS bucket to which the artifact is uploaded. Defaults to None, in which case the artifact is uploaded to the bucket specified in the config file at config.public_bucket_name.

Return type

bool

Returns

bool: True if artifact was uploaded, False otherwise.

download(force=False)[source]

Downloads artifact from GCS bucket to the local directory specified in the config file at config.local_dir. The relative path to the artifact within that directory is self.path, which by default is just the artifact ID with the default extension.

Parameters

force (bool, optional) – Force download even if artifact is already downloaded. Defaults to False.

Returns

True if artifact was downloaded, False otherwise.

Return type

bool

Warning

By default, the GCS cache on public URLs has a max-age of up to an hour. Therefore, when updating an existing artifact, changes may not be immediately reflected in subsequent downloads.

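The force semantics documented above can be sketched with a simple guard. This is an illustrative sketch, not dcbench's code; download_sketch and fetch are hypothetical names:

```python
import os
import tempfile

def download_sketch(local_path, fetch, force=False):
    # Skip the transfer when the artifact is already present locally,
    # unless force=True — mirroring the documented force parameter.
    if os.path.exists(local_path) and not force:
        return False  # already downloaded; nothing was transferred
    with open(local_path, "w") as f:
        f.write(fetch())  # fetch() stands in for the GCS blob download
    return True

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "artifact.csv")
    assert download_sketch(path, lambda: "a,b\n") is True   # first call downloads
    assert download_sketch(path, lambda: "a,b\n") is False  # cached, skipped
    assert download_sketch(path, lambda: "a,b\n", force=True) is True
```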

DEFAULT_EXT: str = ''
isdir: bool = False
abstract load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

Any

abstract save(data)[source]

Save data to disk at self.local_path.

Parameters

data (Any) –

Return type

None

static from_yaml(loader, node)[source]

This function is called by the YAML loader to convert a YAML node into an Artifact object.

It should not be called directly.

Parameters

loader (yaml.loader.Loader) –

static to_yaml(dumper, data)[source]

This function is called by the YAML dumper to convert an Artifact object into a YAML node.

It should not be called directly.

Parameters

dumper (yaml.dumper.Dumper) –

class CSVArtifact(artifact_id, **kwargs)[source]

Bases: dcbench.common.artifact.Artifact

Parameters

artifact_id (str) –

Return type

None

DEFAULT_EXT: str = 'csv'
load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

pandas.core.frame.DataFrame

save(data)[source]

Save data to disk at self.local_path.

Parameters

data (pandas.core.frame.DataFrame) –

Return type

None

class YAMLArtifact(artifact_id, **kwargs)[source]

Bases: dcbench.common.artifact.Artifact

Parameters

artifact_id (str) –

Return type

None

DEFAULT_EXT: str = 'yaml'
load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

Any

save(data)[source]

Save data to disk at self.local_path.

Parameters

data (Any) –

Return type

None

class DataPanelArtifact(artifact_id, **kwargs)[source]

Bases: dcbench.common.artifact.Artifact

Parameters

artifact_id (str) –

Return type

None

DEFAULT_EXT: str = 'mk'
isdir: bool = True
load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

pandas.core.frame.DataFrame

save(data)[source]

Save data to disk at self.local_path.

Parameters

data (meerkat.datapanel.DataPanel) –

Return type

None

class VisionDatasetArtifact(artifact_id, **kwargs)[source]

Bases: dcbench.common.artifact.DataPanelArtifact

Parameters

artifact_id (str) –

Return type

None

DEFAULT_EXT: str = 'mk'
isdir: bool = True
COLUMN_SUBSETS = {'celeba': ['id', 'image', 'identity', 'split'], 'imagenet': ['id', 'image', 'name', 'synset']}
classmethod from_name(name)[source]
Parameters

name (str) –
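The COLUMN_SUBSETS mapping shown above pairs each supported dataset name with the columns to keep. A small illustrative helper (subset_columns is hypothetical, not dcbench API) shows how such a subset can be applied:

```python
# The documented column subsets per dataset.
COLUMN_SUBSETS = {
    "celeba": ["id", "image", "identity", "split"],
    "imagenet": ["id", "image", "name", "synset"],
}

def subset_columns(name, columns):
    # Keep only the columns listed for the given dataset, in input order.
    keep = set(COLUMN_SUBSETS[name])
    return [c for c in columns if c in keep]

assert subset_columns("celeba", ["id", "image", "identity", "split", "attr"]) == [
    "id", "image", "identity", "split"
]
```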

download(force=False)[source]

Downloads artifact from GCS bucket to the local directory specified in the config file at config.local_dir. The relative path to the artifact within that directory is self.path, which by default is just the artifact ID with the default extension.

Parameters

force (bool, optional) – Force download even if artifact is already downloaded. Defaults to False.

Returns

True if artifact was downloaded, False otherwise.

Return type

bool

Warning

By default, the GCS cache on public URLs has a max-age of up to an hour. Therefore, when updating an existing artifact, changes may not be immediately reflected in subsequent downloads.


class ModelArtifact(artifact_id, **kwargs)[source]

Bases: dcbench.common.artifact.Artifact

Parameters

artifact_id (str) –

Return type

None

DEFAULT_EXT: str = 'pt'
load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

dcbench.common.modeling.Model

save(data)[source]

Save data to disk at self.local_path.

Parameters

data (dcbench.common.modeling.Model) –

Return type

None

dcbench.common.artifact_container module

class ArtifactSpec(description: 'str', artifact_type: 'type', optional: 'bool' = False)[source]

Bases: object

Parameters
  • description (str) –

  • artifact_type (type) –

  • optional (bool) –

Return type

None

description: str
artifact_type: type
optional: bool = False
class ArtifactContainer(artifacts, attributes=None, container_id=None)[source]

Bases: abc.ABC, collections.abc.Mapping, dcbench.common.table.RowMixin

A logical collection of artifacts and attributes (simple tags describing the container), which are useful for finding, sorting and grouping containers.

Parameters
  • artifacts (Mapping[str, Union[Artifact, Any]]) – A mapping with the same keys as the ArtifactContainer.artifact_specs (possibly excluding optional artifacts). Each value can either be an Artifact, in which case the artifact type must match the type specified in the corresponding ArtifactSpec, or a raw object, in which case a new artifact of the type specified in artifact_specs is created from the raw object and an artifact_id is generated according to the following pattern: <task_id>/<container_type>/artifacts/<container_id>/<key>.

  • attributes (Mapping[str, PRIMITIVE_TYPE], optional) – A mapping with the same keys as the ArtifactContainer.attribute_specs (possibly excluding optional attributes). Each value must be of the type specified in the corresponding AttributeSpec. Defaults to None.

  • container_id (str, optional) – The ID of the container. Defaults to None, in which case a UUID is generated.
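The generated artifact ID pattern quoted above can be written out as a plain format string (artifact_id_for is a hypothetical helper for illustration):

```python
def artifact_id_for(task_id, container_type, container_id, key):
    # <task_id>/<container_type>/artifacts/<container_id>/<key>
    return f"{task_id}/{container_type}/artifacts/{container_id}/{key}"

assert artifact_id_for("slice_discovery", "problem", "p01", "train_dataset") == (
    "slice_discovery/problem/artifacts/p01/train_dataset"
)
```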

artifacts

A dictionary of artifacts, indexed by name.

Tip

We can use the index operator directly on ArtifactContainer objects to fetch an artifact, download it if necessary, and load it into memory. For example, to load the artifact "data" into memory from a container container, we can simply call container["data"], which is equivalent to calling container.artifacts["data"].download() followed by container.artifacts["data"].load().

Type

Dict[str, Artifact]

attributes

A dictionary of attributes, indexed by name.

Tip

Attributes can be accessed via dot notation (as long as the attribute name does not conflict with an existing class member). For example, to access the attribute "data" in a container container, we can simply call container.data.

Type

Dict[str, Attribute]
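The two access patterns above (index operator for artifacts, dot notation for attributes) can be sketched with a minimal Mapping subclass. This is an illustrative sketch, not dcbench's implementation; ContainerSketch and FakeArtifact are hypothetical:

```python
from collections.abc import Mapping

class ContainerSketch(Mapping):
    def __init__(self, artifacts, attributes):
        self.artifacts = artifacts        # name -> artifact-like object
        self._attributes = dict(attributes)

    def __getitem__(self, key):
        # container[key] == download (if needed) + load, as documented.
        artifact = self.artifacts[key]
        artifact.download()
        return artifact.load()

    def __iter__(self):
        return iter(self.artifacts)

    def __len__(self):
        return len(self.artifacts)

    def __getattr__(self, name):
        # Dot access for attributes, when the name does not conflict.
        attributes = self.__dict__.get("_attributes", {})
        if name in attributes:
            return attributes[name]
        raise AttributeError(name)

# A stand-in artifact with the download/load interface described above:
class FakeArtifact:
    def __init__(self, data):
        self.data = data
    def download(self):
        pass  # no-op stand-in for the GCS transfer
    def load(self):
        return self.data

container = ContainerSketch({"data": FakeArtifact([1, 2, 3])}, {"dataset_name": "celeba"})
assert container["data"] == [1, 2, 3]   # fetch + download + load in one step
assert container.dataset_name == "celeba"
```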

Notes

ArtifactContainer is an abstract base class, and should not be instantiated directly. There are two main groups of ArtifactContainer subclasses:

  1. dcbench.Problem - A logical collection of artifacts and attributes that correspond to a specific problem to be solved.

  2. dcbench.Solution - A logical collection of artifacts and attributes that correspond to a solution to a problem.

A concrete (i.e. non-abstract) subclass of ArtifactContainer must include (1) a specification for the artifacts it holds, (2) a specification for the attributes used to tag it, and (3) a task_id linking the subclass to one of dcbench’s tasks (see Task). For example, in the code block below we include such a specification in the definition of a simple container that holds a training dataset and a test dataset (see dcbench.SliceDiscoveryProblem for a real example):

class DemoContainer(ArtifactContainer):
    artifact_specs = {
        "train_dataset": ArtifactSpec(
            artifact_type=CSVArtifact,
            description="A CSV containing training data."
        ),
        "test_dataset": ArtifactSpec(
            artifact_type=CSVArtifact,
            description="A CSV containing test data."
        ),
    }
    attribute_specs = {
        "dataset_name": AttributeSpec(
            attribute_type=str,
            description="The name of the dataset."
        ),
    }
    task_id = "slice_discovery"
artifact_specs: Mapping[str, ArtifactSpec]
task_id: str
attribute_specs: Mapping[str, AttributeSpec] = {}
container_type: str = 'artifact_container'
property is_downloaded: bool

Checks if all of the artifacts in the container are downloaded to the local directory specified in the config file at config.local_dir.

Returns

True if artifact is downloaded, False otherwise.

Return type

bool

property is_uploaded: bool

Checks if all of the artifacts in the container are uploaded to the GCS bucket specified in the config file at config.public_bucket_name.

Returns

True if artifact is uploaded, False otherwise.

Return type

bool

upload(force=False, bucket=None)[source]

Uploads all of the artifacts in the container to a GCS bucket, skipping artifacts that are already uploaded.

Parameters
  • force (bool, optional) – Force upload even if an artifact is already uploaded. Defaults to False.

  • bucket (storage.Bucket, optional) – The GCS bucket to which the artifacts are uploaded. Defaults to None, in which case the artifacts are uploaded to the bucket specified in the config file at config.public_bucket_name.

Returns

True if any artifacts were uploaded, False otherwise.

Return type

bool

download(force=False)[source]

Downloads the artifacts in the container from the GCS bucket specified in the config file at config.public_bucket_name to the local directory specified in the config file at config.local_dir. The relative path of each artifact within that directory is its path, which by default is just the artifact ID with the default extension.

Parameters

force (bool, optional) – Force download even if an artifact is already downloaded. Defaults to False.

Returns

True if any artifacts were downloaded, False otherwise.

Return type

bool

static from_yaml(loader, node)[source]

This function is called by the YAML loader to convert a YAML node into an ArtifactContainer object.

It should not be called directly.

Parameters

loader (yaml.loader.Loader) –

static to_yaml(dumper, data)[source]

This function is called by the YAML dumper to convert an ArtifactContainer object into a YAML node.

It should not be called directly.

Parameters

dumper (yaml.dumper.Dumper) –


dcbench.common.method module

dcbench.common.modeling module

class Model(config=None)[source]

Bases: pytorch_lightning.core.lightning.LightningModule

Parameters

config (dict) –

DEFAULT_CONFIG = {}
training: bool
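A plausible config-resolution pattern implied by the config parameter and the class-level DEFAULT_CONFIG (resolve_config is a hypothetical helper for illustration, not dcbench API):

```python
# Class-level defaults, as in Model.DEFAULT_CONFIG (values illustrative).
DEFAULT_CONFIG = {"arch": "resnet18", "lr": 1e-4, "num_classes": 2}

def resolve_config(config=None):
    merged = dict(DEFAULT_CONFIG)   # start from the class-level defaults
    merged.update(config or {})     # user-supplied keys win
    return merged

assert resolve_config({"lr": 1e-3})["lr"] == 1e-3
assert resolve_config()["arch"] == "resnet18"
```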
class ResNet(num_classes, arch='resnet18', dropout=0.0, pretrained=True)[source]

Bases: torchvision.models.resnet.ResNet

Parameters
  • num_classes (int) –

  • arch (str) –

  • dropout (float) –

  • pretrained (bool) –

ACTIVATION_DIMS = [64, 128, 256, 512]
ACTIVATION_WIDTH_HEIGHT = [64, 32, 16, 8]
RESNET_TO_ARCH = {'resnet18': [2, 2, 2, 2], 'resnet50': [3, 4, 6, 3]}
training: bool
default_transform(img)[source]
Parameters

img (PIL.Image.Image) –

default_train_transform(img)[source]
Parameters

img (PIL.Image.Image) –

class DenseNet(num_classes, arch='densenet121', pretrained=True)[source]

Bases: torchvision.models.densenet.DenseNet

Parameters
  • num_classes (int) –

  • arch (str) –

  • pretrained (bool) –

DENSENET_TO_ARCH = {'densenet121': {'block_config': (6, 12, 24, 16), 'growth_rate': 32, 'num_init_features': 64}}
training: bool
class VisionClassifier(config=None)[source]

Bases: dcbench.common.modeling.Model

Parameters

config (dict) –

DEFAULT_CONFIG = {'arch': 'resnet18', 'lr': 0.0001, 'model_name': 'resnet', 'num_classes': 2, 'pretrained': True, 'train_transform': <function default_train_transform>, 'transform': <function default_transform>}
forward(x)[source]

Same as torch.nn.Module.forward().

Parameters
  • *args – Whatever you decide to pass into the forward method.

  • **kwargs – Keyword arguments are also possible.

Returns

Your model’s output

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.

Parameters
Returns

Any of:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'

  • None - Training will skip to the next batch. This is only for automatic optimization.

    This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    out, hiddens = self.lstm(data, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}

Note

The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.

validation_step(batch, batch_idx)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple val dataloaders used)

Returns

  • Any object or value

  • None - Validation will skip to the next batch

# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

validation_epoch_end(outputs)[source]

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters

outputs – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns

None

Return type

None

Note

If you didn’t define a validation_step(), this won’t be called.

Examples

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    self.log("final_metric", final_value)
test_epoch_end(outputs)[source]

Called at the end of a test epoch with the output of all test steps.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters

outputs – List of outputs you defined in test_step_end(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader

Returns

None

Return type

None

Note

If you didn’t define a test_step(), this won’t be called.

Examples

With a single dataloader:

def test_epoch_end(self, outputs):
    # do something with the outputs of all test batches
    all_test_preds = test_step_outputs.predictions

    some_result = calc_all_results(all_test_preds)
    self.log(some_result)

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each test step for that dataloader.

def test_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for test_step_out in dataloader_outputs:
            # do something
            final_value += test_step_out

    self.log("final_metric", final_value)
test_step(batch, batch_idx)[source]

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_id – The index of the dataloader that produced this batch. (only if multiple test dataloaders used).

Returns

Any of:

  • Any object or value

  • None - Testing will skip to the next batch

# if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

configure_optimizers()[source]

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.

Returns

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • Tuple of dictionaries as described above, with an optional "frequency" key.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

Metrics can be made available to monitor by simply logging it using self.log('metric_to_track', metric_val) in your LightningModule.

Note

The frequency value specified in a dict along with the optimizer key is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1:

  • In the former case, all optimizers will operate on the given batch in each optimization step.

  • In the latter, only one optimizer will operate on the given batch at every step.

This is different from the frequency value specified in the lr_scheduler_config mentioned above.

def configure_optimizers(self):
    optimizer_one = torch.optim.SGD(self.model.parameters(), lr=0.01)
    optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01)
    return [
        {"optimizer": optimizer_one, "frequency": 5},
        {"optimizer": optimizer_two, "frequency": 10},
    ]

In this example, the first optimizer will be used for the first 5 steps, the second optimizer for the next 10 steps and that cycle will continue. If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above dict, the scheduler will only be updated when its optimizer is being used.

Examples:

# most cases. no learning rate scheduler
def configure_optimizers(self):
    return Adam(self.parameters(), lr=1e-3)

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    return gen_opt, dis_opt

# example with learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    dis_sch = CosineAnnealing(dis_opt, T_max=10)
    return [gen_opt, dis_opt], [dis_sch]

# example with step-based learning rate schedulers
# each optimizer has its own scheduler
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    gen_sch = {
        'scheduler': ExponentialLR(gen_opt, 0.99),
        'interval': 'step'  # called after each training step
    }
    dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
    return [gen_opt, dis_opt], [gen_sch, dis_sch]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )

Note

Some things to know:

  • Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers.

  • If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, gradients will be calculated only for the parameters of current optimizer at each training step.

  • If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.

training: bool
trainer: Optional['pl.Trainer']
precision: int
prepare_data_per_node: bool
allow_zero_length_dataloader_with_multiple_devices: bool

dcbench.common.problem module

class Problem(artifacts, attributes=None, container_id=None)[source]

Bases: dcbench.common.artifact_container.ArtifactContainer

A logical collection of :class:`Artifact`s and "attributes" that correspond to a specific problem to be solved.

See the walkthrough section on Problem for more information.

Parameters
  • artifacts (Mapping[str, Artifact]) –

  • attributes (Mapping[str, Attribute]) –

  • container_id (str) –

container_type: str = 'problem'
name: str
summary: str
task_id: str
solution_class: type
abstract solve(**kwargs)[source]
Parameters

kwargs (Any) –

Return type

Solution

abstract evaluate(solution)[source]
Parameters

solution (Solution) –

Return type

Result
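The solve/evaluate contract above can be illustrated with a toy subclass. AdditionProblem is hypothetical; real subclasses such as dcbench.SliceDiscoveryProblem operate on artifacts and return Solution and Result objects:

```python
from abc import ABC, abstractmethod

class ProblemSketch(ABC):
    # Mirrors the abstract interface documented above.
    @abstractmethod
    def solve(self, **kwargs):
        ...
    @abstractmethod
    def evaluate(self, solution):
        ...

class AdditionProblem(ProblemSketch):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def solve(self, **kwargs):
        return self.a + self.b                            # stands in for a Solution
    def evaluate(self, solution):
        return {"correct": solution == self.a + self.b}   # stands in for a Result

problem = AdditionProblem(2, 3)
solution = problem.solve()
assert problem.evaluate(solution) == {"correct": True}
```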

artifact_specs: Mapping[str, ArtifactSpec]
class ProblemTable(data)[source]

Bases: dcbench.common.table.Table

trial(solver=None)[source]
Parameters

solver (Optional[Callable[[Problem], Solution]]) –

Return type

Trial

dcbench.common.result module

class Result(id, attributes=None)[source]

Bases: dcbench.common.table.RowMixin

Parameters
  • id (str) –

  • attributes (Mapping[str, Union[int, float, str, bool]]) –

attribute_specs: Mapping[str, dcbench.common.table.AttributeSpec]

dcbench.common.solution module

class Result(source)[source]

Bases: Mapping

save(path)[source]
Parameters

path (str) –

Return type

None

static load(path)[source]
Parameters

path (str) –

Return type

dcbench.common.solution.Result

class Solution(artifacts, attributes=None, container_id=None)[source]

Bases: dcbench.common.artifact_container.ArtifactContainer

Parameters
  • artifacts (Mapping[str, Artifact]) –

  • attributes (Mapping[str, Attribute]) –

  • container_id (str) –

container_type: str = 'solution'
artifact_specs: Mapping[str, ArtifactSpec]
task_id: str

dcbench.common.solve module

dcbench.common.solver module

solver(id, summary)[source]
Parameters
  • id (str) –

  • summary (str) –

dcbench.common.table module

class AttributeSpec(description: str, attribute_type: type, optional: bool = False)[source]

Bases: object

Parameters
  • description (str) –

  • attribute_type (type) –

  • optional (bool) –

Return type

None

description: str
attribute_type: type
optional: bool = False
class RowMixin(id, attributes=None)[source]

Bases: object

Parameters
  • id (str) –

  • attributes (Mapping[str, Union[int, float, str, bool]]) –

attribute_specs: Mapping[str, dcbench.common.table.AttributeSpec]
property attributes: Optional[Mapping[str, Union[int, float, str, bool]]]
class RowUnion(id, elements)[source]

Bases: dcbench.common.table.RowMixin

Parameters
  • id (str) –

  • elements –

attribute_specs: Mapping[str, dcbench.common.table.AttributeSpec]
predicate(a, b)[source]
Parameters
  • a (Union[int, float, str, bool]) –

  • b (Union[int, float, str, bool, slice, Sequence[Union[int, float, str, bool]]]) –

Return type

bool
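The predicate above accepts either a plain attribute value or a richer selector (a slice or a sequence). A minimal sketch of how such a predicate can dispatch on the selector type follows; the function name `matches` and the exact semantics (half-open slice ranges, membership for sequences, equality otherwise) are assumptions for illustration, not dcbench's implementation.

```python
def matches(value, selector):
    """Return True if `value` satisfies `selector`.

    Selector semantics (assumed for this sketch):
      - slice: value falls in the half-open range [start, stop)
      - list/tuple/set: value is a member
      - anything else: plain equality
    """
    if isinstance(selector, slice):
        lo = selector.start if selector.start is not None else float("-inf")
        hi = selector.stop if selector.stop is not None else float("inf")
        return lo <= value < hi
    if isinstance(selector, (list, tuple, set)):
        return value in selector
    return value == selector
```

A predicate like this is what lets a where()-style query accept scalars, ranges, and sequences uniformly.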

class Table(data)[source]

Bases: Mapping[str, dcbench.common.table.RowMixin]

property df
where(**kwargs)[source]
Parameters

kwargs (Union[int, float, str, bool, slice, Sequence[Union[int, float, str, bool]]]) –

Return type

dcbench.common.table.Table

average(*targets, groupby=None, std=False)[source]
Parameters
  • targets (str) –

  • groupby (Optional[Sequence[str]]) –

  • std (bool) –

Return type

dcbench.common.table.Table
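Table supports keyword filtering via where(**kwargs) and aggregation via average(*targets, groupby=...). The hypothetical MiniTable below sketches both operations over rows stored as plain attribute dicts; it is an illustrative analogue, not dcbench's Table (which wraps RowMixin objects and a DataFrame).

```python
from statistics import mean


class MiniTable:
    """Keyword filtering and grouped averaging over rows of attribute
    dicts — an illustrative analogue of Table.where() and
    Table.average(), not dcbench's implementation."""

    def __init__(self, rows):
        self.rows = list(rows)

    def where(self, **kwargs):
        # Keep rows whose attributes equal every keyword value.
        kept = [
            r for r in self.rows
            if all(r.get(k) == v for k, v in kwargs.items())
        ]
        return MiniTable(kept)

    def average(self, *targets, groupby=None):
        # Average each target column, optionally per groupby key.
        if groupby is None:
            return {t: mean(r[t] for r in self.rows) for t in targets}
        groups = {}
        for row in self.rows:
            key = tuple(row[g] for g in groupby)
            groups.setdefault(key, []).append(row)
        return {
            key: {t: mean(r[t] for r in rows) for t in targets}
            for key, rows in groups.items()
        }
```

Returning a new MiniTable from where() keeps the calls chainable, e.g. `table.where(task="a").average("acc")`.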

dcbench.common.task module

class Task(task_id, name, summary, problem_class, solution_class, baselines=Empty DataFrame Columns: [] Index: [])[source]

Bases: dcbench.common.table.RowMixin

Task(task_id: str, name: str, summary: str, problem_class: type, solution_class: type, baselines: dcbench.common.table.Table = Empty DataFrame Columns: [] Index: [])

Parameters
  • task_id (str) –

  • name (str) –

  • summary (str) –

  • problem_class (type) –

  • solution_class (type) –

  • baselines (dcbench.common.table.Table) –

Return type

None

task_id: str
name: str
summary: str
problem_class: type
solution_class: type
baselines: dcbench.common.table.Table = Empty DataFrame Columns: [] Index: []
property problems_path
property local_problems_path
property remote_problems_url
write_problems(containers, append=True)[source]
Parameters
  • containers –

  • append (bool) –
upload_problems(include_artifacts=False, force=True)[source]

Uploads the problems to the remote storage.

Parameters
  • include_artifacts (bool) – If True, also uploads the artifacts of the problems.

  • force (bool) –

    If True, the upload overwrites the remote problems. Defaults to True.

    Warning

    It is somewhat dangerous to set `force=False`, as this could lead to remote and local problems being out of sync.

download_problems(include_artifacts=False)[source]
Parameters

include_artifacts (bool) –

property problems
attribute_specs: Mapping[str, AttributeSpec]

dcbench.common.trial module

class Problem[source]

Bases: object

class Solution[source]

Bases: object

class Trial(problems=None, solver=None)[source]

Bases: dcbench.common.table.Table

evaluate(repeat=1, quiet=False)[source]
Parameters
  • repeat (int) –

  • quiet (bool) –

Return type

dcbench.common.trial.Trial

save()[source]
Return type

None
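A Trial pairs a collection of problems with a solver and, on evaluate(repeat=..., quiet=...), runs the solver on each problem the requested number of times. The function below is an illustrative sketch of that loop; the name `evaluate_trial`, the dict-of-problems input, and the separate `scorer` callable are assumptions for illustration (dcbench's Trial.evaluate() returns a Trial, not a list of records).

```python
def evaluate_trial(problems, solver, scorer, repeat=1):
    """Run `solver` on each problem `repeat` times and score every
    solution — an illustrative analogue of Trial.evaluate().

    problems: mapping of problem ID -> problem data (assumed shape)
    solver:   callable problem -> solution
    scorer:   callable (problem, solution) -> score (assumed helper)
    """
    records = []
    for pid, problem in problems.items():
        for run in range(repeat):
            solution = solver(problem)
            records.append({
                "problem": pid,
                "run": run,
                "score": scorer(problem, solution),
            })
    return records
```

With repeat > 1 each problem contributes several records, which is what makes averaging over runs (as Table.average() supports) meaningful.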

dcbench.common.utils module

Module contents

class Artifact(artifact_id, **kwargs)[source]

Bases: abc.ABC

A pointer to a unit of data (e.g. a CSV file) that is stored locally on disk and/or in a remote GCS bucket.

In DCBench, each artifact is identified by a unique artifact ID. The only state that the Artifact object must maintain is this ID (self.id). The object does not hold the actual data in memory, making it lightweight.

Artifact is an abstract base class. Different types of artifacts (e.g. a CSV file vs. a PyTorch model) have corresponding subclasses of Artifact (e.g. CSVArtifact, ModelArtifact).

Tip

The vast majority of users should not call the Artifact constructor directly. Instead, they should either create a new artifact by calling from_data() or load an existing artifact from a YAML file.

The class provides utilities for accessing and managing a unit of data:

Parameters

artifact_id (str) – The unique artifact ID.

Return type

None

id

The unique artifact ID.

Type

str

classmethod from_data(data, artifact_id=None)[source]

Create a new artifact object from raw data and save the artifact to disk in the local directory specified in the config file at config.local_dir.

Tip

When called on the abstract base class Artifact, this method will infer which artifact subclass to use. If you know exactly which artifact class you’d like to use (e.g. DataPanelArtifact), you should call this classmethod on that subclass.

Parameters
  • data (Union[mk.DataPanel, pd.DataFrame, Model]) – The raw data that will be saved to disk.

  • artifact_id (str, optional) – The ID to assign to the new artifact. Defaults to None, in which case a UUID will be generated and used.

Returns

A new artifact pointing to the data that was saved to disk.

Return type

Artifact
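The from_data() pattern — generate a UUID when no ID is given, write the data under the configured local directory, and hand back a lightweight pointer — can be sketched as follows. The MiniArtifact class, its DEFAULT_EXT value, and the tempfile stand-in for config.local_dir are assumptions for illustration; this is not dcbench's Artifact.

```python
import os
import tempfile
import uuid

# Stand-in for the directory configured at config.local_dir (assumed).
LOCAL_DIR = tempfile.mkdtemp()


class MiniArtifact:
    """A lightweight pointer: holds only an ID, never the data itself.

    Illustrative analogue of Artifact.from_data(); not dcbench's class.
    """

    DEFAULT_EXT = "txt"

    def __init__(self, artifact_id):
        self.id = artifact_id

    @property
    def local_path(self):
        # Artifact ID plus the default extension, inside the local dir.
        return os.path.join(LOCAL_DIR, f"{self.id}.{self.DEFAULT_EXT}")

    @classmethod
    def from_data(cls, data, artifact_id=None):
        # Generate a UUID when no ID is supplied, then persist the data.
        artifact = cls(artifact_id or uuid.uuid4().hex)
        with open(artifact.local_path, "w") as f:
            f.write(data)
        return artifact

    def load(self):
        # Read the data back from disk; the object itself stays light.
        with open(self.local_path) as f:
            return f.read()
```

Note that the constructor stores only the ID; all data lives on disk and is materialized only when load() is called, which is what keeps artifact objects cheap to pass around.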

property local_path: str

The local path to the artifact in the local directory specified in the config file at config.local_dir.

property remote_url: str

The URL of the artifact in the remote GCS bucket specified in the config file at config.public_bucket_name.

property is_downloaded: bool

Checks whether the artifact is downloaded to the local directory specified in the config file at config.local_dir.

Returns

True if artifact is downloaded, False otherwise.

Return type

bool

property is_uploaded: bool

Checks whether the artifact is uploaded to the GCS bucket specified in the config file at config.public_bucket_name.

Returns

True if artifact is uploaded, False otherwise.

Return type

bool

upload(force=False, bucket=None)[source]

Uploads the artifact to a GCS bucket at self.path, which by default is just the artifact ID with the default extension.

Parameters
  • force (bool, optional) – Force upload even if artifact is already uploaded. Defaults to False.

  • bucket (storage.Bucket, optional) – The GCS bucket to which the artifact is uploaded. Defaults to None, in which case the artifact is uploaded to the bucket specified in the config file at config.public_bucket_name.

Return type

bool

Returns

True if the artifact was uploaded, False otherwise.

download(force=False)[source]

Downloads the artifact from the GCS bucket to the local directory specified in the config file at config.local_dir. The relative path to the artifact within that directory is self.path, which by default is just the artifact ID with the default extension.

Parameters

force (bool, optional) – Force download even if artifact is already downloaded. Defaults to False.

Returns

True if artifact was downloaded, False otherwise.

Return type

bool

Warning

By default, the GCS cache on public URLs has a max-age of up to an hour. Therefore, when updating an existing artifact, changes may not be immediately reflected in subsequent downloads.


DEFAULT_EXT: str = ''
isdir: bool = False
abstract load()[source]

Load the artifact into memory from disk at self.local_path.

Return type

Any

abstract save(data)[source]

Save data to disk at self.local_path.

Parameters

data (Any) –

Return type

None

static from_yaml(loader, node)[source]

This function is called by the YAML loader to convert a YAML node into an Artifact object.

It should not be called directly.

Parameters

loader (yaml.loader.Loader) –

static to_yaml(dumper, data)[source]

This function is called by the YAML dumper to convert an Artifact object into a YAML node.

It should not be called directly.

Parameters
  • dumper –

  • data –
class Problem(artifacts, attributes=None, container_id=None)[source]

Bases: dcbench.common.artifact_container.ArtifactContainer

A logical collection of Artifacts and "attributes" that correspond to a specific problem to be solved.

See the walkthrough section on Problem for more information.

Parameters
  • artifacts (Mapping[str, Artifact]) –

  • attributes (Mapping[str, Attribute]) –

  • container_id (str) –

container_type: str = 'problem'
name: str
summary: str
task_id: str
solution_class: type
abstract solve(**kwargs)[source]
Parameters

kwargs (Any) –

Return type

Solution

abstract evaluate(solution)[source]
Parameters

solution (Solution) –

Return type

Result

artifact_specs: Mapping[str, ArtifactSpec]
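The Problem contract above is: solve(**kwargs) produces a Solution, and evaluate(solution) scores it into a Result. The self-contained sketch below illustrates that workflow with hypothetical MiniProblem / MiniSolution classes and a trivial "find the maximum" task; none of these names, nor the dict-shaped result, come from dcbench itself.

```python
class MiniSolution:
    """A trivial solution holding a single proposed answer."""

    def __init__(self, value):
        self.value = value


class MiniProblem:
    """Sketch of the Problem workflow: solve() yields a Solution,
    evaluate() scores it. Illustrative only — not a dcbench subclass."""

    def __init__(self, numbers):
        self.numbers = numbers

    def solve(self, **kwargs):
        # A trivial "solver": propose the maximum as the answer.
        return MiniSolution(max(self.numbers))

    def evaluate(self, solution):
        # Score 1.0 if the proposed answer really is the maximum.
        correct = solution.value == max(self.numbers)
        return {"score": 1.0 if correct else 0.0}


problem = MiniProblem([3, 1, 4])
result = problem.evaluate(problem.solve())
```

Keeping solve() and evaluate() on the same object is what lets a harness run `problem.evaluate(solver(problem))` uniformly across tasks.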
class Solution(artifacts, attributes=None, container_id=None)[source]

Bases: dcbench.common.artifact_container.ArtifactContainer

Parameters
  • artifacts (Mapping[str, Artifact]) –

  • attributes (Mapping[str, Attribute]) –

  • container_id (str) –

container_type: str = 'solution'
artifact_specs: Mapping[str, ArtifactSpec]
task_id: str
class Task(task_id, name, summary, problem_class, solution_class, baselines=Empty DataFrame Columns: [] Index: [])[source]

Bases: dcbench.common.table.RowMixin

Task(task_id: str, name: str, summary: str, problem_class: type, solution_class: type, baselines: dcbench.common.table.Table = Empty DataFrame Columns: [] Index: [])

Parameters
  • task_id (str) –

  • name (str) –

  • summary (str) –

  • problem_class (type) –

  • solution_class (type) –

  • baselines (dcbench.common.table.Table) –

Return type

None

task_id: str
name: str
summary: str
problem_class: type
solution_class: type
baselines: dcbench.common.table.Table = Empty DataFrame Columns: [] Index: []
property problems_path
property local_problems_path
property remote_problems_url
write_problems(containers, append=True)[source]
Parameters
  • containers –

  • append (bool) –
upload_problems(include_artifacts=False, force=True)[source]

Uploads the problems to the remote storage.

Parameters
  • include_artifacts (bool) – If True, also uploads the artifacts of the problems.

  • force (bool) –

    If True, the upload overwrites the remote problems. Defaults to True.

    Warning

    It is somewhat dangerous to set `force=False`, as this could lead to remote and local problems being out of sync.

download_problems(include_artifacts=False)[source]
Parameters

include_artifacts (bool) –

property problems
attribute_specs: Mapping[str, AttributeSpec]
class Table(data)[source]

Bases: Mapping[str, dcbench.common.table.RowMixin]

property df
where(**kwargs)[source]
Parameters

kwargs (Union[int, float, str, bool, slice, Sequence[Union[int, float, str, bool]]]) –

Return type

dcbench.common.table.Table

average(*targets, groupby=None, std=False)[source]
Parameters
  • targets (str) –

  • groupby (Optional[Sequence[str]]) –

  • std (bool) –

Return type

dcbench.common.table.Table

class Result(id, attributes=None)[source]

Bases: dcbench.common.table.RowMixin

Parameters
  • id (str) –

  • attributes (Mapping[str, Union[int, float, str, bool]]) –

attribute_specs: Mapping[str, dcbench.common.table.AttributeSpec]