
Hera Workflows

hera.workflows

Hera classes.

Label module-attribute

Label = _ModelMetricLabel

AWSElasticBlockStoreVolumeVolume

Representation of AWS elastic block store volume.

Source code in src/hera/workflows/volume.py
class AWSElasticBlockStoreVolumeVolume(_BaseVolume, _ModelAWSElasticBlockStoreVolumeSource):
    """Representation of AWS elastic block store volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            aws_elastic_block_store=_ModelAWSElasticBlockStoreVolumeSource(
                fs_type=self.fs_type, partition=self.partition, read_only=self.read_only, volume_id=self.volume_id
            ),
        )
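
As a rough illustration of the manifest shape `_build_volume` produces, here is a plain-dict sketch that follows the documented field aliases (`volumeID`, `fsType`, `readOnly`); the `build_aws_ebs_volume` helper itself is hypothetical, not part of Hera:

```python
def build_aws_ebs_volume(name, volume_id, fs_type=None, partition=None, read_only=None):
    """Hypothetical sketch of the volume manifest described above."""
    source = {"volumeID": volume_id}  # volume_id is the only required field
    if fs_type is not None:
        source["fsType"] = fs_type
    if partition is not None:
        source["partition"] = partition
    if read_only is not None:
        source["readOnly"] = read_only
    return {"name": name, "awsElasticBlockStore": source}

manifest = build_aws_ebs_volume("data", "vol-0abc123", fs_type="ext4")
```

Optional fields are omitted rather than serialized as `null`, mirroring how unset Pydantic fields with `default=None` are typically excluded from the rendered manifest.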

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

partition class-attribute instance-attribute

partition = Field(default=None, description='The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')
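
The `$(VAR_NAME)` expansion that `subPathExpr` describes can be sketched in plain Python; the `expand` helper below is illustrative only, not part of Hera or Kubernetes:

```python
import re

def expand(sub_path_expr: str, env: dict) -> str:
    # Replace each $(VAR_NAME) with its value from the container environment;
    # unknown variables are left untouched in this sketch.
    return re.sub(r"\$\((\w+)\)", lambda m: env.get(m.group(1), m.group(0)), sub_path_expr)

expand("logs/$(POD_NAME)", {"POD_NAME": "pod-1"})  # "logs/pod-1"
```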

volume_id class-attribute instance-attribute

volume_id = Field(..., alias='volumeID', description='Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

AccessMode

A representation of the volume access modes for Kubernetes.

Notes

See: access modes docs for more information.

Source code in src/hera/workflows/volume.py
class AccessMode(Enum):
    """A representations of the volume access modes for Kubernetes.

    Notes:
        See: [access modes docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) for
        more information.
    """

    read_write_once = "ReadWriteOnce"
    """
    The volume can be mounted as read-write by a single node. The ReadWriteOnce access mode can still
    allow multiple pods to access the volume when the pods are running on the same node
    """

    read_only_many = "ReadOnlyMany"
    """The volume can be mounted as read-only by many nodes"""

    read_write_many = "ReadWriteMany"
    """The volume can be mounted as read-write by many nodes"""

    read_write_once_pod = "ReadWriteOncePod"
    """
    The volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if
    you want to ensure that only one pod across the whole cluster can read or write to that PVC. This
    is only supported for CSI volumes and Kubernetes version 1.22+.
    """

    def __str__(self) -> str:
        """Returns the value representation of the enum in the form of a string."""
        return str(self.value)
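
Because `__str__` returns the enum's value, an `AccessMode` member can be dropped directly into a string-typed spec. A minimal self-contained reproduction (mirroring the class above rather than importing Hera):

```python
from enum import Enum

class AccessMode(Enum):
    read_write_once = "ReadWriteOnce"
    read_only_many = "ReadOnlyMany"

    def __str__(self) -> str:
        # stringifying a member yields the Kubernetes-facing value
        return str(self.value)

str(AccessMode.read_write_once)  # "ReadWriteOnce"
```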

read_only_many class-attribute instance-attribute

read_only_many = 'ReadOnlyMany'

The volume can be mounted as read-only by many nodes

read_write_many class-attribute instance-attribute

read_write_many = 'ReadWriteMany'

The volume can be mounted as read-write by many nodes

read_write_once class-attribute instance-attribute

read_write_once = 'ReadWriteOnce'

The volume can be mounted as read-write by a single node. The ReadWriteOnce access mode can still allow multiple pods to access the volume when the pods are running on the same node

read_write_once_pod class-attribute instance-attribute

read_write_once_pod = 'ReadWriteOncePod'

The volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read or write to that PVC. This is only supported for CSI volumes and Kubernetes version 1.22+.

ArchiveStrategy

Base archive strategy model.

Source code in src/hera/workflows/archive.py
class ArchiveStrategy(BaseModel):
    """Base archive strategy model."""

    def _build_archive_strategy(self) -> _ModelArchiveStrategy:
        return _ModelArchiveStrategy()


Artifact

Base artifact representation.

Source code in src/hera/workflows/artifact.py
class Artifact(BaseModel):
    """Base artifact representation."""

    name: Optional[str]
    """name of the artifact"""

    archive: Optional[Union[_ModelArchiveStrategy, ArchiveStrategy]] = None
    """artifact archiving configuration"""

    archive_logs: Optional[bool] = None
    """whether to log the archive object"""

    artifact_gc: Optional[ArtifactGC] = None
    """artifact garbage collection configuration"""

    deleted: Optional[bool] = None
    """whether the artifact is deleted"""

    from_: Optional[str] = None
    """configures the artifact task/step origin"""

    from_expression: Optional[str] = None
    """an expression that dictates where to obtain the artifact from"""

    global_name: Optional[str] = None
    """global workflow artifact name"""

    mode: Optional[int] = None
    """mode bits to use on the artifact, must be a value between 0 and 0777 set when loading input artifacts."""

    path: Optional[str] = None
    """path where the artifact should be placed/loaded from"""

    recurse_mode: Optional[str] = None
    """recursion mode when applying the permissions of the artifact if it is an artifact folder"""

    sub_path: Optional[str] = None
    """allows the specification of an artifact from a subpath within the main source."""

    loader: Optional[ArtifactLoader] = None
    """used in Artifact annotations for determining how to load the data"""

    output: bool = False
    """used to specify artifact as an output in function signature annotations"""

    def _check_name(self):
        if not self.name:
            raise ValueError("name cannot be `None` or empty when used")

    def _build_archive(self) -> Optional[_ModelArchiveStrategy]:
        if self.archive is None:
            return None

        if isinstance(self.archive, _ModelArchiveStrategy):
            return self.archive
        return cast(ArchiveStrategy, self.archive)._build_archive_strategy()

    def _build_artifact(self) -> _ModelArtifact:
        self._check_name()
        return _ModelArtifact(
            name=self.name,
            archive=self._build_archive(),
            archive_logs=self.archive_logs,
            artifact_gc=self.artifact_gc,
            deleted=self.deleted,
            from_=self.from_,
            from_expression=self.from_expression,
            global_name=self.global_name,
            mode=self.mode,
            path=self.path,
            recurse_mode=self.recurse_mode,
            sub_path=self.sub_path,
        )

    def _build_artifact_paths(self) -> _ModelArtifactPaths:
        self._check_name()
        artifact = self._build_artifact()
        return _ModelArtifactPaths(**artifact.dict())

    def as_name(self, name: str) -> _ModelArtifact:
        """DEPRECATED, use with_name.

        Returns a 'built' copy of the current artifact, renamed using the specified `name`.
        """
        logger.warning("'as_name' is deprecated, use 'with_name'")
        artifact = self._build_artifact()
        artifact.name = name
        return artifact

    def with_name(self, name: str) -> Artifact:
        """Returns a copy of the current artifact, renamed using the specified `name`."""
        artifact = self.copy(deep=True)
        artifact.name = name
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return [
            "mode",
            "name",
            "optional",
            "path",
            "recurseMode",
            "subPath",
        ]
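
The `with_name` pattern above (deep-copy, then rename) keeps the original artifact untouched. A dataclass sketch of the same idea, assuming nothing Hera-specific (`Art` is a hypothetical stand-in for `Artifact`):

```python
import copy
from dataclasses import dataclass
from typing import Optional

@dataclass
class Art:  # hypothetical stand-in for Artifact
    name: Optional[str] = None
    path: Optional[str] = None

    def with_name(self, name: str) -> "Art":
        clone = copy.deepcopy(self)  # leave self unmodified
        clone.name = name
        return clone

a = Art(name="a", path="/tmp/a")
b = a.with_name("b")  # b.name == "b", a.name still "a"
```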

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.
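
The permitted range is octal; Python's `0o` literals make the bound explicit (the value below is a common example, not a Hera default):

```python
# 0o... literals are Python's octal notation, matching the 0-0777 range above
mode = 0o644                # rw-r--r--, a common mode for input artifacts
assert 0 <= mode <= 0o777   # 0o777 is decimal 511, the documented upper bound
```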

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.


as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

ArtifactLoader

Enum for artifact loader options.

Source code in src/hera/workflows/artifact.py
class ArtifactLoader(Enum):
    """Enum for artifact loader options."""

    json = "json"
    file = "file"

file class-attribute instance-attribute

file = 'file'

json class-attribute instance-attribute

json = 'json'

ArtifactoryArtifact

An artifact sourced from Artifactory.

Source code in src/hera/workflows/artifact.py
class ArtifactoryArtifact(_ModelArtifactoryArtifact, Artifact):
    """An artifact sourced from Artifactory."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.artifactory = _ModelArtifactoryArtifact(
            url=self.url, password_secret=self.password_secret, username_secret=self.username_secret
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + ["url", "password_secret", "username_secret"]

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

password_secret class-attribute instance-attribute

password_secret = Field(default=None, alias='passwordSecret', description='PasswordSecret is the secret selector to the repository password')

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

url class-attribute instance-attribute

url = Field(..., description='URL of the artifact')

username_secret class-attribute instance-attribute

username_secret = Field(default=None, alias='usernameSecret', description='UsernameSecret is the secret selector to the repository username')



AzureArtifact

An artifact sourced from Microsoft Azure.

Source code in src/hera/workflows/artifact.py
class AzureArtifact(_ModelAzureArtifact, Artifact):
    """An artifact sourced from Microsoft Azure."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.azure = _ModelAzureArtifact(
            account_key_secret=self.account_key_secret,
            blob=self.blob,
            container=self.container,
            endpoint=self.endpoint,
            use_sdk_creds=self.use_sdk_creds,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + [
            "endpoint",
            "container",
            "blob",
            "account_key_secret",
            "use_sdk_creds",
        ]
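
The `endpoint` field's documented shape, `https://<ACCOUNT_NAME>.blob.core.windows.net`, is easy to derive from an account name; the helper below is hypothetical:

```python
def azure_blob_endpoint(account_name: str) -> str:
    # builds the service URL shape quoted in the endpoint description
    return f"https://{account_name}.blob.core.windows.net"

azure_blob_endpoint("myaccount")  # "https://myaccount.blob.core.windows.net"
```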

account_key_secret class-attribute instance-attribute

account_key_secret = Field(default=None, alias='accountKeySecret', description='AccountKeySecret is the secret selector to the Azure Blob Storage account access key')

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

blob class-attribute instance-attribute

blob = Field(..., description='Blob is the blob name (i.e., path) in the container where the artifact resides')

container class-attribute instance-attribute

container = Field(..., description='Container is the container where resources will be stored')

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

endpoint class-attribute instance-attribute

endpoint = Field(..., description='Endpoint is the service url associated with an account. It is most likely "https://<ACCOUNT_NAME>.blob.core.windows.net"')

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

use_sdk_creds class-attribute instance-attribute

use_sdk_creds = Field(default=None, alias='useSDKCreds', description='UseSDKCreds tells the driver to figure out credentials based on sdk defaults.')



AzureDiskVolumeVolume

Representation of an Azure disk volume.

Source code in src/hera/workflows/volume.py
class AzureDiskVolumeVolume(_BaseVolume, _ModelAzureDiskVolumeSource):
    """Representation of an Azure disk volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            azure_disk=_ModelAzureDiskVolumeSource(
                caching_mode=self.caching_mode,
                disk_name=self.disk_name,
                disk_uri=self.disk_uri,
                fs_type=self.fs_type,
                kind=self.kind,
                read_only=self.read_only,
            ),
        )

caching_mode class-attribute instance-attribute

caching_mode = Field(default=None, alias='cachingMode', description='Host Caching mode: None, Read Only, Read Write.')

disk_name class-attribute instance-attribute

disk_name = Field(..., alias='diskName', description='The Name of the data disk in the blob storage')

disk_uri class-attribute instance-attribute

disk_uri = Field(..., alias='diskURI', description='The URI of the data disk in the blob storage')

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.')

kind class-attribute instance-attribute

kind = Field(default=None, description='Expected values: Shared (multiple blob disks per storage account), Dedicated (single blob disk per storage account), Managed (Azure managed data disk, only in managed availability set). Defaults to Shared.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')


AzureFileVolumeVolume

Representation of an Azure file that can be mounted as a volume.

Source code in src/hera/workflows/volume.py
class AzureFileVolumeVolume(_BaseVolume, _ModelAzureFileVolumeSource):
    """Representation of an Azure file that can be mounted as a volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            azure_file=_ModelAzureFileVolumeSource(
                read_only=self.read_only, secret_name=self.secret_name, share_name=self.share_name
            ),
        )

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_name class-attribute instance-attribute

secret_name = Field(..., alias='secretName', description='the name of secret that contains Azure Storage Account Name and Key')

share_name class-attribute instance-attribute

share_name = Field(..., alias='shareName', description='Share Name')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

CSIVolume

Representation of a container service interface volume.

Source code in src/hera/workflows/volume.py
class CSIVolume(_BaseVolume, _ModelCSIVolumeSource):
    """Representation of a container service interface volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            csi=_ModelCSIVolumeSource(
                driver=self.driver,
                fs_type=self.fs_type,
                node_publish_secret_ref=self.node_publish_secret_ref,
                read_only=self.read_only,
                volume_attributes=self.volume_attributes,
            ),
        )

driver class-attribute instance-attribute

driver = Field(..., description='Driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.')

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

node_publish_secret_ref class-attribute instance-attribute

node_publish_secret_ref = Field(default=None, alias='nodePublishSecretRef', description='NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_attributes class-attribute instance-attribute

volume_attributes = Field(default=None, alias='volumeAttributes', description="VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.")

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

CephFSVolumeVolume

Representation of a Ceph file system volume.

Source code in src/hera/workflows/volume.py
class CephFSVolumeVolume(_BaseVolume, _ModelCephFSVolumeSource):
    """Representation of a Ceph file system volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            cephfs=_ModelCephFSVolumeSource(
                monitors=self.monitors,
                path=self.path,
                read_only=self.read_only,
                secret_file=self.secret_file,
                secret_ref=self.secret_ref,
                user=self.user,
            ),
        )

monitors class-attribute instance-attribute

monitors = Field(..., description='Required: Monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

path class-attribute instance-attribute

path = Field(default=None, description='Optional: Used as the mounted root, rather than the full Ceph tree, default is /')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_file class-attribute instance-attribute

secret_file = Field(default=None, alias='secretFile', description='Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='Optional: SecretRef is a reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

user class-attribute instance-attribute

user = Field(default=None, description='Optional: User is the rados user name, default is admin. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

CinderVolume

Representation of a Cinder volume.

Source code in src/hera/workflows/volume.py
class CinderVolume(_BaseVolume, _ModelCinderVolumeSource):
    """Representation of a Cinder volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            cinder=_ModelCinderVolumeSource(
                fs_type=self.fs_type,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
                volume_id=self.volume_id,
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='Optional: points to a secret object containing parameters used to connect to OpenStack.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_id class-attribute instance-attribute

volume_id = Field(..., alias='volumeID', description='volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ClusterWorkflowTemplate

ClusterWorkflowTemplates are cluster scoped templates.

Since cluster workflow templates are scoped at the cluster level, they are available globally in the cluster.

Source code in src/hera/workflows/cluster_workflow_template.py
class ClusterWorkflowTemplate(WorkflowTemplate):
    """ClusterWorkflowTemplates are cluster scoped templates.

    Since cluster workflow templates are scoped at the cluster level, they are available globally in the cluster.
    """

    @validator("namespace", pre=True, always=True)
    def _set_namespace(cls, v):
        if v is not None:
            raise ValueError("namespace is not a valid field on a ClusterWorkflowTemplate")

    def create(self) -> TWorkflow:  # type: ignore
        """Creates the ClusterWorkflowTemplate on the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        return self.workflows_service.create_cluster_workflow_template(
            ClusterWorkflowTemplateCreateRequest(template=self.build())
        )

    def get(self) -> TWorkflow:
        """Attempts to get a workflow template based on the parameters of this template e.g. name + namespace."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.name, "workflow name not defined"
        return self.workflows_service.get_cluster_workflow_template(name=self.name)

    def update(self) -> TWorkflow:
        """Attempts to perform a workflow template update based on the parameters of this template.

        Note that this creates the template if it does not exist. In addition, this performs
        a get prior to updating to get the resource version to update in the first place. If you know the template
        does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
        """
        assert self.workflows_service, "workflow service not initialized"
        assert self.name, "workflow name not defined"
        # we always need to do a get prior to updating to get the resource version to update in the first place
        # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

        template = self.build()
        try:
            curr = self.get()
            template.metadata.resource_version = curr.metadata.resource_version
        except NotFound:
            return self.create()
        return self.workflows_service.update_cluster_workflow_template(
            self.name,
            ClusterWorkflowTemplateUpdateRequest(template=template),
        )

    def lint(self) -> TWorkflow:
        """Lints the ClusterWorkflowTemplate using the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        return self.workflows_service.lint_cluster_workflow_template(
            ClusterWorkflowTemplateLintRequest(template=self.build())
        )

    def build(self) -> TWorkflow:
        """Builds the ClusterWorkflowTemplate and its components into an Argo schema ClusterWorkflowTemplate object."""
        # Note that ClusterWorkflowTemplates are exactly the same as WorkflowTemplates except for the kind which is
        # handled in Workflow._set_kind (by __name__). When using ClusterWorkflowTemplates via templateRef, clients
        # should specify cluster_scope=True, but that is an intrinsic property of ClusterWorkflowTemplates from our
        # perspective.
        return _ModelClusterWorkflowTemplate(**super().build().dict())

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

api_version class-attribute instance-attribute

api_version = None

archive_logs class-attribute instance-attribute

archive_logs = None

arguments instance-attribute

arguments

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact_repository_ref class-attribute instance-attribute

artifact_repository_ref = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

cluster_name class-attribute instance-attribute

cluster_name = None

creation_timestamp class-attribute instance-attribute

creation_timestamp = None

deletion_grace_period_seconds class-attribute instance-attribute

deletion_grace_period_seconds = None

deletion_timestamp class-attribute instance-attribute

deletion_timestamp = None

dns_config class-attribute instance-attribute

dns_config = None

dns_policy class-attribute instance-attribute

dns_policy = None

entrypoint class-attribute instance-attribute

entrypoint = None

executor class-attribute instance-attribute

executor = None

finalizers class-attribute instance-attribute

finalizers = None

generate_name class-attribute instance-attribute

generate_name = None

generation class-attribute instance-attribute

generation = None

hooks class-attribute instance-attribute

hooks = None

host_aliases class-attribute instance-attribute

host_aliases = None

host_network class-attribute instance-attribute

host_network = None

image_pull_secrets class-attribute instance-attribute

image_pull_secrets = None

kind class-attribute instance-attribute

kind = None

labels class-attribute instance-attribute

labels = None

managed_fields class-attribute instance-attribute

managed_fields = None

metrics instance-attribute

metrics

name class-attribute instance-attribute

name = None

namespace class-attribute instance-attribute

namespace = None

node_selector class-attribute instance-attribute

node_selector = None

on_exit class-attribute instance-attribute

on_exit = None

owner_references class-attribute instance-attribute

owner_references = None

parallelism class-attribute instance-attribute

parallelism = None

pod_disruption_budget class-attribute instance-attribute

pod_disruption_budget = None

pod_gc class-attribute instance-attribute

pod_gc = None

pod_metadata class-attribute instance-attribute

pod_metadata = None

pod_priority class-attribute instance-attribute

pod_priority = None

pod_priority_class_name class-attribute instance-attribute

pod_priority_class_name = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

resource_version class-attribute instance-attribute

resource_version = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

self_link class-attribute instance-attribute

self_link = None

service_account_name class-attribute instance-attribute

service_account_name = None

shutdown class-attribute instance-attribute

shutdown = None

status class-attribute instance-attribute

status = None

suspend class-attribute instance-attribute

suspend = None

synchronization class-attribute instance-attribute

synchronization = None

template_defaults class-attribute instance-attribute

template_defaults = None

templates class-attribute instance-attribute

templates = []

tolerations class-attribute instance-attribute

tolerations = None

ttl_strategy class-attribute instance-attribute

ttl_strategy = None

uid class-attribute instance-attribute

uid = None

volume_claim_gc class-attribute instance-attribute

volume_claim_gc = None

volume_claim_templates class-attribute instance-attribute

volume_claim_templates = None

volumes instance-attribute

volumes

workflow_metadata class-attribute instance-attribute

workflow_metadata = None

workflow_template_ref class-attribute instance-attribute

workflow_template_ref = None

workflows_service class-attribute instance-attribute

workflows_service = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ModelMapper

Source code in src/hera/workflows/_mixins.py
class ModelMapper:
    def __init__(self, model_path: str, hera_builder: Optional[Callable] = None):
        self.model_path = None
        self.builder = hera_builder

        if not model_path:
            # Allows overriding parent attribute annotations to remove the mapping
            return

        self.model_path = model_path.split(".")
        curr_class: Type[BaseModel] = self._get_model_class()
        for key in self.model_path:
            if key not in curr_class.__fields__:
                raise ValueError(f"Model key '{key}' does not exist in class {curr_class}")
            curr_class = curr_class.__fields__[key].outer_type_

    @classmethod
    def _get_model_class(cls) -> Type[BaseModel]:
        raise NotImplementedError

    @classmethod
    def build_model(
        cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
    ) -> TWorkflow:
        assert isinstance(hera_obj, ModelMapperMixin)

        for attr, annotation in hera_class._get_all_annotations().items():
            if get_origin(annotation) is Annotated and isinstance(
                get_args(annotation)[1], ModelMapperMixin.ModelMapper
            ):
                mapper = get_args(annotation)[1]
                # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
                value = (
                    getattr(hera_obj, mapper.builder.__name__)()
                    if mapper.builder is not None
                    else getattr(hera_obj, attr)
                )
                if value is not None:
                    _set_model_attr(model, mapper.model_path, value)

        return model

builder instance-attribute

builder = hera_builder

model_path instance-attribute

model_path = model_path.split('.')

build_model classmethod

build_model(hera_class, hera_obj, model)

Source code in src/hera/workflows/_mixins.py
@classmethod
def build_model(
    cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
) -> TWorkflow:
    assert isinstance(hera_obj, ModelMapperMixin)

    for attr, annotation in hera_class._get_all_annotations().items():
        if get_origin(annotation) is Annotated and isinstance(
            get_args(annotation)[1], ModelMapperMixin.ModelMapper
        ):
            mapper = get_args(annotation)[1]
            # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
            value = (
                getattr(hera_obj, mapper.builder.__name__)()
                if mapper.builder is not None
                else getattr(hera_obj, attr)
            )
            if value is not None:
                _set_model_attr(model, mapper.model_path, value)

    return model

build

build()

Builds the ClusterWorkflowTemplate and its components into an Argo schema ClusterWorkflowTemplate object.

Source code in src/hera/workflows/cluster_workflow_template.py
def build(self) -> TWorkflow:
    """Builds the ClusterWorkflowTemplate and its components into an Argo schema ClusterWorkflowTemplate object."""
    # Note that ClusterWorkflowTemplates are exactly the same as WorkflowTemplates except for the kind which is
    # handled in Workflow._set_kind (by __name__). When using ClusterWorkflowTemplates via templateRef, clients
    # should specify cluster_scope=True, but that is an intrinsic property of ClusterWorkflowTemplates from our
    # perspective.
    return _ModelClusterWorkflowTemplate(**super().build().dict())

create

create()

Creates the ClusterWorkflowTemplate on the Argo cluster.

Source code in src/hera/workflows/cluster_workflow_template.py
def create(self) -> TWorkflow:  # type: ignore
    """Creates the ClusterWorkflowTemplate on the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    return self.workflows_service.create_cluster_workflow_template(
        ClusterWorkflowTemplateCreateRequest(template=self.build())
    )

create_as_workflow

create_as_workflow(generate_name=None, wait=False, poll_interval=5)

Run this WorkflowTemplate instantly as a Workflow.

If generate_name is given, the created workflow uses generate_name as a prefix, following the usual behavior of hera.workflows.Workflow.generate_name. If not given, the WorkflowTemplate's name is used, truncated to 57 characters and appended with a hyphen.

Note: this function does not require the WorkflowTemplate to already exist on the cluster.

Source code in src/hera/workflows/workflow_template.py
def create_as_workflow(
    self,
    generate_name: Optional[str] = None,
    wait: bool = False,
    poll_interval: int = 5,
) -> TWorkflow:
    """Run this WorkflowTemplate instantly as a Workflow.

    If generate_name is given, the workflow created uses generate_name as a prefix, as per the usual for
    hera.workflows.Workflow.generate_name. If not given, the WorkflowTemplate's name will be used, truncated to 57
    chars and appended with a hyphen.

    Note: this function does not require the WorkflowTemplate to already exist on the cluster
    """
    workflow = self._get_as_workflow(generate_name)
    return workflow.create(wait=wait, poll_interval=poll_interval)

from_dict classmethod

from_dict(model_dict)

Create a WorkflowTemplate from a WorkflowTemplate contained in a dict.

Examples:

>>> my_workflow_template = WorkflowTemplate(name="my-wft")
>>> my_workflow_template == WorkflowTemplate.from_dict(my_workflow_template.to_dict())
True
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a dict.

    Examples:
        >>> my_workflow_template = WorkflowTemplate(name="my-wft")
        >>> my_workflow_template == WorkflowTemplate.from_dict(my_workflow_template.to_dict())
        True
    """
    return cls._from_dict(model_dict, _ModelWorkflowTemplate)

from_file classmethod

from_file(yaml_file)

Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML file.

Examples:

>>> yaml_file = Path(...)
>>> my_workflow_template = WorkflowTemplate.from_file(yaml_file)
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML file.

    Examples:
        >>> yaml_file = Path(...)
        >>> my_workflow_template = WorkflowTemplate.from_file(yaml_file)
    """
    return cls._from_file(yaml_file, _ModelWorkflowTemplate)

from_yaml classmethod

from_yaml(yaml_str)

Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML string.

Examples:

>>> my_workflow_template = WorkflowTemplate.from_yaml(yaml_str)
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML string.

    Examples:
        >>> my_workflow_template = WorkflowTemplate.from_yaml(yaml_str)
    """
    return cls._from_yaml(yaml_str, _ModelWorkflowTemplate)

get

get()

Attempts to get the cluster workflow template based on the parameters of this template, e.g. its name (cluster workflow templates are not namespaced).

Source code in src/hera/workflows/cluster_workflow_template.py
def get(self) -> TWorkflow:
    """Attempts to get a workflow template based on the parameters of this template e.g. name + namespace."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.name, "workflow name not defined"
    return self.workflows_service.get_cluster_workflow_template(name=self.name)

get_parameter

get_parameter(name)

Attempts to find and return a Parameter of the specified name.

Source code in src/hera/workflows/workflow.py
def get_parameter(self, name: str) -> Parameter:
    """Attempts to find and return a `Parameter` of the specified name."""
    arguments = self._build_arguments()
    if arguments is None:
        raise KeyError("Workflow has no arguments set")
    if arguments.parameters is None:
        raise KeyError("Workflow has no argument parameters set")

    parameters = arguments.parameters
    if next((p for p in parameters if p.name == name), None) is None:
        raise KeyError(f"`{name}` is not a valid workflow parameter")
    return Parameter(name=name, value=f"{{{{workflow.parameters.{name}}}}}")

lint

lint()

Lints the ClusterWorkflowTemplate using the Argo cluster.

Source code in src/hera/workflows/cluster_workflow_template.py
def lint(self) -> TWorkflow:
    """Lints the ClusterWorkflowTemplate using the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    return self.workflows_service.lint_cluster_workflow_template(
        ClusterWorkflowTemplateLintRequest(template=self.build())
    )

to_dict

to_dict()

Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary.

Source code in src/hera/workflows/workflow.py
def to_dict(self) -> Any:
    """Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary."""
    return self.build().dict(exclude_none=True, by_alias=True)

to_file

to_file(output_directory='.', name='', *args, **kwargs)

Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

Parameters:

- output_directory (Union[Path, str], default '.'): The directory to write the file to. Defaults to the current working directory.
- name (str, default ''): The name of the file to write, without the file extension. Defaults to the Workflow's name or a generated name.
- *args: Additional arguments to pass to yaml.dump.
- **kwargs: Additional keyword arguments to pass to yaml.dump.
Source code in src/hera/workflows/workflow.py
def to_file(self, output_directory: Union[Path, str] = ".", name: str = "", *args, **kwargs) -> Path:
    """Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

    Args:
        output_directory: The directory to write the file to. Defaults to the current working directory.
        name: The name of the file to write without the file extension.  Defaults to the Workflow's name or a
              generated name.
        *args: Additional arguments to pass to `yaml.dump`.
        **kwargs: Additional keyword arguments to pass to `yaml.dump`.
    """
    workflow_name = self.name or (self.generate_name or "workflow").rstrip("-")
    name = name or workflow_name
    output_directory = Path(output_directory)
    output_path = Path(output_directory) / f"{name}.yaml"
    output_directory.mkdir(parents=True, exist_ok=True)
    output_path.write_text(self.to_yaml(*args, **kwargs))
    return output_path.absolute()

to_yaml

to_yaml(*args, **kwargs)

Builds the Workflow as an Argo schema Workflow object and returns it as yaml string.

Source code in src/hera/workflows/workflow.py
def to_yaml(self, *args, **kwargs) -> str:
    """Builds the Workflow as an Argo schema Workflow object and returns it as yaml string."""
    if not _yaml:
        raise ImportError("`PyYAML` is not installed. Install `hera[yaml]` to bring in the extra dependency")
    # Set some default options if not provided by the user
    kwargs.setdefault("default_flow_style", False)
    kwargs.setdefault("sort_keys", False)
    return _yaml.dump(self.to_dict(), *args, **kwargs)

update

update()

Attempts to perform a workflow template update based on the parameters of this template.

Note that this creates the template if it does not exist. In addition, this performs a get prior to updating to get the resource version to update in the first place. If you know the template does not exist ahead of time, it is more efficient to use create() directly to avoid one round trip.

Source code in src/hera/workflows/cluster_workflow_template.py
def update(self) -> TWorkflow:
    """Attempts to perform a workflow template update based on the parameters of this template.

    Note that this creates the template if it does not exist. In addition, this performs
    a get prior to updating to get the resource version to update in the first place. If you know the template
    does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
    """
    assert self.workflows_service, "workflow service not initialized"
    assert self.name, "workflow name not defined"
    # we always need to do a get prior to updating to get the resource version to update in the first place
    # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

    template = self.build()
    try:
        curr = self.get()
        template.metadata.resource_version = curr.metadata.resource_version
    except NotFound:
        return self.create()
    return self.workflows_service.update_cluster_workflow_template(
        self.name,
        ClusterWorkflowTemplateUpdateRequest(template=template),
    )

wait

wait(poll_interval=5)

Waits for the Workflow to complete execution.

Parameters:

- poll_interval (int, default 5): The interval in seconds to poll the workflow status.

Source code in src/hera/workflows/workflow.py
def wait(self, poll_interval: int = 5) -> TWorkflow:
    """Waits for the Workflow to complete execution.

    Parameters
    ----------
    poll_interval: int = 5
        The interval in seconds to poll the workflow status.
    """
    assert self.workflows_service is not None, "workflow service not initialized"
    assert self.namespace is not None, "workflow namespace not defined"
    assert self.name is not None, "workflow name not defined"

    # here we use the sleep interval to wait for the workflow post creation. This is to address potential
    # race conditions such as:
    # 1. Argo server says "workflow was accepted" but the workflow is not yet created
    # 2. Hera wants to verify the status of the workflow, but it's not yet defined because it's not created
    # 3. Argo finally creates the workflow
    # 4. Hera throws an `AssertionError` because the phase assertion fails
    time.sleep(poll_interval)
    wf = self.workflows_service.get_workflow(self.name, namespace=self.namespace)
    assert wf.metadata.name is not None, f"workflow name not defined for workflow {self.name}"

    assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
    assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
    status = WorkflowStatus.from_argo_status(wf.status.phase)

    # keep polling for workflow status until completed, at the interval dictated by the user
    while status == WorkflowStatus.running:
        time.sleep(poll_interval)
        wf = self.workflows_service.get_workflow(wf.metadata.name, namespace=self.namespace)
        assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
        assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
        status = WorkflowStatus.from_argo_status(wf.status.phase)
    return wf

ConfigMapEnv

ConfigMapEnv is an environment variable whose value originates from a Kubernetes config map.

Source code in src/hera/workflows/env.py
class ConfigMapEnv(_BaseEnv):
    """`ConfigMapEnv` is an environment variable whose value originates from a Kubernetes config map."""

    config_map_name: Optional[str]
    """the name of the config map to reference in Kubernetes"""

    config_map_key: str
    """the name of the field key whole value should be registered as an environment variable"""

    optional: Optional[bool] = None
    """whether the existence of the config map is optional"""

    def build(self) -> _ModelEnvVar:
        """Constructs and returns the Argo environment specification."""
        return _ModelEnvVar(
            name=self.name,
            value_from=_ModelEnvVarSource(
                config_map_key_ref=_ModelConfigMapKeySelector(
                    name=self.config_map_name, key=self.config_map_key, optional=self.optional
                )
            ),
        )

config_map_key instance-attribute

config_map_key

the name of the field key whose value should be registered as an environment variable

config_map_name instance-attribute

config_map_name

the name of the config map to reference in Kubernetes

name instance-attribute

name

the name of the environment variable. This is universally required irrespective of the type of env variable

optional class-attribute instance-attribute

optional = None

whether the existence of the config map is optional

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo environment specification.

Source code in src/hera/workflows/env.py
def build(self) -> _ModelEnvVar:
    """Constructs and returns the Argo environment specification."""
    return _ModelEnvVar(
        name=self.name,
        value_from=_ModelEnvVarSource(
            config_map_key_ref=_ModelConfigMapKeySelector(
                name=self.config_map_name, key=self.config_map_key, optional=self.optional
            )
        ),
    )

ConfigMapEnvFrom

Exposes a K8s config map’s value as an environment variable.

Source code in src/hera/workflows/env_from.py
class ConfigMapEnvFrom(_BaseEnvFrom, _ModelConfigMapEnvSource):
    """Exposes a K8s config map's value as an environment variable."""

    def build(self) -> _ModelEnvFromSource:
        """Constructs and returns the Argo EnvFrom specification."""
        return _ModelEnvFromSource(
            prefix=self.prefix,
            config_map_ref=_ModelConfigMapEnvSource(
                name=self.name,
                optional=self.optional,
            ),
        )

name class-attribute instance-attribute

name = Field(default=None, description='Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names')

optional class-attribute instance-attribute

optional = Field(default=None, description='Specify whether the ConfigMap must be defined')

prefix class-attribute instance-attribute

prefix = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo EnvFrom specification.

Source code in src/hera/workflows/env_from.py
def build(self) -> _ModelEnvFromSource:
    """Constructs and returns the Argo EnvFrom specification."""
    return _ModelEnvFromSource(
        prefix=self.prefix,
        config_map_ref=_ModelConfigMapEnvSource(
            name=self.name,
            optional=self.optional,
        ),
    )

ConfigMapVolume

Representation of a config map volume.

Source code in src/hera/workflows/volume.py
class ConfigMapVolume(_BaseVolume, _ModelConfigMapVolumeSource):  # type: ignore
    """Representation of a config map volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            config_map=_ModelConfigMapVolumeSource(
                default_mode=self.default_mode, items=self.items, name=self.name, optional=self.optional
            ),
        )

default_mode class-attribute instance-attribute

default_mode = Field(default=None, alias='defaultMode', description='Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.')

items class-attribute instance-attribute

items = Field(default=None, description="If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.")

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

optional class-attribute instance-attribute

optional = Field(default=None, description='Specify whether the ConfigMap or its keys must be defined')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Container

The Container template type defines a container to run on Argo.

The container generally consists of running a Docker container remotely, which is configured via fields such as command (the command to run to start the container), args (arguments for the container), working_dir (for setting the active working directory relative to container execution), etc.

Source code in src/hera/workflows/container.py
class Container(
    EnvIOMixin,
    ContainerMixin,
    TemplateMixin,
    ResourceMixin,
    VolumeMountMixin,
    CallableTemplateMixin,
):
    """The Container template type defines a container to run on Argo.

    The container generally consists of running a Docker container remotely, which is configured via fields such as
    `command` (the command to run to start the container), `args` (arguments for the container), `working_dir` (for
    setting the active working directory relative to container execution), etc.
    """

    args: Optional[List[str]] = None
    command: Optional[List[str]] = None
    lifecycle: Optional[Lifecycle] = None
    security_context: Optional[SecurityContext] = None
    working_dir: Optional[str] = None

    def _build_container(self) -> _ModelContainer:
        """Builds the generated `Container` representation."""
        return _ModelContainer(
            args=self.args,
            command=self.command,
            env=self._build_env(),
            env_from=self._build_env_from(),
            image=self.image,
            image_pull_policy=self._build_image_pull_policy(),
            lifecycle=self.lifecycle,
            liveness_probe=self.liveness_probe,
            ports=self.ports,
            readiness_probe=self.readiness_probe,
            resources=self._build_resources(),
            security_context=self.security_context,
            startup_probe=self.startup_probe,
            stdin=self.stdin,
            stdin_once=self.stdin_once,
            termination_message_path=self.termination_message_path,
            termination_message_policy=self.termination_message_policy,
            tty=self.tty,
            volume_devices=self.volume_devices,
            volume_mounts=self._build_volume_mounts(),
            working_dir=self.working_dir,
        )

    def _build_template(self) -> _ModelTemplate:
        """Builds the generated `Template` representation of the container."""
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            container=self._build_container(),
            daemon=self.daemon,
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            metrics=self._build_metrics(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            plugin=self.plugin,
            pod_spec_patch=self.pod_spec_patch,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
            volumes=self._build_volumes(),
        )

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

args class-attribute instance-attribute

args = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

command class-attribute instance-attribute

command = None

daemon class-attribute instance-attribute

daemon = None

env class-attribute instance-attribute

env = None

env_from class-attribute instance-attribute

env_from = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

image class-attribute instance-attribute

image = None

image_pull_policy class-attribute instance-attribute

image_pull_policy = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

lifecycle class-attribute instance-attribute

lifecycle = None

liveness_probe class-attribute instance-attribute

liveness_probe = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

ports class-attribute instance-attribute

ports = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

readiness_probe class-attribute instance-attribute

readiness_probe = None

resources class-attribute instance-attribute

resources = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

startup_probe class-attribute instance-attribute

startup_probe = None

stdin class-attribute instance-attribute

stdin = None

stdin_once class-attribute instance-attribute

stdin_once = None

synchronization class-attribute instance-attribute

synchronization = None

termination_message_path class-attribute instance-attribute

termination_message_path = None

termination_message_policy class-attribute instance-attribute

termination_message_policy = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

tty class-attribute instance-attribute

tty = None

volume_devices class-attribute instance-attribute

volume_devices = None

volume_mounts class-attribute instance-attribute

volume_mounts = None

volumes class-attribute instance-attribute

volumes = None

working_dir class-attribute instance-attribute

working_dir = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ContainerNode

A regular container that can be used as part of a hera.workflows.ContainerSet.

See Also

Container Set Template

Source code in src/hera/workflows/container_set.py
class ContainerNode(ContainerMixin, VolumeMountMixin, ResourceMixin, EnvMixin, SubNodeMixin):
    """A regular container that can be used as part of a `hera.workflows.ContainerSet`.

    See Also:
        [Container Set Template](https://argoproj.github.io/argo-workflows/container-set-template/)
    """

    name: str
    args: Optional[List[str]] = None
    command: Optional[List[str]] = None
    dependencies: Optional[List[str]] = None
    lifecycle: Optional[Lifecycle] = None
    security_context: Optional[SecurityContext] = None
    working_dir: Optional[str] = None

    def next(self, other: ContainerNode) -> ContainerNode:
        """Sets the given container as a dependency of this container and returns the given container.

        Examples:
            >>> from hera.workflows import ContainerNode
            >>> a, b = ContainerNode(name="a"), ContainerNode(name="b")
            >>> a.next(b)
            >>> b.dependencies
            ['a']
        """
        assert issubclass(other.__class__, ContainerNode)
        if other.dependencies is None:
            other.dependencies = [self.name]
        else:
            other.dependencies.append(self.name)
        other.dependencies = sorted(list(set(other.dependencies)))
        return other

    def __rrshift__(self, other: List[ContainerNode]) -> ContainerNode:
        """Sets `self` as a dependent of the given list of other `hera.workflows.ContainerNode`.

        Practically, the `__rrshift__` allows us to express statements such as `[a, b, c] >> d`, where `d` is `self.`

        Examples:
            >>> from hera.workflows import ContainerNode
            >>> a, b, c = ContainerNode(name="a"), ContainerNode(name="b"), ContainerNode(name="c")
            >>> [a, b] >> c
            >>> c.dependencies
            ['a', 'b']
        """
        assert isinstance(other, list), f"Unknown type {type(other)} specified using reverse right bitshift operator"
        for o in other:
            o.next(self)
        return self

    def __rshift__(
        self, other: Union[ContainerNode, List[ContainerNode]]
    ) -> Union[ContainerNode, List[ContainerNode]]:
        """Sets the given container as a dependency of this container and returns the given container.

        Examples:
            >>> from hera.workflows import ContainerNode
            >>> a, b = ContainerNode(name="a"), ContainerNode(name="b")
            >>> a >> b
            >>> b.dependencies
            ['a']
        """
        if isinstance(other, ContainerNode):
            return self.next(other)
        elif isinstance(other, list):
            for o in other:
                assert isinstance(
                    o, ContainerNode
                ), f"Unknown list item type {type(o)} specified using right bitshift operator `>>`"
                self.next(o)
            return other
        raise ValueError(f"Unknown type {type(other)} provided to `__rshift__`")

    def _build_container_node(self) -> _ModelContainerNode:
        """Builds the generated `ContainerNode`."""
        image_pull_policy = self._build_image_pull_policy()
        return _ModelContainerNode(
            args=self.args,
            command=self.command,
            dependencies=self.dependencies,
            env=self._build_env(),
            env_from=self._build_env_from(),
            image=self.image,
            image_pull_policy=None if image_pull_policy is None else image_pull_policy.value,
            lifecycle=self.lifecycle,
            liveness_probe=self.liveness_probe,
            name=self.name,
            ports=self.ports,
            readiness_probe=self.readiness_probe,
            resources=self._build_resources(),
            security_context=self.security_context,
            startup_probe=self.startup_probe,
            stdin=self.stdin,
            stdin_once=self.stdin_once,
            termination_message_path=self.termination_message_path,
            termination_message_policy=self.termination_message_policy,
            tty=self.tty,
            volume_devices=self.volume_devices,
            volume_mounts=self._build_volume_mounts(),
            working_dir=self.working_dir,
        )

args class-attribute instance-attribute

args = None

command class-attribute instance-attribute

command = None

dependencies class-attribute instance-attribute

dependencies = None

env class-attribute instance-attribute

env = None

env_from class-attribute instance-attribute

env_from = None

image class-attribute instance-attribute

image = None

image_pull_policy class-attribute instance-attribute

image_pull_policy = None

lifecycle class-attribute instance-attribute

lifecycle = None

liveness_probe class-attribute instance-attribute

liveness_probe = None

name instance-attribute

name

ports class-attribute instance-attribute

ports = None

readiness_probe class-attribute instance-attribute

readiness_probe = None

resources class-attribute instance-attribute

resources = None

security_context class-attribute instance-attribute

security_context = None

startup_probe class-attribute instance-attribute

startup_probe = None

stdin class-attribute instance-attribute

stdin = None

stdin_once class-attribute instance-attribute

stdin_once = None

termination_message_path class-attribute instance-attribute

termination_message_path = None

termination_message_policy class-attribute instance-attribute

termination_message_policy = None

tty class-attribute instance-attribute

tty = None

volume_devices class-attribute instance-attribute

volume_devices = None

volume_mounts class-attribute instance-attribute

volume_mounts = None

volumes class-attribute instance-attribute

volumes = None

working_dir class-attribute instance-attribute

working_dir = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

next

next(other)

Sets the given container as a dependency of this container and returns the given container.

Examples:

>>> from hera.workflows import ContainerNode
>>> a, b = ContainerNode(name="a"), ContainerNode(name="b")
>>> a.next(b)
>>> b.dependencies
['a']
Source code in src/hera/workflows/container_set.py
def next(self, other: ContainerNode) -> ContainerNode:
    """Sets the given container as a dependency of this container and returns the given container.

    Examples:
        >>> from hera.workflows import ContainerNode
        >>> a, b = ContainerNode(name="a"), ContainerNode(name="b")
        >>> a.next(b)
        >>> b.dependencies
        ['a']
    """
    assert issubclass(other.__class__, ContainerNode)
    if other.dependencies is None:
        other.dependencies = [self.name]
    else:
        other.dependencies.append(self.name)
    other.dependencies = sorted(list(set(other.dependencies)))
    return other

ContainerSet

ContainerSet is the implementation of a set of containers that can be run in parallel on Kubernetes.

The containers are run within the same pod.

Examples:

>>> with ContainerSet(...) as cs:
>>>     ContainerNode(...)
>>>     ContainerNode(...)
Source code in src/hera/workflows/container_set.py
class ContainerSet(
    EnvIOMixin,
    ContainerMixin,
    TemplateMixin,
    CallableTemplateMixin,
    ResourceMixin,
    VolumeMountMixin,
    ContextMixin,
):
    """`ContainerSet` is the implementation of a set of containers that can be run in parallel on Kubernetes.

    The containers are run within the same pod.

    Examples:
        >>> with ContainerSet(...) as cs:
        >>>     ContainerNode(...)
        >>>     ContainerNode(...)
    """

    containers: List[Union[ContainerNode, _ModelContainerNode]] = []
    container_set_retry_strategy: Optional[ContainerSetRetryStrategy] = None

    def _add_sub(self, node: Any):
        if not isinstance(node, ContainerNode):
            raise InvalidType(type(node))

        self.containers.append(node)

    def _build_container_set(self) -> _ModelContainerSetTemplate:
        """Builds the generated `ContainerSetTemplate`."""
        containers = [c._build_container_node() if isinstance(c, ContainerNode) else c for c in self.containers]
        return _ModelContainerSetTemplate(
            containers=containers,
            retry_strategy=self.container_set_retry_strategy,
            volume_mounts=self.volume_mounts,
        )

    def _build_template(self) -> _ModelTemplate:
        """Builds the generated `Template` representation of the container set."""
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            container_set=self._build_container_set(),
            daemon=self.daemon,
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            metrics=self._build_metrics(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            plugin=self.plugin,
            pod_spec_patch=self.pod_spec_patch,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            resource=self._build_resources(),
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
            volumes=self._build_volumes(),
        )

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

container_set_retry_strategy class-attribute instance-attribute

container_set_retry_strategy = None

containers class-attribute instance-attribute

containers = []

daemon class-attribute instance-attribute

daemon = None

env class-attribute instance-attribute

env = None

env_from class-attribute instance-attribute

env_from = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

image class-attribute instance-attribute

image = None

image_pull_policy class-attribute instance-attribute

image_pull_policy = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

liveness_probe class-attribute instance-attribute

liveness_probe = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

ports class-attribute instance-attribute

ports = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

readiness_probe class-attribute instance-attribute

readiness_probe = None

resources class-attribute instance-attribute

resources = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

startup_probe class-attribute instance-attribute

startup_probe = None

stdin class-attribute instance-attribute

stdin = None

stdin_once class-attribute instance-attribute

stdin_once = None

synchronization class-attribute instance-attribute

synchronization = None

termination_message_path class-attribute instance-attribute

termination_message_path = None

termination_message_policy class-attribute instance-attribute

termination_message_policy = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

tty class-attribute instance-attribute

tty = None

volume_devices class-attribute instance-attribute

volume_devices = None

volume_mounts class-attribute instance-attribute

volume_mounts = None

volumes class-attribute instance-attribute

volumes = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Counter

Counter metric component used to count specific events based on the given value.

Notes

See https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics

Source code in src/hera/workflows/metrics.py
class Counter(_BaseMetric):
    """Counter metric component used to count specific events based on the given value.

    Notes:
        See [https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics](https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics)
    """

    value: str

    def _build_metric(self) -> _ModelPrometheus:
        return _ModelPrometheus(
            counter=_ModelCounter(value=self.value),
            gauge=None,
            help=self.help,
            histogram=None,
            labels=self._build_labels(),
            name=self.name,
            when=self.when,
        )

help instance-attribute

help

labels class-attribute instance-attribute

labels = None

name instance-attribute

name

value instance-attribute

value

when class-attribute instance-attribute

when = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

CronWorkflow

CronWorkflow allows a user to run a Workflow on a recurring basis.

Note

Hera’s CronWorkflow is a subclass of Workflow which means certain fields are renamed for compatibility, see cron_suspend and cron_status which are different from the Argo spec. See CronWorkflowSpec for more details.

Source code in src/hera/workflows/cron_workflow.py
class CronWorkflow(Workflow):
    """CronWorkflow allows a user to run a Workflow on a recurring basis.

    Note:
        Hera's CronWorkflow is a subclass of Workflow which means certain fields are renamed
        for compatibility, see `cron_suspend` and `cron_status` which are different from the Argo
        spec. See [CronWorkflowSpec](https://argoproj.github.io/argo-workflows/fields/#cronworkflow) for more details.
    """

    concurrency_policy: Annotated[Optional[str], _CronWorkflowModelMapper("spec.concurrency_policy")] = None
    failed_jobs_history_limit: Annotated[
        Optional[int], _CronWorkflowModelMapper("spec.failed_jobs_history_limit")
    ] = None
    schedule: Annotated[str, _CronWorkflowModelMapper("spec.schedule")]
    starting_deadline_seconds: Annotated[
        Optional[int], _CronWorkflowModelMapper("spec.starting_deadline_seconds")
    ] = None
    successful_jobs_history_limit: Annotated[
        Optional[int], _CronWorkflowModelMapper("spec.successful_jobs_history_limit")
    ] = None
    cron_suspend: Annotated[Optional[bool], _CronWorkflowModelMapper("spec.suspend")] = None
    timezone: Annotated[Optional[str], _CronWorkflowModelMapper("spec.timezone")] = None
    cron_status: Annotated[Optional[CronWorkflowStatus], _CronWorkflowModelMapper("status")] = None

    def create(self) -> TWorkflow:  # type: ignore
        """Creates the CronWorkflow on the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        return self.workflows_service.create_cron_workflow(
            CreateCronWorkflowRequest(cron_workflow=self.build()), namespace=self.namespace
        )

    def get(self) -> TWorkflow:
        """Attempts to get a cron workflow based on the parameters of this template e.g. name + namespace."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        assert self.name, "workflow name not defined"
        return self.workflows_service.get_cron_workflow(name=self.name, namespace=self.namespace)

    def update(self) -> TWorkflow:
        """Attempts to perform a workflow template update based on the parameters of this template.

        Note that this creates the template if it does not exist. In addition, this performs
        a get prior to updating to get the resource version to update in the first place. If you know the template
        does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
        """
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        assert self.name, "workflow name not defined"
        # we always need to do a get prior to updating to get the resource version to update in the first place
        # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

        template = self.build()
        try:
            curr = self.get()
            template.metadata.resource_version = curr.metadata.resource_version
        except NotFound:
            return self.create()
        return self.workflows_service.update_cron_workflow(
            self.name,
            UpdateCronWorkflowRequest(cron_workflow=template),
            namespace=self.namespace,
        )

    def lint(self) -> TWorkflow:
        """Lints the CronWorkflow using the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        return self.workflows_service.lint_cron_workflow(
            LintCronWorkflowRequest(cron_workflow=self.build()), namespace=self.namespace
        )

    def build(self) -> TWorkflow:
        """Builds the CronWorkflow and its components into an Argo schema CronWorkflow object."""
        self = self._dispatch_hooks()

        model_workflow = super().build()
        model_cron_workflow = _ModelCronWorkflow(
            metadata=model_workflow.metadata,
            spec=CronWorkflowSpec(
                schedule=self.schedule,
                workflow_spec=model_workflow.spec,
            ),
        )

        return _CronWorkflowModelMapper.build_model(CronWorkflow, self, model_cron_workflow)

    @classmethod
    def _from_model(cls, model: BaseModel) -> ModelMapperMixin:
        """Parse from given model to cls's type."""
        assert isinstance(model, _ModelCronWorkflow)
        hera_cron_workflow = CronWorkflow(schedule="")

        for attr, annotation in cls._get_all_annotations().items():
            if get_origin(annotation) is Annotated and isinstance(
                get_args(annotation)[1], ModelMapperMixin.ModelMapper
            ):
                mapper = get_args(annotation)[1]
                if mapper.model_path:
                    value = None

                    if (
                        isinstance(mapper, _CronWorkflowModelMapper)
                        or isinstance(mapper, _WorkflowModelMapper)
                        and mapper.model_path[0] == "metadata"
                    ):
                        value = _get_model_attr(model, mapper.model_path)
                    elif isinstance(mapper, _WorkflowModelMapper) and mapper.model_path[0] == "spec":
                        # We map "spec.workflow_spec" from the model CronWorkflow to "spec" for Hera's Workflow (used
                        # as the parent class of Hera's CronWorkflow)
                        value = _get_model_attr(model.spec.workflow_spec, mapper.model_path[1:])

                    if value is not None:
                        setattr(hera_cron_workflow, attr, value)

        return hera_cron_workflow

    @classmethod
    def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
        """Create a CronWorkflow from a CronWorkflow contained in a dict.

        Examples:
            >>> my_cron_workflow = CronWorkflow(name="my-cron-wf")
            >>> my_cron_workflow == CronWorkflow.from_dict(my_cron_workflow.to_dict())
            True
        """
        return cls._from_dict(model_dict, _ModelCronWorkflow)

    @classmethod
    def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
        """Create a CronWorkflow from a CronWorkflow contained in a YAML string.

        Examples:
            >>> my_cron_workflow = CronWorkflow.from_yaml(yaml_str)
        """
        return cls._from_yaml(yaml_str, _ModelCronWorkflow)

    @classmethod
    def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
        """Create a CronWorkflow from a CronWorkflow contained in a YAML file.

        Examples:
            >>> yaml_file = Path(...)
            >>> my_workflow_template = CronWorkflow.from_file(yaml_file)
        """
        return cls._from_file(yaml_file, _ModelCronWorkflow)

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

api_version class-attribute instance-attribute

api_version = None

archive_logs class-attribute instance-attribute

archive_logs = None

arguments instance-attribute

arguments

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact_repository_ref class-attribute instance-attribute

artifact_repository_ref = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

cluster_name class-attribute instance-attribute

cluster_name = None

concurrency_policy class-attribute instance-attribute

concurrency_policy = None

creation_timestamp class-attribute instance-attribute

creation_timestamp = None

cron_status class-attribute instance-attribute

cron_status = None

cron_suspend class-attribute instance-attribute

cron_suspend = None

deletion_grace_period_seconds class-attribute instance-attribute

deletion_grace_period_seconds = None

deletion_timestamp class-attribute instance-attribute

deletion_timestamp = None

dns_config class-attribute instance-attribute

dns_config = None

dns_policy class-attribute instance-attribute

dns_policy = None

entrypoint class-attribute instance-attribute

entrypoint = None

executor class-attribute instance-attribute

executor = None

failed_jobs_history_limit class-attribute instance-attribute

failed_jobs_history_limit = None

finalizers class-attribute instance-attribute

finalizers = None

generate_name class-attribute instance-attribute

generate_name = None

generation class-attribute instance-attribute

generation = None

hooks class-attribute instance-attribute

hooks = None

host_aliases class-attribute instance-attribute

host_aliases = None

host_network class-attribute instance-attribute

host_network = None

image_pull_secrets class-attribute instance-attribute

image_pull_secrets = None

kind class-attribute instance-attribute

kind = None

labels class-attribute instance-attribute

labels = None

managed_fields class-attribute instance-attribute

managed_fields = None

metrics instance-attribute

metrics

name class-attribute instance-attribute

name = None

namespace class-attribute instance-attribute

namespace = None

node_selector class-attribute instance-attribute

node_selector = None

on_exit class-attribute instance-attribute

on_exit = None

owner_references class-attribute instance-attribute

owner_references = None

parallelism class-attribute instance-attribute

parallelism = None

pod_disruption_budget class-attribute instance-attribute

pod_disruption_budget = None

pod_gc class-attribute instance-attribute

pod_gc = None

pod_metadata class-attribute instance-attribute

pod_metadata = None

pod_priority class-attribute instance-attribute

pod_priority = None

pod_priority_class_name class-attribute instance-attribute

pod_priority_class_name = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

resource_version class-attribute instance-attribute

resource_version = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

schedule instance-attribute

schedule

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

self_link class-attribute instance-attribute

self_link = None

service_account_name class-attribute instance-attribute

service_account_name = None

shutdown class-attribute instance-attribute

shutdown = None

starting_deadline_seconds class-attribute instance-attribute

starting_deadline_seconds = None

status class-attribute instance-attribute

status = None

successful_jobs_history_limit class-attribute instance-attribute

successful_jobs_history_limit = None

suspend class-attribute instance-attribute

suspend = None

synchronization class-attribute instance-attribute

synchronization = None

template_defaults class-attribute instance-attribute

template_defaults = None

templates class-attribute instance-attribute

templates = []

timezone class-attribute instance-attribute

timezone = None

tolerations class-attribute instance-attribute

tolerations = None

ttl_strategy class-attribute instance-attribute

ttl_strategy = None

uid class-attribute instance-attribute

uid = None

volume_claim_gc class-attribute instance-attribute

volume_claim_gc = None

volume_claim_templates class-attribute instance-attribute

volume_claim_templates = None

volumes instance-attribute

volumes

workflow_metadata class-attribute instance-attribute

workflow_metadata = None

workflow_template_ref class-attribute instance-attribute

workflow_template_ref = None

workflows_service class-attribute instance-attribute

workflows_service = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ModelMapper

Source code in src/hera/workflows/_mixins.py
class ModelMapper:
    def __init__(self, model_path: str, hera_builder: Optional[Callable] = None):
        self.model_path = None
        self.builder = hera_builder

        if not model_path:
            # Allows overriding parent attribute annotations to remove the mapping
            return

        self.model_path = model_path.split(".")
        curr_class: Type[BaseModel] = self._get_model_class()
        for key in self.model_path:
            if key not in curr_class.__fields__:
                raise ValueError(f"Model key '{key}' does not exist in class {curr_class}")
            curr_class = curr_class.__fields__[key].outer_type_

    @classmethod
    def _get_model_class(cls) -> Type[BaseModel]:
        raise NotImplementedError

    @classmethod
    def build_model(
        cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
    ) -> TWorkflow:
        assert isinstance(hera_obj, ModelMapperMixin)

        for attr, annotation in hera_class._get_all_annotations().items():
            if get_origin(annotation) is Annotated and isinstance(
                get_args(annotation)[1], ModelMapperMixin.ModelMapper
            ):
                mapper = get_args(annotation)[1]
                # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
                value = (
                    getattr(hera_obj, mapper.builder.__name__)()
                    if mapper.builder is not None
                    else getattr(hera_obj, attr)
                )
                if value is not None:
                    _set_model_attr(model, mapper.model_path, value)

        return model
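The dotted `model_path` mechanics above can be sketched in isolation: the path string is split into keys, all but the last key are walked as attributes, and the leaf attribute is set on the nested model. `set_model_attr` below is a hypothetical stand-in for the private `_set_model_attr` helper, not the Hera implementation:

```python
from types import SimpleNamespace

def set_model_attr(model, path, value):
    # Walk every key except the last, then set the leaf attribute.
    *parents, leaf = path
    for key in parents:
        model = getattr(model, key)
    setattr(model, leaf, value)

# A nested model object mimicking e.g. an Argo Workflow's metadata block.
model = SimpleNamespace(metadata=SimpleNamespace(name=None))
set_model_attr(model, "metadata.name".split("."), "my-wf")
print(model.metadata.name)  # my-wf
```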

builder instance-attribute

builder = hera_builder

model_path instance-attribute

model_path = model_path.split('.')

build_model classmethod

build_model(hera_class, hera_obj, model)
Source code in src/hera/workflows/_mixins.py
@classmethod
def build_model(
    cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
) -> TWorkflow:
    assert isinstance(hera_obj, ModelMapperMixin)

    for attr, annotation in hera_class._get_all_annotations().items():
        if get_origin(annotation) is Annotated and isinstance(
            get_args(annotation)[1], ModelMapperMixin.ModelMapper
        ):
            mapper = get_args(annotation)[1]
            # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
            value = (
                getattr(hera_obj, mapper.builder.__name__)()
                if mapper.builder is not None
                else getattr(hera_obj, attr)
            )
            if value is not None:
                _set_model_attr(model, mapper.model_path, value)

    return model

build

build()

Builds the CronWorkflow and its components into an Argo schema CronWorkflow object.

Source code in src/hera/workflows/cron_workflow.py
def build(self) -> TWorkflow:
    """Builds the CronWorkflow and its components into an Argo schema CronWorkflow object."""
    self = self._dispatch_hooks()

    model_workflow = super().build()
    model_cron_workflow = _ModelCronWorkflow(
        metadata=model_workflow.metadata,
        spec=CronWorkflowSpec(
            schedule=self.schedule,
            workflow_spec=model_workflow.spec,
        ),
    )

    return _CronWorkflowModelMapper.build_model(CronWorkflow, self, model_cron_workflow)
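The wrapping that `build` performs can be sketched with plain dicts standing in for the Argo model classes: the already-built Workflow's metadata is reused, and its spec is nested under the cron spec next to the schedule. The field names follow the Argo CronWorkflow schema, but this is illustrative, not the Hera implementation:

```python
def build_cron(model_workflow: dict, schedule: str) -> dict:
    # Reuse the Workflow's metadata; nest its spec under the cron spec.
    return {
        "metadata": model_workflow["metadata"],
        "spec": {"schedule": schedule, "workflowSpec": model_workflow["spec"]},
    }

cron = build_cron(
    {"metadata": {"name": "my-wf"}, "spec": {"entrypoint": "main"}},
    "0 * * * *",
)
```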

create

create()

Creates the CronWorkflow on the Argo cluster.

Source code in src/hera/workflows/cron_workflow.py
def create(self) -> TWorkflow:  # type: ignore
    """Creates the CronWorkflow on the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    return self.workflows_service.create_cron_workflow(
        CreateCronWorkflowRequest(cron_workflow=self.build()), namespace=self.namespace
    )

from_dict classmethod

from_dict(model_dict)

Create a CronWorkflow from a CronWorkflow contained in a dict.

Examples:

>>> my_cron_workflow = CronWorkflow(name="my-cron-wf")
>>> my_cron_workflow == CronWorkflow.from_dict(my_cron_workflow.to_dict())
True
Source code in src/hera/workflows/cron_workflow.py
@classmethod
def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
    """Create a CronWorkflow from a CronWorkflow contained in a dict.

    Examples:
        >>> my_cron_workflow = CronWorkflow(name="my-cron-wf")
        >>> my_cron_workflow == CronWorkflow.from_dict(my_cron_workflow.to_dict())
        True
    """
    return cls._from_dict(model_dict, _ModelCronWorkflow)

from_file classmethod

from_file(yaml_file)

Create a CronWorkflow from a CronWorkflow contained in a YAML file.

Examples:

>>> yaml_file = Path(...)
>>> my_workflow_template = CronWorkflow.from_file(yaml_file)
Source code in src/hera/workflows/cron_workflow.py
@classmethod
def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
    """Create a CronWorkflow from a CronWorkflow contained in a YAML file.

    Examples:
        >>> yaml_file = Path(...)
        >>> my_workflow_template = CronWorkflow.from_file(yaml_file)
    """
    return cls._from_file(yaml_file, _ModelCronWorkflow)

from_yaml classmethod

from_yaml(yaml_str)

Create a CronWorkflow from a CronWorkflow contained in a YAML string.

Examples:

>>> my_cron_workflow = CronWorkflow.from_yaml(yaml_str)
Source code in src/hera/workflows/cron_workflow.py
@classmethod
def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
    """Create a CronWorkflow from a CronWorkflow contained in a YAML string.

    Examples:
        >>> my_cron_workflow = CronWorkflow.from_yaml(yaml_str)
    """
    return cls._from_yaml(yaml_str, _ModelCronWorkflow)

get

get()

Attempts to get a cron workflow based on the parameters of this template, e.g. name + namespace.

Source code in src/hera/workflows/cron_workflow.py
def get(self) -> TWorkflow:
    """Attempts to get a cron workflow based on the parameters of this template e.g. name + namespace."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    assert self.name, "workflow name not defined"
    return self.workflows_service.get_cron_workflow(name=self.name, namespace=self.namespace)

get_parameter

get_parameter(name)

Attempts to find and return a Parameter of the specified name.

Source code in src/hera/workflows/workflow.py
def get_parameter(self, name: str) -> Parameter:
    """Attempts to find and return a `Parameter` of the specified name."""
    arguments = self._build_arguments()
    if arguments is None:
        raise KeyError("Workflow has no arguments set")
    if arguments.parameters is None:
        raise KeyError("Workflow has no argument parameters set")

    parameters = arguments.parameters
    if next((p for p in parameters if p.name == name), None) is None:
        raise KeyError(f"`{name}` is not a valid workflow parameter")
    return Parameter(name=name, value=f"{{{{workflow.parameters.{name}}}}}")
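Note that the returned Parameter carries a reference, not the literal value: it resolves to the workflow argument at runtime via an Argo template expression. A sketch of the string construction (`parameter_reference` is a hypothetical helper):

```python
def parameter_reference(name: str) -> str:
    # Doubled braces in the f-string produce literal {{ and }}.
    return f"{{{{workflow.parameters.{name}}}}}"

print(parameter_reference("message"))  # {{workflow.parameters.message}}
```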

lint

lint()

Lints the CronWorkflow using the Argo cluster.

Source code in src/hera/workflows/cron_workflow.py
def lint(self) -> TWorkflow:
    """Lints the CronWorkflow using the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    return self.workflows_service.lint_cron_workflow(
        LintCronWorkflowRequest(cron_workflow=self.build()), namespace=self.namespace
    )

to_dict

to_dict()

Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary.

Source code in src/hera/workflows/workflow.py
def to_dict(self) -> Any:
    """Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary."""
    return self.build().dict(exclude_none=True, by_alias=True)

to_file

to_file(output_directory='.', name='', *args, **kwargs)

Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

Parameters:

output_directory (Union[Path, str], default '.'): The directory to write the file to. Defaults to the current working directory.

name (str, default ''): The name of the file to write without the file extension. Defaults to the Workflow’s name or a generated name.

*args (default ()): Additional arguments to pass to yaml.dump.

**kwargs (default {}): Additional keyword arguments to pass to yaml.dump.
Source code in src/hera/workflows/workflow.py
def to_file(self, output_directory: Union[Path, str] = ".", name: str = "", *args, **kwargs) -> Path:
    """Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

    Args:
        output_directory: The directory to write the file to. Defaults to the current working directory.
        name: The name of the file to write without the file extension.  Defaults to the Workflow's name or a
              generated name.
        *args: Additional arguments to pass to `yaml.dump`.
        **kwargs: Additional keyword arguments to pass to `yaml.dump`.
    """
    workflow_name = self.name or (self.generate_name or "workflow").rstrip("-")
    name = name or workflow_name
    output_directory = Path(output_directory)
    output_path = Path(output_directory) / f"{name}.yaml"
    output_directory.mkdir(parents=True, exist_ok=True)
    output_path.write_text(self.to_yaml(*args, **kwargs))
    return output_path.absolute()
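The naming fallback and path handling can be sketched on their own: an explicit name wins, then the workflow's name, then generate_name with its trailing "-" stripped, then a plain "workflow". `output_path` is a hypothetical helper mirroring the listing above:

```python
from pathlib import Path

def output_path(output_directory=".", name="", workflow_name=None, generate_name=None) -> Path:
    # Naming fallback: explicit name, else workflow name, else generate_name
    # (with the trailing "-" stripped), else "workflow".
    resolved = name or workflow_name or (generate_name or "workflow").rstrip("-")
    directory = Path(output_directory)
    directory.mkdir(parents=True, exist_ok=True)
    return (directory / f"{resolved}.yaml").absolute()

print(output_path(generate_name="my-wf-").name)  # my-wf.yaml
```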

to_yaml

to_yaml(*args, **kwargs)

Builds the Workflow as an Argo schema Workflow object and returns it as yaml string.

Source code in src/hera/workflows/workflow.py
def to_yaml(self, *args, **kwargs) -> str:
    """Builds the Workflow as an Argo schema Workflow object and returns it as yaml string."""
    if not _yaml:
        raise ImportError("`PyYAML` is not installed. Install `hera[yaml]` to bring in the extra dependency")
    # Set some default options if not provided by the user
    kwargs.setdefault("default_flow_style", False)
    kwargs.setdefault("sort_keys", False)
    return _yaml.dump(self.to_dict(), *args, **kwargs)

update

update()

Attempts to perform a cron workflow update based on the parameters of this template.

Note that this creates the cron workflow if it does not exist. In addition, it performs a get prior to updating in order to retrieve the current resource version. If you know the cron workflow does not exist ahead of time, it is more efficient to use create() directly and avoid one round trip.

Source code in src/hera/workflows/cron_workflow.py
def update(self) -> TWorkflow:
    """Attempts to perform a workflow template update based on the parameters of this template.

    Note that this creates the template if it does not exist. In addition, this performs
    a get prior to updating to get the resource version to update in the first place. If you know the template
    does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
    """
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    assert self.name, "workflow name not defined"
    # we always need to do a get prior to updating to get the resource version to update in the first place
    # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

    template = self.build()
    try:
        curr = self.get()
        template.metadata.resource_version = curr.metadata.resource_version
    except NotFound:
        return self.create()
    return self.workflows_service.update_cron_workflow(
        self.name,
        UpdateCronWorkflowRequest(cron_workflow=template),
        namespace=self.namespace,
    )
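The get-before-update dance can be sketched with a dict standing in for the cluster. The service calls, the not-found handling, and the field name are stand-ins; the point is that the live resource version is copied onto the outgoing object so the server accepts the update, and a missing object falls through to the create path:

```python
def update(store: dict, name: str, template: dict) -> dict:
    try:
        current = store[name]
    except KeyError:
        # Create path: the object did not exist yet.
        store[name] = template
        return template
    # Carry over the live resource version so the server accepts the update.
    template["resourceVersion"] = current["resourceVersion"]
    store[name] = template
    return template

store = {"wf": {"resourceVersion": "42"}}
updated = update(store, "wf", {"spec": 1})
created = update(store, "new", {"spec": 2})
```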

wait

wait(poll_interval=5)

Waits for the Workflow to complete execution.

Parameters:

poll_interval (int, default 5): The interval in seconds to poll the workflow status.

Source code in src/hera/workflows/workflow.py
def wait(self, poll_interval: int = 5) -> TWorkflow:
    """Waits for the Workflow to complete execution.

    Parameters
    ----------
    poll_interval: int = 5
        The interval in seconds to poll the workflow status.
    """
    assert self.workflows_service is not None, "workflow service not initialized"
    assert self.namespace is not None, "workflow namespace not defined"
    assert self.name is not None, "workflow name not defined"

    # here we use the sleep interval to wait for the workflow post creation. This is to address a potential
    # race conditions such as:
    # 1. Argo server says "workflow was accepted" but the workflow is not yet created
    # 2. Hera wants to verify the status of the workflow, but it's not yet defined because it's not created
    # 3. Argo finally creates the workflow
    # 4. Hera throws an `AssertionError` because the phase assertion fails
    time.sleep(poll_interval)
    wf = self.workflows_service.get_workflow(self.name, namespace=self.namespace)
    assert wf.metadata.name is not None, f"workflow name not defined for workflow {self.name}"

    assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
    assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
    status = WorkflowStatus.from_argo_status(wf.status.phase)

    # keep polling for workflow status until completed, at the interval dictated by the user
    while status == WorkflowStatus.running:
        time.sleep(poll_interval)
        wf = self.workflows_service.get_workflow(wf.metadata.name, namespace=self.namespace)
        assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
        assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
        status = WorkflowStatus.from_argo_status(wf.status.phase)
    return wf
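The shape of the polling loop, minus the Argo client, can be sketched as follows: one sleep up front to sidestep the create/get race described in the comments, then poll until the phase leaves "Running". `wait_for` and the phase strings are illustrative:

```python
import time

def wait_for(get_status, poll_interval=0, running="Running"):
    # Sleep once before the first check to avoid the create/get race,
    # then poll at the given interval until the phase is no longer "Running".
    time.sleep(poll_interval)
    status = get_status()
    while status == running:
        time.sleep(poll_interval)
        status = get_status()
    return status

phases = iter(["Running", "Running", "Succeeded"])
result = wait_for(lambda: next(phases))
print(result)  # Succeeded
```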

DAG

A DAG template invocator is used to define Task dependencies as an acyclic graph.

DAG implements the context manager interface, so it allows usage of with, under which any hera.workflows.task.Task objects instantiated will be added to the DAG’s list of Tasks.

Examples:

>>> @script()
>>> def foo() -> None:
>>>     print(42)
>>> with DAG(...) as dag:
>>>     foo()
Source code in src/hera/workflows/dag.py
class DAG(
    IOMixin,
    TemplateMixin,
    CallableTemplateMixin,
    ContextMixin,
):
    """A DAG template invocator is used to define Task dependencies as an acyclic graph.

    DAG implements the contextmanager interface so allows usage of `with`, under which any
    `hera.workflows.task.Task` objects instantiated will be added to the DAG's list of Tasks.

    Examples:
        >>> @script()
        >>> def foo() -> None:
        >>>     print(42)
        >>> with DAG(...) as dag:
        >>>     foo()
    """

    fail_fast: Optional[bool] = None
    target: Optional[str] = None
    tasks: List[Union[Task, DAGTask]] = []

    def _add_sub(self, node: Any):
        if not isinstance(node, Task):
            raise InvalidType(type(node))
        self.tasks.append(node)

    def _build_template(self) -> _ModelTemplate:
        """Builds the auto-generated `Template` representation of the `DAG`."""
        tasks = []
        for task in self.tasks:
            if isinstance(task, Task):
                tasks.append(task._build_dag_task())
            else:
                tasks.append(task)
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            daemon=self.daemon,
            dag=_ModelDAGTemplate(fail_fast=self.fail_fast, target=self.target, tasks=tasks),
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            metrics=self._build_metrics(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            parallelism=self.parallelism,
            plugin=self.plugin,
            pod_spec_patch=self.pod_spec_patch,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )
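The context-manager collection pattern that DAG relies on can be sketched with minimal classes: objects created inside the with block register themselves with the enclosing context. `Dag` and `Task` here are illustrative stand-ins, not the Hera implementation:

```python
class Dag:
    # Tracks whichever Dag is currently open as a context manager.
    _current = None

    def __init__(self):
        self.tasks = []

    def __enter__(self):
        Dag._current = self
        return self

    def __exit__(self, *exc):
        Dag._current = None

class Task:
    def __init__(self, name):
        self.name = name
        # Register with the enclosing Dag, if one is open.
        if Dag._current is not None:
            Dag._current.tasks.append(self)

with Dag() as d:
    Task("a")
    Task("b")

print([t.name for t in d.tasks])  # ['a', 'b']
```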

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

daemon class-attribute instance-attribute

daemon = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

synchronization class-attribute instance-attribute

synchronization = None

target class-attribute instance-attribute

target = None

tasks class-attribute instance-attribute

tasks = []

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Data

Data implements the Argo data template representation.

Data can be used to indicate that some data, identified by a source, should be processed via the specified transformations. The transformations field can be either expressed via a pure str or via a hera.expr, which transpiles the expression into a statement that can be processed by Argo.

Source code in src/hera/workflows/data.py
class Data(TemplateMixin, IOMixin, CallableTemplateMixin):
    """`Data` implements the Argo data template representation.

    Data can be used to indicate that some data, identified by a `source`, should be processed via the specified
    `transformations`. The `transformations` field can be either expressed via a pure `str` or via a `hera.expr`,
    which transpiles the expression into a statement that can be processed by Argo.
    """

    source: Union[m.DataSource, m.ArtifactPaths, Artifact]
    transformations: List[Union[str, Node]] = []

    def _build_source(self) -> m.DataSource:
        """Builds the generated `DataSource`."""
        if isinstance(self.source, m.DataSource):
            return self.source
        elif isinstance(self.source, m.ArtifactPaths):
            return m.DataSource(artifact_paths=self.source)
        return m.DataSource(artifact_paths=self.source._build_artifact_paths())

    def _build_data(self) -> m.Data:
        """Builds the generated `Data` template."""
        return m.Data(
            source=self._build_source(),
            transformation=list(map(lambda expr: m.TransformationStep(expression=str(expr)), self.transformations)),
        )

    def _build_template(self) -> m.Template:
        """Builds the generated `Template` from the fields of `Data`."""
        return m.Template(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            data=self._build_data(),
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            outputs=self._build_outputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            name=self.name,
            node_selector=self.node_selector,
            plugin=self.plugin,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )
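The normalization of transformations into steps can be sketched as follows: each entry, whether a plain string or a hera.expr node, is stringified into one transformation step. A dict stands in for TransformationStep, and the expression string is illustrative:

```python
def build_transformations(transformations):
    # Every entry is stringified, so str expressions and expr nodes
    # (whose __str__ transpiles them) are handled uniformly.
    return [{"expression": str(expr)} for expr in transformations]

steps = build_transformations(["filter(data, {# endsWith '.pdf'})"])
```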

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

daemon class-attribute instance-attribute

daemon = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

source instance-attribute

source

synchronization class-attribute instance-attribute

synchronization = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

transformations class-attribute instance-attribute

transformations = []

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

DownwardAPIVolume

Representation of a volume passed via the downward API.

Source code in src/hera/workflows/volume.py
class DownwardAPIVolume(_BaseVolume, _ModelDownwardAPIVolumeSource):
    """Representation of a volume passed via the downward API."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            downward_api=_ModelDownwardAPIVolumeSource(default_mode=self.default_mode, items=self.items),
        )

default_mode class-attribute instance-attribute

default_mode = Field(default=None, alias='defaultMode', description='Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.')
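The octal/decimal note above is easy to check: 0644 in octal is 420 in decimal, which is the form JSON requires for mode bits:

```python
# 0o644 (rw-r--r--) in octal equals 420 in decimal.
assert 0o644 == 420
print(int("644", 8))  # 420
```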

items class-attribute instance-attribute

items = Field(default=None, description='Items is a list of downward API volume files')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

EmptyDirVolume

Representation of an empty dir volume from K8s.

Source code in src/hera/workflows/volume.py
class EmptyDirVolume(_BaseVolume, _ModelEmptyDirVolumeSource):
    """Representation of an empty dir volume from K8s."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name, empty_dir=_ModelEmptyDirVolumeSource(medium=self.medium, size_limit=self.size_limit)
        )

medium class-attribute instance-attribute

medium = Field(default=None, description='What type of storage medium should back this directory. The default is "" which means to use the node\'s default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

size_limit class-attribute instance-attribute

size_limit = Field(default=None, alias='sizeLimit', description='Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Env

A variable implementation that can expose a plain value or value from input as an env variable.

Source code in src/hera/workflows/env.py
class Env(_BaseEnv):
    """A variable implementation that can expose a plain `value` or `value from input` as an env variable."""

    value: Optional[Any] = None
    """the value of the environment variable"""

    value_from_input: Optional[Union[str, Parameter]] = None
    """an optional `str` or `Parameter` representation of the origin of the environment variable value"""

    @staticmethod
    def _sanitise_param_for_argo(v: str) -> str:
        """Sanitizes the given `v` value into one that satisfies Argo's parameter rules.

        Argo has some strict parameter validation. To satisfy it, we replace all `.` and `_` characters with a dash,
        keep only the first 32 characters from `a-zA-Z0-9-`, and append the md5 digest of the original string.
        """
        # NOTE move this to some general purpose utils?
        replaced_dashes = v.translate(str.maketrans({e: "-" for e in "_."}))  # type: ignore
        legit_set = string.ascii_letters + string.digits + "-"
        legit_prefix = "".join(islice((c for c in replaced_dashes if c in legit_set), 32))
        hash_suffix = hashlib.md5(v.encode("utf-8")).hexdigest()
        return f"{legit_prefix}-{hash_suffix}"

    @root_validator(pre=True)
    @classmethod
    def _check_values(cls, values):
        """Validates that only one of `value` or `value_from_input` is specified."""
        if values.get("value") is not None and values.get("value_from_input") is not None:
            raise ValueError("cannot specify both `value` and `value_from_input`")

        return values

    @property
    def param_name(self) -> str:
        """Returns the parameter name of the environment variable, conditioned on the use of `value_from_input`."""
        if not self.value_from_input:
            raise ValueError(
                "Unexpected use of `param_name` - without `value_from_input`, no param should be generated"
            )
        return Env._sanitise_param_for_argo(self.name)

    def build(self) -> _ModelEnvVar:
        """Constructs and returns the Argo environment specification."""
        if self.value_from_input is not None:
            self.value = f"{{{{inputs.parameters.{self.param_name}}}}}"
        elif isinstance(self.value, str):
            self.value = self.value
        else:
            self.value = json.dumps(self.value)
        return _ModelEnvVar(name=self.name, value=self.value)
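
The sanitization rule documented in `_sanitise_param_for_argo` can be reproduced with the standard library alone. The sketch below mirrors that transformation for illustration; it is not a call into Hera:

```python
import hashlib
import string
from itertools import islice

def sanitise_param(v: str) -> str:
    # Replace '.' and '_' with dashes, since Argo parameter names disallow them.
    replaced = v.translate(str.maketrans({".": "-", "_": "-"}))
    # Keep at most the first 32 characters drawn from [a-zA-Z0-9-].
    legit = string.ascii_letters + string.digits + "-"
    prefix = "".join(islice((c for c in replaced if c in legit), 32))
    # Append the md5 digest of the original value so names stay unique.
    return f"{prefix}-{hashlib.md5(v.encode('utf-8')).hexdigest()}"

print(sanitise_param("my.env_var"))
```

Note that the digest is computed over the original, unsanitized string, so two inputs that sanitize to the same prefix still yield distinct parameter names.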

name instance-attribute

name

the name of the environment variable. This is universally required irrespective of the type of env variable

param_name property

param_name

Returns the parameter name of the environment variable, conditioned on the use of value_from_input.

value class-attribute instance-attribute

value = None

the value of the environment variable

value_from_input class-attribute instance-attribute

value_from_input = None

an optional str or Parameter representation of the origin of the environment variable value

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo environment specification.

Source code in src/hera/workflows/env.py
def build(self) -> _ModelEnvVar:
    """Constructs and returns the Argo environment specification."""
    if self.value_from_input is not None:
        self.value = f"{{{{inputs.parameters.{self.param_name}}}}}"
    elif isinstance(self.value, str):
        self.value = self.value
    else:
        self.value = json.dumps(self.value)
    return _ModelEnvVar(name=self.name, value=self.value)
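
The value handling in `build` can be summarized with a small stdlib-only sketch: plain strings pass through unchanged, while any other value is JSON-encoded before being set as the env var value. This is an illustration of the rule, not Hera's code:

```python
import json

def coerce_env_value(value) -> str:
    # Mirrors Env.build: plain strings pass through, everything else is JSON-encoded.
    return value if isinstance(value, str) else json.dumps(value)

print(coerce_env_value({"a": 1}))  # {"a": 1}
print(coerce_env_value(42))        # 42
```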

EphemeralVolume

Representation of a volume that uses ephemeral storage shared with the K8s node a pod is scheduled on.

Source code in src/hera/workflows/volume.py
class EphemeralVolume(_BaseVolume, _ModelEphemeralVolumeSource):
    """Representation of a volume that uses ephemeral storage shared with the K8s node a pod is scheduled on."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name, ephemeral=_ModelEphemeralVolumeSource(volume_claim_template=self.volume_claim_template)
        )

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_claim_template class-attribute instance-attribute

volume_claim_template = Field(default=None, alias='volumeClaimTemplate', description='Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod.  The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).\n\nAn existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.\n\nThis field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.\n\nRequired, must not be nil.')
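
The generated PVC follows the `<pod name>-<volume name>` naming convention described above. A tiny sketch of that naming rule (illustrative only, with hypothetical names; not Hera's or Kubernetes' internals):

```python
def ephemeral_pvc_name(pod_name: str, volume_name: str) -> str:
    # Kubernetes names the stand-alone PVC after the owning pod and the volume entry.
    return f"{pod_name}-{volume_name}"

print(ephemeral_pvc_name("my-pod", "scratch"))  # my-pod-scratch
```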

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ExistingVolume

ExistingVolume is a representation of an existing volume in K8s.

The existing volume is mounted based on the supplied claim name. This tells K8s that the specified persistent volume claim should be used to mount a volume to a pod.
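
`ExistingVolume` ultimately serializes to a Kubernetes volume entry keyed on `persistentVolumeClaim`. A rough sketch of that serialized shape, using hypothetical claim and volume names:

```python
# Hypothetical names; mirrors the K8s PersistentVolumeClaimVolumeSource shape.
volume = {
    "name": "data",
    "persistentVolumeClaim": {"claimName": "my-claim", "readOnly": True},
}
print(volume["persistentVolumeClaim"]["claimName"])  # my-claim
```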

Source code in src/hera/workflows/volume.py
class ExistingVolume(_BaseVolume, _ModelPersistentVolumeClaimVolumeSource):
    """`ExistingVolume` is a representation of an existing volume in K8s.

    The existing volume is mounted based on the supplied claim name. This tells K8s that the specified persistent
    volume claim should be used to mount a volume to a pod.
    """

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            persistent_volume_claim=_ModelPersistentVolumeClaimVolumeSource(
                claim_name=self.claim_name, read_only=self.read_only
            ),
        )

claim_name class-attribute instance-attribute

claim_name = Field(..., alias='claimName', description='ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

FCVolume

An FC volume representation.

Source code in src/hera/workflows/volume.py
class FCVolume(_BaseVolume, _ModelFCVolumeSource):
    """An FV volume representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            fc=_ModelFCVolumeSource(
                fs_type=self.fs_type,
                lun=self.lun,
                read_only=self.read_only,
                target_ww_ns=self.target_ww_ns,
                wwids=self.wwids,
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.')

lun class-attribute instance-attribute

lun = Field(default=None, description='Optional: FC target lun number')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

target_ww_ns class-attribute instance-attribute

target_ww_ns = Field(default=None, alias='targetWWNs', description='Optional: FC target worldwide names (WWNs)')

wwids class-attribute instance-attribute

wwids = Field(default=None, description='Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.')
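
Per the field descriptions, `wwids` and the `targetWWNs` + `lun` pair are mutually exclusive ways of addressing the FC volume. A hypothetical validation helper (not part of Hera) capturing that rule:

```python
from typing import Optional, List

def check_fc_addressing(
    target_wwns: Optional[List[str]], lun: Optional[int], wwids: Optional[List[str]]
) -> None:
    # Exactly one addressing scheme must be used:
    # either wwids, or targetWWNs together with lun.
    has_wwids = bool(wwids)
    has_wwn_lun = bool(target_wwns) and lun is not None
    if has_wwids == has_wwn_lun:
        raise ValueError("set either wwids, or targetWWNs and lun, but not both (or neither)")

# Hypothetical WWN/wwid values, for illustration only.
check_fc_addressing(target_wwns=["50060e801049cfd1"], lun=0, wwids=None)  # ok
check_fc_addressing(target_wwns=None, lun=None, wwids=["3600508b4001"])  # ok
```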

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

FieldEnv

FieldEnv is an environment variable whose origin is in a field specification.

The field path specification points to a particular field within the workflow/container YAML specification. For instance, if there’s a YAML that has 3 fields like so

name: abc
spec:
  a: 42

then a reference to the field a must be encoded as spec.a in order for the value of 42 to be extracted and set as an environment variable.
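
Under the hood, the result is a fieldRef-style env var source. A sketch of what the serialized spec for the `spec.a` example could look like, as a plain dict with a hypothetical variable name:

```python
# Hypothetical env var name; mirrors the K8s EnvVarSource/ObjectFieldSelector shape.
env_var = {
    "name": "MY_VALUE",
    "valueFrom": {"fieldRef": {"fieldPath": "spec.a"}},
}
print(env_var["valueFrom"]["fieldRef"]["fieldPath"])  # spec.a
```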

Source code in src/hera/workflows/env.py
class FieldEnv(_BaseEnv):
    """`FieldEnv` is an environment variable whose origin is in a field specification.

    The field path specification points to a particular field within the workflow/container YAML specification. For
    instance, if there's a YAML that has 3 fields like so
    ```
    name: abc
    spec:
      a: 42
    ```
    then a reference to the field a must be encoded as `spec.a` in order for the value of `42` to be extracted and set
    as an environment variable.
    """

    field_path: str
    """the path to the field whose value should be extracted into an environment variable"""

    api_version: Optional[str] = None
    """optionally, an API version specification. This defaults to the Hera global config `api_version`"""

    @validator("api_version")
    @classmethod
    def _check_api_version(cls, v):
        """Checks whether the `api_version` field is set and uses the global config `api_version` if not."""
        if v is None:
            return global_config.api_version
        return v

    def build(self) -> _ModelEnvVar:
        """Constructs and returns the Argo environment specification."""
        return _ModelEnvVar(
            name=self.name,
            value_from=_ModelEnvVarSource(
                field_ref=_ModelObjectFieldSelector(
                    field_path=self.field_path,
                    api_version=self.api_version,
                )
            ),
        )

api_version class-attribute instance-attribute

api_version = None

optionally, an API version specification. This defaults to the Hera global config api_version

field_path instance-attribute

field_path

the path to the field whose value should be extracted into an environment variable

name instance-attribute

name

the name of the environment variable. This is universally required irrespective of the type of env variable

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo environment specification.

Source code in src/hera/workflows/env.py
def build(self) -> _ModelEnvVar:
    """Constructs and returns the Argo environment specification."""
    return _ModelEnvVar(
        name=self.name,
        value_from=_ModelEnvVarSource(
            field_ref=_ModelObjectFieldSelector(
                field_path=self.field_path,
                api_version=self.api_version,
            )
        ),
    )

FlexVolume

A Flex volume representation.

Source code in src/hera/workflows/volume.py
class FlexVolume(_BaseVolume, _ModelFlexVolumeSource):
    """A Flex volume representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            flex_volume=_ModelFlexVolumeSource(
                driver=self.driver,
                fs_type=self.fs_type,
                options=self.options,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
            ),
        )

driver class-attribute instance-attribute

driver = Field(..., description='Driver is the name of the driver to use for this volume.')

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

options class-attribute instance-attribute

options = Field(default=None, description='Optional: Extra command options if any.')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

FlockerVolume

A Flocker volume representation.

Source code in src/hera/workflows/volume.py
class FlockerVolume(_BaseVolume, _ModelFlockerVolumeSource):
    """A Flocker volume representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            flocker=_ModelFlockerVolumeSource(dataset_name=self.dataset_name, dataset_uuid=self.dataset_uuid),
        )

dataset_name class-attribute instance-attribute

dataset_name = Field(default=None, alias='datasetName', description='Name of the dataset, stored as metadata -> name on the dataset for Flocker. Should be considered as deprecated.')

dataset_uuid class-attribute instance-attribute

dataset_uuid = Field(default=None, alias='datasetUUID', description='UUID of the dataset. This is unique identifier of a Flocker dataset')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

GCEPersistentDiskVolume

A representation of a Google Cloud Compute Engine persistent disk.

Notes

The volume must exist on GCE before a request to mount it to a pod is performed.

Source code in src/hera/workflows/volume.py
class GCEPersistentDiskVolume(_BaseVolume, _ModelGCEPersistentDiskVolumeSource):
    """A representation of a Google Cloud Compute Enginer persistent disk.

    Notes:
        The volume must exist on GCE before a request to mount it to a pod is performed.
    """

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            gce_persistent_disk=_ModelGCEPersistentDiskVolumeSource(
                fs_type=self.fs_type, partition=self.partition, pd_name=self.pd_name, read_only=self.read_only
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

partition class-attribute instance-attribute

partition = Field(default=None, description='The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk')

pd_name class-attribute instance-attribute

pd_name = Field(..., alias='pdName', description='Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

GCSArtifact

An artifact sourced from Google Cloud Storage.

Source code in src/hera/workflows/artifact.py
class GCSArtifact(_ModelGCSArtifact, Artifact):
    """An artifact sourced from Google Cloud Storage."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.gcs = _ModelGCSArtifact(
            bucket=self.bucket,
            key=self.key,
            service_account_key_secret=self.service_account_key_secret,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + ["bucket", "key", "service_account_key_secret"]

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

bucket class-attribute instance-attribute

bucket = Field(default=None, description='Bucket is the name of the bucket')

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

key class-attribute instance-attribute

key = Field(..., description='Key is the path in the bucket where the artifact resides')

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

service_account_key_secret class-attribute instance-attribute

service_account_key_secret = Field(default=None, alias='serviceAccountKeySecret', description="ServiceAccountKeySecret is the secret selector to the bucket's service account key")

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

Gauge

Gauge metric component used to record intervals based on the given value.

Notes

See https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics

Source code in src/hera/workflows/metrics.py
class Gauge(_BaseMetric, _ModelGauge):
    """Gauge metric component used to record intervals based on the given value.

    Notes:
        See [https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics](https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics)
    """

    realtime: bool
    value: str

    def _build_metric(self) -> _ModelPrometheus:
        return _ModelPrometheus(
            counter=None,
            gauge=_ModelGauge(realtime=self.realtime, value=self.value),
            help=self.help,
            histogram=None,
            labels=self._build_labels(),
            name=self.name,
            when=self.when,
        )

help instance-attribute

help

labels class-attribute instance-attribute

labels = None

name instance-attribute

name

realtime instance-attribute

realtime

value instance-attribute

value

when class-attribute instance-attribute

when = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

GitArtifact

An artifact sourced from a Git repository.

Source code in src/hera/workflows/artifact.py
class GitArtifact(_ModelGitArtifact, Artifact):
    """An artifact sourced from GitHub."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.git = _ModelGitArtifact(
            branch=self.branch,
            depth=self.depth,
            disable_submodules=self.disable_submodules,
            fetch=self.fetch,
            insecure_ignore_host_key=self.insecure_ignore_host_key,
            password_secret=self.password_secret,
            repo=self.repo,
            revision=self.revision,
            single_branch=self.single_branch,
            ssh_private_key_secret=self.ssh_private_key_secret,
            username_secret=self.username_secret,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + [
            "branch",
            "depth",
            "disable_submodules",
            "fetch",
            "insecure_ignore_host_key",
            "password_secret",
            "repo",
            "revision",
            "single_branch",
            "ssh_private_key_secret",
            "username_secret",
        ]

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

branch class-attribute instance-attribute

branch = Field(default=None, description='Branch is the branch to fetch when `SingleBranch` is enabled')

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

depth class-attribute instance-attribute

depth = Field(default=None, description='Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip')

disable_submodules class-attribute instance-attribute

disable_submodules = Field(default=None, alias='disableSubmodules', description='DisableSubmodules disables submodules during git clone')

fetch class-attribute instance-attribute

fetch = Field(default=None, description='Fetch specifies a number of refs that should be fetched before checkout')

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

insecure_ignore_host_key class-attribute instance-attribute

insecure_ignore_host_key = Field(default=None, alias='insecureIgnoreHostKey', description='InsecureIgnoreHostKey disables SSH strict host key checking during git clone')

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

password_secret class-attribute instance-attribute

password_secret = Field(default=None, alias='passwordSecret', description='PasswordSecret is the secret selector to the repository password')

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

repo class-attribute instance-attribute

repo = Field(..., description='Repo is the git repository')

revision class-attribute instance-attribute

revision = Field(default=None, description='Revision is the git commit, tag, branch to checkout')

single_branch class-attribute instance-attribute

single_branch = Field(default=None, alias='singleBranch', description='SingleBranch enables single branch clone, using the `branch` parameter')

ssh_private_key_secret class-attribute instance-attribute

ssh_private_key_secret = Field(default=None, alias='sshPrivateKeySecret', description='SSHPrivateKeySecret is the secret selector to the repository ssh private key')

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

username_secret class-attribute instance-attribute

username_secret = Field(default=None, alias='usernameSecret', description='UsernameSecret is the secret selector to the repository username')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

GitRepoVolume

A representation of a Git repo that can be mounted as a volume.

Source code in src/hera/workflows/volume.py
class GitRepoVolume(_BaseVolume, _ModelGitRepoVolumeSource):
    """A representation of a Git repo that can be mounted as a volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            git_repo=_ModelGitRepoVolumeSource(
                directory=self.directory, repository=self.repository, revision=self.revision
            ),
        )

directory class-attribute instance-attribute

directory = Field(default=None, description="Target directory name. Must not contain or start with '..'.  If '.' is supplied, the volume directory will be the git repository.  Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.")

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

repository class-attribute instance-attribute

repository = Field(..., description='Repository URL')

revision class-attribute instance-attribute

revision = Field(default=None, description='Commit hash for the specified revision.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

GlusterfsVolume

A representation for a Gluster filesystem volume.

Source code in src/hera/workflows/volume.py
class GlusterfsVolume(_BaseVolume, _ModelGlusterfsVolumeSource):
    """A representation for a Gluster filesystem volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            glusterfs=_ModelGlusterfsVolumeSource(endpoints=self.endpoints, path=self.path, read_only=self.read_only),
        )

endpoints class-attribute instance-attribute

endpoints = Field(..., description='EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

path class-attribute instance-attribute

path = Field(..., description='Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

supports populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

HDFSArtifact

A Hadoop File System artifact.

Note that HDFSArtifact does not inherit from the auto-generated HDFSArtifact because there’s a conflict in path with the base class Artifact. Here, we redefine the HDFS path to hdfs_path to differentiate between the parent class and the child class path.

Source code in src/hera/workflows/artifact.py
class HDFSArtifact(Artifact):
    """A Hadoop File System artifact.

    Note that `HDFSArtifact` does not inherit from the auto-generated `HDFSArtifact` because there's a
    conflict in `path` with the base class `Artifact`. Here, we redefine the HDFS `path` to `hdfs_path` to
    differentiate between the parent class and the child class `path`.
    """

    hdfs_path: str
    addresses: Optional[List[str]] = None
    force: Optional[bool] = None
    hdfs_user: Optional[str]
    krb_c_cache_secret: Optional[SecretKeySelector] = None
    krb_config_config_map: Optional[SecretKeySelector] = None
    krb_keytab_secret: Optional[SecretKeySelector] = None
    krb_realm: Optional[str] = None
    krb_service_principal_name: Optional[str] = None
    krb_username: Optional[str] = None

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.hdfs = _ModelHDFSArtifact(
            addresses=self.addresses,
            force=self.force,
            hdfs_user=self.hdfs_user,
            krb_c_cache_secret=self.krb_c_cache_secret,
            krb_config_config_map=self.krb_config_config_map,
            krb_keytab_secret=self.krb_keytab_secret,
            krb_realm=self.krb_realm,
            krb_service_principal_name=self.krb_service_principal_name,
            krb_username=self.krb_username,
            path=self.hdfs_path,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + [
            "addresses",
            "force",
            "hdfs_path",
            "hdfs_user",
            "krb_c_cache_secret",
            "krb_config_config_map",
            "krb_keytab_secret",
            "krb_realm",
            "krb_service_principal_name",
            "krb_username",
        ]

addresses class-attribute instance-attribute

addresses = None

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

force class-attribute instance-attribute

force = None

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

hdfs_path instance-attribute

hdfs_path

hdfs_user instance-attribute

hdfs_user

krb_c_cache_secret class-attribute instance-attribute

krb_c_cache_secret = None

krb_config_config_map class-attribute instance-attribute

krb_config_config_map = None

krb_keytab_secret class-attribute instance-attribute

krb_keytab_secret = None

krb_realm class-attribute instance-attribute

krb_realm = None

krb_service_principal_name class-attribute instance-attribute

krb_service_principal_name = None

krb_username class-attribute instance-attribute

krb_username = None

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

supports populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

HTTP

HTTP is an implementation of the HTTP template that supports executing HTTP actions in a step/task.

Source code in src/hera/workflows/http_template.py
class HTTP(TemplateMixin, IOMixin, CallableTemplateMixin):
    """`HTTP` is an implementation of the HTTP template that supports executing HTTP actions in a step/task."""

    url: str
    body: Optional[str] = None
    body_from: Optional[HTTPBodySource] = None
    headers: Optional[List[HTTPHeader]] = None
    insecure_skip_verify: Optional[bool] = None
    method: Optional[str] = None
    success_condition: Optional[str] = None
    timeout_seconds: Optional[int] = None

    def _build_http_template(self) -> _ModelHTTP:
        """Builds the generated HTTP sub-template."""
        return _ModelHTTP(
            url=self.url,
            body=self.body,
            body_from=self.body_from,
            headers=self.headers,
            insecure_skip_verify=self.insecure_skip_verify,
            method=self.method,
            success_condition=self.success_condition,
            timeout_seconds=self.timeout_seconds,
        )

    def _build_template(self) -> _ModelTemplate:
        """Builds the HTTP generated `Template`."""
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            http=self._build_http_template(),
            init_containers=self.init_containers,
            memoize=self.memoize,
            metadata=self._build_metadata(),
            inputs=self._build_inputs(),
            outputs=self._build_outputs(),
            name=self.name,
            node_selector=self.node_selector,
            plugin=self.plugin,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

body class-attribute instance-attribute

body = None

body_from class-attribute instance-attribute

body_from = None

daemon class-attribute instance-attribute

daemon = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

headers class-attribute instance-attribute

headers = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

insecure_skip_verify class-attribute instance-attribute

insecure_skip_verify = None

labels class-attribute instance-attribute

labels = None

memoize class-attribute instance-attribute

memoize = None

method class-attribute instance-attribute

method = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

success_condition class-attribute instance-attribute

success_condition = None

synchronization class-attribute instance-attribute

synchronization = None

timeout class-attribute instance-attribute

timeout = None

timeout_seconds class-attribute instance-attribute

timeout_seconds = None

tolerations class-attribute instance-attribute

tolerations = None

url instance-attribute

url

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

supports populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

HTTPArtifact

An artifact sourced from an HTTP URL.

Source code in src/hera/workflows/artifact.py
class HTTPArtifact(_ModelHTTPArtifact, Artifact):
    """An artifact sourced from an HTTP URL."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.http = _ModelHTTPArtifact(
            auth=self.auth,
            headers=self.headers,
            url=self.url,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + ["auth", "headers", "url"]

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

auth class-attribute instance-attribute

auth = Field(default=None, description='Auth contains information for client authentication')

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

headers class-attribute instance-attribute

headers = Field(default=None, description='Headers are an optional list of headers to send with HTTP requests for artifacts')

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

url class-attribute instance-attribute

url = Field(..., description='URL of the artifact')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

supports populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

Histogram

Histogram metric that records the value at the specified bucket intervals.

Notes

See https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics

Source code in src/hera/workflows/metrics.py
class Histogram(_BaseMetric):
    """Histogram metric that records the value at the specified bucket intervals.

    Notes:
        See [https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics](https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics)
    """

    buckets: List[Union[float, _ModelAmount]]  # type: ignore
    value: str

    def _build_buckets(self) -> List[_ModelAmount]:
        return [_ModelAmount(__root__=bucket) if isinstance(bucket, float) else bucket for bucket in self.buckets]

    def _build_metric(self) -> _ModelPrometheus:
        return _ModelPrometheus(
            counter=None,
            gauge=None,
            help=self.help,
            histogram=_ModelHistogram(
                buckets=self._build_buckets(),
                value=self.value,
            ),
            labels=self._build_labels(),
            name=self.name,
            when=self.when,
        )

buckets instance-attribute

buckets

help instance-attribute

help

labels class-attribute instance-attribute

labels = None

name instance-attribute

name

value instance-attribute

value

when class-attribute instance-attribute

when = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

supports populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

HostPathVolume

Representation for a volume that can be mounted from a host path/node location.

Source code in src/hera/workflows/volume.py
class HostPathVolume(_BaseVolume, _ModelHostPathVolumeSource):
    """Representation for a volume that can be mounted from a host path/node location."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(name=self.name, host_path=_ModelHostPathVolumeSource(path=self.path, type=self.type))

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

path class-attribute instance-attribute

path = Field(..., description='Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

type class-attribute instance-attribute

type = Field(default=None, description='Type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ISCSIVolume

Representation of ISCSI volume.

Source code in src/hera/workflows/volume.py
class ISCSIVolume(_BaseVolume, _ModelISCSIVolumeSource):
    """Representation of ISCSI volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            iscsi=_ModelISCSIVolumeSource(
                chap_auth_discovery=self.chap_auth_discovery,
                chap_auth_session=self.chap_auth_session,
                fs_type=self.fs_type,
                initiator_name=self.initiator_name,
                iqn=self.iqn,
                iscsi_interface=self.iscsi_interface,
                lun=self.lun,
                portals=self.portals,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
                target_portal=self.target_portal,
            ),
        )

chap_auth_discovery class-attribute instance-attribute

chap_auth_discovery = Field(default=None, alias='chapAuthDiscovery', description='whether to support iSCSI Discovery CHAP authentication')

chap_auth_session class-attribute instance-attribute

chap_auth_session = Field(default=None, alias='chapAuthSession', description='whether to support iSCSI Session CHAP authentication')

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi')

initiator_name class-attribute instance-attribute

initiator_name = Field(default=None, alias='initiatorName', description='Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection.')

iqn class-attribute instance-attribute

iqn = Field(..., description='Target iSCSI Qualified Name.')

iscsi_interface class-attribute instance-attribute

iscsi_interface = Field(default=None, alias='iscsiInterface', description="iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).")

lun class-attribute instance-attribute

lun = Field(..., description='iSCSI Target Lun number.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

portals class-attribute instance-attribute

portals = Field(default=None, description='iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='CHAP Secret for iSCSI target and initiator authentication')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

target_portal class-attribute instance-attribute

target_portal = Field(..., alias='targetPortal', description='iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).')
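The fields above render into a Kubernetes `iscsi` volume source on the built volume. A minimal sketch of that mapping as a plain dict (the Hera class does this for you; field names follow the Kubernetes API, and only `targetPortal`, `iqn`, and `lun` are required):

```python
def build_iscsi_volume(name, target_portal, iqn, lun, fs_type=None, read_only=None):
    """Sketch of the Kubernetes volume spec the iSCSI fields above map onto.

    Optional fields default to None and are simply omitted from the spec,
    mirroring how unset Pydantic fields are excluded on serialization.
    """
    iscsi = {"targetPortal": target_portal, "iqn": iqn, "lun": lun}
    if fs_type is not None:
        iscsi["fsType"] = fs_type
    if read_only is not None:
        iscsi["readOnly"] = read_only
    return {"name": name, "iscsi": iscsi}
```

The portal and IQN values below are illustrative placeholders, not defaults: `build_iscsi_volume("data", "10.0.0.1:3260", "iqn.2001-04.com.example:storage.disk1", 0, fs_type="ext4")`.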

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

InlineScriptConstructor

InlineScriptConstructor is a script constructor that submits a script as a source to Argo.

This script constructor is focused on taking a Python script/function “as is” for remote execution. The constructor processes the script to infer what parameters it needs to deserialize so the script can execute. The submitted script will contain prefixes such as new imports (e.g. import os, import json) and the necessary json.loads calls to deserialize the parameters, so they are usable by the script just like in a normal Python script/function.

Source code in src/hera/workflows/script.py
class InlineScriptConstructor(ScriptConstructor):
    """`InlineScriptConstructor` is a script constructor that submits a script as a `source` to Argo.

    This script constructor is focused on taking a Python script/function "as is" for remote execution. The
    constructor processes the script to infer what parameters it needs to deserialize so the script can execute.
    The submitted script will contain prefixes such as new imports, e.g. `import os`, `import json`, etc. and
    will contain the necessary `json.loads` calls to deserialize the parameters so they are usable by the script just
    like a normal Python script/function.
    """

    add_cwd_to_sys_path: Optional[bool] = None

    def _get_param_script_portion(self, instance: Script) -> str:
        """Constructs and returns a script that loads the parameters of the specified arguments.

        Since Argo passes parameters through `{{input.parameters.name}}` it can be very cumbersome for users to
        manage that. This creates a script that automatically imports json and loads/adds code to interpret
        each independent argument into the script.

        Returns:
        -------
        str
            The string representation of the script to load.
        """
        inputs = instance._build_inputs()
        assert inputs
        extract = "import json\n"
        for param in sorted(inputs.parameters or [], key=lambda x: x.name):
            # Hera does not know what the content of the `InputFrom` is, coming from another task. In some cases
            # non-JSON encoded strings are returned, which fail the loads, but they can be used as plain strings
            # which is why this captures that in an except. This is only used for `InputFrom` cases as the extra
            # payload of the script is not necessary when regular input is set on the task via `func_params`
            if param.value_from is None:
                extract += f"""try: {param.name} = json.loads(r'''{{{{inputs.parameters.{param.name}}}}}''')\n"""
                extract += f"""except: {param.name} = r'''{{{{inputs.parameters.{param.name}}}}}'''\n"""
        return textwrap.dedent(extract) if extract != "import json\n" else ""

    def generate_source(self, instance: Script) -> str:
        """Assembles and returns a script representation of the given function.

        This also assembles any extra script material prefixed to the string source.
        The script is expected to be a callable function the client is interested in submitting
        for execution on Argo and the `script_extra` material represents the parameter loading part obtained, likely,
        through `get_param_script_portion`.

        Returns:
        -------
        str
            Final formatted script.
        """
        if not callable(instance.source):
            assert isinstance(instance.source, str)
            return instance.source
        args = inspect.getfullargspec(instance.source).args
        script = ""
        # Argo will save the script as a file and run it with cmd:
        # - python /argo/staging/script
        # However, this prevents the script from importing modules in its cwd,
        # since it's looking for files relative to the script path.
        # We fix this by appending the cwd path to sys:
        if instance.add_cwd_to_sys_path or self.add_cwd_to_sys_path:
            script = "import os\nimport sys\nsys.path.append(os.getcwd())\n"

        script_extra = self._get_param_script_portion(instance) if args else None
        if script_extra:
            script += copy.deepcopy(script_extra)
            script += "\n"

        # We use ast parse/unparse to get the source code of the function
        # in order to have consistent looking functions and getting rid of any comments
        # parsing issues.
        # See https://github.com/argoproj-labs/hera/issues/572
        content = roundtrip(textwrap.dedent(inspect.getsource(instance.source))).splitlines()
        for i, line in enumerate(content):
            if line.startswith("def") or line.startswith("async def"):
                break

        s = "\n".join(content[i + 1 :])
        script += textwrap.dedent(s)
        return textwrap.dedent(script)
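The parameter-loading prefix built by `_get_param_script_portion` can be illustrated with a self-contained sketch. This is a simplification of the source above (the real implementation also skips parameters that carry a `value_from`): each parameter is first parsed as JSON, and if that fails the raw Argo-templated string is kept as-is.

```python
def build_param_prefix(param_names):
    """Sketch of the parameter-loading prefix prepended to a submitted script.

    For each parameter, try json.loads on the Argo-templated value first;
    fall back to the raw string when the value is not valid JSON, mirroring
    the try/except pair in the source above.
    """
    lines = ["import json"]
    for name in sorted(param_names):
        lines.append(f"try: {name} = json.loads(r'''{{{{inputs.parameters.{name}}}}}''')")
        lines.append(f"except: {name} = r'''{{{{inputs.parameters.{name}}}}}'''")
    return "\n".join(lines)
```

For `build_param_prefix(["b", "a"])` the prefix declares `a` before `b`, since parameters are sorted by name, exactly as in the loop in `_get_param_script_portion`.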

add_cwd_to_sys_path class-attribute instance-attribute

add_cwd_to_sys_path = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

generate_source

generate_source(instance)

Assembles and returns a script representation of the given function.

This also assembles any extra script material prefixed to the string source. The script is expected to be a callable function the client is interested in submitting for execution on Argo, and the script_extra material represents the parameter-loading part, likely obtained through get_param_script_portion.

Returns:

str Final formatted script.

Source code in src/hera/workflows/script.py
def generate_source(self, instance: Script) -> str:
    """Assembles and returns a script representation of the given function.

    This also assembles any extra script material prefixed to the string source.
    The script is expected to be a callable function the client is interested in submitting
    for execution on Argo and the `script_extra` material represents the parameter loading part obtained, likely,
    through `get_param_script_portion`.

    Returns:
    -------
    str
        Final formatted script.
    """
    if not callable(instance.source):
        assert isinstance(instance.source, str)
        return instance.source
    args = inspect.getfullargspec(instance.source).args
    script = ""
    # Argo will save the script as a file and run it with cmd:
    # - python /argo/staging/script
    # However, this prevents the script from importing modules in its cwd,
    # since it's looking for files relative to the script path.
    # We fix this by appending the cwd path to sys:
    if instance.add_cwd_to_sys_path or self.add_cwd_to_sys_path:
        script = "import os\nimport sys\nsys.path.append(os.getcwd())\n"

    script_extra = self._get_param_script_portion(instance) if args else None
    if script_extra:
        script += copy.deepcopy(script_extra)
        script += "\n"

    # We use ast parse/unparse to get the source code of the function
    # in order to have consistent looking functions and getting rid of any comments
    # parsing issues.
    # See https://github.com/argoproj-labs/hera/issues/572
    content = roundtrip(textwrap.dedent(inspect.getsource(instance.source))).splitlines()
    for i, line in enumerate(content):
        if line.startswith("def") or line.startswith("async def"):
            break

    s = "\n".join(content[i + 1 :])
    script += textwrap.dedent(s)
    return textwrap.dedent(script)

transform_script_template_post_build

transform_script_template_post_build(instance, script)

A hook to transform the generated script template.

Source code in src/hera/workflows/script.py
def transform_script_template_post_build(
    self, instance: "Script", script: _ModelScriptTemplate
) -> _ModelScriptTemplate:
    """A hook to transform the generated script template."""
    return script

transform_template_post_build

transform_template_post_build(instance, template)

A hook to transform the generated template.

Source code in src/hera/workflows/script.py
def transform_template_post_build(self, instance: "Script", template: _ModelTemplate) -> _ModelTemplate:
    """A hook to transform the generated template."""
    return template

transform_values

transform_values(cls, values)

A function that will be invoked by the root validator of the Script class.

Source code in src/hera/workflows/script.py
def transform_values(self, cls: Type["Script"], values: Any) -> Any:
    """A function that will be invoked by the root validator of the Script class."""
    return values

InvalidDispatchType

Exception raised when Hera attempts to dispatch a hook and it fails to do so.

Source code in src/hera/workflows/exceptions.py
class InvalidDispatchType(WorkflowsException):
    """Exception raised when Hera attempts to dispatch a hook and it fails to do so."""

    ...

InvalidTemplateCall

Exception raised when an invalid template call is performed.

Source code in src/hera/workflows/exceptions.py
class InvalidTemplateCall(WorkflowsException):
    """Exception raised when an invalid template call is performed."""

    ...

InvalidType

Exception raised when an invalid type is submitted to a Hera object’s field or functionality.

Source code in src/hera/workflows/exceptions.py
class InvalidType(WorkflowsException):
    """Exception raised when an invalid type is submitted to a Hera object's field or functionality."""

    ...

Metric

Prometheus metric that can be used at the workflow or task/template level.

Notes

See https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics

Source code in src/hera/workflows/metrics.py
class Metric(_BaseMetric):
    """Prometheus metric that can be used at the workflow or task/template level.

    Notes:
        See [https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics](https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics)
    """

    counter: Optional[Counter] = None
    gauge: Optional[Gauge] = None
    histogram: Optional[Histogram] = None

    def _build_metric(self) -> _ModelPrometheus:
        return _ModelPrometheus(
            counter=_ModelCounter(value=self.counter.value) if self.counter else None,
            gauge=_ModelGauge(realtime=self.gauge.realtime, value=self.gauge.value) if self.gauge else None,
            help=self.help,
            histogram=_ModelHistogram(
                buckets=self.histogram.buckets,
                value=self.histogram.value,
            )
            if self.histogram
            else None,
            labels=self._build_labels(),
            name=self.name,
            when=self.when,
        )
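The `_build_metric` method above maps the Hera fields onto the Argo `prometheus` metric spec. A hedged sketch of that mapping as a plain dict (field names follow the Argo API; normally exactly one of counter/gauge/histogram is set):

```python
def build_prometheus_metric(name, help_text, counter_value=None,
                            gauge=None, labels=None, when=None):
    """Sketch of the Argo `prometheus` metric spec that Metric builds.

    `help_text` maps onto the spec's `help` field (renamed here to avoid
    shadowing the builtin); `labels` is a plain dict expanded into the
    key/value pairs Argo expects.
    """
    spec = {"name": name, "help": help_text}
    if counter_value is not None:
        spec["counter"] = {"value": counter_value}
    if gauge is not None:
        # expected as a dict like {"realtime": True, "value": "{{duration}}"}
        spec["gauge"] = gauge
    if labels:
        spec["labels"] = [{"key": k, "value": v} for k, v in labels.items()]
    if when is not None:
        spec["when"] = when
    return spec
```

An illustrative use (names are placeholders): counting failed steps with `build_prometheus_metric("step_errors", "count of failed steps", counter_value="1", when="{{status}} == Failed")`.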

counter class-attribute instance-attribute

counter = None

gauge class-attribute instance-attribute

gauge = None

help instance-attribute

help

histogram class-attribute instance-attribute

histogram = None

labels class-attribute instance-attribute

labels = None

name instance-attribute

name

when class-attribute instance-attribute

when = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Metrics

A collection of Prometheus metrics.

Notes

See https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics

Source code in src/hera/workflows/metrics.py
class Metrics(BaseMixin):
    """A collection of Prometheus metrics.

    Notes:
        See [https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics](https://argoproj.github.io/argo-workflows/metrics/#grafana-dashboard-for-argo-controller-metrics)
    """

    metrics: List[Metric]

    def _build_metrics(self) -> List[_ModelPrometheus]:
        return [metric._build_metric() for metric in self.metrics]

metrics instance-attribute

metrics

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

NFSVolume

A network file system volume representation.

Source code in src/hera/workflows/volume.py
class NFSVolume(_BaseVolume, _ModelNFSVolumeSource):
    """A network file system volume representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name, nfs=_ModelNFSVolumeSource(server=self.server, path=self.path, read_only=self.read_only)
        )
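As `_build_volume` shows, the three NFS fields map directly onto a Kubernetes `nfs` volume source. A minimal sketch of the built spec as a plain dict (field names follow the Kubernetes API; the `server` and `path` values in the usage note are illustrative):

```python
def build_nfs_volume(name, server, path, read_only=None):
    """Sketch of the Kubernetes `nfs` volume source NFSVolume builds.

    `server` is the NFS server hostname or IP and `path` the exported
    directory; `readOnly` is omitted when unset, defaulting to read-write.
    """
    nfs = {"server": server, "path": path}
    if read_only is not None:
        nfs["readOnly"] = read_only
    return {"name": name, "nfs": nfs}
```

On the Hera side the `mount_path` field (inherited from the mount mixin) is rendered separately as a volume mount, not as part of this volume source.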

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

path class-attribute instance-attribute

path = Field(..., description='Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

server class-attribute instance-attribute

server = Field(..., description='Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

NoneArchiveStrategy

NoneArchiveStrategy indicates artifacts should skip serialization.

Source code in src/hera/workflows/archive.py
class NoneArchiveStrategy(ArchiveStrategy):
    """`NoneArchiveStrategy` indicates artifacts should skip serialization."""

    def _build_archive_strategy(self) -> _ModelArchiveStrategy:
        return _ModelArchiveStrategy(none=_ModelNoneStrategy())
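The built strategy is an archive spec whose `none` field is an empty object, which tells Argo to store the artifact as-is rather than serializing it. A sketch of how that slots into an artifact spec (plain dicts; field names follow the Argo API):

```python
def attach_none_archive(artifact_spec):
    """Sketch: attach a `none` archive strategy to an artifact spec,
    so the artifact is stored without tar/gzip serialization."""
    spec = dict(artifact_spec)  # copy so the input spec is untouched
    spec["archive"] = {"none": {}}
    return spec
```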

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

OSSArtifact

An artifact sourced from OSS.

Source code in src/hera/workflows/artifact.py
class OSSArtifact(_ModelOSSArtifact, Artifact):
    """An artifact sourced from OSS."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.oss = _ModelOSSArtifact(
            access_key_secret=self.access_key_secret,
            bucket=self.bucket,
            create_bucket_if_not_present=self.create_bucket_if_not_present,
            endpoint=self.endpoint,
            key=self.key,
            lifecycle_rule=self.lifecycle_rule,
            secret_key_secret=self.secret_key_secret,
            security_token=self.security_token,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + [
            "access_key_secret",
            "bucket",
            "create_bucket_if_not_present",
            "endpoint",
            "key",
            "lifecycle_rule",
            "secret_key_secret",
            "security_token",
        ]

access_key_secret class-attribute instance-attribute

access_key_secret = Field(default=None, alias='accessKeySecret', description="AccessKeySecret is the secret selector to the bucket's access key")

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

bucket class-attribute instance-attribute

bucket = Field(default=None, description='Bucket is the name of the bucket')

create_bucket_if_not_present class-attribute instance-attribute

create_bucket_if_not_present = Field(default=None, alias='createBucketIfNotPresent', description="CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist")

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

endpoint class-attribute instance-attribute

endpoint = Field(default=None, description='Endpoint is the hostname of the bucket endpoint')

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

key class-attribute instance-attribute

key = Field(..., description='Key is the path in the bucket where the artifact resides')

lifecycle_rule class-attribute instance-attribute

lifecycle_rule = Field(default=None, alias='lifecycleRule', description="LifecycleRule specifies how to manage bucket's lifecycle")

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, applied when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

secret_key_secret class-attribute instance-attribute

secret_key_secret = Field(default=None, alias='secretKeySecret', description="SecretKeySecret is the secret selector to the bucket's secret key")

security_token class-attribute instance-attribute

security_token = Field(default=None, alias='securityToken', description="SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm")

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

Operator

Operator is a representation of mathematical comparison symbols.

This can be used on tasks that execute conditionally based on the output of another task.

Notes

The task that outputs its result needs to do so using stdout. See examples for a sample workflow.

Source code in src/hera/workflows/operator.py
class Operator(Enum):
    """Operator is a representation of mathematical comparison symbols.

    This can be used on tasks that execute conditionally based on the output of another task.

    Notes:
        The task that outputs its result needs to do so using stdout. See `examples` for a sample workflow.
    """

    does_not_exist = "DoesNotExist"
    exists = "Exists"
    gt = "Gt"
    in_ = "In"
    lt = "Lt"
    not_in = "NotIn"

    equals = "=="
    greater = ">"
    less = "<"
    greater_equal = ">="
    less_equal = "<="
    not_equal = "!="
    or_ = "||"
    and_ = "&&"
    starts_with = "=~"

    def __str__(self) -> str:
        """Assembles the `value` representation of the enum and returns it as a string."""
        return str(self.value)

and_ class-attribute instance-attribute

and_ = '&&'

does_not_exist class-attribute instance-attribute

does_not_exist = 'DoesNotExist'

equals class-attribute instance-attribute

equals = '=='

exists class-attribute instance-attribute

exists = 'Exists'

greater class-attribute instance-attribute

greater = '>'

greater_equal class-attribute instance-attribute

greater_equal = '>='

gt class-attribute instance-attribute

gt = 'Gt'

in_ class-attribute instance-attribute

in_ = 'In'

less class-attribute instance-attribute

less = '<'

less_equal class-attribute instance-attribute

less_equal = '<='

lt class-attribute instance-attribute

lt = 'Lt'

not_equal class-attribute instance-attribute

not_equal = '!='

not_in class-attribute instance-attribute

not_in = 'NotIn'

or_ class-attribute instance-attribute

or_ = '||'

starts_with class-attribute instance-attribute

starts_with = '=~'

Parallel

Parallel is used to add a list of steps which will run in parallel.

Parallel implements the context manager interface, so it can be used with a with statement; any hera.workflows.steps.Step objects instantiated within the context are added to Parallel’s list of sub_steps.

Source code in src/hera/workflows/steps.py
class Parallel(
    ContextMixin,
    SubNodeMixin,
):
    """Parallel is used to add a list of steps which will run in parallel.

    Parallel implements the contextmanager interface so allows usage of `with`, under which any
    `hera.workflows.steps.Step` objects instantiated will be added to Parallel's list of sub_steps.
    """

    sub_steps: List[Union[Step, _ModelWorkflowStep]] = []

    def _add_sub(self, node: Any):
        if not isinstance(node, Step):
            raise InvalidType(type(node))
        self.sub_steps.append(node)

    def _build_step(self) -> List[_ModelWorkflowStep]:
        steps = []
        for step in self.sub_steps:
            if isinstance(step, Step):
                steps.append(step._build_as_workflow_step())
            elif isinstance(step, _ModelWorkflowStep):
                steps.append(step)
            else:
                raise InvalidType(type(step))
        return steps

sub_steps class-attribute instance-attribute

sub_steps = []

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Parameter

A Parameter is used to pass values in and out of templates.

They are used to declare input and output parameters of templates, and are used by Steps and Tasks to assign values.

Source code in src/hera/workflows/parameter.py
class Parameter(_ModelParameter):
    """A `Parameter` is used to pass values in and out of templates.

    They are to declare input and output parameters in the case of templates, and are used
    for Steps and Tasks to assign values.
    """

    name: Optional[str] = None  # type: ignore

    output: Optional[bool] = False
    """used to specify parameter as an output in function signature annotations"""

    def _check_name(self):
        if not self.name:
            raise ValueError("name cannot be `None` or empty when used")

    # `MISSING` is the default value so that `Parameter` serialization understands the difference between a
    # missing value and a value of `None`, as set by a user. With this, when something sets a value of `None` it is
    # taken as a proper `None`. By comparison, if a user does not set a value, it is taken as `MISSING` and therefore
    # not serialized. This happens because the values if turned into an _actual_ `None` by `serialize` and therefore
    # Pydantic will not include it in the YAML that is passed to Argo
    value: Optional[Any] = MISSING
    default: Optional[Any] = MISSING

    @root_validator(pre=True, allow_reuse=True)
    def _check_values(cls, values):
        if values.get("value") is not None and values.get("value_from") is not None:
            raise ValueError("Cannot specify both `value` and `value_from` when instantiating `Parameter`")

        values["value"] = serialize(values.get("value", MISSING))
        values["default"] = serialize(values.get("default", MISSING))

        return values

    def __str__(self):
        """Represent the parameter as a string by pointing to its value.

        This is useful in situations where we want to concatenate string values such as
        Task.args=["echo", wf.get_parameter("message")].
        """
        if self.value is None:
            raise ValueError("Cannot represent `Parameter` as string as `value` is not set")
        return self.value

    def with_name(self, name: str) -> Parameter:
        """Returns a copy of the parameter with the name set to the value."""
        p = self.copy(deep=True)
        p.name = name
        return p

    def as_input(self) -> _ModelParameter:
        """Assembles the parameter for use as an input of a template."""
        self._check_name()
        return _ModelParameter(
            name=self.name,
            description=self.description,
            default=self.default,
            enum=self.enum,
            value=self.value,
            value_from=self.value_from,
        )

    def as_argument(self) -> _ModelParameter:
        """Assembles the parameter for use as an argument of a step or a task."""
        # Setting a default value when used as an argument is a no-op so we exclude it as it would get overwritten by
        # `value` or `value_from` (one of which is required)
        # Overwrite ref: https://github.com/argoproj/argo-workflows/blob/781675ddcf6f1138d697cb9c71dae484daa0548b/workflow/common/util.go#L126-L139
        # One of value/value_from required ref: https://github.com/argoproj/argo-workflows/blob/ab178bb0b36a5ce34b4c1302cf4855879a0e8cf5/workflow/validate/validate.go#L794-L798
        self._check_name()
        return _ModelParameter(
            name=self.name,
            global_name=self.global_name,
            description=self.description,
            enum=self.enum,
            value=self.value,
            value_from=self.value_from,
        )

    def as_output(self) -> _ModelParameter:
        """Assembles the parameter for use as an output of a template."""
        # Only `value` and `value_from` are valid here
        # see https://github.com/argoproj/argo-workflows/blob/e3254eca115c9dd358e55d16c6a3d41403c29cae/workflow/validate/validate.go#L1067
        self._check_name()
        return _ModelParameter(
            name=self.name,
            global_name=self.global_name,
            description=self.description,
            value=self.value,
            value_from=self.value_from,
        )

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input parameter annotations."""
        return ["enum", "description", "default", "name", "value", "value_from"]

default class-attribute instance-attribute

default = MISSING

description class-attribute instance-attribute

description = Field(default=None, description='Description is the parameter description')

enum class-attribute instance-attribute

enum = Field(default=None, description='Enum holds a list of string values to choose from, for the actual value of the parameter')

global_name class-attribute instance-attribute

global_name = Field(default=None, alias='globalName', description="GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}} and in workflow.status.outputs.parameters")

name class-attribute instance-attribute

name = None

output class-attribute instance-attribute

output = False

used to specify parameter as an output in function signature annotations

value class-attribute instance-attribute

value = MISSING

value_from class-attribute instance-attribute

value_from = Field(default=None, alias='valueFrom', description="ValueFrom is the source for the output parameter's value")

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_argument

as_argument()

Assembles the parameter for use as an argument of a step or a task.

Source code in src/hera/workflows/parameter.py
def as_argument(self) -> _ModelParameter:
    """Assembles the parameter for use as an argument of a step or a task."""
    # Setting a default value when used as an argument is a no-op so we exclude it as it would get overwritten by
    # `value` or `value_from` (one of which is required)
    # Overwrite ref: https://github.com/argoproj/argo-workflows/blob/781675ddcf6f1138d697cb9c71dae484daa0548b/workflow/common/util.go#L126-L139
    # One of value/value_from required ref: https://github.com/argoproj/argo-workflows/blob/ab178bb0b36a5ce34b4c1302cf4855879a0e8cf5/workflow/validate/validate.go#L794-L798
    self._check_name()
    return _ModelParameter(
        name=self.name,
        global_name=self.global_name,
        description=self.description,
        enum=self.enum,
        value=self.value,
        value_from=self.value_from,
    )

as_input

as_input()

Assembles the parameter for use as an input of a template.

Source code in src/hera/workflows/parameter.py
def as_input(self) -> _ModelParameter:
    """Assembles the parameter for use as an input of a template."""
    self._check_name()
    return _ModelParameter(
        name=self.name,
        description=self.description,
        default=self.default,
        enum=self.enum,
        value=self.value,
        value_from=self.value_from,
    )

as_output

as_output()

Assembles the parameter for use as an output of a template.

Source code in src/hera/workflows/parameter.py
def as_output(self) -> _ModelParameter:
    """Assembles the parameter for use as an output of a template."""
    # Only `value` and `value_from` are valid here
    # see https://github.com/argoproj/argo-workflows/blob/e3254eca115c9dd358e55d16c6a3d41403c29cae/workflow/validate/validate.go#L1067
    self._check_name()
    return _ModelParameter(
        name=self.name,
        global_name=self.global_name,
        description=self.description,
        value=self.value,
        value_from=self.value_from,
    )

with_name

with_name(name)

Returns a copy of the parameter with the name set to the value.

Source code in src/hera/workflows/parameter.py
def with_name(self, name: str) -> Parameter:
    """Returns a copy of the parameter with the name set to the value."""
    p = self.copy(deep=True)
    p.name = name
    return p

PhotonPersistentDiskVolume

A Photon Persistent Disk representation.

Source code in src/hera/workflows/volume.py
class PhotonPersistentDiskVolume(_BaseVolume, _ModelPhotonPersistentDiskVolumeSource):
    """A Photon Persisten Disk representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            photon_persistent_disk=_ModelPhotonPersistentDiskVolumeSource(fs_type=self.fs_type, pd_id=self.pd_id),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

pd_id class-attribute instance-attribute

pd_id = Field(..., alias='pdID', description='ID that identifies Photon Controller persistent disk')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

PortworxVolume

PortworxVolume represents a Portworx volume to mount to a container.

Source code in src/hera/workflows/volume.py
class PortworxVolume(_BaseVolume, _ModelPortworxVolumeSource):
    """`PortworxVolume` represents a Portworx volume to mount to a container."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            portworx_volume=_ModelPortworxVolumeSource(
                fs_type=self.fs_type, read_only=self.read_only, volume_id=self.volume_id
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_id class-attribute instance-attribute

volume_id = Field(..., alias='volumeID', description='VolumeID uniquely identifies a Portworx volume')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ProjectedVolume

ProjectedVolume represents a projected volume to mount to a container.

Source code in src/hera/workflows/volume.py
class ProjectedVolume(_BaseVolume, _ModelProjectedVolumeSource):
    """`ProjectedVolume` represents a projected volume to mount to a container."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name, projected=_ModelProjectedVolumeSource(default_mode=self.default_mode, sources=self.sources)
        )

default_mode class-attribute instance-attribute

default_mode = Field(default=None, alias='defaultMode', description='Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

sources class-attribute instance-attribute

sources = Field(default=None, description='list of volume projections')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

QuobyteVolume

QuobyteVolume represents a Quobyte volume to mount to a container.

Source code in src/hera/workflows/volume.py
class QuobyteVolume(_BaseVolume, _ModelQuobyteVolumeSource):
    """`QuobyteVolume` represents a Quobyte volume to mount to a container."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            quobyte=_ModelQuobyteVolumeSource(
                group=self.group,
                read_only=self.read_only,
                registry=self.registry,
                tenant=self.tenant,
                user=self.user,
                volume=self.volume,
            ),
        )

group class-attribute instance-attribute

group = Field(default=None, description='Group to map volume access to Default is no group')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

registry class-attribute instance-attribute

registry = Field(..., description='Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

tenant class-attribute instance-attribute

tenant = Field(default=None, description='Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin')

user class-attribute instance-attribute

user = Field(default=None, description='User to map volume access to. Defaults to serviceaccount user')

volume class-attribute instance-attribute

volume = Field(..., description='Volume is a string that references an already created Quobyte volume by name.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

RBDVolume

An RBD volume representation.

Source code in src/hera/workflows/volume.py
class RBDVolume(_BaseVolume, _ModelRBDVolumeSource):
    """An RDB volume representation."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            rbd=_ModelRBDVolumeSource(
                fs_type=self.fs_type,
                image=self.image,
                keyring=self.keyring,
                monitors=self.monitors,
                pool=self.pool,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
                user=self.user,
            ),
        )
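The build step above copies the user-facing fields into a fresh volume model. As a rough, dependency-free sketch of the resulting Kubernetes volume structure (plain dicts with K8s-style field names; `build_rbd_volume` is a hypothetical stand-in — in Hera the Pydantic model handles unset fields for you):

```python
def build_rbd_volume(name, image, monitors, pool=None, user=None,
                     fs_type=None, read_only=None):
    """Assemble a Kubernetes-style volume dict the way _build_volume does,
    dropping the optional fields that were left unset."""
    rbd = {"image": image, "monitors": monitors}
    optional = {"pool": pool, "user": user, "fsType": fs_type, "readOnly": read_only}
    rbd.update({k: v for k, v in optional.items() if v is not None})
    return {"name": name, "rbd": rbd}


vol = build_rbd_volume("data", image="disk-img", monitors=["10.0.0.1:6789"], pool="rbd")
print(vol)
```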

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd')

image class-attribute instance-attribute

image = Field(..., description='The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

keyring class-attribute instance-attribute

keyring = Field(default=None, description='Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

monitors class-attribute instance-attribute

monitors = Field(..., description='A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

pool class-attribute instance-attribute

pool = Field(default=None, description='The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='SecretRef is the name of the authentication secret for RBDUser. If provided, it overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

user class-attribute instance-attribute

user = Field(default=None, description='The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

RawArtifact

A raw bytes artifact representation.

Source code in src/hera/workflows/artifact.py
class RawArtifact(_ModelRawArtifact, Artifact):
    """A raw bytes artifact representation."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.raw = _ModelRawArtifact(data=self.data)
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + ["data"]
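The override above builds the base artifact and then attaches a `raw` section carrying the literal string contents. A rough sketch of the resulting structure (plain dicts rather than Hera models; `build_raw_artifact` is a hypothetical illustration, not the library API):

```python
def build_raw_artifact(name, data, path=None):
    """Mimic RawArtifact._build_artifact: base artifact fields plus a raw
    section whose 'data' is the literal string contents of the artifact."""
    artifact = {"name": name}
    if path is not None:
        artifact["path"] = path  # where the artifact is placed in the container
    artifact["raw"] = {"data": data}
    return artifact


art = build_raw_artifact("config", data="key: value\n", path="/tmp/config.yaml")
print(art)
```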

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

data class-attribute instance-attribute

data = Field(..., description='Data is the string contents of the artifact')

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact; must be a value between 0 and 0777, set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact

Resource

Resource is a representation of a K8s resource that can be created by Argo.

The resource is a callable step that can be invoked in a DAG/Workflow. The resource can create any K8s resource, such as other workflows, workflow templates, daemons, etc., as specified by the manifest field of the resource. The manifest field is a canonical YAML that is submitted to K8s by Argo. Note that the manifest is a union of multiple types. The manifest can be a string, in which case it is assumed to be YAML. Otherwise, if it’s a Hera object, it is automatically converted to the corresponding YAML representation.

Source code in src/hera/workflows/resource.py
class Resource(CallableTemplateMixin, TemplateMixin, SubNodeMixin, IOMixin):
    """`Resource` is a representation of a K8s resource that can be created by Argo.

    The resource is a callable step that can be invoked in a DAG/Workflow. The resource can create any K8s resource,
    such as other workflows, workflow templates, daemons, etc, as specified by the `manifest` field of the resource.
    The manifest field is a canonical YAML that is submitted to K8s by Argo. Note that the manifest is a union of
    multiple types. The manifest can be a string, in which case it is assumed to be YAML. Otherwise, if it's a Hera
    object, it is automatically converted to the corresponding YAML representation.
    """

    action: str
    failure_condition: Optional[str] = None
    flags: Optional[List[str]] = None
    manifest: Optional[Union[str, Workflow, CronWorkflow, WorkflowTemplate]] = None
    manifest_from: Optional[ManifestFrom] = None
    merge_strategy: Optional[str] = None
    set_owner_reference: Optional[bool] = None
    success_condition: Optional[str] = None

    def _build_manifest(self) -> Optional[str]:
        if isinstance(self.manifest, (Workflow, CronWorkflow, WorkflowTemplate)):
            # hack to appease raw yaml string comparison
            return self.manifest.to_yaml().replace("'{{", "{{").replace("}}'", "}}")
        return self.manifest

    def _build_resource_template(self) -> _ModelResourceTemplate:
        return _ModelResourceTemplate(
            action=self.action,
            failure_condition=self.failure_condition,
            flags=self.flags,
            manifest=self._build_manifest(),
            manifest_from=self.manifest_from,
            merge_strategy=self.merge_strategy,
            set_owner_reference=self.set_owner_reference,
            success_condition=self.success_condition,
        )

    def _build_template(self) -> _ModelTemplate:
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            daemon=self.daemon,
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            metrics=self._build_metrics(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            parallelism=self.parallelism,
            plugin=self.plugin,
            pod_spec_patch=self.pod_spec_patch,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            resource=self._build_resource_template(),
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )
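Note the string-replace hack in `_build_manifest` above: YAML serializers quote scalars that begin with `{`, so an Argo template expression such as `{{workflow.name}}` round-trips as `'{{workflow.name}}'`, which Argo would treat as a literal. The normalization in isolation (a minimal sketch; `normalize_manifest` is an illustrative name, not a Hera function):

```python
def normalize_manifest(yaml_str: str) -> str:
    # Mimic Resource._build_manifest: strip the quotes that YAML
    # serialization adds around Argo template expressions, so that
    # "'{{...}}'" becomes the bare "{{...}}" Argo expects.
    return yaml_str.replace("'{{", "{{").replace("}}'", "}}")


print(normalize_manifest("name: '{{inputs.parameters.name}}'"))
```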

action instance-attribute

action

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

daemon class-attribute instance-attribute

daemon = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

failure_condition class-attribute instance-attribute

failure_condition = None

flags class-attribute instance-attribute

flags = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

manifest class-attribute instance-attribute

manifest = None

manifest_from class-attribute instance-attribute

manifest_from = None

memoize class-attribute instance-attribute

memoize = None

merge_strategy class-attribute instance-attribute

merge_strategy = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

set_owner_reference class-attribute instance-attribute

set_owner_reference = None

sidecars class-attribute instance-attribute

sidecars = None

success_condition class-attribute instance-attribute

success_condition = None

synchronization class-attribute instance-attribute

synchronization = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ResourceEnv

ResourceEnv exposes a resource field as an environment variable.

Only resource limits and requests such as limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage are currently supported.

Source code in src/hera/workflows/env.py
class ResourceEnv(_BaseEnv):
    """`ResourceEnv` exposes a resource field as an environment variable.

    Only resource limits and requests such as `limits.cpu`, `limits.memory`, `limits.ephemeral-storage`,
    `requests.cpu`, `requests.memory` and `requests.ephemeral-storage` are currently supported.
    """

    resource: str
    """the name of the resource to select, such as `limit.cpu`, `limits.memory`, etc."""

    container_name: Optional[str] = None
    """
    a pod can contain multiple containers, so this field helps select the right container whose resources should 
    be exposed as an env variable.
    """

    divisor: Optional[Quantity] = None
    """Specifies the output format of the exposed resources, defaults to `1` on Argo's side"""

    def build(self) -> _ModelEnvVar:
        """Builds the `ResourceEnv` into a Hera auto-generated environment variable model."""
        return _ModelEnvVar(
            name=self.name,
            value_from=_ModelEnvVarSource(
                resource_field_ref=_ModelResourceFieldSelector(
                    container_name=self.container_name,
                    divisor=self.divisor,
                    resource=self.resource,
                )
            ),
        )
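The `build` method above nests the three selector fields under a `resourceFieldRef` source. A rough sketch of the Kubernetes env-var structure it produces (plain dicts; `build_resource_env` is a hypothetical stand-in for illustration):

```python
def build_resource_env(name, resource, container_name=None, divisor=None):
    """Mimic ResourceEnv.build: expose a container's resource field
    (e.g. 'limits.cpu') as an environment variable via resourceFieldRef."""
    selector = {"resource": resource}
    if container_name is not None:
        selector["containerName"] = container_name  # disambiguate multi-container pods
    if divisor is not None:
        selector["divisor"] = divisor  # output format; Argo defaults this to 1
    return {"name": name, "valueFrom": {"resourceFieldRef": selector}}


env = build_resource_env("CPU_LIMIT", "limits.cpu", container_name="main")
print(env)
```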

container_name class-attribute instance-attribute

container_name = None

a pod can contain multiple containers, so this field helps select the right container whose resources should be exposed as an env variable.

divisor class-attribute instance-attribute

divisor = None

Specifies the output format of the exposed resources, defaults to 1 on Argo’s side

name instance-attribute

name

the name of the environment variable. This is universally required irrespective of the type of env variable

resource instance-attribute

resource

the name of the resource to select, such as limits.cpu, limits.memory, etc.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Builds the ResourceEnv into a Hera auto-generated environment variable model.

Source code in src/hera/workflows/env.py
def build(self) -> _ModelEnvVar:
    """Builds the `ResourceEnv` into a Hera auto-generated environment variable model."""
    return _ModelEnvVar(
        name=self.name,
        value_from=_ModelEnvVarSource(
            resource_field_ref=_ModelResourceFieldSelector(
                container_name=self.container_name,
                divisor=self.divisor,
                resource=self.resource,
            )
        ),
    )

Resources

A representation of a collection of resources that are requested to be consumed by a task for execution.

This follows the K8s definition of resources.

Attributes:

Name Type Description
cpu_request Optional[Union[float, int, str]]

The number of CPUs to request, either as a fraction (millicpu), whole number, or a string.

cpu_limit Optional[Union[float, int, str]]

The limit of CPUs to request, either as a fraction (millicpu), whole number, or a string.

memory_request Optional[str]

The amount of memory to request.

memory_limit Optional[str]

The memory limit of the pod.

ephemeral_request Optional[str]

The amount of ephemeral storage to request.

ephemeral_limit Optional[str]

The ephemeral storage limit of the pod.

gpus Optional[Union[int, str]]

The number of GPUs to request.

gpu_flag Optional[str]

The GPU flag to use for identifying how many GPUs to mount to a pod. This is dependent on the cloud provider.

custom_resources Optional[Dict]

Any custom resources to request. This is dependent on the cloud provider.

Notes

Most of the fields that support a union of int and str accept either a number for the resource, such as 1 CPU or 2 GPUs, a str representation of that numerical resource, such as ‘1’ CPU or ‘2’ GPUs, or a to-be-computed value such as {{inputs.parameters.cpu_request}}. This means tasks, steps, etc., can be stitched together so that one task/step computes the resource requirements and then outputs them to the next step/task.

Source code in src/hera/workflows/resources.py
class Resources(_BaseModel):
    """A representation of a collection of resources that are requested to be consumed by a task for execution.

    This follows the K8s definition of resources.

    Attributes:
        cpu_request: The number of CPUs to request, either as a fraction (millicpu), whole number, or a string.
        cpu_limit: The limit of CPUs to request, either as a fraction (millicpu), whole number, or a string.
        memory_request: The amount of memory to request.
        memory_limit: The memory limit of the pod.
        ephemeral_request: The amount of ephemeral storage to request.
        ephemeral_limit: The ephemeral storage limit of the pod.
        gpus: The number of GPUs to request.
        gpu_flag: The GPU flag to use for identifying how many GPUs to mount to a pod. This is dependent on the cloud provider.
        custom_resources: Any custom resources to request. This is dependent on the cloud provider.

    Notes:
        Most of the fields that support a union of `int` and `str` accept either a number for the resource,
        such as 1 CPU or 2 GPUs, a `str` representation of that numerical resource, such as '1' CPU or '2' GPUs,
        or a *to-be-computed* value such as `{{inputs.parameters.cpu_request}}`. This means tasks, steps, etc.,
        can be stitched together so that one task/step *computes* the resource requirements and then `outputs`
        them to the next step/task.
    """

    cpu_request: Optional[Union[float, int, str]] = None
    cpu_limit: Optional[Union[float, int, str]] = None
    memory_request: Optional[str] = None
    memory_limit: Optional[str] = None
    ephemeral_request: Optional[str] = None
    ephemeral_limit: Optional[str] = None
    gpus: Optional[Union[int, str]] = None
    gpu_flag: Optional[str] = "nvidia.com/gpu"
    custom_resources: Optional[Dict] = None

    @root_validator(pre=True)
    def _check_specs(cls, values):
        cpu_request: Optional[Union[float, int, str]] = values.get("cpu_request")
        cpu_limit: Optional[Union[float, int, str]] = values.get("cpu_limit")
        memory_request: Optional[str] = values.get("memory_request")
        memory_limit: Optional[str] = values.get("memory_limit")
        ephemeral_request: Optional[str] = values.get("ephemeral_request")
        ephemeral_limit: Optional[str] = values.get("ephemeral_limit")

        if memory_request is not None:
            validate_storage_units(memory_request)
        if memory_limit is not None:
            validate_storage_units(memory_limit)

        if ephemeral_request is not None:
            validate_storage_units(ephemeral_request)
        if ephemeral_limit:
            validate_storage_units(ephemeral_limit)

        # TODO: add validation for CPU units if str
        if cpu_limit is not None and isinstance(cpu_limit, int):
            assert cpu_limit >= 0, "CPU limit must be positive"
        if cpu_request is not None and isinstance(cpu_request, int):
            assert cpu_request >= 0, "CPU request must be positive"
            if cpu_limit is not None and isinstance(cpu_limit, int):
                assert cpu_request <= cpu_limit, "CPU request must be smaller or equal to limit"

        return values

    def build(self) -> _ModelResourceRequirements:
        """Builds the resource requirements of the pod."""
        resources: Dict = dict()

        if self.cpu_limit is not None:
            resources = _merge_dicts(resources, dict(limits=dict(cpu=str(self.cpu_limit))))

        if self.cpu_request is not None:
            resources = _merge_dicts(resources, dict(requests=dict(cpu=str(self.cpu_request))))

        if self.memory_limit is not None:
            resources = _merge_dicts(resources, dict(limits=dict(memory=self.memory_limit)))

        if self.memory_request is not None:
            resources = _merge_dicts(resources, dict(requests=dict(memory=self.memory_request)))

        if self.ephemeral_limit is not None:
            resources = _merge_dicts(resources, dict(limits={"ephemeral-storage": self.ephemeral_limit}))

        if self.ephemeral_request is not None:
            resources = _merge_dicts(resources, dict(requests={"ephemeral-storage": self.ephemeral_request}))

        if self.gpus is not None:
            resources = _merge_dicts(resources, dict(requests={self.gpu_flag: str(self.gpus)}))
            resources = _merge_dicts(resources, dict(limits={self.gpu_flag: str(self.gpus)}))

        if self.custom_resources:
            resources = _merge_dicts(resources, self.custom_resources)

        return _ModelResourceRequirements(**resources)
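The `build` method above accumulates the requirements by repeatedly deep-merging small `{limits: ...}` / `{requests: ...}` fragments. The merging logic can be sketched without Hera as follows (`merge_dicts` and `build_resources` are hypothetical stand-ins for the internal `_merge_dicts` helper and the model construction):

```python
def merge_dicts(a: dict, b: dict) -> dict:
    """Recursively merge b into a copy of a, combining nested dicts."""
    out = dict(a)
    for key, value in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = merge_dicts(out[key], value)
        else:
            out[key] = value
    return out


def build_resources(cpu_request=None, cpu_limit=None, memory_request=None,
                    memory_limit=None, gpus=None, gpu_flag="nvidia.com/gpu"):
    """Mimic Resources.build: fold each set field into the requirements dict.
    Note that GPUs are mirrored into both requests and limits."""
    resources: dict = {}
    if cpu_limit is not None:
        resources = merge_dicts(resources, {"limits": {"cpu": str(cpu_limit)}})
    if cpu_request is not None:
        resources = merge_dicts(resources, {"requests": {"cpu": str(cpu_request)}})
    if memory_limit is not None:
        resources = merge_dicts(resources, {"limits": {"memory": memory_limit}})
    if memory_request is not None:
        resources = merge_dicts(resources, {"requests": {"memory": memory_request}})
    if gpus is not None:
        resources = merge_dicts(resources, {"requests": {gpu_flag: str(gpus)}})
        resources = merge_dicts(resources, {"limits": {gpu_flag: str(gpus)}})
    return resources


print(build_resources(cpu_request=1, cpu_limit=2, memory_request="1Gi", gpus=1))
```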

cpu_limit class-attribute instance-attribute

cpu_limit = None

cpu_request class-attribute instance-attribute

cpu_request = None

custom_resources class-attribute instance-attribute

custom_resources = None

ephemeral_limit class-attribute instance-attribute

ephemeral_limit = None

ephemeral_request class-attribute instance-attribute

ephemeral_request = None

gpu_flag class-attribute instance-attribute

gpu_flag = 'nvidia.com/gpu'

gpus class-attribute instance-attribute

gpus = None

memory_limit class-attribute instance-attribute

memory_limit = None

memory_request class-attribute instance-attribute

memory_request = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Builds the resource requirements of the pod.

Source code in src/hera/workflows/resources.py
def build(self) -> _ModelResourceRequirements:
    """Builds the resource requirements of the pod."""
    resources: Dict = dict()

    if self.cpu_limit is not None:
        resources = _merge_dicts(resources, dict(limits=dict(cpu=str(self.cpu_limit))))

    if self.cpu_request is not None:
        resources = _merge_dicts(resources, dict(requests=dict(cpu=str(self.cpu_request))))

    if self.memory_limit is not None:
        resources = _merge_dicts(resources, dict(limits=dict(memory=self.memory_limit)))

    if self.memory_request is not None:
        resources = _merge_dicts(resources, dict(requests=dict(memory=self.memory_request)))

    if self.ephemeral_limit is not None:
        resources = _merge_dicts(resources, dict(limits={"ephemeral-storage": self.ephemeral_limit}))

    if self.ephemeral_request is not None:
        resources = _merge_dicts(resources, dict(requests={"ephemeral-storage": self.ephemeral_request}))

    if self.gpus is not None:
        resources = _merge_dicts(resources, dict(requests={self.gpu_flag: str(self.gpus)}))
        resources = _merge_dicts(resources, dict(limits={self.gpu_flag: str(self.gpus)}))

    if self.custom_resources:
        resources = _merge_dicts(resources, self.custom_resources)

    return _ModelResourceRequirements(**resources)
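
The merging behavior in `build` can be sketched with a plain recursive dict merge. `merge_dicts` below is a hypothetical stand-in for Hera's internal `_merge_dicts`, not the real helper:

```python
def merge_dicts(a: dict, b: dict) -> dict:
    """Recursively merge b into a copy of a, combining nested dicts."""
    out = dict(a)
    for key, value in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = merge_dicts(out[key], value)
        else:
            out[key] = value
    return out

resources: dict = {}
resources = merge_dicts(resources, {"requests": {"cpu": "1"}})
resources = merge_dicts(resources, {"limits": {"memory": "4Gi"}})
# GPUs are applied to both requests and limits so the scheduler pins the count.
resources = merge_dicts(resources, {"requests": {"nvidia.com/gpu": "1"}})
resources = merge_dicts(resources, {"limits": {"nvidia.com/gpu": "1"}})
print(resources)
```

Because each field is merged independently, `requests` and `limits` accumulate keys from every populated attribute rather than overwriting each other.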

RetryPolicy

An enum that holds options for retry policy.

Source code in src/hera/workflows/retry_strategy.py
class RetryPolicy(Enum):
    """An enum that holds options for retry policy."""

    always = "Always"
    """Retry all failed steps"""

    on_failure = "OnFailure"
    """Retry steps whose main container is marked as failed in Kubernetes"""

    on_error = "OnError"
    """Retry steps that encounter Argo controller errors, or whose init or wait containers fail"""

    on_transient_error = "OnTransientError"
    """Retry steps that encounter errors defined as transient, or errors matching the `TRANSIENT_ERROR_PATTERN`
    environment variable.
    Available in version 3.0 and later.
    """

    def __str__(self) -> str:
        """Assembles the `value` representation of the enum as a string."""
        return str(self.value)

always class-attribute instance-attribute

always = 'Always'

Retry all failed steps

on_error class-attribute instance-attribute

on_error = 'OnError'

Retry steps that encounter Argo controller errors, or whose init or wait containers fail

on_failure class-attribute instance-attribute

on_failure = 'OnFailure'

Retry steps whose main container is marked as failed in Kubernetes

on_transient_error class-attribute instance-attribute

on_transient_error = 'OnTransientError'

Retry steps that encounter errors defined as transient, or errors matching the TRANSIENT_ERROR_PATTERN environment variable. Available in version 3.0 and later.
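
The `__str__` override matters when the enum is interpolated into a spec: it yields the Argo-facing value rather than the Python member name. A minimal mirror of the enum (not the Hera class itself) demonstrates this:

```python
from enum import Enum

class RetryPolicy(Enum):
    """Minimal mirror of the RetryPolicy enum shown above."""

    always = "Always"
    on_failure = "OnFailure"

    def __str__(self) -> str:
        # Interpolation such as f"{policy}" produces the Argo value.
        return str(self.value)

print(str(RetryPolicy.on_failure))
```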

RetryStrategy

RetryStrategy configures how an Argo job should retry.

Source code in src/hera/workflows/retry_strategy.py
class RetryStrategy(_BaseModel):
    """`RetryStrategy` configures how an Argo job should retry."""

    affinity: Optional[RetryAffinity] = None
    """affinity dictates the affinity of the retried jobs"""

    backoff: Optional[Backoff] = None
    """backoff dictates how long should a job wait for before retrying"""

    expression: Optional[str] = None
    """the expression field supports the expression of complex rules regarding retry behavior"""

    limit: Optional[Union[str, int, IntOrString]] = None
    """the hard numeric limit of how many times a jobs should retry"""

    retry_policy: Optional[Union[str, RetryPolicy]] = None
    """the policy dictates, at a high level, under what conditions should a job retry"""

    @validator("retry_policy", pre=True)
    def _convert_retry_policy(cls, v):
        """Converts the `retry_policy` field into a pure `str` from either `str` already or an enum."""
        if v is None or isinstance(v, str):
            return v

        v = cast(RetryPolicy, v)
        return v.value

    @validator("limit", pre=True)
    def _convert_limit(cls, v):
        """Converts the `limit` field from the union specification into a `str`."""
        if v is None or isinstance(v, IntOrString):
            return v

        return str(v)  # int or str

    def build(self) -> _ModelRetryStrategy:
        """Builds the generated `RetryStrategy` representation of the retry strategy."""
        return _ModelRetryStrategy(
            affinity=self.affinity,
            backoff=self.backoff,
            expression=self.expression,
            limit=self.limit,
            retry_policy=self.retry_policy,
        )

affinity class-attribute instance-attribute

affinity = None

affinity dictates the affinity of the retried jobs

backoff class-attribute instance-attribute

backoff = None

backoff dictates how long a job should wait before retrying

expression class-attribute instance-attribute

expression = None

the expression field supports the expression of complex rules regarding retry behavior

limit class-attribute instance-attribute

limit = None

the hard numeric limit of how many times a job should retry

retry_policy class-attribute instance-attribute

retry_policy = None

the policy dictates, at a high level, under what conditions a job should retry
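
The two validators normalize user input before `build` runs: an enum `retry_policy` collapses to its string value, and an `int` or `str` `limit` becomes a string. Sketched below as plain functions, an assumption since the real ones are pydantic validators on the model:

```python
from enum import Enum

class RetryPolicy(Enum):
    always = "Always"

def convert_retry_policy(v):
    """Pass through None and str; unwrap an enum to its value."""
    if v is None or isinstance(v, str):
        return v
    return v.value

def convert_limit(v):
    """Pass through None; stringify int or str limits."""
    if v is None:
        return v
    return str(v)

print(convert_retry_policy(RetryPolicy.always), convert_limit(3))
```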

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Builds the generated RetryStrategy representation of the retry strategy.

Source code in src/hera/workflows/retry_strategy.py
def build(self) -> _ModelRetryStrategy:
    """Builds the generated `RetryStrategy` representation of the retry strategy."""
    return _ModelRetryStrategy(
        affinity=self.affinity,
        backoff=self.backoff,
        expression=self.expression,
        limit=self.limit,
        retry_policy=self.retry_policy,
    )

RunnerScriptConstructor

RunnerScriptConstructor is a script constructor that runs a script in a container.

The runner script, also known as “The Hera runner”, takes a script/Python function definition, infers the path to the function (module import), assembles a path to invoke the function, and passes any specified parameters to the function. This helps users “save” on the source space required for submitting a function for remote execution on Argo. Execution within the container requires the executing container to include the file that contains the submitted script. More specifically, the container must be created in some process (e.g. CI), so that it contains the script to run remotely.

Source code in src/hera/workflows/script.py
class RunnerScriptConstructor(ScriptConstructor, ExperimentalMixin):
    """`RunnerScriptConstructor` is a script constructor that runs a script in a container.

    The runner script, also known as "The Hera runner", takes a script/Python function definition, infers the path
    to the function (module import), assembles a path to invoke the function, and passes any specified parameters
    to the function. This helps users "save" on the `source` space required for submitting a function for remote
    execution on Argo. Execution within the container *requires* the executing container to include the file that
    contains the submitted script. More specifically, the container must be created in some process (e.g. CI), so that
    it contains the script to run remotely.
    """

    _flag: str = "script_runner"

    outputs_directory: Optional[str] = None
    """Used for saving outputs when defined using annotations."""

    volume_for_outputs: Optional[_BaseVolume] = None
    """Volume to use if saving outputs when defined using annotations."""

    DEFAULT_HERA_OUTPUTS_DIRECTORY: str = "/tmp/hera/outputs"
    """Used as the default value for when the outputs_directory is not set"""

    def transform_values(self, cls: Type[Script], values: Any) -> Any:
        """A function that can inspect the Script instance and generate the source field."""
        if not callable(values.get("source")):
            return values

        if values.get("args") is not None:
            raise ValueError("Cannot specify args when callable is True")
        values["args"] = [
            "-m",
            "hera.workflows.runner",
            "-e",
            f'{values["source"].__module__}:{values["source"].__name__}',
        ]

        return values

    def generate_source(self, instance: Script) -> str:
        """A function that can inspect the Script instance and generate the source field."""
        return f"{g.inputs.parameters:$}"

    def transform_script_template_post_build(
        self, instance: "Script", script: _ModelScriptTemplate
    ) -> _ModelScriptTemplate:
        """A hook to transform the generated script template."""
        if global_config.experimental_features["script_annotations"]:
            if not script.env:
                script.env = []
            script.env.append(EnvVar(name="hera__script_annotations", value=""))
            if self.outputs_directory:
                script.env.append(EnvVar(name="hera__outputs_directory", value=self.outputs_directory))
        return script

DEFAULT_HERA_OUTPUTS_DIRECTORY class-attribute instance-attribute

DEFAULT_HERA_OUTPUTS_DIRECTORY = '/tmp/hera/outputs'

Used as the default value for when the outputs_directory is not set

outputs_directory class-attribute instance-attribute

outputs_directory = None

Used for saving outputs when defined using annotations.

volume_for_outputs class-attribute instance-attribute

volume_for_outputs = None

Volume to use if saving outputs when defined using annotations.

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

generate_source

generate_source(instance)

A function that can inspect the Script instance and generate the source field.

Source code in src/hera/workflows/script.py
def generate_source(self, instance: Script) -> str:
    """A function that can inspect the Script instance and generate the source field."""
    return f"{g.inputs.parameters:$}"

transform_script_template_post_build

transform_script_template_post_build(instance, script)

A hook to transform the generated script template.

Source code in src/hera/workflows/script.py
def transform_script_template_post_build(
    self, instance: "Script", script: _ModelScriptTemplate
) -> _ModelScriptTemplate:
    """A hook to transform the generated script template."""
    if global_config.experimental_features["script_annotations"]:
        if not script.env:
            script.env = []
        script.env.append(EnvVar(name="hera__script_annotations", value=""))
        if self.outputs_directory:
            script.env.append(EnvVar(name="hera__outputs_directory", value=self.outputs_directory))
    return script
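
The hook's effect can be sketched with stand-in dataclasses; `EnvVar` and `ScriptTemplate` here are minimal stand-ins for the real models, and the experimental-feature check is assumed to have passed:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EnvVar:
    name: str
    value: str

@dataclass
class ScriptTemplate:
    env: List[EnvVar] = field(default_factory=list)

def transform(script: ScriptTemplate, outputs_directory: Optional[str]) -> ScriptTemplate:
    """Append the annotations marker, plus the outputs directory if set."""
    script.env.append(EnvVar("hera__script_annotations", ""))
    if outputs_directory:
        script.env.append(EnvVar("hera__outputs_directory", outputs_directory))
    return script

t = transform(ScriptTemplate(), "/tmp/hera/outputs")
print([e.name for e in t.env])
```

The runner inside the container reads these `hera__*` environment variables to decide how to process annotated inputs and where to write outputs.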

transform_template_post_build

transform_template_post_build(instance, template)

A hook to transform the generated template.

Source code in src/hera/workflows/script.py
def transform_template_post_build(self, instance: "Script", template: _ModelTemplate) -> _ModelTemplate:
    """A hook to transform the generated template."""
    return template

transform_values

transform_values(cls, values)

A function that can inspect the Script instance and generate the source field.

Source code in src/hera/workflows/script.py
def transform_values(self, cls: Type[Script], values: Any) -> Any:
    """A function that can inspect the Script instance and generate the source field."""
    if not callable(values.get("source")):
        return values

    if values.get("args") is not None:
        raise ValueError("Cannot specify args when callable is True")
    values["args"] = [
        "-m",
        "hera.workflows.runner",
        "-e",
        f'{values["source"].__module__}:{values["source"].__name__}',
    ]

    return values
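
The resulting container invocation can be previewed with a hypothetical user function; `report` below is illustrative only, standing in for whatever callable is passed as `source`:

```python
def report():
    """A hypothetical user function submitted as the script source."""
    return "done"

# Mirrors the args transform_values assembles: run the Hera runner module
# and point it at the function via its "module:name" import path.
args = [
    "-m",
    "hera.workflows.runner",
    "-e",
    f"{report.__module__}:{report.__name__}",
]
print(args)
```

At runtime the container executes these args with its Python entrypoint, so the image must already contain the module that defines `report`.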

S3Artifact

An artifact sourced from AWS S3.

Source code in src/hera/workflows/artifact.py
class S3Artifact(_ModelS3Artifact, Artifact):
    """An artifact sourced from AWS S3."""

    def _build_artifact(self) -> _ModelArtifact:
        artifact = super()._build_artifact()
        artifact.s3 = _ModelS3Artifact(
            access_key_secret=self.access_key_secret,
            bucket=self.bucket,
            create_bucket_if_not_present=self.create_bucket_if_not_present,
            encryption_options=self.encryption_options,
            endpoint=self.endpoint,
            insecure=self.insecure,
            key=self.key,
            region=self.region,
            role_arn=self.role_arn,
            secret_key_secret=self.secret_key_secret,
            use_sdk_creds=self.use_sdk_creds,
        )
        return artifact

    @classmethod
    def _get_input_attributes(cls):
        """Return the attributes used for input artifact annotations."""
        return super()._get_input_attributes() + [
            "access_key_secret",
            "bucket",
            "create_bucket_if_not_present",
            "encryption_options",
            "endpoint",
            "insecure",
            "key",
            "region",
            "role_arn",
            "secret_key_secret",
            "use_sdk_creds",
        ]

access_key_secret class-attribute instance-attribute

access_key_secret = Field(default=None, alias='accessKeySecret', description="AccessKeySecret is the secret selector to the bucket's access key")

archive class-attribute instance-attribute

archive = None

artifact archiving configuration

archive_logs class-attribute instance-attribute

archive_logs = None

whether to log the archive object

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact garbage collection configuration

bucket class-attribute instance-attribute

bucket = Field(default=None, description='Bucket is the name of the bucket')

create_bucket_if_not_present class-attribute instance-attribute

create_bucket_if_not_present = Field(default=None, alias='createBucketIfNotPresent', description="CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is.")

deleted class-attribute instance-attribute

deleted = None

whether the artifact is deleted

encryption_options class-attribute instance-attribute

encryption_options = Field(default=None, alias='encryptionOptions')

endpoint class-attribute instance-attribute

endpoint = Field(default=None, description='Endpoint is the hostname of the bucket endpoint')

from_ class-attribute instance-attribute

from_ = None

configures the artifact task/step origin

from_expression class-attribute instance-attribute

from_expression = None

an expression that dictates where to obtain the artifact from

global_name class-attribute instance-attribute

global_name = None

global workflow artifact name

insecure class-attribute instance-attribute

insecure = Field(default=None, description='Insecure will connect to the service without TLS')

key class-attribute instance-attribute

key = Field(default=None, description='Key is the key in the bucket where the artifact resides')

loader class-attribute instance-attribute

loader = None

used in Artifact annotations for determining how to load the data

mode class-attribute instance-attribute

mode = None

mode bits to use on the artifact, must be a value between 0 and 0777 set when loading input artifacts.

name instance-attribute

name

name of the artifact

output class-attribute instance-attribute

output = False

used to specify artifact as an output in function signature annotations

path class-attribute instance-attribute

path = None

path where the artifact should be placed/loaded from

recurse_mode class-attribute instance-attribute

recurse_mode = None

recursion mode when applying the permissions of the artifact if it is an artifact folder

region class-attribute instance-attribute

region = Field(default=None, description='Region contains the optional bucket region')

role_arn class-attribute instance-attribute

role_arn = Field(default=None, alias='roleARN', description='RoleARN is the Amazon Resource Name (ARN) of the role to assume.')

secret_key_secret class-attribute instance-attribute

secret_key_secret = Field(default=None, alias='secretKeySecret', description="SecretKeySecret is the secret selector to the bucket's secret key")

sub_path class-attribute instance-attribute

sub_path = None

allows the specification of an artifact from a subpath within the main source.

use_sdk_creds class-attribute instance-attribute

use_sdk_creds = Field(default=None, alias='useSDKCreds', description='UseSDKCreds tells the driver to figure out credentials based on sdk defaults.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

as_name

as_name(name)

DEPRECATED, use with_name.

Returns a ‘built’ copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def as_name(self, name: str) -> _ModelArtifact:
    """DEPRECATED, use with_name.

    Returns a 'built' copy of the current artifact, renamed using the specified `name`.
    """
    logger.warning("'as_name' is deprecated, use 'with_name'")
    artifact = self._build_artifact()
    artifact.name = name
    return artifact

with_name

with_name(name)

Returns a copy of the current artifact, renamed using the specified name.

Source code in src/hera/workflows/artifact.py
def with_name(self, name: str) -> Artifact:
    """Returns a copy of the current artifact, renamed using the specified `name`."""
    artifact = self.copy(deep=True)
    artifact.name = name
    return artifact
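
A minimal sketch of the copy semantics, using a plain dataclass in place of the pydantic model: the original artifact is left untouched while the renamed copy keeps every other field.

```python
import copy
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    path: str

    def with_name(self, name: str) -> "Artifact":
        renamed = copy.deepcopy(self)  # mirrors pydantic's .copy(deep=True)
        renamed.name = name
        return renamed

a = Artifact(name="data", path="/tmp/data")
b = a.with_name("renamed")
print(a.name, b.name)
```

This is why `with_name` is preferred over the deprecated `as_name`: it returns another `Artifact` rather than a built model, so the copy can keep being configured.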

ScaleIOVolume

ScaleIOVolume represents a ScaleIO volume to mount to the container.

Source code in src/hera/workflows/volume.py
class ScaleIOVolume(_BaseVolume, _ModelScaleIOVolumeSource):
    """`ScaleIOVolume` represents a ScaleIO volume to mount to the container."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            scale_io=_ModelScaleIOVolumeSource(
                fs_type=self.fs_type,
                gateway=self.gateway,
                protection_domain=self.protection_domain,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
                ssl_enabled=self.ssl_enabled,
                storage_mode=self.storage_mode,
                storage_pool=self.storage_pool,
                system=self.system,
                volume_name=self.volume_name,
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs".')

gateway class-attribute instance-attribute

gateway = Field(..., description='The host address of the ScaleIO API Gateway.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

protection_domain class-attribute instance-attribute

protection_domain = Field(default=None, alias='protectionDomain', description='The name of the ScaleIO Protection Domain for the configured storage.')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(..., alias='secretRef', description='SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.')

ssl_enabled class-attribute instance-attribute

ssl_enabled = Field(default=None, alias='sslEnabled', description='Flag to enable/disable SSL communication with Gateway, default false')

storage_mode class-attribute instance-attribute

storage_mode = Field(default=None, alias='storageMode', description='Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.')

storage_pool class-attribute instance-attribute

storage_pool = Field(default=None, alias='storagePool', description='The ScaleIO Storage Pool associated with the protection domain.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

system class-attribute instance-attribute

system = Field(..., description='The name of the storage system as configured in ScaleIO.')

volume_name class-attribute instance-attribute

volume_name = Field(default=None, alias='volumeName', description='The name of a volume already created in the ScaleIO system that is associated with this volume source.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Script

A Script acts as a wrapper around a container.

In Hera this defaults to a “python:3.8” image specified by global_config.image, which runs a python source specified by Script.source.

Source code in src/hera/workflows/script.py
class Script(
    EnvIOMixin,
    CallableTemplateMixin,
    ContainerMixin,
    TemplateMixin,
    ResourceMixin,
    VolumeMountMixin,
):
    """A Script acts as a wrapper around a container.

    In Hera this defaults to a "python:3.8" image specified by global_config.image, which runs a python source
    specified by `Script.source`.
    """

    container_name: Optional[str] = None
    args: Optional[List[str]] = None
    command: Optional[List[str]] = None
    lifecycle: Optional[Lifecycle] = None
    security_context: Optional[SecurityContext] = None
    source: Optional[Union[Callable, str]] = None
    working_dir: Optional[str] = None
    add_cwd_to_sys_path: Optional[bool] = None
    constructor: Optional[Union[str, ScriptConstructor]] = None

    @validator("constructor", always=True)
    @classmethod
    def _set_constructor(cls, v):
        if v is None:
            # TODO: In the future we can insert
            # detection code here to determine
            # the best constructor to use.
            v = InlineScriptConstructor()
        if isinstance(v, ScriptConstructor):
            return v
        assert isinstance(v, str)
        if v.lower() == "inline":
            return InlineScriptConstructor()
        elif v.lower() == "runner":
            return RunnerScriptConstructor()
        raise ValueError(f"Unknown constructor {v}")

    @validator("command", always=True)
    @classmethod
    def _set_command(cls, v):
        return v or global_config.script_command

    @validator("add_cwd_to_sys_path", always=True)
    @classmethod
    def _set_add_cwd_to_sys_path(cls, v):
        if v is None:
            return True

    @root_validator
    @classmethod
    def _constructor_validate(cls, values):
        constructor = values.get("constructor")
        assert isinstance(constructor, ScriptConstructor)
        return constructor.transform_values(cls, values)

    def _build_template(self) -> _ModelTemplate:
        assert isinstance(self.constructor, ScriptConstructor)

        return self.constructor.transform_template_post_build(
            self,
            _ModelTemplate(
                active_deadline_seconds=self.active_deadline_seconds,
                affinity=self.affinity,
                archive_location=self.archive_location,
                automount_service_account_token=self.automount_service_account_token,
                daemon=self.daemon,
                executor=self.executor,
                fail_fast=self.fail_fast,
                host_aliases=self.host_aliases,
                init_containers=self.init_containers,
                inputs=self._build_inputs(),
                memoize=self.memoize,
                metadata=self._build_metadata(),
                metrics=self._build_metrics(),
                name=self.name,
                node_selector=self.node_selector,
                outputs=self._build_outputs(),
                parallelism=self.parallelism,
                plugin=self.plugin,
                pod_spec_patch=self.pod_spec_patch,
                priority=self.priority,
                priority_class_name=self.priority_class_name,
                retry_strategy=self.retry_strategy,
                scheduler_name=self.scheduler_name,
                script=self._build_script(),
                security_context=self.pod_security_context,
                service_account_name=self.service_account_name,
                sidecars=self._build_sidecars(),
                synchronization=self.synchronization,
                timeout=self.timeout,
                tolerations=self.tolerations,
                volumes=self._build_volumes(),
            ),
        )

    def _build_script(self) -> _ModelScriptTemplate:
        assert isinstance(self.constructor, ScriptConstructor)
        image_pull_policy = self._build_image_pull_policy()
        if _output_annotations_used(cast(Callable, self.source)) and isinstance(
            self.constructor, RunnerScriptConstructor
        ):
            if not self.constructor.outputs_directory:
                self.constructor.outputs_directory = self.constructor.DEFAULT_HERA_OUTPUTS_DIRECTORY
            if self.constructor.volume_for_outputs is not None:
                if self.constructor.volume_for_outputs.mount_path is None:
                    self.constructor.volume_for_outputs.mount_path = self.constructor.outputs_directory
                self._create_hera_outputs_volume(self.constructor.volume_for_outputs)

        return self.constructor.transform_script_template_post_build(
            self,
            _ModelScriptTemplate(
                args=self.args,
                command=self.command,
                env=self._build_env(),
                env_from=self._build_env_from(),
                image=self.image,
                # `image_pull_policy` in script wants a string not an `ImagePullPolicy` object
                image_pull_policy=None if image_pull_policy is None else image_pull_policy.value,
                lifecycle=self.lifecycle,
                liveness_probe=self.liveness_probe,
                name=self.container_name,
                ports=self.ports,
                readiness_probe=self.readiness_probe,
                resources=self._build_resources(),
                security_context=self.security_context,
                source=self.constructor.generate_source(self),
                startup_probe=self.startup_probe,
                stdin=self.stdin,
                stdin_once=self.stdin_once,
                termination_message_path=self.termination_message_path,
                termination_message_policy=self.termination_message_policy,
                tty=self.tty,
                volume_devices=self.volume_devices,
                volume_mounts=self._build_volume_mounts(),
                working_dir=self.working_dir,
            ),
        )

    def _build_inputs(self) -> Optional[ModelInputs]:
        inputs = super()._build_inputs()
        func_parameters: List[Parameter] = []
        func_artifacts: List[Artifact] = []
        if callable(self.source):
            if global_config.experimental_features["script_annotations"]:
                func_parameters, func_artifacts = _get_inputs_from_callable(self.source)
            else:
                func_parameters = _get_parameters_from_callable(self.source)

        return cast(Optional[ModelInputs], self._aggregate_callable_io(inputs, func_parameters, func_artifacts, False))

    def _build_outputs(self) -> Optional[ModelOutputs]:
        outputs = super()._build_outputs()

        if not callable(self.source):
            return outputs

        if not global_config.experimental_features["script_annotations"]:
            return outputs

        outputs_directory = None
        if isinstance(self.constructor, RunnerScriptConstructor):
            outputs_directory = self.constructor.outputs_directory or self.constructor.DEFAULT_HERA_OUTPUTS_DIRECTORY

        out_parameters, out_artifacts = _get_outputs_from_return_annotation(self.source, outputs_directory)
        func_parameters, func_artifacts = _get_outputs_from_parameter_annotations(self.source, outputs_directory)
        func_parameters.extend(out_parameters)
        func_artifacts.extend(out_artifacts)

        return cast(
            Optional[ModelOutputs], self._aggregate_callable_io(outputs, func_parameters, func_artifacts, True)
        )

    def _aggregate_callable_io(
        self,
        current_io: Optional[Union[ModelInputs, ModelOutputs]],
        func_parameters: List[Parameter],
        func_artifacts: List[Artifact],
        output: bool,
    ) -> Union[ModelOutputs, ModelInputs, None]:
        """Aggregate the Inputs/Outputs with parameters and artifacts extracted from a callable."""
        if not func_parameters and not func_artifacts:
            return current_io
        if current_io is None:
            if output:
                return ModelOutputs(
                    parameters=[p.as_output() for p in func_parameters] or None,
                    artifacts=[a._build_artifact() for a in func_artifacts] or None,
                )

            return ModelInputs(
                parameters=[p.as_input() for p in func_parameters] or None,
                artifacts=[a._build_artifact() for a in func_artifacts] or None,
            )

        seen_params = {p.name for p in current_io.parameters or []}
        seen_artifacts = {a.name for a in current_io.artifacts or []}

        for param in func_parameters:
            if param.name not in seen_params and param.name not in seen_artifacts:
                if current_io.parameters is None:
                    current_io.parameters = []
                if output:
                    current_io.parameters.append(param.as_output())
                else:
                    current_io.parameters.append(param.as_input())

        for artifact in func_artifacts:
            if artifact.name not in seen_artifacts:
                if current_io.artifacts is None:
                    current_io.artifacts = []
                current_io.artifacts.append(artifact._build_artifact())

        return current_io

    def _create_hera_outputs_volume(self, volume: _BaseVolume) -> None:
        """Add given volume to the script template for the automatic saving of the hera outputs."""
        assert isinstance(self.constructor, RunnerScriptConstructor)

        if not isinstance(self.volumes, list) and self.volumes is not None:
            self.volumes = [self.volumes]
        elif self.volumes is None:
            self.volumes = []

        if volume not in self.volumes:
            self.volumes.append(volume)
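The deduplicating merge performed by `_aggregate_callable_io` above can be sketched in plain Python. This is an illustrative stand-in (the function name and the dict-of-lists shape are assumptions, not Hera's API): names inferred from the callable are appended only when no declared input/output already uses them, and a declared artifact name also blocks an inferred parameter of the same name.

```python
from typing import Dict, List, Optional

def merge_callable_io(
    current: Optional[Dict[str, List[str]]],
    func_parameters: List[str],
    func_artifacts: List[str],
) -> Optional[Dict[str, List[str]]]:
    """Sketch of the merge rule: declared IO wins, inferred names are appended."""
    # Nothing inferred from the callable: keep the declared IO untouched.
    if not func_parameters and not func_artifacts:
        return current
    # No declared IO: the inferred names become the IO wholesale.
    if current is None:
        return {"parameters": list(func_parameters), "artifacts": list(func_artifacts)}
    seen_params = set(current["parameters"])
    seen_artifacts = set(current["artifacts"])
    # A parameter is added only if neither a declared parameter nor a
    # declared artifact already has its name.
    for p in func_parameters:
        if p not in seen_params and p not in seen_artifacts:
            current["parameters"].append(p)
    # An artifact is added only if no declared artifact has its name.
    for a in func_artifacts:
        if a not in seen_artifacts:
            current["artifacts"].append(a)
    return current
```
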

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

add_cwd_to_sys_path class-attribute instance-attribute

add_cwd_to_sys_path = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

args class-attribute instance-attribute

args = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

command class-attribute instance-attribute

command = None

constructor class-attribute instance-attribute

constructor = None

container_name class-attribute instance-attribute

container_name = None

daemon class-attribute instance-attribute

daemon = None

env class-attribute instance-attribute

env = None

env_from class-attribute instance-attribute

env_from = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

image class-attribute instance-attribute

image = None

image_pull_policy class-attribute instance-attribute

image_pull_policy = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

lifecycle class-attribute instance-attribute

lifecycle = None

liveness_probe class-attribute instance-attribute

liveness_probe = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

ports class-attribute instance-attribute

ports = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

readiness_probe class-attribute instance-attribute

readiness_probe = None

resources class-attribute instance-attribute

resources = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

source class-attribute instance-attribute

source = None

startup_probe class-attribute instance-attribute

startup_probe = None

stdin class-attribute instance-attribute

stdin = None

stdin_once class-attribute instance-attribute

stdin_once = None

synchronization class-attribute instance-attribute

synchronization = None

termination_message_path class-attribute instance-attribute

termination_message_path = None

termination_message_policy class-attribute instance-attribute

termination_message_policy = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

tty class-attribute instance-attribute

tty = None

volume_devices class-attribute instance-attribute

volume_devices = None

volume_mounts class-attribute instance-attribute

volume_mounts = None

volumes class-attribute instance-attribute

volumes = None

working_dir class-attribute instance-attribute

working_dir = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ScriptConstructor

A ScriptConstructor is responsible for generating the source code for a Script given a python callable.

This allows users to customize the behaviour of the template that hera generates when a python callable is passed to the Script class.

In order to use your custom ScriptConstructor implementation, you can set it as the Script.constructor field.

Source code in src/hera/workflows/script.py
class ScriptConstructor(BaseMixin):
    """A ScriptConstructor is responsible for generating the source code for a Script given a python callable.

    This allows users to customize the behaviour of the template that hera generates when a python callable is
    passed to the Script class.

    In order to use your custom ScriptConstructor implementation, you can set it as the Script.constructor field.
    """

    @abstractmethod
    def generate_source(self, instance: "Script") -> str:
        """A function that can inspect the Script instance and generate the source field."""
        raise NotImplementedError

    def transform_values(self, cls: Type["Script"], values: Any) -> Any:
        """A function that will be invoked by the root validator of the Script class."""
        return values

    def transform_script_template_post_build(
        self, instance: "Script", script: _ModelScriptTemplate
    ) -> _ModelScriptTemplate:
        """A hook to transform the generated script template."""
        return script

    def transform_template_post_build(self, instance: "Script", template: _ModelTemplate) -> _ModelTemplate:
        """A hook to transform the generated template."""
        return template
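The hook pattern above — one required `generate_source` plus optional no-op transform hooks — can be illustrated without Hera at all. The classes below are a dependency-free sketch of that pattern (neither `Constructor` nor `ShebangConstructor` is part of Hera; a real implementation would subclass `ScriptConstructor` and receive `Script` instances):

```python
from abc import ABC, abstractmethod

class Constructor(ABC):
    """Minimal stand-in for ScriptConstructor's hook interface."""

    @abstractmethod
    def generate_source(self, instance: dict) -> str:
        """Required: produce the script source from the instance."""
        raise NotImplementedError

    def transform_script_template_post_build(self, instance: dict, script: dict) -> dict:
        # Default hook is a no-op, as in ScriptConstructor.
        return script

class ShebangConstructor(Constructor):
    """Illustrative subclass: prepends a shebang and tags the built template."""

    def generate_source(self, instance: dict) -> str:
        return "#!/usr/bin/env python3\n" + instance["body"]

    def transform_script_template_post_build(self, instance: dict, script: dict) -> dict:
        script["annotations"] = {"constructor": "shebang"}
        return script
```

With Hera itself, the analogous subclass would be assigned to the `Script.constructor` field, as the description above notes.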

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

generate_source abstractmethod

generate_source(instance)

A function that can inspect the Script instance and generate the source field.

Source code in src/hera/workflows/script.py
@abstractmethod
def generate_source(self, instance: "Script") -> str:
    """A function that can inspect the Script instance and generate the source field."""
    raise NotImplementedError

transform_script_template_post_build

transform_script_template_post_build(instance, script)

A hook to transform the generated script template.

Source code in src/hera/workflows/script.py
def transform_script_template_post_build(
    self, instance: "Script", script: _ModelScriptTemplate
) -> _ModelScriptTemplate:
    """A hook to transform the generated script template."""
    return script

transform_template_post_build

transform_template_post_build(instance, template)

A hook to transform the generated template.

Source code in src/hera/workflows/script.py
def transform_template_post_build(self, instance: "Script", template: _ModelTemplate) -> _ModelTemplate:
    """A hook to transform the generated template."""
    return template

transform_values

transform_values(cls, values)

A function that will be invoked by the root validator of the Script class.

Source code in src/hera/workflows/script.py
def transform_values(self, cls: Type["Script"], values: Any) -> Any:
    """A function that will be invoked by the root validator of the Script class."""
    return values

SecretEnv

SecretEnv is an environment variable whose value originates from a Kubernetes secret.

Source code in src/hera/workflows/env.py
class SecretEnv(_BaseEnv):
    """`SecretEnv` is an environment variable whose value originates from a Kubernetes secret."""

    secret_name: Optional[str] = None
    """the name of the Kubernetes secret to extract the value from"""

    secret_key: str
    """the field key within the secret that points to the value to extract and set as an env variable"""

    optional: Optional[bool] = None
    """whether the existence of the secret is optional"""

    def build(self) -> _ModelEnvVar:
        """Constructs and returns the Argo environment specification."""
        return _ModelEnvVar(
            name=self.name,
            value_from=_ModelEnvVarSource(
                secret_key_ref=_ModelSecretKeySelector(
                    name=self.secret_name, key=self.secret_key, optional=self.optional
                )
            ),
        )
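`build()` nests the secret selector under `valueFrom.secretKeyRef`, per the Kubernetes `EnvVar` schema. A plain-dict sketch of the spec shape it produces (the helper function is illustrative, not Hera's API):

```python
from typing import Optional

def secret_env_spec(
    name: str, secret_name: Optional[str], secret_key: str, optional: Optional[bool] = None
) -> dict:
    """Dict sketch of the EnvVar spec that SecretEnv.build() assembles."""
    return {
        "name": name,  # the env var name, always required
        "valueFrom": {
            "secretKeyRef": {
                "name": secret_name,   # which Kubernetes secret
                "key": secret_key,     # which field within the secret
                "optional": optional,  # whether the secret may be absent
            }
        },
    }
```
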

name instance-attribute

name

the name of the environment variable. This is universally required irrespective of the type of env variable

optional class-attribute instance-attribute

optional = None

whether the existence of the secret is optional

secret_key instance-attribute

secret_key

the field key within the secret that points to the value to extract and set as an env variable

secret_name class-attribute instance-attribute

secret_name = None

the name of the Kubernetes secret to extract the value from

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo environment specification.

Source code in src/hera/workflows/env.py
def build(self) -> _ModelEnvVar:
    """Constructs and returns the Argo environment specification."""
    return _ModelEnvVar(
        name=self.name,
        value_from=_ModelEnvVarSource(
            secret_key_ref=_ModelSecretKeySelector(
                name=self.secret_name, key=self.secret_key, optional=self.optional
            )
        ),
    )

SecretEnvFrom

Exposes a K8s secret as an environment variable.

Source code in src/hera/workflows/env_from.py
class SecretEnvFrom(_BaseEnvFrom, _ModelSecretEnvSource):
    """Exposes a K8s secret as an environment variable."""

    def build(self) -> _ModelEnvFromSource:
        """Constructs and returns the Argo EnvFrom specification."""
        return _ModelEnvFromSource(
            prefix=self.prefix,
            secret_ref=_ModelSecretEnvSource(
                name=self.name,
                optional=self.optional,
            ),
        )
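Unlike `SecretEnv`, which maps one secret key to one variable, `SecretEnvFrom` injects every key of the secret as an environment variable, optionally prefixed. A plain-dict sketch of the `EnvFromSource` shape its `build()` produces (the helper is illustrative, not Hera's API):

```python
from typing import Optional

def secret_env_from_spec(name: Optional[str], prefix: Optional[str] = None,
                         optional: Optional[bool] = None) -> dict:
    """Dict sketch of the EnvFromSource spec that SecretEnvFrom.build() assembles."""
    return {
        "prefix": prefix,  # prepended to every injected variable name
        "secretRef": {
            "name": name,          # the secret whose keys are injected
            "optional": optional,  # whether the secret may be absent
        },
    }
```
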

name class-attribute instance-attribute

name = Field(default=None, description='Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names')

optional class-attribute instance-attribute

optional = Field(default=None, description='Specify whether the Secret must be defined')

prefix class-attribute instance-attribute

prefix = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Constructs and returns the Argo EnvFrom specification.

Source code in src/hera/workflows/env_from.py
def build(self) -> _ModelEnvFromSource:
    """Constructs and returns the Argo EnvFrom specification."""
    return _ModelEnvFromSource(
        prefix=self.prefix,
        secret_ref=_ModelSecretEnvSource(
            name=self.name,
            optional=self.optional,
        ),
    )

SecretVolume

SecretVolume supports mounting a K8s secret as a container volume.

Source code in src/hera/workflows/volume.py
class SecretVolume(_BaseVolume, _ModelSecretVolumeSource):
    """`SecretVolume` supports mounting a K8s secret as a container volume."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            secret=_ModelSecretVolumeSource(
                default_mode=self.default_mode, items=self.items, optional=self.optional, secret_name=self.secret_name
            ),
        )
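`_build_volume` pairs the pod-level volume name with a `secret` source block, per the Kubernetes `SecretVolumeSource` schema. A plain-dict sketch of that shape (the helper is illustrative, not Hera's API; mounting is handled separately via the `mount_path` field):

```python
from typing import Optional

def secret_volume_spec(name: Optional[str], secret_name: Optional[str],
                       default_mode: Optional[int] = None,
                       optional: Optional[bool] = None) -> dict:
    """Dict sketch of the Volume spec that SecretVolume._build_volume() assembles."""
    return {
        "name": name,  # pod-level volume name, referenced by volumeMounts
        "secret": {
            "secretName": secret_name,    # which secret to project as files
            "defaultMode": default_mode,  # file permission bits, e.g. 0o400
            "optional": optional,
        },
    }
```
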

default_mode class-attribute instance-attribute

default_mode = Field(default=None, alias='defaultMode', description='Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.')

items class-attribute instance-attribute

items = Field(default=None, description="If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.")

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

optional class-attribute instance-attribute

optional = Field(default=None, description='Specify whether the Secret or its keys must be defined')

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_name class-attribute instance-attribute

secret_name = Field(default=None, alias='secretName', description="Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret")

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Step

Step is used to run a given template.

Must be instantiated under a Steps or Parallel context, or outside a Workflow.

Source code in src/hera/workflows/steps.py
class Step(
    TemplateInvocatorSubNodeMixin,
    ArgumentsMixin,
    SubNodeMixin,
    ParameterMixin,
    ItemMixin,
):
    """Step is used to run a given template.

    Must be instantiated under a Steps or Parallel context, or outside a Workflow.
    """

    @property
    def _subtype(self) -> str:
        return "steps"

    def _build_as_workflow_step(self) -> _ModelWorkflowStep:
        _template = None
        if isinstance(self.template, str):
            _template = self.template
        elif isinstance(self.template, (_ModelTemplate, TemplateMixin)):
            _template = self.template.name

        _inline = None
        if isinstance(self.inline, _ModelTemplate):
            _inline = self.inline
        elif isinstance(self.inline, Templatable):
            _inline = self.inline._build_template()

        return _ModelWorkflowStep(
            arguments=self._build_arguments(),
            continue_on=self.continue_on,
            hooks=self.hooks,
            inline=_inline,
            name=self.name,
            on_exit=self._build_on_exit(),
            template=_template,
            template_ref=self.template_ref,
            when=self.when,
            with_items=self._build_with_items(),
            with_param=self._build_with_param(),
            with_sequence=self.with_sequence,
        )

    def _build_step(
        self,
    ) -> List[_ModelWorkflowStep]:
        return [self._build_as_workflow_step()]
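`_build_as_workflow_step` accepts `template` either as a plain name string or as a template-like object, in which case its `.name` is used. The resolution rule can be sketched standalone (the function is illustrative; the real method type-checks against `_ModelTemplate` and `TemplateMixin` rather than duck-typing):

```python
from typing import Optional

def resolve_template_name(template: object) -> Optional[str]:
    """Sketch of how Step resolves its `template` field to a name string."""
    if isinstance(template, str):
        return template  # already a template name
    name = getattr(template, "name", None)
    if name is not None:
        return name      # template-like object: reference it by name
    return None          # e.g. unset, or an inline template handled elsewhere
```
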

arguments class-attribute instance-attribute

arguments = None

continue_on class-attribute instance-attribute

continue_on = None

exit_code property

exit_code

ExitCode holds the exit code of a script template.

finished_at property

finished_at

Time at which this node completed.

hooks class-attribute instance-attribute

hooks = None

id property

id

ID of this node.

inline class-attribute instance-attribute

inline = None

ip property

ip

IP of this node.

name instance-attribute

name

on_exit class-attribute instance-attribute

on_exit = None

result property

result

Result holds the result (stdout) of a script template.

started_at property

started_at

Time at which this node started.

status property

status

Status of this node.

template class-attribute instance-attribute

template = None

template_ref class-attribute instance-attribute

template_ref = None

when class-attribute instance-attribute

when = None

with_items class-attribute instance-attribute

with_items = None

with_param class-attribute instance-attribute

with_param = None

with_sequence class-attribute instance-attribute

with_sequence = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

get_artifact

get_artifact(name)

Gets an artifact from the outputs of this subnode.

Source code in src/hera/workflows/_mixins.py
def get_artifact(self, name: str) -> Artifact:
    """Gets an artifact from the outputs of this subnode."""
    return self._get_artifact(name=name, subtype=self._subtype)

get_parameter

get_parameter(name)

Gets a parameter from the outputs of this subnode.

Source code in src/hera/workflows/_mixins.py
def get_parameter(self, name: str) -> Parameter:
    """Gets a parameter from the outputs of this subnode."""
    return self._get_parameter(name=name, subtype=self._subtype)

get_parameters_as

get_parameters_as(name)

Returns a Parameter that represents all the outputs of this subnode.

Parameters

name: str The name of the parameter to search for.

Returns:

Parameter The parameter, named based on the given name, along with a value that references all outputs.

Source code in src/hera/workflows/_mixins.py
def get_parameters_as(self, name: str) -> Parameter:
    """Returns a `Parameter` that represents all the outputs of this subnode.

    Parameters
    ----------
    name: str
        The name of the parameter to search for.

    Returns:
    -------
    Parameter
        The parameter, named based on the given `name`, along with a value that references all outputs.
    """
    return self._get_parameters_as(name=name, subtype=self._subtype)
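The reference produced by this helper can be illustrated with a plain string construction. Assuming the Argo Workflows convention that a node's full set of output parameters is addressable as `{{steps.<name>.outputs.parameters}}` (with `tasks.` instead of `steps.` under a DAG), a minimal sketch:

```python
# Sketch of the templated reference a `get_parameters_as` call resolves to,
# assuming the Argo Workflows convention for aggregated output parameters.
def parameters_reference(node_name: str, subtype: str = "steps") -> str:
    """Build the templated reference to all output parameters of a node."""
    return f"{{{{{subtype}.{node_name}.outputs.parameters}}}}"

print(parameters_reference("fanout"))  # {{steps.fanout.outputs.parameters}}
```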

get_result_as

get_result_as(name)

Returns a Parameter specification with the given name containing the results of self.

Source code in src/hera/workflows/_mixins.py
def get_result_as(self, name: str) -> Parameter:
    """Returns a `Parameter` specification with the given name containing the `results` of `self`."""
    return Parameter(name=name, value=self.result)

Steps

A Steps template invocator is used to define a sequence of steps which can run sequentially or in parallel.

Steps implements the contextmanager interface, so it supports usage of with; any hera.workflows.steps.Step objects instantiated under the context will be added to the Steps’ list of sub_steps.

  • Step and Parallel objects initialised within a Steps context will be added to the list of sub_steps in the order they are initialised.
  • All Step objects initialised within a Parallel context will run in parallel.
Source code in src/hera/workflows/steps.py
class Steps(
    IOMixin,
    TemplateMixin,
    CallableTemplateMixin,
    ContextMixin,
):
    """A Steps template invocator is used to define a sequence of steps which can run sequentially or in parallel.

    Steps implements the contextmanager interface so allows usage of `with`, under which any
    `hera.workflows.steps.Step` objects instantiated will be added to the Steps' list of sub_steps.

    * Step and Parallel objects initialised within a Steps context will be added to the list of sub_steps
    in the order they are initialised.
    * All Step objects initialised within a Parallel context will run in parallel.
    """

    sub_steps: List[
        Union[
            Step,
            Parallel,
            List[Step],
            _ModelWorkflowStep,
            List[_ModelWorkflowStep],
        ]
    ] = []

    def _build_steps(self) -> Optional[List[List[_ModelWorkflowStep]]]:
        steps = []
        for workflow_step in self.sub_steps:
            if isinstance(workflow_step, Steppable):
                steps.append(workflow_step._build_step())
            elif isinstance(workflow_step, _ModelWorkflowStep):
                steps.append([workflow_step])
            elif isinstance(workflow_step, List):
                substeps = []
                for s in workflow_step:
                    if isinstance(s, Step):
                        substeps.append(s._build_as_workflow_step())
                    elif isinstance(s, _ModelWorkflowStep):
                        substeps.append(s)
                    else:
                        raise InvalidType(type(s))
                steps.append(substeps)
            else:
                raise InvalidType(type(workflow_step))

        return steps or None

    def _add_sub(self, node: Any):
        if not isinstance(node, (Step, Parallel)):
            raise InvalidType(type(node))

        self.sub_steps.append(node)

    def parallel(self) -> Parallel:
        """Returns a Parallel object which can be used in a sub-context manager."""
        return Parallel()

    def _build_template(self) -> _ModelTemplate:
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            container=None,
            container_set=None,
            daemon=self.daemon,
            dag=None,
            data=None,
            executor=self.executor,
            fail_fast=self.fail_fast,
            http=None,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            metrics=self._build_metrics(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            parallelism=self.parallelism,
            plugin=self.plugin,
            pod_spec_patch=self.pod_spec_patch,
            priority=self.priority,
            priority_class_name=self.priority_class_name,
            resource=None,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            script=None,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            steps=self._build_steps(),
            suspend=None,
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

daemon class-attribute instance-attribute

daemon = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

inputs class-attribute instance-attribute

inputs = None

labels class-attribute instance-attribute

labels = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

outputs class-attribute instance-attribute

outputs = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

sub_steps class-attribute instance-attribute

sub_steps = []

synchronization class-attribute instance-attribute

synchronization = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

parallel

parallel()

Returns a Parallel object which can be used in a sub-context manager.

Source code in src/hera/workflows/steps.py
def parallel(self) -> Parallel:
    """Returns a Parallel object which can be used in a sub-context manager."""
    return Parallel()

StorageOSVolume

StorageOSVolume represents a Storage OS volume to mount.
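Rendered into a workflow, `_build_volume` produces a Kubernetes volume entry of roughly the following shape (field names per the Kubernetes StorageOS volume source; the values here are made up for the example, and the optional `secretRef` is omitted):

```python
# Illustrative shape of the Kubernetes volume entry produced by
# `_build_volume` above; values are invented for the example.
volume = {
    "name": "storageos-vol",
    "storageos": {
        "fsType": "ext4",
        "readOnly": False,
        "volumeName": "redis-vol01",
        "volumeNamespace": "default",
    },
}
```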

Source code in src/hera/workflows/volume.py
class StorageOSVolume(_BaseVolume, _ModelStorageOSVolumeSource):
    """`StorageOSVolume` represents a Storage OS volume to mount."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            storageos=_ModelStorageOSVolumeSource(
                fs_type=self.fs_type,
                read_only=self.read_only,
                secret_ref=self.secret_ref,
                volume_name=self.volume_name,
                volume_namespace=self.volume_namespace,
            ),
        )

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

secret_ref class-attribute instance-attribute

secret_ref = Field(default=None, alias='secretRef', description='SecretRef specifies the secret to use for obtaining the StorageOS API credentials.  If not specified, default values will be attempted.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_name class-attribute instance-attribute

volume_name = Field(default=None, alias='volumeName', description='VolumeName is the human-readable name of the StorageOS volume.  Volume names are only unique within a namespace.')

volume_namespace class-attribute instance-attribute

volume_namespace = Field(default=None, alias='volumeNamespace', description='VolumeNamespace specifies the scope of the volume within StorageOS.  If no namespace is specified then the Pod\'s namespace will be used.  This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Suspend

The Suspend template allows the user to pause a workflow for a specified length of time.

The workflow can pause for the given duration or indefinitely (i.e. until manually resumed). The Suspend template also allows you to specify intermediate_parameters, which will replicate the given parameters to the “inputs” and “outputs” of the template, resulting in a Suspend template that pauses and waits for values from the user for the given list of parameters.
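Rendered, a Suspend template is a regular template whose suspend field carries the duration, with any intermediate parameters mirrored into inputs and outputs. A hedged sketch of the resulting manifest shape (per the Argo Workflows SuspendTemplate spec; the template and parameter names are invented):

```python
# Illustrative manifest shape for a Suspend template pausing for 30 seconds,
# with one intermediate parameter replicated to inputs and outputs.
template = {
    "name": "approve",
    "suspend": {"duration": "30"},
    "inputs": {"parameters": [{"name": "approval"}]},
    "outputs": {
        "parameters": [
            # `supplied` marks the value as provided by the user at runtime
            {"name": "approval", "valueFrom": {"supplied": {}}},
        ]
    },
}
```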

Source code in src/hera/workflows/suspend.py
class Suspend(
    TemplateMixin,
    CallableTemplateMixin,
):
    """The Suspend template allows the user to pause a workflow for a specified length of time.

    The workflow can pause based on the given `duration` or indefinitely (i.e. until manually resumed).
    The Suspend template also allows you to specify `intermediate_parameters` which will replicate the given
    parameters to the "inputs" and "outputs" of the template, resulting in a Suspend template that pauses and
    waits for values from the user for the given list of parameters.
    """

    duration: Optional[Union[int, str]] = None
    intermediate_parameters: List[Parameter] = []

    def _build_suspend_template(self) -> _ModelSuspendTemplate:
        return _ModelSuspendTemplate(
            duration=self.duration,
        )

    def _build_outputs(self) -> Optional[Outputs]:
        outputs = []
        for param in self.intermediate_parameters:
            outputs.append(
                Parameter(name=param.name, value_from={"supplied": {}}, description=param.description).as_output()
            )
        return Outputs(parameters=outputs) if outputs else None

    def _build_inputs(self) -> Optional[Inputs]:
        inputs = []
        for param in self.intermediate_parameters:
            inputs.append(param.as_input())
        return Inputs(parameters=inputs) if inputs else None

    def _build_template(self) -> _ModelTemplate:
        return _ModelTemplate(
            active_deadline_seconds=self.active_deadline_seconds,
            affinity=self.affinity,
            archive_location=self.archive_location,
            automount_service_account_token=self.automount_service_account_token,
            executor=self.executor,
            fail_fast=self.fail_fast,
            host_aliases=self.host_aliases,
            init_containers=self.init_containers,
            inputs=self._build_inputs(),
            memoize=self.memoize,
            metadata=self._build_metadata(),
            name=self.name,
            node_selector=self.node_selector,
            outputs=self._build_outputs(),
            plugin=self.plugin,
            priority_class_name=self.priority_class_name,
            priority=self.priority,
            retry_strategy=self.retry_strategy,
            scheduler_name=self.scheduler_name,
            security_context=self.pod_security_context,
            service_account_name=self.service_account_name,
            sidecars=self._build_sidecars(),
            suspend=self._build_suspend_template(),
            synchronization=self.synchronization,
            timeout=self.timeout,
            tolerations=self.tolerations,
        )

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

archive_location class-attribute instance-attribute

archive_location = None

arguments class-attribute instance-attribute

arguments = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

daemon class-attribute instance-attribute

daemon = None

duration class-attribute instance-attribute

duration = None

executor class-attribute instance-attribute

executor = None

fail_fast class-attribute instance-attribute

fail_fast = None

host_aliases class-attribute instance-attribute

host_aliases = None

http class-attribute instance-attribute

http = None

init_containers class-attribute instance-attribute

init_containers = None

intermediate_parameters class-attribute instance-attribute

intermediate_parameters = []

labels class-attribute instance-attribute

labels = None

memoize class-attribute instance-attribute

memoize = None

metrics class-attribute instance-attribute

metrics = None

name class-attribute instance-attribute

name = None

node_selector class-attribute instance-attribute

node_selector = None

parallelism class-attribute instance-attribute

parallelism = None

plugin class-attribute instance-attribute

plugin = None

pod_security_context class-attribute instance-attribute

pod_security_context = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

priority_class_name class-attribute instance-attribute

priority_class_name = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

service_account_name class-attribute instance-attribute

service_account_name = None

sidecars class-attribute instance-attribute

sidecars = None

synchronization class-attribute instance-attribute

synchronization = None

timeout class-attribute instance-attribute

timeout = None

tolerations class-attribute instance-attribute

tolerations = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

TarArchiveStrategy

TarArchiveStrategy indicates artifacts should be serialized using the tar strategy.

Tar archiving is performed using the specified compression level.
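Assuming the compression level maps to the familiar gzip levels (0 to 9), the effect of the knob can be reproduced outside of Hera with the standard library; a minimal sketch:

```python
import io
import tarfile


def tar_gz_size(payload: bytes, compresslevel: int) -> int:
    """Tar + gzip `payload` at the given compression level and return the archive size."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz", compresslevel=compresslevel) as tar:
        info = tarfile.TarInfo(name="artifact.txt")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return len(buf.getvalue())


data = b"hello hera " * 1000
# Higher compression levels trade CPU for a smaller archive.
assert tar_gz_size(data, 9) <= tar_gz_size(data, 0)
```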

Source code in src/hera/workflows/archive.py
class TarArchiveStrategy(ArchiveStrategy):
    """`TarArchiveStrategy` indicates artifacts should be serialized using the `tar` strategy.

    Tar archiving is performed using the specified compression level.
    """

    compression_level: Optional[int] = None

    def _build_archive_strategy(self) -> _ModelArchiveStrategy:
        return _ModelArchiveStrategy(tar=_ModelTarStrategy(compression_level=self.compression_level))

compression_level class-attribute instance-attribute

compression_level = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Task

Task is used to run a given template within a DAG. Must be instantiated under a DAG context.
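Task dependencies are expressed through each task's depends string, which the chaining helpers (`next`, `>>`) build up incrementally. A standalone sketch of that logic, mirroring `_get_dependency_tasks` and `next` from the source on this page (the class name is invented for the illustration):

```python
# Standalone sketch of how `next` / `>>` build up a task's `depends` string,
# mirroring the logic in the Task source on this page.
from typing import List, Optional


class TaskSketch:
    def __init__(self, name: str):
        self.name = name
        self.depends: Optional[str] = None

    def dependency_tasks(self) -> List[str]:
        if self.depends is None:
            return []
        # drop operators (&&, ||) and `.Succeeded`-style result suffixes
        return [t.split(".")[0] for t in self.depends.split() if t not in ("&&", "||")]

    def next(self, other: "TaskSketch", on: Optional[str] = None) -> "TaskSketch":
        """Set self as a dependency of `other`, optionally gated on a task result."""
        condition = f".{on}" if on else ""
        if other.depends is None:
            other.depends = self.name + condition
        elif self.name in other.dependency_tasks():
            raise ValueError(f"{self.name} already in {other.name}'s depends")
        else:
            other.depends += f" && {self.name}{condition}"
        return other


a, b, c = TaskSketch("a"), TaskSketch("b"), TaskSketch("c")
a.next(c)
b.next(c, on="Succeeded")
print(c.depends)  # a && b.Succeeded
```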

Source code in src/hera/workflows/task.py
class Task(
    TemplateInvocatorSubNodeMixin,
    ArgumentsMixin,
    SubNodeMixin,
    ParameterMixin,
    ItemMixin,
):
    """Task is used to run a given template within a DAG. Must be instantiated under a DAG context."""

    dependencies: Optional[List[str]] = None
    depends: Optional[str] = None

    def _get_dependency_tasks(self) -> List[str]:
        if self.depends is None:
            return []

        # filter out operators
        all_operators = [o for o in Operator]
        tasks = [t for t in self.depends.split() if t not in all_operators]

        # remove dot suffixes
        task_names = [t.split(".")[0] for t in tasks]
        return task_names

    @property
    def _subtype(self) -> str:
        return "tasks"

    def next(self, other: Task, operator: Operator = Operator.and_, on: Optional[TaskResult] = None) -> Task:
        """Set self as a dependency of `other`."""
        assert issubclass(other.__class__, Task)

        condition = f".{on.value}" if on else ""

        if other.depends is None:
            # First dependency
            other.depends = self.name + condition
        elif self.name in other._get_dependency_tasks():
            raise ValueError(f"{self.name} already in {other.name}'s depends: {other.depends}")
        else:
            # Add follow-up dependency
            other.depends += f" {operator} {self.name + condition}"
        return other

    def __rrshift__(self, other: List[Union[Task, str]]) -> Task:
        """Set `other` as a dependency self."""
        assert isinstance(other, list), f"Unknown type {type(other)} specified using reverse right bitshift operator"
        for o in other:
            if isinstance(o, Task):
                o.next(self)
            else:
                assert isinstance(
                    o, str
                ), f"Unknown list item type {type(o)} specified using reverse right bitshift operator"
                if self.depends is None:
                    self.depends = o
                else:
                    self.depends += f" && {o}"
        return self

    def __rshift__(self, other: Union[Task, List[Task]]) -> Union[Task, List[Task]]:
        """Set self as a dependency of `other` which can be a single Task or list of Tasks."""
        if isinstance(other, Task):
            return self.next(other)
        elif isinstance(other, list):
            for o in other:
                assert isinstance(
                    o, Task
                ), f"Unknown list item type {type(o)} specified using right bitshift operator `>>`"
                self.next(o)
            return other
        raise ValueError(f"Unknown type {type(other)} provided to `__rshift__`")

    def __or__(self, other: Union[Task, str]) -> str:
        """Adds a condition of."""
        if isinstance(other, Task):
            return f"({self.name} || {other.name})"
        assert isinstance(other, str), f"Unknown type {type(other)} specified using `|` operator"
        return f"{self.name} || {other}"

    def on_workflow_status(self, status: WorkflowStatus, op: Operator = Operator.equals) -> Task:
        """Sets the current task to run when the workflow finishes with the specified status."""
        expression = f"{{{{workflow.status}}}} {op} {status}"
        if self.when:
            self.when += f" {Operator.and_} {expression}"
        else:
            self.when = expression
        return self

    def on_success(self, other: Task) -> Task:
        """Sets the current task to run when the given `other` task succeeds."""
        return self.next(other, on=TaskResult.succeeded)

    def on_failure(self, other: Task) -> Task:
        """Sets the current task to run when the given `other` task fails."""
        return self.next(other, on=TaskResult.failed)

    def on_error(self, other: Task) -> Task:
        """Sets the current task to run when the given `other` task errors."""
        return self.next(other, on=TaskResult.errored)

    def on_other_result(self, other: Task, value: str, operator: Operator = Operator.equals) -> Task:
        """Sets the current task to run when the given `other` task results in the specified `value` result."""
        expression = f"{other.result} {operator} {value}"
        if self.when:
            self.when += f" {Operator.and_} {expression}"
        else:
            self.when = expression
        other.next(self)
        return self

    def when_any_succeeded(self, other: Task) -> Task:
        """Sets the current task to run when the given `other` task succeedds."""
        assert (self.with_param is not None) or (
            self.with_sequence is not None
        ), "Can only use `when_all_failed` when using `with_param` or `with_sequence`"

        return self.next(other, on=TaskResult.any_succeeded)

    def when_all_failed(self, other: Task) -> Task:
        """Sets the current task to run when the given `other` task has failed."""
        assert (self.with_param is not None) or (
            self.with_sequence is not None
        ), "Can only use `when_all_failed` when using `with_param` or `with_sequence`"

        return self.next(other, on=TaskResult.all_failed)

    def _build_dag_task(self) -> _ModelDAGTask:
        _template = None
        if isinstance(self.template, str):
            _template = self.template
        elif isinstance(self.template, (Template, TemplateMixin)):
            _template = self.template.name

        _inline = None
        if isinstance(self.inline, Template):
            _inline = self.inline
        elif isinstance(self.inline, Templatable):
            _inline = self.inline._build_template()

        return _ModelDAGTask(
            arguments=self._build_arguments(),
            continue_on=self.continue_on,
            dependencies=self.dependencies,
            depends=self.depends,
            hooks=self.hooks,
            inline=_inline,
            name=self.name,
            on_exit=self._build_on_exit(),
            template=_template,
            template_ref=self.template_ref,
            when=self.when,
            with_items=self._build_with_items(),
            with_param=self._build_with_param(),
            with_sequence=self.with_sequence,
        )
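
`_build_dag_task` resolves `template` to a plain name when a template object is given. That branch can be sketched standalone (no hera import; `NamedTemplate` is a hypothetical stand-in for `Template`/`TemplateMixin`):

```python
from typing import Optional, Union

class NamedTemplate:
    """Hypothetical stand-in for Template/TemplateMixin: anything with a .name."""

    def __init__(self, name: str):
        self.name = name

def resolve_template(template: Optional[Union[str, NamedTemplate]]) -> Optional[str]:
    # Mirrors the str-vs-template branch in _build_dag_task above.
    if isinstance(template, str):
        return template
    if isinstance(template, NamedTemplate):
        return template.name
    return None

print(resolve_template(NamedTemplate("echo")))  # echo
print(resolve_template("echo"))                 # echo
```

Either way, the DAG task ends up referencing the template by name; passing an unset `template` leaves the field `None` so `template_ref` or `inline` can take over.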

arguments class-attribute instance-attribute

arguments = None

continue_on class-attribute instance-attribute

continue_on = None

dependencies class-attribute instance-attribute

dependencies = None

depends class-attribute instance-attribute

depends = None

exit_code property

exit_code

ExitCode holds the exit code of a script template.

finished_at property

finished_at

Time at which this node completed.

hooks class-attribute instance-attribute

hooks = None

id property

id

ID of this node.

inline class-attribute instance-attribute

inline = None

ip property

ip

IP of this node.

name instance-attribute

name

on_exit class-attribute instance-attribute

on_exit = None

result property

result

Result holds the result (stdout) of a script template.

started_at property

started_at

Time at which this node started.

status property

status

Status of this node.

template class-attribute instance-attribute

template = None

template_ref class-attribute instance-attribute

template_ref = None

when class-attribute instance-attribute

when = None

with_items class-attribute instance-attribute

with_items = None

with_param class-attribute instance-attribute

with_param = None

with_sequence class-attribute instance-attribute

with_sequence = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

get_artifact

get_artifact(name)

Gets an artifact from the outputs of this subnode.

Source code in src/hera/workflows/_mixins.py
def get_artifact(self, name: str) -> Artifact:
    """Gets an artifact from the outputs of this subnode."""
    return self._get_artifact(name=name, subtype=self._subtype)

get_parameter

get_parameter(name)

Gets a parameter from the outputs of this subnode.

Source code in src/hera/workflows/_mixins.py
def get_parameter(self, name: str) -> Parameter:
    """Gets a parameter from the outputs of this subnode."""
    return self._get_parameter(name=name, subtype=self._subtype)

get_parameters_as

get_parameters_as(name)

Returns a Parameter that represents all the outputs of this subnode.

Parameters

name: str The name of the parameter to search for.

Returns:

Parameter The parameter, named based on the given name, along with a value that references all outputs.

Source code in src/hera/workflows/_mixins.py
def get_parameters_as(self, name: str) -> Parameter:
    """Returns a `Parameter` that represents all the outputs of this subnode.

    Parameters
    ----------
    name: str
        The name of the parameter to search for.

    Returns:
    -------
    Parameter
        The parameter, named based on the given `name`, along with a value that references all outputs.
    """
    return self._get_parameters_as(name=name, subtype=self._subtype)

get_result_as

get_result_as(name)

Returns a Parameter specification with the given name containing the results of self.

Source code in src/hera/workflows/_mixins.py
def get_result_as(self, name: str) -> Parameter:
    """Returns a `Parameter` specification with the given name containing the `results` of `self`."""
    return Parameter(name=name, value=self.result)

next

next(other, operator=Operator.and_, on=None)

Set self as a dependency of other.

Source code in src/hera/workflows/task.py
def next(self, other: Task, operator: Operator = Operator.and_, on: Optional[TaskResult] = None) -> Task:
    """Set self as a dependency of `other`."""
    assert issubclass(other.__class__, Task)

    condition = f".{on.value}" if on else ""

    if other.depends is None:
        # First dependency
        other.depends = self.name + condition
    elif self.name in other._get_dependency_tasks():
        raise ValueError(f"{self.name} already in {other.name}'s depends: {other.depends}")
    else:
        # Add follow-up dependency
        other.depends += f" {operator} {self.name + condition}"
    return other
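
The depends-string construction above can be exercised standalone (no hera required; `SimpleTask` is a hypothetical stand-in for `Task`, and `"&&"` is the rendered form of `Operator.and_`):

```python
from typing import Optional

class SimpleTask:
    """Hypothetical stand-in replicating the depends-string logic of Task.next."""

    def __init__(self, name: str):
        self.name = name
        self.depends: Optional[str] = None

    def next(self, other: "SimpleTask", operator: str = "&&", on: Optional[str] = None) -> "SimpleTask":
        condition = f".{on}" if on else ""
        if other.depends is None:
            other.depends = self.name + condition  # first dependency
        else:
            other.depends += f" {operator} {self.name + condition}"  # follow-up dependency
        return other

a, b, c = SimpleTask("A"), SimpleTask("B"), SimpleTask("C")
a.next(c)               # C depends on A
b.next(c, on="Failed")  # ... and on B having failed
print(c.depends)  # A && B.Failed
```

The resulting string is what Argo evaluates for the enhanced `depends` field, so `A >> B` style chaining in hera reduces to this concatenation.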

on_error

on_error(other)

Sets the current task to run when the given other task errors.

Source code in src/hera/workflows/task.py
def on_error(self, other: Task) -> Task:
    """Sets the current task to run when the given `other` task errors."""
    return self.next(other, on=TaskResult.errored)

on_failure

on_failure(other)

Sets the current task to run when the given other task fails.

Source code in src/hera/workflows/task.py
def on_failure(self, other: Task) -> Task:
    """Sets the current task to run when the given `other` task fails."""
    return self.next(other, on=TaskResult.failed)

on_other_result

on_other_result(other, value, operator=Operator.equals)

Sets the current task to run when the given other task results in the specified value result.

Source code in src/hera/workflows/task.py
def on_other_result(self, other: Task, value: str, operator: Operator = Operator.equals) -> Task:
    """Sets the current task to run when the given `other` task results in the specified `value` result."""
    expression = f"{other.result} {operator} {value}"
    if self.when:
        self.when += f" {Operator.and_} {expression}"
    else:
        self.when = expression
    other.next(self)
    return self
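
The `when` accumulation above is plain string concatenation; a minimal sketch, with literal strings standing in for hera's `Operator` and the `other.result` template:

```python
from typing import Optional

def add_when(existing: Optional[str], other_result: str, operator: str, value: str) -> str:
    """Append a result comparison to an existing `when` expression, if any."""
    expression = f"{other_result} {operator} {value}"
    return f"{existing} && {expression}" if existing else expression

when = add_when(None, "{{tasks.A.outputs.result}}", "==", "ok")
when = add_when(when, "{{tasks.B.outputs.result}}", "==", "ok")
print(when)
# {{tasks.A.outputs.result}} == ok && {{tasks.B.outputs.result}} == ok
```

Note that `on_other_result` also calls `other.next(self)`, so the `when` guard is paired with an explicit dependency on `other`.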

on_success

on_success(other)

Sets the current task to run when the given other task succeeds.

Source code in src/hera/workflows/task.py
def on_success(self, other: Task) -> Task:
    """Sets the current task to run when the given `other` task succeeds."""
    return self.next(other, on=TaskResult.succeeded)

on_workflow_status

on_workflow_status(status, op=Operator.equals)

Sets the current task to run when the workflow finishes with the specified status.

Source code in src/hera/workflows/task.py
def on_workflow_status(self, status: WorkflowStatus, op: Operator = Operator.equals) -> Task:
    """Sets the current task to run when the workflow finishes with the specified status."""
    expression = f"{{{{workflow.status}}}} {op} {status}"
    if self.when:
        self.when += f" {Operator.and_} {expression}"
    else:
        self.when = expression
    return self
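
The expression built here compares the `{{workflow.status}}` template variable against the given status; sketched standalone (string arguments stand in for `WorkflowStatus` and `Operator` values):

```python
def workflow_status_when(status: str, op: str = "==") -> str:
    """Build the `when` expression used by on_workflow_status."""
    return f"{{{{workflow.status}}}} {op} {status}"

print(workflow_status_when("Succeeded"))  # {{workflow.status}} == Succeeded
```

The quadruple braces in the f-string escape down to the double braces Argo expects in its template syntax.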

when_all_failed

when_all_failed(other)

Sets the current task to run when the given other task has failed.

Source code in src/hera/workflows/task.py
def when_all_failed(self, other: Task) -> Task:
    """Sets the current task to run when the given `other` task has failed."""
    assert (self.with_param is not None) or (
        self.with_sequence is not None
    ), "Can only use `when_all_failed` when using `with_param` or `with_sequence`"

    return self.next(other, on=TaskResult.all_failed)

when_any_succeeded

when_any_succeeded(other)

Sets the current task to run when the given other task succeeds.

Source code in src/hera/workflows/task.py
def when_any_succeeded(self, other: Task) -> Task:
    """Sets the current task to run when the given `other` task succeedds."""
    assert (self.with_param is not None) or (
        self.with_sequence is not None
    ), "Can only use `when_all_failed` when using `with_param` or `with_sequence`"

    return self.next(other, on=TaskResult.any_succeeded)

TaskResult

The enumeration of Task Results.

See Also

Argo Depends Docs

Source code in src/hera/workflows/task.py
class TaskResult(Enum):
    """The enumeration of Task Results.

    See Also:
        [Argo Depends Docs](https://argoproj.github.io/argo-workflows/enhanced-depends-logic/#depends)
    """

    failed = "Failed"
    succeeded = "Succeeded"
    errored = "Errored"
    skipped = "Skipped"
    omitted = "Omitted"
    daemoned = "Daemoned"
    any_succeeded = "AnySucceeded"
    all_failed = "AllFailed"

all_failed class-attribute instance-attribute

all_failed = 'AllFailed'

any_succeeded class-attribute instance-attribute

any_succeeded = 'AnySucceeded'

daemoned class-attribute instance-attribute

daemoned = 'Daemoned'

errored class-attribute instance-attribute

errored = 'Errored'

failed class-attribute instance-attribute

failed = 'Failed'

omitted class-attribute instance-attribute

omitted = 'Omitted'

skipped class-attribute instance-attribute

skipped = 'Skipped'

succeeded class-attribute instance-attribute

succeeded = 'Succeeded'

UserContainer

UserContainer is a container type that is specifically used as a side container.

Source code in src/hera/workflows/user_container.py
class UserContainer(_ModelUserContainer):
    """`UserContainer` is a container type that is specifically used as a side container."""

    env: Optional[List[Union[_BaseEnv, EnvVar]]] = None  # type: ignore[assignment]
    env_from: Optional[List[Union[_BaseEnvFrom, EnvFromSource]]] = None  # type: ignore[assignment]
    image_pull_policy: Optional[Union[str, ImagePullPolicy]] = None  # type: ignore[assignment]
    resources: Optional[Union[Resources, ResourceRequirements]] = None  # type: ignore[assignment]
    volumes: Optional[List[_BaseVolume]] = None

    def build(self) -> _ModelUserContainer:
        """Builds the Hera auto-generated model of the user container."""
        return _ModelUserContainer(
            args=self.args,
            command=self.command,
            env=self.env,
            env_from=self.env_from,
            image=self.image,
            image_pull_policy=self.image_pull_policy,
            lifecycle=self.lifecycle,
            liveness_probe=self.liveness_probe,
            mirror_volume_mounts=self.mirror_volume_mounts,
            name=self.name,
            ports=self.ports,
            readiness_probe=self.readiness_probe,
            resources=self.resources,
            security_context=self.security_context,
            startup_probe=self.startup_probe,
            stdin=self.stdin,
            stdin_once=self.stdin_once,
            termination_message_path=self.termination_message_path,
            termination_message_policy=self.termination_message_policy,
            tty=self.tty,
            volume_devices=self.volume_devices,
            volume_mounts=None if self.volumes is None else [v._build_volume_mount() for v in self.volumes],
            working_dir=self.working_dir,
        )
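
The one non-trivial mapping in `build` is deriving `volume_mounts` from `volumes`; sketched standalone with a hypothetical `FakeVolume` in place of `_BaseVolume`:

```python
from typing import List, Optional

class FakeVolume:
    """Hypothetical stand-in for _BaseVolume exposing a mount-building hook."""

    def __init__(self, name: str, mount_path: str):
        self.name, self.mount_path = name, mount_path

    def _build_volume_mount(self) -> dict:
        return {"name": self.name, "mountPath": self.mount_path}

def build_volume_mounts(volumes: Optional[List[FakeVolume]]) -> Optional[list]:
    # Mirrors: None if self.volumes is None else [v._build_volume_mount() for v in self.volumes]
    return None if volumes is None else [v._build_volume_mount() for v in volumes]

print(build_volume_mounts([FakeVolume("scratch", "/mnt/scratch")]))
# [{'name': 'scratch', 'mountPath': '/mnt/scratch'}]
```

This lets a `UserContainer` accept high-level Hera volumes while the generated model only ever sees plain volume mounts.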

args class-attribute instance-attribute

args = Field(default=None, description='Arguments to the entrypoint. The container image\'s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container\'s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell')

command class-attribute instance-attribute

command = Field(default=None, description='Entrypoint array. Not executed within a shell. The container image\'s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container\'s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell')

env class-attribute instance-attribute

env = None

env_from class-attribute instance-attribute

env_from = None

image class-attribute instance-attribute

image = Field(default=None, description='Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.')

image_pull_policy class-attribute instance-attribute

image_pull_policy = None

lifecycle class-attribute instance-attribute

lifecycle = Field(default=None, description='Actions that the management system should take in response to container lifecycle events. Cannot be updated.')

liveness_probe class-attribute instance-attribute

liveness_probe = Field(default=None, alias='livenessProbe', description='Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes')

mirror_volume_mounts class-attribute instance-attribute

mirror_volume_mounts = Field(default=None, alias='mirrorVolumeMounts', description='MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding')

name class-attribute instance-attribute

name = Field(..., description='Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.')

ports class-attribute instance-attribute

ports = Field(default=None, description='List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.')

readiness_probe class-attribute instance-attribute

readiness_probe = Field(default=None, alias='readinessProbe', description='Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes')

resources class-attribute instance-attribute

resources = None

security_context class-attribute instance-attribute

security_context = Field(default=None, alias='securityContext', description='SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/')

startup_probe class-attribute instance-attribute

startup_probe = Field(default=None, alias='startupProbe', description="StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes")

stdin class-attribute instance-attribute

stdin = Field(default=None, description='Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.')

stdin_once class-attribute instance-attribute

stdin_once = Field(default=None, alias='stdinOnce', description='Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false')

termination_message_path class-attribute instance-attribute

termination_message_path = Field(default=None, alias='terminationMessagePath', description="Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.")

termination_message_policy class-attribute instance-attribute

termination_message_policy = Field(default=None, alias='terminationMessagePolicy', description='Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.')

tty class-attribute instance-attribute

tty = Field(default=None, description="Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.")

volume_devices class-attribute instance-attribute

volume_devices = Field(default=None, alias='volumeDevices', description='volumeDevices is the list of block devices to be used by the container.')

volume_mounts class-attribute instance-attribute

volume_mounts = Field(default=None, alias='volumeMounts', description="Pod volumes to mount into the container's filesystem. Cannot be updated.")

volumes class-attribute instance-attribute

volumes = None

working_dir class-attribute instance-attribute

working_dir = Field(default=None, alias='workingDir', description="Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.")

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

build

build()

Builds the Hera auto-generated model of the user container.

Source code in src/hera/workflows/user_container.py
def build(self) -> _ModelUserContainer:
    """Builds the Hera auto-generated model of the user container."""
    return _ModelUserContainer(
        args=self.args,
        command=self.command,
        env=self.env,
        env_from=self.env_from,
        image=self.image,
        image_pull_policy=self.image_pull_policy,
        lifecycle=self.lifecycle,
        liveness_probe=self.liveness_probe,
        mirror_volume_mounts=self.mirror_volume_mounts,
        name=self.name,
        ports=self.ports,
        readiness_probe=self.readiness_probe,
        resources=self.resources,
        security_context=self.security_context,
        startup_probe=self.startup_probe,
        stdin=self.stdin,
        stdin_once=self.stdin_once,
        termination_message_path=self.termination_message_path,
        termination_message_policy=self.termination_message_policy,
        tty=self.tty,
        volume_devices=self.volume_devices,
        volume_mounts=None if self.volumes is None else [v._build_volume_mount() for v in self.volumes],
        working_dir=self.working_dir,
    )

Volume

Volume represents a basic, dynamic, volume representation.

This Volume can not only be instantiated for mounting purposes but also for dynamically provisioning volumes in K8s. When the volume is used, a corresponding persistent volume claim is also created on workflow submission.

Source code in src/hera/workflows/volume.py
class Volume(_BaseVolume, _ModelPersistentVolumeClaimSpec):
    """Volume represents a basic, dynamic, volume representation.

    This `Volume` can not only be instantiated for mounting purposes but also for dynamically provisioning
    volumes in K8s. When the volume is used, a corresponding persistent volume claim is also created on workflow
    submission.
    """

    size: Optional[str] = None  # type: ignore
    resources: Optional[ResourceRequirements] = None
    metadata: Optional[ObjectMeta] = None
    access_modes: Optional[List[Union[str, AccessMode]]] = [AccessMode.read_write_once]  # type: ignore
    storage_class_name: Optional[str] = None

    @validator("access_modes", pre=True, always=True)
    def _check_access_modes(cls, v):
        if not v:
            return [AccessMode.read_write_once]

        result = []
        for mode in v:
            if isinstance(mode, AccessMode):
                result.append(mode)
            else:
                result.append(AccessMode(mode))
        return result

    @validator("name", pre=True, always=True)
    def _check_name(cls, v):
        return v or str(uuid.uuid4())

    @root_validator(pre=True)
    def _merge_reqs(cls, values):
        if "size" in values and "resources" in values:
            resources: ResourceRequirements = values.get("resources")
            if resources.requests is not None:
                if "storage" in resources.requests:
                    pass  # take the storage specification in resources
                else:
                    resources.requests["storage"] = values.get("size")
            values["resources"] = resources
        elif "resources" not in values:
            assert "size" in values, "at least one of `size` or `resources` must be specified"
            validate_storage_units(cast(str, values.get("size")))
            values["resources"] = ResourceRequirements(requests={"storage": values.get("size")})
        elif "resources" in values:
            resources = cast(ResourceRequirements, values.get("resources"))
            assert resources.requests is not None, "Resource requests are required"
            storage = resources.requests.get("storage")
            assert storage is not None, "At least one of `size` or `resources.requests.storage` must be specified"
            validate_storage_units(cast(str, storage))
        return values

    def _build_persistent_volume_claim(self) -> _ModelPersistentVolumeClaim:
        return _ModelPersistentVolumeClaim(
            metadata=self.metadata or ObjectMeta(name=self.name),
            spec=_ModelPersistentVolumeClaimSpec(
                access_modes=[str(cast(AccessMode, am).value) for am in self.access_modes]
                if self.access_modes is not None
                else None,
                data_source=self.data_source,
                data_source_ref=self.data_source_ref,
                resources=self.resources,
                selector=self.selector,
                storage_class_name=self.storage_class_name,
                volume_mode=self.volume_mode,
                volume_name=self.volume_name,
            ),
        )

    def _build_volume(self) -> _ModelVolume:
        claim = self._build_persistent_volume_claim()
        assert claim.metadata is not None, "claim metadata is required"
        return _ModelVolume(
            name=self.name,
            persistent_volume_claim=_ModelPersistentVolumeClaimVolumeSource(
                claim_name=cast(str, claim.metadata.name),
                read_only=self.read_only,
            ),
        )
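
The `size`/`resources` merge performed by `_merge_reqs` can be sketched with plain dicts (no pydantic; the storage-unit validation is omitted here):

```python
from typing import Optional

def merge_reqs(size: Optional[str] = None, requests: Optional[dict] = None) -> dict:
    """Mirror Volume._merge_reqs: an explicit `resources.requests.storage` wins over `size`."""
    if size is not None and requests is not None:
        requests.setdefault("storage", size)  # keep the storage specification in resources if present
        return requests
    if requests is None:
        assert size is not None, "at least one of `size` or `resources` must be specified"
        return {"storage": size}
    assert "storage" in requests, "at least one of `size` or `resources.requests.storage` must be specified"
    return requests

print(merge_reqs(size="5Gi"))                                # {'storage': '5Gi'}
print(merge_reqs(size="5Gi", requests={"storage": "10Gi"}))  # {'storage': '10Gi'}
```

So `Volume(name="v", size="5Gi", mount_path="/mnt/v")` is enough to produce a full PVC spec, while an explicit `resources` object takes precedence.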

access_modes class-attribute instance-attribute

access_modes = [AccessMode.read_write_once]

data_source class-attribute instance-attribute

data_source = Field(default=None, alias='dataSource', description='This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.')

data_source_ref class-attribute instance-attribute

data_source_ref = Field(default=None, alias='dataSourceRef', description='Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef\n  allows any non-core object, as well as PersistentVolumeClaim objects.\n* While DataSource ignores disallowed values (dropping them), DataSourceRef\n  preserves all values, and generates an error if a disallowed value is\n  specified.\n(Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.')

metadata class-attribute instance-attribute

metadata = None

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

resources class-attribute instance-attribute

resources = None

selector class-attribute instance-attribute

selector = Field(default=None, description='A label query over volumes to consider for binding.')

size class-attribute instance-attribute

size = None

storage_class_name class-attribute instance-attribute

storage_class_name = None

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_mode class-attribute instance-attribute

volume_mode = Field(default=None, alias='volumeMode', description='volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.')

volume_name class-attribute instance-attribute

volume_name = Field(default=None, alias='volumeName', description='VolumeName is the binding reference to the PersistentVolume backing this claim.')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

VsphereVirtualDiskVolume

VsphereVirtualDiskVolume represents a vSphere virtual disk volume to mount.

Source code in src/hera/workflows/volume.py
class VsphereVirtualDiskVolume(_BaseVolume, _ModelVsphereVirtualDiskVolumeSource):
    """`VsphereVirtualDiskVolume` represents a vSphere virtual disk volume to mount."""

    def _build_volume(self) -> _ModelVolume:
        return _ModelVolume(
            name=self.name,
            vsphere_volume=_ModelVsphereVirtualDiskVolumeSource(
                fs_type=self.fs_type,
                storage_policy_id=self.storage_policy_id,
                storage_policy_name=self.storage_policy_name,
                volume_path=self.volume_path,
            ),
        )
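For orientation, `_build_volume` produces a Kubernetes volume entry along these lines (a sketch with illustrative names and paths):

```yaml
volumes:
  - name: vsphere-volume          # taken from the Hera volume's `name`
    vsphereVolume:
      volumePath: "[datastore1] volumes/my-disk.vmdk"   # required `volume_path`
      fsType: ext4                # optional; implicitly inferred as ext4 if unspecified
```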

fs_type class-attribute instance-attribute

fs_type = Field(default=None, alias='fsType', description='Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.')

mount_path class-attribute instance-attribute

mount_path = None

mount_propagation class-attribute instance-attribute

mount_propagation = Field(default=None, alias='mountPropagation', description='mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.')

name class-attribute instance-attribute

name = None

read_only class-attribute instance-attribute

read_only = Field(default=None, alias='readOnly', description='Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.')

storage_policy_id class-attribute instance-attribute

storage_policy_id = Field(default=None, alias='storagePolicyID', description='Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.')

storage_policy_name class-attribute instance-attribute

storage_policy_name = Field(default=None, alias='storagePolicyName', description='Storage Policy Based Management (SPBM) profile name.')

sub_path class-attribute instance-attribute

sub_path = Field(default=None, alias='subPath', description='Path within the volume from which the container\'s volume should be mounted. Defaults to "" (volume\'s root).')

sub_path_expr class-attribute instance-attribute

sub_path_expr = Field(default=None, alias='subPathExpr', description='Expanded path within the volume from which the container\'s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container\'s environment. Defaults to "" (volume\'s root). SubPathExpr and SubPath are mutually exclusive.')

volume_path class-attribute instance-attribute

volume_path = Field(..., alias='volumePath', description='Path that identifies vSphere volume vmdk')

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

Workflow

The base Workflow class for Hera.

Workflow implements the context manager interface, so it allows usage of with; any hera.workflows.protocol.Templatable object instantiated under the context will be added to the Workflow's list of templates.

Workflows can be created directly on your Argo cluster via create. They can also be dumped to yaml via to_yaml or built according to the Argo schema via build to get an OpenAPI model object.

Source code in src/hera/workflows/workflow.py
class Workflow(
    ArgumentsMixin,
    ContextMixin,
    HookMixin,
    VolumeMixin,
    MetricsMixin,
    ModelMapperMixin,
):
    """The base Workflow class for Hera.

    Workflow implements the context manager interface, so it allows usage of `with`, under which
    any `hera.workflows.protocol.Templatable` object instantiated under the context will be
    added to the Workflow's list of templates.

    Workflows can be created directly on your Argo cluster via `create`. They can also be dumped
    to yaml via `to_yaml` or built according to the Argo schema via `build` to get an OpenAPI model
    object.
    """

    def _build_volume_claim_templates(self) -> Optional[List]:
        return ((self.volume_claim_templates or []) + (self._build_persistent_volume_claims() or [])) or None

    def _build_on_exit(self) -> Optional[str]:
        if isinstance(self.on_exit, Templatable):
            return self.on_exit._build_template().name  # type: ignore
        return self.on_exit

    def _build_templates(self) -> Optional[List[TTemplate]]:
        """Builds the templates into an Argo schema."""
        templates = []
        for template in self.templates:
            if isinstance(template, HookMixin):
                template = template._dispatch_hooks()

            if isinstance(template, Templatable):
                templates.append(template._build_template())
            elif isinstance(template, get_args(TTemplate)):
                templates.append(template)
            else:
                raise InvalidType(f"{type(template)} is not a valid template type")

            if isinstance(template, VolumeClaimable):
                claims = template._build_persistent_volume_claims()
                # If there are no claims, continue, nothing to add
                if not claims:
                    continue
                # If there are no volume claim templates, set them to the constructed claims
                elif self.volume_claim_templates is None:
                    self.volume_claim_templates = claims
                else:
                    # otherwise, we need to merge the two lists of volume claim templates. This prioritizes the
                    # already existing volume claim templates under the assumption that the user has already set
                    # a claim template on the workflow intentionally, or the user is sharing the same volumes across
                    # different templates
                    current_volume_claims_map = {}
                    for claim in self.volume_claim_templates:
                        assert claim.metadata is not None, "expected a workflow volume claim with metadata"
                        assert claim.metadata.name is not None, "expected a named workflow volume claim"
                        current_volume_claims_map[claim.metadata.name] = claim

                    new_volume_claims_map = {}
                    for claim in claims:
                        assert claim.metadata is not None, "expected a volume claim with metadata"
                        assert claim.metadata.name is not None, "expected a named volume claim"
                        new_volume_claims_map[claim.metadata.name] = claim

                    for claim_name, claim in new_volume_claims_map.items():
                        if claim_name not in current_volume_claims_map:
                            self.volume_claim_templates.append(claim)
        return templates or None

    # Workflow fields - https://argoproj.github.io/argo-workflows/fields/#workflow
    api_version: Annotated[Optional[str], _WorkflowModelMapper("api_version")] = None
    kind: Annotated[Optional[str], _WorkflowModelMapper("kind")] = None
    status: Annotated[Optional[_ModelWorkflowStatus], _WorkflowModelMapper("status")] = None

    # ObjectMeta fields - https://argoproj.github.io/argo-workflows/fields/#objectmeta
    annotations: Annotated[Optional[Dict[str, str]], _WorkflowModelMapper("metadata.annotations")] = None
    cluster_name: Annotated[Optional[str], _WorkflowModelMapper("metadata.cluster_name")] = None
    creation_timestamp: Annotated[Optional[Time], _WorkflowModelMapper("metadata.creation_timestamp")] = None
    deletion_grace_period_seconds: Annotated[
        Optional[int], _WorkflowModelMapper("metadata.deletion_grace_period_seconds")
    ] = None
    deletion_timestamp: Annotated[Optional[Time], _WorkflowModelMapper("metadata.deletion_timestamp")] = None
    finalizers: Annotated[Optional[List[str]], _WorkflowModelMapper("metadata.finalizers")] = None
    generate_name: Annotated[Optional[str], _WorkflowModelMapper("metadata.generate_name")] = None
    generation: Annotated[Optional[int], _WorkflowModelMapper("metadata.generation")] = None
    labels: Annotated[Optional[Dict[str, str]], _WorkflowModelMapper("metadata.labels")] = None
    managed_fields: Annotated[
        Optional[List[ManagedFieldsEntry]], _WorkflowModelMapper("metadata.managed_fields")
    ] = None
    name: Annotated[Optional[str], _WorkflowModelMapper("metadata.name")] = None
    namespace: Annotated[Optional[str], _WorkflowModelMapper("metadata.namespace")] = None
    owner_references: Annotated[
        Optional[List[OwnerReference]], _WorkflowModelMapper("metadata.owner_references")
    ] = None
    resource_version: Annotated[Optional[str], _WorkflowModelMapper("metadata.resource_version")] = None
    self_link: Annotated[Optional[str], _WorkflowModelMapper("metadata.self_link")] = None
    uid: Annotated[Optional[str], _WorkflowModelMapper("metadata.uid")] = None

    # WorkflowSpec fields - https://argoproj.github.io/argo-workflows/fields/#workflowspec
    active_deadline_seconds: Annotated[Optional[int], _WorkflowModelMapper("spec.active_deadline_seconds")] = None
    affinity: Annotated[Optional[Affinity], _WorkflowModelMapper("spec.affinity")] = None
    archive_logs: Annotated[Optional[bool], _WorkflowModelMapper("spec.archive_logs")] = None
    artifact_gc: Annotated[Optional[ArtifactGC], _WorkflowModelMapper("spec.artifact_gc")] = None
    artifact_repository_ref: Annotated[
        Optional[ArtifactRepositoryRef], _WorkflowModelMapper("spec.artifact_repository_ref")
    ] = None
    automount_service_account_token: Annotated[
        Optional[bool], _WorkflowModelMapper("spec.automount_service_account_token")
    ] = None
    dns_config: Annotated[Optional[PodDNSConfig], _WorkflowModelMapper("spec.dns_config")] = None
    dns_policy: Annotated[Optional[str], _WorkflowModelMapper("spec.dns_policy")] = None
    entrypoint: Annotated[Optional[str], _WorkflowModelMapper("spec.entrypoint")] = None
    executor: Annotated[Optional[ExecutorConfig], _WorkflowModelMapper("spec.executor")] = None
    hooks: Annotated[Optional[Dict[str, LifecycleHook]], _WorkflowModelMapper("spec.hooks")] = None
    host_aliases: Annotated[Optional[List[HostAlias]], _WorkflowModelMapper("spec.host_aliases")] = None
    host_network: Annotated[Optional[bool], _WorkflowModelMapper("spec.host_network")] = None
    image_pull_secrets: Annotated[ImagePullSecretsT, _WorkflowModelMapper("spec.image_pull_secrets")] = None
    node_selector: Annotated[Optional[Dict[str, str]], _WorkflowModelMapper("spec.node_selector")] = None
    on_exit: Annotated[Optional[Union[str, Templatable]], _WorkflowModelMapper("spec.on_exit", _build_on_exit)] = None
    parallelism: Annotated[Optional[int], _WorkflowModelMapper("spec.parallelism")] = None
    pod_disruption_budget: Annotated[
        Optional[PodDisruptionBudgetSpec], _WorkflowModelMapper("spec.pod_disruption_budget")
    ] = None
    pod_gc: Annotated[Optional[PodGC], _WorkflowModelMapper("spec.pod_gc")] = None
    pod_metadata: Annotated[Optional[Metadata], _WorkflowModelMapper("spec.pod_metadata")] = None
    pod_priority: Annotated[Optional[int], _WorkflowModelMapper("spec.pod_priority")] = None
    pod_priority_class_name: Annotated[Optional[str], _WorkflowModelMapper("spec.pod_priority_class_name")] = None
    pod_spec_patch: Annotated[Optional[str], _WorkflowModelMapper("spec.pod_spec_patch")] = None
    priority: Annotated[Optional[int], _WorkflowModelMapper("spec.priority")] = None
    retry_strategy: Annotated[Optional[RetryStrategy], _WorkflowModelMapper("spec.retry_strategy")] = None
    scheduler_name: Annotated[Optional[str], _WorkflowModelMapper("spec.scheduler_name")] = None
    security_context: Annotated[Optional[PodSecurityContext], _WorkflowModelMapper("spec.security_context")] = None
    service_account_name: Annotated[Optional[str], _WorkflowModelMapper("spec.service_account_name")] = None
    shutdown: Annotated[Optional[str], _WorkflowModelMapper("spec.shutdown")] = None
    suspend: Annotated[Optional[bool], _WorkflowModelMapper("spec.suspend")] = None
    synchronization: Annotated[Optional[Synchronization], _WorkflowModelMapper("spec.synchronization")] = None
    template_defaults: Annotated[Optional[_ModelTemplate], _WorkflowModelMapper("spec.template_defaults")] = None
    templates: Annotated[
        List[Union[_ModelTemplate, Templatable]], _WorkflowModelMapper("spec.templates", _build_templates)
    ] = []
    tolerations: Annotated[Optional[List[Toleration]], _WorkflowModelMapper("spec.tolerations")] = None
    ttl_strategy: Annotated[Optional[TTLStrategy], _WorkflowModelMapper("spec.ttl_strategy")] = None
    volume_claim_gc: Annotated[Optional[VolumeClaimGC], _WorkflowModelMapper("spec.volume_claim_gc")] = None
    volume_claim_templates: Annotated[
        Optional[List[PersistentVolumeClaim]],
        _WorkflowModelMapper("spec.volume_claim_templates", _build_volume_claim_templates),
    ] = None
    workflow_metadata: Annotated[Optional[WorkflowMetadata], _WorkflowModelMapper("spec.workflow_metadata")] = None
    workflow_template_ref: Annotated[
        Optional[WorkflowTemplateRef], _WorkflowModelMapper("spec.workflow_template_ref")
    ] = None

    # Override types for mixin fields
    arguments: Annotated[
        ArgumentsT,
        _WorkflowModelMapper("spec.arguments", ArgumentsMixin._build_arguments),
    ]
    metrics: Annotated[
        MetricsT,
        _WorkflowModelMapper("spec.metrics", MetricsMixin._build_metrics),
    ]
    volumes: Annotated[VolumesT, _WorkflowModelMapper("spec.volumes", VolumeMixin._build_volumes)]

    # Hera-specific fields
    workflows_service: Optional[WorkflowsService] = None

    @validator("name", pre=True, always=True)
    def _set_name(cls, v):
        if v is not None and len(v) > NAME_LIMIT:
            raise ValueError(f"name must be no more than {NAME_LIMIT} characters: {v}")
        return v

    @validator("generate_name", pre=True, always=True)
    def _set_generate_name(cls, v):
        if v is not None and len(v) > NAME_LIMIT:
            raise ValueError(f"generate_name must be no more than {NAME_LIMIT} characters: {v}")
        return v

    @validator("api_version", pre=True, always=True)
    def _set_api_version(cls, v):
        if v is None:
            return global_config.api_version
        return v

    @validator("workflows_service", pre=True, always=True)
    def _set_workflows_service(cls, v):
        if v is None:
            return WorkflowsService()
        return v

    @validator("kind", pre=True, always=True)
    def _set_kind(cls, v):
        if v is None:
            return cls.__name__  # type: ignore
        return v

    @validator("namespace", pre=True, always=True)
    def _set_namespace(cls, v):
        if v is None:
            return global_config.namespace
        return v

    @validator("service_account_name", pre=True, always=True)
    def _set_service_account_name(cls, v):
        if v is None:
            return global_config.service_account_name
        return v

    @validator("image_pull_secrets", pre=True, always=True)
    def _set_image_pull_secrets(cls, v):
        if v is None:
            return None

        if isinstance(v, str):
            return [LocalObjectReference(name=v)]
        elif isinstance(v, LocalObjectReference):
            return [v]

        assert isinstance(v, list), (
            "`image_pull_secrets` expected to be either a `str`, a `LocalObjectReference`, a list of `str`, "
            "or a list of `LocalObjectReference`"
        )
        result = []
        for secret in v:
            if isinstance(secret, str):
                result.append(LocalObjectReference(name=secret))
            elif isinstance(secret, LocalObjectReference):
                result.append(secret)
        return result

    def get_parameter(self, name: str) -> Parameter:
        """Attempts to find and return a `Parameter` of the specified name."""
        arguments = self._build_arguments()
        if arguments is None:
            raise KeyError("Workflow has no arguments set")
        if arguments.parameters is None:
            raise KeyError("Workflow has no argument parameters set")

        parameters = arguments.parameters
        if next((p for p in parameters if p.name == name), None) is None:
            raise KeyError(f"`{name}` is not a valid workflow parameter")
        return Parameter(name=name, value=f"{{{{workflow.parameters.{name}}}}}")

    def build(self) -> TWorkflow:
        """Builds the Workflow and its components into an Argo schema Workflow object."""
        self = self._dispatch_hooks()

        model_workflow = _ModelWorkflow(
            metadata=ObjectMeta(),
            spec=_ModelWorkflowSpec(),
        )
        return _WorkflowModelMapper.build_model(Workflow, self, model_workflow)

    def to_dict(self) -> Any:
        """Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary."""
        return self.build().dict(exclude_none=True, by_alias=True)

    def __eq__(self, other) -> bool:
        """Verifies equality of `self` with the specified `other`."""
        if other.__class__ is self.__class__:
            return self.to_dict() == other.to_dict()

        return False

    def to_yaml(self, *args, **kwargs) -> str:
        """Builds the Workflow as an Argo schema Workflow object and returns it as yaml string."""
        if not _yaml:
            raise ImportError("`PyYAML` is not installed. Install `hera[yaml]` to bring in the extra dependency")
        # Set some default options if not provided by the user
        kwargs.setdefault("default_flow_style", False)
        kwargs.setdefault("sort_keys", False)
        return _yaml.dump(self.to_dict(), *args, **kwargs)

    def create(self, wait: bool = False, poll_interval: int = 5) -> TWorkflow:
        """Creates the Workflow on the Argo cluster.

        Parameters
        ----------
        wait: bool = False
            If false then the workflow is created asynchronously and the function returns immediately.
            If true then the workflow is created and the function blocks until the workflow is done executing.
        poll_interval: int = 5
            The interval in seconds to poll the workflow status if wait is true. Ignored when wait is false.
        """
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"

        wf = self.workflows_service.create_workflow(
            WorkflowCreateRequest(workflow=self.build()), namespace=self.namespace
        )
        # set the workflow name to the name returned by the API, which helps cover the case of users relying on
        # `generate_name=True`
        self.name = wf.metadata.name

        if wait:
            return self.wait(poll_interval=poll_interval)
        return wf

    def wait(self, poll_interval: int = 5) -> TWorkflow:
        """Waits for the Workflow to complete execution.

        Parameters
        ----------
        poll_interval: int = 5
            The interval in seconds to poll the workflow status.
        """
        assert self.workflows_service is not None, "workflow service not initialized"
        assert self.namespace is not None, "workflow namespace not defined"
        assert self.name is not None, "workflow name not defined"

        # here we use the sleep interval to wait for the workflow post creation. This is to address
        # potential race conditions such as:
        # 1. Argo server says "workflow was accepted" but the workflow is not yet created
        # 2. Hera wants to verify the status of the workflow, but it's not yet defined because it's not created
        # 3. Argo finally creates the workflow
        # 4. Hera throws an `AssertionError` because the phase assertion fails
        time.sleep(poll_interval)
        wf = self.workflows_service.get_workflow(self.name, namespace=self.namespace)
        assert wf.metadata.name is not None, f"workflow name not defined for workflow {self.name}"

        assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
        assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
        status = WorkflowStatus.from_argo_status(wf.status.phase)

        # keep polling for workflow status until completed, at the interval dictated by the user
        while status == WorkflowStatus.running:
            time.sleep(poll_interval)
            wf = self.workflows_service.get_workflow(wf.metadata.name, namespace=self.namespace)
            assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
            assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
            status = WorkflowStatus.from_argo_status(wf.status.phase)
        return wf

    def lint(self) -> TWorkflow:
        """Lints the Workflow using the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        return self.workflows_service.lint_workflow(
            WorkflowLintRequest(workflow=self.build()), namespace=self.namespace
        )

    def _add_sub(self, node: Any):
        """Adds the given node (expected to satisfy the `Templatable` protocol) to the context."""
        if not isinstance(node, (Templatable, _ModelTemplate)):
            raise InvalidType(type(node))
        self.templates.append(node)

    def to_file(self, output_directory: Union[Path, str] = ".", name: str = "", *args, **kwargs) -> Path:
        """Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

        Args:
            output_directory: The directory to write the file to. Defaults to the current working directory.
            name: The name of the file to write without the file extension.  Defaults to the Workflow's name or a
                  generated name.
            *args: Additional arguments to pass to `yaml.dump`.
            **kwargs: Additional keyword arguments to pass to `yaml.dump`.
        """
        workflow_name = self.name or (self.generate_name or "workflow").rstrip("-")
        name = name or workflow_name
        output_directory = Path(output_directory)
        output_path = Path(output_directory) / f"{name}.yaml"
        output_directory.mkdir(parents=True, exist_ok=True)
        output_path.write_text(self.to_yaml(*args, **kwargs))
        return output_path.absolute()

    @classmethod
    def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
        """Create a Workflow from a Workflow contained in a dict.

        Examples:
            >>> my_workflow = Workflow(name="my-workflow")
            >>> my_workflow == Workflow.from_dict(my_workflow.to_dict())
            True
        """
        return cls._from_dict(model_dict, _ModelWorkflow)

    @classmethod
    def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
        """Create a Workflow from a Workflow contained in a YAML string.

        Examples:
            >>> my_workflow = Workflow.from_yaml(yaml_str)
        """
        return cls._from_yaml(yaml_str, _ModelWorkflow)

    @classmethod
    def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
        """Create a Workflow from a Workflow contained in a YAML file.

        Examples:
            >>> yaml_file = Path(...)
            >>> my_workflow = Workflow.from_file(yaml_file)
        """
        return cls._from_file(yaml_file, _ModelWorkflow)

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

api_version class-attribute instance-attribute

api_version = None

archive_logs class-attribute instance-attribute

archive_logs = None

arguments instance-attribute

arguments

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact_repository_ref class-attribute instance-attribute

artifact_repository_ref = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

cluster_name class-attribute instance-attribute

cluster_name = None

creation_timestamp class-attribute instance-attribute

creation_timestamp = None

deletion_grace_period_seconds class-attribute instance-attribute

deletion_grace_period_seconds = None

deletion_timestamp class-attribute instance-attribute

deletion_timestamp = None

dns_config class-attribute instance-attribute

dns_config = None

dns_policy class-attribute instance-attribute

dns_policy = None

entrypoint class-attribute instance-attribute

entrypoint = None

executor class-attribute instance-attribute

executor = None

finalizers class-attribute instance-attribute

finalizers = None

generate_name class-attribute instance-attribute

generate_name = None

generation class-attribute instance-attribute

generation = None

hooks class-attribute instance-attribute

hooks = None

host_aliases class-attribute instance-attribute

host_aliases = None

host_network class-attribute instance-attribute

host_network = None

image_pull_secrets class-attribute instance-attribute

image_pull_secrets = None

kind class-attribute instance-attribute

kind = None

labels class-attribute instance-attribute

labels = None

managed_fields class-attribute instance-attribute

managed_fields = None

metrics instance-attribute

metrics

name class-attribute instance-attribute

name = None

namespace class-attribute instance-attribute

namespace = None

node_selector class-attribute instance-attribute

node_selector = None

on_exit class-attribute instance-attribute

on_exit = None

owner_references class-attribute instance-attribute

owner_references = None

parallelism class-attribute instance-attribute

parallelism = None

pod_disruption_budget class-attribute instance-attribute

pod_disruption_budget = None

pod_gc class-attribute instance-attribute

pod_gc = None

pod_metadata class-attribute instance-attribute

pod_metadata = None

pod_priority class-attribute instance-attribute

pod_priority = None

pod_priority_class_name class-attribute instance-attribute

pod_priority_class_name = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

resource_version class-attribute instance-attribute

resource_version = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

self_link class-attribute instance-attribute

self_link = None

service_account_name class-attribute instance-attribute

service_account_name = None

shutdown class-attribute instance-attribute

shutdown = None

status class-attribute instance-attribute

status = None

suspend class-attribute instance-attribute

suspend = None

synchronization class-attribute instance-attribute

synchronization = None

template_defaults class-attribute instance-attribute

template_defaults = None

templates class-attribute instance-attribute

templates = []

tolerations class-attribute instance-attribute

tolerations = None

ttl_strategy class-attribute instance-attribute

ttl_strategy = None

uid class-attribute instance-attribute

uid = None

volume_claim_gc class-attribute instance-attribute

volume_claim_gc = None

volume_claim_templates class-attribute instance-attribute

volume_claim_templates = None

volumes instance-attribute

volumes

workflow_metadata class-attribute instance-attribute

workflow_metadata = None

workflow_template_ref class-attribute instance-attribute

workflow_template_ref = None

workflows_service class-attribute instance-attribute

workflows_service = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ModelMapper

Source code in src/hera/workflows/_mixins.py
class ModelMapper:
    def __init__(self, model_path: str, hera_builder: Optional[Callable] = None):
        self.model_path = None
        self.builder = hera_builder

        if not model_path:
            # Allows overriding parent attribute annotations to remove the mapping
            return

        self.model_path = model_path.split(".")
        curr_class: Type[BaseModel] = self._get_model_class()
        for key in self.model_path:
            if key not in curr_class.__fields__:
                raise ValueError(f"Model key '{key}' does not exist in class {curr_class}")
            curr_class = curr_class.__fields__[key].outer_type_

    @classmethod
    def _get_model_class(cls) -> Type[BaseModel]:
        raise NotImplementedError

    @classmethod
    def build_model(
        cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
    ) -> TWorkflow:
        assert isinstance(hera_obj, ModelMapperMixin)

        for attr, annotation in hera_class._get_all_annotations().items():
            if get_origin(annotation) is Annotated and isinstance(
                get_args(annotation)[1], ModelMapperMixin.ModelMapper
            ):
                mapper = get_args(annotation)[1]
                # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
                value = (
                    getattr(hera_obj, mapper.builder.__name__)()
                    if mapper.builder is not None
                    else getattr(hera_obj, attr)
                )
                if value is not None:
                    _set_model_attr(model, mapper.model_path, value)

        return model

builder instance-attribute

builder = hera_builder

model_path instance-attribute

model_path = model_path.split('.')

build_model classmethod

build_model(hera_class, hera_obj, model)
Source code in src/hera/workflows/_mixins.py
@classmethod
def build_model(
    cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
) -> TWorkflow:
    assert isinstance(hera_obj, ModelMapperMixin)

    for attr, annotation in hera_class._get_all_annotations().items():
        if get_origin(annotation) is Annotated and isinstance(
            get_args(annotation)[1], ModelMapperMixin.ModelMapper
        ):
            mapper = get_args(annotation)[1]
            # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
            value = (
                getattr(hera_obj, mapper.builder.__name__)()
                if mapper.builder is not None
                else getattr(hera_obj, attr)
            )
            if value is not None:
                _set_model_attr(model, mapper.model_path, value)

    return model
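The `model_path` computed above is later consumed by `_set_model_attr`, which is internal to hera and not shown on this page. A stdlib-only sketch of what dotted-path assignment plausibly looks like (the function name and stand-in model object are hypothetical):

```python
from types import SimpleNamespace
from typing import Any, List

def set_model_attr(model: Any, path: List[str], value: Any) -> None:
    """Walk all but the last key of a dotted path, then assign the value.

    Illustrative stand-in for hera's internal `_set_model_attr`; the real
    helper may differ.
    """
    obj = model
    for key in path[:-1]:
        obj = getattr(obj, key)
    setattr(obj, path[-1], value)

# Map a value onto `spec.entrypoint` of a stand-in model object,
# mirroring how ModelMapper splits "spec.entrypoint" into a path list.
model = SimpleNamespace(spec=SimpleNamespace(entrypoint=None))
set_model_attr(model, "spec.entrypoint".split("."), "main")
```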

build

build()

Builds the Workflow and its components into an Argo schema Workflow object.

Source code in src/hera/workflows/workflow.py
def build(self) -> TWorkflow:
    """Builds the Workflow and its components into an Argo schema Workflow object."""
    self = self._dispatch_hooks()

    model_workflow = _ModelWorkflow(
        metadata=ObjectMeta(),
        spec=_ModelWorkflowSpec(),
    )
    return _WorkflowModelMapper.build_model(Workflow, self, model_workflow)

create

create(wait=False, poll_interval=5)

Creates the Workflow on the Argo cluster.

Parameters

wait: bool = False
    If false then the workflow is created asynchronously and the function returns immediately. If true then the workflow is created and the function blocks until the workflow is done executing.
poll_interval: int = 5
    The interval in seconds to poll the workflow status if wait is true. Ignored when wait is false.

Source code in src/hera/workflows/workflow.py
def create(self, wait: bool = False, poll_interval: int = 5) -> TWorkflow:
    """Creates the Workflow on the Argo cluster.

    Parameters
    ----------
    wait: bool = False
        If false then the workflow is created asynchronously and the function returns immediately.
        If true then the workflow is created and the function blocks until the workflow is done executing.
    poll_interval: int = 5
        The interval in seconds to poll the workflow status if wait is true. Ignored when wait is false.
    """
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"

    wf = self.workflows_service.create_workflow(
        WorkflowCreateRequest(workflow=self.build()), namespace=self.namespace
    )
    # set the workflow name to the name returned by the API, which helps cover the case of users relying on
    # `generate_name=True`
    self.name = wf.metadata.name

    if wait:
        return self.wait(poll_interval=poll_interval)
    return wf

from_dict classmethod

from_dict(model_dict)

Create a Workflow from a Workflow contained in a dict.

Examples:

>>> my_workflow = Workflow(name="my-workflow")
>>> my_workflow == Workflow.from_dict(my_workflow.to_dict())
True
Source code in src/hera/workflows/workflow.py
@classmethod
def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
    """Create a Workflow from a Workflow contained in a dict.

    Examples:
        >>> my_workflow = Workflow(name="my-workflow")
        >>> my_workflow == Workflow.from_dict(my_workflow.to_dict())
        True
    """
    return cls._from_dict(model_dict, _ModelWorkflow)

from_file classmethod

from_file(yaml_file)

Create a Workflow from a Workflow contained in a YAML file.

Examples:

>>> yaml_file = Path(...)
>>> my_workflow = Workflow.from_file(yaml_file)
Source code in src/hera/workflows/workflow.py
@classmethod
def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
    """Create a Workflow from a Workflow contained in a YAML file.

    Examples:
        >>> yaml_file = Path(...)
        >>> my_workflow = Workflow.from_file(yaml_file)
    """
    return cls._from_file(yaml_file, _ModelWorkflow)

from_yaml classmethod

from_yaml(yaml_str)

Create a Workflow from a Workflow contained in a YAML string.

Examples:

>>> my_workflow = Workflow.from_yaml(yaml_str)
Source code in src/hera/workflows/workflow.py
@classmethod
def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
    """Create a Workflow from a Workflow contained in a YAML string.

    Examples:
        >>> my_workflow = Workflow.from_yaml(yaml_str)
    """
    return cls._from_yaml(yaml_str, _ModelWorkflow)

get_parameter

get_parameter(name)

Attempts to find and return a Parameter of the specified name.

Source code in src/hera/workflows/workflow.py
def get_parameter(self, name: str) -> Parameter:
    """Attempts to find and return a `Parameter` of the specified name."""
    arguments = self._build_arguments()
    if arguments is None:
        raise KeyError("Workflow has no arguments set")
    if arguments.parameters is None:
        raise KeyError("Workflow has no argument parameters set")

    parameters = arguments.parameters
    if next((p for p in parameters if p.name == name), None) is None:
        raise KeyError(f"`{name}` is not a valid workflow parameter")
    return Parameter(name=name, value=f"{{{{workflow.parameters.{name}}}}}")
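Note that `get_parameter` does not return the parameter's concrete value; it validates the name and returns a `Parameter` whose value is an Argo templating reference. A stdlib-only sketch of that behavior (the helper name is made up for illustration; hera is not imported here):

```python
from typing import List

def workflow_parameter_ref(declared: List[str], name: str) -> str:
    """Validate `name` against the declared workflow parameters, then emit
    a `{{workflow.parameters.<name>}}` reference rather than the value,
    mirroring the behavior shown above."""
    if name not in declared:
        raise KeyError(f"`{name}` is not a valid workflow parameter")
    return f"{{{{workflow.parameters.{name}}}}}"

ref = workflow_parameter_ref(["message"], "message")
```

Argo resolves the reference at runtime, which is why a placeholder string, not the current value, is the right thing to pass into templates.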

lint

lint()

Lints the Workflow using the Argo cluster.

Source code in src/hera/workflows/workflow.py
def lint(self) -> TWorkflow:
    """Lints the Workflow using the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    return self.workflows_service.lint_workflow(
        WorkflowLintRequest(workflow=self.build()), namespace=self.namespace
    )

to_dict

to_dict()

Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary.

Source code in src/hera/workflows/workflow.py
def to_dict(self) -> Any:
    """Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary."""
    return self.build().dict(exclude_none=True, by_alias=True)

to_file

to_file(output_directory='.', name='', *args, **kwargs)

Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

Parameters:

output_directory: Union[Path, str] = '.'
    The directory to write the file to. Defaults to the current working directory.
name: str = ''
    The name of the file to write without the file extension. Defaults to the Workflow's name or a generated name.
*args
    Additional arguments to pass to yaml.dump.
**kwargs
    Additional keyword arguments to pass to yaml.dump.
Source code in src/hera/workflows/workflow.py
def to_file(self, output_directory: Union[Path, str] = ".", name: str = "", *args, **kwargs) -> Path:
    """Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

    Args:
        output_directory: The directory to write the file to. Defaults to the current working directory.
        name: The name of the file to write without the file extension.  Defaults to the Workflow's name or a
              generated name.
        *args: Additional arguments to pass to `yaml.dump`.
        **kwargs: Additional keyword arguments to pass to `yaml.dump`.
    """
    workflow_name = self.name or (self.generate_name or "workflow").rstrip("-")
    name = name or workflow_name
    output_directory = Path(output_directory)
    output_path = Path(output_directory) / f"{name}.yaml"
    output_directory.mkdir(parents=True, exist_ok=True)
    output_path.write_text(self.to_yaml(*args, **kwargs))
    return output_path.absolute()
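The filename fallback chain in `to_file` can be reproduced without hera installed. A sketch covering only the name derivation and path construction (the helper name is hypothetical):

```python
from pathlib import Path
import tempfile

def output_path_for(name, workflow_name, generate_name, output_directory="."):
    """Reproduce the filename derivation shown above: fall back from an
    explicit `name` to the Workflow's `name`, then to `generate_name` with
    its trailing hyphen stripped, then to "workflow"."""
    derived = workflow_name or (generate_name or "workflow").rstrip("-")
    name = name or derived
    directory = Path(output_directory)
    directory.mkdir(parents=True, exist_ok=True)
    return (directory / f"{name}.yaml").absolute()

# A workflow with only `generate_name="my-wf-"` set produces "my-wf.yaml"
with tempfile.TemporaryDirectory() as d:
    path = output_path_for(name="", workflow_name=None, generate_name="my-wf-", output_directory=d)
    filename = path.name
```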

to_yaml

to_yaml(*args, **kwargs)

Builds the Workflow as an Argo schema Workflow object and returns it as yaml string.

Source code in src/hera/workflows/workflow.py
def to_yaml(self, *args, **kwargs) -> str:
    """Builds the Workflow as an Argo schema Workflow object and returns it as yaml string."""
    if not _yaml:
        raise ImportError("`PyYAML` is not installed. Install `hera[yaml]` to bring in the extra dependency")
    # Set some default options if not provided by the user
    kwargs.setdefault("default_flow_style", False)
    kwargs.setdefault("sort_keys", False)
    return _yaml.dump(self.to_dict(), *args, **kwargs)

wait

wait(poll_interval=5)

Waits for the Workflow to complete execution.

Parameters

poll_interval: int = 5
    The interval in seconds to poll the workflow status.

Source code in src/hera/workflows/workflow.py
def wait(self, poll_interval: int = 5) -> TWorkflow:
    """Waits for the Workflow to complete execution.

    Parameters
    ----------
    poll_interval: int = 5
        The interval in seconds to poll the workflow status.
    """
    assert self.workflows_service is not None, "workflow service not initialized"
    assert self.namespace is not None, "workflow namespace not defined"
    assert self.name is not None, "workflow name not defined"

    # here we use the sleep interval to wait for the workflow post creation. This is to address a potential
    # race conditions such as:
    # 1. Argo server says "workflow was accepted" but the workflow is not yet created
    # 2. Hera wants to verify the status of the workflow, but it's not yet defined because it's not created
    # 3. Argo finally creates the workflow
    # 4. Hera throws an `AssertionError` because the phase assertion fails
    time.sleep(poll_interval)
    wf = self.workflows_service.get_workflow(self.name, namespace=self.namespace)
    assert wf.metadata.name is not None, f"workflow name not defined for workflow {self.name}"

    assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
    assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
    status = WorkflowStatus.from_argo_status(wf.status.phase)

    # keep polling for workflow status until completed, at the interval dictated by the user
    while status == WorkflowStatus.running:
        time.sleep(poll_interval)
        wf = self.workflows_service.get_workflow(wf.metadata.name, namespace=self.namespace)
        assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
        assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
        status = WorkflowStatus.from_argo_status(wf.status.phase)
    return wf
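Stripped of the hera service calls, `wait` is a plain polling loop with an initial grace-period sleep. A stdlib-only sketch with the `get_workflow` round trip replaced by an injected callable (names are illustrative):

```python
import time

def wait_for_completion(get_status, poll_interval=0.01):
    """Generic form of the polling loop shown above.

    `get_status` stands in for the `workflows_service.get_workflow(...)`
    round trip and must return a phase string such as "Running" or
    "Succeeded". The initial sleep addresses the creation race condition
    noted in the source comments.
    """
    time.sleep(poll_interval)
    status = get_status()
    while status == "Running":
        time.sleep(poll_interval)
        status = get_status()
    return status

# Simulate a workflow observed as running twice, then succeeding
statuses = iter(["Running", "Running", "Succeeded"])
final = wait_for_completion(lambda: next(statuses))
```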

WorkflowStatus

Placeholder for workflow statuses.

Source code in src/hera/workflows/workflow_status.py
class WorkflowStatus(str, Enum):
    """Placeholder for workflow statuses."""

    running = "Running"
    succeeded = "Succeeded"
    failed = "Failed"
    error = "Error"
    terminated = "Terminated"

    def __str__(self) -> str:
        """Returns the value representation of the workflow status enum."""
        return str(self.value)

    @classmethod
    def from_argo_status(cls, s: str) -> "WorkflowStatus":
        """Turns an Argo status into a Hera workflow status representation."""
        switch = {
            "Running": WorkflowStatus.running,
            "Succeeded": WorkflowStatus.succeeded,
            "Failed": WorkflowStatus.failed,
            "Error": WorkflowStatus.error,
            "Terminated": WorkflowStatus.terminated,
        }

        ss = switch.get(s)
        if not ss:
            raise KeyError(f"Unrecognized status {s}. " f"Available Argo statuses are: {list(switch.keys())}")
        return ss

error class-attribute instance-attribute

error = 'Error'

failed class-attribute instance-attribute

failed = 'Failed'

running class-attribute instance-attribute

running = 'Running'

succeeded class-attribute instance-attribute

succeeded = 'Succeeded'

terminated class-attribute instance-attribute

terminated = 'Terminated'

from_argo_status classmethod

from_argo_status(s)

Turns an Argo status into a Hera workflow status representation.

Source code in src/hera/workflows/workflow_status.py
@classmethod
def from_argo_status(cls, s: str) -> "WorkflowStatus":
    """Turns an Argo status into a Hera workflow status representation."""
    switch = {
        "Running": WorkflowStatus.running,
        "Succeeded": WorkflowStatus.succeeded,
        "Failed": WorkflowStatus.failed,
        "Error": WorkflowStatus.error,
        "Terminated": WorkflowStatus.terminated,
    }

    ss = switch.get(s)
    if not ss:
        raise KeyError(f"Unrecognized status {s}. " f"Available Argo statuses are: {list(switch.keys())}")
    return ss
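Because every enum member's value is exactly the Argo phase string, the dictionary lookup above is equivalent to an enum value lookup. A stdlib replica showing the round trip (the `Status` class here is a stand-in with the same values, not hera's class):

```python
from enum import Enum

class Status(str, Enum):
    # Stand-in for hera's WorkflowStatus (identical values)
    running = "Running"
    succeeded = "Succeeded"
    failed = "Failed"
    error = "Error"
    terminated = "Terminated"

def from_argo_status(s: str) -> Status:
    """Map an Argo phase string to the enum, raising KeyError as above."""
    try:
        return Status(s)
    except ValueError:
        raise KeyError(
            f"Unrecognized status {s}. Available Argo statuses are: {[m.value for m in Status]}"
        )

ok = from_argo_status("Succeeded")
```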

WorkflowTemplate

WorkflowTemplates are definitions of Workflows that live in your namespace in your cluster.

This allows you to create a library of frequently-used templates and reuse them by referencing them from your Workflows.

Source code in src/hera/workflows/workflow_template.py
class WorkflowTemplate(Workflow):
    """WorkflowTemplates are definitions of Workflows that live in your namespace in your cluster.

    This allows you to create a library of frequently-used templates and reuse them by referencing
    them from your Workflows.
    """

    # Removes status mapping
    status: Annotated[Optional[_ModelWorkflowStatus], _WorkflowTemplateModelMapper("")] = None

    # WorkflowTemplate fields match Workflow exactly except for `status`, which WorkflowTemplate
    # does not have - https://argoproj.github.io/argo-workflows/fields/#workflowtemplate
    @validator("status", pre=True, always=True)
    def _set_status(cls, v):
        if v is not None:
            raise ValueError("status is not a valid field on a WorkflowTemplate")

    def create(self) -> TWorkflow:  # type: ignore
        """Creates the WorkflowTemplate on the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        return self.workflows_service.create_workflow_template(
            WorkflowTemplateCreateRequest(template=self.build()), namespace=self.namespace
        )

    def get(self) -> TWorkflow:
        """Attempts to get a workflow template based on the parameters of this template e.g. name + namespace."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        assert self.name, "workflow name not defined"
        return self.workflows_service.get_workflow_template(name=self.name, namespace=self.namespace)

    def update(self) -> TWorkflow:
        """Attempts to perform a template update based on the parameters of this template.

        This creates the template if it does not exist. In addition, this performs
        a get prior to updating to get the resource version to update in the first place. If you know the template
        does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
        """
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        assert self.name, "workflow name not defined"
        # we always need to do a get prior to updating to get the resource version to update in the first place
        # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

        template = self.build()
        try:
            curr = self.get()
            template.metadata.resource_version = curr.metadata.resource_version
        except NotFound:
            return self.create()
        return self.workflows_service.update_workflow_template(
            self.name,
            WorkflowTemplateUpdateRequest(template=template),
            namespace=self.namespace,
        )

    def lint(self) -> TWorkflow:
        """Lints the WorkflowTemplate using the Argo cluster."""
        assert self.workflows_service, "workflow service not initialized"
        assert self.namespace, "workflow namespace not defined"
        return self.workflows_service.lint_workflow_template(
            WorkflowTemplateLintRequest(template=self.build()), namespace=self.namespace
        )

    def build(self) -> TWorkflow:
        """Builds the WorkflowTemplate and its components into an Argo schema WorkflowTemplate object."""
        self = self._dispatch_hooks()

        model_workflow = _ModelWorkflowTemplate(
            metadata=ObjectMeta(),
            spec=_ModelWorkflowSpec(),
        )

        return _WorkflowTemplateModelMapper.build_model(WorkflowTemplate, self, model_workflow)

    @classmethod
    def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
        """Create a WorkflowTemplate from a WorkflowTemplate contained in a dict.

        Examples:
            >>> my_workflow_template = WorkflowTemplate(name="my-wft")
            >>> my_workflow_template == WorkflowTemplate.from_dict(my_workflow_template.to_dict())
            True
        """
        return cls._from_dict(model_dict, _ModelWorkflowTemplate)

    @classmethod
    def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
        """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML string.

        Examples:
            >>> my_workflow_template = WorkflowTemplate.from_yaml(yaml_str)
        """
        return cls._from_yaml(yaml_str, _ModelWorkflowTemplate)

    @classmethod
    def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
        """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML file.

        Examples:
            >>> yaml_file = Path(...)
            >>> my_workflow_template = WorkflowTemplate.from_file(yaml_file)
        """
        return cls._from_file(yaml_file, _ModelWorkflowTemplate)

    def _get_as_workflow(self, generate_name: Optional[str]) -> Workflow:
        workflow = cast(Workflow, Workflow.from_dict(self.to_dict()))
        workflow.kind = "Workflow"

        if generate_name is not None:
            workflow.generate_name = generate_name
        else:
            # As this function is mainly for improved DevEx when iterating on a WorkflowTemplate, we do a basic
            # truncation of the WT's name in case it is longer than _TRUNCATE_LENGTH, to assign to generate_name.
            assert workflow.name is not None
            workflow.generate_name = workflow.name[:_TRUNCATE_LENGTH]

        workflow.name = None

        return workflow

    def create_as_workflow(
        self,
        generate_name: Optional[str] = None,
        wait: bool = False,
        poll_interval: int = 5,
    ) -> TWorkflow:
        """Run this WorkflowTemplate instantly as a Workflow.

        If generate_name is given, the workflow created uses generate_name as a prefix, as per the usual for
        hera.workflows.Workflow.generate_name. If not given, the WorkflowTemplate's name will be used, truncated to 57
        chars and appended with a hyphen.

        Note: this function does not require the WorkflowTemplate to already exist on the cluster
        """
        workflow = self._get_as_workflow(generate_name)
        return workflow.create(wait=wait, poll_interval=poll_interval)
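The `generate_name` fallback in `_get_as_workflow` above is a simple prefix truncation. A sketch assuming `_TRUNCATE_LENGTH` is 57, as the `create_as_workflow` docstring suggests (`_TRUNCATE_LENGTH` itself is internal to hera):

```python
def generate_name_from(template_name: str, truncate_length: int = 57) -> str:
    """Truncate a WorkflowTemplate name into a `generate_name` prefix.

    Kubernetes object names are capped at 63 characters, so the prefix
    must leave room for the random suffix Argo appends to `generate_name`.
    """
    return template_name[:truncate_length]

prefix = generate_name_from("a" * 80)
```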

active_deadline_seconds class-attribute instance-attribute

active_deadline_seconds = None

affinity class-attribute instance-attribute

affinity = None

annotations class-attribute instance-attribute

annotations = None

api_version class-attribute instance-attribute

api_version = None

archive_logs class-attribute instance-attribute

archive_logs = None

arguments instance-attribute

arguments

artifact_gc class-attribute instance-attribute

artifact_gc = None

artifact_repository_ref class-attribute instance-attribute

artifact_repository_ref = None

automount_service_account_token class-attribute instance-attribute

automount_service_account_token = None

cluster_name class-attribute instance-attribute

cluster_name = None

creation_timestamp class-attribute instance-attribute

creation_timestamp = None

deletion_grace_period_seconds class-attribute instance-attribute

deletion_grace_period_seconds = None

deletion_timestamp class-attribute instance-attribute

deletion_timestamp = None

dns_config class-attribute instance-attribute

dns_config = None

dns_policy class-attribute instance-attribute

dns_policy = None

entrypoint class-attribute instance-attribute

entrypoint = None

executor class-attribute instance-attribute

executor = None

finalizers class-attribute instance-attribute

finalizers = None

generate_name class-attribute instance-attribute

generate_name = None

generation class-attribute instance-attribute

generation = None

hooks class-attribute instance-attribute

hooks = None

host_aliases class-attribute instance-attribute

host_aliases = None

host_network class-attribute instance-attribute

host_network = None

image_pull_secrets class-attribute instance-attribute

image_pull_secrets = None

kind class-attribute instance-attribute

kind = None

labels class-attribute instance-attribute

labels = None

managed_fields class-attribute instance-attribute

managed_fields = None

metrics instance-attribute

metrics

name class-attribute instance-attribute

name = None

namespace class-attribute instance-attribute

namespace = None

node_selector class-attribute instance-attribute

node_selector = None

on_exit class-attribute instance-attribute

on_exit = None

owner_references class-attribute instance-attribute

owner_references = None

parallelism class-attribute instance-attribute

parallelism = None

pod_disruption_budget class-attribute instance-attribute

pod_disruption_budget = None

pod_gc class-attribute instance-attribute

pod_gc = None

pod_metadata class-attribute instance-attribute

pod_metadata = None

pod_priority class-attribute instance-attribute

pod_priority = None

pod_priority_class_name class-attribute instance-attribute

pod_priority_class_name = None

pod_spec_patch class-attribute instance-attribute

pod_spec_patch = None

priority class-attribute instance-attribute

priority = None

resource_version class-attribute instance-attribute

resource_version = None

retry_strategy class-attribute instance-attribute

retry_strategy = None

scheduler_name class-attribute instance-attribute

scheduler_name = None

security_context class-attribute instance-attribute

security_context = None

self_link class-attribute instance-attribute

self_link = None

service_account_name class-attribute instance-attribute

service_account_name = None

shutdown class-attribute instance-attribute

shutdown = None

status class-attribute instance-attribute

status = None

suspend class-attribute instance-attribute

suspend = None

synchronization class-attribute instance-attribute

synchronization = None

template_defaults class-attribute instance-attribute

template_defaults = None

templates class-attribute instance-attribute

templates = []

tolerations class-attribute instance-attribute

tolerations = None

ttl_strategy class-attribute instance-attribute

ttl_strategy = None

uid class-attribute instance-attribute

uid = None

volume_claim_gc class-attribute instance-attribute

volume_claim_gc = None

volume_claim_templates class-attribute instance-attribute

volume_claim_templates = None

volumes instance-attribute

volumes

workflow_metadata class-attribute instance-attribute

workflow_metadata = None

workflow_template_ref class-attribute instance-attribute

workflow_template_ref = None

workflows_service class-attribute instance-attribute

workflows_service = None

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects

ModelMapper

Source code in src/hera/workflows/_mixins.py
class ModelMapper:
    def __init__(self, model_path: str, hera_builder: Optional[Callable] = None):
        self.model_path = None
        self.builder = hera_builder

        if not model_path:
            # Allows overriding parent attribute annotations to remove the mapping
            return

        self.model_path = model_path.split(".")
        curr_class: Type[BaseModel] = self._get_model_class()
        for key in self.model_path:
            if key not in curr_class.__fields__:
                raise ValueError(f"Model key '{key}' does not exist in class {curr_class}")
            curr_class = curr_class.__fields__[key].outer_type_

    @classmethod
    def _get_model_class(cls) -> Type[BaseModel]:
        raise NotImplementedError

    @classmethod
    def build_model(
        cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
    ) -> TWorkflow:
        assert isinstance(hera_obj, ModelMapperMixin)

        for attr, annotation in hera_class._get_all_annotations().items():
            if get_origin(annotation) is Annotated and isinstance(
                get_args(annotation)[1], ModelMapperMixin.ModelMapper
            ):
                mapper = get_args(annotation)[1]
                # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
                value = (
                    getattr(hera_obj, mapper.builder.__name__)()
                    if mapper.builder is not None
                    else getattr(hera_obj, attr)
                )
                if value is not None:
                    _set_model_attr(model, mapper.model_path, value)

        return model

builder instance-attribute

builder = hera_builder

model_path instance-attribute

model_path = model_path.split('.')

build_model classmethod

build_model(hera_class, hera_obj, model)
Source code in src/hera/workflows/_mixins.py
@classmethod
def build_model(
    cls, hera_class: Type[ModelMapperMixin], hera_obj: ModelMapperMixin, model: TWorkflow
) -> TWorkflow:
    assert isinstance(hera_obj, ModelMapperMixin)

    for attr, annotation in hera_class._get_all_annotations().items():
        if get_origin(annotation) is Annotated and isinstance(
            get_args(annotation)[1], ModelMapperMixin.ModelMapper
        ):
            mapper = get_args(annotation)[1]
            # Value comes from builder function if it exists on hera_obj, otherwise directly from the attr
            value = (
                getattr(hera_obj, mapper.builder.__name__)()
                if mapper.builder is not None
                else getattr(hera_obj, attr)
            )
            if value is not None:
                _set_model_attr(model, mapper.model_path, value)

    return model

build

build()

Builds the WorkflowTemplate and its components into an Argo schema WorkflowTemplate object.

Source code in src/hera/workflows/workflow_template.py
def build(self) -> TWorkflow:
    """Builds the WorkflowTemplate and its components into an Argo schema WorkflowTemplate object."""
    self = self._dispatch_hooks()

    model_workflow = _ModelWorkflowTemplate(
        metadata=ObjectMeta(),
        spec=_ModelWorkflowSpec(),
    )

    return _WorkflowTemplateModelMapper.build_model(WorkflowTemplate, self, model_workflow)

create

create()

Creates the WorkflowTemplate on the Argo cluster.

Source code in src/hera/workflows/workflow_template.py
def create(self) -> TWorkflow:  # type: ignore
    """Creates the WorkflowTemplate on the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    return self.workflows_service.create_workflow_template(
        WorkflowTemplateCreateRequest(template=self.build()), namespace=self.namespace
    )

create_as_workflow

create_as_workflow(generate_name=None, wait=False, poll_interval=5)

Run this WorkflowTemplate instantly as a Workflow.

If generate_name is given, the created workflow uses generate_name as a prefix, as usual for hera.workflows.Workflow.generate_name. If not given, the WorkflowTemplate's name is used, truncated to 57 characters with a hyphen appended.

Note: this function does not require the WorkflowTemplate to already exist on the cluster.

Source code in src/hera/workflows/workflow_template.py
def create_as_workflow(
    self,
    generate_name: Optional[str] = None,
    wait: bool = False,
    poll_interval: int = 5,
) -> TWorkflow:
    """Run this WorkflowTemplate instantly as a Workflow.

    If generate_name is given, the workflow created uses generate_name as a prefix, as per the usual for
    hera.workflows.Workflow.generate_name. If not given, the WorkflowTemplate's name will be used, truncated to 57
    chars and appended with a hyphen.

    Note: this function does not require the WorkflowTemplate to already exist on the cluster
    """
    workflow = self._get_as_workflow(generate_name)
    return workflow.create(wait=wait, poll_interval=poll_interval)
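The fallback naming rule described above (template name truncated to 57 characters, hyphen appended) can be sketched in plain Python; `derive_generate_name` is a hypothetical helper for illustration, not part of Hera's public API.

```python
def derive_generate_name(template_name: str) -> str:
    # Hypothetical helper mirroring the documented fallback: truncate the
    # WorkflowTemplate name to 57 characters and append a hyphen, leaving
    # room for the random suffix Argo adds to generate_name-style names.
    return template_name[:57] + "-"


print(derive_generate_name("my-template"))  # my-template-
```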

from_dict classmethod

from_dict(model_dict)

Create a WorkflowTemplate from a WorkflowTemplate contained in a dict.

Examples:

>>> my_workflow_template = WorkflowTemplate(name="my-wft")
>>> my_workflow_template == WorkflowTemplate.from_dict(my_workflow_template.to_dict())
True
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_dict(cls, model_dict: Dict) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a dict.

    Examples:
        >>> my_workflow_template = WorkflowTemplate(name="my-wft")
        >>> my_workflow_template == WorkflowTemplate.from_dict(my_workflow_template.to_dict())
        True
    """
    return cls._from_dict(model_dict, _ModelWorkflowTemplate)

from_file classmethod

from_file(yaml_file)

Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML file.

Examples:

>>> yaml_file = Path(...)
>>> my_workflow_template = WorkflowTemplate.from_file(yaml_file)
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_file(cls, yaml_file: Union[Path, str]) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML file.

    Examples:
        >>> yaml_file = Path(...)
        >>> my_workflow_template = WorkflowTemplate.from_file(yaml_file)
    """
    return cls._from_file(yaml_file, _ModelWorkflowTemplate)

from_yaml classmethod

from_yaml(yaml_str)

Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML string.

Examples:

>>> my_workflow_template = WorkflowTemplate.from_yaml(yaml_str)
Source code in src/hera/workflows/workflow_template.py
@classmethod
def from_yaml(cls, yaml_str: str) -> ModelMapperMixin:
    """Create a WorkflowTemplate from a WorkflowTemplate contained in a YAML string.

    Examples:
        >>> my_workflow_template = WorkflowTemplate.from_yaml(yaml_str)
    """
    return cls._from_yaml(yaml_str, _ModelWorkflowTemplate)

get

get()

Attempts to get a workflow template based on the parameters of this template, e.g. name and namespace.

Source code in src/hera/workflows/workflow_template.py
def get(self) -> TWorkflow:
    """Attempts to get a workflow template based on the parameters of this template e.g. name + namespace."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    assert self.name, "workflow name not defined"
    return self.workflows_service.get_workflow_template(name=self.name, namespace=self.namespace)

get_parameter

get_parameter(name)

Attempts to find and return a Parameter of the specified name.

Source code in src/hera/workflows/workflow.py
def get_parameter(self, name: str) -> Parameter:
    """Attempts to find and return a `Parameter` of the specified name."""
    arguments = self._build_arguments()
    if arguments is None:
        raise KeyError("Workflow has no arguments set")
    if arguments.parameters is None:
        raise KeyError("Workflow has no argument parameters set")

    parameters = arguments.parameters
    if next((p for p in parameters if p.name == name), None) is None:
        raise KeyError(f"`{name}` is not a valid workflow parameter")
    return Parameter(name=name, value=f"{{{{workflow.parameters.{name}}}}}")

lint

lint()

Lints the WorkflowTemplate using the Argo cluster.

Source code in src/hera/workflows/workflow_template.py
def lint(self) -> TWorkflow:
    """Lints the WorkflowTemplate using the Argo cluster."""
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    return self.workflows_service.lint_workflow_template(
        WorkflowTemplateLintRequest(template=self.build()), namespace=self.namespace
    )

to_dict

to_dict()

Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary.

Source code in src/hera/workflows/workflow.py
def to_dict(self) -> Any:
    """Builds the Workflow as an Argo schema Workflow object and returns it as a dictionary."""
    return self.build().dict(exclude_none=True, by_alias=True)

to_file

to_file(output_directory='.', name='', *args, **kwargs)

Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

Parameters:

    output_directory (Union[Path, str], default '.'): The directory to write the file to. Defaults to the current working directory.
    name (str, default ''): The name of the file to write without the file extension. Defaults to the Workflow's name or a generated name.
    *args: Additional arguments to pass to yaml.dump. Default ().
    **kwargs: Additional keyword arguments to pass to yaml.dump. Default {}.
Source code in src/hera/workflows/workflow.py
def to_file(self, output_directory: Union[Path, str] = ".", name: str = "", *args, **kwargs) -> Path:
    """Writes the Workflow as an Argo schema Workflow object to a YAML file and returns the path to the file.

    Args:
        output_directory: The directory to write the file to. Defaults to the current working directory.
        name: The name of the file to write without the file extension.  Defaults to the Workflow's name or a
              generated name.
        *args: Additional arguments to pass to `yaml.dump`.
        **kwargs: Additional keyword arguments to pass to `yaml.dump`.
    """
    workflow_name = self.name or (self.generate_name or "workflow").rstrip("-")
    name = name or workflow_name
    output_directory = Path(output_directory)
    output_path = Path(output_directory) / f"{name}.yaml"
    output_directory.mkdir(parents=True, exist_ok=True)
    output_path.write_text(self.to_yaml(*args, **kwargs))
    return output_path.absolute()

to_yaml

to_yaml(*args, **kwargs)

Builds the Workflow as an Argo schema Workflow object and returns it as yaml string.

Source code in src/hera/workflows/workflow.py
def to_yaml(self, *args, **kwargs) -> str:
    """Builds the Workflow as an Argo schema Workflow object and returns it as yaml string."""
    if not _yaml:
        raise ImportError("`PyYAML` is not installed. Install `hera[yaml]` to bring in the extra dependency")
    # Set some default options if not provided by the user
    kwargs.setdefault("default_flow_style", False)
    kwargs.setdefault("sort_keys", False)
    return _yaml.dump(self.to_dict(), *args, **kwargs)

update

update()

Attempts to perform a template update based on the parameters of this template.

This creates the template if it does not exist. In addition, it performs a get before updating in order to retrieve the current resource version. If you know the template does not exist ahead of time, it is more efficient to use create() directly and avoid one round trip.

Source code in src/hera/workflows/workflow_template.py
def update(self) -> TWorkflow:
    """Attempts to perform a template update based on the parameters of this template.

    This creates the template if it does not exist. In addition, this performs
    a get prior to updating to get the resource version to update in the first place. If you know the template
    does not exist ahead of time, it is more efficient to use `create()` directly to avoid one round trip.
    """
    assert self.workflows_service, "workflow service not initialized"
    assert self.namespace, "workflow namespace not defined"
    assert self.name, "workflow name not defined"
    # we always need to do a get prior to updating to get the resource version to update in the first place
    # https://github.com/argoproj/argo-workflows/pull/5465#discussion_r597797052

    template = self.build()
    try:
        curr = self.get()
        template.metadata.resource_version = curr.metadata.resource_version
    except NotFound:
        return self.create()
    return self.workflows_service.update_workflow_template(
        self.name,
        WorkflowTemplateUpdateRequest(template=template),
        namespace=self.namespace,
    )

wait

wait(poll_interval=5)

Waits for the Workflow to complete execution.

Parameters:

    poll_interval (int, default 5): The interval in seconds to poll the workflow status.

Source code in src/hera/workflows/workflow.py
def wait(self, poll_interval: int = 5) -> TWorkflow:
    """Waits for the Workflow to complete execution.

    Parameters
    ----------
    poll_interval: int = 5
        The interval in seconds to poll the workflow status.
    """
    assert self.workflows_service is not None, "workflow service not initialized"
    assert self.namespace is not None, "workflow namespace not defined"
    assert self.name is not None, "workflow name not defined"

    # here we use the sleep interval to wait for the workflow post creation. This is to address a potential
    # race conditions such as:
    # 1. Argo server says "workflow was accepted" but the workflow is not yet created
    # 2. Hera wants to verify the status of the workflow, but it's not yet defined because it's not created
    # 3. Argo finally creates the workflow
    # 4. Hera throws an `AssertionError` because the phase assertion fails
    time.sleep(poll_interval)
    wf = self.workflows_service.get_workflow(self.name, namespace=self.namespace)
    assert wf.metadata.name is not None, f"workflow name not defined for workflow {self.name}"

    assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
    assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
    status = WorkflowStatus.from_argo_status(wf.status.phase)

    # keep polling for workflow status until completed, at the interval dictated by the user
    while status == WorkflowStatus.running:
        time.sleep(poll_interval)
        wf = self.workflows_service.get_workflow(wf.metadata.name, namespace=self.namespace)
        assert wf.status is not None, f"workflow status not defined for workflow {wf.metadata.name}"
        assert wf.status.phase is not None, f"workflow phase not defined for workflow status {wf.status}"
        status = WorkflowStatus.from_argo_status(wf.status.phase)
    return wf
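The polling loop above can be sketched generically in plain Python; `poll_until_complete` and `get_phase` are hypothetical stand-ins for the service calls, not Hera APIs.

```python
import time


def poll_until_complete(get_phase, poll_interval: float = 1.0, max_polls: int = 10) -> str:
    # Mirror the loop in Workflow.wait: re-check the phase at a fixed
    # interval until it leaves the running state.
    for _ in range(max_polls):
        phase = get_phase()
        if phase != "Running":
            return phase
        time.sleep(poll_interval)
    raise TimeoutError("workflow did not complete in time")


# Simulate a workflow that succeeds on the third poll.
phases = iter(["Running", "Running", "Succeeded"])
result = poll_until_complete(lambda: next(phases), poll_interval=0)
print(result)  # Succeeded
```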

WorkflowsService

The core workflows service for interacting with the Argo server.

Source code in src/hera/workflows/service.py
class WorkflowsService:
    """The core workflows service for interacting with the Argo server."""

    def __init__(
        self,
        host: Optional[str] = None,
        verify_ssl: Optional[bool] = None,
        token: Optional[str] = None,
        namespace: Optional[str] = None,
    ) -> None:
        """Workflows service constructor."""
        self.host = cast(str, host or global_config.host)
        self.verify_ssl = verify_ssl if verify_ssl is not None else global_config.verify_ssl
        self.token = token or global_config.token
        self.namespace = namespace or global_config.namespace

    def list_archived_workflows(
        self,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
        name_prefix: Optional[str] = None,
    ) -> WorkflowList:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/archived-workflows"),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
                "namePrefix": name_prefix,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowList(**resp.json())

        raise exception_from_server_response(resp)

    def list_archived_workflow_label_keys(self) -> LabelKeys:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/archived-workflows-label-keys"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return LabelKeys(**resp.json())

        raise exception_from_server_response(resp)

    def list_archived_workflow_label_values(
        self,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
    ) -> LabelValues:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/archived-workflows-label-values"),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return LabelValues(**resp.json())

        raise exception_from_server_response(resp)

    def get_archived_workflow(self, uid: str) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/archived-workflows/{uid}").format(uid=uid),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def delete_archived_workflow(self, uid: str) -> ArchivedWorkflowDeletedResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.delete(
            url=urljoin(self.host, "api/v1/archived-workflows/{uid}").format(uid=uid),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ArchivedWorkflowDeletedResponse()

        raise exception_from_server_response(resp)

    def resubmit_archived_workflow(self, uid: str, req: ResubmitArchivedWorkflowRequest) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/archived-workflows/{uid}/resubmit").format(uid=uid),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def retry_archived_workflow(self, uid: str, req: RetryArchivedWorkflowRequest) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/archived-workflows/{uid}/retry").format(uid=uid),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def list_cluster_workflow_templates(
        self,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
    ) -> ClusterWorkflowTemplateList:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates"),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplateList(**resp.json())

        raise exception_from_server_response(resp)

    def create_cluster_workflow_template(self, req: ClusterWorkflowTemplateCreateRequest) -> ClusterWorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def lint_cluster_workflow_template(self, req: ClusterWorkflowTemplateLintRequest) -> ClusterWorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates/lint"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def get_cluster_workflow_template(
        self, name: str, resource_version: Optional[str] = None
    ) -> ClusterWorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
            params={"getOptions.resourceVersion": resource_version},
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def update_cluster_workflow_template(
        self, name: str, req: ClusterWorkflowTemplateUpdateRequest
    ) -> ClusterWorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def delete_cluster_workflow_template(
        self,
        name: str,
        grace_period_seconds: Optional[str] = None,
        uid: Optional[str] = None,
        resource_version: Optional[str] = None,
        orphan_dependents: Optional[bool] = None,
        propagation_policy: Optional[str] = None,
        dry_run: Optional[list] = None,
    ) -> ClusterWorkflowTemplateDeleteResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.delete(
            url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
            params={
                "deleteOptions.gracePeriodSeconds": grace_period_seconds,
                "deleteOptions.preconditions.uid": uid,
                "deleteOptions.preconditions.resourceVersion": resource_version,
                "deleteOptions.orphanDependents": orphan_dependents,
                "deleteOptions.propagationPolicy": propagation_policy,
                "deleteOptions.dryRun": dry_run,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return ClusterWorkflowTemplateDeleteResponse()

        raise exception_from_server_response(resp)

    def list_cron_workflows(
        self,
        namespace: Optional[str] = None,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
    ) -> CronWorkflowList:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflowList(**resp.json())

        raise exception_from_server_response(resp)
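
    # Paging sketch (assumes a constructed service instance): listOptions.continue
    # is the standard Kubernetes list continuation token, carried on the returned
    # list's metadata while more results remain.
    #
    #   page = service.list_cron_workflows(limit="50")
    #   while page.metadata.continue_:
    #       page = service.list_cron_workflows(limit="50", continue_=page.metadata.continue_)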

    def create_cron_workflow(self, req: CreateCronWorkflowRequest, namespace: Optional[str] = None) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def lint_cron_workflow(self, req: LintCronWorkflowRequest, namespace: Optional[str] = None) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/lint").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def get_cron_workflow(
        self, name: str, namespace: Optional[str] = None, resource_version: Optional[str] = None
    ) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={"getOptions.resourceVersion": resource_version},
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def update_cron_workflow(
        self, name: str, req: UpdateCronWorkflowRequest, namespace: Optional[str] = None
    ) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def delete_cron_workflow(
        self,
        name: str,
        namespace: Optional[str] = None,
        grace_period_seconds: Optional[str] = None,
        uid: Optional[str] = None,
        resource_version: Optional[str] = None,
        orphan_dependents: Optional[bool] = None,
        propagation_policy: Optional[str] = None,
        dry_run: Optional[list] = None,
    ) -> CronWorkflowDeletedResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.delete(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "deleteOptions.gracePeriodSeconds": grace_period_seconds,
                "deleteOptions.preconditions.uid": uid,
                "deleteOptions.preconditions.resourceVersion": resource_version,
                "deleteOptions.orphanDependents": orphan_dependents,
                "deleteOptions.propagationPolicy": propagation_policy,
                "deleteOptions.dryRun": dry_run,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflowDeletedResponse()

        raise exception_from_server_response(resp)

    def resume_cron_workflow(
        self, name: str, req: CronWorkflowResumeRequest, namespace: Optional[str] = None
    ) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}/resume").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def suspend_cron_workflow(
        self, name: str, req: CronWorkflowSuspendRequest, namespace: Optional[str] = None
    ) -> CronWorkflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}/suspend").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return CronWorkflow(**resp.json())

        raise exception_from_server_response(resp)

    def get_info(self) -> InfoResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/info"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return InfoResponse(**resp.json())

        raise exception_from_server_response(resp)

    def get_user_info(self) -> GetUserInfoResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/userinfo"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return GetUserInfoResponse(**resp.json())

        raise exception_from_server_response(resp)

    def get_version(self) -> Version:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/version"),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Version(**resp.json())

        raise exception_from_server_response(resp)
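
    # Quick connectivity check (hypothetical host and token): get_version needs
    # no namespace or body, so it is a convenient smoke test for host, token,
    # and TLS settings before issuing heavier calls.
    #
    #   service = WorkflowsService(host="https://localhost:2746", token="my-token")
    #   print(service.get_version().version)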

    def list_workflow_templates(
        self,
        namespace: Optional[str] = None,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
    ) -> WorkflowTemplateList:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplateList(**resp.json())

        raise exception_from_server_response(resp)

    def create_workflow_template(
        self, req: WorkflowTemplateCreateRequest, namespace: Optional[str] = None
    ) -> WorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def lint_workflow_template(
        self, req: WorkflowTemplateLintRequest, namespace: Optional[str] = None
    ) -> WorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/lint").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def get_workflow_template(
        self, name: str, namespace: Optional[str] = None, resource_version: Optional[str] = None
    ) -> WorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={"getOptions.resourceVersion": resource_version},
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)

    def update_workflow_template(
        self, name: str, req: WorkflowTemplateUpdateRequest, namespace: Optional[str] = None
    ) -> WorkflowTemplate:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplate(**resp.json())

        raise exception_from_server_response(resp)
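
    # Read-modify-write sketch (assumed names): fetch the current template so
    # the update request carries its resourceVersion, mutate it, then update;
    # the server rejects stale resource versions with a conflict error.
    #
    #   current = service.get_workflow_template("my-template")
    #   current.spec.arguments = new_arguments  # hypothetical mutation
    #   service.update_workflow_template("my-template", WorkflowTemplateUpdateRequest(template=current))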

    def delete_workflow_template(
        self,
        name: str,
        namespace: Optional[str] = None,
        grace_period_seconds: Optional[str] = None,
        uid: Optional[str] = None,
        resource_version: Optional[str] = None,
        orphan_dependents: Optional[bool] = None,
        propagation_policy: Optional[str] = None,
        dry_run: Optional[list] = None,
    ) -> WorkflowTemplateDeleteResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.delete(
            url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "deleteOptions.gracePeriodSeconds": grace_period_seconds,
                "deleteOptions.preconditions.uid": uid,
                "deleteOptions.preconditions.resourceVersion": resource_version,
                "deleteOptions.orphanDependents": orphan_dependents,
                "deleteOptions.propagationPolicy": propagation_policy,
                "deleteOptions.dryRun": dry_run,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowTemplateDeleteResponse()

        raise exception_from_server_response(resp)

    def list_workflows(
        self,
        namespace: Optional[str] = None,
        label_selector: Optional[str] = None,
        field_selector: Optional[str] = None,
        watch: Optional[bool] = None,
        allow_watch_bookmarks: Optional[bool] = None,
        resource_version: Optional[str] = None,
        resource_version_match: Optional[str] = None,
        timeout_seconds: Optional[str] = None,
        limit: Optional[str] = None,
        continue_: Optional[str] = None,
        fields: Optional[str] = None,
    ) -> WorkflowList:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflows/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "listOptions.labelSelector": label_selector,
                "listOptions.fieldSelector": field_selector,
                "listOptions.watch": watch,
                "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
                "listOptions.resourceVersion": resource_version,
                "listOptions.resourceVersionMatch": resource_version_match,
                "listOptions.timeoutSeconds": timeout_seconds,
                "listOptions.limit": limit,
                "listOptions.continue": continue_,
                "fields": fields,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowList(**resp.json())

        raise exception_from_server_response(resp)

    def create_workflow(self, req: WorkflowCreateRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/workflows/{namespace}").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def lint_workflow(self, req: WorkflowLintRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/lint").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def submit_workflow(self, req: WorkflowSubmitRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.post(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/submit").format(
                namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)
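
    # Submit sketch (assumed template name): unlike create_workflow, which takes
    # a full manifest, submit_workflow starts a run from an existing resource,
    # identified by resource kind plus resource name in the request body.
    #
    #   req = WorkflowSubmitRequest(resource_kind="WorkflowTemplate", resource_name="my-template")
    #   wf = service.submit_workflow(req)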

    def get_workflow(
        self,
        name: str,
        namespace: Optional[str] = None,
        resource_version: Optional[str] = None,
        fields: Optional[str] = None,
    ) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={"getOptions.resourceVersion": resource_version, "fields": fields},
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def delete_workflow(
        self,
        name: str,
        namespace: Optional[str] = None,
        grace_period_seconds: Optional[str] = None,
        uid: Optional[str] = None,
        resource_version: Optional[str] = None,
        orphan_dependents: Optional[bool] = None,
        propagation_policy: Optional[str] = None,
        dry_run: Optional[list] = None,
        force: Optional[bool] = None,
    ) -> WorkflowDeleteResponse:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.delete(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "deleteOptions.gracePeriodSeconds": grace_period_seconds,
                "deleteOptions.preconditions.uid": uid,
                "deleteOptions.preconditions.resourceVersion": resource_version,
                "deleteOptions.orphanDependents": orphan_dependents,
                "deleteOptions.propagationPolicy": propagation_policy,
                "deleteOptions.dryRun": dry_run,
                "force": force,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return WorkflowDeleteResponse()

        raise exception_from_server_response(resp)

    def workflow_logs(
        self,
        name: str,
        namespace: Optional[str] = None,
        pod_name: Optional[str] = None,
        container: Optional[str] = None,
        follow: Optional[bool] = None,
        previous: Optional[bool] = None,
        since_seconds: Optional[str] = None,
        seconds: Optional[str] = None,
        nanos: Optional[int] = None,
        timestamps: Optional[bool] = None,
        tail_lines: Optional[str] = None,
        limit_bytes: Optional[str] = None,
        insecure_skip_tls_verify_backend: Optional[bool] = None,
        grep: Optional[str] = None,
        selector: Optional[str] = None,
    ) -> V1alpha1LogEntry:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/log").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "podName": pod_name,
                "logOptions.container": container,
                "logOptions.follow": follow,
                "logOptions.previous": previous,
                "logOptions.sinceSeconds": since_seconds,
                "logOptions.sinceTime.seconds": seconds,
                "logOptions.sinceTime.nanos": nanos,
                "logOptions.timestamps": timestamps,
                "logOptions.tailLines": tail_lines,
                "logOptions.limitBytes": limit_bytes,
                "logOptions.insecureSkipTLSVerifyBackend": insecure_skip_tls_verify_backend,
                "grep": grep,
                "selector": selector,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return V1alpha1LogEntry(**resp.json())

        raise exception_from_server_response(resp)
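
    # Log-fetching sketch (assumes a constructed service instance): grep filters
    # lines server-side and tail_lines bounds how much history is fetched; both
    # are passed straight through as query parameters.
    #
    #   entry = service.workflow_logs("my-workflow", container="main", tail_lines="100", grep="ERROR")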

    def resubmit_workflow(self, name: str, req: WorkflowResubmitRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/resubmit").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def resume_workflow(self, name: str, req: WorkflowResumeRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/resume").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def retry_workflow(self, name: str, req: WorkflowRetryRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/retry").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def set_workflow(self, name: str, req: WorkflowSetRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/set").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def stop_workflow(self, name: str, req: WorkflowStopRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/stop").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def suspend_workflow(self, name: str, req: WorkflowSuspendRequest, namespace: Optional[str] = None) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/suspend").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)

    def terminate_workflow(
        self, name: str, req: WorkflowTerminateRequest, namespace: Optional[str] = None
    ) -> Workflow:
        """API documentation."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.put(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/terminate").format(
                name=name, namespace=namespace if namespace is not None else self.namespace
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
            data=req.json(
                exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
            ),
            verify=self.verify_ssl,
        )

        if resp.ok:
            return Workflow(**resp.json())

        raise exception_from_server_response(resp)
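
    # Stop vs. terminate sketch (assumed workflow name): stop_workflow still
    # runs exit handlers, while terminate_workflow skips them; both take the
    # workflow name in the path and an (often empty) request body.
    #
    #   service.stop_workflow("my-workflow", WorkflowStopRequest())
    #   service.terminate_workflow("my-workflow", WorkflowTerminateRequest())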

    def pod_logs(
        self,
        name: str,
        pod_name: str,
        namespace: Optional[str] = None,
        container: Optional[str] = None,
        follow: Optional[bool] = None,
        previous: Optional[bool] = None,
        since_seconds: Optional[str] = None,
        seconds: Optional[str] = None,
        nanos: Optional[int] = None,
        timestamps: Optional[bool] = None,
        tail_lines: Optional[str] = None,
        limit_bytes: Optional[str] = None,
        insecure_skip_tls_verify_backend: Optional[bool] = None,
        grep: Optional[str] = None,
        selector: Optional[str] = None,
    ) -> V1alpha1LogEntry:
        """DEPRECATED: Cannot work via HTTP if podName is an empty string. Use WorkflowLogs."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/{podName}/log").format(
                name=name, podName=pod_name, namespace=namespace if namespace is not None else self.namespace
            ),
            params={
                "logOptions.container": container,
                "logOptions.follow": follow,
                "logOptions.previous": previous,
                "logOptions.sinceSeconds": since_seconds,
                "logOptions.sinceTime.seconds": seconds,
                "logOptions.sinceTime.nanos": nanos,
                "logOptions.timestamps": timestamps,
                "logOptions.tailLines": tail_lines,
                "logOptions.limitBytes": limit_bytes,
                "logOptions.insecureSkipTLSVerifyBackend": insecure_skip_tls_verify_backend,
                "grep": grep,
                "selector": selector,
            },
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return V1alpha1LogEntry(**resp.json())

        raise exception_from_server_response(resp)
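All of the optional log options default to `None`, and `requests` drops query parameters whose value is `None`, so only the options the caller explicitly sets reach the server. A stdlib sketch of that filtering with a few hypothetical option values:

```python
from urllib.parse import urlencode

# Mirror of pod_logs' params dict with only some options set.
params = {
    "logOptions.container": "main",
    "logOptions.follow": None,       # unset -> dropped, as requests does
    "logOptions.tailLines": "50",
    "grep": None,
}

# requests performs this None-filtering internally before encoding.
query = urlencode({k: v for k, v in params.items() if v is not None})
print(query)  # logOptions.container=main&logOptions.tailLines=50
```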

    def get_artifact_file(
        self,
        id_discriminator: str,
        id_: str,
        node_id: str,
        artifact_name: str,
        artifact_discriminator: str,
        namespace: Optional[str] = None,
    ) -> str:
        """Get an artifact."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(
                self.host,
                "artifact-files/{namespace}/{idDiscriminator}/{id}/{nodeId}/{artifactDiscriminator}/{artifactName}",
            ).format(
                idDiscriminator=id_discriminator,
                id=id_,
                nodeId=node_id,
                artifactName=artifact_name,
                artifactDiscriminator=artifact_discriminator,
                namespace=namespace if namespace is not None else self.namespace,
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return str(resp.content)

        raise exception_from_server_response(resp)

    def get_output_artifact_by_uid(self, uid: str, node_id: str, artifact_name: str) -> str:
        """Get an output artifact by UID."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "artifacts-by-uid/{uid}/{nodeId}/{artifactName}").format(
                uid=uid, nodeId=node_id, artifactName=artifact_name
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return str(resp.content)

        raise exception_from_server_response(resp)

    def get_output_artifact(self, name: str, node_id: str, artifact_name: str, namespace: Optional[str] = None) -> str:
        """Get an output artifact."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "artifacts/{namespace}/{name}/{nodeId}/{artifactName}").format(
                name=name,
                nodeId=node_id,
                artifactName=artifact_name,
                namespace=namespace if namespace is not None else self.namespace,
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return str(resp.content)

        raise exception_from_server_response(resp)

    def get_input_artifact_by_uid(self, uid: str, node_id: str, artifact_name: str) -> str:
        """Get an input artifact by UID."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "input-artifacts-by-uid/{uid}/{nodeId}/{artifactName}").format(
                uid=uid, nodeId=node_id, artifactName=artifact_name
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return str(resp.content)

        raise exception_from_server_response(resp)

    def get_input_artifact(self, name: str, node_id: str, artifact_name: str, namespace: Optional[str] = None) -> str:
        """Get an input artifact."""
        assert valid_host_scheme(self.host), "The host scheme is required for service usage"
        resp = requests.get(
            url=urljoin(self.host, "input-artifacts/{namespace}/{name}/{nodeId}/{artifactName}").format(
                name=name,
                nodeId=node_id,
                artifactName=artifact_name,
                namespace=namespace if namespace is not None else self.namespace,
            ),
            params=None,
            headers={"Authorization": f"Bearer {self.token}"},
            data=None,
            verify=self.verify_ssl,
        )

        if resp.ok:
            return str(resp.content)

        raise exception_from_server_response(resp)
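One `urljoin` subtlety worth knowing when configuring `host`: a base URL without a trailing slash has its last path segment treated as a "file" and replaced, so a server mounted under a sub-path needs the trailing slash. Hypothetical hosts:

```python
from urllib.parse import urljoin

# Without the trailing slash the sub-path is discarded:
print(urljoin("https://example.com/argo", "api/v1/info"))
# https://example.com/api/v1/info

# With it, the sub-path is preserved:
print(urljoin("https://example.com/argo/", "api/v1/info"))
# https://example.com/argo/api/v1/info
```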

host instance-attribute

host = cast(str, host or global_config.host)

namespace instance-attribute

namespace = namespace or global_config.namespace

token instance-attribute

token = token or global_config.token

verify_ssl instance-attribute

verify_ssl = verify_ssl if verify_ssl is not None else global_config.verify_ssl
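Note that `verify_ssl` uses an explicit `is not None` check rather than `or`, so that an explicit `verify_ssl=False` is honored instead of falling through to the global default. A minimal sketch of that pattern with a stand-in config object (not the real `global_config`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class _Config:  # stand-in for the library's global config
    verify_ssl: bool = True

global_config = _Config()

def resolve_verify_ssl(verify_ssl: Optional[bool]) -> bool:
    # `verify_ssl or global_config.verify_ssl` would wrongly turn an
    # explicit False back into the True default; the None check avoids that.
    return verify_ssl if verify_ssl is not None else global_config.verify_ssl

print(resolve_verify_ssl(False))  # False, not the True default
print(resolve_verify_ssl(None))   # True, the fallback
```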

create_cluster_workflow_template

create_cluster_workflow_template(req)

Creates a cluster-scoped workflow template on the Argo server.

Source code in src/hera/workflows/service.py
def create_cluster_workflow_template(self, req: ClusterWorkflowTemplateCreateRequest) -> ClusterWorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)
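Request bodies are serialized with every Pydantic exclusion flag enabled (`exclude_none`, `exclude_unset`, `exclude_defaults`, and aliases via `by_alias`), so the server only receives fields the caller actually set. A dict-level sketch of the `exclude_none` part alone, on a hypothetical payload (the other flags need Pydantic model metadata):

```python
import json

# Stand-in payload with one unset field.
payload = {"template": {"metadata": {"name": "my-template"}, "spec": None}}

def drop_none(obj):
    """Recursively remove dict entries whose value is None."""
    if isinstance(obj, dict):
        return {k: drop_none(v) for k, v in obj.items() if v is not None}
    return obj

body = json.dumps(drop_none(payload))
print(body)  # {"template": {"metadata": {"name": "my-template"}}}
```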

create_cron_workflow

create_cron_workflow(req, namespace=None)

Creates a cron workflow in the given namespace.

Source code in src/hera/workflows/service.py
def create_cron_workflow(self, req: CreateCronWorkflowRequest, namespace: Optional[str] = None) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

create_workflow

create_workflow(req, namespace=None)

Creates a workflow in the given namespace.

Source code in src/hera/workflows/service.py
def create_workflow(self, req: WorkflowCreateRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/workflows/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)
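Every method shares the same success/error split: a 2xx response (`resp.ok`) is parsed into the return model, anything else is raised via `exception_from_server_response`. A stand-in sketch of that control flow with a fake response object (not the real `requests`/Hera types):

```python
class FakeResponse:  # stand-in for requests.Response
    def __init__(self, ok: bool, payload: dict):
        self.ok = ok
        self._payload = payload
        self.status_code = 200 if ok else 500

    def json(self) -> dict:
        return self._payload

def handle(resp: FakeResponse) -> dict:
    if resp.ok:
        # The service would build e.g. Workflow(**resp.json()) here.
        return resp.json()
    # Stand-in for exception_from_server_response(resp).
    raise RuntimeError(f"server returned {resp.status_code}")

print(handle(FakeResponse(True, {"metadata": {"name": "wf"}})))
```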

create_workflow_template

create_workflow_template(req, namespace=None)

Creates a namespaced workflow template.

Source code in src/hera/workflows/service.py
def create_workflow_template(
    self, req: WorkflowTemplateCreateRequest, namespace: Optional[str] = None
) -> WorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

delete_archived_workflow

delete_archived_workflow(uid)

Deletes an archived workflow by UID.

Source code in src/hera/workflows/service.py
def delete_archived_workflow(self, uid: str) -> ArchivedWorkflowDeletedResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.delete(
        url=urljoin(self.host, "api/v1/archived-workflows/{uid}").format(uid=uid),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ArchivedWorkflowDeletedResponse()

    raise exception_from_server_response(resp)

delete_cluster_workflow_template

delete_cluster_workflow_template(name, grace_period_seconds=None, uid=None, resource_version=None, orphan_dependents=None, propagation_policy=None, dry_run=None)

Deletes a cluster workflow template by name, with optional Kubernetes delete options.

Source code in src/hera/workflows/service.py
def delete_cluster_workflow_template(
    self,
    name: str,
    grace_period_seconds: Optional[str] = None,
    uid: Optional[str] = None,
    resource_version: Optional[str] = None,
    orphan_dependents: Optional[bool] = None,
    propagation_policy: Optional[str] = None,
    dry_run: Optional[list] = None,
) -> ClusterWorkflowTemplateDeleteResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.delete(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
        params={
            "deleteOptions.gracePeriodSeconds": grace_period_seconds,
            "deleteOptions.preconditions.uid": uid,
            "deleteOptions.preconditions.resourceVersion": resource_version,
            "deleteOptions.orphanDependents": orphan_dependents,
            "deleteOptions.propagationPolicy": propagation_policy,
            "deleteOptions.dryRun": dry_run,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplateDeleteResponse()

    raise exception_from_server_response(resp)

delete_cron_workflow

delete_cron_workflow(name, namespace=None, grace_period_seconds=None, uid=None, resource_version=None, orphan_dependents=None, propagation_policy=None, dry_run=None)

Deletes a cron workflow by name, with optional Kubernetes delete options.

Source code in src/hera/workflows/service.py
def delete_cron_workflow(
    self,
    name: str,
    namespace: Optional[str] = None,
    grace_period_seconds: Optional[str] = None,
    uid: Optional[str] = None,
    resource_version: Optional[str] = None,
    orphan_dependents: Optional[bool] = None,
    propagation_policy: Optional[str] = None,
    dry_run: Optional[list] = None,
) -> CronWorkflowDeletedResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.delete(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "deleteOptions.gracePeriodSeconds": grace_period_seconds,
            "deleteOptions.preconditions.uid": uid,
            "deleteOptions.preconditions.resourceVersion": resource_version,
            "deleteOptions.orphanDependents": orphan_dependents,
            "deleteOptions.propagationPolicy": propagation_policy,
            "deleteOptions.dryRun": dry_run,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflowDeletedResponse()

    raise exception_from_server_response(resp)

delete_workflow

delete_workflow(name, namespace=None, grace_period_seconds=None, uid=None, resource_version=None, orphan_dependents=None, propagation_policy=None, dry_run=None, force=None)

Deletes a workflow by name, with optional Kubernetes delete options.

Source code in src/hera/workflows/service.py
def delete_workflow(
    self,
    name: str,
    namespace: Optional[str] = None,
    grace_period_seconds: Optional[str] = None,
    uid: Optional[str] = None,
    resource_version: Optional[str] = None,
    orphan_dependents: Optional[bool] = None,
    propagation_policy: Optional[str] = None,
    dry_run: Optional[list] = None,
    force: Optional[bool] = None,
) -> WorkflowDeleteResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.delete(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "deleteOptions.gracePeriodSeconds": grace_period_seconds,
            "deleteOptions.preconditions.uid": uid,
            "deleteOptions.preconditions.resourceVersion": resource_version,
            "deleteOptions.orphanDependents": orphan_dependents,
            "deleteOptions.propagationPolicy": propagation_policy,
            "deleteOptions.dryRun": dry_run,
            "force": force,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowDeleteResponse()

    raise exception_from_server_response(resp)
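The delete endpoints flatten the Kubernetes-style delete options into dotted query parameters; unset options are `None` and are dropped by `requests` before encoding. A sketch of the resulting query string with hypothetical option values:

```python
from urllib.parse import urlencode

params = {
    "deleteOptions.gracePeriodSeconds": "0",
    "deleteOptions.propagationPolicy": "Background",
    "deleteOptions.preconditions.uid": None,  # unset -> dropped
    "force": None,
}
query = urlencode({k: v for k, v in params.items() if v is not None})
print(query)
# deleteOptions.gracePeriodSeconds=0&deleteOptions.propagationPolicy=Background
```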

delete_workflow_template

delete_workflow_template(name, namespace=None, grace_period_seconds=None, uid=None, resource_version=None, orphan_dependents=None, propagation_policy=None, dry_run=None)

Deletes a workflow template by name, with optional Kubernetes delete options.

Source code in src/hera/workflows/service.py
def delete_workflow_template(
    self,
    name: str,
    namespace: Optional[str] = None,
    grace_period_seconds: Optional[str] = None,
    uid: Optional[str] = None,
    resource_version: Optional[str] = None,
    orphan_dependents: Optional[bool] = None,
    propagation_policy: Optional[str] = None,
    dry_run: Optional[list] = None,
) -> WorkflowTemplateDeleteResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.delete(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "deleteOptions.gracePeriodSeconds": grace_period_seconds,
            "deleteOptions.preconditions.uid": uid,
            "deleteOptions.preconditions.resourceVersion": resource_version,
            "deleteOptions.orphanDependents": orphan_dependents,
            "deleteOptions.propagationPolicy": propagation_policy,
            "deleteOptions.dryRun": dry_run,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplateDeleteResponse()

    raise exception_from_server_response(resp)

get_archived_workflow

get_archived_workflow(uid)

Fetches an archived workflow by UID.

Source code in src/hera/workflows/service.py
def get_archived_workflow(self, uid: str) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/archived-workflows/{uid}").format(uid=uid),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

get_artifact_file

get_artifact_file(id_discriminator, id_, node_id, artifact_name, artifact_discriminator, namespace=None)

Get an artifact.

Source code in src/hera/workflows/service.py
def get_artifact_file(
    self,
    id_discriminator: str,
    id_: str,
    node_id: str,
    artifact_name: str,
    artifact_discriminator: str,
    namespace: Optional[str] = None,
) -> str:
    """Get an artifact."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(
            self.host,
            "artifact-files/{namespace}/{idDiscriminator}/{id}/{nodeId}/{artifactDiscriminator}/{artifactName}",
        ).format(
            idDiscriminator=id_discriminator,
            id=id_,
            nodeId=node_id,
            artifactName=artifact_name,
            artifactDiscriminator=artifact_discriminator,
            namespace=namespace if namespace is not None else self.namespace,
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return str(resp.content)

    raise exception_from_server_response(resp)

get_cluster_workflow_template

get_cluster_workflow_template(name, resource_version=None)

Fetches a cluster workflow template by name.

Source code in src/hera/workflows/service.py
def get_cluster_workflow_template(
    self, name: str, resource_version: Optional[str] = None
) -> ClusterWorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
        params={"getOptions.resourceVersion": resource_version},
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

get_cron_workflow

get_cron_workflow(name, namespace=None, resource_version=None)

Fetches a cron workflow by name.

Source code in src/hera/workflows/service.py
def get_cron_workflow(
    self, name: str, namespace: Optional[str] = None, resource_version: Optional[str] = None
) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={"getOptions.resourceVersion": resource_version},
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

get_info

get_info()

Fetches Argo server information.

Source code in src/hera/workflows/service.py
def get_info(self) -> InfoResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/info"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return InfoResponse()

    raise exception_from_server_response(resp)

get_input_artifact

get_input_artifact(name, node_id, artifact_name, namespace=None)

Get an input artifact.

Source code in src/hera/workflows/service.py
def get_input_artifact(self, name: str, node_id: str, artifact_name: str, namespace: Optional[str] = None) -> str:
    """Get an input artifact."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "input-artifacts/{namespace}/{name}/{nodeId}/{artifactName}").format(
            name=name,
            nodeId=node_id,
            artifactName=artifact_name,
            namespace=namespace if namespace is not None else self.namespace,
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return str(resp.content)

    raise exception_from_server_response(resp)

get_input_artifact_by_uid

get_input_artifact_by_uid(uid, node_id, artifact_name)

Get an input artifact by UID.

Source code in src/hera/workflows/service.py
def get_input_artifact_by_uid(self, uid: str, node_id: str, artifact_name: str) -> str:
    """Get an input artifact by UID."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "input-artifacts-by-uid/{uid}/{nodeId}/{artifactName}").format(
            uid=uid, nodeId=node_id, artifactName=artifact_name
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return str(resp.content)

    raise exception_from_server_response(resp)

get_output_artifact

get_output_artifact(name, node_id, artifact_name, namespace=None)

Get an output artifact.

Source code in src/hera/workflows/service.py
def get_output_artifact(self, name: str, node_id: str, artifact_name: str, namespace: Optional[str] = None) -> str:
    """Get an output artifact."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "artifacts/{namespace}/{name}/{nodeId}/{artifactName}").format(
            name=name,
            nodeId=node_id,
            artifactName=artifact_name,
            namespace=namespace if namespace is not None else self.namespace,
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return str(resp.content)

    raise exception_from_server_response(resp)

get_output_artifact_by_uid

get_output_artifact_by_uid(uid, node_id, artifact_name)

Get an output artifact by UID.

Source code in src/hera/workflows/service.py
def get_output_artifact_by_uid(self, uid: str, node_id: str, artifact_name: str) -> str:
    """Get an output artifact by UID."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "artifacts-by-uid/{uid}/{nodeId}/{artifactName}").format(
            uid=uid, nodeId=node_id, artifactName=artifact_name
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return str(resp.content)

    raise exception_from_server_response(resp)

get_user_info

get_user_info()

Fetches information about the authenticated user.

Source code in src/hera/workflows/service.py
def get_user_info(self) -> GetUserInfoResponse:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/userinfo"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return GetUserInfoResponse()

    raise exception_from_server_response(resp)

get_version

get_version()

Fetches the Argo server version.

Source code in src/hera/workflows/service.py
def get_version(self) -> Version:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/version"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Version(**resp.json())

    raise exception_from_server_response(resp)

get_workflow

get_workflow(name, namespace=None, resource_version=None, fields=None)

Fetches a workflow by name.

Source code in src/hera/workflows/service.py
def get_workflow(
    self,
    name: str,
    namespace: Optional[str] = None,
    resource_version: Optional[str] = None,
    fields: Optional[str] = None,
) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={"getOptions.resourceVersion": resource_version, "fields": fields},
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

get_workflow_template

get_workflow_template(name, namespace=None, resource_version=None)

Fetches a workflow template by name.

Source code in src/hera/workflows/service.py
def get_workflow_template(
    self, name: str, namespace: Optional[str] = None, resource_version: Optional[str] = None
) -> WorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={"getOptions.resourceVersion": resource_version},
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

lint_cluster_workflow_template

lint_cluster_workflow_template(req)

Lints a cluster workflow template without creating it.

Source code in src/hera/workflows/service.py
def lint_cluster_workflow_template(self, req: ClusterWorkflowTemplateLintRequest) -> ClusterWorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates/lint"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

lint_cron_workflow

lint_cron_workflow(req, namespace=None)

Lints a cron workflow without creating it.

Source code in src/hera/workflows/service.py
def lint_cron_workflow(self, req: LintCronWorkflowRequest, namespace: Optional[str] = None) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/lint").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

lint_workflow

lint_workflow(req, namespace=None)

Lints a workflow without creating it.

Source code in src/hera/workflows/service.py
def lint_workflow(self, req: WorkflowLintRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/lint").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

lint_workflow_template

lint_workflow_template(req, namespace=None)

Lints a workflow template without creating it.

Source code in src/hera/workflows/service.py
def lint_workflow_template(
    self, req: WorkflowTemplateLintRequest, namespace: Optional[str] = None
) -> WorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/lint").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

list_archived_workflow_label_keys

list_archived_workflow_label_keys()

Lists label keys across archived workflows.

Source code in src/hera/workflows/service.py
def list_archived_workflow_label_keys(self) -> LabelKeys:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/archived-workflows-label-keys"),
        params=None,
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return LabelKeys(**resp.json())

    raise exception_from_server_response(resp)

list_archived_workflow_label_values

list_archived_workflow_label_values(label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_archived_workflow_label_values(
    self,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
) -> LabelValues:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/archived-workflows-label-values"),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return LabelValues(**resp.json())

    raise exception_from_server_response(resp)
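The list methods pass every `listOptions.*` entry in the `params` dict, relying on `requests` to drop parameters whose value is `None`, so only the options you actually set reach the server. The equivalent behavior with only the standard library (the parameter values here are made up for illustration):

```python
from urllib.parse import urlencode

params = {
    "listOptions.labelSelector": "app=etl",
    "listOptions.limit": "10",
    "listOptions.continue": None,  # unset -> omitted from the query string
}
# requests skips None-valued params; urlencode needs the filter done explicitly.
query = urlencode({k: v for k, v in params.items() if v is not None})
assert query == "listOptions.labelSelector=app%3Detl&listOptions.limit=10"
```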

list_archived_workflows

list_archived_workflows(label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None, name_prefix=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_archived_workflows(
    self,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
    name_prefix: Optional[str] = None,
) -> WorkflowList:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/archived-workflows"),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
            "namePrefix": name_prefix,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowList(**resp.json())

    raise exception_from_server_response(resp)
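The `limit` and `continue_` options implement standard Kubernetes-style pagination: request a page of at most `limit` items, then pass the continue token from the response metadata back in to fetch the next page. A sketch of that loop, with `fetch_page` standing in for the HTTP call (the fake server and its payload shape are illustrative assumptions, not the exact `WorkflowList` schema):

```python
def fetch_all(fetch_page, limit="5"):
    # Accumulate items across pages until the server stops returning a token.
    items, token = [], None
    while True:
        page = fetch_page(limit=limit, continue_=token)
        items.extend(page["items"])
        token = page.get("continue")
        if not token:
            return items

def fake_fetch(limit, continue_):
    # Hypothetical two-page server response for demonstration.
    if continue_ is None:
        return {"items": ["wf-1", "wf-2"], "continue": "page-2"}
    return {"items": ["wf-3"]}

assert fetch_all(fake_fetch) == ["wf-1", "wf-2", "wf-3"]
```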

list_cluster_workflow_templates

list_cluster_workflow_templates(label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_cluster_workflow_templates(
    self,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
) -> ClusterWorkflowTemplateList:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates"),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplateList(**resp.json())

    raise exception_from_server_response(resp)

list_cron_workflows

list_cron_workflows(namespace=None, label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_cron_workflows(
    self,
    namespace: Optional[str] = None,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
) -> CronWorkflowList:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflowList(**resp.json())

    raise exception_from_server_response(resp)

list_workflow_templates

list_workflow_templates(namespace=None, label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_workflow_templates(
    self,
    namespace: Optional[str] = None,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
) -> WorkflowTemplateList:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplateList(**resp.json())

    raise exception_from_server_response(resp)

list_workflows

list_workflows(namespace=None, label_selector=None, field_selector=None, watch=None, allow_watch_bookmarks=None, resource_version=None, resource_version_match=None, timeout_seconds=None, limit=None, continue_=None, fields=None)

API documentation.

Source code in src/hera/workflows/service.py
def list_workflows(
    self,
    namespace: Optional[str] = None,
    label_selector: Optional[str] = None,
    field_selector: Optional[str] = None,
    watch: Optional[bool] = None,
    allow_watch_bookmarks: Optional[bool] = None,
    resource_version: Optional[str] = None,
    resource_version_match: Optional[str] = None,
    timeout_seconds: Optional[str] = None,
    limit: Optional[str] = None,
    continue_: Optional[str] = None,
    fields: Optional[str] = None,
) -> WorkflowList:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflows/{namespace}").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "listOptions.labelSelector": label_selector,
            "listOptions.fieldSelector": field_selector,
            "listOptions.watch": watch,
            "listOptions.allowWatchBookmarks": allow_watch_bookmarks,
            "listOptions.resourceVersion": resource_version,
            "listOptions.resourceVersionMatch": resource_version_match,
            "listOptions.timeoutSeconds": timeout_seconds,
            "listOptions.limit": limit,
            "listOptions.continue": continue_,
            "fields": fields,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowList(**resp.json())

    raise exception_from_server_response(resp)
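The `label_selector` argument takes a Kubernetes-style selector string such as `"workflows.argoproj.io/phase=Succeeded"`. To show the semantics, here is a minimal equality-only matcher (real selectors also support `!=`, set operators, and existence checks, which this sketch deliberately omits):

```python
def matches(selector: str, labels: dict) -> bool:
    # Split comma-separated "key=value" clauses; every clause must match.
    for clause in selector.split(","):
        key, _, value = clause.partition("=")
        if labels.get(key.strip()) != value.strip():
            return False
    return True

assert matches(
    "workflows.argoproj.io/phase=Succeeded",
    {"workflows.argoproj.io/phase": "Succeeded", "app": "etl"},
)
```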

pod_logs

pod_logs(name, pod_name, namespace=None, container=None, follow=None, previous=None, since_seconds=None, seconds=None, nanos=None, timestamps=None, tail_lines=None, limit_bytes=None, insecure_skip_tls_verify_backend=None, grep=None, selector=None)

DEPRECATED: Cannot work via HTTP if podName is an empty string. Use WorkflowLogs.

Source code in src/hera/workflows/service.py
def pod_logs(
    self,
    name: str,
    pod_name: str,
    namespace: Optional[str] = None,
    container: Optional[str] = None,
    follow: Optional[bool] = None,
    previous: Optional[bool] = None,
    since_seconds: Optional[str] = None,
    seconds: Optional[str] = None,
    nanos: Optional[int] = None,
    timestamps: Optional[bool] = None,
    tail_lines: Optional[str] = None,
    limit_bytes: Optional[str] = None,
    insecure_skip_tls_verify_backend: Optional[bool] = None,
    grep: Optional[str] = None,
    selector: Optional[str] = None,
) -> V1alpha1LogEntry:
    """DEPRECATED: Cannot work via HTTP if podName is an empty string. Use WorkflowLogs."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/{podName}/log").format(
            name=name, podName=pod_name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "logOptions.container": container,
            "logOptions.follow": follow,
            "logOptions.previous": previous,
            "logOptions.sinceSeconds": since_seconds,
            "logOptions.sinceTime.seconds": seconds,
            "logOptions.sinceTime.nanos": nanos,
            "logOptions.timestamps": timestamps,
            "logOptions.tailLines": tail_lines,
            "logOptions.limitBytes": limit_bytes,
            "logOptions.insecureSkipTLSVerifyBackend": insecure_skip_tls_verify_backend,
            "grep": grep,
            "selector": selector,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return V1alpha1LogEntry(**resp.json())

    raise exception_from_server_response(resp)

resubmit_archived_workflow

resubmit_archived_workflow(uid, req)

API documentation.

Source code in src/hera/workflows/service.py
def resubmit_archived_workflow(self, uid: str, req: ResubmitArchivedWorkflowRequest) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/archived-workflows/{uid}/resubmit").format(uid=uid),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

resubmit_workflow

resubmit_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def resubmit_workflow(self, name: str, req: WorkflowResubmitRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/resubmit").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

resume_cron_workflow

resume_cron_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def resume_cron_workflow(
    self, name: str, req: CronWorkflowResumeRequest, namespace: Optional[str] = None
) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}/resume").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

resume_workflow

resume_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def resume_workflow(self, name: str, req: WorkflowResumeRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/resume").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

retry_archived_workflow

retry_archived_workflow(uid, req)

API documentation.

Source code in src/hera/workflows/service.py
def retry_archived_workflow(self, uid: str, req: RetryArchivedWorkflowRequest) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/archived-workflows/{uid}/retry").format(uid=uid),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

retry_workflow

retry_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def retry_workflow(self, name: str, req: WorkflowRetryRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/retry").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

set_workflow

set_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def set_workflow(self, name: str, req: WorkflowSetRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/set").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

stop_workflow

stop_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def stop_workflow(self, name: str, req: WorkflowStopRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/stop").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

submit_workflow

submit_workflow(req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def submit_workflow(self, req: WorkflowSubmitRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.post(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/submit").format(
            namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)
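Every mutating method serializes its request body with `exclude_none=True` (plus the related flags), so fields left unset on the request object never reach the wire. A stdlib sketch of that None-dropping behavior (the `serverDryRun` key here is only an example field name):

```python
import json

def drop_none(obj):
    # Recursively remove dict entries whose value is None, mirroring
    # pydantic's exclude_none serialization.
    if isinstance(obj, dict):
        return {k: drop_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [drop_none(v) for v in obj]
    return obj

body = json.dumps(drop_none({"namespace": "argo", "serverDryRun": None}))
assert body == '{"namespace": "argo"}'
```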

suspend_cron_workflow

suspend_cron_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def suspend_cron_workflow(
    self, name: str, req: CronWorkflowSuspendRequest, namespace: Optional[str] = None
) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}/suspend").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

suspend_workflow

suspend_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def suspend_workflow(self, name: str, req: WorkflowSuspendRequest, namespace: Optional[str] = None) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/suspend").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

terminate_workflow

terminate_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def terminate_workflow(
    self, name: str, req: WorkflowTerminateRequest, namespace: Optional[str] = None
) -> Workflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/terminate").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return Workflow(**resp.json())

    raise exception_from_server_response(resp)

update_cluster_workflow_template

update_cluster_workflow_template(name, req)

API documentation.

Source code in src/hera/workflows/service.py
def update_cluster_workflow_template(
    self, name: str, req: ClusterWorkflowTemplateUpdateRequest
) -> ClusterWorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/cluster-workflow-templates/{name}").format(name=name),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return ClusterWorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

update_cron_workflow

update_cron_workflow(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def update_cron_workflow(
    self, name: str, req: UpdateCronWorkflowRequest, namespace: Optional[str] = None
) -> CronWorkflow:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/cron-workflows/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return CronWorkflow(**resp.json())

    raise exception_from_server_response(resp)

update_workflow_template

update_workflow_template(name, req, namespace=None)

API documentation.

Source code in src/hera/workflows/service.py
def update_workflow_template(
    self, name: str, req: WorkflowTemplateUpdateRequest, namespace: Optional[str] = None
) -> WorkflowTemplate:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.put(
        url=urljoin(self.host, "api/v1/workflow-templates/{namespace}/{name}").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params=None,
        headers={"Authorization": f"Bearer {self.token}", "Content-Type": "application/json"},
        data=req.json(
            exclude_none=True, by_alias=True, skip_defaults=True, exclude_unset=True, exclude_defaults=True
        ),
        verify=self.verify_ssl,
    )

    if resp.ok:
        return WorkflowTemplate(**resp.json())

    raise exception_from_server_response(resp)

workflow_logs

workflow_logs(name, namespace=None, pod_name=None, container=None, follow=None, previous=None, since_seconds=None, seconds=None, nanos=None, timestamps=None, tail_lines=None, limit_bytes=None, insecure_skip_tls_verify_backend=None, grep=None, selector=None)

API documentation.

Source code in src/hera/workflows/service.py
def workflow_logs(
    self,
    name: str,
    namespace: Optional[str] = None,
    pod_name: Optional[str] = None,
    container: Optional[str] = None,
    follow: Optional[bool] = None,
    previous: Optional[bool] = None,
    since_seconds: Optional[str] = None,
    seconds: Optional[str] = None,
    nanos: Optional[int] = None,
    timestamps: Optional[bool] = None,
    tail_lines: Optional[str] = None,
    limit_bytes: Optional[str] = None,
    insecure_skip_tls_verify_backend: Optional[bool] = None,
    grep: Optional[str] = None,
    selector: Optional[str] = None,
) -> V1alpha1LogEntry:
    """API documentation."""
    assert valid_host_scheme(self.host), "The host scheme is required for service usage"
    resp = requests.get(
        url=urljoin(self.host, "api/v1/workflows/{namespace}/{name}/log").format(
            name=name, namespace=namespace if namespace is not None else self.namespace
        ),
        params={
            "podName": pod_name,
            "logOptions.container": container,
            "logOptions.follow": follow,
            "logOptions.previous": previous,
            "logOptions.sinceSeconds": since_seconds,
            "logOptions.sinceTime.seconds": seconds,
            "logOptions.sinceTime.nanos": nanos,
            "logOptions.timestamps": timestamps,
            "logOptions.tailLines": tail_lines,
            "logOptions.limitBytes": limit_bytes,
            "logOptions.insecureSkipTLSVerifyBackend": insecure_skip_tls_verify_backend,
            "grep": grep,
            "selector": selector,
        },
        headers={"Authorization": f"Bearer {self.token}"},
        data=None,
        verify=self.verify_ssl,
    )

    if resp.ok:
        return V1alpha1LogEntry(**resp.json())

    raise exception_from_server_response(resp)
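The `seconds` and `nanos` arguments together form the Kubernetes `sinceTime` timestamp (`logOptions.sinceTime.seconds` / `.nanos`). One way to derive both from a Python datetime, shown as an illustrative sketch:

```python
from datetime import datetime, timezone

ts = datetime(2024, 1, 1, tzinfo=timezone.utc)
seconds = str(int(ts.timestamp()))  # whole seconds since the epoch, as a string
nanos = ts.microsecond * 1000       # sub-second remainder in nanoseconds
assert seconds == "1704067200"
assert nanos == 0
```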

ZipArchiveStrategy

ZipArchiveStrategy indicates artifacts should be serialized using the zip strategy.

Source code in src/hera/workflows/archive.py
class ZipArchiveStrategy(ArchiveStrategy):
    """`ZipArchiveStrategy` indicates artifacts should be serialized using the `zip` strategy."""

    def _build_archive_strategy(self) -> _ModelArchiveStrategy:
        return _ModelArchiveStrategy(zip=_ModelZipStrategy())
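For reference, an artifact configured with `ZipArchiveStrategy` should render an archive block like the following in the workflow manifest (a sketch of the expected output; `ZipStrategy` carries no options, hence the empty mapping):

```yaml
archive:
  zip: {}
```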

Config

Config class dictates the behavior of the underlying Pydantic model.

See Pydantic documentation for more info.

Source code in src/hera/shared/_base_model.py
class Config:
    """Config class dictates the behavior of the underlying Pydantic model.

    See Pydantic documentation for more info.
    """

    allow_population_by_field_name = True
    """support populating Hera object fields via keyed dictionaries"""

    allow_mutation = True
    """supports mutating Hera objects post instantiation"""

    use_enum_values = True
    """supports using enums, which are then unpacked to obtain the actual `.value`, on Hera objects"""

    arbitrary_types_allowed = True
    """supports specifying arbitrary types for any field to support Hera object fields processing"""

    smart_union = True
    """uses smart union for matching a field's specified value to the underlying type that's part of a union"""

allow_mutation class-attribute instance-attribute

allow_mutation = True

supports mutating Hera objects post instantiation

allow_population_by_field_name class-attribute instance-attribute

allow_population_by_field_name = True

support populating Hera object fields via keyed dictionaries

arbitrary_types_allowed class-attribute instance-attribute

arbitrary_types_allowed = True

supports specifying arbitrary types for any field to support Hera object fields processing

smart_union class-attribute instance-attribute

smart_union = True

uses smart union for matching a field’s specified value to the underlying type that’s part of a union

use_enum_values class-attribute instance-attribute

use_enum_values = True

supports using enums, which are then unpacked to obtain the actual .value, on Hera objects
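To illustrate what `use_enum_values` means in practice: a field assigned an `Enum` member is stored as the member's raw `.value`, not the `Enum` instance. A self-contained sketch (the `ImagePullPolicy` enum defined here is a local stand-in, not imported from Hera):

```python
from enum import Enum

class ImagePullPolicy(str, Enum):
    always = "Always"
    never = "Never"

# With use_enum_values = True, Hera effectively persists the unpacked value:
stored = ImagePullPolicy.always.value
assert stored == "Always"
assert not isinstance(stored, Enum)
```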
