Note that APIs imported from Dagster submodules are not considered stable, and are potentially subject to change in the future.
If you find yourself consulting these docs because you are writing custom components and plug-ins, please get in touch with the core team on our Slack. We’re curious what you’re up to, happy to help, excited for new community contributions, and eager to make the system as easy to work with as possible – including for teams who are looking to customize it.
APIs for constructing custom executors. Writing a custom executor is advanced, experimental usage; using the Dagster-provided executors is stable, common usage.
Define an executor.
The decorated function should accept an InitExecutorContext and return an instance of Executor.
name (Optional[str]) – The name of the executor.
config_schema (Optional[ConfigSchema]) – The schema for the config. Configuration data available in init_context.executor_config. If not set, Dagster will accept any config provided.
requirements (Optional[List[ExecutorRequirement]]) – Any requirements that must be met in order for the executor to be usable for a particular pipeline execution.
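For example, a minimal sketch of a custom executor definition. The Executor subclass shown is invented for illustration, and the non-top-level import paths are assumptions that may vary across Dagster versions:

from dagster import executor
from dagster._core.execution.retries import RetryMode
from dagster._core.executor.base import Executor


class SingleProcessExecutor(Executor):
    # Hypothetical executor; names and structure are illustrative only.

    @property
    def retries(self) -> RetryMode:
        # Allow retry behavior to be controlled via configuration where possible.
        return RetryMode.DISABLED

    def execute(self, plan_context, execution_plan):
        # Orchestrate the steps of `execution_plan` and yield the resulting
        # stream of Dagster events. Elided in this sketch.
        raise NotImplementedError


@executor(name="my_custom_executor")
def my_custom_executor(init_context):
    # init_context is an InitExecutorContext; return an Executor instance.
    return SingleProcessExecutor()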
An executor is responsible for executing the steps of a job.
name (str) – The name of the executor.
config_schema (Optional[ConfigSchema]) – The schema for the config. Configuration data available in init_context.executor_config. If not set, Dagster will accept any config provided.
requirements (Optional[List[ExecutorRequirement]]) – Any requirements that must be met in order for the executor to be usable for a particular pipeline execution.
executor_creation_fn (Optional[Callable]) – Should accept an InitExecutorContext and return an instance of Executor.
required_resource_keys (Optional[Set[str]]) – Keys for the resources required by the executor.
description (Optional[str]) – A description of the executor.
Wraps this object in an object of the same type that provides configuration to the inner object.
Using configured may result in config values being displayed in Dagit, so it is not recommended to use this API with sensitive values, such as secrets.
config_or_config_fn (Union[Any, Callable[[Any], Any]]) – Either (1) Run configuration that fully satisfies this object’s config schema or (2) A function that accepts run configuration and returns run configuration that fully satisfies this object’s config schema. In the latter case, config_schema must be specified. When passing a function, it’s easiest to use configured().
name (Optional[str]) – Name of the new definition. If not provided, the emitted definition will inherit the name of the ExecutorDefinition upon which this function is called.
config_schema (Optional[ConfigSchema]) – If config_or_config_fn is a function, the config schema that its input must satisfy. If not set, Dagster will accept any config provided.
description (Optional[str]) – Description of the new definition. If not specified, inherits the description of the definition being configured.
Returns (ConfigurableDefinition): A configured version of this object.
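For example, a sketch of configuring a built-in executor; multiprocess_executor accepts a max_concurrent field in its config schema:

from dagster import job, multiprocess_executor, op


@op
def do_work():
    ...


# A copy of the multiprocess executor pinned to four concurrent subprocesses.
limited_executor = multiprocess_executor.configured(
    {"max_concurrent": 4}, name="limited_multiprocess_executor"
)


@job(executor_def=limited_executor)
def my_job():
    do_work()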
Description of executor, if provided.
Name of the executor.
Executor-specific initialization context.
The job to be executed.
IPipeline
The definition of the executor currently being constructed.
The parsed config passed to the executor.
dict
The current instance.
For the given context and execution plan, orchestrate a series of sub plan executions in a way that satisfies the whole plan being executed.
plan_context (PlanOrchestrationContext) – The plan’s orchestration context.
execution_plan (ExecutionPlan) – The plan to execute.
A stream of dagster events.
Whether retries are enabled or disabled for this instance of the executor.
Executors should allow this to be controlled via configuration if possible.
Returns: RetryMode
Base class for all file managers in dagster.
The file manager is an interface that can be implemented by resources to provide abstract access to a file system such as local disk, S3, or other cloud storage.
For examples of usage, see the documentation of the concrete file manager implementations.
Copy a file represented by a file handle to a temp file.
In an implementation built around an object store such as S3, this method would be expected to download the file from S3 to the local filesystem in a location assigned by the standard library’s tempfile module.
Temp files returned by this method are not guaranteed to be reusable across solid boundaries. For files that must be available across solid boundaries, use the read(), read_data(), write(), and write_data() methods.
file_handle (FileHandle) – The handle to the file to make available as a local temp file.
Path to the local temp file.
str
Delete all local temporary files created by previous calls to copy_handle_to_local_temp().
Should typically only be called by framework implementors.
Return a file-like stream for the file handle.
This may incur an expensive network call for file managers backed by object stores such as S3.
file_handle (FileHandle) – The file handle to make available as a stream.
mode (str) – The mode in which to open the file. Default: "rb".
A file-like stream.
Union[TextIO, BinaryIO]
Return the bytes for a given file handle. This may incur an expensive network call for file managers backed by object stores such as S3.
file_handle (FileHandle) – The file handle for which to return bytes.
Bytes for a given file handle.
bytes
Write the bytes contained within the given file object into the file manager.
file_obj (Union[TextIO, StringIO]) – A file-like object.
mode (Optional[str]) – The mode in which to write the file into the file manager. Default: "wb".
ext (Optional[str]) – For file managers that support file extensions, the extension with which to write the file. Default: None.
A handle to the newly created file.
Write raw bytes into the file manager.
data (bytes) – The bytes to write into the file manager.
ext (Optional[str]) – For file managers that support file extensions, the extension with which to write the file. Default: None.
A handle to the newly created file.
FileManager that provides abstract access to a local filesystem.
By default, files will be stored in <local_artifact_storage>/storage/file_manager, where <local_artifact_storage> can be configured in the dagster.yaml file in $DAGSTER_HOME.
Implements the FileManager API.
Examples
import tempfile

from dagster import job, local_file_manager, op


@op(required_resource_keys={"file_manager"})
def write_files(context):
    fh_1 = context.resources.file_manager.write_data(b"foo")

    with tempfile.NamedTemporaryFile("w+") as fd:
        fd.write("bar")
        fd.seek(0)
        fh_2 = context.resources.file_manager.write(fd, mode="w", ext=".txt")

    return (fh_1, fh_2)


@op(required_resource_keys={"file_manager"})
def read_files(context, file_handles):
    fh_1, fh_2 = file_handles
    assert context.resources.file_manager.read_data(fh_2) == b"bar"
    fd = context.resources.file_manager.read(fh_1, mode="r")
    assert fd.read() == "foo"
    fd.close()


@job(resource_defs={"file_manager": local_file_manager})
def files_pipeline():
    read_files(write_files())
Or to specify the file directory:
@job(
    resource_defs={
        "file_manager": local_file_manager.configured({"base_dir": "/my/base/dir"})
    }
)
def files_pipeline():
    read_files(write_files())
A reference to a file as manipulated by a FileManager.
Subclasses may handle files that are resident on the local file system, in an object store, or in any arbitrary place where a file can be stored.
This exists to handle the very common case where you wish to write a computation that reads, transforms, and writes files, but where you also want the same code to work in local development as well as on a cluster where the files will be stored in a globally available object store such as S3.
A representation of the file path for display purposes only.
Core abstraction for managing Dagster’s access to storage and other resources.
Use DagsterInstance.get() to grab the current DagsterInstance, which will load based on the values in the dagster.yaml file in $DAGSTER_HOME.
Alternatively, DagsterInstance.ephemeral() can be used, which provides a set of transient in-memory components.
Configuration of this class should be done by setting values in $DAGSTER_HOME/dagster.yaml.
For example, to use Postgres for dagster storage, you can write a dagster.yaml such as the following:
storage:
  postgres:
    postgres_db:
      username: my_username
      password: my_password
      hostname: my_hostname
      db_name: my_database
      port: 5432
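In code, a minimal sketch of loading the configured instance, or an ephemeral one for tests:

from dagster import DagsterInstance

# Loads the persistent instance configured by $DAGSTER_HOME/dagster.yaml.
instance = DagsterInstance.get()

# A transient, in-memory instance, useful in tests.
ephemeral_instance = DagsterInstance.ephemeral()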
instance_type (InstanceType) – Indicates whether the instance is ephemeral or persistent. Users should not attempt to set this value directly or in their dagster.yaml files.
local_artifact_storage (LocalArtifactStorage) – The local artifact storage is used to configure storage for any artifacts that require a local disk, such as schedules, or when using the filesystem system storage to manage files and intermediates. By default, this will be a dagster._core.storage.root.LocalArtifactStorage. Configurable in dagster.yaml using the ConfigurableClass machinery.
run_storage (RunStorage) – The run storage is used to store metadata about ongoing and past pipeline runs. By default, this will be a dagster._core.storage.runs.SqliteRunStorage. Configurable in dagster.yaml using the ConfigurableClass machinery.
event_storage (EventLogStorage) – Used to store the structured event logs generated by pipeline runs. By default, this will be a dagster._core.storage.event_log.SqliteEventLogStorage. Configurable in dagster.yaml using the ConfigurableClass machinery.
compute_log_manager (ComputeLogManager) – The compute log manager handles stdout and stderr logging for solid compute functions. By default, this will be a dagster._core.storage.local_compute_log_manager.LocalComputeLogManager. Configurable in dagster.yaml using the ConfigurableClass machinery.
run_coordinator (RunCoordinator) – A run coordinator may be used to manage the execution of pipeline runs.
run_launcher (Optional[RunLauncher]) – Optionally, a run launcher may be used to enable a Dagster instance to launch pipeline runs, e.g. on a remote Kubernetes cluster, in addition to running them locally.
settings (Optional[Dict]) – Specifies certain per-instance settings, such as feature flags. These are set in the dagster.yaml under a set of whitelisted keys.
ref (Optional[InstanceRef]) – Used by internal machinery to pass instances across process boundaries.
Serializable representation of a DagsterInstance.
Users should not instantiate this class directly.
Abstract mixin for classes that can be loaded from config.
This supports a powerful plugin pattern which avoids both a) a lengthy, hard-to-synchronize list of conditional imports / optional extras_requires in dagster core and b) a magic directory or file in which third parties can place plugin packages. Instead, the intention is to make, e.g., run storage pluggable with a config chunk like:
run_storage:
  module: very_cool_package.run_storage
  class: SplendidRunStorage
  config:
    magic_word: "quux"
This same pattern should eventually be viable for other system components, e.g. engines.
The ConfigurableClass mixin provides the necessary hooks for classes to be instantiated from an instance of ConfigurableClassData.
Pieces of the Dagster system which we wish to make pluggable in this way should consume a config type such as:
{'module': str, 'class': str, 'config': Field(Permissive())}
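As an illustration, a hypothetical sketch of the SplendidRunStorage plugin above; a real run storage would also subclass RunStorage, and the exact hook signatures may vary by Dagster version:

from dagster import StringSource
from dagster.serdes import ConfigurableClass


class SplendidRunStorage(ConfigurableClass):
    def __init__(self, magic_word, inst_data=None):
        self._magic_word = magic_word
        self._inst_data = inst_data

    @property
    def inst_data(self):
        # The ConfigurableClassData describing how this instance was constructed.
        return self._inst_data

    @classmethod
    def config_type(cls):
        # Schema for the `config:` fragment in dagster.yaml.
        return {"magic_word": StringSource}

    @classmethod
    def from_config_value(cls, inst_data, config_value):
        # Called by internal machinery to construct the class from parsed config.
        return cls(inst_data=inst_data, **config_value)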
Serializable tuple describing where to find a class and the config fragment that should be used to instantiate it.
Users should not instantiate this class directly.
Classes intended to be serialized in this way should implement the dagster.serdes.ConfigurableClass mixin.
Abstract base class for Dagster persistent storage, for reading and writing data for runs, events, and schedule/sensor state.
Users should not directly instantiate concrete subclasses of this class; they are instantiated by internal machinery when dagit and dagster-daemon load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of concrete subclasses of this class should be done by setting values in that file.
Serializable internal representation of a dagster run, as stored in a RunStorage.
Defines a filter across job runs, for use when querying storage directly.
The fields of a RunsFilter are combined with a logical AND. For example, if you specify both job_name and tags, you will receive only runs with the specified job_name AND the specified tags. A field left blank permits all values for that field.
run_ids (Optional[List[str]]) – A list of job run_id values.
job_name (Optional[str]) – Name of the job to query for. If blank, all job_names will be accepted.
statuses (Optional[List[DagsterRunStatus]]) – A list of run statuses to filter by. If blank, all run statuses will be allowed.
tags (Optional[Dict[str, Union[str, List[str]]]]) – A dictionary of run tags to query by. All tags specified here must be present for a given run to pass the filter.
snapshot_id (Optional[str]) – The ID of the job snapshot to query for. Intended for internal use.
updated_after (Optional[DateTime]) – Filter by runs that were last updated after this datetime.
created_before (Optional[DateTime]) – Filter by runs that were created before this datetime.
mode (Optional[str]) – (deprecated)
pipeline_name (Optional[str]) – (deprecated)
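For example, a sketch of querying runs directly from storage via the instance, assuming RunsFilter is importable from the top-level dagster package in your version:

from dagster import DagsterInstance, DagsterRunStatus, RunsFilter

instance = DagsterInstance.get()

# All failed runs of a given job that carry a particular tag.
failed_runs = instance.get_runs(
    filters=RunsFilter(
        job_name="files_pipeline",
        statuses=[DagsterRunStatus.FAILURE],
        tags={"team": "data-eng"},
    )
)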
Abstract base class for storing pipeline run history.
Note that run storages using SQL databases as backing stores should implement SqlRunStorage.
Users should not directly instantiate concrete subclasses of this class; they are instantiated by internal machinery when dagit and dagster-graphql load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of concrete subclasses of this class should be done by setting values in that file.
SQLite-backed run storage.
Users should not directly instantiate this class; it is instantiated by internal machinery when dagit and dagster-graphql load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of this class should be done by setting values in that file.
This is the default run storage when none is specified in the dagster.yaml.
To explicitly specify SQLite for run storage, you can add a block such as the following to your dagster.yaml:
run_storage:
  module: dagster._core.storage.runs
  class: SqliteRunStorage
  config:
    base_dir: /path/to/dir
The base_dir param tells the run storage where on disk to store the database.
Internal representation of a run record, as stored in a RunStorage.
Users should not invoke this class directly.
See also: dagster_postgres.PostgresRunStorage and dagster_mysql.MySQLRunStorage.
Entries in the event log.
Users should not instantiate this object directly. These entries may originate from the logging machinery (DagsterLogManager/context.log), from framework events (e.g. EngineEvent), or they may correspond to events yielded by user code (e.g. Output).
error_info (Optional[SerializableErrorInfo]) – Error info for an associated exception, if any, as generated by serializable_error_info_from_exc_info and friends.
level (Union[str, int]) – The Python log level at which to log this event. Note that framework and user code events are also logged to Python logging. This value may be an integer or a (case-insensitive) string member of PYTHON_LOGGING_LEVELS_NAMES.
user_message (str) – For log messages, this is the user-generated message.
run_id (str) – The id of the run which generated this event.
timestamp (float) – The Unix timestamp of this event.
step_key (Optional[str]) – The step key for the step which generated this event. Some events are generated outside of a step context.
job_name (Optional[str]) – The job which generated this event. Some events are generated outside of a job context.
dagster_event (Optional[DagsterEvent]) – For framework and user events, the associated structured event.
Return the message from the structured DagsterEvent if present, fallback to user_message.
Internal representation of an event record, as stored in an EventLogStorage.
Users should not instantiate this class directly.
Defines a set of filter fields for fetching a set of event log entries or event log records.
event_type (DagsterEventType) – Filter argument for dagster event type.
asset_key (Optional[AssetKey]) – Asset key for which to get asset materialization event entries / records.
asset_partitions (Optional[List[str]]) – Filter parameter such that only asset events with a partition value matching one of the provided values are returned. Only valid when the asset_key parameter is provided.
after_cursor (Optional[Union[int, RunShardedEventsCursor]]) – Filter parameter such that only records with storage_id greater than the provided value are returned. Using a run-sharded events cursor will result in a significant performance gain when run against a SqliteEventLogStorage implementation (which is run-sharded).
before_cursor (Optional[Union[int, RunShardedEventsCursor]]) – Filter parameter such that only records with storage_id less than the provided value are returned. Using a run-sharded events cursor will result in a significant performance gain when run against a SqliteEventLogStorage implementation (which is run-sharded).
after_timestamp (Optional[float]) – Filter parameter such that only event records for events with timestamp greater than the provided value are returned.
before_timestamp (Optional[float]) – Filter parameter such that only event records for events with timestamp less than the provided value are returned.
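For example, a sketch of fetching recent materialization records for an asset, assuming EventRecordsFilter is importable from the top-level dagster package in your version:

from dagster import AssetKey, DagsterEventType, DagsterInstance, EventRecordsFilter

instance = DagsterInstance.get()

# The ten most recent materialization records for a given asset.
records = instance.get_event_records(
    EventRecordsFilter(
        event_type=DagsterEventType.ASSET_MATERIALIZATION,
        asset_key=AssetKey("my_asset"),
    ),
    limit=10,
)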
Pairs an id-based event log cursor with a timestamp-based run cursor, for improved performance on run-sharded event log storages (e.g. the default SqliteEventLogStorage). For run-sharded storages, the id field is ignored, since ids may not be unique across shards.
Abstract base class for storing structured event logs from pipeline runs.
Note that event log storages using SQL databases as backing stores should implement SqlEventLogStorage.
Users should not directly instantiate concrete subclasses of this class; they are instantiated by internal machinery when dagit and dagster-graphql load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of concrete subclasses of this class should be done by setting values in that file.
Base class for SQL backed event log storages.
Distinguishes between run-based connections and index connections in order to support run-level sharding, while maintaining the ability to do cross-run queries.
SQLite-backed event log storage.
Users should not directly instantiate this class; it is instantiated by internal machinery when dagit and dagster-graphql load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of this class should be done by setting values in that file.
This is the default event log storage when none is specified in the dagster.yaml.
To explicitly specify SQLite for event log storage, you can add a block such as the following to your dagster.yaml:
event_log_storage:
  module: dagster._core.storage.event_log
  class: SqliteEventLogStorage
  config:
    base_dir: /path/to/dir
The base_dir param tells the event log storage where on disk to store the databases. To improve concurrent performance, event logs are stored in a separate SQLite database for each run.
SQLite-backed consolidated event log storage intended for test cases only.
Users should not directly instantiate this class; it is instantiated by internal machinery when dagit and dagster-graphql load, based on the values in the dagster.yaml file in $DAGSTER_HOME. Configuration of this class should be done by setting values in that file.
To explicitly specify the consolidated SQLite for event log storage, you can add a block such as the following to your dagster.yaml:
event_log_storage:
  module: dagster._core.storage.event_log
  class: ConsolidatedSqliteEventLogStorage
  config:
    base_dir: /path/to/dir
The base_dir param tells the event log storage where on disk to store the database.
Internal representation of an asset record, as stored in an EventLogStorage.
Users should not invoke this class directly.
See also: dagster_postgres.PostgresEventLogStorage and dagster_mysql.MySQLEventLogStorage.
Abstract base class for capturing the unstructured logs (stdout/stderr) in the current process, stored / retrieved with a provided log_key.
Abstract base class for storing unstructured compute logs (stdout/stderr) from the compute steps of pipeline solids.
Stores copies of stdout & stderr for each compute step locally on disk.
See also: dagster_aws.S3ComputeLogManager.
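To explicitly configure the compute log manager, a block like the following can be added to dagster.yaml; the module path shown follows the pattern above but may vary by Dagster version:

compute_logs:
  module: dagster._core.storage.local_compute_log_manager
  class: LocalComputeLogManager
  config:
    base_dir: /path/to/dir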
Immediately send runs to the run launcher.
The maximum number of runs that are allowed to be in progress at once. Defaults to 10. Set to -1 to disable the limit. Set to 0 to stop any runs from launching. Any other negative values are disallowed.
A set of limits that are applied to runs with particular tags. If a value is set, the limit is applied to only that key-value pair. If no value is set, the limit is applied across all values of that key. If the value is set to a dict with applyLimitPerUniqueValue: true, the limit will apply to the number of unique values for that key.
The interval in seconds at which the Dagster Daemon should periodically check the run queue for new runs to launch.
Whether or not to use threads for concurrency when launching dequeued runs.
If dequeue_use_threads is true, limit the number of concurrent worker threads.
If there is an error reaching a Dagster gRPC server while dequeuing the run, how many times to retry the dequeue before failing it. The only run launcher that requires the gRPC server to be running is the DefaultRunLauncher, so setting this will have no effect unless that run launcher is being used.
Default Value: 0
If there is an error reaching a Dagster gRPC server while dequeuing the run, how long to wait before retrying any runs from that same code location. The only run launcher that requires the gRPC server to be running is the DefaultRunLauncher, so setting this will have no effect unless that run launcher is being used.
Default Value: 60
Enqueues runs via the run storage, to be dequeued by the Dagster Daemon process. Requires the Dagster Daemon process to be alive in order for runs to be launched.
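For example, a sketch of enabling the queued run coordinator with a tag-based concurrency limit in dagster.yaml; the module path is an assumption that may vary by Dagster version:

run_coordinator:
  module: dagster._core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 10
    tag_concurrency_limits:
      - key: "database"
        value: "redshift"
        limit: 4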
Abstract base class for a scheduler. This component is responsible for interfacing with an external system, such as cron, to ensure scheduled repeated execution according to the schedule definition.
Abstract class for managing persistence of scheduler artifacts.
Base class for SQL backed schedule storage.
Local SQLite backed schedule storage.
See also: dagster_postgres.PostgresScheduleStorage and dagster_mysql.MySQLScheduleStorage.
Wraps the execution of user-space code in an error boundary. This places a uniform policy around any user code invoked by the framework. This ensures that all user errors are wrapped in an exception derived from DagsterUserCodeExecutionError, and that the original stack trace of the user error is preserved, so that it can be reported without confusing framework code in the stack trace, if a tool author wishes to do so.
Example:

with user_code_error_boundary(
    # Pass a class that inherits from DagsterUserCodeExecutionError
    DagsterExecutionStepExecutionError,
    # Pass a function that produces a message
    lambda: "Error occurred during step execution",
):
    call_user_provided_function()
A StepLauncher is responsible for executing steps, either in-process or in an external process.
A serializable object that specifies what’s needed to hydrate a step so that it can be executed in a process outside the plan process.
Users should not instantiate this class directly.
Context for the execution of a step. Users should not instantiate this class directly.
This context assumes that user code can be run directly, and thus includes the resources and information required to do so.