Data pipelines API . projects . locations

Instance Methods

pipelines()

Returns the pipelines Resource.

transformDescriptions()

Returns the transformDescriptions Resource.

close()

Close httplib2 connections.

computeSchema(location, body=None, x__xgafv=None)

Computes the schema for the transform. Computation from `raw_schema` will always occur if it is set. This requires that the transform supports that encoding. If no raw schema is provided and if the transform is for an IO, then this will attempt to connect to the resource using the details provided in `config` and infer the schema from that. If the transform is not an IO, is a sink that doesn't exist yet, or is a sink with no schema requirement, then this will fall back to basing the schema off the one provided in `input_schemas`. The computed schema will be validated.

listPipelines(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

Lists pipelines. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.

listPipelines_next()

Retrieves the next page of results.

Method Details

close()
Close httplib2 connections.
computeSchema(location, body=None, x__xgafv=None)
Computes the schema for the transform. Computation from `raw_schema` will always occur if it is set. This requires that the transform supports that encoding. If no raw schema is provided and if the transform is for an IO, then this will attempt to connect to the resource using the details provided in `config` and infer the schema from that. If the transform is not an IO, is a sink that doesn't exist yet, or is a sink with no schema requirement, then this will fall back to basing the schema off the one provided in `input_schemas`. The computed schema will be validated.

Args:
  location: string, Required. The full location formatted as "projects/{your-project}/locations/{google-cloud-region}". If attempting to infer the schema from an existing Google Cloud resource, the default Data Pipelines service account for this project will be used in making requests for the resource. If the region given for "{google-cloud-region}" is different than the region where the resource is stored, then the data will be transferred to and processed in the region specified here, but it will not be persistently stored in this region. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for ComputeSchema
  "config": { # A fully configured transform that can be validated. # Required. The configuration for the transform. If this is not a source, then each input with its schema must be set. It is not required to have any outputs set.
    "config": { # Represents an Apache Beam row, though the `Any` nature of values is replaced with more concrete representations of valid values. # Configuration values provided. These must match the schema provided in the row's schema.
      "schema": { # Holds a schema or a reference to a schema in some repository. # Required. The schema of the row's data.
        "localSchema": { # Represents a simplified Apache Beam schema. # Schema located locally with the message.
          "fields": [ # Fields in the schema. Every field within a schema must have a unique name.
            { # Info for a single field in the schema.
              "name": "A String", # Name of the field.
              "type": { # Type info about a field. # Type info for the field.
                "collectionElementType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # If `type` is an array or iterable, this is the type contained in that array or iterable.
                "logicalType": { # Represents the input for creating a specified logical type. # If `type` is a logical type, this is the info for the specific logical type.
                  "enumerationType": { # Represents the Beam EnumerationType logical type. # The enum represented by this logical type.
                    "values": [ # Names of the values. The numeric value is the same as the index.
                      "A String",
                    ],
                  },
                  "fixedBytes": { # Represents the Beam FixedBytes logical type. # The fixed-size byte collection represented by this logical type.
                    "sizeBytes": 42, # Number of bytes to allocate.
                  },
                },
                "mapType": { # Represents a map in a schema. # If `type` is a map, this is the key and value types for that map.
                  "mapKeyType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Key type of the map. Only atomic types are supported.
                  "mapValueType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Value type of the map.
                },
                "nullable": True or False, # Whether or not this field is nullable.
                "rowSchema": # Object with schema name: GoogleCloudDatapipelinesV1Schema # If `type` is a row, this is the schema of that row.
                "type": "A String", # Specific type of the field. For non-atomic types, the corresponding type info for that non-atomic must be set.
              },
            },
          ],
          "referenceId": "A String", # An identifier of the schema for looking it up in a repository. This only needs to be set if the schema is stored in a repository.
        },
        "referenceId": "A String", # The `reference_id` value of a schema in a repository.
      },
      "values": [ # Required. The values of this Row. A fully built row is required to hold to the schema specified by `schema`.
        { # A single value in a row. The value set must correspond to the correct type from the row's schema.
          "arrayValue": { # Represents an array of values. The elements can be of any type. # The array value of this field. Corresponds to TYPE_NAME_ARRAY in the schema.
            "elements": [ # The elements of the array.
              # Object with schema name: GoogleCloudDatapipelinesV1FieldValue
            ],
          },
          "atomicValue": { # Represents a non-dividable value. # The atomic value of this field. Must correspond to the correct atomic type in the schema.
            "booleanValue": True or False, # A boolean value.
            "byteValue": 42, # An 8-bit signed value.
            "bytesValue": "A String", # An array of raw bytes.
            "datetimeValue": { # Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: * When utc_offset is set and time_zone is unset: a civil time on a calendar day with a particular offset from UTC. * When time_zone is set and utc_offset is unset: a civil time on a calendar day in a particular time zone. * When neither time_zone nor utc_offset is set: a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year, month, or day are 0, the DateTime is considered not to have a specific year, month, or day respectively. This type may also be used to represent a physical time if all the date and time fields are set and either case of the `time_offset` oneof is set. Consider using `Timestamp` message for physical time instead. If your use case also would like to store the user's timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application's limitations. # A datetime value.
              "day": 42, # Optional. Day of month. Must be from 1 to 31 and valid for the year and month, or 0 if specifying a datetime without a day.
              "hours": 42, # Optional. Hours of day in 24 hour format. Should be from 0 to 23, defaults to 0 (midnight). An API may choose to allow the value "24:00:00" for scenarios like business closing time.
              "minutes": 42, # Optional. Minutes of hour of day. Must be from 0 to 59, defaults to 0.
              "month": 42, # Optional. Month of year. Must be from 1 to 12, or 0 if specifying a datetime without a month.
              "nanos": 42, # Optional. Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999, defaults to 0.
              "seconds": 42, # Optional. Seconds of minutes of the time. Must normally be from 0 to 59, defaults to 0. An API may allow the value 60 if it allows leap-seconds.
              "timeZone": { # Represents a time zone from the [IANA Time Zone Database](https://www.iana.org/time-zones). # Time zone.
                "id": "A String", # IANA Time Zone Database time zone, e.g. "America/New_York".
                "version": "A String", # Optional. IANA Time Zone Database version number, e.g. "2019a".
              },
              "utcOffset": "A String", # UTC offset. Must be whole seconds, between -18 hours and +18 hours. For example, a UTC offset of -4:00 would be represented as { seconds: -14400 }.
              "year": 42, # Optional. Year of date. Must be from 1 to 9999, or 0 if specifying a datetime without a year.
            },
            "decimalValue": { # A representation of a decimal value, such as 2.5. Clients may convert values into language-native decimal formats, such as Java's BigDecimal or Python's decimal.Decimal. [BigDecimal]: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigDecimal.html [decimal.Decimal]: https://docs.python.org/3/library/decimal.html # A large decimal value, equivalent to Java BigDecimal.
              "value": "A String", # The decimal value, as a string. The string representation consists of an optional sign, `+` (`U+002B`) or `-` (`U+002D`), followed by a sequence of zero or more decimal digits ("the integer"), optionally followed by a fraction, optionally followed by an exponent. An empty string **should** be interpreted as `0`. The fraction consists of a decimal point followed by zero or more decimal digits. The string must contain at least one digit in either the integer or the fraction. The number formed by the sign, the integer and the fraction is referred to as the significand. The exponent consists of the character `e` (`U+0065`) or `E` (`U+0045`) followed by one or more decimal digits. Services **should** normalize decimal values before storing them by: - Removing an explicitly-provided `+` sign (`+2.5` -> `2.5`). - Replacing a zero-length integer value with `0` (`.5` -> `0.5`). - Coercing the exponent character to upper-case, with explicit sign (`2.5e8` -> `2.5E+8`). - Removing an explicitly-provided zero exponent (`2.5E0` -> `2.5`). Services **may** perform additional normalization based on its own needs and the internal decimal implementation selected, such as shifting the decimal point and exponent value together (example: `2.5E-1` <-> `0.25`). Additionally, services **may** preserve trailing zeroes in the fraction to indicate increased precision, but are not required to do so. Note that only the `.` character is supported to divide the integer and the fraction; `,` **should not** be supported regardless of locale. Additionally, thousand separators **should not** be supported. If a service does support them, values **must** be normalized. The ENBF grammar is: DecimalString = '' | [Sign] Significand [Exponent]; Sign = '+' | '-'; Significand = Digits '.' | [Digits] '.' Digits; Exponent = ('e' | 'E') [Sign] Digits; Digits = { '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' }; Services **should** clearly document the range of supported values, the maximum supported precision (total number of digits), and, if applicable, the scale (number of digits after the decimal point), as well as how it behaves when receiving out-of-bounds values. Services **may** choose to accept values passed as input even when the value has a higher precision or scale than the service supports, and **should** round the value to fit the supported scale. Alternatively, the service **may** error with `400 Bad Request` (`INVALID_ARGUMENT` in gRPC) if precision would be lost. Services **should** error with `400 Bad Request` (`INVALID_ARGUMENT` in gRPC) if the service receives a value outside of the supported range.
            },
            "doubleValue": 3.14, # A 64-bit floating point value.
            "floatValue": 3.14, # A 32-bit floating point value.
            "int16Value": 42, # A 16-bit signed value.
            "int32Value": 42, # A 32-bit signed value.
            "int64Value": "A String", # A 64-bit signed value.
            "stringValue": "A String", # A string value.
          },
          "enumValue": { # Represents a selected value from an EnumerationType. # The enum value of this field. Corresponds to TYPE_NAME_LOGICAL_TYPE in the schema if that logical type represents an `EnumerationType` type.
            "name": "A String", # Name of the enum option.
          },
          "fixedBytesValue": { # Represents a collection of bytes whose size is the same as the associated FixedBytes size value. # The fixed-length byte collection of this field. Corresponds to TYPE_NAME_LOGICAL_TYPE in the schema if that logical type represents a `FixedBytes` type.
            "value": "A String", # The raw bytes. It must be exactly the size specified in the schema.
          },
          "iterableValue": { # Represents an iterable of values. The elements can be of any type. # The iterable value of this field. Corresponds to TYPE_NAME_ITERABLE in the schema.
            "elements": [ # The elements of the iterable.
              # Object with schema name: GoogleCloudDatapipelinesV1FieldValue
            ],
          },
          "mapValue": { # Represents a key/value pairing. # The map value of this field. Corresponds to TYPE_NAME_MAP in the schema.
            "entries": [ # The entries in the map.
              { # A single entry in the map. Each entry must have a unique key.
                "key": # Object with schema name: GoogleCloudDatapipelinesV1FieldValue # The key value. Only atomic values are supported.
                "value": # Object with schema name: GoogleCloudDatapipelinesV1FieldValue # The value associated with the key. It may be of any type.
              },
            ],
          },
          "rowValue": # Object with schema name: GoogleCloudDatapipelinesV1Row # The row value of this field. Corresponds to TYPE_NAME_ROW in the schema. This row also holds to its own schema.
        },
      ],
    },
    "uniformResourceName": "A String", # Unique resource name of the transform. This should be the same as the equivalent `TransformDescription` value.
  },
  "inputSchemas": [ # Optional. In relation to the full pipeline graph, the schemas of the transforms that are used as inputs to the one for `config`. If `config` represents a transform for reading from some resource, then this should be empty. For all other transforms, at least one value must be provided.
    { # Represents a simplified Apache Beam schema.
      "fields": [ # Fields in the schema. Every field within a schema must have a unique name.
        { # Info for a single field in the schema.
          "name": "A String", # Name of the field.
          "type": { # Type info about a field. # Type info for the field.
            "collectionElementType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # If `type` is an array or iterable, this is the type contained in that array or iterable.
            "logicalType": { # Represents the input for creating a specified logical type. # If `type` is a logical type, this is the info for the specific logical type.
              "enumerationType": { # Represents the Beam EnumerationType logical type. # The enum represented by this logical type.
                "values": [ # Names of the values. The numeric value is the same as the index.
                  "A String",
                ],
              },
              "fixedBytes": { # Represents the Beam FixedBytes logical type. # The fixed-size byte collection represented by this logical type.
                "sizeBytes": 42, # Number of bytes to allocate.
              },
            },
            "mapType": { # Represents a map in a schema. # If `type` is a map, this is the key and value types for that map.
              "mapKeyType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Key type of the map. Only atomic types are supported.
              "mapValueType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Value type of the map.
            },
            "nullable": True or False, # Whether or not this field is nullable.
            "rowSchema": # Object with schema name: GoogleCloudDatapipelinesV1Schema # If `type` is a row, this is the schema of that row.
            "type": "A String", # Specific type of the field. For non-atomic types, the corresponding type info for that non-atomic must be set.
          },
        },
      ],
      "referenceId": "A String", # An identifier of the schema for looking it up in a repository. This only needs to be set if the schema is stored in a repository.
    },
  ],
  "rawSchema": { # The raw schema and its type. # Optional. If set, this will use the provided raw schema to compute the schema rather than connecting to any resources. Validation will still occur to make sure it is compatible with all input schemas. If the transform is an IO, the IO must support that schema type.
    "rawSchema": "A String", # The schema.
    "type": "A String", # The type of the schema.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a simplified Apache Beam schema.
  "fields": [ # Fields in the schema. Every field within a schema must have a unique name.
    { # Info for a single field in the schema.
      "name": "A String", # Name of the field.
      "type": { # Type info about a field. # Type info for the field.
        "collectionElementType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # If `type` is an array or iterable, this is the type contained in that array or iterable.
        "logicalType": { # Represents the input for creating a specified logical type. # If `type` is a logical type, this is the info for the specific logical type.
          "enumerationType": { # Represents the Beam EnumerationType logical type. # The enum represented by this logical type.
            "values": [ # Names of the values. The numeric value is the same as the index.
              "A String",
            ],
          },
          "fixedBytes": { # Represents the Beam FixedBytes logical type. # The fixed-size byte collection represented by this logical type.
            "sizeBytes": 42, # Number of bytes to allocate.
          },
        },
        "mapType": { # Represents a map in a schema. # If `type` is a map, this is the key and value types for that map.
          "mapKeyType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Key type of the map. Only atomic types are supported.
          "mapValueType": # Object with schema name: GoogleCloudDatapipelinesV1FieldType # Value type of the map.
        },
        "nullable": True or False, # Whether or not this field is nullable.
        "rowSchema": # Object with schema name: GoogleCloudDatapipelinesV1Schema # If `type` is a row, this is the schema of that row.
        "type": "A String", # Specific type of the field. For non-atomic types, the corresponding type info for that non-atomic must be set.
      },
    },
  ],
  "referenceId": "A String", # An identifier of the schema for looking it up in a repository. This only needs to be set if the schema is stored in a repository.
}
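
For reference, a minimal sketch of invoking this method through the Python client library (google-api-python-client). The project, region, and transform URN are placeholders, and the request body only mirrors the shape documented above; a real call must supply a fully configured transform (and `inputSchemas` for non-source transforms):

from googleapiclient.discovery import build

# Build the Data Pipelines client. Credentials come from Application Default
# Credentials unless passed explicitly.
service = build("datapipelines", "v1")

# Hypothetical transform URN with an otherwise-empty config row, shown only to
# illustrate the call structure.
body = {
    "config": {
        "uniformResourceName": "beam:transform:org.example:read_source:v1",
        "config": {
            "schema": {"fields": []},
            "values": [],
        },
    },
}

response = (
    service.projects()
    .locations()
    .computeSchema(
        location="projects/my-project/locations/us-central1",
        body=body,
    )
    .execute()
)
# The response is a schema object of the form shown above.
for field in response.get("fields", []):
    print(field["name"], field["type"].get("type"))
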
listPipelines(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Lists pipelines. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.

Args:
  parent: string, Required. The location name. For example: `projects/PROJECT_ID/locations/LOCATION_ID`. (required)
  filter: string, An expression for filtering the results of the request. If unspecified, all pipelines will be returned. Multiple filters can be applied and must be comma separated. Fields eligible for filtering are: + `type`: The type of the pipeline (streaming or batch). Allowed values are `ALL`, `BATCH`, and `STREAMING`. + `status`: The activity status of the pipeline. Allowed values are `ALL`, `ACTIVE`, `ARCHIVED`, and `PAUSED`. For example, to limit results to active batch processing pipelines: type:BATCH,status:ACTIVE
  pageSize: integer, The maximum number of entities to return. The service may return fewer than this value, even if there are additional pages. If unspecified, the maximum page size is determined by the backend implementation.
  pageToken: string, A page token, received from a previous `ListPipelines` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListPipelines` must match the call that provided the page token.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for ListPipelines.
  "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
  "pipelines": [ # Results that matched the filter criteria and were accessible to the caller. Results are always in descending order of pipeline creation date.
    { # The main pipeline entity and all the necessary metadata for launching and managing linked jobs.
      "createTime": "A String", # Output only. Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
      "displayName": "A String", # Required. The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
      "jobCount": 42, # Output only. Number of jobs.
      "lastUpdateTime": "A String", # Output only. Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
      "name": "A String", # The pipeline name. For example: `projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID`. * `PROJECT_ID` can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see [Identifying projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects). * `LOCATION_ID` is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling `google.cloud.location.Locations.ListLocations`. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in [App Engine regions](https://cloud.google.com/about/locations#region). * `PIPELINE_ID` is the ID of the pipeline. Must be unique for the selected project and location.
      "pipelineSources": { # Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.
        "a_key": "A String",
      },
      "scheduleInfo": { # Details of the schedule the pipeline runs on. # Internal scheduling information for a pipeline. If this information is provided, periodic jobs will be created per the schedule. If not, users are responsible for creating jobs externally.
        "nextJobTime": "A String", # Output only. When the next Scheduler job is going to run.
        "schedule": "A String", # Unix-cron format of the schedule. This information is retrieved from the linked Cloud Scheduler.
        "timeZone": "A String", # Timezone ID. This matches the timezone IDs used by the Cloud Scheduler API. If empty, UTC time is assumed.
      },
      "schedulerServiceAccountEmail": "A String", # Optional. A service account email to be used with the Cloud Scheduler job. If not specified, the default compute engine service account will be used.
      "state": "A String", # Required. The state of the pipeline. When the pipeline is created, the state is set to 'PIPELINE_STATE_ACTIVE' by default. State changes can be requested by setting the state to stopping, paused, or resuming. State cannot be changed through UpdatePipeline requests.
      "type": "A String", # Required. The type of the pipeline. This field affects the scheduling of the pipeline and the type of metrics to show for the pipeline.
      "workload": { # Workload details for creating the pipeline jobs. # Workload information for creating new jobs.
        "dataflowFlexTemplateRequest": { # A request to launch a Dataflow job from a Flex Template. # Template information and additional parameters needed to launch a Dataflow job using the flex launch API.
          "launchParameter": { # Launch Flex Template parameter. # Required. Parameter to launch a job from a Flex Template.
            "containerSpecGcsPath": "A String", # Cloud Storage path to a file with a JSON-serialized ContainerSpec as content.
            "environment": { # The environment values to be set at runtime for a Flex Template. # The runtime environment for the Flex Template job.
              "additionalExperiments": [ # Additional experiment flags for the job.
                "A String",
              ],
              "additionalUserLabels": { # Additional user labels to be specified for the job. Keys and values must follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions). An object containing a list of key/value pairs. Example: `{ "name": "wrench", "mass": "1kg", "count": "3" }`.
                "a_key": "A String",
              },
              "enableStreamingEngine": True or False, # Whether to enable Streaming Engine for the job.
              "flexrsGoal": "A String", # Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
              "ipConfiguration": "A String", # Configuration for VM IPs.
              "kmsKeyName": "A String", # Name for the Cloud KMS key for the job. Key format is: projects//locations//keyRings//cryptoKeys/
              "machineType": "A String", # The machine type to use for the job. Defaults to the value from the template if not specified.
              "maxWorkers": 42, # The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
              "network": "A String", # Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
              "numWorkers": 42, # The initial number of Compute Engine instances for the job.
              "serviceAccountEmail": "A String", # The email address of the service account to run the job as.
              "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
              "tempLocation": "A String", # The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with `gs://`.
              "workerRegion": "A String", # The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane region.
              "workerZone": "A String", # The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane region is chosen based on available capacity. If both `worker_zone` and `zone` are set, `worker_zone` takes precedence.
              "zone": "A String", # The Compute Engine [availability zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones) for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
            },
            "jobName": "A String", # Required. The job name to use for the created job. For an update job request, the job name should be the same as the existing running job.
            "launchOptions": { # Launch options for this Flex Template job. This is a common set of options across languages and templates. This should not be used to pass job parameters.
              "a_key": "A String",
            },
            "parameters": { # The parameters for the Flex Template. Example: `{"num_workers":"5"}`
              "a_key": "A String",
            },
            "transformNameMappings": { # Use this to pass transform name mappings for streaming update jobs. Example: `{"oldTransformName":"newTransformName",...}`
              "a_key": "A String",
            },
            "update": True or False, # Set this to true if you are sending a request to update a running streaming job. When set, the job name should be the same as the running job.
          },
          "location": "A String", # Required. The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. For example, `us-central1`, `us-west1`.
          "projectId": "A String", # Required. The ID of the Cloud Platform project that the job belongs to.
          "validateOnly": True or False, # If true, the request is validated but not actually executed. Defaults to false.
        },
        "dataflowLaunchTemplateRequest": { # A request to launch a template. # Template information and additional parameters needed to launch a Dataflow job using the standard launch API.
          "gcsPath": "A String", # A Cloud Storage path to the template from which to create the job. Must be a valid Cloud Storage URL, beginning with 'gs://'.
          "launchParameters": { # Parameters to provide to the template being launched. # The parameters of the template to launch. This should be part of the body of the POST request.
            "environment": { # The environment values to set at runtime. # The runtime environment for the job.
              "additionalExperiments": [ # Additional experiment flags for the job.
                "A String",
              ],
              "additionalUserLabels": { # Additional user labels to be specified for the job. Keys and values should follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions) page. An object containing a list of key/value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
                "a_key": "A String",
              },
              "bypassTempDirValidation": True or False, # Whether to bypass the safety checks for the job's temporary directory. Use with caution.
              "enableStreamingEngine": True or False, # Whether to enable Streaming Engine for the job.
              "ipConfiguration": "A String", # Configuration for VM IPs.
              "kmsKeyName": "A String", # Name for the Cloud KMS key for the job. The key format is: projects//locations//keyRings//cryptoKeys/
              "machineType": "A String", # The machine type to use for the job. Defaults to the value from the template if not specified.
              "maxWorkers": 42, # The maximum number of Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
              "network": "A String", # Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
              "numWorkers": 42, # The initial number of Compute Engine instances for the job.
              "serviceAccountEmail": "A String", # The email address of the service account to run the job as.
              "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired. You can specify a subnetwork using either a complete URL or an abbreviated path. Expected to be of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must use the complete URL.
              "tempLocation": "A String", # The Cloud Storage path to use for temporary files. Must be a valid Cloud Storage URL, beginning with `gs://`.
              "workerRegion": "A String", # The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
              "workerZone": "A String", # The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both `worker_zone` and `zone` are set, `worker_zone` takes precedence.
              "zone": "A String", # The Compute Engine [availability zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones) for launching worker instances to run your pipeline. In the future, worker_zone will take precedence.
            },
            "jobName": "A String", # Required. The job name to use for the created job.
            "parameters": { # The runtime parameters to pass to the job.
              "a_key": "A String",
            },
            "transformNameMapping": { # Map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job. Only applicable when updating a pipeline.
              "a_key": "A String",
            },
            "update": True or False, # If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state.
          },
          "location": "A String", # The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request.
          "projectId": "A String", # Required. The ID of the Cloud Platform project that the job belongs to.
          "validateOnly": True or False, # If true, the request is validated but not actually executed. Defaults to false.
        },
      },
    },
  ],
}
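
For reference, a minimal sketch of calling this method with the Python client library, assuming Application Default Credentials; the project and region are placeholders and the filter simply reuses the example syntax described above:

from googleapiclient.discovery import build

service = build("datapipelines", "v1")

response = (
    service.projects()
    .locations()
    .listPipelines(
        parent="projects/my-project/locations/us-central1",
        filter="type:BATCH,status:ACTIVE",  # optional
        pageSize=50,
    )
    .execute()
)
for pipeline in response.get("pipelines", []):
    print(pipeline["name"], pipeline.get("state"))
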
listPipelines_next()
Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
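For reference, a sketch of paging through every result by chaining listPipelines and listPipelines_next; `service` is assumed to be a client built as in the examples above, and the parent value is a placeholder:

locations = service.projects().locations()
request = locations.listPipelines(
    parent="projects/my-project/locations/us-central1"
)
# Keep requesting pages until listPipelines_next returns None.
while request is not None:
    response = request.execute()
    for pipeline in response.get("pipelines", []):
        print(pipeline["name"])
    request = locations.listPipelines_next(
        previous_request=request, previous_response=response
    )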