gcp_bigquery_table – Creates a GCP Table

New in version 2.8.


Synopsis

  • A Table that belongs to a Dataset.

Requirements

The below requirements are needed on the host that executes this module.

  • python >= 2.6
  • requests >= 2.18.4
  • google-auth >= 1.3.0

Parameters

Parameter Choices/Defaults Comments

auth_kind

string / required

  • application
  • machineaccount
  • serviceaccount

The type of credential used.

clustering

list

added in 2.9

One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
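
A minimal sketch of a clustered table; BigQuery typically requires a partitioned table for clustering, so time_partitioning is set alongside it here, and the column names (customer_id, order_date) are illustrative only:

- name: create a clustered table
  gcp_bigquery_table:
    name: clustered_example
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: clustered_example
    time_partitioning:
      type: DAY
    clustering:
      - customer_id
      - order_date
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present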

dataset

string

Name of the dataset.

description

string

A user-friendly description of the table.

encryption_configuration

dictionary

Custom encryption configuration.

kms_key_name

string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
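
A hedged fragment, assuming a pre-existing Cloud KMS key; the key path below is illustrative only:

encryption_configuration:
  kms_key_name: projects/test_project/locations/us/keyRings/example-ring/cryptoKeys/example-key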

env_type

string

Specifies which Ansible environment you're running this module within.

This should not be set unless you know what you're doing.

This only alters the User Agent string for any API requests.

expiration_time

integer

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely.
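
A sketch of computing this value with Jinja2, assuming facts have been gathered (so ansible_date_time is available) and a 7-day lifetime; the arithmetic converts seconds since the epoch to milliseconds:

expiration_time: "{{ (ansible_date_time.epoch | int + 7 * 24 * 3600) * 1000 }}"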

external_data_configuration

dictionary

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.

autodetect

boolean

  • no
  • yes

Try to detect schema and format options automatically. Any option specified explicitly will be honored.

bigtable_options

dictionary

Additional options if sourceFormat is set to BIGTABLE.

column_families

list

List of column families to expose in the table schema along with their types.

columns

list

List of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs.

encoding

string

The encoding of the values when the type is not STRING.

Some valid choices include: "TEXT", "BINARY"

field_name

string

If the qualifier is not a valid BigQuery field identifier, a valid identifier must be provided as the column field name and is used as the field name in queries.

only_read_latest

boolean

  • no
  • yes

If this is set, only the latest version of the value in this column is exposed.

qualifier_string

string / required

Qualifier of the column.

type

string

The type to convert the value in cells of this column.

Some valid choices include: "BYTES", "STRING", "INTEGER", "FLOAT", "BOOLEAN"

encoding

string

The encoding of the values when the type is not STRING.

Some valid choices include: "TEXT", "BINARY"

family_id

string

Identifier of the column family.

only_read_latest

boolean

  • no
  • yes

If this is set, only the latest version of the value is exposed for all columns in this column family.

type

string

The type to convert the value in cells of this column family.

Some valid choices include: "BYTES", "STRING", "INTEGER", "FLOAT", "BOOLEAN"

ignore_unspecified_column_families

boolean

  • no
  • yes

If this field is true, the column families that are not specified in the columnFamilies list are not exposed in the table schema.

read_rowkey_as_string

boolean

  • no
  • yes

If this field is true, the rowkey column families will be read and converted to string.

compression

string

The compression type of the data source.

Some valid choices include: "GZIP", "NONE"

csv_options

dictionary

Additional properties to set if sourceFormat is set to CSV.

allow_jagged_rows

boolean

  • no
  • yes

Indicates if BigQuery should accept rows that are missing trailing optional columns.

allow_quoted_newlines

boolean

  • no
  • yes

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.

encoding

string

The character encoding of the data.

Some valid choices include: "UTF-8", "ISO-8859-1"

field_delimiter

string

The separator for fields in a CSV file.

quote

string

The value that is used to quote data sections in a CSV file.

skip_leading_rows

integer

Default:

"0"

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

google_sheets_options

dictionary

Additional options if sourceFormat is set to GOOGLE_SHEETS.

skip_leading_rows

integer

Default:

"0"

The number of rows at the top of a Google Sheet that BigQuery will skip when reading the data.

ignore_unknown_values

boolean

  • no
  • yes

Indicates if BigQuery should allow extra values that are not represented in the table schema.

max_bad_records

integer

Default:

"0"

The maximum number of bad records that BigQuery can ignore when reading data.

schema

dictionary

The schema for the data. Schema is required for CSV and JSON formats.

fields

list

Describes the fields in a table.

description

string

The field description.

fields

list

Describes the nested schema fields if the type property is set to RECORD.

mode

string

Field mode.

Some valid choices include: "NULLABLE", "REQUIRED", "REPEATED"

name

string

Field name.

type

string

Field data type.

Some valid choices include: "STRING", "BYTES", "INTEGER", "FLOAT", "TIMESTAMP", "DATE", "TIME", "DATETIME", "RECORD"

source_format

string

The data format.

Some valid choices include: "CSV", "GOOGLE_SHEETS", "NEWLINE_DELIMITED_JSON", "AVRO", "DATASTORE_BACKUP", "BIGTABLE"

source_uris

list

The fully-qualified URIs that point to your data in Google Cloud.

For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
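
A minimal sketch of an external table backed by CSV files in Cloud Storage, using only the sub-options documented above; the bucket, project, and column names are illustrative:

- name: create a table over external CSV data
  gcp_bigquery_table:
    name: external_example
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: external_example
    external_data_configuration:
      source_format: CSV
      source_uris:
        - gs://example-bucket/data/*.csv
      csv_options:
        skip_leading_rows: 1
        field_delimiter: ","
      schema:
        fields:
          - name: user_id
            type: INTEGER
            mode: REQUIRED
          - name: email
            type: STRING
            mode: NULLABLE
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present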

friendly_name

string

A descriptive name for this table.

labels

dictionary

The labels associated with this table. You can use these to organize and group your tables.

name

string

Name of the table.

num_rows

integer

added in 2.9

The number of rows of data in this table, excluding any data in the streaming buffer.

project

string

The Google Cloud Platform project to use.

schema

dictionary

Describes the schema of this table.

fields

list

Describes the fields in a table.

description

string

The field description. The maximum length is 1,024 characters.

fields

list

Describes the nested schema fields if the type property is set to RECORD.

mode

string

The field mode.

Some valid choices include: "NULLABLE", "REQUIRED", "REPEATED"

name

string

The field name.

type

string

The field data type.

Some valid choices include: "STRING", "BYTES", "INTEGER", "FLOAT", "TIMESTAMP", "DATE", "TIME", "DATETIME", "RECORD"
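
A hedged fragment showing a nested schema, where a RECORD field holds sub-fields; all field names are illustrative:

schema:
  fields:
    - name: id
      type: INTEGER
      mode: REQUIRED
    - name: address
      type: RECORD
      mode: NULLABLE
      fields:
        - name: city
          type: STRING
        - name: zip
          type: STRING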

scopes

list

Array of scopes to be used.

service_account_contents

jsonarg

The contents of a Service Account JSON file, either in a dictionary or as a JSON string that represents it.

service_account_email

string

An optional service account email address if machineaccount is selected and the user does not wish to use the default email.

service_account_file

path

The path of a Service Account JSON file if serviceaccount is selected as type.

state

string

  • present

  • absent

Whether the given object should exist in GCP.

table_reference

dictionary

Reference describing the ID of this table.

dataset_id

string

The ID of the dataset containing this table.

project_id

string

The ID of the project containing this table.

table_id

string

The ID of the table.

time_partitioning

dictionary

If specified, configures time-based partitioning for this table.

expiration_ms

integer

Number of milliseconds for which to keep the storage for a partition.

field

string

added in 2.9

If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.

type

string

The only type supported is DAY, which will generate one partition per day.

Some valid choices include: "DAY"
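
A sketch of field-based daily partitioning, assuming an illustrative top-level TIMESTAMP column named created_at and a 90-day partition lifetime:

time_partitioning:
  type: DAY
  field: created_at
  expiration_ms: 7776000000  # 90 days in milliseconds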

view

dictionary

The view definition.

use_legacy_sql

boolean

  • no
  • yes

Specifies whether to use BigQuery's legacy SQL for this view.

user_defined_function_resources

list

Describes user-defined function resources used in the query.

inline_code

string

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

resource_uri

string

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).



Notes

  • For authentication, you can set service_account_file using the GCP_SERVICE_ACCOUNT_FILE env variable.
  • For authentication, you can set service_account_contents using the GCP_SERVICE_ACCOUNT_CONTENTS env variable.
  • For authentication, you can set service_account_email using the GCP_SERVICE_ACCOUNT_EMAIL env variable.
  • For authentication, you can set auth_kind using the GCP_AUTH_KIND env variable.
  • For authentication, you can set scopes using the GCP_SCOPES env variable.
  • Environment variable values will only be used if the playbook values are not set, as shown in the sketch after this list.
  • The service_account_email and service_account_file options are mutually exclusive.
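
A minimal sketch of environment-based authentication; the task-level environment keyword makes these variables visible to the module, and the values shown are illustrative:

- name: create a table without inline credential options
  gcp_bigquery_table:
    name: example_table
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: example_table
    project: test_project
    state: present
  environment:
    GCP_AUTH_KIND: serviceaccount
    GCP_SERVICE_ACCOUNT_FILE: /tmp/auth.pem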


Examples

- name: create a dataset
  gcp_bigquery_dataset:
    name: example_dataset
    dataset_reference:
      dataset_id: example_dataset
    project: "{{ gcp_project }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: dataset

- name: create a table
  gcp_bigquery_table:
    name: example_table
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: example_table
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
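
The return values documented below can be captured with register, and state: absent removes the table. A hedged continuation of the example above, reusing the same illustrative identifiers:

- name: create the table and capture its return values
  gcp_bigquery_table:
    name: example_table
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: example_table
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
  register: table

- name: show the opaque ID assigned to the table
  debug:
    var: table.id

- name: remove the table when it is no longer needed
  gcp_bigquery_table:
    name: example_table
    dataset: example_dataset
    table_reference:
      dataset_id: example_dataset
      project_id: test_project
      table_id: example_table
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: absent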

Return Values

Common return values are documented in the main Ansible documentation; the following are the fields unique to this module:

Key Returned Description

clustering

list

success

One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.


creationTime

integer

success

The time when this table was created, in milliseconds since the epoch.


dataset

string

success

Name of the dataset.


description

string

success

A user-friendly description of the table.


encryptionConfiguration

complex

success

Custom encryption configuration.


kmsKeyName

string

success

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.


expirationTime

integer

success

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely.


externalDataConfiguration

complex

success

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.


autodetect

boolean

success

Try to detect schema and format options automatically. Any option specified explicitly will be honored.


bigtableOptions

complex

success

Additional options if sourceFormat is set to BIGTABLE.


columnFamilies

complex

success

List of column families to expose in the table schema along with their types.


columns

complex

success

List of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs.


encoding

string

success

The encoding of the values when the type is not STRING.


fieldName

string

success

If the qualifier is not a valid BigQuery field identifier, a valid identifier must be provided as the column field name and is used as the field name in queries.


onlyReadLatest

boolean

success

If this is set, only the latest version of the value in this column is exposed.


qualifierString

string

success

Qualifier of the column.


type

string

success

The type to convert the value in cells of this column.


encoding

string

success

The encoding of the values when the type is not STRING.


familyId

string

success

Identifier of the column family.


onlyReadLatest

boolean

success

If this is set, only the latest version of the value is exposed for all columns in this column family.


type

string

success

The type to convert the value in cells of this column family.


ignoreUnspecifiedColumnFamilies

boolean

success

If this field is true, the column families that are not specified in the columnFamilies list are not exposed in the table schema.


readRowkeyAsString

boolean

success

If this field is true, the rowkey column families will be read and converted to string.


compression

string

success

The compression type of the data source.


csvOptions

complex

success

Additional properties to set if sourceFormat is set to CSV.


allowJaggedRows

boolean

success

Indicates if BigQuery should accept rows that are missing trailing optional columns.


allowQuotedNewlines

boolean

success

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.


encoding

string

success

The character encoding of the data.


fieldDelimiter

string

success

The separator for fields in a CSV file.


quote

string

success

The value that is used to quote data sections in a CSV file.


skipLeadingRows

integer

success

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.


googleSheetsOptions

complex

success

Additional options if sourceFormat is set to GOOGLE_SHEETS.


skipLeadingRows

integer

success

The number of rows at the top of a Google Sheet that BigQuery will skip when reading the data.


ignoreUnknownValues

boolean

success

Indicates if BigQuery should allow extra values that are not represented in the table schema.


maxBadRecords

integer

success

The maximum number of bad records that BigQuery can ignore when reading data.


schema

complex

success

The schema for the data. Schema is required for CSV and JSON formats.


fields

complex

success

Describes the fields in a table.


description

string

success

The field description.


fields

list

success

Describes the nested schema fields if the type property is set to RECORD.


mode

string

success

Field mode.


name

string

success

Field name.


type

string

success

Field data type.


sourceFormat

string

success

The data format.


sourceUris

list

success

The fully-qualified URIs that point to your data in Google Cloud.

For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.


friendlyName

string

success

A descriptive name for this table.


id

string

success

An opaque ID uniquely identifying the table.


labels

dictionary

success

The labels associated with this table. You can use these to organize and group your tables.


lastModifiedTime

integer

success

The time when this table was last modified, in milliseconds since the epoch.


location

string

success

The geographic location where the table resides. This value is inherited from the dataset.


name

string

success

Name of the table.


numBytes

integer

success

The size of this table in bytes, excluding any data in the streaming buffer.


numLongTermBytes

integer

success

The number of bytes in the table that are considered "long-term storage".


numRows

integer

success

The number of rows of data in this table, excluding any data in the streaming buffer.


requirePartitionFilter

boolean

success

If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.


schema

complex

success

Describes the schema of this table.


fields

complex

success

Describes the fields in a table.


description

string

success

The field description. The maximum length is 1,024 characters.


fields

list

success

Describes the nested schema fields if the type property is set to RECORD.


mode

string

success

The field mode.


name

string

success

The field name.


type

string

success

The field data type.


streamingBuffer

complex

success

Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.


estimatedBytes

integer

success

A lower-bound estimate of the number of bytes currently in the streaming buffer.


estimatedRows

integer

success

A lower-bound estimate of the number of rows currently in the streaming buffer.


oldestEntryTime

integer

success

Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.


tableReference

complex

success

Reference describing the ID of this table.


datasetId

string

success

The ID of the dataset containing this table.


projectId

string

success

The ID of the project containing this table.


tableId

string

success

The ID of the table.


timePartitioning

complex

success

If specified, configures time-based partitioning for this table.


expirationMs

integer

success

Number of milliseconds for which to keep the storage for a partition.


field

string

success

If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.


type

string

success

The only type supported is DAY, which will generate one partition per day.


type

string

success

Describes the table type.


view

complex

success

The view definition.


useLegacySql

boolean

success

Specifies whether to use BigQuery's legacy SQL for this view.


userDefinedFunctionResources

complex

success

Describes user-defined function resources used in the query.


inlineCode

string

success

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.


resourceUri

string

success

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).





Authors

  • Google Inc. (@googlecloudplatform)


