How to read a BigQuery table in a Python pipeline on GCP Dataflow

Time: 2023-01-30 15:46:42

Could someone please share the syntax to read/write a BigQuery table in a pipeline written in Python for GCP Dataflow?

2 Answers

#1


Run on Dataflow

First, construct a Pipeline with the following options for it to run on GCP Dataflow:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>}
pipeline_options = PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)
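Note: depending on your setup, the Dataflow runner also expects a GCS location for staging temporary files; if the job complains about a missing temp location, add one to the options above (the bucket name here is a placeholder, not part of the original answer):

options['temp_location'] = 'gs://<bucket>/tmp'  # GCS path Dataflow can use for temp/staging files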

Read from BigQuery

Define a BigQuerySource with your query and use beam.io.Read to read data from BQ:

BQ_source = beam.io.BigQuerySource(query=<query>)
BQ_data = pipeline | beam.io.Read(BQ_source)
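In newer Beam releases BigQuerySource is deprecated; beam.io.ReadFromBigQuery is a PTransform you can apply directly, without wrapping it in beam.io.Read. A minimal sketch of the equivalent read, assuming the same placeholder query:

BQ_data = pipeline | 'ReadFromBQ' >> beam.io.ReadFromBigQuery(
    query=<query>,
    use_standard_sql=True)  # BigQuery defaults to legacy SQL otherwise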

Write to BigQuery

There are two options to write to BigQuery (a combined end-to-end sketch follows the list):

  • Use a BigQuerySink and beam.io.Write:

    BQ_sink = beam.io.BigQuerySink(<table>, dataset=<dataset>, project=<project>)
    BQ_data | beam.io.Write(BQ_sink)
    
  • Use beam.io.WriteToBigQuery:

    BQ_data | beam.io.WriteToBigQuery(<table>, dataset=<dataset>, project=<project>)
    
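Putting the pieces together, a minimal end-to-end sketch (reusing the options dict from above; the query, destination table, and schema here are hypothetical placeholders, not from the original answer):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions(flags=[], **options)) as p:
    (p
     | 'Read' >> beam.io.ReadFromBigQuery(
           query='SELECT name, value FROM `my-project.my_dataset.src`',  # hypothetical source
           use_standard_sql=True)
     | 'Write' >> beam.io.WriteToBigQuery(
           'my-project:my_dataset.dst',  # hypothetical destination table
           schema='name:STRING, value:INTEGER',
           create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))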

#2


Reading from BigQuery

rows = (p | 'ReadFromBQ' >> beam.io.Read(
    beam.io.BigQuerySource(query=QUERY, use_standard_sql=True)))

Writing to BigQuery

rows | 'writeToBQ' >> beam.io.Write(
    beam.io.BigQuerySink(
        '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST),
        schema='CONVERSATION:STRING, LEAD_ID:INTEGER',
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
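In newer Beam releases BigQuerySink is also deprecated in favor of beam.io.WriteToBigQuery, which takes the same table string, schema, and dispositions. A sketch of the equivalent write, assuming the same PROJECT, BQ_DATASET_ID, and BQ_TEST variables:

rows | 'writeToBQ' >> beam.io.WriteToBigQuery(
    '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST),
    schema='CONVERSATION:STRING, LEAD_ID:INTEGER',
    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
    write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)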
