CloudSQL to BigQuery Dataflow Pipeline in GCP

Introduction

Moving data between Cloud SQL and BigQuery is fairly straightforward with federated queries. However, federated queries are not available for Cloud SQL instances created with a private IP address, which may be the only option in many organisations due to security constraints. As an alternative, a Dataflow pipeline can be built to do the job. Moreover, there is a template readily available (JDBC to BigQuery) which, in an ideal world, would have made this approach easy as well. However, some of the details are not quite obvious. At least, they weren't for me: I spent a few days building a working pipeline and in the end had to ask a GCP expert for help. In this blog article I try to address these issues to make life easier for other people facing the same challenge. In my example I'm using a Postgres Cloud SQL instance, although I would expect the MySQL case to be very similar, if not identical.

Prerequisites

  • A Postgres Cloud SQL instance with private IP and a database in it (might require enabling Compute Engine API if not enabled already)
  • A BigQuery dataset and a table in it (for simplicity, make it with just one column X of type NUMERIC)
  • Enabled Dataflow API
  • A storage bucket
  • Maven (either on your local machine or on a Compute Engine instance) for building the jars
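
If you prefer the command line, most of these prerequisites can be set up with the standard tools. Here is a minimal sketch, assuming the default project is already configured; the bucket, dataset and table names are placeholders:

    # enable the required APIs (if not enabled already)
    gcloud services enable compute.googleapis.com dataflow.googleapis.com

    # a bucket for the jars and for Dataflow temp/staging files
    gsutil mb gs://my-dataflow-bucket

    # a dataset and a single-column table to load into
    bq mk --dataset my_dataset
    bq mk --table my_dataset.my_table x:NUMERIC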

Jar files

The Dataflow template is “JDBC to BigQuery”, not “Cloud SQL to BigQuery”, so you’re going to need JDBC drivers for Cloud SQL. The important part is that you need both the so-called “socket factory” JDBC driver and the actual JDBC driver (e.g. the Postgres one, if that’s what your Cloud SQL database is).

The “socket factory” jar can be built from the source code available here: https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory. Build instructions in the readme file are fairly trivial: you just need Maven installed, then copy and paste the build line from the readme into your terminal window. The result will appear in the target directory; be sure to pick the jar with the required dependencies (mine was called postgres-socket-factory-1.3.1-SNAPSHOT-jar-with-dependencies.jar). The appropriate JDBC driver jar for Postgres can be found at https://jdbc.postgresql.org/download.html.
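
For reference, the build looked roughly like this. I’m quoting the Maven command from memory, so copy the exact invocation from the repo readme in case it has changed between versions:

    git clone https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory.git
    cd cloud-sql-jdbc-socket-factory
    # profile name as per the readme at the time of writing
    mvn -P jar-with-dependencies clean package -DskipTests
    # the fat jar ends up under a target/ directory, e.g.
    # postgres-socket-factory-1.3.1-SNAPSHOT-jar-with-dependencies.jar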

Once you have the jar files, you’ll need to upload them to your storage bucket using gsutil or the web console, as you prefer.
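
For example, with gsutil (the bucket name and the driver version are placeholders, use your own):

    gsutil cp postgres-socket-factory-1.3.1-SNAPSHOT-jar-with-dependencies.jar gs://my-dataflow-bucket/jars/
    gsutil cp postgresql-42.2.18.jar gs://my-dataflow-bucket/jars/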

Creating the job

At this point, you should be able to go to the Dataflow page in the Cloud console and create the job as shown in the screenshot below.

You might need to click on “Show optional parameters” to be able to specify the username and the password for the database connection.
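
If you’d rather script this than click through the console, the same classic template can be launched with gcloud along the lines of the sketch below. The parameter names are those of the classic Jdbc_to_BigQuery template as I remember them, so double-check them against the hints in the template form; every value shown is a placeholder.

    # build the template parameters; the ^~^ prefix tells gcloud to use ~ instead of ,
    # as the separator, because the driverJars value itself contains a comma
    PARAMS='connectionURL=jdbc:postgresql://10.1.2.3:5432/mydb'
    PARAMS+='~driverClassName=org.postgresql.Driver'
    PARAMS+='~driverJars=gs://my-dataflow-bucket/jars/postgres-socket-factory-1.3.1-SNAPSHOT-jar-with-dependencies.jar,gs://my-dataflow-bucket/jars/postgresql-42.2.18.jar'
    PARAMS+='~query=SELECT x FROM my_source_table'
    PARAMS+='~outputTable=my-project:my_dataset.my_table'
    PARAMS+='~bigQueryLoadingTemporaryDirectory=gs://my-dataflow-bucket/temp'
    PARAMS+='~username=my_db_user'
    PARAMS+='~password=my_db_password'

    gcloud dataflow jobs run cloudsql-to-bq \
        --region=europe-west1 \
        --gcs-location=gs://dataflow-templates/latest/Jdbc_to_BigQuery \
        --parameters="^~^${PARAMS}"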

Details to pay attention to

Coming from an Oracle database background, I sometimes tend to resolve problems by experimenting a little, letting error messages guide me when I’m doing something wrong. In the cloud world, error messages aren’t always as helpful, so this approach isn’t necessarily productive, and attention to detail can save you a lot of time. In particular, pay attention to the following:

  1. The JDBC URL needs to be correctly formed. The format that worked for me is jdbc:postgresql://<database_instance_IP_address>:5432/<database_name>. Interestingly, the Cloud SQL socket factory documentation mentions two URL formats, a brief one and a complete one, but neither has the form above and neither worked for me
  2. With Postgres, creating an instance doesn’t mean you have a database, so don’t get caught out by that. Create the database and use its name (not the instance name) in the final portion of the JDBC URL
  3. Don’t forget to include both jars: the socket factory one and the actual driver one. Use a comma to separate their paths, as the hint in the template suggests
  4. Be sure to include the dataset name with the BigQuery table name and use the correct format (see the screenshot)
  5. The query against the source database should produce the same schema as the target table in BigQuery. Basically, this means that the column names must match (see the sketch after this list)
  6. Forgetting to specify the database username and password when submitting the job is likely to cause it to fail (although in this case at least the error message should be clear enough to make you realise what’s wrong).
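
On point 5, a quick way to sanity-check the match is to compare the target table’s columns with what your query returns. A small sketch (table and column names are made up):

    # show the target table's schema to confirm the column names
    bq show --schema --format=prettyjson my_dataset.my_table

    # the source query must expose exactly the same column names,
    # aliasing Postgres columns if necessary, e.g.
    #   SELECT amount AS x FROM my_source_table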

Troubleshooting

If the steps above don’t work as expected, start by checking APIs and permissions. If that’s not it and the error message is not particularly clear, try looking in the temp/staging directories specified in the job parameters; you might find useful clues there.
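
For instance, something along these lines (region, job id and bucket are placeholders):

    # list recent jobs and find the failed one
    gcloud dataflow jobs list --region=europe-west1

    # look at the job's details and error information
    gcloud dataflow jobs describe <job-id> --region=europe-west1

    # poke around the temp/staging locations specified in the job parameters
    gsutil ls -r gs://my-dataflow-bucket/temp/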
