Create a destination for GCP + GCS + Databricks or Snowflake

Last updated: Oct 31, 2024

For cloud connectivity with Redox, you decide which cloud provider and cloud product(s) to use. Then, you'll need to create a cloud destination in your Redox organization.

You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.

Cloud products

This article is for this combination of cloud products:

  • Google Cloud Platform (GCP)
  • Google Cloud Storage (GCS)

or

  • Google Cloud Platform (GCP)
  • Google Cloud Storage (GCS)
  • Databricks or Snowflake

Configure in GCP

  1. Navigate to the GCP dashboard and log in.
  2. Create a service account in your GCP project.
  3. Grant the service account read/write access to the GCS bucket.
  4. Create a new key for your service account.
  5. A new key file is downloaded. Save this file, since you'll need it for Redox setup later.
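
If you prefer the command line, the steps above can be sketched with the gcloud CLI. This is a minimal sketch: the project, bucket, and service account names are placeholders, and roles/storage.objectAdmin is one role that grants object read/write access; follow your own access policies.

  Example: GCP setup with the gcloud CLI (placeholder names)
  bash
  # Create a service account in your GCP project.
  gcloud iam service-accounts create redox-sa \
    --project=my-project \
    --display-name="Redox destination"

  # Grant the service account read/write access to the GCS bucket.
  gcloud storage buckets add-iam-policy-binding gs://my-redox-bucket \
    --member="serviceAccount:redox-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"

  # Create a key; save the downloaded key file for Redox setup later.
  gcloud iam service-accounts keys create redox-sa-key.json \
    --iam-account=redox-sa@my-project.iam.gserviceaccount.com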

Create a cloud destination in Redox

Next, create a cloud destination in your Redox organization. This destination is where Redox pushes your data.

In the dashboard

  1. From the Product type field, select Databricks or Snowflake if you're using one of those cloud products with GCS; your GCS settings are ingested along with the additional cloud product. Select Cloud Storage if you're not also using Databricks or Snowflake.
  2. For the configure destination step, populate these fields.
    1. Bucket name: Enter the GCS bucket name. Locate this value in the GCP dashboard.
    2. Object filename prefix: Enter any prefix you want prepended to new files when they're created in the GCS bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory. See the sketch after this list for the resulting object paths.
  3. Click the Next button.
  4. For the auth credential step, either a drop-down list of existing auth credentials displays or a new auth credential form opens. Learn how to create an auth credential for OAuth 2.0 2-legged with JWT for GCP.
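
To illustrate the Object filename prefix behavior, here's a sketch of the resulting object paths in the GCS bucket (filenames are the prefix plus the log ID, as described in the API section below):

  Example: Resulting object paths for different prefix values (illustrative)
  No prefix:        gs://<bucket_name>/<log_id>
  Prefix redox-:    gs://<bucket_name>/redox-<log_id>
  Prefix redox/:    gs://<bucket_name>/redox/<log_id>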

With the Redox Platform API

  1. In your terminal, prepare the /v1/authcredentials request.
  2. Specify these values in the request.
    • Locate the keyId, clientId (value of client_email), and privateKey values in the downloaded key file from GCP.
    • Make sure the auth credential's environment ID matches the environment ID of the destination you intend to link it to.
    • Use the values we provide in the example for authStrategy, grantType, algorithm, url, and scopes. These are fixed values.
      Example: Create auth credential for GCP + GCS + Databricks or Snowflake
      bash
      curl 'https://api.redoxengine.com/platform/v1/authcredentials' \
        --request POST \
        --header "Authorization: Bearer $API_TOKEN" \
        --header 'accept: application/json' \
        --header 'content-type: application/json' \
        --data '{
          "organization": "<Redox_organization_id>",
          "name": "<human_readable_name_for_auth_credential>",
          "environmentId": "<Redox_environment_ID>",
          "authStrategy": "OAuth_2.0_JWT_GCP",
          "grantType": "client_credentials",
          "url": "https://oauth2.googleapis.com/token",
          "algorithm": "RS256",
          "clientId": "<client_email>",
          "scopes": "https://www.googleapis.com/auth/devstorage.read_write",
          "keyId": "<keyId_from_GCP>",
          "privateKey": "<privateKey_from_GCP>"
        }'
  3. You should get a successful response with details for the new auth credential.
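    The Platform API reference defines the exact response schema; as a rough sketch only, the response includes an ID for the new auth credential, which you'll reference in the destination request. The field names below are assumptions, not the documented schema.
      Example: Sketch of a possible auth credential response
      json
      {
        "id": "<auth_credential_id>",
        "name": "<human_readable_name_for_auth_credential>",
        "environmentId": "<Redox_environment_ID>",
        "authStrategy": "OAuth_2.0_JWT_GCP"
      }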
  4. In your terminal, prepare the /v1/environments/{environmentId}/destinations request.
  5. Specify these values in the request.
    • Set authCredential to the auth credential ID from the response you received in step #3.
    • Populate cloudProviderSettings with the settings below.
      • The fileNamePrefix is optional. If specified, the filename is the prefix you define followed by the log ID. If not specified, the filename is simply the log ID. You can append / to the prefix to indicate a directory path. A full request sketch follows the example below.
        Example: Values for GCP + GCS + Databricks or Snowflake cloudProviderSettings
        json
        {
          "cloudProviderSettings": {
            "typeId": "gcp",
            "productId": "<databricks_or_snowflake_or_gcs>",
            "settings": {
              "bucketName": "<bucket_name_from_GCS>",
              "fileNamePrefix": "<optional_prefix>"
            }
          }
        }
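      Putting it together, a full destinations request might look like this sketch. The authCredential and cloudProviderSettings fields come from the steps above; the name field and the overall body shape are assumptions, so check the Platform API reference for the exact schema.
      Example: Sketch of a create destination request for GCP
      bash
      curl 'https://api.redoxengine.com/platform/v1/environments/<Redox_environment_ID>/destinations' \
        --request POST \
        --header "Authorization: Bearer $API_TOKEN" \
        --header 'accept: application/json' \
        --header 'content-type: application/json' \
        --data '{
          "name": "<human_readable_name_for_destination>",
          "authCredential": "<auth_credential_id>",
          "cloudProviderSettings": {
            "typeId": "gcp",
            "productId": "<databricks_or_snowflake_or_gcs>",
            "settings": {
              "bucketName": "<bucket_name_from_GCS>",
              "fileNamePrefix": "<optional_prefix>"
            }
          }
        }'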
  6. You should get a successful response with details for the new destination for GCP.
  7. Your new destination can now receive messages. Messages are stored in your GCS bucket, where Databricks or Snowflake can pick them up for further processing.