Last updated: Oct 31, 2024
Closed beta feature
For cloud connectivity with Redox, you decide which cloud provider and cloud product(s) to use. Then, you'll need to create a cloud destination in your Redox organization.
You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.
This article applies to either of these combinations of cloud products:
- Google Cloud Platform (GCP) with Google Cloud Storage (GCS); or
- GCP with GCS, plus Databricks or Snowflake.
First, set up a service account in GCP:
- Navigate to the GCP dashboard and log in.
- Create a service account in your GCP project.
- Grant the service account read/write access to the GCS bucket.
- Create a new key for your service account.
- A new key file is downloaded. Save this file, since you'll need it for Redox setup later.
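The downloaded key file is a standard GCP service-account key in JSON format. As a sketch, you could pull out the three values Redox setup needs later (the file name in the usage comment is hypothetical):

```python
import json

def extract_redox_values(key_file_path):
    """Read a GCP service-account key file (standard GCP JSON key
    format) and return the three values Redox setup needs later."""
    with open(key_file_path) as f:
        key = json.load(f)
    return {
        "keyId": key["private_key_id"],
        "clientId": key["client_email"],  # Redox's clientId is the key's client_email
        "privateKey": key["private_key"],
    }

# e.g. extract_redox_values("my-service-account-key.json")
```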
Next, create a cloud destination in your Redox organization. This destination is where your data will be pushed.
To set up in the dashboard:
- For the select destination step, follow the instructions for creating a cloud destination.
- From the Product type field, select Databricks or Snowflake if you're using one of those cloud products with GCS; your GCS settings are captured as part of that product's setup. If you're only using GCS, select Cloud Storage.
- For the configure destination step, populate these fields.
- Bucket name: Enter the GCS bucket name. Locate this value in the GCP dashboard.
- Object filename prefix: Enter any prefix you want prepended to new files when they're created in the GCS bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory.
- Click the Next button.
- For the auth credential step, either a drop-down list of existing auth credentials displays or a form for creating a new auth credential opens. Learn how to create an auth credential for OAuth 2.0 2-legged with JWT for GCP.
- For the verify step, follow the instructions for verifying a destination.
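Based on the Object filename prefix description above, the resulting object name is simply the prefix concatenated with the log ID; a minimal sketch (the log ID format is illustrative):

```python
def object_key(log_id, prefix=""):
    """Build the object filename Redox creates in the GCS bucket:
    the optional prefix followed by the log ID. A prefix ending in
    "/" places the file in a subdirectory of the bucket."""
    return f"{prefix}{log_id}"

# object_key("0190abc", prefix="redox/") -> "redox/0190abc"
# object_key("0190abc")                  -> "0190abc"
```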
Alternatively, you can perform the same setup with the Redox Platform API.
- In your terminal, prepare the /v1/authcredentials request.
- Specify these values in the request.
- Locate the keyId, clientId (value of client_email), and privateKey values in the downloaded key file from GCP.
- Make sure the auth credential's environment ID matches the environment ID of the destination you intend to link it to.
- Use the values we provide in the example for authStrategy, grantType, algorithm, url, and scopes. These are fixed values.
- You should get a successful response with details for the new auth credential.
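As a sketch only, the request body might be assembled like this. The exact schema and the fixed values for authStrategy, grantType, algorithm, url, and scopes come from Redox's example and Platform API docs, so they're passed in as a placeholder dict here:

```python
import json

def build_auth_credential_request(environment_id, key_id, client_email,
                                  private_key, fixed_values):
    """Assemble a /v1/authcredentials request body. Field names follow
    the article; fixed_values must hold the authStrategy, grantType,
    algorithm, url, and scopes values from Redox's example (not
    reproduced here). The structure is an assumption, not the exact schema."""
    body = {
        "environmentId": environment_id,  # must match the destination's environment
        "keyId": key_id,
        "clientId": client_email,         # clientId is the key file's client_email
        "privateKey": private_key,
        **fixed_values,
    }
    return json.dumps(body)
```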
- In your terminal, prepare the /v1/environments/{environmentId}/destinations request.
- Specify these values in the request.
- Set authCredential to the auth credential ID from the /v1/authcredentials response you received earlier.
- Populate cloudProviderSettings with the settings below.
- The fileNamePrefix is optional. If specified, the filename is the prefix you define followed by the log ID. If not specified, the filename is simply the log ID.
- You should get a successful response with details for the new destination for GCP.
- Your new destination can now receive messages. These messages are stored in your GCS bucket, where Databricks or Snowflake can pick them up for further processing.
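The destination request above could be assembled like this sketch. The field names inside cloudProviderSettings other than fileNamePrefix are assumptions; check Redox's example for the exact schema, including the product ID for multiple cloud products with GCP:

```python
import json

def build_destination_request(auth_credential_id, bucket_name,
                              product_id=None, file_name_prefix=None):
    """Assemble a /v1/environments/{environmentId}/destinations request
    body per the steps above. bucketName and productId are assumed
    field names; fileNamePrefix is named in the article and optional."""
    settings = {"bucketName": bucket_name}
    if file_name_prefix:
        settings["fileNamePrefix"] = file_name_prefix
    if product_id:
        settings["productId"] = product_id  # Databricks/Snowflake combos (assumed name)
    body = {
        "authCredential": auth_credential_id,  # ID from the /v1/authcredentials response
        "cloudProviderSettings": settings,
    }
    return json.dumps(body)
```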