Create a destination for AWS + S3, Databricks, or Snowflake

Last updated: Oct 31, 2024

For cloud connectivity with Redox, you decide which cloud provider and cloud product(s) to use. Then, you'll need to create a cloud destination in your Redox organization.

You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.

Cloud products

This article applies to either of these combinations of cloud products:

  • Amazon Web Services (AWS)
  • AWS S3

or

  • Amazon Web Services (AWS)
  • AWS S3
  • Databricks or Snowflake

Configure in AWS

  1. Navigate to the AWS dashboard and log in.
  2. Create an S3 bucket (or choose an existing one) and an IAM user for Redox to write with.
  3. Attach a policy to the IAM user that allows PutObject actions against the S3 bucket; see the example policy after this list.
  4. Generate an access key and secret pair for the IAM user. Save the secret key, since it will only be visible once. You'll need it for Redox setup later.
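
For reference, here's a minimal sketch of such an IAM policy. The bucket name example-redox-bucket and the statement ID are placeholders; substitute your own values, and add any extra statements or conditions your security requirements call for.

  Example: IAM policy allowing PutObject against an S3 bucket (illustrative sketch)

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowRedoxPutObject",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-redox-bucket/*"
      }
    ]
  }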

Create a cloud destination in Redox

Next, create a cloud destination in your Redox organization. This destination is where your data will be pushed.

In the dashboard

  1. For the product type step, select Databricks or Snowflake from the Product type field if you're using one of those cloud products with S3; your S3 settings are configured along with the additional cloud product. Select S3 if you're not also using Databricks or Snowflake.
  2. For the configure destination step, populate these fields. Then click the Next button.
    1. Bucket name: Enter the S3 bucket name. Locate this value in the AWS dashboard.
    2. Object key prefix (optional): Enter any prefix you want prepended to new files when they're created in the S3 bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory.
  3. For the auth credential step, either a drop-down list of existing auth credentials displays or a new auth credential form opens. Learn how to create an auth credential for AWS Sigv4.

With the Redox Platform API

  1. In your terminal, prepare the /v1/authcredentials request.
  2. Specify these values in the request.
    • Locate the accessKeyId and secretAccessKey values in the AWS dashboard; use them as the accessKey and secretKey values in the request.
      Example: Create auth credential for AWS S3 + Databricks or Snowflake

      curl 'https://api.redoxengine.com/platform/v1/authcredentials' \
      --request POST \
      --header "Authorization: Bearer $API_TOKEN" \
      --header 'accept: application/json' \
      --header 'content-type: application/json' \
      --data '{
        "organization": "<Redox_organization_id>",
        "name": "<human_readable_name_for_auth_credential>",
        "environmentId": "<Redox_environment_ID>",
        "authStrategy": "AwsSigV4",
        "accessKey": "<access_key_from_AWS>",
        "secretKey": "<secret_key_from_AWS>",
        "serviceName": "s3",
        "awsRegion": "<aws_region_of_AWS_S3_bucket>"
      }'
  3. You should get a successful response with details for the new auth credential.
  4. In your terminal, prepare the /v1/environments/{environmentId}/destinations request.
  5. Specify these values in the request. A sketch of the complete request appears after this list.
    • Set authCredential to the auth credential ID from the response you received in step #3.
    • Populate cloudProviderSettings with these settings.
      • Locate the bucketName in the AWS dashboard. The keyPrefix is optional; if provided, it gets prepended to the created file path in AWS S3. Append / to the prefix to indicate a directory path.
        Example: Values for S3, Databricks, or Snowflake cloudProviderSettings

        {
          "cloudProviderSettings": {
            "typeId": "aws",
            "productId": "<s3_databricks_or_snowflake>",
            "settings": {
              "bucketName": "<bucket_name>",
              "keyPrefix": "<optional_prefix>"
            }
          }
        }
  6. You should get a successful response with details for the new destination.
  7. Your new destination can now receive messages. Each message pushed to this destination creates a file in the S3 bucket, named based on the log ID of the message.
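
Putting steps 4 and 5 together, here's a sketch of what the full destination request could look like. Treat it as illustrative only: the name field and the exact body shape are assumptions (only authCredential and cloudProviderSettings are described above), so check the Redox Platform API reference for the authoritative schema.

  Example: Create destination for AWS S3 + Databricks or Snowflake (illustrative sketch)

  # $ENVIRONMENT_ID and $API_TOKEN are placeholders you set beforehand.
  # The "name" field and overall body shape are assumptions for illustration.
  curl "https://api.redoxengine.com/platform/v1/environments/$ENVIRONMENT_ID/destinations" \
  --request POST \
  --header "Authorization: Bearer $API_TOKEN" \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --data '{
    "name": "<human_readable_name_for_destination>",
    "authCredential": "<auth_credential_id_from_step_3>",
    "cloudProviderSettings": {
      "typeId": "aws",
      "productId": "<s3_databricks_or_snowflake>",
      "settings": {
        "bucketName": "<bucket_name>",
        "keyPrefix": "<optional_prefix>"
      }
    }
  }'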