Last updated: Dec 13, 2024
For cloud connectivity with Redox, you decide which cloud provider and cloud product(s) to use. Then, you'll need to create a cloud destination in your Redox organization.
You'll need to perform some steps in your cloud product(s) and some in Redox. You can perform Redox setup in our dashboard or with the Redox Platform API.
This article is for this combination of cloud products:
- Amazon Web Services (AWS)
- AWS S3
or
- Amazon Web Services (AWS)
- AWS S3
- Databricks or Snowflake
- Navigate to the AWS dashboard and log in.
- Create an S3 bucket for Redox to write to and an IAM user for Redox to authenticate as, if you don't have them already.
- Attach a policy to the IAM user that allows PutObject actions against the new S3 bucket.
- Generate an access key and secret key pair for the IAM user. Save the secret key, since it's only visible once. You'll need it for Redox setup later.
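As a sketch, a minimal IAM policy for this step might look like the following. The bucket name `my-redox-bucket` is a placeholder; scope the Resource to your own bucket.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-redox-bucket/*"
    }
  ]
}
```

The trailing `/*` allows writes to any object key in the bucket; you can narrow it further (e.g., to a prefix) if you plan to use an object key prefix.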
Next, create a cloud destination in your Redox organization. This destination is where your data gets pushed.
- For the select destination step, follow the instructions for creating a cloud destination.
- From the Product type field, select Databricks or Snowflake if you're using one of those cloud products with S3; your S3 settings are ingested along with the additional cloud product. If you're not also using Databricks or Snowflake, select S3.
- For the configure destination step, populate these fields. Then click the Next button.
- Bucket name: Enter the S3 bucket name. You can locate this value in the AWS dashboard. Review AWS's bucket naming requirements if you're creating a new bucket.
- Object key prefix (optional): Enter any prefix you want prepended to new files when they're created in the S3 bucket. Add / to put the files in a subdirectory. For example, redox/ puts all the files in the redox directory.
- For the auth credential step, either a drop-down list of existing auth credentials displays or a new auth credential form opens. Learn how to create an auth credential for AWS SigV4.
- For the verify step, follow the instructions for verifying a destination.
- To set up with the Redox Platform API instead, prepare the /v1/authcredentials request in your terminal.
- Specify these values in the request.
- Locate the accessKeyId and secretAccessKey values in the AWS dashboard.
- You should get a successful response with details for the new auth credential.
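As a rough sketch, the request body might look like the following. The name and credential values are placeholders, and the exact field names are assumptions to confirm against the Redox Platform API reference.

```json
{
  "authCredential": {
    "name": "AWS S3 destination credential",
    "authStrategy": "aws-sig-v4",
    "accessKeyId": "AKIAEXAMPLEEXAMPLE",
    "secretAccessKey": "example-secret-access-key"
  }
}
```

Save the auth credential ID from the response; you'll reference it when creating the destination.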
- In your terminal, prepare the /v1/environments/{environmentId}/destinations request.
- Specify these values in the request.
- Set authCredential to the auth credential ID from the /v1/authcredentials response you received earlier.
- Populate cloudProviderSettings with these settings.
- Locate the bucketName in the AWS dashboard. The keyPrefix is optional; if added, it gets prepended to the created file path in AWS S3.
- You should get a successful response with details for the new destination.
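A hedged sketch of the cloudProviderSettings values for S3 (with or without Databricks or Snowflake) follows. The bucket name and prefix are placeholders, and the property nesting is an assumption to check against the Redox Platform API reference.

```json
{
  "cloudProviderSettings": {
    "bucketName": "my-redox-bucket",
    "keyPrefix": "redox/"
  }
}
```

With this keyPrefix, files created by Redox land under the redox/ directory in the bucket.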
- Your new destination can now receive messages. Each message pushed to this destination creates a file in the S3 bucket, named based on the log ID of the message.