Orchestrate your unique data workflows

Last updated: Feb 17, 2026

With Redox, you can orchestrate how data moves across your healthcare ecosystem. Orchestrating means you control how and when data moves with conditional rules and custom logic.

Orchestration treats data processing like a set of building blocks that can be assembled to solve specific problems rather than laying out a static assembly line that runs the same way every time.

Use cases

Some common use cases for custom rule processing:

  • Multi-destination routing: Send the same data to multiple destinations at once, based on rules or message type.
    • Example: Send to your app but also to your FHIR® store.
  • Conditional rules: Define business logic that decides how data should be processed or routed based on conditions.
    • Example: Send CHF discharges to PCP and cardio clinic; all others go only to PCP.
  • Data fetching and bundling: Pull in extra data automatically and package it together in one payload.
    • Example: If you see a discharge, get patient summary from HIE.
  • Enrichment: Partway through processing, you can gather data from a partner vendor to enhance existing data before it's delivered to a destination.
    • Example: Normalize a list of procedures into CPT codes.
    • Example: Extract ICD-10 codes from clinical notes.
    • Example: Match result against EMPI to add correct patient ID.
  • Scoring and calculations: Perform real-time computations on data so destination systems don't need to handle raw fields.
    • Example: Calculate patient's BMI from height and weight.
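To make the routing use cases concrete, here's a minimal sketch of a conditional rule like the CHF discharge example above. The event shape, field names, and destination identifiers are illustrative assumptions, not Redox schema or API names.

```python
# Hypothetical conditional routing rule: CHF discharges fan out to both
# the PCP and the cardiology clinic; all other discharges go only to
# the PCP. Field names here are illustrative, not Redox schema names.

def route_discharge(event: dict) -> list[str]:
    """Return the list of destinations for a discharge event."""
    destinations = ["pcp"]
    # "diagnosis_codes" is an assumed field; I50.9 is the ICD-10 code
    # for congestive heart failure, unspecified.
    if "I50.9" in event.get("diagnosis_codes", []):
        destinations.append("cardiology-clinic")
    return destinations

# A CHF discharge is routed to two destinations:
chf_event = {"type": "discharge", "diagnosis_codes": ["I50.9"]}
print(route_discharge(chf_event))  # ['pcp', 'cardiology-clinic']

# Any other discharge goes only to the PCP:
other_event = {"type": "discharge", "diagnosis_codes": ["J18.9"]}
print(route_discharge(other_event))  # ['pcp']
```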

Check out this case study to see orchestration in action.

Benefits

  • Use modular steps to accomplish your workflow.
  • Condense or combine processing steps.
  • Have visibility into your processing rather than a "black box" of guesswork.

Building blocks

To orchestrate your data, you can compose unique workflows that meet the healthcare needs of your organization.

Specifically, you can use composable processors, which are inserted into your data processing. A processor is like a recipe with base ingredients (i.e., operations) along with spices or garnishes that can be tweaked to your liking (i.e., parameters). This means you can use the same processor in multiple contexts.
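The recipe analogy can be sketched in a few lines: one reusable operation (the base ingredient) is parameterized differently for different contexts (the spices). The function and field names below are hypothetical, not Redox identifiers.

```python
# Illustrative sketch of a reusable operation with tweakable parameters.
# The same operation serves multiple contexts by varying its parameters.

def filter_fields(payload: dict, *, keep: list[str]) -> dict:
    """A reusable operation: keep only the listed fields."""
    return {k: v for k, v in payload.items() if k in keep}

payload = {"mrn": "12345", "name": "Jane Doe", "ssn": "000-00-0000"}

# Same operation, two parameterizations, two contexts:
for_analytics = filter_fields(payload, keep=["mrn"])
for_app = filter_fields(payload, keep=["mrn", "name"])

print(for_analytics)  # {'mrn': '12345'}
print(for_app)        # {'mrn': '12345', 'name': 'Jane Doe'}
```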

Currently, a Redoxer can build or edit processor(s) for you.

Processors in logs

You can view processors directly in your logs by clicking on the log stage where a processor is linked.

You can click the Expand button to view more details, including the processor operations, parameters, and input and output payloads. For operation and parameter definitions, review the technical reference at the end of this article.

View processor details in the linked log stage with the option to expand for more details.
Expand a processor to view parameter, payload, and operation details.

By clicking the three-dots icon of an operation, you can open the operation to view more details as well.

View an individual operation’s details.

Review the technical reference below to understand the individual pieces of a processor.

Log stages

There are four log stages where processors can run:

  1. Pre-processing (outbound)
  2. Post-processing (outbound)
  3. Pre-processing (inbound)
  4. Post-processing (inbound)

Learn more about log stages. A Redoxer must link a processor to one of these processing locations for the processor to run. Only one processor can be linked at a given log stage.
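The one-processor-per-stage constraint can be sketched as a simple mapping from stage to processor. The stage names mirror the list above; the linking function and IDs are hypothetical.

```python
# Sketch of the "one processor per log stage" rule: a processor must be
# linked to exactly one of the four stages, and each stage holds at most
# one processor. Names and IDs are illustrative only.

STAGES = (
    "pre-processing (outbound)",
    "post-processing (outbound)",
    "pre-processing (inbound)",
    "post-processing (inbound)",
)

def link_processor(links: dict, stage: str, processor_id: str) -> None:
    """Link a processor to a stage, enforcing one processor per stage."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    if stage in links:
        raise ValueError(f"{stage} already has a linked processor")
    links[stage] = processor_id

links: dict = {}
link_processor(links, "pre-processing (inbound)", "proc-123")
print(links)  # {'pre-processing (inbound)': 'proc-123'}
```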

Technical reference

In more technical terms, a processor looks a little like this:

  1. Processor
    1. Input payload: The initial data the processor ingests to begin working. The input doesn't necessarily have to be the output of a previous processor. The input is determined by the defined rules and custom logic.
    2. Operations: The action(s) to take during this period of processing. These execute in a dynamic flow based on the sub-pieces below. You can think of operations like a flow chart that determines what should happen next based on what just occurred.
      1. Parameters: The custom settings for operations, which determine what action to take based on the context during processing.
      2. Outcome: The result of an operation during a specific processor run. This can be success, failure, or stop. The next operation varies based on this outcome.
      3. Outlet: The logic that determines what happens next.
        1. If the operation fails, the "on error" operation will happen next.
        2. If the operation stops, the "on stop" operation will happen next.
        3. If the operation succeeds, the "on success" operation will happen next.
    3. Output payload: The final data the processor produces. This can be used as input for a following processor, but doesn't have to be.
A visual diagram of the parts of a processor.
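The outlet logic above can be sketched as a small dispatch loop: each operation returns an outcome, and its outlets map that outcome to the next operation. This structure is illustrative only, assuming simplified operations as dicts; it is not Redox's implementation.

```python
# Hedged sketch of processor execution: walk operations like a flow
# chart, choosing the next step from the outlet that matches the
# outcome ('success', 'error', or 'stop'). Names are hypothetical.

def run_processor(operations: dict, start: str, payload: dict) -> dict:
    """Run operations from `start` until no outlet names a next step."""
    current = start
    while current is not None:
        op = operations[current]
        outcome, payload = op["run"](payload)
        # The outlet: pick the next operation based on the outcome.
        current = op["outlets"].get(f"on_{outcome}")
    return payload  # the processor's output payload

# Two toy operations: one annotates the payload, one finalizes it.
operations = {
    "annotate": {
        "run": lambda p: ("success", {**p, "annotated": True}),
        "outlets": {"on_success": "finalize", "on_error": None},
    },
    "finalize": {
        "run": lambda p: ("success", {**p, "final": True}),
        "outlets": {},  # no outlet: processing ends here
    },
}

print(run_processor(operations, "annotate", {"id": 1}))
# {'id': 1, 'annotated': True, 'final': True}
```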

Composable operation types

A processor can use the same type of operation multiple times. Each instance of an operation gets a unique ID to reference in a log.

There are two categories of operation types: data processing and flow control.

Review the available operation types.

Data processing operations

These operations transform existing data or produce new data.

Each operation has parameters you can use to fine tune how it executes.

Flow control operations

These operations control the flow of the processor.

Grants

Processors belong to the specific Redox organization they're created for. Each processor has an owner. However, you can grant access to a processor that you own to give your connection access to view and use it.

A grant only allows the processor to be used in the same environment type. For example, if you grant access to a staging processor, it can only be used in another organization's staging environment.
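The environment-matching rule can be expressed as a one-line check: a grant made for a staging processor is only usable in another organization's staging environment. The record shape and function name below are assumptions for illustration.

```python
# Illustrative check of the environment-matching rule for grants:
# a grant can only be activated in the same environment type it was
# created for. Field names are hypothetical, not Redox API fields.

def can_activate(grant: dict, target_env: str) -> bool:
    """A grant is only usable in a matching environment type."""
    return grant["environment"] == target_env

staging_grant = {"processor_id": "proc-123", "environment": "staging"}
print(can_activate(staging_grant, "staging"))     # True
print(can_activate(staging_grant, "production"))  # False
```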

When a grant is used, it's called a grant activation. Redox creates an activation record to track usage.