One of the most popular options for exchanging data is the Redox Data Model API. With this modern, standardized API, we handle all of your data mapping, translation, and connectivity.
The Data Model API uses standardized data models, which describe the categories of data that can be exchanged with your connection via an encrypted communication method, typically HTTPS. You can combine data models to accomplish your unique workflow.
There are many ways to connect with Redox—our Data Model API is only one. Most of our customers choose this option so they can use a single interface to manage integrations across a variety of communication methods, data formats, authentication schemes, and other connectivity requirements. But you can also check out our other integration methods.
There are two possible methods for exchanging data using our Data Model API: pushing data to your connection, or pulling (querying) data from it.
We can only pass data that we have access to. Depending on the healthcare organization or EHR system, some fields may not be available to us. We do our best to get as much data as possible from each EHR system, but there may be some differences. During the testing process, we identify which fields you can rely on, based on the given healthcare organization and EHR system.
Each data model has supported event types with a corresponding push or pull method. These event types consist of JSON fields and values. In each data model reference, we note which fields are required: if a field is marked as required, you must include it for your request to be processed successfully.
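As a rough illustration, a request body pairs a Meta block (naming the data model and event type) with the payload fields themselves. The sketch below is a minimal example, assuming hypothetical field names modeled on Redox's Meta conventions; always consult the data model reference for the exact required fields of your event type.

```python
import json

def build_new_patient_payload(patient_id):
    """Assemble a hypothetical PatientAdmin NewPatient request body."""
    return {
        "Meta": {
            "DataModel": "PatientAdmin",  # which data model
            "EventType": "NewPatient",    # which event type within it
            "Test": True,                 # flag this as test traffic
        },
        "Patient": {
            "Identifiers": [
                {"ID": patient_id, "IDType": "MR"}
            ]
        },
    }

def validate_required(payload, required):
    """Return any required Meta fields missing from the payload."""
    meta = payload.get("Meta", {})
    return [field for field in required if field not in meta]

payload = build_new_patient_payload("0000000001")
print(validate_required(payload, ["DataModel", "EventType"]))  # → []
```

Checking required fields before sending avoids a round trip that would only come back as a rejected request.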
We can help you determine which data models fit your needs best.
Depending on your system, we may recommend sending and receiving data via polling or data on demand. We can help you decide what may suit your system the best based on your unique integrations. Learn more about data on demand.
Currently, we support data on demand for the following Redox data models:
The Data Model API works with all of our data models, but check out some of the caveats below.
Some data model fields can only contain a limited set of values, which is called a valueset. Whether you are sending or receiving API requests, you should use or expect only the supported values listed for the relevant data field.
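A simple guard before sending a request can catch out-of-valueset values early. This sketch uses an assumed valueset for an administrative sex field; the actual supported values for any field are listed in the data model reference.

```python
# Assumed valueset for illustration only; check the data model
# reference for the real supported values of each field.
ADMINISTRATIVE_SEX = {"Female", "Male", "Unknown", "Other"}

def in_valueset(value, valueset):
    """True when the value is one the data model supports."""
    return value in valueset

print(in_valueset("Female", ADMINISTRATIVE_SEX))  # → True
print(in_valueset("F", ADMINISTRATIVE_SEX))       # → False
```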
Our data models support a wide range of data, but as we noted above, we may not have access to certain fields. This means that some fields may not be populated, depending on your connection's system.
To help illustrate this in our data model reference, we provide a reliability rating for each field:
Reliable: This field is present in requests from nearly every organization.
Probable: This field is present in requests from most organizations.
Possible: This field is present in requests from some organizations.
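In practice, a Probable or Possible field should be read defensively so a missing value doesn't break your integration. The sketch below assumes a hypothetical nested field path; the structure is illustrative, not the definitive shape of any data model.

```python
def get_home_phone(payload):
    """Return the patient's home phone if the source system sent it."""
    demographics = payload.get("Patient", {}).get("Demographics", {})
    phone = demographics.get("PhoneNumber", {})
    return phone.get("Home")  # None when the field wasn't provided

full = {"Patient": {"Demographics": {"PhoneNumber": {"Home": "555-0100"}}}}
sparse = {"Patient": {"Demographics": {}}}
print(get_home_phone(full))    # → 555-0100
print(get_home_phone(sparse))  # → None
```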
Don't worry too much if a field is only Probable or Possible. By default, healthcare organizations may restrict the data they send for these reasons:
Before onboarding, it's important to have a clear understanding of what data your connection will or won't provide. Our Implementation team works with you to document your data integration needs and plan for any customization.
Some data models might have an option for including extensions with additional data. Read more about extensions.
With the Data Model API, we can rapidly support newly emerging use cases and quickly go live with them in production. As our data models evolve with new data fields, our focus is always developer satisfaction. Check out the information below to learn how we handle updates to our data models and what our plans are down the road.
We do our best to not make any of the following types of changes:
If we do ever need to change any of the above, rest assured that we will notify all customers well in advance and create transition plans if the update affects you.
We may make these changes, however:
Review our Change Log to check for any additions to existing data models, or join our Slack community to watch for announcements about changes to the data models. Whenever there is a change, it's automatically reflected in our data model schemas. Download our data model schemas to explore them for yourself.
The best way to build against our models and account for these additions is to be as tolerant a reader as possible by ignoring data fields that aren’t necessary and not parsing everything into strongly typed objects. If this isn’t feasible for you, or if you run into issues with this based on your specific stack or environment, submit a ticket via our Help Desk so we can talk through other potential solutions.
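A tolerant reader can be as simple as extracting only the fields you need from the parsed JSON and ignoring everything else, rather than binding the whole payload to a strict type. The field paths below are illustrative, not a definitive payload shape.

```python
import json

def extract_needed(raw):
    """Pull only the fields this integration cares about."""
    payload = json.loads(raw)
    return {
        "event": payload.get("Meta", {}).get("EventType"),
        "patient_ids": [
            ident.get("ID")
            for ident in payload.get("Patient", {}).get("Identifiers", [])
        ],
        # Any new fields added to the model later are simply ignored.
    }

raw = json.dumps({
    "Meta": {"EventType": "NewPatient", "BrandNewField": 123},
    "Patient": {"Identifiers": [{"ID": "0001", "IDType": "MR"}]},
})
print(extract_needed(raw))
# → {'event': 'NewPatient', 'patient_ids': ['0001']}
```

Because the reader never enumerates the full payload, additive changes to a data model pass through without code changes on your side.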
We have lots of exciting projects in the works that will make the Data Model API even easier to consume and more enjoyable to use. As our API evolves, we plan to introduce versioning so that you have greater flexibility over when to adopt certain additions that we make to our models.