Azure OpenAI Adapter

Overview

The Azure OpenAI adapter provides a template for getting started with Azure Foundry integrations. It uses the Azure OpenAI Library.

This component can be customized and adapted according to your workflow needs. By default the component includes a FHIR sample that can be used as input data to the models.

Note: data extracted or inferred from input data is inherently uncertain and can be biased or incorrect. Use this component with discretion.


Requirements

  • A deployed AI model and its endpoint URI.

  • The model API key (if using API Key authentication).

  • The Azure Client ID, Azure Tenant ID, and Azure Client Secret (if using Entra ID authentication).
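The two authentication modes correspond to different HTTP headers on each request to the Azure OpenAI endpoint: API Key mode sends the key in the api-key header, while Entra ID mode sends an OAuth 2.0 bearer token. A minimal sketch of that mapping (the function name and parameters are illustrative, not part of the component):

```python
def build_auth_headers(mode: str, api_key: str = "", bearer_token: str = "") -> dict:
    """Return the HTTP request headers for the selected authentication mode."""
    if mode == "API Key":
        # Azure OpenAI expects the key in the 'api-key' header.
        return {"api-key": api_key, "Content-Type": "application/json"}
    if mode == "Entra ID":
        # Entra ID authentication sends an OAuth 2.0 bearer token instead.
        return {"Authorization": f"Bearer {bearer_token}",
                "Content-Type": "application/json"}
    raise ValueError(f"Unknown authentication mode: {mode}")
```

Whichever mode is chosen, the remaining request (URL, body, timeout) is the same; only the credential header differs.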

Running the Component

STEP 1: Import the Azure OpenAI Adapter component

Using +COMPONENT, import the Azure OpenAI Adapter component.

STEP 2: Set up the component configurations

Bolded fields are required; otherwise, the default values defined in config.json are used:

Connection Configurations:

  • Authentication Mode: Defines the type of authentication to use when communicating with Azure Foundry. Options: API Key, Entra ID. Default: API Key.

  • Model URI: The unique resource identifier for the model endpoint that you want to interact with.

  • API Key: The key used to authenticate requests; you can find it on the model's deployment page in Azure Foundry. Only required if Authentication Mode is set to API Key.

  • Azure Client ID: Identifier for your Azure application. Only required if Authentication Mode is set to Entra ID.

  • Azure Tenant ID: Identifier for the Azure organization or directory. Only required if Authentication Mode is set to Entra ID.

  • Azure Client Secret: A confidential password/key for the Azure application. Only required if Authentication Mode is set to Entra ID.
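With Entra ID authentication, the Client ID, Tenant ID, and Client Secret are exchanged for a bearer token via the standard OAuth 2.0 client-credentials flow against Microsoft's login endpoint. A sketch of how that token request is formed (no network call is made here, and all values are placeholders):

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the URL and form-encoded body for an Entra ID
    client-credentials token request scoped to Azure Cognitive Services."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Scope for Azure OpenAI / Cognitive Services resources.
        "scope": "https://cognitiveservices.azure.com/.default",
    })
    return url, body

url, body = build_token_request("my-tenant", "my-client", "my-secret")
```

The token returned by that endpoint is then sent as the Authorization bearer header on each model request.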

Model Configurations:

  • System Prompt: Global instructions. Add prompts to set rules, tone, output format, and safety constraints (e.g., "return in JSON format", "do not include PHI"). Default: "You are a healthcare interoperability assistant. Based solely on the structure and coded elements of the following FHIR or HL7 v2 message, infer non-identifying attributes such as whether the context is adult or pediatric, acute or routine, lab/imaging/clinical observation, screening or diagnostic, and real-time or historical data flow. Do not restate any field values or patient-identifying information, and respond in no more than three sentences."

  • Model Name: The model deployment identifier, e.g., 'gpt-4o-mini'. Only required for some models.

  • Max Tokens: The upper limit on the length of generated results. If the generated results are truncated, increase this parameter. Default: 4096.

  • Temperature: Controls randomness (value is between 0.0 and 1.0). Lower temperatures produce less random results; as the temperature approaches zero, the model becomes deterministic and repetitive. Higher temperatures produce more random outputs. Not all models support this parameter.

  • TopP: Also frequently called nucleus sampling. Controls diversity via nucleus sampling (value is between 0.0 and 1.0); a value of 0.5 means half of all likelihood-weighted options are considered. Not all models support this parameter.

  • Request Timeout: HTTP timeout (in seconds) to ensure the API call waits long enough to get the results back. Default: 30.
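These model configurations map onto fields of a chat-completions request body. A minimal sketch of how such a payload might be assembled (field names follow the OpenAI chat-completions schema; Temperature and TopP are omitted when unset, since not all models support them):

```python
import json

def build_chat_payload(system_prompt, user_message, max_tokens=4096,
                       temperature=None, top_p=None):
    """Assemble a chat-completions request body from the component's
    model configurations."""
    payload = {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }
    # Only include optional sampling parameters when they are set.
    if temperature is not None:
        payload["temperature"] = temperature
    if top_p is not None:
        payload["top_p"] = top_p
    return json.dumps(payload)
```

For example, passing the default System Prompt and a FHIR message as the user message yields a body with max_tokens of 4096 and no temperature or top_p keys unless those configurations are supplied.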

STEP 3: Start the component and view the model's output in the component card

Security

  • The adapter does not use the public OpenAI API (i.e., api.openai.com). Cloud-hosted OpenAI instances (e.g., Azure OpenAI) are typically built for enterprise and compliance and can be configured to meet HIPAA standards.

  • Strict access control can be implemented so that only authorized users and systems can access the AI, using Azure's access control systems via Entra ID.

  • Microsoft does not train models on customer data. They also do not share data with OpenAI or third parties.

  • Azure OpenAI by default does not store or reuse customer prompts or responses; the service processes requests transiently and discards them after completion unless the customer explicitly stores them elsewhere.

  • As a risk-reduction measure, in Iguana, PHI can be masked or minimized before transmission, ensuring the AI service processes only the minimum necessary, non-identifying data.
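The masking step described above can be sketched as a simple pre-processing pass over the message before it is sent to the model. A hypothetical example that strips common identifying fields from a FHIR Patient resource (the field list is illustrative, not exhaustive, and is not the component's built-in behavior):

```python
# Hypothetical set of identifying FHIR Patient fields to remove before the
# resource is sent to the model; extend this to match your own data.
PHI_FIELDS = {"name", "telecom", "address", "birthDate", "identifier"}

def mask_patient(resource: dict) -> dict:
    """Return a copy of a FHIR resource with identifying fields removed."""
    return {k: v for k, v in resource.items() if k not in PHI_FIELDS}

patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
    "gender": "female",
}
masked = mask_patient(patient)
```

The masked resource keeps structural and coded elements (resourceType, gender) that the default System Prompt relies on, while the identifying fields never leave the integration engine.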