Set Up Azure OpenAI Model Deployment and Authentication
Overview
This guide explains how to:
- Create an Azure OpenAI resource
- Deploy a model in Azure AI Foundry (or the Azure OpenAI deployment UX)
- Copy the correct API endpoint(s)
- Configure authentication:
  - API Key
  - Microsoft Entra ID (Service Principal / client credentials)
Step 1 — Create an Azure OpenAI resource
- In the Azure Portal, create a new Azure OpenAI (Azure AI Foundry / OpenAI) resource.
- Choose a subscription, resource group, and region.
- Once created, open the resource and locate:
  - Endpoint
  - Keys (if using API key auth)
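Once you have the endpoint and a key, a common pattern is to expose them via environment variables rather than hard-coding them. A minimal sketch; the variable names AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY follow a common convention and are not required by Azure:

```python
# Read the endpoint and key from the environment instead of source code.
# The values set here are placeholders standing in for the portal values.
import os

os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://my-resource.openai.azure.com")
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<KEY 1 or KEY 2 from the portal>")

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # from "Keys and Endpoint"
api_key = os.environ["AZURE_OPENAI_API_KEY"]    # only needed for API key auth
```

Keeping these in the environment makes it easy to switch resources or rotate keys without editing configuration files.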
Step 2 — Create model deployments in Azure AI Foundry
- Open Azure AI Foundry for your resource.
- Create a deployment for the model you want (for example, a GPT model).
- Record:
  - Deployment name (used in the URL for Chat Completions)
  - Model name (used for the Responses API, depending on your pattern)

Azure supports both the Chat Completions and Responses API usage patterns.
Step 3 — Copy the API endpoint URL
You will typically use one of these URL styles:
Chat Completions endpoint style:
https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version=YYYY-MM-DD[-preview]

Responses API endpoint style:
https://{resource}.openai.azure.com/openai/responses?api-version=YYYY-MM-DD[-preview]
Your adapter/library's checkEndpoint() function inspects this URL to determine automatically which API to call.
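The two URL styles can be assembled and distinguished programmatically. A minimal Python sketch, where check_endpoint is a hypothetical stand-in for your adapter's checkEndpoint(), and the resource, deployment, and api-version values are placeholders:

```python
# Build both Azure OpenAI endpoint styles and classify a URL by its path.
from urllib.parse import urlparse

RESOURCE = "my-resource"       # Azure OpenAI resource name (placeholder)
DEPLOYMENT = "my-deployment"   # deployment name recorded in Step 2 (placeholder)
API_VERSION = "2024-10-21"     # use an api-version your resource supports

chat_url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)
responses_url = (
    f"https://{RESOURCE}.openai.azure.com/openai/responses"
    f"?api-version={API_VERSION}"
)

def check_endpoint(url: str) -> str:
    """Classify an endpoint URL by its path, as a checkEndpoint() helper might."""
    path = urlparse(url).path
    if path.endswith("/chat/completions"):
        return "chat_completions"
    if path.endswith("/responses"):
        return "responses"
    return "unknown"
```

Note the structural difference: the Chat Completions URL embeds the deployment name in its path, while the Responses API URL does not (the model is named in the request body instead).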
Step 4 — Authentication option A: API Key
- In the Azure OpenAI resource, go to Keys and Endpoint.
- Copy KEY 1 (or KEY 2).
- In the Iguana component configuration, set:
  - AuthMode = api_key
  - ApiKey = the copied key
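With key auth, Azure OpenAI expects the key in an api-key request header rather than an Authorization bearer token. A minimal sketch that assembles (but does not send) such a request; the endpoint, deployment, and key values are placeholders:

```python
# Assemble a Chat Completions request authenticated with an API key.
import json
import urllib.request

endpoint = (
    "https://my-resource.openai.azure.com/openai/deployments/"
    "my-deployment/chat/completions?api-version=2024-10-21"
)
api_key = "<KEY 1 or KEY 2 from the portal>"  # placeholder

body = {"messages": [{"role": "user", "content": "Hello"}]}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "api-key": api_key,  # key auth, i.e. AuthMode = api_key
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted in this sketch.
```

The same request with Entra ID auth would instead carry an Authorization: Bearer header holding a token obtained via client credentials, which is covered in the next authentication option.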