Configuring a OneLake shortcut in Microsoft Fabric to an Azure Data Lake Storage Gen2 (ADLS Gen2) account that’s behind a VNet (using a private endpoint) requires careful setup of networking, DNS, authentication, and permissions.
This guide compares the two supported authentication methods:
- Trusted Workspace Access using a Fabric workspace identity
- Service Principal
These step-by-step instructions apply to both production and dev/test environments. The guide includes Azure CLI steps for configuration, DNS setup, validation methods, and key security considerations.
Scenario
Connect Microsoft Fabric to an ADLS Gen2 account that has public access disabled and is accessible only via Private Endpoint in a VNet.
Authentication Methods Overview
Fabric shortcuts to ADLS Gen2 support two primary Azure AD authentication methods: Workspace Identity (Trusted Workspace Access) and Service Principal. Both methods rely on Azure AD tokens and require the storage firewall to explicitly allow Fabric’s access.
Both methods are based on service principal authentication in Microsoft Entra ID:
- Trusted Workspace Access (Workspace Identity)
- Service Principal Authentication

Comparison of Workspace Identity vs Service Principal for ADLS Shortcuts
| Authentication Method | Workspace Identity (Trusted Access) | Service Principal Authentication |
| --- | --- | --- |
| Identity Type | Fabric-managed Workspace Identity (service principal in Entra ID). One per Fabric workspace. | Custom AAD App Registration (service principal) created and managed by you. |
| Credentials Management | No secret needed – tokens obtained by Fabric using workspace identity. Simplified credential management. | Client secret or certificate required. You must store and manage credentials securely. |
| Network Access to ADLS | Requires ADLS firewall to allow the Fabric workspace (via Resource Instance rule). Enables secure access even if public access is off. Not usable if Fabric is in a Trial capacity (requires F SKU). | If ADLS firewall is on, similarly requires Resource Instance rule or enabling trusted Azure services. (With public network disabled, the rule is essential.) Usable in any Fabric SKU, including Trial capacities where workspace identity is not available. |
| Cross-Tenant Support | No – Workspace identity auth works only within the same tenant (no cross-tenant). | Yes (with setup) – Service principal can be made multi-tenant or created in target tenant for cross-tenant scenarios (with appropriate trust & consent). |
| Conditional Access Impact | Treated as a workload identity in AAD. If a CA policy blocks all service principals, you must exclude the Fabric workspace SP or it will be unable to authenticate. | Also a workload identity – ensure any CA policy allows this app (e.g. exclude it or define appropriate conditions). If MFA or device enforcement is applied to SP logins, use certificate auth or adapt CA policy. |
| Permissions Setup | Need to grant the workspace identity RBAC access (e.g. Storage Blob Data Reader) on the ADLS account. The identity's name is typically the workspace name. No user assignment in Fabric needed after creation – any workspace admin/member can use it for connections. | Need to grant the SP the same RBAC role on the ADLS account. Also, in Fabric, the SP must be used to create the connection (entered in connection settings). If using Fabric’s REST APIs, the SP might need a Fabric admin consent as well. |
| Use Cases | Great for internal Azure AD data lakes where Fabric and ADLS are in same org/tenant. Simplifies dev/prod parity (no secrets in code). Not suitable for external data sources or scenarios where you can’t create the required resource rule. | Useful for scenarios needing independent credentials (e.g., partner access, scripts), or if using a separate tenant’s storage. Also if you prefer full control over credentials. Slightly more setup due to app registration. |
| Limitations | Not supported in Fabric Trial capacities (requires Fabric F SKU subscription). Not for cross-tenant. Each workspace identity is tied to one workspace; if you have many workspaces, must manage each identity’s access individually. | Requires secure secret management (rotate keys regularly, etc.). The service principal’s access is only as secure as its secret. If multiple Fabric workspaces need to use the same SP, you’ll need to share the credentials (or create multiple SPs). |
Architecture & Requirements
Before diving into CLI steps, ensure you have the following prerequisites in place:
- Fabric Workspace & Identity:
- A Microsoft Fabric workspace on an F SKU capacity (not a Trial), where you will create the shortcut.
- You should have admin rights on this workspace. For the trusted access method, create a Workspace Identity for this workspace (in the Fabric portal, under Workspace Settings > Workspace Identity, click Create). This will register a service principal in Entra ID for the workspace. Note the workspace’s ID (GUID) from the URL and its identity name (often the same as the workspace name).
- ADLS Gen2 Storage Account:
- An existing or new Storage Account with Hierarchical Namespace enabled (StorageV2 with ADLS Gen2).
- We will disable public network access on this account and use a Private Endpoint.
- For best performance and to avoid egress costs, use the same Azure region for the Fabric capacity and the storage account.
- Virtual Network for Private Endpoint:
- A VNet (and subnet) in the same Azure region as the storage, where the private endpoint will reside.
- If your Fabric environment does not have any VNet integration (Fabric runs as a service), this VNet is simply part of your Azure environment to host the private link. (No need to connect the Fabric workspace to this VNet – access is handled by Azure’s backbone once configured.)
- DNS Configuration:
- Plan for DNS resolution for the storage account’s private endpoint.
- If you use Azure Private DNS, Azure can automatically create and manage the necessary DNS zone (privatelink.dfs.core.windows.net for ADLS Gen2) and link it to your VNet.
- If you use custom DNS servers (on-prem or custom DNS forwarders), you’ll need to configure a conditional forwarder or manual DNS entries to resolve the storage account’s public name to the private IP. We cover both approaches in the steps below.
- Permissions:
- You need Azure roles to configure these resources:
- Owner/Contributor on Azure resources: to create private endpoints, DNS zones, and modify the storage firewall. Storage Account Contributor: to modify storage networking settings if not covered by Owner.
- Storage Blob Data Contributor/Reader: for the identities that will access data (or Owner at storage account if easier). For least privilege, plan to grant only Storage Blob Data Reader if read-only access via shortcuts is needed.
- Conditional Access:
- If your organization uses Conditional Access policies that might affect service principals (workload identities), coordinate with your Azure AD admin to ensure the Fabric workspace identity and/or the service principal you use are allowed to sign in. (For example, if there’s a policy requiring MFA for all logins including service principals, you’ll need an exclusion for these identities, as they can’t perform MFA).
Step-by-Step Configuration
1. Prepare Fabric Workspace and Identity
In Fabric: Make sure you have a workspace ready (or create a new one) with a Fabric capacity (F SKU). If it’s a new workspace, assign it to an F capacity in the Fabric admin portal. (Trusted access will not work in a Trial capacity.)
Create the Workspace Identity: As a workspace Admin, go to the workspace settings and select Workspace identity, then click Create. This registers a service principal in Azure AD for the workspace (you’ll see the identity name and its Application (client) ID once created). If it doesn’t show immediately, refresh the page.
Security tip: The workspace identity is by default a member of the workspace (not a global admin). Only workspace admins can create or delete it, and regular members can use it in connections if it’s created.
If you plan to use Service Principal authentication instead or in addition:
- Create an Azure AD App for Fabric to use. You can use Azure Portal or CLI to register an app. For example, using Azure CLI:
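A minimal sketch of the registration (the app display name here is a placeholder; note that recent Azure CLI versions no longer assign a role by default and have removed the `--skip-assignment` flag):

```shell
# Register an app and create a client secret.
# Outputs JSON with appId, displayName, password (client secret), and tenant.
# --skip-assignment: do not grant any Azure role yet; we assign RBAC explicitly later.
# (On newer Azure CLI versions, omit this flag - skipping is the default behavior.)
az ad sp create-for-rbac --name "fabric-adls-shortcut-sp" --skip-assignment
```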
- This will output a JSON with appId, displayName, password (client secret), and tenant. Save the appId and password securely (the password is the client secret). We’ll assign it permissions next. (The --skip-assignment flag means no Azure role is granted by default; we’ll do a specific assignment below.)
2. Set Up ADLS Gen2 Storage Account
If you don’t already have the ADLS Gen2 account created, use Azure CLI or Portal to create one. For example, via CLI:
```shell
# Set variables for reuse
RESOURCE_GROUP="rg-data"
STORAGE_ACCOUNT="mystorageadls"
LOCATION="eastus"   # ensure this matches your Fabric workspace's region if possible
SKU="Standard_LRS"

az storage account create -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT -l $LOCATION \
  --sku $SKU --kind StorageV2 --hns true
```
- --hns true enables Hierarchical Namespace (ADLS Gen2).
- We choose a region (like East US) that aligns with the Fabric capacity region for best performance.
If the account already exists, ensure it’s StorageV2 with HNS. You can verify:
```shell
az storage account show -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query "isHnsEnabled"
```
That should return true.
Optionally, create a container and upload a sample file for testing:
```shell
CONTAINER="demo-data"
az storage container create --account-name $STORAGE_ACCOUNT -n $CONTAINER

# Upload a sample file for testing (assuming sample.txt in current directory)
az storage blob upload -f sample.txt --account-name $STORAGE_ACCOUNT -c $CONTAINER -n sample.txt
```
(If public access is still enabled right now, you can also upload via Portal. We will disable public access next.)
3. Assign Data Permissions (RBAC)
Now assign the necessary Azure RBAC role on the storage account to the identity that Fabric will use:
- For Workspace Identity (trusted access method): assign the Storage Blob Data Reader role (or Storage Blob Data Contributor, if writes are needed) to the workspace’s service principal. The workspace identity’s Application ID can be found in the Fabric or Entra portal; you can also search for it by the workspace name.
- For Service Principal method: assign the same role(s) to the app’s service principal, using the appId from the CLI output above as the assignee.
- Ensure the role assignment is applied at the correct scope (storage account or container). Following least privilege, you might scope the SP’s assignment to only the needed container.
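As a sketch, the assignment for either identity might look like this (the subscription ID and principal ID are placeholders you must substitute; $SCOPE is also what the verification command below expects):

```shell
# Placeholder IDs; substitute your subscription and the identity's appId/objectId
SUB_ID="<subscription-id>"
PRINCIPAL_ID="<appId-or-objectId>"
SCOPE="/subscriptions/$SUB_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT"

# Grant read access at account scope (narrow the --scope to a container for least privilege)
az role assignment create --assignee $PRINCIPAL_ID \
  --role "Storage Blob Data Reader" --scope $SCOPE
```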
Verify RBAC Assignment: You can verify the assignment with:
```shell
az role assignment list --scope $SCOPE --assignee $PRINCIPAL_ID -o table
```
It should list the role assignment for the identity on the storage account.
4. Configure Storage Networking (Disable Public Access & Private Endpoint)
Next, lock down the storage account’s network so it is reachable only via the VNet, and set up the private endpoints.
Disable Public Network Access: Set the storage account to deny all public access (so it only accepts traffic via private endpoints or trusted Azure resources):
```shell
az storage account update -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --public-network-access Disabled
```
After this, any attempt to reach the storage’s public endpoint from the internet will be blocked (e.g., if you try to list the container in Azure Portal, you’d get a 403 now).
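As a quick sanity check (a sketch; it assumes the $CONTAINER variable from earlier and that you are signed in and calling from outside the VNet), a listing attempt should now be rejected:

```shell
# Expected to fail with 403 / AuthorizationFailure now that public access is disabled
az storage blob list --account-name $STORAGE_ACCOUNT -c $CONTAINER --auth-mode login -o table
```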
Create Private Endpoint: Now create a Private Endpoint in the VNet for the storage account. We will create two private endpoints – one for the dfs endpoint (ADLS Gen2) and one for the blob endpoint – to cover all operations. This ensures that both DFS API calls and any fallback blob API calls can route through the VNet privately.
Assuming you have a VNet and subnet ready (e.g., VNet name myVNet and subnet default in resource group rg-network), use:
```shell
# Variables for the network (the VNet may live in a different resource group)
VNET_RG="rg-network"
VNET_NAME="myVNet"
SUBNET_NAME="default"

# Resolve the subnet's full resource ID so the VNet's resource group is unambiguous
SUBNET_ID=$(az network vnet subnet show -g $VNET_RG --vnet-name $VNET_NAME \
  -n $SUBNET_NAME --query id -o tsv)

# Private endpoint for dfs (Data Lake)
az network private-endpoint create -g $RESOURCE_GROUP -n "${STORAGE_ACCOUNT}-pe-dfs" \
  --subnet $SUBNET_ID --private-connection-resource-id \
  "/subscriptions/<sub_id>/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT" \
  --group-id dfs --connection-name "${STORAGE_ACCOUNT}-conn-dfs"

# Private endpoint for blob
az network private-endpoint create -g $RESOURCE_GROUP -n "${STORAGE_ACCOUNT}-pe-blob" \
  --subnet $SUBNET_ID --private-connection-resource-id \
  "/subscriptions/<sub_id>/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT" \
  --group-id blob --connection-name "${STORAGE_ACCOUNT}-conn-blob"
```
Some notes on the above:
- We specify --group-id dfs or --group-id blob to target the DFS endpoint (used for ADLS Gen2) and the Blob endpoint of the storage account.
- Azure will allocate private IPs in your subnet for each endpoint.
- The connection-name is just a label for the link, you can name it freely.
Approve Private Endpoint (if needed): If you are not the Storage Account owner or if using different subscriptions, you might need to approve the connection. Check in the Azure Portal under the storage account’s Private endpoints blade that the two endpoints show as “Approved” and Connected. The CLI creation with proper permissions typically auto-approves them.
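The connection state can also be checked from the CLI (a sketch using the endpoint names from the step above):

```shell
# Should print "Approved" for each endpoint
az network private-endpoint show -g $RESOURCE_GROUP -n "${STORAGE_ACCOUNT}-pe-dfs" \
  --query "privateLinkServiceConnections[].privateLinkServiceConnectionState.status" -o tsv
az network private-endpoint show -g $RESOURCE_GROUP -n "${STORAGE_ACCOUNT}-pe-blob" \
  --query "privateLinkServiceConnections[].privateLinkServiceConnectionState.status" -o tsv
```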
5. Configure DNS for Private Endpoint
For clients in the VNet to reach the storage via its private IP, DNS must resolve the storage account names (<storage>.dfs.core.windows.net and <storage>.blob.core.windows.net) to the private endpoint IPs.
Option A: Azure Private DNS (recommended) – Azure can create and manage the DNS zone:
- When you create a private endpoint in the Azure Portal, Azure offers to create the privatelink DNS zone and record for you. The CLI does not do this automatically, so we’ll explicitly set up the zones.
- Create Private DNS Zones for privatelink.dfs.core.windows.net and privatelink.blob.core.windows.net, and link them to the VNet:
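A sketch of the zone creation and VNet links (the link names here are placeholders; the zone names are fixed by the Private Link service):

```shell
# Create the private DNS zones
az network private-dns zone create -g $VNET_RG -n "privatelink.dfs.core.windows.net"
az network private-dns zone create -g $VNET_RG -n "privatelink.blob.core.windows.net"

# Link each zone to the VNet (-e true enables auto-registration; see the note below)
az network private-dns link vnet create -g $VNET_RG -n "link-dfs" \
  -z "privatelink.dfs.core.windows.net" -v $VNET_NAME -e true
az network private-dns link vnet create -g $VNET_RG -n "link-blob" \
  -z "privatelink.blob.core.windows.net" -v $VNET_NAME -e true
```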
- Setting -e true (registration enabled) isn’t strictly necessary here, but it won’t hurt. Now, we need to ensure A records exist for our storage in those zones. Azure might have automatically added them during private endpoint creation:
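To check, you can list the A records in each zone (a sketch):

```shell
# Look for an A record named after the storage account in each zone
az network private-dns record-set a list -g $VNET_RG \
  -z "privatelink.dfs.core.windows.net" -o table
az network private-dns record-set a list -g $VNET_RG \
  -z "privatelink.blob.core.windows.net" -o table
```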
- You should see an entry like <storage-account-name> with an A record pointing to the private IP (same for blob zone). If not present, you can manually create it:
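A sketch of creating the record manually, pulling the private IP from the endpoint's DNS configuration (repeat analogously for the blob zone):

```shell
# Read the private IP assigned to the dfs endpoint
PRIVATE_IP=$(az network private-endpoint show -g $RESOURCE_GROUP \
  -n "${STORAGE_ACCOUNT}-pe-dfs" \
  --query "customDnsConfigs[0].ipAddresses[0]" -o tsv)

# Map <storage>.privatelink.dfs.core.windows.net to that IP
# (add-record creates the record set if it does not exist yet)
az network private-dns record-set a add-record -g $VNET_RG \
  -z "privatelink.dfs.core.windows.net" -n $STORAGE_ACCOUNT -a $PRIVATE_IP
```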
- With these zones linked, any VM or service query from the VNet for <storage>.dfs.core.windows.net will receive the private IP. Azure’s default DNS resolution process will use the CNAME chain: <storage>.dfs.core.windows.net → <storage>.privatelink.dfs.core.windows.net → private A record. This means you use the normal storage account FQDN in connection strings; DNS will transparently redirect to the private IP when inside the VNet.
Option B: Custom DNS – If you manage your own DNS server (on-prem or custom DNS resolver):
- Configure conditional forwarding for dfs.core.windows.net and blob.core.windows.net to a DNS server that can resolve the privatelink zones. Note that Azure’s recursive resolver (168.63.129.16) is reachable only from within Azure, so on-prem DNS must forward to a DNS forwarder VM or an Azure DNS Private Resolver inside the VNet.
- Alternatively, manually create DNS records in your own DNS to map <storage>.dfs.core.windows.net to the private IP (and same for blob). Make sure to update if IP changes (private endpoint IPs are generally static within the VNet, but could change if re-created).
Validation (DNS): From a VM in the VNet (or using the Azure Cloud Shell with that VNet if possible), run:
```shell
nslookup ${STORAGE_ACCOUNT}.dfs.core.windows.net
```
It should resolve to a 10.x.x.x address (the VNet’s private IP) rather than a public Azure IP. If it still shows a public IP, the DNS configuration is not correctly applied.
6. Enable Trusted Workspace Access on Storage (Resource Instance Rule)
At this stage, the storage account is isolated from public networks and only reachable via private link. However, Azure services like Fabric still need explicit firewall permission to access it. We will add a Resource Instance exception for our Fabric workspace. This tells the storage firewall “allow requests from this specific Fabric workspace resource,” enabling the Fabric backend to connect securely.
Azure CLI currently doesn’t have a one-liner for adding resource instance rules, so we’ll use an ARM template deployment via CLI:
First, gather required information:
- Fabric Workspace Resource ID: This is a synthetic resource ID for your Fabric workspace. It follows the format: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Fabric/providers/Microsoft.Fabric/workspaces/<WorkspaceGUID>. Note: The subscription ID is a fixed placeholder for all Fabric workspaces (you literally use all zeroes as shown). The <WorkspaceGUID> is the GUID of your workspace (from the URL or Fabric admin settings).
- Tenant ID: Your Azure AD tenant ID (GUID). You can get this from Azure Portal (Azure AD overview) or via CLI: az account show --query tenantId.
Create a JSON file fabric_trusted_access.json with the following content (fill in the placeholders):
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "<STORAGE_ACCOUNT_NAME>",
      "properties": {
        "networkAcls": {
          "resourceAccessRules": [
            {
              "tenantId": "<YOUR_TENANT_ID>",
              "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Fabric/providers/Microsoft.Fabric/workspaces/<YOUR_WORKSPACE_GUID>"
            }
          ]
        }
      }
    }
  ]
}
```
This template adds one resourceAccessRule entry with the tenant and workspace. Now deploy this template using Azure CLI:
```shell
az deployment group create -g $RESOURCE_GROUP -f fabric_trusted_access.json -n FabricTrustedAccessDeployment
```
After a minute, the deployment should succeed. This effectively updates the storage account’s firewall settings.
Alternatively, you could use Azure PowerShell:
```powershell
# (PowerShell Az module)
$resourceId  = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Fabric/providers/Microsoft.Fabric/workspaces/<YOUR_WORKSPACE_GUID>"
$tenantId    = "<YOUR_TENANT_ID>"
$rgName      = "<RESOURCE_GROUP_OF_STORAGE_ACCOUNT>"
$accountName = "<STORAGE_ACCOUNT_NAME>"
Add-AzStorageAccountNetworkRule -ResourceGroupName $rgName -Name $accountName -TenantId $tenantId -ResourceId $resourceId
```
Verify the Resource Instance Rule: There’s no direct CLI command to list resource instance rules alone, but you can check the storage account’s network settings:
```shell
az storage account show -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query "networkRuleSet.resourceAccessRules"
```
This should output an array with your rule. You can also verify via Portal: go to the storage account > Networking > Firewalls and virtual networks. Temporarily toggle “Public network access” to Enabled from selected virtual networks and IP addresses (don’t save) to view the Resource instances section. You should see an entry for Microsoft.Fabric/workspaces with your workspace ID listed.
Now your storage account will accept traffic from the Fabric workspace (and deny from other sources). Even if the data flows via the private endpoint, the firewall checks this exception under the hood.
7. Create the OneLake Shortcut in Fabric
With Azure now configured, final steps are in Microsoft Fabric. We will create a OneLake shortcut in a Lakehouse (or Warehouse) that points to the ADLS Gen2 account.
- Open or Create a Lakehouse in the Fabric workspace. (In the workspace, click New and create a Lakehouse if you don’t have one, or use an existing Lakehouse item).
- Add Shortcut: In the Lakehouse, click Get Data, or use the ellipsis menu in the Lakehouse’s file explorer and choose Add Shortcut. In the New Shortcut wizard:
- For External data source, select Azure Data Lake Storage Gen2.
- Connection: Choose Create new connection (unless you set up a connection earlier).
- URL: Enter the DFS URL of the storage account, for example https://<storage-account-name>.dfs.core.windows.net/. (Do not use the .blob.core.windows.net URL here – Fabric expects the DFS endpoint for ADLS Gen2.)
- Authentication kind: Choose either Workspace identity or Service principal, according to the method:
- If Workspace identity: simply select it. Fabric will use the workspace’s managed identity to authenticate.
- If Service principal: enter the Client ID, Tenant ID, and Client Secret for the Azure AD app. Use the values from the app registration you created (or any app that has permissions on the storage).
- The shortcut will appear as a folder in your Lakehouse. You can expand it to see files, or use Spark/SQL to query it. At this point, Fabric is reading data from ADLS Gen2 via the private link, using the method you configured. For example, you can preview the CSV or run a notebook to read it – it should succeed as if it were part of OneLake.
- If you chose workspace identity auth, behind the scenes Fabric’s service has used the workspace’s service principal to get an OAuth token and access the storage. If service principal, it used the provided credentials to do the same. In both cases, the storage firewall recognized the call as coming from an allowed Fabric workspace resource (thanks to the resource rule) and the ADLS access control validated the token’s identity has the required RBAC role, allowing the data to be accessed.
8. Validation and Troubleshooting
Finally, validate that everything is working and address any issues:
Verification Steps:
- Listing/Previewing Data: In the Fabric Lakehouse, you should be able to see the shortcut folder and the file you uploaded. Try to preview the file (or read it in a notebook or Spark job). Success indicates networking and auth are configured correctly. Any errors at this point indicate something to address (see below).
- Test from Fabric Pipeline: If you have a Data Pipeline in Fabric, you can add a Copy Activity with the ADLS Gen2 as a source (using the connection you created) to ensure pipelines can read it. Or use T-SQL in a Warehouse via COPY INTO from the shortcut path.
- Check Azure Monitor: On the storage account, the Metrics for traffic or the Azure Monitor logs might show access requests. A successful read will show up as transactions in the storage metrics.
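For a CLI-based spot check (a sketch; Transactions is the standard storage account metric name), you can pull recent transaction counts:

```shell
# Storage account resource ID
STORAGE_ID=$(az storage account show -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query id -o tsv)

# Recent transaction counts; successful Fabric reads show up here
az monitor metrics list --resource $STORAGE_ID --metric "Transactions" \
  --interval PT5M --output table
```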
Common Issues & Resolutions
| Issue | Possible Cause & Solution |
| --- | --- |
| Connection creation failed (in Fabric wizard, cannot list containers) | Networking/DNS issue: Fabric cannot reach the storage endpoint. Ensure the storage URL is correct (use dfs.core.windows.net). The Fabric service will resolve <storage>.dfs.core.windows.net. If using trusted access, Azure handles routing internally, but if something is misconfigured, it could fail. Double-check that the Resource Instance rule is in place – without it, the storage firewall will reject the connection (error 403). Also verify DNS: if Fabric is not in the VNet, Azure’s backend service should still route correctly via the private link due to the resource rule. (Fabric uses the Azure backbone, not your corporate network, so DNS for Fabric is Microsoft-managed.) If you suspect DNS, consider temporarily enabling the storage’s “Allow public access from all networks” just to test resolution; if that then works, it points to DNS. |
| 403 Forbidden when listing or reading data | Authorization issue: The identity authenticated, but doesn’t have permission on the data. Check that the workspace identity or service principal has the Storage Blob Data Reader/Contributor role on the container or account. Also, if the data is under a folder and you used a folder-level RBAC role, ensure the path is correct. If using service principal, verify you entered the secret correctly and that the secret is not expired. |
| No workspace identity available in Fabric UI | Ensure your workspace is on an F SKU capacity (not trial) and that you created the identity in the workspace settings. Workspace identity is not supported in My Workspace or trial environments. |
| Conditional Access blocking sign-in | If you see an Azure AD sign-in log for the workspace SP or your service principal being blocked by Conditional Access, work with your AAD admin to adjust policy. For workspace identity, you likely need to exclude the Fabric workspace’s service principal from any broad CA policy targeting “All cloud apps” or “All service principals”. For a custom service principal, you may allow it via an application exemption or a managed identity policy. |
| DNS name not resolving (on custom DNS) | If you’re testing connectivity from an on-prem environment or a dev box that is linked to the VNet via VPN, ensure your DNS is forwarding .dfs.core.windows.net queries to Azure. On a Windows machine, you can edit the hosts file as a quick test (map <storage>.dfs.core.windows.net to the private IP) to see if that fixes connectivity – if yes, you need to fix your DNS setup. Using Azure Private DNS linked to the VNet where your VPN lands can simplify this. |
| Private endpoint not working | Check that the private endpoint connection is approved and in Connected state. Also verify the Network Policies on the subnet (private endpoint network policies are disabled by default), and that no NSG is blocking traffic. Private endpoints use the Azure backbone, so there are typically no NSG issues unless you have custom rules. |
Security/Compliance Checklist:
- Ensure that no unnecessary IP or VNet access is open on the storage account. We used the most restrictive model (public access disabled, specific resource instance allowed). This is ideal for production.
- Audit the Azure AD service principals (workspace identity or custom SP) in your tenant. They are sensitive objects; keep track of who has credentials to any custom SP. For workspace identities, by default only Fabric handles its credentials (there is no user-exposed secret).
- If requirements demand, rotate the service principal secret periodically and update the Fabric connection (for workspace identity, credential rotation is managed by Microsoft).
- If using Customer Managed Keys (CMK) encryption on the storage, ensure Fabric is compatible (OneLake shortcuts do work with CMK-encrypted ADLS, since Fabric just reads data as a client).
- Monitor usage and set up alerts for suspicious access patterns on the storage. Because the storage now trusts the Fabric workspace, any compromise of that workspace or the service principal could lead to data access – treat those identities with high privilege.
Finally, both dev and prod environments should follow the same steps. In a dev/test environment, you might use a smaller Fabric capacity, and perhaps a separate storage account and VNet, but the configuration is identical. Just remember to replicate the Resource Instance rule and DNS setup in each environment. Once configured, you have a secure pipeline: Fabric can access ADLS Gen2 privately and securely without exposing the storage to the internet, and you have flexibility in choosing a managed identity or your own service principal for authentication.
Additional Sources
https://learn.microsoft.com/en-us/fabric/security/workspace-identity-authenticate
https://learn.microsoft.com/en-us/fabric/security/security-trusted-workspace-access
https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints