
The Azure App Service landing zone accelerator is an open-source collection of architectural guidance and reference implementation to accelerate deployment of Azure App Service at scale.

Home Page: https://build.microsoft.com/en-US/sessions/58f92fab-3298-444d-b215-6b93219cd5d7?source=sessions

License: MIT License

Languages: Bicep 60.49%, HCL 30.13%, Shell 5.46%, PowerShell 2.16%, C# 0.79%, HTML 0.64%, CSS 0.17%, Dockerfile 0.12%, JavaScript 0.04%

Topics: architecture, azure, bicep, iac, terraform, app-service, app-service-environment, lza, landing-zone, landing-zone-accelerator


App Service Landing Zone Accelerator

This repository provides both enterprise architecture guidelines and a reference implementation for deploying Azure App Service solutions in multi-tenant and App Service Environment scenarios. It includes best practices, considerations, and deployable artifacts for implementing a common reference architecture.


Visit EnterpriseScale-AppService for more information.


Enterprise-Scale Architecture

The enterprise architecture is broken down into six design areas. The design considerations and design recommendations for each are linked below:

| Design Area | Considerations | Recommendations |
| --- | --- | --- |
| Identity and Access Management | Design Considerations | Design Recommendations |
| Network Topology and Connectivity | Design Considerations | Design Recommendations |
| Management and Monitoring | Design Considerations | Design Recommendations |
| Business Continuity and Disaster Recovery | Design Considerations | Design Recommendations |
| Security, Governance, and Compliance | Design Considerations | Design Recommendations |
| Application Automation and DevOps | Design Considerations | Design Recommendations |

Prerequisites

Before you begin, ensure you have met the following requirements:

  • Azure Subscription: You need an Azure subscription to create resources in Azure. If you don't have one, you can create a free account.

  • Azure CLI or Azure PowerShell: You need either Azure CLI or Azure PowerShell installed and configured to interact with your Azure account; see the official installation documentation for each.

  • Terraform or Bicep: Depending on your preference, you need either Terraform or Bicep installed to deploy the infrastructure; see the official installation documentation for each.

  • Knowledge of Azure App Service: This project involves deploying and managing Azure App Service resources. Familiarity with Azure App Service and its concepts is recommended.

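Before moving on, a quick check along the following lines can confirm the tooling is in place. This is a sketch; it assumes the Azure CLI (with the Bicep extension) and, optionally, Terraform are on your PATH.

```bash
# Log in and confirm the active subscription before deploying anything.
az login
az account show --query "{name:name, id:id}" -o table

# Verify the IaC tooling you plan to use is installed.
az bicep version      # Bicep, installed via the Azure CLI
terraform version     # Terraform, if using the Terraform deployment
```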

Getting Started

Follow the steps below to get started with the App Service Landing Zone Accelerator.

Step 1. Reference implementations

In this project, we currently have the following reference implementations:

| Scenario | Description | Documentation | Pipeline Status |
| --- | --- | --- | --- |
| ▶️ Scenario 1: App Service Secure Baseline Multi-Tenant | Deploys a multi-tenant App Service environment with a hub-and-spoke network topology. | README | Scenario 1: Terraform HUB Multi-tenant Secure Baseline · Scenario 1: Terraform SPOKE Multi-tenant Secure Baseline · Scenario 1: Bicep Multi-Tenant ASEv3 Secure Baseline |

Note
Currently, the App Service Secure Baseline Multi-Tenant is the only reference implementation available. However, both the Terraform and Bicep configuration files include feature flags to accommodate additional scenarios, and more reference input files will be provided for additional reference implementations in the future.

Step 2. Configure and test the deployment in your own environment

With the selected reference implementation, you can now choose between Bicep or Terraform to deploy the scenario's infrastructure.

Deploy with Azure Portal (Bicep/ARM)

Deploy to Azure

Locally deploy with Bicep

For additional information, see the Bicep README in the repository.

The Bicep configuration files are located in the scenarios/secure-baseline-multitenant/bicep directory.

Before deploying the Bicep IaC artifacts, you need to review and customize the values of the parameters in the main.parameters.jsonc file.

Note
Azure Developer CLI (azd) is also supported as a deployment method. Since azd does not support parameter files with the jsonc extension, we also provide a plain JSON parameter file (without inline comments).
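As an illustration only, a trimmed-down parameter file could look like the sketch below. The parameter names follow the table further down, but the values are placeholders you must adapt, and the real file contains many more parameters.

```jsonc
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workloadName": { "value": "app-svc-01" },   // up to 10 chars, alphanumeric with dashes
    "location": { "value": "northeurope" },
    "environment": { "value": "dev" },
    "deployAseV3": { "value": false },           // true => ASE v3 instead of a multi-tenant plan
    "deployJumpHost": { "value": true },
    "resourceTags": {
      "value": {
        "deployment": "bicep",
        "key1": "value1"
      }
    }
  }
}
```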

The expandable table below summarizes the available parameters and the possible values that can be set.

Bicep Configuration Parameters Table

| Name | Description | Example |
| --- | --- | --- |
| workloadName | A suffix used to name resources in a pattern similar to `<resourceAbbreviation>-<workloadName>`. Must be up to 10 characters long, alphanumeric with dashes. | app-svc-01 |
| location | Azure region where the resources will be deployed. | northeurope |
| environment | Required. The name of the environment (e.g. "dev", "test", "prod", "preprod", "staging", "uat", "dr", "qa"). Up to 8 characters long. | dev |
| deployAseV3 | Optional, default is false. Set to true to deploy ASE v3 instead of a multi-tenant App Service plan. | false |
| vnetHubResourceId | If empty, a new hub will be created. If you choose not to deploy a new hub resource group, set the resource ID of the hub virtual network you want to peer to; in that case, no new hub is created and a peering is created between the new spoke and the existing hub VNet. | /subscriptions/<subscription_id>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<vnet_name> |
| firewallInternalIp | If you create a new hub, the UDR locking down egress traffic is created regardless of this value. If you connect to an existing hub, provide the internal IP of the Azure Firewall so the deployment can create the UDR for locking down egress traffic; if not given, no UDR is created. | |
| vnetHubAddressSpace | If you deploy a new hub, the CIDR of the newly created hub virtual network. | 10.242.0.0/20 |
| subnetHubFirewallAddressSpace | CIDR of the subnet that will host the Azure Firewall. | 10.242.0.0/26 |
| subnetHubFirewallManagementAddressSpace | CIDR for the AzureFirewallManagementSubnet, which is required by Azure Firewall Basic. | 10.242.0.64/26 |
| subnetHubBastionAddressSpace | CIDR of the subnet that will host the Bastion service. | 10.242.0.128/26 |
| vnetSpokeAddressSpace | CIDR of the spoke VNet that will hold the App Service plan and the remaining supporting services (and their private endpoints). | 10.240.0.0/20 |
| subnetSpokeAppSvcAddressSpace | CIDR of the subnet that will hold the App Service plan. ATTENTION: if you deploy ASEv3, this CIDR must be x.x.x.x/24. | 10.240.0.0/26 (use 10.240.0.0/24 if deployAseV3=true) |
| subnetSpokeDevOpsAddressSpace | CIDR of the subnet that will hold the DevOps agents etc. | 10.240.10.128/26 |
| subnetSpokePrivateEndpointAddressSpace | CIDR of the subnet that will hold the private endpoints of the supporting services. | 10.240.11.0/24 |
| webAppPlanSku | Defines the name, tier, size, family and capacity of the App Service plan. Plans ending in _AZ deploy at least three instances across three Availability Zones. | Select one from: 'S1', 'S2', 'S3', 'P1V3', 'P2V3', 'P3V3', 'P1V3_AZ', 'P2V3_AZ', 'EP1', 'EP2', 'EP3', 'ASE_I1V2_AZ', 'ASE_I2V2_AZ', 'ASE_I3V2_AZ' |
| webAppBaseOs | The OS for the App Service plan. Two options available: Windows or Linux. | Windows |
| resourceTags | Resource tags to add to all resources (e.g. environment, cost center, application name). | "resourceTags": { "value": { "deployment": "bicep", "key1": "value1" } } |
| enableEgressLockdown | Feature flag: create (or not) a UDR for the App Service subnet, routing all egress traffic through the hub Azure Firewall. | |
| deployRedis | Feature flag: deploy (or not) a Redis cache. | |
| deployAzureSql | Feature flag: deploy (or not) an Azure SQL server with a default database. | |
| deployAppConfig | Feature flag: deploy (or not) an Azure App Configuration store. | |
| deployJumpHost | Feature flag: deploy (or not) an Azure virtual machine to be used as a jump host. | |
| autoApproveAfdPrivateEndpoint | Default value: true. Set to true to auto-approve the private endpoint of the Azure Front Door Premium. See details regarding approving the App Service private endpoint connection from Front Door. | false |
| sqlServerAdministrators | The Microsoft Entra ID administrator group used for SQL Server authentication. The group must be created before running the deployment. Three values need to be filled: login (the name of the Microsoft Entra ID group), sid (the object ID of the group), and tenantId (the tenant ID of the Microsoft Entra ID). | |
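Once the parameters are set, a subscription-scope deployment along the following lines is typical. This is a sketch: the exact entry-point file, scope, and command are documented in the Bicep README, so verify them there first.

```bash
# Hypothetical invocation; confirm the template path and scope in the Bicep README.
cd scenarios/secure-baseline-multitenant/bicep
az deployment sub create \
  --name appsvc-lza-baseline \
  --location northeurope \
  --template-file main.bicep \
  --parameters @main.parameters.jsonc
```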
Locally deploy with Terraform

1. Ensure you are logged in to Azure CLI and have selected the correct subscription.

2. Navigate to the Terraform deployment directory (same directory as the `main.tf` file):
   - scenarios/secure-baseline-multitenant/terraform/hub
   - scenarios/secure-baseline-multitenant/terraform/spoke

   Note
   The GitHub Action deployments for the Terraform `hub` and `spoke` are currently separated due to the amount of time both components take to deploy. It is advised to use a self-hosted agent to ensure the deployment does not time out.

3. Familiarize yourself with the deployment files:
   - `main.tf` - Contains the Terraform provider configurations for the selected deployment/module. Note the `backend "azurerm" {}` block, as this configures your Terraform deployment's remote state (https://developer.hashicorp.com/terraform/language/settings/backends/azurerm). Also contains the resource group definitions that host the deployed resources.
   - `_locals.tf` - Contains the local variable declarations as well as custom logic to support naming and tagging conventions across each module.
   - `variables.tf` - Contains the input variable declarations for the selected deployment/module.
   - `outputs.tf` - Contains the output variable declarations for the selected deployment/module.
   - other `.tf` files - Contain groupings of resources for organizational purposes.
   - `Parameters/uat.tfvars` - Reference input parameter file for the UAT environment.

4. Run `terraform init` to initialize the deployment.
5. Run `terraform plan -var-file="Parameters/uat.tfvars"` to review the deployment plan.
6. Run `terraform apply -var-file="Parameters/uat.tfvars"` to deploy the resources.
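For reference, the `backend "azurerm" {}` block mentioned above typically looks like the sketch below. All names here are placeholders; the resource group, storage account, and container must already exist, or be supplied at `terraform init` time via `-backend-config` arguments.

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"   # placeholder
    storage_account_name = "sttfstate0001"        # placeholder, must be globally unique
    container_name       = "tfstate"
    key                  = "secure-baseline-spoke.tfstate"
  }
}
```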

Step 3. Configure GitHub Actions

Note
The GitHub Actions pipelines are currently configured to deploy the Terraform hub and spoke deployments. The Bicep pipelines are currently in development.

GitHub Actions pipelines are located in the .github/workflows directory, with templates stored in the .github/actions directory.

  1. Create a Microsoft Entra ID service principal for OIDC authentication
  2. Configure your GitHub Actions secrets
    • In your forked repository, navigate to Settings > Secrets and variables > Actions.
    • Create the following secrets:

| Secret Name | Description | Example Value |
| --- | --- | --- |
| AZURE_CLIENT_ID | GUID value for the client ID of the service principal to authenticate with | 00000000-0000-0000-0000-000000000000 |
| AZURE_SUBSCRIPTION_ID | GUID value for the subscription ID to deploy resources to | 00000000-0000-0000-0000-000000000000 |
| AZURE_TENANT_ID | GUID value for the tenant ID of the service principal to authenticate with | 00000000-0000-0000-0000-000000000000 |
| AZURE_TF_STATE_RESOURCE_GROUP_NAME | [Optional] For Terraform only: override value to configure the remote state resource group name | rg-terraform-state |
| AZURE_TF_STATE_STORAGE_ACCOUNT_NAME | [Optional] For Terraform only: override value to configure the remote state storage account name | tfstate |
| AZURE_TF_STATE_STORAGE_CONTAINER_NAME | [Optional] For Terraform only: override value to configure the remote state storage container name | tfstate |
| ACCOUNT_NAME | [Optional] The Azure DevOps organization URL or GitHub Actions account name (see Example Value column) to use when provisioning the pipeline agent on the self-hosted DevOps agent VM | https://dev.azure.com/ORGNAME or github.com/ORGUSERNAME or none |
| PAT | [Optional] Personal access token for the DevOps VM to leverage when provisioning the pipeline agent on the self-hosted DevOps agent VM | asdf1234567 |
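A minimal sketch of step 1 with the Azure CLI is shown below. The app name, scope, and branch are placeholders, and your organization's policy may require a narrower role than Contributor.

```bash
# Create the service principal and grant it rights on the target subscription.
az ad sp create-for-rbac \
  --name "appsvc-lza-github" \
  --role Contributor \
  --scopes "/subscriptions/<subscription_id>"

# Add a federated credential so GitHub Actions can log in via OIDC, with no client secret.
az ad app federated-credential create \
  --id <application_id> \
  --parameters '{
    "name": "github-main-branch",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:<org>/<repo>:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```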

App Patterns

Looking for a developer-focused reference implementation? Check out Reliable Web Patterns for App Service.

▶️ Reliable web app pattern for .NET


Got feedback?

Please use GitHub issues for any feedback or requests on how we can improve this repository.


Data Collection

The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the repository. There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at https://go.microsoft.com/fwlink/?LinkId=521839. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.

Telemetry Configuration

Telemetry collection is on by default.

To opt out, set the variable enableTelemetry to false in the Bicep/ARM files and disable_terraform_partner_id to false in the Terraform files.
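For example, with Bicep the flag can also be supplied at deployment time, assuming enableTelemetry is exposed as a parameter of the template you deploy (verify the parameter name in the template itself):

```bash
az deployment sub create \
  --location northeurope \
  --template-file main.bicep \
  --parameters @main.parameters.jsonc \
  --parameters enableTelemetry=false
```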


Contributing

See Contributing for more information.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.

Any use of third-party trademarks or logos is subject to those third parties' policies.

Contributors

aarthiem, ahmedsza, byte-master, cenkms, cykreng, dependabot[bot], dmossberg, elyusubov, gunsringer, haithamshahin333, hkamel, ibersanoms, jinlee794, kunalbabre, kyleburnsdev, nabeelp, nianton, petemessina, robertopc1, saumilkumarshah, thotheod, torokzs, whsalazar


appservice-landing-zone-accelerator's Issues

Improved Deployment Resource Consistency in Multitenant Scenario with Terraform and Bicep

[Multitenant-all] (Importance: High) It seems that for the Multitenant scenario, the resources eventually being deployed with Terraform are different from the ones being deployed with Bicep. I would expect some parity, no matter the deployment language. Some examples:

  • In Bicep, we have an Azure Container Registry, while Terraform doesn't.
  • In TF, we have App Service Deployment Slots, but in Bicep, we don't
  • In TF, we have Route tables and NSGs; in Bicep we don't
  • In TF, we have S1 App Service Plan, and in Bicep S2 App Service Plan

User Guide - clone documentation

From Enterprise-Scale-AppService created by ahmedsza: cykreng/Enterprise-Scale-AppService#71

The user guide mentions cloning the repo, but does not mention then pushing it to your own repo. Should this be a fork, or do we recommend a clone followed by a push? I did a fork and then turned on Actions (they are off by default for forked repos).

Provide Linux/Windows App Service Plan choice for End-Users in Multitenant Deployments

  • [Multitenant-all] (Importance: Low) Should we give the end user an option to select between Linux/Windows OS and language/tech stack? Currently, I see only Windows and .NET 6. If the end user wants a Linux-based App Service plan, it is not a straightforward change. It would be nice if we provided that through parametrization.

Feedback to be converted into issues

  • [Bicep-Multitenant](Importance: High) Thanks Mutasem for all the changes you have done so far, but the Bicep implementation still lacks a consistent coding style and human-readability best practices (comments, etc.). A good starting point for best practices is this doc: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/best-practices

  • [Bicep-Multitenant](Importance: High) The current implementation is one huge file. We need to adhere to best practices and break this file down into smaller, reusable modules. This will make the deployment easier to understand and maintain, and will also facilitate reuse of big portions of the code (e.g. from the ASE scenario) (Guidelines: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules)

  • [Bicep-Multitenant](Importance: Medium) Some resources have been deployed without appropriate/extra configuration. For instance, there is an Azure Firewall, but I don't see any FW application or network rules.

  • [Bicep-Multitenant](Importance: High) All Resources are being deployed in one Resource Group. There should be two Resource Groups (one for Hub, one for Spoke)

  • [Bicep-Multitenant](Importance: Low) In Bicep, it seems that the "telemetry partner id" deployment is not working when the deployment is scoped at the Resource Group level.

  • [Multitenant-all](Importance: High) I might be wrong about this (at least for some implementations), so I apologise. I would expect users/customers to be given the option of whether or not to deploy a new hub and all the required resources. I have not seen that as a simple feature flag or instruction in the readme docs. (sorry again if wrong)

  • [Multitenant-all](Importance: Low) I would expect to give users the option to "connect" to existing hubs and hub resources, which might reside on different subscriptions etc. That would possibly require a big effort and eventually would not cover all customer options, but we should at least cover the easiest/most common scenarios and/or give some guidelines around that

  • [Multitenant-all](Importance: Medium) There are some resources that are not workload-agnostic. For instance, SQL Database could potentially be substituted by other DBs (e.g. Cosmos DB, MongoDB, etc.). I believe we should have some kind of "feature flag" for those kinds of resources, and let the users/customers decide whether they need those resources.

  • [Multitenant-all](Importance: High) It seems that for the Multitenant scenario the resources eventually being deployed with Terraform are different from the ones being deployed with Bicep. I would expect some parity no matter the deployment language. Some examples:
    - In Bicep we have an Azure Container Registry, while in Terraform we don't.
    - In TF we have App Service Deployment Slots, but in Bicep we don't
    - In TF we have Route tables and NSGs, in Bicep we don't
    - In TF we have S1 App Service Plan, in Bicep S2 App Service Plan

  • [Bicep-Multitenant](Importance: Low) It would be nice to have some kind of sanitization and guidelines for the names and params of the resources. This can be achieved with Bicep param decorators (https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/parameters#decorators). On top of that, some variables might be properly sanitized to avoid deployment errors. For instance, a storage account's name has strict rules (max 24 chars, globally unique, all lowercase letters or numbers with no spaces). We could have a variable for that name that imposes those rules, i.e.

    • var storageNameValid = toLower(replace(name, '-', '')) and then
    • var uniqueStorageName = length(storageNameValid) > maxNameLength ? substring(storageNameValid, 0, maxNameLength) : storageNameValid
  • [Multitenant-all](Importance: Low) There are different approaches to the naming convention of resources between Bicep and Terraform: one implementation uses a prefix-based approach, while the other uses a suffix-based approach. For instance, given a resource group named Example001, Terraform will create it as rg-Example001, while Bicep will create it as Example001-rg.

  • [Multitenant-all] (Importance: Low) Should we give the end user an option to select between Linux/Windows OS and language/tech stack? Currently, I see only Windows and .NET 6. If the end user wants a Linux-based App Service plan, it is not a straightforward change. It would be nice if we provided that through parametrization.

  • [Multitenant-all] (Importance: Low) In Terraform there is a script (ssms-setup.ps1) that installs SQL tools on a deployed VM. This script could live in a higher-level "shared" folder so that it can be utilized (if possible) by both implementations

  • [ASE-Scenario-Bicep] (Importance: Low) There are no instructions on how to deploy the ASE scenario with Bicep; the existing instructions cover deploying those Bicep files through GitHub Actions. While this is good, it is far easier for someone to just "manually" deploy the LZA scenario of their choice than to set up GitHub Actions (create Microsoft Entra service principals, add the GitHub secrets, etc.). For the same scenario, the Terraform implementation gives clear instructions on how to deploy with Terraform (not GitHub Actions).

  • Pipelines that validate and deploy the templates should make successful validation mandatory for the pipeline

review usage of minutes used for deployment

From Enterprise-Scale-AppService created by ahmedsza: cykreng/Enterprise-Scale-AppService#77

My current run has taken more than 3.5 hours (still not done). We should review the implications of this in terms of consumed Action minutes, and whether it blocks other pipelines. I am sure Azure DevOps had a task that allowed kicking off a deployment and continuing (i.e. not waiting), but I cannot find an equivalent for GitHub Actions.

no location expressions outside of parameter default values

resourceGroup().location or deployment().location used outside of a parameter default value causes the deployment to fail.

Solution:

  1. Define `param location string = resourceGroup().location`
  2. Replace all `resourceGroup().location` or `deployment().location` references in the Bicep files with the new `location` parameter instead
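A minimal sketch of the fix (the resource here is only illustrative):

```bicep
// Take the location as a parameter, defaulting to the resource group location...
param location string = resourceGroup().location

// ...and reference the parameter everywhere else instead of resourceGroup().location.
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stexample001'   // placeholder name
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```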

Improving Bicep Deployment: Adhering to Best Practices for Code Style, Readability, and Resource Group Structure

  • [Bicep-Multitenant] (Importance: High) The Bicep implementation still lacks a consistent coding style and human-readability best practices (comments, etc.). A good starting point for best practices is this doc: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/best-practices

  • [Bicep-Multitenant] (Importance: High) The current implementation is one huge file. We need to adhere to best practices and break this file down into smaller, reusable modules. This will make the deployment easier to understand and maintain, and will also facilitate reuse of big portions of the code (e.g. from the ASE scenario) (Guidelines: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules)

  • [Bicep-Multitenant] (Importance: High) All Resources are being deployed in one Resource Group. There should be two Resource Groups (one for Hub, and one for Spoke)

  • [Bicep-Multitenant] (Importance: Low) It would be nice to have some kind of sanitization and guidelines for the names and params of the resources. This can be achieved with Bicep param decorators (https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/parameters#decorators). On top of that, some variables might be properly sanitized to avoid deployment errors. For instance, a storage account's name has strict rules (max 24 chars, globally unique, all lowercase letters or numbers with no spaces).
    We could have a variable for that name that imposes those rules, i.e. var storageNameValid = toLower(replace(name, '-', '')) and then var uniqueStorageName = length(storageNameValid) > maxNameLength ? substring(storageNameValid, 0, maxNameLength) : storageNameValid
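Putting the two suggestions together, a sketch might look like this (maxNameLength and the decorator limits are illustrative):

```bicep
@description('Base name used to derive the storage account name.')
@minLength(3)
@maxLength(24)
param name string

var maxNameLength = 24
// Strip dashes and lowercase to satisfy storage account naming rules...
var storageNameValid = toLower(replace(name, '-', ''))
// ...then truncate to the 24-character limit if needed.
var uniqueStorageName = length(storageNameValid) > maxNameLength ? substring(storageNameValid, 0, maxNameLength) : storageNameValid
```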

Variable substitution and account name? - breaks pipeline agent deployment

From Enterprise-Scale-AppService created by ahmedsza: cykreng/Enterprise-Scale-AppService#76

What is the variable substitution supposed to do? It seems to have a bug: it replaces the account name with the Azure subscription ID. The account name should be something like https://dev.azure.com/orgname, whereas the Azure subscription ID looks more like a GUID.
The guide seems to indicate that a secret called ACCOUNT_NAME should be created, in which case why not just have accountName=${{ secrets.ACCOUNT_NAME }} like we have for everything else?

Improving Bicep Multitenant Deployment: Addressing Missing Configuration and Making Workload-Agnostic Resources

This item focuses on Bicep.

  • [Bicep-Multitenant] (Importance: Medium) Some resources have been deployed without appropriate/extra configuration. For instance, there is an Azure Firewall, but I don't see any FW application or network rules.

  • [Multitenant-all] (Importance: Medium) Some resources are not workload-agnostic. For instance, SQL Database could potentially be substituted by other DBs (e.g. Cosmos DB, MongoDB, etc.). I believe we should have some kind of "feature flag" for that kind of resource and let the users/customers decide whether they need those resources.

Review Telemetry for Bicep Template

  • [Bicep-Multitenant] (Importance: Low) In Bicep, it seems that the "telemetry partner id" deployment is not working when the deployment is scoped at the Resource Group level.

Enhancing Multitenant Deployment: Adding Hub Deployment Option and Feature Flag for Optional Resources

  • [Multitenant-all] (Importance: High) I would expect to give users/customers the option to deploy a new hub and all the required resources, or not. I have not seen that as a simple feature flag or instruction in the readme docs.
  • [Multitenant-all] (Importance: Medium) Some resources are not workload-agnostic. For instance, SQL Database could potentially be substituted by other DBs (e.g. Cosmos DB, MongoDB, etc.). I believe we should have some kind of "feature flag" for those resources and let the users/customers decide whether they need them.

Enhancement Request: Implement Multi-Tenant Hub Connectivity Feature

  • [Multitenant-all] (Importance: Low) I would expect to give users the option to "connect" to existing hubs and hub resources, which might reside on different subscriptions etc. That would require a big effort and eventually would not cover all customer options, but we should at least cover the easiest/most common scenarios and give some guidelines around that

Standardizing Naming Conventions in Multitenant Scenario for Terraform and Bicep Deployments

  • [Multitenant-all] (Importance: Low) There are different approaches to the naming convention of resources between Bicep and Terraform: one implementation uses a prefix-based approach, while the other uses a suffix-based approach. For instance, given a resource group named Example001, Terraform will create it as rg-Example001, while Bicep will create it as Example001-rg.

Azure Gov Deployment - Name Length Checking

From Enterprise-Scale-AppService created by haithamshahin333: cykreng/Enterprise-Scale-AppService#51

Issue:

When deploying to Azure Gov, we may need to consider checking name lengths: regions have longer names than in Azure Commercial, which ultimately results in longer resource names (and can cause deployment errors).

Example:

Location: usgovvirginia
workloadName: ase703hs
Error: When creating Microsoft.Network/virtualNetworks/virtualNetworkPeerings resource between hub and spoke

```json
{
    "status": "Failed",
    "error": {
        "code": "InvalidResourceName",
        "message": "Resource name vnet-hub-ase703hs-dev-usgovvirginia-001-vnet-spoke-ase703hs-dev-usgovvirginia-001 is invalid. The name can be up to 80 characters long. It must begin with a word character, and it must end with a word character or with '_'. The name may contain word characters or '.', '-', '_'.",
        "details": []
    }
}
```
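One possible mitigation, sketched below, is to truncate derived names to the resource type's limit (80 characters for virtual network peerings) with take(); the parameter names here are hypothetical:

```bicep
param hubVnetName string
param spokeVnetName string

// Virtual network peering names may be at most 80 characters.
var peeringName = take('${hubVnetName}-to-${spokeVnetName}', 80)
```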
