paloaltonetworks / terraform-google-vmseries-modules

Terraform Reusable Modules for VM-Series on Google Cloud Platform (GCP)

Home Page: https://registry.terraform.io/modules/PaloAltoNetworks/vmseries-modules/google

License: MIT License

HCL 84.52% Shell 1.58% Makefile 0.72% Python 11.28% Go 1.90%
palo-alto-networks ngfw gcp terraform vm-series

terraform-google-vmseries-modules's Introduction

Warning

This repository is now considered archived, and all future development will take place at our new location. For more details see #236


Terraform Modules for Palo Alto Networks VM-Series on Google Cloud Platform

Overview

A set of modules for using Palo Alto Networks VM-Series firewalls to provide control and protection for your applications running on Google Cloud Platform (GCP). They deploy VM-Series as virtual machine instances and configure aspects such as Shared VPC connectivity, IAM access, service accounts, Panorama virtual machine instances, and more.

The design is heavily based on the Reference Architecture Guide for Google Cloud Platform.

For copyright and license see the LICENSE file.

Structure

This repository has the following directory structure:

  • modules: This directory contains several standalone, reusable, production-grade Terraform modules. Each module is individually documented.
  • examples: This directory shows examples of different ways to combine the modules contained in the modules directory.

Compatibility

Compatibility with Terraform is defined individually for each module. In general, expect the earliest compatible Terraform version to be 1.0.0 across most of the modules.

Roadmap

We are maintaining a public roadmap to help users understand when we will release new features, bug fixes and enhancements.

Versioning

These modules follow the principles of Semantic Versioning. You can find each new release, along with the changelog, on the GitHub Releases page.

Getting Help

Open an issue on GitHub.

Contributing

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. Please follow our contributing guide.

terraform-google-vmseries-modules's People

Contributors

ancoleman, darrenmillin, devsecfranklin, fosix, horiagunica, jabielecki, james-otten-pan, jamesholland-uk, lstadnik, mariuszgebala, mattmclimans, mazwettler, michalbil, migara, mrichardson03, ntwrkguru, pavelrn, salsop, sebastianczech, sokupski

terraform-google-vmseries-modules's Issues

Changing machine_type doesn't trigger a rebuild of the vm-series

Describe the bug

Changing the machine_type variable in the vmseries_common object has no effect.

Expected behavior

Changing the machine_type, should trigger a change to the instance.

Current behavior

Changing the machine_type variable in the vmseries_common object has no effect.

Steps to reproduce

  1. Change machine_type in vmseries_common and execute a terraform plan/apply

Context

Requires a destroy/apply to update instance type
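
One possible avenue, assuming the module uses `google_compute_instance` directly: the Google provider only applies a `machine_type` change in place when `allow_stopping_for_update` is enabled, so the plan may otherwise silently leave the instance unchanged. A hypothetical sketch (resource and variable names are illustrative):

```hcl
resource "google_compute_instance" "this" {
  name         = var.name
  machine_type = var.machine_type
  zone         = var.zone

  # Let Terraform stop and restart the VM so machine_type changes can be
  # applied in place instead of requiring a manual destroy/apply.
  allow_stopping_for_update = true

  # ...
}
```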

[Community Health Assessment] Changes needed

Health Check                                      | Score
Contains a meaningful README.md file              | 20 / 20
SUPPORT.md file exists                            | 20 / 20
Repo has a description                            | 15 / 15
Has a recognized open source license              | 15 / 15
Has a descriptive repo name                       | 15 / 15
Required topics attached to repo                  | 15 / 15
CONTRIBUTING.md file with contribution guidelines | 5 / 5
Has custom issue and pull request templates       | 0 / 5

Current score: 105
Target threshold: 100
Total possible: 110

checkov hook crashes importing LegacyVersion from packaging module

Describe the bug

The pre-commit checkov hook crashes while importing LegacyVersion from the packaging module, which prevents committing.

Current behavior

Checkov hook crashes with the following error:
ImportError: cannot import name 'LegacyVersion' from 'packaging.version' (/Users/alp/.cache/pre-commit/repovr64b5yh/py_env-python3.11/lib/python3.11/site-packages/packaging/version.py)

Bug report and PR on checkov project.
bridgecrewio/checkov#4011
bridgecrewio/checkov#4012

Possible solution

For now, use checkov version 2.2.125 in the pre-commit hook, which uses a pinned version of packaging.
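
A minimal sketch of the pin, assuming the hook comes straight from the bridgecrewio/checkov repository (adjust `rev` and the hook id to match the repository's actual `.pre-commit-config.yaml`):

```yaml
repos:
  - repo: https://github.com/bridgecrewio/checkov
    rev: 2.2.125  # last version with a pinned `packaging` dependency
    hooks:
      - id: checkov
```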

Bump hashicorp/google provider version to 4.x

Currently all modules have the following version constraint defined:

terraform {
  required_version = ">= 0.15.3, < 2.0"

  required_providers {
    null   = { version = "~> 3.1" }
    google = { version = "~> 3.30" }
  }
}

However, the latest version of the 3.x google provider was released over a year ago, and version 4.x was released on Nov 2, 2021.

It is time to bump the modules to the 4.x provider version so they can be used in Terraform projects that also deploy resources added in 4.x.
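
A hedged sketch of what the bumped constraint could look like (the exact bounds are up to the maintainers):

```hcl
terraform {
  required_version = ">= 0.15.3, < 2.0"

  required_providers {
    null   = { version = "~> 3.1" }
    # Allow any 4.x release while still excluding future major versions:
    google = { version = ">= 4.0, < 5.0" }
  }
}
```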

BR,
Markus

Enhance bootstrap file handling

Is your feature request related to a problem?

Currently, the bootstrap module looks for files in an object in key/value format where the exact file names must be provided. While this is fine for static files like bootstrap.xml and init-cfg.txt, it makes dynamically named files, such as those used for content updates, administratively difficult to maintain.

Describe the solution you'd like

I propose to incorporate an opinionated structure that matches what is already required for the bootstrap package and have the module read the filenames and move the objects accordingly. This is a resource block I created that does this, but it would be nice to have this built into the module.
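
The original poster's resource block was not captured here; a hypothetical sketch of the idea using `fileset()`, with bucket and path names illustrative only:

```hcl
# Upload every file found under a local bootstrap/ directory, preserving
# the folder layout the bootstrap package expects
# (config/, content/, software/, license/).
resource "google_storage_bucket_object" "bootstrap_files" {
  for_each = fileset("${path.module}/bootstrap", "**")

  bucket = google_storage_bucket.this.name
  name   = each.value
  source = "${path.module}/bootstrap/${each.value}"
}
```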

Describe alternatives you've considered.

No response

Additional context

No response

[Community Health Assessment] Changes needed

This issue was opened by a bot called Community Health (PANW) because this repo has failed too many community health checks.

Repo maintainers: Please take the time to fix the issues in the table to reach the target score. These improvements will help others find your work and contribute to it. This issue will update as your score improves until it hits the target score.

Click More info for instructions to fix each item.

Health Check                                      | Score
Contains a meaningful README.md file              | 20 / 20
SUPPORT.md file exists                            | 20 / 20
Repo has a description                            | 15 / 15
Has a recognized open source license              | 15 / 15
Has a descriptive repo name                       | 15 / 15
Required topics attached to repo                  | 0 / 15
CONTRIBUTING.md file with contribution guidelines | 5 / 5
Has custom issue and pull request templates       | 0 / 5

Current score: 90
Target threshold: 100
Total possible: 110

There should be a contributing guide

Documentation link

There should be a contributing guide

Describe the problem

There is currently no CONTRIBUTING.md

Suggested fix

Add a CONTRIBUTING.md, as is standard among the other repositories

Upgrade versions of null and random providers for M1/ARM macOS machines

Describe the bug

Mac M1-based machines require null and random providers to be minimum v3.1

Expected behavior

Able to use modules on Mac M1-based machines

Current behavior

Mac M1-based machines fail to use modules with older versions of the null and random providers

Possible solution

Increase the version numbers in versions.tf for the modules (and ideally the examples)
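
A sketch of the versions.tf change, assuming the modules constrain both providers there:

```hcl
terraform {
  required_providers {
    # darwin_arm64 builds of these providers first shipped in the 3.1 series.
    null   = { version = "~> 3.1" }
    random = { version = "~> 3.1" }
  }
}
```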

Steps to reproduce

Use modules on a Mac M1-based machine

Your Environment

Terraform v1.1.9 on darwin_arm64
macOS 12.4

Internal load balancer should support "connection_draining_timeout_sec"

Is your feature request related to a problem?

Internal load balancer should support "connection_draining_timeout_sec"

Describe the solution you'd like

Internal load balancer should support "connection_draining_timeout_sec"
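
For illustration, the underlying provider resource already exposes this argument; a hypothetical pass-through in the internal LB module could look like:

```hcl
resource "google_compute_region_backend_service" "this" {
  name   = var.name
  region = var.region

  # Hypothetical new module input, forwarded to the provider:
  connection_draining_timeout_sec = var.connection_draining_timeout_sec

  # ...
}
```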

Describe alternatives you've considered.

No response

Additional context

No response

The panorama module fails with "Cross-project references for this resource are not allowed"

When deploying the panorama module through a Jenkins pipeline, the deployment fails with "Cross-project references for this resource are not allowed". This was resolved by adding the project parameter to resources in modules/panorama/main.tf:
Line 9 - google_compute_resource
Line 18 - google_compute_resource
Line 27 - google_compute_disk
Line 35 - google_compute_attached_disk

Request to add `deletion_protection` to the GCP Compute Resource

Hello,

Would it be possible to add the deletion_protection flag onto your compute module?

This should be an easy change that would not affect anyone's deployments, since you can set a default variable value of false, but it would allow us to add some extra protection to our Palo Alto VM deployments in GCP and ensure they are not accidentally removed.
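
A hypothetical sketch of the requested input; the default keeps existing deployments unchanged:

```hcl
variable "deletion_protection" {
  description = "Enable deletion protection on the VM-Series instance."
  type        = bool
  default     = false
}

resource "google_compute_instance" "this" {
  # ...
  deletion_protection = var.deletion_protection
}
```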

Thanks!

Error: Invalid value for region: project: required field is not set

Describe the bug

I'm calling module externally like this:

source = "github.com/PaloAltoNetworks/terraform-google-vmseries-modules//modules/vmseries"
project = var.project_id

Expected behavior

TF apply should succeed, as I've supplied all required parameters and plan looks reasonable.

Current behavior

TF apply fails with message "Invalid value for region: project: required field is not set"

Plan looks like this:

  # module.vmseries.google_compute_address.private["0"] will be created
  + resource "google_compute_address" "private" {
      + address            = (known after apply)
      + address_type       = "INTERNAL"
      + creation_timestamp = (known after apply)
      + id                 = (known after apply)
      + name               = "test-0-private"
      + network_tier       = (known after apply)
      + project            = (known after apply)
      + purpose            = (known after apply)
      + region             = "us-central1"
      + self_link          = (known after apply)
      + subnetwork         = "https://www.googleapis.com/compute/v1/projects/XXX/regions/us-central1/subnetworks/YYY"
      + users              = (known after apply)
    }

Note project is not being set

Possible solution

Add the project attribute to the google_compute_address resources, similar to what has been done for google_compute_instance
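
A hypothetical sketch of the fix, passing the module's project input through explicitly (resource naming is illustrative):

```hcl
resource "google_compute_address" "private" {
  count = length(var.network_interfaces)

  # Set the project explicitly so the address is not left
  # "(known after apply)" with no project resolved.
  project = var.project
  name    = "${var.name}-${count.index}-private"
  region  = var.region
  # ...
}
```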

Alternate solution

In the parent module, set the project and region at provider level:

provider "google" {
  project = var.project_id
  region = var.region
}

Steps to reproduce

locals {
  subnet_prefix = "https://www.googleapis.com/compute/v1/projects/${var.project_id}/regions/${var.region}/subnetworks"
}

module "test" {
  source                     = "github.com/PaloAltoNetworks/terraform-google-vmseries-modules//modules/vmseries"
  project                    = var.project_id
  network_interfaces = [
    for subnet_name in var.subnet_names :
    {
      subnetwork = "${local.subnet_prefix}/${subnet_name}"
    }
  ]
}

Context

Doing Proof of Concept. Similar experiments with CheckPoint and Fortigate worked fine.

Your Environment

Google Cloud Platform

Multiple google_compute_firewall

Add L3_Default option for External LB to allow all ports and protocols to be forwarded

Is your feature request related to a problem?

To reduce the number of changes needed to the LB configuration, L3_DEFAULT allows all ports and protocols to be forwarded to the backend service.

Describe the solution you'd like

Add this option to ip_protocol, and associated backend_service resources
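
For illustration, a hypothetical forwarding rule with this option; with L3_DEFAULT the rule cannot list individual ports, so all_ports is set instead:

```hcl
resource "google_compute_forwarding_rule" "this" {
  name                  = var.name
  region                = var.region
  load_balancing_scheme = "EXTERNAL"
  ip_protocol           = "L3_DEFAULT" # forward all protocols
  all_ports             = true         # required with L3_DEFAULT
  backend_service       = google_compute_region_backend_service.this.self_link
}
```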

Describe alternatives you've considered

Considered creating a new LB_EXTERNAL module for this, but probably makes the most sense to merge the functionality in the existing one.

Additional context

Without this you constantly need to keep adding services to the LB

Panorama Licensing Plugin & bootstrapping through user data

Is your feature request related to a problem?

When using the Panorama licensing plugin, the authcode is passed to the VM-Series by Panorama, and when bootstrapping VM-Series there is an auth key generated by the plugin to use for authentication. In this case, the bootstrap method uses user data instead of a bucket.

Describe the solution you'd like

A new command set that deals with the licensing plugin and bootstrapping using user data
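
A hypothetical sketch of bootstrap parameters passed as instance metadata (user data) rather than via a storage bucket; the keys follow the VM-Series bootstrap parameter names, while the variable names are illustrative:

```hcl
resource "google_compute_instance" "vmseries" {
  # ...
  metadata = {
    type               = "dhcp-client"
    panorama-server    = var.panorama_address
    tplname            = var.panorama_template_stack
    dgname             = var.panorama_device_group
    vm-auth-key        = var.vm_auth_key # generated by the Panorama plugin
    plugin-op-commands = "panorama-licensing-mode-on"
  }
}
```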

Describe alternatives you've considered.

There isn't an alternative using Terraform natively

Additional context

Writing on behalf of a customer trying to automate the roll out of vm-series using bootstrapping and licensing the vm-series from Panorama

The vmseries module fails with "Cross-project references for this resource are not allowed"

When deploying the vmseries module through a Jenkins pipeline, the deployment fails with "Cross-project references for this resource are not allowed". This was resolved by adding the project parameter to the following resources in modules/vmseries/main.tf:

data "google_compute_image" "vmseries" on line 11
data "google_compute_subnetwork" "this" on line 18
resource "google_compute_address" "private" on line 24
resource "google_compute_address" "public" on line 30

Update Terraform versions to latest (0.12.x -> 1.x)

Describe the bug

We need the Terraform versions updated in the modules. We are expected to deliver customer projects in GCP with these modules.

Expected behavior

At least Terraform 1.0.8 at the time of this writing.

Current behavior

We would have to ask our customers to use Terraform 0.12.x in cloud shells and local environments.

create_public_ip always creates public IP if either true or false

Describe the bug

When using the vmseries module, specifying any value for create_public_ip parameter results in a public IP address being created. The only way to not have a public IP is to omit the field.

Expected behavior

create_public_ip = false should not create a public IP address

Current behavior

When using the vmseries module, specifying any value for create_public_ip parameter results in a public IP address being created. The only way to not have a public IP is to omit the field.

Possible solution
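
One possible cause (speculative): the module may gate address creation on the key being present rather than on its boolean value. A hypothetical illustration of the buggy versus fixed condition, with all names illustrative:

```hcl
resource "google_compute_address" "public" {
  # Buggy: any value, true or false, makes the key "present":
  #   for_each = { for k, v in var.network_interfaces : k => v
  #                if can(v.create_public_ip) }

  # Fixed: test the boolean value itself, defaulting to false:
  for_each = { for k, v in var.network_interfaces : k => v
               if try(v.create_public_ip, false) }

  name   = "${var.name}-${each.key}-public"
  region = var.region
}
```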

Steps to reproduce

  network_interfaces = [
    {
      subnetwork       = module.vpc.subnetworks["${var.name_prefix}outside"].self_link
      private_address  = each.value.private_ips["outside"]
      create_public_ip = false
    }
  ]

Your Environment

Panorama

Improve documentation of panorama example

  • Provide code snippets to generate and update the SSH key on the inputs.
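
For example, a snippet along these lines could be added to the example's README (file name, key type, and user name are illustrative):

```shell
# Generate an SSH key pair for the Panorama admin user
ssh-keygen -t rsa -b 4096 -f ./panorama-ssh-key -N "" -C admin

# Pass the public key to the example, e.g. in terraform.tfvars:
#   ssh_keys = "admin:<contents of panorama-ssh-key.pub>"
echo "admin:$(cat ./panorama-ssh-key.pub)"
```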

Conditional Disk support for Panorama

For Panorama VM creation, we need to have the ability to conditionally create n number of 2TB disk attachments.

Some customers may be running in management only and need 0 disks created.

When a customer enables logging on Panorama, they will typically start with a single 2TB disk, but more can be added later if additional logging space is needed. Due to the limitations of Panorama, disks can only be added in 2TB increments.

Hopefully this can be done with GCP provider in a way where additional disks can be added to an existing VM without forcing recreation of the VM.
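
A hypothetical sketch of n optional 2TB log disks attached with `google_compute_attached_disk`, so disks can be added later without recreating the Panorama VM (all names illustrative):

```hcl
variable "log_disk_count" {
  type    = number
  default = 0 # management-only deployments create no disks
}

resource "google_compute_disk" "log" {
  count = var.log_disk_count
  name  = "panorama-log-${count.index}"
  type  = "pd-ssd"
  size  = 2048 # GB
  zone  = var.zone
}

# Attaching outside google_compute_instance means adding a disk later
# does not force recreation of the VM.
resource "google_compute_attached_disk" "log" {
  count    = var.log_disk_count
  disk     = google_compute_disk.log[count.index].id
  instance = google_compute_instance.panorama.id
}
```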

location of "google_storage_bucket" in module "bootstrap" not set - defaults to "US"

Is your feature request related to a problem?

Currently, none of the storage buckets created by the bootstrap module set the location argument, which leads to it defaulting to "US".
https://registry.terraform.io/providers/hashicorp/google/3.90.1/docs/resources/storage_bucket

Describe the solution you'd like

The bootstrap module should take a GCP location as input and use that to deploy the bootstrap bucket into the specified location.
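
A hypothetical sketch of the new input and its use (variable and bucket names illustrative):

```hcl
variable "location" {
  description = "Location of the bootstrap bucket, e.g. EU or europe-west3."
  type        = string
}

resource "google_storage_bucket" "this" {
  name     = var.bucket_name # illustrative; the module derives its own name
  location = var.location
  # ...
}
```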

Additional context

Many customers have location restrictions defined in their GCP projects which leads to errors like this:
Error: googleapi: Error 412: 'us' violates constraint 'constraints/gcp.resourceLocations', conditionNotMet

when trying to deploy a VM-Series firewall with the module from this repository.

Also, with the current version of the hashicorp/google provider this argument becomes required, so this change would also make it easier to bump the module's provider version to 4.x.


Conditional support for Panorama logging disk

We need to have support for n number of logging disks to attach to each panorama VM.

Some customers will be running management-only mode without any logging disks.

Most commonly, customers will start with a single 2TB disk but need the option to attach additional disks later if more space is needed. Due to the limitations of Panorama, logging capacity can only grow by attaching disks in 2TB increments.

Verify/Cleanup Examples

  • regional_workspacesc - get rid of this example and ensure it doesn't break any other examples
  • iam_service_account - get rid of this example and ensure it doesn't break any other examples
  • lb_http_ext_global

root README documentation

Add documentation in the root directory that would be ready for a public release to Terraform Registry.

Bump up minimum Terraform version to 1.0

Is your feature request related to a problem?

Since we are bumping the minimum Terraform version in the AzureRM modules, this is a good moment to bump the TF version in our other repositories as well.

Describe the solution you'd like

Bump the minimum TF version across the whole code stack

Improve checkov output in pre-commit.

Is your feature request related to a problem?

Right now we don't specify which checks we want to run; as a result, all of the errors checkov raises are skipped because they touch things we don't need or want to check.

Describe the solution you'd like

We need to decide which checks we want checkov to run and configure it properly.
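
For illustration, a hypothetical `.checkov.yaml` at the repository root; the skipped check ID is an example only and the actual selection would be up to the maintainers:

```yaml
framework:
  - terraform
compact: true
quiet: true          # show failed checks only
skip-check:
  - CKV_GCP_26       # example ID; to be chosen by the maintainers
```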

vmseries module - discrepancy in network_interface parameters in main.tf

Describe the bug

Line 35 of main.tf uses private_address as a network_interface parameter, but private_ip is used in variables.tf and in
the module documentation

Expected behavior

If private_ip is used when calling the module, the specified ip address should be used.

Current behavior

The content of private_ip is ignored, and a random ip address is used.

Possible solution

The parameter, private_address, on line 35 of main.tf should be changed to private_ip
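
A hypothetical sketch of the one-line fix (the surrounding block is illustrative; `network_ip` is the provider attribute the value feeds into):

```hcl
network_interface {
  subnetwork = each.value.subnetwork
  network_ip = try(each.value.private_ip, null) # was: each.value.private_address
}
```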

Refactor Panorama module

  • Remove custom image creation from the existing module. Move the custom image creation to an example
  • By default use the Panorama image from the GCP Marketplace
  • The module should accept a custom image URI as an optional input
  • Ensure the Panorama creation is adhering to the singleton pattern

Panorama logging disks are removed upon subsequent apply

Describe the bug

Log disks attached to a panorama instance at initial apply will get removed from it upon next run.

Expected behavior

Disks remain attached.

Current behavior

Disks are removed.

Possible solution

  1. Add a lifecycle argument to ignore changes to attached_disk, as described in the google_compute_attached_disk resource documentation.
  2. Dynamically attach disks within google_compute_instance resource.
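
A sketch of option 1, following the guidance in the google_compute_attached_disk documentation: let the attached-disk resource own the attachment and tell the instance to ignore it (resource name illustrative):

```hcl
resource "google_compute_instance" "panorama" {
  # ...
  lifecycle {
    # Without this, the instance's empty attached_disk list wins on the
    # next apply and detaches the disks.
    ignore_changes = [attached_disk]
  }
}
```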

Steps to reproduce

  1. Apply panorama example
  2. Issue another apply

Drop the `tcp` part from the load balancer modules

The two load balancer modules are currently named lb_tcp_external and lb_tcp_internal.

However, these modules can be used with other supported IP protocols, not just TCP. Therefore we should drop the TCP part from their names:

lb_tcp_external -> lb_external
lb_tcp_internal -> lb_internal

We must also fix the examples to use the new module names

HA Support

Introduce HA support for GCP load balancers

Remove provider version in autoscale module

Describe the bug

The version constraints for the null and random providers conflict with those in a different module (bootstrap).

Expected behavior

The provider version should start from 3.1

Current behavior

null | ~> 2.1
random | ~> 2.3

Possible solution

Update the autoscale module.

Your Environment

null ~> 2.1
random ~> 2.3

Refactor vmseries module

The vmseries module should follow the singleton design pattern, as we did with the AWS and Azure modules. The current module creates multiple instances of VM-Series firewalls.

Remove hardcoded variable values from cloud nat module in Autoscale example

Describe the bug

The variable values for project_id and region in the mgmt_cloud_nat module used in the autoscale example are hardcoded, despite those parameters being defined in the example.tfvars file.

Expected behavior

The variable values for project_id and region should be taken from the example.tfvars file, not hardcoded; the current behavior may be misleading for users of the example.

Current behavior

Resources defined in the cloud_nat module in the autoscale example may be created in a different project and region than the other resources, which take their values from example.tfvars.

Possible solution

Remove the hardcoded values and substitute them with variable references.
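
A hypothetical sketch of the fix in the autoscale example, referencing the example's variables instead of literal values (module source shown for illustration):

```hcl
module "mgmt_cloud_nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = var.project_id
  region     = var.region
  # ...
}
```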

Steps to reproduce

Run the autoscale example: the cloud NAT resources will be created in europe-west4, while the rest of the resources will be created in the us-central1 region
