Google Cloud Computing Foundations

Repository of my study notes from the Google Cloud Computing Foundations Certificate & Google Cloud Essentials.

Important

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.

Cloud Shell >_

Cloud Shell is a Debian-based virtual machine with a persistent 5-GB home directory, which makes it easy for you to manage your Google Cloud projects and resources.

Update the OS

sudo apt-get update

Table format in Cloud Shell

gcloud config set accessibility/screen_reader False

For more information see Enabling accessibility features.

List the active account name

gcloud auth list
Expected Output
ACTIVE: *
ACCOUNT: "ACCOUNT"

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

List the project ID

gcloud config list project
Expected Output
[core]
project = "PROJECT_ID"

View the project ID

gcloud config get-value project

View details about the project

gcloud compute project-info describe --project $(gcloud config get-value project)

Note

When the google-compute-default-region and google-compute-default-zone keys and values are missing from the output, no default zone or region is set.

View the list of configurations in your environment

gcloud config list

To see all properties and their settings

gcloud config list --all

List your components

gcloud components list

Filtering

gcloud compute instances list --filter="name=('INSTANCE_NAME')"

List the firewall rules in the project

gcloud compute firewall-rules list

List the firewall rules for the default network

gcloud compute firewall-rules list --filter="network='default'"

List the firewall rules for the default network where the allow rule matches an ICMP rule

gcloud compute firewall-rules list --filter="NETWORK:'default' AND ALLOW:'icmp'"

View the available logs on the system

gcloud logging logs list

View the logs that relate to compute resources

gcloud logging logs list --filter="compute"

Read the logs related to the resource type of gce_instance

gcloud logging read "resource.type=gce_instance" --limit 5

Read the logs for a specific virtual machine

gcloud logging read "resource.type=gce_instance AND labels.instance_name='<INSTANCE_NAME>'" --limit 5

Regions and Zones

Regions: Regions are collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region.

Zones: A zone is a deployment area within a region. The fully-qualified name for a zone is made up of <region>-<zone>.

Set the project region by default

gcloud config set compute/region <REGION>

Set the project zone by default

gcloud config set compute/zone <ZONE>

After setting the default region and zone, you don't have to append the --zone flag every time.

Create a variable for region

export REGION=<REGION>

Create a variable for zone

export ZONE=<ZONE>
export ZONE=$(gcloud config get-value compute/zone)

Create an environment variable to store your PROJECT_ID

export PROJECT_ID=$(gcloud config get-value project)

To use the variable: $VARIABLE

To verify that your variables were set properly

echo -e "PROJECT ID: $PROJECT_ID\nZONE: $ZONE"

Note

When you run gcloud on your own machine, the config settings are persisted across sessions. But in Cloud Shell, you need to set this for every new session or reconnection.

1. Create a Virtual Machine

Compute Engine allows you to create virtual machines (VMs) that run different operating systems, including multiple flavors of Linux (Debian, Ubuntu, Suse, Red Hat, CoreOS) and Windows Server, on Google infrastructure.

Create a new VM instance

gcloud compute instances create <INSTANCE_NAME> --machine-type <MACHINE_TYPE> --zone=$ZONE
Expected Output
Created [..."INSTANCE_NAME"].
     NAME: "INSTANCE_NAME"
     ZONE:  "ZONE"
     MACHINE_TYPE: "MACHINE_TYPE"
     PREEMPTIBLE:
     INTERNAL_IP: 10.128.0.3
     EXTERNAL_IP: 34.136.51.150
     STATUS: RUNNING
Where

gcloud compute allows you to manage your Compute Engine resources in a format that's simpler than the Compute Engine API.

instances create creates a new instance.

The --machine-type flag specifies the machine type.

The --zone flag specifies where the VM is created.

If you omit the --zone flag, the gcloud tool can infer your desired zone based on your default properties. Other required instance settings, such as machine type and image, are set to default values if not specified in the create command.
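
For example, once a default zone is set, the following works without a --zone flag (the instance name and machine type here are hypothetical):

# "my-vm" is an illustrative name; the default zone and image are used
gcloud compute instances create my-vm --machine-type=e2-medium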

List the compute instance available in the project

gcloud compute instances list

To see all defaults

gcloud compute instances create --help

Press Enter or the spacebar to scroll through the help content.

To exit the content, type Q.

To exit help, press CTRL + C.

1.1 Remote Desktop (RDP) into the Windows Server

To create a Windows Server, follow these steps while creating a Virtual Machine in

  • Cloud Console:
  1. In the Boot disk section, click Change to begin configuring your boot disk.
  2. Under Operating system select Windows Server
  • Cloud Shell
gcloud compute instances create <INSTANCE_NAME> \
    --image-project windows-cloud \
    --image-family <IMAGE_FAMILY> \
    --machine-type <MACHINE_TYPE> \
    --boot-disk-size <BOOT_DISK_SIZE> \
    --boot-disk-type <BOOT_DISK_TYPE>
Where

<INSTANCE_NAME> is the name for the new instance.

<IMAGE_FAMILY> is one of the public image families for Windows Server images.

<MACHINE_TYPE> is one of the available machine types.

<BOOT_DISK_SIZE> is the size of the boot disk in GB. Larger persistent disks have higher throughput.

<BOOT_DISK_TYPE> is the type of the boot disk for your instance. For example, pd-ssd.

To see whether the server instance is ready for an RDP connection

gcloud compute instances get-serial-port-output <INSTANCE_NAME> --zone=$ZONE

If prompted, type N and press ENTER.

Repeat the command until you see the following in the command output: Instance setup finished. instance is ready to use.

To set a password for logging into the RDP

gcloud compute reset-windows-password <INSTANCE_NAME> --zone $ZONE --user <USERNAME>

If asked Would you like to set or reset the password for [admin] (Y/n)?, enter Y.

Record the password for use in later steps to connect.

Connect to your server

Connect through an RDP app already installed on your computer (enter the external IP of your VM), or RDP directly from the Chrome browser using the Spark View extension (use your Windows username admin and the password you previously recorded).

1.2 Use SSH to connect to your instance

gcloud compute ssh <INSTANCE_NAME> --zone=$ZONE

When prompted Do you want to continue? (Y/n) type Y.

Disconnect from SSH by exiting from the remote shell: exit.

To leave the passphrase empty, press Enter twice.

Note

You have connected to a virtual machine. Notice how the command prompt changed?

The prompt now says something similar to sa_107021519685252337470@<INSTANCE_NAME>.

The reference before the @ indicates the account being used.

The part after the @ sign indicates the host machine being accessed.

ssh username@hostname

2. Create a NGINX Web Server

Install NGINX

sudo apt-get install -y nginx
Expected Output
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 The following additional packages will be installed:
 ...

Confirm that NGINX is running

ps auwx | grep nginx
Expected Output
root      2330  0.0  0.0 159532  1628 ?        Ss   14:06   0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data  2331  0.0  0.0 159864  3204 ?        S    14:06   0:00 nginx: worker process
www-data  2332  0.0  0.0 159864  3204 ?        S    14:06   0:00 nginx: worker process
root      2342  0.0  0.0  12780   988 pts/0    S+   14:07   0:00 grep nginx

To see the web page

http://EXTERNAL_IP/

2.1 Update the Firewall

List the firewall rules for the project

gcloud compute firewall-rules list

Note

Communication with the virtual machine will fail as it does not have an appropriate firewall rule. The nginx web server is expecting to communicate on tcp:80. To get communication working you need to:

  1. Add a tag to the virtual machine

  2. Add a firewall rule for http traffic

Add a tag to the virtual machine

gcloud compute instances add-tags <INSTANCE_NAME> --tags http-server,https-server

Update the firewall rule to allow

gcloud compute firewall-rules create <default-allow-http> --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server

List the firewall rules for the project

gcloud compute firewall-rules list --filter=ALLOW:'80'
Expected Output
NAME                  NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
<default-allow-http>  default  INGRESS    1000      tcp:80        False

Verify communication is possible for http to the virtual machine

curl http://$(gcloud compute instances list --filter=name:<INSTANCE_NAME> --format='value(EXTERNAL_IP)')

You will see the default nginx output.

3. Kubernetes Engine

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically Compute Engine instances) grouped to form a container cluster.

Create a GKE cluster

gcloud container clusters create --machine-type=<MACHINE_TYPE> --zone=$ZONE <CLUSTER_NAME>

A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.

You can ignore any warnings in the output. It might take several minutes to finish creating the cluster.

Note

Cluster names must start with a letter and end with an alphanumeric, and cannot be longer than 40 characters.

Expected Output
NAME: <CLUSTER_NAME>
LOCATION: <ZONE>
MASTER_VERSION: 1.22.8-gke.202
MASTER_IP: 34.67.240.12
MACHINE_TYPE: <MACHINE_TYPE>
NODE_VERSION: 1.22.8-gke.202
NUM_NODES: 3
STATUS: RUNNING

Authenticate with the cluster

gcloud container clusters get-credentials <CLUSTER_NAME>

After creating your cluster, you need authentication credentials to interact with it.

Expected Output
Fetching cluster endpoint and auth data.
kubeconfig entry generated for <CLUSTER_NAME>.

3.1 Deploy an application to the cluster

GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.

To create a new Deployment hello-server from the hello-app container image

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
Expected Output
deployment.apps/hello-server created

This Kubernetes command creates a deployment object that represents hello-server.

In this case, --image specifies a container image to deploy. The command pulls the example image from a Container Registry bucket.

gcr.io/google-samples/hello-app:1.0 indicates the specific image version to pull. If a version is not specified, the latest version is used.
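
To confirm the rollout, you can list the Deployment and its Pods:

kubectl get deployments
kubectl get pods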

To create a Kubernetes Service

This is a Kubernetes resource that lets you expose your application to external traffic

kubectl expose deployment hello-server --type=LoadBalancer --port 8080
Expected Output
service/hello-server exposed
Where

--port specifies the port that the container exposes.

--type=LoadBalancer creates a Compute Engine load balancer for your container.

To inspect the hello-server Service

kubectl get service
Expected Output
NAME             TYPE            CLUSTER-IP      EXTERNAL-IP     PORT(S)           AGE
hello-server     LoadBalancer    10.39.244.36    35.202.234.26   8080:31991/TCP    65s
kubernetes       ClusterIP       10.39.240.1     <none>          443/TCP           5m13s

Note

It might take a minute for an external IP address to be generated.

Run the previous command again if the EXTERNAL-IP column status is pending.
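
Alternatively, watch the Service until the external IP appears (press CTRL + C to stop):

kubectl get service hello-server --watch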

To view the application from your web browser

Open a new tab and enter the following address, replacing <EXTERNAL-IP> with the EXTERNAL-IP for hello-server.

http://<EXTERNAL-IP>:8080
Expected Output

The browser tab displays the message Hello, world! as well as the version and hostname.
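
You can also check from Cloud Shell with curl (replace <EXTERNAL-IP> with the Service's external IP):

curl http://<EXTERNAL-IP>:8080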

To delete the cluster

gcloud container clusters delete <CLUSTER_NAME>

When prompted, type Y to confirm.

Deleting the cluster can take a few minutes.

4. Set Up Network and HTTP Load Balancers

There are several ways you can load balance on Google Cloud. Here you will set up the following load balancers:

  • Network Load Balancer
  • HTTP(s) Load Balancer

4.1 Create multiple web server instances

Create a virtual machine <www1> in your default zone

gcloud compute instances create <www1> \
  --zone=$ZONE \
  --tags=network-lb-tag \
  --machine-type=e2-small \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "
<h3>Web Server: <www1></h3>" | tee /var/www/html/index.html'

For this load balancing scenario, three Compute Engine VM instances are created with Apache installed on each of them; the command above creates the first one, <www1>. The other two are created the same way, as shown below.
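
A sketch for <www2> (repeat with <www3>, updating the name echoed into index.html):

gcloud compute instances create <www2> \
  --zone=$ZONE \
  --tags=network-lb-tag \
  --machine-type=e2-small \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --metadata=startup-script='#!/bin/bash
    apt-get update
    apt-get install apache2 -y
    service apache2 restart
    echo "
<h3>Web Server: <www2></h3>" | tee /var/www/html/index.html'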

Create a firewall rule to allow external traffic to the VM instances

gcloud compute firewall-rules create <www-firewall-network-lb> \
    --target-tags network-lb-tag --allow tcp:80

Now you need to get the external IP addresses of your instances and verify that they are running.

Run the following to list your instances

gcloud compute instances list

Verify that each instance is running

curl http://<IP_ADDRESS>

Replace <IP_ADDRESS> with the IP address for each of your VMs.

4.2 Configure the load balancing service

When you configure the load balancing service, your virtual machine instances receive packets that are destined for the static external IP address you configure. Instances made with a Compute Engine image are automatically configured to handle this IP address.

Note

Learn more about how to set up network load balancing from the External TCP/UDP Network Load Balancing overview Guide.

Create a static external IP address for your load balancer

gcloud compute addresses create <network-lb-ip-1> \
  --region $REGION
Expected Output
Created [https://www.googleapis.com/compute/v1/projects/qwiklabs-gcp-03-xxxxxxxxxxx/regions//addresses/network-lb-ip-1].

Add a legacy HTTP health check resource

gcloud compute http-health-checks create <basic-check>

To create the target pool and use the health check, which is required for the service to function

gcloud compute target-pools create <www-pool> \
  --region $REGION --http-health-check <basic-check>

Add the instances to the pool:

gcloud compute target-pools add-instances <www-pool> \
    --instances <www1>,<www2>,<www3>

Add a forwarding rule

gcloud compute forwarding-rules create <www-rule> \
    --region  $REGION \
    --ports 80 \
    --address <network-lb-ip-1> \
    --target-pool <www-pool>

4.3 Sending traffic to your instances

To view the external IP address of the <www-rule> forwarding rule used by the load balancer

gcloud compute forwarding-rules describe <www-rule> --region $REGION

Access the external IP address

IPADDRESS=$(gcloud compute forwarding-rules describe <www-rule> --region $REGION --format="json" | jq -r .IPAddress)

Show the external IP address

echo $IPADDRESS

To access the external IP address

while true; do curl -m1 $IPADDRESS; done

The $IPADDRESS variable holds the external IP address obtained from the previous command.

Use Ctrl + C to stop running the command.

Note

The response from the curl command alternates randomly among the three instances. If your response is initially unsuccessful, wait approximately 30 seconds for the configuration to be fully loaded and for your instances to be marked healthy before trying again.

4.4 Create an HTTP load balancer

HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed globally and operate together using Google's global network and control plane. You can configure URL rules to route some URLs to one set of instances and route other URLs to other instances.

Requests are always routed to the instance group that is closest to the user, if that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.

Important

To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides VMs running the backend servers of an external HTTP load balancer.

To create the load balancer template

gcloud compute instance-templates create <lb-backend-template> \
   --region=$REGION \
   --network=default \
   --subnet=default \
   --tags=allow-health-check \
   --machine-type=e2-medium \
   --image-family=debian-11 \
   --image-project=debian-cloud \
   --metadata=startup-script='#!/bin/bash
     apt-get update
     apt-get install apache2 -y
     a2ensite default-ssl
     a2enmod ssl
     vm_hostname="$(curl -H "Metadata-Flavor:Google" \
     http://169.254.169.254/computeMetadata/v1/instance/name)"
     echo "Page served from: $vm_hostname" | \
     tee /var/www/html/index.html
     systemctl restart apache2'

Managed instance groups (MIGs) let you operate apps on multiple identical VMs.

You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.

Create a managed instance group based on the template

gcloud compute instance-groups managed create <lb-backend-group> \
   --template=<lb-backend-template> --size=2 --zone=$ZONE
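
As an aside, the automated MIG services mentioned above can be enabled on this group; a minimal autoscaling sketch with illustrative values:

# illustrative values; not required for this setup
gcloud compute instance-groups managed set-autoscaling <lb-backend-group> \
  --zone=$ZONE \
  --max-num-replicas=5 \
  --target-cpu-utilization=0.6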

Create the <fw-allow-health-check> firewall rule

gcloud compute firewall-rules create <fw-allow-health-check> \
  --network=default \
  --action=allow \
  --direction=ingress \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check \
  --rules=tcp:80

Note

The ingress rule allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16).

To set up a global static external IP address that your customers use to reach your load balancer

gcloud compute addresses create <lb-ipv4-1> \
  --ip-version=IPV4 \
  --global

Note the IPv4 address that was reserved

gcloud compute addresses describe <lb-ipv4-1> \
  --format="get(address)" \
  --global

Create a health check for the load balancer

gcloud compute health-checks create http <http-basic-check> \
  --port 80

Note

Google Cloud provides health checking mechanisms that determine whether backend instances respond properly to traffic. For more information, please refer to the Creating health checks document.

Create a backend service:

gcloud compute backend-services create <web-backend-service> \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=<http-basic-check> \
  --global

Add your instance group as the backend to the backend service

gcloud compute backend-services add-backend <web-backend-service> \
  --instance-group=<lb-backend-group> \
  --instance-group-zone=$ZONE \
  --global

Create a URL map to route the incoming requests to the default backend service

gcloud compute url-maps create <web-map-http> \
    --default-service <web-backend-service>

Note

A URL map is a Google Cloud configuration resource used to route requests to backend services or backend buckets.

For example, with an external HTTP(S) load balancer, you can use a single URL map to route requests to different destinations based on the rules configured in the URL map:
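
A minimal sketch of such a rule using a path matcher (the video backend service here is hypothetical and not created in this guide):

# <video-backend-service> is hypothetical; <web-map-http> and <web-backend-service> are created in this section
gcloud compute url-maps add-path-matcher <web-map-http> \
    --path-matcher-name=video-matcher \
    --default-service=<web-backend-service> \
    --path-rules="/video/*=<video-backend-service>" \
    --new-hosts="*"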

Create a target HTTP proxy to route requests to your URL map

gcloud compute target-http-proxies create <http-lb-proxy> \
    --url-map <web-map-http>

Create a global forwarding rule to route incoming requests to the proxy

gcloud compute forwarding-rules create <http-content-rule> \
   --address=<lb-ipv4-1> \
   --global \
   --target-http-proxy=<http-lb-proxy> \
   --ports=80

Note

A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. Learn more about the general understanding of forwarding rules from the Forwarding rule overview Guide.

4.5 Testing traffic sent to your instances

  1. In the Google Cloud console, from the Navigation menu, go to Network services > Load balancing.
  2. Click on the load balancer that you just created (web-map-http).
  3. In the Backend section, click on the name of the backend and confirm that the VMs are Healthy. If they are not healthy, wait a few moments and try reloading the page.
  4. When the VMs are healthy, test the load balancer using a web browser, going to http://IP_ADDRESS/, replacing IP_ADDRESS with the load balancer's IP address.

This may take three to five minutes. If you do not connect, wait a minute, and then reload the browser.

Your browser should render a page with content showing the name of the instance that served the page, along with its zone (for example, Page served from: lb-backend-group-xxxx).
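
The same check can be run from Cloud Shell (replace IP_ADDRESS with the load balancer's IP address):

curl -m1 http://IP_ADDRESS/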

5. App Engine with python

App Engine allows developers to focus on doing what they do best, writing code, and not what it runs on. The notion of servers, virtual machines, and instances has been abstracted away, with App Engine providing all the compute necessary. Developers don't have to worry about operating systems, web servers, logging, monitoring, load balancing, system administration, or scaling, as App Engine takes care of all that.

5.1 Enable Google App Engine Admin API

The App Engine Admin API enables developers to provision and manage their App Engine Applications.

In the Cloud Console

  1. In the left Navigation menu, click APIs & Services > Library.
  2. Type "App Engine Admin API" in the search box.
  3. Click the App Engine Admin API card.

If there is no prompt to enable the API, then it is already enabled and no action is needed.
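
Alternatively, you can enable the API from Cloud Shell; a minimal sketch, assuming appengine.googleapis.com is the service name for the App Engine Admin API:

# enables the App Engine Admin API for the current project
gcloud services enable appengine.googleapis.com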

5.2 Download the Hello World app

There is a simple Hello World app for Python you can use to quickly get a feel for deploying an app to Google Cloud.

To copy the Hello World sample app repository to your Google Cloud instance

git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git

Go to the directory that contains the sample code

cd python-docs-samples/appengine/standard_python3/hello_world

5.3 Test the application

Test the application using the Google Cloud development server (dev_appserver.py), which is included with the preinstalled App Engine SDK.

Start the Google Cloud development server

From within your helloworld directory where the app's app.yaml configuration file is located

dev_appserver.py app.yaml

The development server is now running and listening for requests on port 8080.

View the results by clicking the Web preview > Preview on port 8080.

You'll see a "Hello World!" in a new browser window.

5.4 Make a change

You can leave the development server running while you develop your application. The development server watches for changes in your source files and reloads them if necessary.

Leave the development server running

Click the (+) next to your Cloud Shell tab to open a new command line session

To go to the directory that contains the sample code

cd python-docs-samples/appengine/standard_python3/hello_world

To open main.py in nano to edit the content

nano main.py

Change "Hello World!" to "Hello, Cruel World!". Save the file with CTRL-S and exit with CTRL-X.

Reload the Hello World! Browser or click the Web Preview > Preview on port 8080 to see the results

Browser window with Hello, Cruel World! on the page.

5.5 Deploy your app

To deploy your app to App Engine

From within the root directory of your application where the app.yaml file is located

gcloud app deploy

Enter the number that represents your region

The App Engine application will then be created.

Expected Output
Creating App Engine application in project [qwiklabs-gcp-233dca09c0ab577b] and region ["REGION"]....done.
Services to deploy:

descriptor:      [/home/gcpstaging8134_student/python-docs-samples/appengine/standard/hello_world/app.yaml]
source:          [/home/gcpstaging8134_student/python-docs-samples/appengine/standard/hello_world]
target project:  [qwiklabs-gcp-233dca09c0ab577b]
target service:  [default]
target version:  [20171117t072143]
target url:      [https://qwiklabs-gcp-233dca09c0ab577b.appspot.com]

Do you want to continue (Y/n)?

Enter Y when prompted to confirm the details and begin the deployment of service.

Expected Output
Beginning deployment of service [default]...
Some files were skipped. Pass `--verbosity=info` to see which ones.
You may also view the gcloud log file, found at
[/tmp/tmp.dYC7xGu3oZ/logs/2017.11.17/07.18.27.372768.log].
Uploading 5 files to Google Cloud Storage
File upload done.
Updating service [default]...done.
Waiting for operation [apps/qwiklabs-gcp-233dca09c0ab577b/operations/2e88ab76-33dc-4aed-93c4-fdd944a95ccf] to complete...done.
Updating service [default]...done.
Deployed service [default] to [https://qwiklabs-gcp-233dca09c0ab577b.appspot.com]

You can stream logs from the command line by running:
  $ gcloud app logs tail -s default

To view your application in the web browser run:
  $ gcloud app browse

Note

If you receive an "Unable to retrieve P4SA" error while deploying the app, re-run the command above.

5.6 View your application

To launch your browser enter the following command

gcloud app browse

Click on the link it provides:

Expected Output
Did not detect your browser. Go to this link to view your app:
https://qwiklabs-gcp-233dca09c0ab577b.appspot.com

Your application is deployed and you can read the short message in your browser.

6. Cloud Functions

A cloud function is a piece of code that runs in response to an event, such as an HTTP request, a message from a messaging service, or a file upload. Cloud events are things that happen in your cloud environment.

6.1 Create a function

This function writes a message to the Cloud Functions logs.

It is triggered by cloud function events and accepts a callback function used to signal completion of the function.

The Cloud Function event is a Cloud Pub/Sub topic event. Pub/Sub is a messaging service where the senders of messages are decoupled from the receivers of messages. When a message is sent or posted, a subscription is required for a receiver to be alerted and receive the message. To learn more: A Google-Scale Messaging Service.

Set the default region

gcloud config set compute/region <REGION>

Create a directory for the function code

mkdir gcf_hello_world

Move to the gcf_hello_world directory

cd gcf_hello_world

Create and open index.js to edit

nano index.js

Copy the following into the index.js file

/**
 * Background Cloud Function to be triggered by Pub/Sub.
 * This function is exported by index.js, and executed when
 * the trigger topic receives a message.
 *
 * @param {object} data The event payload.
 * @param {object} context The event metadata.
 */
exports.helloWorld = (data, context) => {
  const pubSubMessage = data;
  const name = pubSubMessage.data
    ? Buffer.from(pubSubMessage.data, 'base64').toString()
    : "Hello World";

  console.log(`My Cloud Function: ${name}`);
};

Exit nano (Ctrl+x) and save (Y) the file.

6.2 Create a Cloud Storage bucket

To create a new Cloud Storage bucket

gsutil mb -p <PROJECT_ID> gs://<BUCKET_NAME>

6.3 Deploy your function

When deploying a new function, you must specify --trigger-topic, --trigger-bucket, or --trigger-http. When deploying an update to an existing function, the function keeps the existing trigger unless otherwise specified.

Disable the Cloud Functions API

gcloud services disable cloudfunctions.googleapis.com

Re-enable the Cloud Functions API

gcloud services enable cloudfunctions.googleapis.com

Add the artifactregistry.reader permission for your appspot service account

gcloud projects add-iam-policy-binding <PROJECT_ID> \
--member="serviceAccount:<PROJECT_ID>@appspot.gserviceaccount.com" \
--role="roles/artifactregistry.reader"

Deploy the function to a pub/sub topic

gcloud functions deploy helloWorld \
  --stage-bucket <BUCKET_NAME> \
  --trigger-topic hello_world \
  --runtime nodejs20

Note

If you get OperationError, ignore the warning and re-run the command.

If prompted, enter Y to allow unauthenticated invocations of a new function.

Verify the status of the function

gcloud functions describe helloWorld

An ACTIVE status indicates that the function has been deployed.

Every message published to the topic triggers function execution; the message contents are passed as input data.
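
For example, publishing directly to the trigger topic also invokes the function; a minimal sketch, assuming the hello_world topic was created by the --trigger-topic deployment:

# the message text is arbitrary
gcloud pubsub topics publish hello_world --message "Hello from Pub/Sub"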

6.4 Test the function

After you deploy the function and know that it's active, test that the function writes a message to the cloud log after detecting an event.

To create a message test of the function

DATA=$(printf 'Hello World!'|base64) && gcloud functions call helloWorld --data '{"data":"'$DATA'"}'

The cloud tool returns the execution ID for the function, which means a message has been written in the log.

Example Output
executionId: 3zmhpf7l6j5b

View logs to confirm that there are log messages with that execution ID.

6.5 View logs

Check the logs to see your messages in the log history

gcloud functions logs read helloWorld
Example Output
LEVEL: D
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:31.545
LOG: Function execution took 912 ms, finished with status: 'ok'
 
LEVEL: I
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:31.533
LOG: My Cloud Function: Hello World!
 
LEVEL: D
NAME: helloWorld
EXECUTION_ID: 4bgl3jw2a9i3
TIME_UTC: 2023-03-23 13:45:30.633
LOG: Function execution started

Note

The logs can take around 10 minutes to appear. Alternatively, you can view the logs by going to Logging > Logs Explorer.

Your application is deployed, tested, and you can view the logs.

7. Cloud Storage

Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. You can use Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download. It is used for unstructured data, organized in buckets.

7.1 Create a bucket

Bucket naming rules
  • Do not include sensitive information in the bucket name, because the bucket namespace is global and publicly visible.
  • Bucket names must contain only lowercase letters, numbers, dashes (-), underscores (_), and dots (.). Names containing dots require verification.
  • Bucket names must start and end with a number or letter.
  • Bucket names must contain 3 to 63 characters. Names containing dots can contain up to 222 characters, but each dot-separated component can be no longer than 63 characters.
  • Bucket names cannot be represented as an IP address in dotted-decimal notation (for example, 192.168.5.4).
  • Bucket names cannot begin with the "goog" prefix.
  • Bucket names cannot contain "google" or close misspellings of "google".
  • For DNS compliance and future compatibility, you should not use underscores (_) or have a period adjacent to another period or dash. For example, ".." or "-." or ".-" are not valid in DNS names.

To make a bucket

mb is the make bucket command.

Remember to follow the bucket naming rules above.

gsutil mb gs://<YOUR-BUCKET-NAME>

This command creates a bucket with default settings.
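
If you want to choose the location or storage class yourself instead of using defaults, gsutil mb accepts flags for both; a sketch with illustrative values:

# <REGION> and STANDARD are illustrative; omit the flags to use defaults
gsutil mb -l <REGION> -c STANDARD gs://<YOUR-BUCKET-NAME>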

To see default settings, use the Cloud console Navigation menu > Cloud Storage, then click on your bucket name, and click on the Configuration tab.

Note

If the bucket name is already taken, either by you or someone else, try again with a different bucket name.

7.2 Upload an object into your bucket

To download this image (ada.jpg) to your Cloud Shell session

curl https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Ada_Lovelace_portrait.jpg/800px-Ada_Lovelace_portrait.jpg --output ada.jpg

To upload the image from the location where you saved it to the bucket you created

gsutil cp ada.jpg gs://YOUR-BUCKET-NAME

Note

When typing your bucket name, you can use the tab key to autocomplete it.

You can see the image load into your bucket from the command line.

Now remove the downloaded image

rm ada.jpg

7.3 Download an object from your bucket

To download the image you stored in your bucket to Cloud Shell

gsutil cp -r gs://YOUR-BUCKET-NAME/ada.jpg .
Expected Output
Copying gs://YOUR-BUCKET-NAME/ada.jpg...
/ [1 files][360.1 KiB/360.1 KiB]
Operation completed over 1 objects/360.1 KiB.

7.4. Copy an object to a folder in the bucket

To create a folder called image-folder and copy the image (ada.jpg) into it

gsutil cp gs://YOUR-BUCKET-NAME/ada.jpg gs://YOUR-BUCKET-NAME/image-folder/

Note

Compared to local file systems, folders in Cloud Storage have limitations, but many of the same operations are supported.

Expected Output
Copying gs://YOUR-BUCKET-NAME/ada.jpg [Content-Type=image/png]...
- [1 files] [ 360.1 KiB/ 360.1 KiB]
Operation completed over 1 objects/360.1 KiB

7.5 List contents of a bucket or folder

To list the contents of the bucket

gsutil ls gs://YOUR-BUCKET-NAME
Expected Output
gs://YOUR-BUCKET-NAME/ada.jpg
gs://YOUR-BUCKET-NAME/image-folder/

7.6 List details for an object

To get some details about the image file you uploaded to your bucket

gsutil ls -l gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
306768  2017-12-26T16:07:570Z  gs://YOUR-BUCKET-NAME/ada.jpg
TOTAL: 1 objects, 30678 bytes (360.1 KiB)

7.7 Make your object publicly accessible

ACL (Access Control List): a mechanism you can use to define who has access to your buckets and objects.

To grant all users read permission for the object stored in your bucket

gsutil acl ch -u AllUsers:R gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Updated ACL on gs://YOUR-BUCKET-NAME/ada.jpg

Your image is now public, and can be made available to anyone.
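
One way to verify this from Cloud Shell (the console steps below show another): public objects are served at https://storage.googleapis.com/<bucket>/<object>.

# -I fetches only the response headers; a 200 status indicates public access
curl -I https://storage.googleapis.com/YOUR-BUCKET-NAME/ada.jpg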

Validate that your image is publicly available

Go to Navigation menu > Cloud Storage, then click on the name of your bucket.

You should see your image with the Public link box. Click the Copy URL and open the URL in a new browser tab.

7.8 Remove public access

To remove this permission, use the command:

gsutil acl ch -d AllUsers gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Updated ACL on gs://YOUR-BUCKET-NAME/ada.jpg

Verify that you've removed public access by clicking the Refresh button in the console. The checkmark will be removed.

To delete an object - the image file in your bucket

gsutil rm gs://YOUR-BUCKET-NAME/ada.jpg
Expected Output
Removing gs://YOUR-BUCKET-NAME/ada.jpg...

Refresh the console. The copy of the image file is no longer stored on Cloud Storage (though the copy you made in the image-folder/ folder still exists).

8. Cloud SQL for MySQL

8.1 Create a Cloud SQL instance

Cloud Console

  • From the Navigation menu > SQL.
  • Click Create Instance.
  • Choose MySQL database engine.
  • Enter Instance ID as myinstance.
  • In the password field click on the Generate link and the eye icon to see the password. Save the password.
  • Select the database version as MySQL 8.
  • For Choose a Cloud SQL edition, select Enterprise edition.
  • For Preset choose Development (4 vCPU, 16 GB RAM, 100 GB Storage, Single zone).

Warning

If you choose a preset larger than Development, your project will be flagged and your lab will be terminated.

  • Set Region as <REGION>.
  • Set the Multi zones (Highly available) > Primary Zone field as <ZONE>.
  • Click CREATE INSTANCE.

It might take a few minutes for the instance to be created. Once it is, you will see a green checkmark next to the instance name.

  • Click on the Cloud SQL instance. The SQL Overview page opens.
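
For reference, a roughly equivalent instance can be created from Cloud Shell; a hedged sketch approximating the Development preset above (flag values are illustrative):

# values approximate the Development preset; <PASSWORD> is a placeholder
gcloud sql instances create myinstance \
  --database-version=MYSQL_8_0 \
  --edition=enterprise \
  --cpu=4 \
  --memory=16GB \
  --storage-size=100GB \
  --region=$REGION \
  --root-password=<PASSWORD>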

8.2 Connect to your instance using the mysql client in Cloud Shell

To connect to your Cloud SQL

gcloud sql connect <myinstance> --user=root

Enter your root password when prompted. Note: The cursor will not move.

Press the Enter key when you're done typing.

You should now see the mysql prompt.

8.3 Create a database and upload data

To create a SQL database called guestbook on your Cloud SQL instance

CREATE DATABASE guestbook;

Insert the following sample data into the guestbook database

USE guestbook;
CREATE TABLE entries (guestName VARCHAR(255), content VARCHAR(255),
    entryID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(entryID));
INSERT INTO entries (guestName, content) values ("first guest", "I got here!");
INSERT INTO entries (guestName, content) values ("second guest", "Me too!");

Now retrieve the data

SELECT * FROM entries;
Expected Output
  +--------------+-------------------+---------+
  | guestName    | content           | entryID |
  +--------------+-------------------+---------+
  | first guest  | I got here!       |       1 |
  | second guest | Me too!           |       2 |
  +--------------+-------------------+---------+
  2 rows in set (0.00 sec)
  mysql>

9. Cloud Endpoints

9.1 Getting the sample code

To get the sample API and scripts

gsutil cp gs://spls/gsp164/endpoints-quickstart.zip .
unzip endpoints-quickstart.zip

To change to the directory that contains the sample code

cd endpoints-quickstart

9.2 Deploying the Endpoints configuration

To publish a REST API to Endpoints, an OpenAPI configuration file that describes the API is required.

To deploy the Endpoints configuration

In the endpoints-quickstart directory.

cd scripts

Run the following script

./deploy_api.sh

Cloud Endpoints uses the host field in the OpenAPI configuration file to identify the service.

When you prepare an OpenAPI configuration file for your own service, you will need to set the ID of your Cloud project as part of the name configured in the host field.

The script then deploys the OpenAPI configuration to Service Management using the command: gcloud endpoints services deploy openapi.yaml

As it is creating and configuring the service, Service Management outputs some information to the console. You can safely ignore the warnings about the paths in openapi.yaml not requiring an API key.

Expected Output

Service Configuration [2017-02-13-r2] uploaded for service [airports-api.endpoints.example-project.cloud.goog]

9.3 Deploying the API backend

To deploy the API backend, make sure you are in the endpoints-quickstart/scripts directory.

To deploy the API backend run the script

./deploy_app.sh ../app/app_template.yaml 

The script runs the following command to create an App Engine flexible environment in the <REGION> region: gcloud app create --region="$REGION"

It takes a couple minutes to create the App Engine flexible backend.

Note

If you get an ERROR: NOT_FOUND: Unable to retrieve P4SA: from GAIA message, rerun the deploy_app.sh script.

You'll see the following displayed in Cloud Shell after the App Engine is created:

Success! The app is now created. Please use `gcloud app deploy` to deploy your first app.

The script goes on to run the gcloud app deploy command to deploy the sample API to App Engine.

Expected Output

Deploying ../app/app_template.yaml...You are about to deploy the following services:

It takes several minutes for the API to be deployed to App Engine.

You'll see a line like the following when the API is successfully deployed to App Engine:

Expected Output

Deployed service [default] to [https://example-project.appspot.com]

9.4 Sending requests to the API

To send requests to the sample API, run the following script

./query_api.sh

The script echoes the curl command that it uses to send a request to the API, and then displays the result. You'll see something like the following in Cloud Shell:

curl "https://example-project.appspot.com/airportName?iataCode=SFO"
San Francisco International Airport

The API expects one query parameter, iataCode, that is set to a valid IATA airport code such as SEA or JFK.

To test, run this example in Cloud Shell

./query_api.sh JFK

9.5 Tracking API activity

With APIs deployed with Cloud Endpoints, you can monitor critical operations metrics in the Cloud Console and gain insight into your users and usage with Cloud Logging.

Run this traffic generation script in Cloud Shell to populate the graphs and logs

./generate_traffic.sh

Note

This script generates requests in a loop and automatically times out in 5 minutes. To end the script sooner, enter CTRL+C in Cloud Shell.

To look at the activity graphs for your service

In the Console:

Navigation menu > Endpoints > Services and click Airport Codes service.

It may take a few moments for the requests to be reflected in the graphs.

You can do this while you wait for data to be displayed

  • Permissions Panel:

The Permissions panel allows you to control who has access to your API and the level of access.

  • Deployment history tab:

This tab displays a history of your API deployments, including the deployment time and who deployed the change.

  • Overview tab:

Here you'll see the traffic coming in. After the traffic generation script has been running for a minute, scroll down to see the three lines on the Total latency graph (50th, 95th, and 99th percentiles). This data provides a quick estimate of response times.

To view the logs

At the bottom of the Endpoints graphs, under Method, click the View logs link for GET /airportName. The Logs Viewer page displays the request logs for the API.

Enter CTRL+C in Cloud Shell to stop the script.

9.6 Add a quota to the API

Note

This is a beta release of Quotas. This feature might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.

Cloud Endpoints lets you set quotas so you can control the rate at which applications can call your API. Quotas can be used to protect your API from excessive usage by a single client.

Deploy the Endpoints configuration that has a quota

./deploy_api.sh ../openapi_with_ratelimit.yaml

Redeploy your app to use the new Endpoints configuration

This may take a few minutes

./deploy_app.sh ../app/app_template.yaml 
  • In the Console, navigate to Navigation menu > APIs & Services > Credentials.
  • Click Create credentials and choose API key. A new API key is displayed on the screen.
  • Click the Copy to clipboard icon to copy it to your clipboard.

In Cloud Shell, type the following.

Replace YOUR-API-KEY with the API key you just created

export API_KEY=YOUR-API-KEY

Send your API a request using the API key variable you just created

./query_api_with_key.sh $API_KEY

Expected Output

curl -H 'x-api-key: AIzeSyDbdQdaSdhPMdiAuddd_FALbY7JevoMzAB' "https://example-project.appspot.com/airportName?iataCode=SFO"
San Francisco International Airport

The API now has a limit of 5 requests per second.

To send traffic to the API and trigger the quota limit

./generate_traffic_with_key.sh $API_KEY

After running the script for 5-10 seconds, enter CTRL+C in Cloud Shell to stop the script.

Send another authenticated request to the API

./query_api_with_key.sh $API_KEY
Expected Output
{
   "code": 8,
   "message": "Insufficient tokens for quota 'airport_requests' and limit 'limit-on-airport-requests' of service 'example-project.appspot.com' for consumer 'api_key:AIzeSyDbdQdaSdhPMdiAuddd_FALbY7JevoMzAB'.",
   "details": [
    {
     "@type": "type.googleapis.com/google.rpc.DebugInfo",
     "stackEntries": [],
     "detail": "internal"
    }
   ]
  }

If you get a different response, try running the generate_traffic_with_key.sh script again and retry.
