
packtpublishing / dp-203-azure-data-engineer-associate-certification-guide


Azure Data Engineer Associate Certification Guide, published by Packt

License: MIT License

PowerShell 8.89% Jupyter Notebook 63.90% TSQL 25.56% Python 1.64%

dp-203-azure-data-engineer-associate-certification-guide's Introduction

Azure Data Engineer Associate Certification Guide


This is the code repository for Azure Data Engineer Associate Certification Guide, published by Packt.

A hands-on reference guide to developing your data engineering skills and preparing for the DP-203 exam

What is this book about?

The DP-203: Azure Data Engineer Associate Certification Guide offers complete coverage of the DP-203 certification requirements so that you can take the exam with confidence. Going beyond the requirements for the exam, this book also provides you with additional knowledge to enable you to succeed in your real-life Azure data engineering projects.

This book covers the following exciting features:

  • Gain intermediate-level knowledge of the Azure data infrastructure
  • Design and implement data lake solutions with batch and stream pipelines
  • Identify the partition strategies available in Azure storage technologies
  • Implement different table geometries in Azure Synapse Analytics
  • Use the transformations available in T-SQL, Spark, and Azure Data Factory
  • Use Azure Databricks or Synapse Spark to process data using Notebooks
  • Design security using RBAC, ACL, encryption, data masking, and more
  • Monitor and optimize data pipelines with debugging tips

If you feel this book is for you, get your copy today!

https://www.packtpub.com/

Instructions and Navigations

All of the code is organized into folders.

The code will look like the following:

SELECT trip.[tripId], customer.[name]
FROM dbo.FactTrips AS trip
JOIN dbo.DimCustomer AS customer
  ON trip.[customerId] = customer.[customerId]
WHERE trip.[endLocation] = 'San Jose';

Following is what you need for this book: This book is for data engineers who want to take the DP-203: Azure Data Engineer Associate exam and are looking to gain in-depth knowledge of the Azure cloud stack. The book will also help engineers and product managers who are new to Azure, or who are interviewing with companies working on Azure technologies, get hands-on experience with Azure data technologies. A basic understanding of cloud technologies, extract, transform, and load (ETL), and databases will help you get the most out of this book.

With the following software and hardware list, you can run all of the code files present in the book (Chapters 1-15).

Software and Hardware List

Chapter | Software required            | OS required
1-15    | Azure account (free or paid) | Windows, Mac OS X, and Linux (Any)
1-15    | Azure CLI                    | Windows, Mac OS X, and Linux (Any)

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.

Related products

Get to Know the Authors

Newton Alex leads several Azure Data Analytics teams in Microsoft, India. His team contributes to technologies including Azure Synapse, Azure Databricks, Azure HDInsight, and many open source technologies, including Apache YARN, Apache Spark, and Apache Hive. He started using Hadoop while at Yahoo, USA, where he helped build the first batch processing pipelines for Yahoo’s ad serving team. After Yahoo, he became the leader of the big data team at Pivotal Inc., USA, where he was responsible for the entire open source stack of Pivotal Inc. He later moved to Microsoft and started the Azure Data team in India. He has worked with several Fortune 500 companies to help build their data systems on Azure.

Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781801816069

dp-203-azure-data-engineer-associate-certification-guide's People

Contributors

manikandankurup-packt, newtonalex, packt-itservice, packtutkarshr, utkarsha-packt


dp-203-azure-data-engineer-associate-certification-guide's Issues

Missing code/files for DimDriver and CSV in SCD Example

I'm trying to implement your Type 2 SCD example starting on page 136 of the book and had to use my own production data for testing. That is fine, but the data flow is failing, and without more explicit steps or example files it is proving very difficult to debug.

For example, when debugging the data flow in a pipeline, I get the following error:
"message":"Job failed due to reason: at Source 'MaxSurrogateID': Invalid column name 'surrogateID'..

The way I understood it, MaxSurrogateID should be set up as its own Source, with the Dataset being the Sink table in the dedicated SQL pool. Then you import the projection of the custom query so that only the MaxSurr column shows up.

[Attached screenshots: bl_SourceMaxSurr, bl_ColumnName, bl_DataFlow]
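For reference, here is a minimal sketch of the kind of custom source query described above. The table name dbo.DimDriver and the column name surrogateID are assumptions based on this issue, not the book's exact code, and they must match the actual sink table for the projection to import correctly:

-- Hypothetical custom query for the MaxSurrogateID source in the data flow.
-- dbo.DimDriver and surrogateID are assumed names; the "Invalid column name"
-- error above usually means the column in the real table is named differently.
SELECT ISNULL(MAX([surrogateID]), 0) AS MaxSurr
FROM dbo.DimDriver;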

Let me know if there is something I'm missing from the textbook. Thanks

Chapter 9 - Polybase - OPENROWSET is not supported in Synapse Dedicated SQL Pool

When attempting to follow along on pages 237-238, the PolyBase example does not work, as OPENROWSET is not supported in dedicated SQL pools. However, the steps outlined say to use a dedicated SQL pool instance.

This chapter of the book reads like a step-by-step guide until the PolyBase section, at which point it feels like, "then you maybe do some stuff and things will happen, check the Microsoft docs for more info".

How should we be using PolyBase to ingest the data into an analytics data store like Synapse Dedicated SQL Pool?

P.S. There are also issues with the example code, in that it fails to follow the data presented previously in the chapter. For instance, the CREATE EXTERNAL TABLE statement lists seven columns in the SELECT statement; however, if I'm understanding correctly, the EXTERNAL DATA SOURCE is a Parquet file with only two columns, City and Fare.
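For context, below is a minimal sketch of the PolyBase pattern that dedicated SQL pools do support: external objects plus CTAS, with no OPENROWSET. All object names, the storage location, and the two-column (City, Fare) schema are assumptions based on this issue rather than the book's exact code:

-- Sketch only: PolyBase external objects plus CTAS in a dedicated SQL pool.
-- Replace the placeholder storage account and paths with real values.
CREATE EXTERNAL DATA SOURCE TripDataSource
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://data@<storageaccount>.dfs.core.windows.net'
);

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

-- External table over the Parquet files; columns must match the file schema.
CREATE EXTERNAL TABLE dbo.TripsExternal (
    [City] VARCHAR(50),
    [Fare] FLOAT
)
WITH (
    LOCATION = '/trips/',
    DATA_SOURCE = TripDataSource,
    FILE_FORMAT = ParquetFormat
);

-- Load into the analytics store with CTAS instead of OPENROWSET.
CREATE TABLE dbo.Trips
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT [City], [Fare] FROM dbo.TripsExternal;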

Chapter 5, page 121

I am trying to execute the SQL below, but I encounter an issue. Please help. Thanks.

[Attached screenshot showing the SQL statement in question]

Error running Spark Notebooks in Synapse

Hi,

I purchased the DP-203 book and have provisioned resources as outlined in the chapters. However, when it comes to the Spark code samples, I am getting errors in Synapse when trying to run the second block of code (see below) using a new Notebook.

Note: I am not using Azure Databricks, so I'm not running the storage setup block.

When I try to run this cell:

from pyspark.sql.functions import *

columnNames = ["tripId","driverId","customerId","cabId","tripDate","startLocation","endLocation"]
tripData = [
  ('100', '200', '300', '400', '20220101', 'New York', 'New Jersey'),
  ('101', '201', '301', '401', '20220102', 'Tempe', 'Phoenix'),
  ('102', '202', '302', '402', '20220103', 'San Jose', 'San Francisco'),
  ('103', '203', '303', '403', '20220102', 'New York', 'Boston'),
  ('104', '204', '304', '404', '20220103', 'New York', 'Washington'),
  ('105', '205', '305', '405', '20220201', 'Miami', 'Fort Lauderdale'),
  ('106', '206', '306', '406', '20220202', 'Seattle', 'Redmond'),
  ('107', '207', '307', '407', '20220203', 'Los Angeles', 'San Diego'),
  ('108', '208', '308', '408', '20220301', 'Phoenix', 'Las Vegas'),
  ('109', '209', '309', '409', '20220302', 'Washington', 'Baltimore'),
  ('110', '210', '310', '410', '20220303', 'Dallas', 'Austin'),
  ('111', '211', '311', '411', '20220303', 'New York', 'New Jersey'),
  ('112', '212', '312', '412', '20220304', 'New York', 'Boston'),
  ('113', '212', '312', '412', '20220401', 'San Jose', 'San Ramon'),
  ('114', '212', '312', '412', '20220404', 'San Jose', 'Oakland'),
  ('115', '212', '312', '412', '20220404', 'Tempe', 'Scottsdale'),
  ('116', '212', '312', '412', '20220405', 'Washington', 'Atlanta'),
  ('117', '212', '312', '412', '20220405', 'Seattle', 'Portland'),
  ('118', '212', '312', '412', '20220405', 'Miami', 'Tampa')
]
df = spark.createDataFrame(data=tripData, schema=columnNames)

# Split the data by trip date and write it to the store as Parquet files
dftripDate = df.withColumn("tripDate", to_timestamp(col("tripDate"), 'yyyyMMdd')) \
           .withColumn("year", date_format(col("tripDate"), "yyyy")) \
           .withColumn("month", date_format(col("tripDate"), "MM")) \
           .withColumn("day", date_format(col("tripDate"), "dd"))

dftripDate.show(truncate=False)

dftripDate.write.partitionBy("year", "month", "day").mode("overwrite").parquet(commonPath + "/partition/")

I get this error message InvalidHttpRequestToLivy: Your Spark job requested 24 vcores. However, the workspace has a 12 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400. Trace ID: 92fd0722-d642-4b71-8e60-79d2c4a5e3a0.

However, when I increase the scaling (vcore limit), the error still persists, albeit with a slightly different message. InvalidHttpRequestToLivy: Your Spark job requested 192 vcores. However, the workspace has a 12 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400. Trace ID: d45d2403-7e9a-4f26-bd06-d620013a1993.

This seems insane with regard to the size of this sample data I'm trying to create. I'm new to Spark but this seemed straightforward. Is there something I'm missing?

Thanks for your help!
