azure-data-factory-cookbook's Introduction

Azure Data Factory Cookbook

This is the code repository for Azure Data Factory Cookbook, published by Packt.

Build and manage ETL and ELT pipelines with Microsoft Azure's serverless data integration service

What is this book about?

Azure Data Factory (ADF) is a modern data integration tool available on Microsoft Azure. This Azure Data Factory Cookbook helps you get up and running by showing you how to create and execute your first job in ADF. You’ll learn how to branch and chain activities, create custom activities, and schedule pipelines. This book will help you to discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Storage Gen2, which are frequently used for big data analytics. With practical recipes, you’ll learn how to actively engage with analytical tools from Azure Data Services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights.

As you advance, you’ll be able to integrate the most commonly used Azure Services into ADF and understand how Azure services can be useful in designing ETL pipelines. The book will take you through the common errors that you may encounter while working with ADF and show you how to use the Azure portal to monitor pipelines. You’ll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF.

By the end of this book, you’ll be able to use ADF as the main ETL and orchestration tool for your data warehouse or data platform projects.

This book covers the following exciting features:

  • Create an orchestration and transformation job in ADF
  • Develop, execute, and monitor data flows using Azure Synapse
  • Create big data pipelines using Azure Data Lake and ADF
  • Build a machine learning app with Apache Spark and ADF
  • Migrate on-premises SSIS jobs to ADF
  • Integrate ADF with commonly used Azure services such as Azure ML, Azure Logic Apps, and Azure Functions
  • Run big data compute jobs within HDInsight and Azure Databricks
  • Copy data from AWS S3 and Google Cloud Storage to Azure Storage using ADF's built-in connectors

If you feel this book is for you, get your copy today!

https://www.packtpub.com/

Instructions and Navigations

All of the code is organized into folders.

The code will look like the following:

# Score the ALS model on the held-out test set and report mean squared error
from pyspark.ml.evaluation import RegressionEvaluator

regEval = RegressionEvaluator(predictionCol="predictions", labelCol="rating", metricName="mse")
predictedTestDF = alsModel.transform(testDF)  # adds a "predictions" column
testMse = regEval.evaluate(predictedTestDF)

print('MSE on the test set is {0}'.format(testMse))
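
For context, here is a minimal sketch of how an alsModel and testDF like those above might be produced. It is illustrative only: ratingsDF and its column names (userId, movieId, rating) are hypothetical and assume a ratings DataFrame has already been loaded.

from pyspark.ml.recommendation import ALS

# Hypothetical ratings DataFrame with columns: userId, movieId, rating
trainDF, testDF = ratingsDF.randomSplit([0.8, 0.2], seed=42)

als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          predictionCol="predictions",   # matches the evaluator above
          coldStartStrategy="drop")      # drop NaN predictions for unseen users/items
alsModel = als.fit(trainDF)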

Following is what you need for this book: This book is for ETL developers, data warehouse and ETL architects, software professionals, and anyone who wants to learn about the common and not-so-common challenges faced while developing traditional and hybrid ETL solutions using Microsoft's Azure Data Factory. You’ll also find this book useful if you are looking for recipes to improve or enhance your existing ETL pipelines. Basic knowledge of data warehousing is expected. You'll need an Azure subscription to follow all the recipes mentioned in the book. If you're using a paid subscription, make sure to pause or delete the services after using them to avoid high usage costs.

With the following software and hardware list, you can run all of the code files present in the book (Chapters 1 to 10).

Software and Hardware List

Chapter: 1 to 10
Software required: An Azure subscription, SQL Server Management Studio, AWS and Google Cloud subscriptions, Visual Studio 2019
OS required: Windows, Mac OS X, and Linux (any)

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.

Get to Know the Authors

Dmitry Anoshin is an analytics and data engineering leader with over 10 years of experience in business intelligence, data warehousing, data integration, big data, cloud, and ML across North America and Europe. He has led data engineering initiatives on a petabyte-scale data platform built with cloud and big data technologies to support machine learning experiments, data science models, business intelligence reporting, and data exchange with internal and external partners. With expertise in data modeling, Dmitry also has a background in handling privacy-compliant and security-critical datasets. He is an active speaker at data conferences and helps people adopt cloud analytics.

Dmitry Foshin is a business intelligence team leader whose main goals are delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWHs and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects, both on-premises and in the cloud, that helped achieve corporate goals in international FMCG companies, banking, and manufacturing industries.

Roman Storchak has a PhD and is a chief data officer whose main interest lies in building data-driven cultures by making analytics easy. He has led teams that have built ETL-heavy products in AdTech and retail, and often uses the Azure stack, Power BI, and Data Factory.

Xenia Ireton is a software engineer at Microsoft and has extensive knowledge in the field of data engineering, big data pipelines, data warehousing, and systems architecture.

Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781800565296

azure-data-factory-cookbook's People

Contributors

ayaanhoda, cmdrlucienn, dfoshin, dim-ryd, manikandankurup-packt, packt-itservice, packtutkarshr, romanstorchak, xeniah

azure-data-factory-cookbook's Issues

resource_client.resource_groups.create_or_update(dict_storage["rg_name"], rg_params)

Code: AuthorizationFailed
Message: The client '500aefe7-2b09-4721-*******' with object id '500aefe7-2b09-4721-*******' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/write' over scope '/subscriptions/6b161a49-4d68-*******-a1ad-*******/resourcegroups/DataCheck2' or the scope is invalid. If access was recently granted, please refresh your credentials.
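
This error indicates that the service principal used by the SDK lacks write permission on the target subscription or resource group. One way to resolve it is to grant the principal the built-in Contributor role. Below is a hedged sketch using the azure-mgmt-authorization package; all placeholder values are hypothetical, the caller must itself have rights to create role assignments, and the exact parameter shape varies between package versions.

import uuid
from azure.identity import ClientSecretCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholders: substitute your own tenant, app registration, and subscription.
subscription_id = "<subscription-id>"
principal_object_id = "<object-id-from-the-error-message>"
contributor_role_id = "b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor role

credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
auth_client = AuthorizationManagementClient(credential, subscription_id)

scope = f"/subscriptions/{subscription_id}"
auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # the role assignment name must be a new GUID
    {
        "role_definition_id": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{contributor_role_id}",
        "principal_id": principal_object_id,
    },
)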

Page 16, Bullet 12: AttributeError: 'CredentialWrapper' object has no attribute 'get_token'

Regarding page 16 of the book, bullet 12: I receive the error below, which appears to be an issue with the Azure API. Was there a change to the API? Is there a workaround, and should the book be updated? I have double-checked my credentials.


AttributeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_21800/4217901441.py in
1 adf_client = DataFactoryManagementClient(credentials, subscription_id)
2 df_resource = Factory(location='eastus')
----> 3 df = adf_client.factories.create_or_update(rg_name, df_name, df_resource)
4 print_item(df)
5 while df.provisioning_state != 'Succeeded':

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\mgmt\datafactory\operations\_factories_operations.py in create_or_update(self, resource_group_name, factory_name, factory, if_match, **kwargs)
305 body_content_kwargs['content'] = body_content
306 request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
--> 307 pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
308 response = pipeline_response.http_response
309

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in run(self, request, **kwargs)
209 else _TransportRunner(self._transport)
210 )
--> 211 return first_node.send(pipeline_request) # type: ignore

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in send(self, request)
69 _await_result(self._policy.on_request, request)
70 try:
---> 71 response = self.next.send(request)
72 except Exception: # pylint: disable=broad-except
73 if not _await_result(self._policy.on_exception, request):

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in send(self, request)
69 _await_result(self._policy.on_request, request)
70 try:
---> 71 response = self.next.send(request)
72 except Exception: # pylint: disable=broad-except
73 if not _await_result(self._policy.on_exception, request):

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in send(self, request)
69 _await_result(self._policy.on_request, request)
70 try:
---> 71 response = self.next.send(request)
72 except Exception: # pylint: disable=broad-except
73 if not _await_result(self._policy.on_exception, request):

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in send(self, request)
69 _await_result(self._policy.on_request, request)
70 try:
---> 71 response = self.next.send(request)
72 except Exception: # pylint: disable=broad-except
73 if not _await_result(self._policy.on_exception, request):

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\_base.py in send(self, request)
69 _await_result(self._policy.on_request, request)
70 try:
---> 71 response = self.next.send(request)
72 except Exception: # pylint: disable=broad-except
73 if not _await_result(self._policy.on_exception, request):

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\mgmt\core\policies\_base.py in send(self, request)
45 # type: (PipelineRequest[HTTPRequestType], Any) -> PipelineResponse[HTTPRequestType, HTTPResponseType]
46 http_request = request.http_request
---> 47 response = self.next.send(request)
48 if response.http_response.status_code == 409:
49 rp_name = self._check_rp_not_registered_err(response)

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\policies\_redirect.py in send(self, request)
156 redirect_settings = self.configure_redirects(request.context.options)
157 while retryable:
--> 158 response = self.next.send(request)
159 redirect_location = self.get_redirect_location(response)
160 if redirect_location and redirect_settings['allow']:

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\policies\_retry.py in send(self, request)
443 start_time = time.time()
444 self._configure_timeout(request, absolute_timeout, is_response_error)
--> 445 response = self.next.send(request)
446 if self.is_retry(retry_settings, response):
447 retry_active = self.increment(retry_settings, response=response)

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\policies\_authentication.py in send(self, request)
115 :type request: ~azure.core.pipeline.PipelineRequest
116 """
--> 117 self.on_request(request)
118 try:
119 response = self.next.send(request)

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\core\pipeline\policies\_authentication.py in on_request(self, request)
92
93 if self._token is None or self._need_new_token:
---> 94 self._token = self._credential.get_token(*self._scopes)
95 self._update_headers(request.http_request.headers, self._token.token)
96

AttributeError: 'CredentialWrapper' object has no attribute 'get_token'
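
The book's sample code was written against the older (Track 1) Azure SDK, where a CredentialWrapper was used to adapt ServicePrincipalCredentials. Newer (Track 2) releases of azure-mgmt-datafactory authenticate through azure.identity credentials, which implement get_token(). A likely fix, sketched below with placeholder values, is to pass a ClientSecretCredential instead of the wrapper.

from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

# Placeholders: substitute your own tenant, app registration, and subscription.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

df_resource = Factory(location='eastus')
df = adf_client.factories.create_or_update("<rg-name>", "<factory-name>", df_resource)
print(df.provisioning_state)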
