Download Designing an Azure Data Solution.DP-201.Train4Sure.2019-05-31.25q.vcex

Vendor: Microsoft
Exam Code: DP-201
Exam Name: Designing an Azure Data Solution
Date: May 31, 2019
File Size: 1 MB

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.

Purchase
Coupon: EXAM_HUB

Discount: 20%

Demo Questions

Question 1
You need to design the vehicle images storage solution. 
What should you recommend?
  1. Azure Media Services
  2. Azure Premium Storage account
  3. Azure Redis Cache
  4. Azure Cosmos DB
Correct answer: B
Explanation:
Premium Storage stores data on the latest technology Solid State Drives (SSDs) whereas Standard Storage stores data on Hard Disk Drives (HDDs). Premium Storage is designed for Azure Virtual Machine workloads which require consistent high IO performance and low latency in order to host IO intensive workloads like OLTP, Big Data, and Data Warehousing on platforms like SQL Server, MongoDB, Cassandra, and others. With Premium Storage, more customers will be able to lift-and-shift demanding enterprise applications to the cloud. 
Scenario: Traffic sensors will occasionally capture an image of a vehicle for debugging purposes.
You must optimize performance of saving/storing vehicle images. 
The impact of vehicle images on sensor data throughput must be minimized. 
References:
https://azure.microsoft.com/es-es/blog/introducing-premium-storage-high-performance-storage-for-azure-virtual-machine-workloads/
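For illustration only (not part of the original question): a minimal Python sketch of saving a captured vehicle image into a container in a premium storage account, assuming the azure-storage-blob package and placeholder account, container, and file names.

  from azure.storage.blob import BlobServiceClient

  # Placeholder connection string for a premium storage account.
  conn_str = "DefaultEndpointsProtocol=https;AccountName=<premium-account>;AccountKey=<key>"
  service = BlobServiceClient.from_connection_string(conn_str)
  container = service.get_container_client("vehicle-images")

  # Upload a captured image so slow writes do not hold up sensor-data throughput.
  with open("vehicle-123.jpg", "rb") as image:
      container.upload_blob(name="vehicle-123.jpg", data=image, overwrite=True)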
Question 2
You need to design a sharding strategy for the Planning Assistance database. 
What should you recommend?
  1. a list mapping shard map on the binary representation of the License Plate column
  2. a range mapping shard map on the binary representation of the speed column
  3. a list mapping shard map on the location column
  4. a range mapping shard map on the time column
Correct answer: A
Explanation:
Data used for Planning Assistance must be stored in a sharded Azure SQL Database. 
A shard typically contains items that fall within a specified range determined by one or more attributes of the data. These attributes form the shard key (sometimes referred to as the partition key). The shard key should be static. It shouldn't be based on data that might change. 
References:
https://docs.microsoft.com/en-us/azure/architecture/patterns/sharding
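For illustration only: a conceptual Python sketch of list-mapped sharding keyed on the binary representation of the License Plate column. This is not the Azure SQL Elastic Database client library (which is .NET); the shard names and the assignment rule are assumptions used to show how a static, list-mapped key routes rows to shards.

  import zlib

  shards = ["planning-shard-0", "planning-shard-1", "planning-shard-2", "planning-shard-3"]
  shard_map = {}   # list mapping: each distinct key value gets an explicit shard entry

  def shard_for(license_plate: str) -> str:
      key = license_plate.encode("utf-8")            # binary representation of the key
      if key not in shard_map:                       # assign once; the key is static
          shard_map[key] = shards[zlib.crc32(key) % len(shards)]
      return shard_map[key]

  print(shard_for("ABC-1234"))   # e.g. 'planning-shard-2'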
Question 3
You need to recommend an Azure SQL Database pricing tier for Planning Assistance. 
Which pricing tier should you recommend?
  1. Business critical Azure SQL Database single database
  2. General purpose Azure SQL Database Managed Instance
  3. Business critical Azure SQL Database Managed Instance
  4. General purpose Azure SQL Database single database
Correct answer: B
Explanation:
Azure resource costs must be minimized where possible. 
Data used for Planning Assistance must be stored in a sharded Azure SQL Database. 
The SLA for Planning Assistance is 70 percent, and multiday outages are permitted.
Question 4
You need to recommend a solution for storing the image tagging data. 
What should you recommend?
  1. Azure File Storage
  2. Azure Cosmos DB
  3. Azure Blob Storage
  4. Azure SQL Database
  5. Azure SQL Data Warehouse
Correct answer: C
Explanation:
Image data must be stored in a single data store at minimum cost. 
Note: Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data.
Blob storage is designed for:
  • Serving images or documents directly to a browser. 
  • Storing files for distributed access. 
  • Streaming video and audio. 
  • Writing to log files. 
  • Storing data for backup and restore, disaster recovery, and archiving. 
  • Storing data for analysis by an on-premises or Azure-hosted service. 
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
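For illustration only: a minimal sketch of writing image tagging data as a JSON blob, assuming the azure-storage-blob Python package and placeholder names; the Cool access tier is used here to keep cost low for data that is read infrequently.

  import json
  from azure.storage.blob import BlobClient, StandardBlobTier

  # Placeholder connection string, container, and blob names.
  blob = BlobClient.from_connection_string(
      conn_str="<storage-connection-string>",
      container_name="image-tags",
      blob_name="vehicle-123.json",
  )

  tags = {"image": "vehicle-123.jpg", "labels": ["car", "sedan"], "confidence": 0.97}
  blob.upload_blob(
      json.dumps(tags),
      overwrite=True,
      standard_blob_tier=StandardBlobTier.Cool,   # lower-cost tier for infrequent reads
  )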
Question 5
You need to design the solution for analyzing customer data. 
What should you recommend?
  1. Azure Databricks
  2. Azure Data Lake Storage
  3. Azure SQL Data Warehouse
  4. Azure Cognitive Services
  5. Azure Batch
Correct answer: A
Explanation:
Customer data must be analyzed using managed Spark clusters. 
You create Spark clusters through Azure Databricks. 
References:
https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal
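For illustration only: a short PySpark sketch of the kind of analysis a managed Databricks Spark cluster would run; the mount path and column name are placeholder assumptions.

  from pyspark.sql import SparkSession

  # On Azure Databricks a SparkSession named `spark` already exists; getOrCreate reuses it.
  spark = SparkSession.builder.getOrCreate()

  customers = spark.read.parquet("dbfs:/mnt/customer-data/")   # placeholder DBFS mount path

  # Example aggregation: customer counts per segment.
  customers.groupBy("segment").count().show()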
Question 6
You need to design a solution to meet the SQL Server storage requirements for CONT_SQL3. 
Which type of disk should you recommend?
  1. Standard SSD Managed Disk
  2. Premium SSD Managed Disk
  3. Ultra SSD Managed Disk
Correct answer: C
Explanation:
CONT_SQL3 requires an initial scale of 35000 IOPS. 
Ultra SSD Managed Disk offerings: the referenced documentation includes a table comparing ultra SSD (preview), premium SSD, standard SSD, and standard HDD managed disks to help you decide what to use. 
References:
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-types
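For illustration only: a sketch of provisioning an ultra disk sized for 35,000 IOPS with the azure-mgmt-compute Python SDK. The resource names are placeholders, and the attribute names follow recent SDK releases, so treat the exact parameters as assumptions.

  from azure.identity import DefaultAzureCredential
  from azure.mgmt.compute import ComputeManagementClient

  client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

  poller = client.disks.begin_create_or_update(
      "cont-sql-rg",                       # placeholder resource group
      "cont-sql3-data",                    # placeholder disk name
      {
          "location": "eastus",
          "sku": {"name": "UltraSSD_LRS"},
          "creation_data": {"create_option": "Empty"},
          "disk_size_gb": 1024,
          "disk_iops_read_write": 35000,   # the initial scale CONT_SQL3 requires
          "disk_m_bps_read_write": 2000,
      },
  )
  disk = poller.result()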
Question 7
You need to recommend an Azure SQL Database service tier. 
What should you recommend?
  1. Business Critical
  2. General Purpose
  3. Premium
  4. Standard
  5. Basic
Correct answer: C
Explanation:
The data engineers must set the SQL Data Warehouse compute resources to consume 300 DWUs. 
Note: There are three architectural models that are used in Azure SQL Database:
  • General Purpose/Standard 
  • Business Critical/Premium 
  • Hyperscale 
Incorrect Answers:
A: The Business Critical service tier is designed for applications that require low-latency responses from the underlying SSD storage (1-2 ms on average), fast recovery if the underlying infrastructure fails, or the ability to off-load reports, analytics, and read-only queries to the free-of-charge readable secondary replica of the primary database.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tier-business-critical
Question 8
You need to recommend the appropriate storage and processing solution.
What should you recommend?
  1. Enable auto-shrink on the database.
  2. Flush the blob cache using Windows PowerShell.
  3. Enable Apache Spark RDD (Resilient Distributed Dataset) caching.
  4. Enable Databricks IO (DBIO) caching.
  5. Configure the reading speed using Azure Data Studio.
Correct answer: D
Explanation:
Scenario: You must be able to use a file system view of data stored in a blob. You must build an architecture that will allow Contoso to use the DBFS filesystem layer over a blob store.
Databricks File System (DBFS) is a distributed file system installed on Azure Databricks clusters. Files in DBFS persist to Azure Blob storage, so you won’t lose data even after you terminate a cluster. 
The Databricks Delta cache, previously named Databricks IO (DBIO) caching, accelerates data reads by creating copies of remote files in nodes’ local storage using a fast intermediate data format. The data is cached automatically whenever a file has to be fetched from a remote location. Successive reads of the same data are then performed locally, which results in significantly improved reading speed. 
References:
https://docs.databricks.com/delta/delta-cache.html#delta-cache
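For illustration only: the Delta (DBIO) cache is controlled by a Spark configuration setting; a minimal PySpark sketch with a placeholder DBFS path.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Enable the Databricks Delta (DBIO) cache for this session.
  spark.conf.set("spark.databricks.io.cache.enabled", "true")

  # The first read pulls the files from Blob storage through DBFS and populates the
  # workers' local SSD cache; repeated reads of the same files are then served locally.
  df = spark.read.parquet("dbfs:/mnt/contoso-data/")   # placeholder path
  df.count()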
Question 9
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an HDInsight/Hadoop cluster solution that uses Azure Data Lake Gen1 Storage. 
The solution requires POSIX permissions and enables diagnostics logging for auditing. 
You need to recommend solutions that optimize storage. 
Proposed Solution: Ensure that files stored are larger than 250MB.
Does the solution meet the goal?
  1. Yes
  2. No
Correct answer: A
Explanation:
Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones. 
Note: POSIX permissions and auditing in Data Lake Storage Gen1 comes with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files versus writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small file sizes can have multiple benefits, such as:
  • Lowering the authentication checks across multiple files 
  • Reduced open file connections 
  • Faster copying/replication 
  • Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions 
References:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-best-practices
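For illustration only: a quick arithmetic sketch of the 256 MB guidance, computing how many output files a given data volume should be batched into; the input size is a placeholder assumption.

  import math

  TARGET_FILE_BYTES = 256 * 1024 * 1024      # ~256 MB per file, per the note above

  def output_file_count(total_bytes: int) -> int:
      # Round up so every file stays near the target size; always write at least one file.
      return max(1, math.ceil(total_bytes / TARGET_FILE_BYTES))

  # Example: 50 GB landed as thousands of small files would be rewritten as ~200 files.
  print(output_file_count(50 * 1024**3))     # -> 200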
Question 10
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. 
You are designing an HDInsight/Hadoop cluster solution that uses Azure Data Lake Gen1 Storage. 
The solution requires POSIX permissions and enables diagnostics logging for auditing. 
You need to recommend solutions that optimize storage. 
Proposed Solution: Implement compaction jobs to combine small files into larger files.
Does the solution meet the goal?
  1. Yes
  2. No
Correct answer: A
Explanation:
Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones. 
Note: POSIX permissions and auditing in Data Lake Storage Gen1 comes with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files versus writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small file sizes can have multiple benefits, such as:
  • Lowering the authentication checks across multiple files 
  • Reduced open file connections 
  • Faster copying/replication 
  • Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions 
References:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-best-practices
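For illustration only: a minimal PySpark sketch of such a compaction job, reusing the ~256 MB target from the note above; the adl:// paths and the assumed input volume are placeholders.

  import math
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  src = "adl://contosoadls.azuredatalakestore.net/raw/events/"          # placeholder paths
  dst = "adl://contosoadls.azuredatalakestore.net/compacted/events/"

  df = spark.read.json(src)

  # Collapse the many small input files into a handful of ~256 MB output files.
  total_bytes = 50 * 1024**3                 # assumed input volume; measure it in practice
  partitions = max(1, math.ceil(total_bytes / (256 * 1024 * 1024)))
  df.coalesce(partitions).write.mode("overwrite").parquet(dst)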