Download Administering Relational Databases on Microsoft Azure.DP-300.Dump4Pass.2024-10-27.80q.tqb

Vendor: Microsoft
Exam Code: DP-300
Exam Name: Administering Relational Databases on Microsoft Azure
Date: Oct 27, 2024
File Size: 5 MB

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.


Demo Questions

Question 1
You have 20 Azure SQL databases provisioned by using the vCore purchasing model. You plan to create an Azure SQL Database elastic pool and add the 20 databases.  
Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the solution.    
NOTE: Each correct selection is worth one point.
  1. total size of all the databases
  2. geo-replication support
  3. number of concurrently peaking databases * peak CPU utilization per database
  4. maximum number of concurrent sessions for all the databases
  5. total number of databases * average CPU utilization per database
Correct answer: ACE
Explanation:
CE: Estimate the vCores needed for the pool as follows:
For the vCore-based purchasing model: MAX(<Total number of DBs X average vCore utilization per DB>, <Number of concurrently peaking DBs X peak vCore utilization per DB>)
A: Estimate the storage space needed for the pool by adding the number of bytes needed for all the databases in the pool.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview
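The sizing rule above can be sketched as a short calculation. This is an illustrative sketch only; the function name and the sample utilization figures are made up, and only the MAX-of-two-estimates formula comes from the reference:

```python
def pool_vcores(total_dbs, avg_vcores_per_db, peaking_dbs, peak_vcores_per_db):
    """Elastic pool vCore estimate: the larger of the steady-state demand
    (all databases at average utilization) and the peak demand
    (concurrently peaking databases at peak utilization)."""
    steady_state = total_dbs * avg_vcores_per_db
    peak = peaking_dbs * peak_vcores_per_db
    return max(steady_state, peak)

# 20 databases averaging 0.5 vCore each, with 4 peaking at 3 vCores each:
# peak demand (4 * 3 = 12) exceeds the steady-state demand (20 * 0.5 = 10).
print(pool_vcores(20, 0.5, 4, 3))  # prints 12
```

This is why the three correct metrics are total size, concurrently peaking databases times peak utilization, and total databases times average utilization: the first sizes storage, and the other two feed the MAX.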
Question 2
You have an Azure SQL database that contains a table named factSales. FactSales contains the columns shown in the following table.  
(Table of factSales columns not included in this export.)
FactSales has 6 billion rows and is loaded nightly by using a batch process. You must provide the greatest reduction in space for the database and maximize performance.    
Which type of compression provides the greatest space reduction for the database?
  1. page compression 
  2. row compression
  3. columnstore compression
  4. columnstore archival compression
Correct answer: D
Explanation:
Columnstore tables and indexes are always stored with columnstore compression. You can further reduce the size of columnstore data by configuring an additional compression called archival compression.    
Note: A columnstore index is also logically organized as a table with rows and columns, but the data is physically stored in a column-wise data format.
Incorrect Answers: 
B: Rowstore — The rowstore index is the traditional style that has been around since the initial release of SQL 
Server. For rowstore tables and indexes, use the data compression feature to help reduce the size of the database.  
Reference: 
https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression
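The intuition behind answer D is that column-wise storage groups similar values together, which compresses far better than interleaved row data; archival compression then adds a further pass on top. A rough stand-in for that effect, using zlib on the same made-up records laid out row-wise versus column-wise (this illustrates the layout effect only, not SQL Server's actual compression algorithms):

```python
import zlib

# Hypothetical fact-table-like records: a low-cardinality key, a repeating
# category label, and a unique id (not the factSales schema from the question).
rows = [(i // 100, f"category-{i % 5}", i) for i in range(1000)]

# Row-wise layout interleaves the columns record by record.
row_major = "".join(f"{a},{b},{c};" for a, b, c in rows).encode()

# Column-wise layout stores each column contiguously, creating long runs
# of identical or similar values.
col_major = (
    "".join(str(a) for a, _, _ in rows)
    + "".join(b for _, b, _ in rows)
    + "".join(str(c) for _, _, c in rows)
).encode()

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
print(col_size < row_size)  # the column-wise layout compresses smaller here
```

The same data shrinks more when each column's repeats sit next to each other, which is the property columnstore (and columnstore archival) compression exploits.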
Question 3
You have a Microsoft SQL Server 2019 database named DB1 that uses the following database-level and instance-level features.    
  • Clustered columnstore indexes  
  • Automatic tuning  
  • Change tracking  
  • PolyBase    
You plan to migrate DB1 to an Azure SQL database.    
What feature should be removed or replaced before DB1 can be migrated?
  1. Clustered columnstore indexes
  2. PolyBase
  3. Change tracking
  4. Automatic tuning
Correct answer: B
Explanation:
This table lists the key features for PolyBase and the products in which they're available.  
(PolyBase feature availability table not included in this export.)
Incorrect Answers: 
C: Change tracking is a lightweight solution that provides an efficient change tracking mechanism for applications. It applies to both Azure SQL Database and SQL Server.    
D: Azure SQL Database and Azure SQL Managed Instance automatic tuning provides peak performance and stable workloads through continuous performance tuning based on AI and machine learning.    
Reference: 
https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-versioned-feature-summary
Question 4
You have a Microsoft SQL Server 2019 instance in an on-premises datacenter. The instance contains a 4-TB database named DB1. You plan to migrate DB1 to an Azure SQL Database managed instance.  
What should you use to minimize downtime and data loss during the migration?
  1. distributed availability groups
  2. database mirroring
  3. Always On availability group
  4. Azure Database Migration Service
Correct answer: D
Question 5
You have an on-premises Microsoft SQL Server 2016 server named Server1 that contains a database named DB1.    
You need to perform an online migration of DB1 to an Azure SQL Database managed instance by using Azure Database Migration Service. How should you configure the backup of DB1? To answer, select the appropriate options in the answer area.  
NOTE: Each correct selection is worth one point. 
Correct answer: To display the answer, ProfExam Simulator is required.
Explanation:
Box 1: Full and log backups only 
Make sure to take every backup on a separate backup media (backup files). Azure Database Migration Service doesn't support backups that are appended to a single backup file. Take full backup and log backups to separate backup files.    
Box 2: WITH CHECKSUM 
Azure Database Migration Service uses the backup and restore method to migrate your on-premises databases to SQL Managed Instance. Azure Database Migration Service only supports backups created using checksum.    
Incorrect Answers:   
NOINIT Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default.  
UNLOAD  
Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the default when a session begins.  
Reference: 
https://docs.microsoft.com/en-us/azure/dms/known-issues-azure-sql-db-managed-instance-online
Question 6
You have a resource group named App1Dev that contains an Azure SQL Database server named DevServer1. DevServer1 contains an Azure SQL database named DB1. The schema and permissions for DB1 are saved in a Microsoft SQL Server Data Tools (SSDT) database project.    
You need to populate a new resource group named App1Test with the DB1 database and an Azure SQL Server named TestServer1. The resources in App1Test must have the same configurations as the resources in App1Dev.    
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.  
Correct answer: To display the answer, ProfExam Simulator is required.
Question 7
You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named Account1. You plan to access the files in Account1 by using an external table.  
You need to create a data source in Pool1 that you can reference when you create the external table.    
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.  
NOTE: Each correct selection is worth one point. 
Correct answer: To display the answer, ProfExam Simulator is required.
Explanation:
Box 1: blob 
The following example creates an external data source for Azure Data Lake Gen2  
CREATE EXTERNAL DATA SOURCE YellowTaxi  
WITH ( LOCATION = 'https://azureopendatastorage.blob.core.windows.net/nyctlc/yellow/', 
TYPE = HADOOP)    
Box 2: HADOOP   
Reference: 
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables
Question 8
You plan to develop a dataset named Purchases by using Azure Databricks. Purchases will contain the following columns: 
  • ProductID  
  • ItemPrice 
  • LineTotal  
  • Quantity  
  • StoreID  
  • Minute  
  • Month  
  • Hour  
  • Year  
  • Day    
You need to store the data to support hourly incremental load pipelines that will vary for each StoreID. The solution must minimize storage costs. How should you complete the code? To answer, select the appropriate options in the answer area.  
NOTE: Each correct selection is worth one point. 
Correct answer: To display the answer, ProfExam Simulator is required.
Explanation:
Box 1: .partitionBy 
Example: 
df.write.partitionBy("y", "m", "d")
  .mode(SaveMode.Append)
  .parquet("/data/hive/warehouse/db_name.db/" + tableName)
Box 2: ("Year","Month","Day","Hour","StoreID")
Box 3: .parquet("/Purchases")   
Reference: 
https://intellipaat.com/community/11744/how-to-partition-and-write-dataframe-in-spark-without-deleting-partitions-with-no-new-data
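The partitionBy call writes one directory per distinct key combination, which is what lets an hourly pipeline rewrite only the Year/Month/Day/Hour/StoreID slice it needs. A pure-Python sketch of the resulting path layout (no Spark required; the column names follow the question, but the sample records and the helper function are made up):

```python
from collections import defaultdict

# Hypothetical purchase records; only the partition columns matter here.
purchases = [
    {"Year": 2024, "Month": 10, "Day": 27, "Hour": 9, "StoreID": 1, "LineTotal": 19.99},
    {"Year": 2024, "Month": 10, "Day": 27, "Hour": 9, "StoreID": 2, "LineTotal": 5.00},
    {"Year": 2024, "Month": 10, "Day": 27, "Hour": 10, "StoreID": 1, "LineTotal": 7.50},
]

def partition_paths(records, keys, root="/Purchases"):
    """Group records into Hive-style partition directories, mimicking how
    df.write.partitionBy(*keys).parquet(root) lays out its output."""
    buckets = defaultdict(list)
    for rec in records:
        path = root + "".join(f"/{k}={rec[k]}" for k in keys)
        buckets[path].append(rec)
    return dict(buckets)

layout = partition_paths(purchases, ["Year", "Month", "Day", "Hour", "StoreID"])
for path in sorted(layout):
    print(path, len(layout[path]))
```

Each incremental load touches only the directories for its hour and store, which is why partitioning on all five keys minimizes what has to be rewritten.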
Question 9
You are building a database in an Azure Synapse Analytics serverless SQL pool.  
You have data stored in Parquet files in an Azure Data Lake Storage Gen2 container.  
Records are structured as shown in the following sample.  
(Sample records not included in this export.)
The records contain two applicants at most.    
You need to build a table that includes only the address fields.    
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.    
NOTE: Each correct selection is worth one point. 
Correct answer: To display the answer, ProfExam Simulator is required.
Explanation:
Box 1: CREATE EXTERNAL TABLE 
An external table points to data located in Hadoop, Azure Storage blob, or AzureDataLakeStorage. External tables are used to read data from files or write data to files in Azure Storage. With SynapseSQL, you can use external tables to read external data using dedicated SQL pool or serverless SQL pool.    
Syntax: 
CREATE EXTERNAL TABLE { database_name.schema_name.table_name | schema_name.table_name | table_name }
    ( <column_definition> [ ,...n ] )
WITH (
    LOCATION = 'folder_or_filepath',
    DATA_SOURCE = external_data_source_name,
    FILE_FORMAT = external_file_format_name
)
Box 2: OPENROWSET
When using serverless SQL pool, CETAS is used to create an external table and export query results to Azure Storage Blob or Azure Data Lake Storage Gen2.
Example:
AS
SELECT decennialTime, stateName, SUM(population) AS population
FROM OPENROWSET(
    BULK 'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=*/*.parquet',
    FORMAT = 'PARQUET'
) AS [r]
GROUP BY decennialTime, stateName
GO
Reference: 
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables 
Question 10
You are designing a date dimension table in an Azure Synapse Analytics dedicated SQL pool. The date dimension table will be used by all the fact tables.
Which distribution type should you recommend to minimize datamovement?
  1. HASH 
  2. REPLICATE
  3. ROUND_ROBIN
Correct answer: B
Explanation:
A replicated table has a full copy of the table available on every Compute node. Queries run fast on replicated tables since joins on replicated tables don't require data movement.
Replication requires extra storage, though, and isn't practical for large tables.
Incorrect Answers: 
C: A round-robin distributed table distributes table rows evenly across all distributions. The assignment of rows to distributions is random. Unlike hash-distributed tables, rows with equal values are not guaranteed to be assigned to the same distribution.  
As a result, the system sometimes needs to invoke a data movement operation to better organize your data before it can resolve a query. 
Reference: 
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute
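The difference between the three distribution types can be sketched in plain Python: hash distribution sends equal keys to the same node, round-robin scatters them, and a replicated table is simply copied to every node. The node count and sample rows below are illustrative, not part of the question:

```python
from itertools import cycle

NODES = 4
# Fact rows that all share the same date key, as in a join to a date dimension.
rows = [("2024-10-27", n) for n in range(8)]

# Hash distribution: rows with the same key always land on the same node,
# so a join on that key needs no data movement.
hash_nodes = {hash(key) % NODES for key, _ in rows}

# Round-robin distribution: rows are dealt out evenly regardless of key,
# so equal keys end up scattered and joins may require a shuffle.
rr = cycle(range(NODES))
round_robin_nodes = {next(rr) for _ in rows}

# Replicated: every node holds a full copy of the table, so a join against
# it never moves data, at the cost of extra storage per node.
replicated = {node: list(rows) for node in range(NODES)}

print(len(hash_nodes), len(round_robin_nodes))  # 1 vs 4
```

A small date dimension joined by every fact table is exactly the case where paying the replication storage cost to eliminate data movement is worthwhile, hence answer B.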