Download Implementing a Data Warehouse with Microsoft SQL Server 2012-2014.70-463.PracticeTest.2018-10-15.134q.tqb

Vendor: Microsoft
Exam Code: 70-463
Exam Name: Implementing a Data Warehouse with Microsoft SQL Server 2012/2014
Date: Oct 15, 2018
File Size: 3 MB

How to open TQB files?

Files with TQB (Taurus Question Bank) extension can be opened by Taurus Exam Studio.

Demo Questions

Question 1
You are reviewing the design of an existing fact table named factSales, which is loaded from a SQL Azure database by a SQL Server Integration Services (SSIS) package each day. The fact table has approximately 1 billion rows and is dimensioned by product, sales date, and sales time of day. 
  
The database administrator is concerned about the growth of the database. Users report poor reporting performance against this database. Reporting requirements have recently changed and the only remaining report that uses this fact table reports sales by product name, sale month, and sale year. No other reports will be created against this table. 
You need to reduce the report processing time and minimize the growth of the database. 
What should you do?
  1. Partition the table by product type.
  2. Create a view over the fact table to aggregate sales by month.
  3. Change the granularity of the fact table to month.
  4. Create an indexed view over the fact table to aggregate sales by month.
Correct answer: C
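For illustration, a monthly-grain redesign of the fact table (answer C) might look like the following sketch; the table and column names are assumptions, not part of the question:

  CREATE TABLE dbo.factSalesMonthly (
      ProductKey   INT            NOT NULL,  -- surrogate key of the product dimension
      SalesYear    SMALLINT       NOT NULL,
      SalesMonth   TINYINT        NOT NULL,
      SalesAmount  DECIMAL(19,4)  NOT NULL,  -- sales pre-aggregated to the month
      CONSTRAINT PK_factSalesMonthly PRIMARY KEY (ProductKey, SalesYear, SalesMonth)
  );

Storing one row per product per month both limits the growth of the table and lets the remaining report read pre-aggregated data directly.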
Question 2
You are designing a data warehouse with two fact tables. The first table contains sales per month and the second table contains orders per day. 
Referential integrity must be enforced declaratively. 
You need to design a solution that can join a single time dimension to both fact tables. 
What should you do?
  1. Create a time mapping table.
  2. Change the level of granularity in both fact tables to be the same.
  3. Merge the fact tables.
  4. Create a view on the sales table.
Correct answer: B
Explanation:
In Microsoft SQL Server Analysis Services, a time dimension is a dimension type whose attributes represent time periods, such as years, semesters, quarters, months, and days. The periods in a time dimension provide time-based levels of granularity for analysis and reporting. The attributes are organized in hierarchies, and the granularity of the time dimension is determined largely by the business and reporting requirements for historical data.
References: https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/database-dimensions-create-a-date-type-dimension
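As a hedged sketch of answer B, both fact tables could be brought to the same (daily) grain so that a single date dimension can be joined to each with declarative foreign keys; the names below are assumptions:

  CREATE TABLE dbo.dimDate (
      DateKey INT  NOT NULL PRIMARY KEY,  -- e.g. 20181015
      [Date]  DATE NOT NULL
  );

  CREATE TABLE dbo.factSales (
      DateKey     INT            NOT NULL REFERENCES dbo.dimDate (DateKey),
      SalesAmount DECIMAL(19,4)  NOT NULL
  );

  CREATE TABLE dbo.factOrders (
      DateKey    INT NOT NULL REFERENCES dbo.dimDate (DateKey),
      OrderCount INT NOT NULL
  );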
Question 3
You are designing a data warehouse for a software distribution business that stores sales by software title. It stores sales targets by software category. Software titles are classified into subcategories and categories. Each software title is included in only a single software subcategory, and each subcategory is included in only a single category. The data warehouse will be a data source for an Analysis Services cube. 
The data warehouse contains two fact tables:
  • factSales, used to record daily sales by software title 
  • factTarget, used to record the monthly sales targets by software category 
Reports that show sales by software title, category, and subcategory, as well as sales targets, must be developed against the warehouse.
You need to design the software title dimension. The solution should use as few tables as possible while supporting all the requirements. 
What should you do?
  1. Create three software tables, dimSoftware, dimSoftwareCategory, and dimSoftwareSubcategory and a fourth bridge table that joins software titles to their appropriate category and subcategory table records with foreign key constraints. Direct the cube developer to use key granularity attributes.
  2. Create three software tables, dimSoftware, dimSoftwareCategory, and dimSoftwareSubcategory. Connect factSales to all three tables and connect factTarget to dimSoftwareCategory with foreign key constraints. Direct the cube developer to use key granularity attributes.
  3. Create one table, dimSoftware, which contains Software Detail, Category, and Subcategory columns. Connect factSales to dimSoftware with a foreign key constraint. Direct the cube developer to use a non-key granularity attribute for factTarget.
  4. Create two tables, dimSoftware and dimSoftwareCategory. Connect factSales to dimSoftware and factTarget to dimSoftwareCategory with foreign key constraints. Direct the cube developer to use key granularity attributes.
Correct answer: C
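A minimal sketch of the single-table dimension described in answer C, with assumed column names:

  CREATE TABLE dbo.dimSoftware (
      SoftwareKey   INT            NOT NULL PRIMARY KEY,
      SoftwareTitle NVARCHAR(100)  NOT NULL,
      Subcategory   NVARCHAR(50)   NOT NULL,
      Category      NVARCHAR(50)   NOT NULL
  );

  -- factSales joins on the surrogate key; factTarget is related to the
  -- non-key Category attribute inside the cube rather than by a foreign key.
  ALTER TABLE dbo.factSales
      ADD CONSTRAINT FK_factSales_dimSoftware
      FOREIGN KEY (SoftwareKey) REFERENCES dbo.dimSoftware (SoftwareKey);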
Question 4
You are designing a data warehouse hosted on SQL Azure. The data warehouse currently includes the dimUser and dimDistrict dimension tables and the factSales fact table. The dimUser table contains records for each user permitted to run reports against the warehouse, and the dimDistrict table contains information about sales districts.
The system is accessed by users from certain districts, as well as by area supervisors and users from the corporate headquarters. 
You need to design a table structure to ensure that certain users can see sales data for only certain districts. Some users must be permitted to see sales data from multiple districts. 
What should you do?
  1. Add a district column to the dimUser table.
  2. Partition the factSales table on the district column.
  3. Create a userDistrict table that contains primary key columns from the dimUser and dimDistrict tables.
  4. For each district, create a view of the factSales table that includes a WHERE clause for the district.
Correct answer: C
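A possible shape for the bridge table in answer C (key names are assumptions):

  CREATE TABLE dbo.userDistrict (
      UserKey     INT NOT NULL REFERENCES dbo.dimUser (UserKey),
      DistrictKey INT NOT NULL REFERENCES dbo.dimDistrict (DistrictKey),
      CONSTRAINT PK_userDistrict PRIMARY KEY (UserKey, DistrictKey)
  );

Each row grants one user visibility of one district, so a user who may see several districts simply gets several rows.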
Question 5
You are reviewing the design of a customer dimension table in an existing data warehouse hosted on SQL Azure. 
The current dimension design does not allow the retention of historical changes to customer attributes such as Postcode. 
You need to redesign the dimension to enable the full historical reporting of changes to multiple customer attributes including Postcode. 
What should you do?
  1. Add StartDate and EndDate columns to the customer dimension.
  2. Add an IsCurrent column to the customer dimension.
  3. Enable Snapshot Isolation on the data warehouse.
  4. Add CurrentValue and PreviousValue columns to the customer dimension.
Correct answer: A
Explanation:
Adding StartDate and EndDate columns gives you this ability: each new record is stamped with a start and end date, so you can determine when each version of a customer row was active, which retains the full history of changes to attributes such as Postcode.
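A minimal sketch of the Type 2 design implied by answer A; column names other than Postcode are assumptions:

  CREATE TABLE dbo.dimCustomer (
      CustomerKey INT           NOT NULL IDENTITY PRIMARY KEY,  -- surrogate key per version
      CustomerID  INT           NOT NULL,                       -- business key
      Postcode    NVARCHAR(10)  NOT NULL,
      StartDate   DATE          NOT NULL,                       -- when this version became active
      EndDate     DATE          NULL                            -- NULL marks the current version
  );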
Question 6
You are implementing the indexing strategy for a fact table in a data warehouse. The fact table is named Quotes. The table has no indexes and consists of seven columns:
  • [ID] 
  • [QuoteDate] 
  • [Open] 
  • [Close] 
  • [High] 
  • [Low] 
  • [Volume] 
Each of the following queries must be able to use a columnstore index:
  • SELECT AVG([Close]) AS [AverageClose] FROM Quotes WHERE [QuoteDate] BETWEEN '20100101' AND '20101231'
  • SELECT AVG([High] - [Low]) AS [AverageRange] FROM Quotes WHERE [QuoteDate] BETWEEN '20100101' AND '20101231'
  • SELECT SUM([Volume]) AS [SumVolume] FROM Quotes WHERE [QuoteDate] BETWEEN '20100101' AND '20101231'
You need to ensure that the indexing strategy meets the requirements. The strategy must also minimize the number and size of the indexes. 
What should you do?
  1. Create one columnstore index that contains [ID], [Close], [High], [Low], [Volume], and [QuoteDate].
  2. Create three columnstore indexes: one containing [QuoteDate] and [Close]; one containing [QuoteDate], [High], and [Low]; and one containing [QuoteDate] and [Volume].
  3. Create one columnstore index that contains [QuoteDate], [Close], [High], [Low], and [Volume].
  4. Create two columnstore indexes: one containing [ID], [QuoteDate], [Volume], and [Close]; and one containing [ID], [QuoteDate], [High], and [Low].
Correct answer: C
Explanation:
References: http://msdn.microsoft.com/en-us/library/gg492088.aspx
http://msdn.microsoft.com/en-us/library/gg492153.aspx
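For reference, the single index from answer C could be created with SQL Server 2012-style syntax such as the following (the index name is an assumption):

  CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Quotes
  ON dbo.Quotes ([QuoteDate], [Close], [High], [Low], [Volume]);

One index covering every column the three queries touch is sufficient, and leaving out [ID] keeps the index as small as possible.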
Question 7
You are designing an enterprise star schema that will consolidate data from three independent data marts. One of the data marts is hosted on SQL Azure. 
Most of the dimensions have the same structure and content. However, the geography dimension is slightly different in each data mart. 
You need to design a consolidated dimensional structure that will be easy to maintain while ensuring that all dimensional data from the three original solutions is represented. 
What should you do?
  1. Create a junk dimension for the geography dimension.
  2. Implement change data capture.
  3. Create a conformed dimension for the geography dimension.
  4. Create three geography dimensions.
Correct answer: C
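A conformed dimension is a single, agreed-upon geography table (and meaning) shared by the fact tables of all three data marts; a hedged sketch with assumed columns:

  CREATE TABLE dbo.dimGeography (
      GeographyKey INT           NOT NULL PRIMARY KEY,
      City         NVARCHAR(50)  NOT NULL,
      StateRegion  NVARCHAR(50)  NOT NULL,
      Country      NVARCHAR(50)  NOT NULL
  );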
Question 8
To facilitate the troubleshooting of SQL Server Integration Services (SSIS) packages, a logging methodology is put in place. 
The methodology has the following requirements:
  • The deployment process must be simplified. 
  • All the logs must be centralized in SQL Server. 
  • Log data must be available via reports or T-SQL. 
  • Log archival must be automated. 
You need to configure a logging methodology that meets the requirements while minimizing the amount of deployment and development effort. 
What should you do?
  1. Open a command prompt and run the gacutil command.
  2. Open a command prompt and execute the package by using the SQL Log provider and running the dtexecui.exe utility.
  3. Add an OnError event handler to the SSIS project.
  4. Use an msi file to deploy the package on the server.
  5. Configure the output of a component in the package dataflow to use a data tap.
  6. Run the dtutil command to deploy the package to the SSIS catalog and store the configuration in SQL Server.
  7. Open a command prompt and run the dtexec /rep /conn command.
  8. Open a command prompt and run the dtutil /copy command.
  9. Open a command prompt and run the dtexec /dumperror /conn command.
  10. Configure the SSIS solution to use the Project Deployment Model.
  11. Create a reusable custom logging component and use it in the SSIS project.
Correct answer: J
Explanation:
References:
http://msdn.microsoft.com/en-us/library/ms140246.aspx
http://www.element61.be/en/resource/sql-server-integration-services-2012-%E2%80%93-project-deployment-model
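Under the Project Deployment Model, execution logging is written to the SSISDB catalog automatically, so the logs are already centralized in SQL Server and can be read with T-SQL or exposed through reports, for example:

  SELECT TOP (100)
         operation_id,
         message_time,
         message_type,
         message
  FROM   SSISDB.catalog.operation_messages
  ORDER BY message_time DESC;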
Question 9
You are developing a SQL Server Integration Services (SSIS) project that copies a large amount of rows from a SQL Azure database. The project uses the Package Deployment Model. This project is deployed to SQL Server on a test server. 
You need to ensure that the project is deployed to the SSIS catalog on the production server. 
What should you do?
  1. Open a command prompt and run the dtexec /dumperror /conn command.
  2. Create a reusable custom logging component and use it in the SSIS project.
  3. Open a command prompt and run the gacutil command.
  4. Add an OnError event handler to the SSIS project.
  5. Open a command prompt and execute the package by using the SQL Log provider and running the dtexecui.exe utility.
  6. Open a command prompt and run the dtexec /rep /conn command.
  7. Open a command prompt and run the dtutil /copy command.
  8. Use an msi file to deploy the package on the server.
  9. Configure the SSIS solution to use the Project Deployment Model.
  10. Configure the output of a component in the package data flow to use a data tap.
  11. Run the dtutil command to deploy the package to the SSIS catalog and store the configuration in SQL Server.
Correct answer: I
Explanation:
References:
http://msdn.microsoft.com/en-us/library/hh231102.aspx
http://msdn.microsoft.com/en-us/library/hh213290.aspx
http://msdn.microsoft.com/en-us/library/hh213373.aspx
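Once the solution has been converted to the Project Deployment Model, the resulting .ispac can be deployed to the SSIS catalog from T-SQL; a sketch in which the folder, project, and file names are assumptions (the target folder must already exist in the catalog):

  DECLARE @project varbinary(MAX) =
      (SELECT BulkColumn
       FROM   OPENROWSET(BULK N'C:\Deploy\CopyRows.ispac', SINGLE_BLOB) AS src);
  DECLARE @operation_id bigint;

  EXEC SSISDB.catalog.deploy_project
       @folder_name    = N'Production',
       @project_name   = N'CopyRows',
       @project_stream = @project,
       @operation_id   = @operation_id OUTPUT;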
Question 10
Note: This question is a part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series.
Information and details provided in a question apply only to that question. 
You are developing a SQL Server Integration Services (SSIS) package. 
To process complex scientific data originating from a Microsoft Azure SQL Database database, a custom task component is added to the project. 
You need to ensure that the custom component is deployed on a test environment correctly. 
What should you do?
  1. Add an OnError event handler to the SSIS project.
  2. Use an msi file to deploy the package on the server.
  3. Open a command prompt and run the gacutil command.
  4. Open a command prompt and run the dtutil /copy command.
  5. Open a command prompt and run the dtexec /rep /conn command.
  6. Open a command prompt and run the dtexec /dumperror /conn command.
  7. Open a command prompt and execute the package by using the SQL Log provider and running the dtexecui.exe utility.
  8. Create a reusable custom logging component and use it in the SSIS project.
  9. Configure the SSIS solution to use the Project Deployment Model.
  10. Configure the output of a component in the package data flow to use a data tap.
  11. Run the dtutil command to deploy the package to the SSIS catalog and store the configuration in SQL Server.
Correct answer: C
Explanation:
References:
http://msdn.microsoft.com/en-us/library/ms403356.aspx
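A custom task assembly has to be registered in the Global Assembly Cache on the test server, which is what gacutil does; the assembly path below is an assumption:

  gacutil /i "C:\Deploy\MyCompany.ScientificDataTask.dll"

After registration, the assembly typically also needs to be copied to the DTS\Tasks folder of the installed SQL Server version so the task appears in the SSIS Toolbox.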