Download Oracle Cloud Platform Big Data Management 2018 Associate.1z0-928.PracticeTest.2019-09-08.28q.vcex

Vendor: Oracle
Exam Code: 1z0-928
Exam Name: Oracle Cloud Platform Big Data Management 2018 Associate
Date: Sep 08, 2019
File Size: 19 KB

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.

Purchase
Coupon: EXAM_HUB

Discount: 20%

Demo Questions

Question 1
You have data in large files you need to copy from your Hadoop HDFS to other storage providers. You decided to use the Oracle Big Data Cloud Service distributed copy utility odcp. 
As odcp is compatible with Cloudera's Distribution including Apache Hadoop (CDH), which four of the following are supported when copying files?
  A. Secure WebHDFS (SWebHDFS)
  B. Apache Hadoop Distributed File System (HDFS)
  C. Apache Flume
  D. Hypertext Transfer Protocol (HTTP)
  E. Oracle Cloud Infrastructure Object Storage
  F. Apache Sqoop
Correct answer: ABDE
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/copy-data-odcp.html#GUID-4049DB4F-2E9A-4050-AB6F-B8F99918059F
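The copy itself is a single odcp invocation that names a source and a target. Below is a minimal, hypothetical sketch wrapped in Python for illustration; the container name, storage provider alias, and paths are placeholders rather than values from the question.

    # Hypothetical sketch: drive an odcp copy from Python via subprocess.
    # Container name, provider alias, and paths below are placeholders.
    import subprocess

    # Copy a large HDFS directory to a Swift-compatible Object Storage container.
    subprocess.run(
        ["odcp", "hdfs:///user/oracle/logs", "swift://myContainer.myProvider/logs"],
        check=True,
    )

    # Other supported sources (e.g. HTTP or secure WebHDFS) follow the same pattern:
    # subprocess.run(["odcp", "https://example.com/data.csv", "hdfs:///user/oracle/data.csv"], check=True)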
Question 2
ABC Media receives thousands of files every day from many sources. Each text-formatted file is typically 1-2 MB in size. They need to store all of these files for at least two years. They have heard about Hadoop and the HDFS file system, and want to take advantage of its cost-effective storage to hold the vast number of files. 
Which two recommendations could you provide to the customer to maintain the effectiveness of HDFS with the growing number of files?
  A. Consider breaking down files into smaller files before ingesting.
  B. Consider adding additional NameNodes to increase data storage capacity.
  C. Reduce the memory available for the NameNode, as 1-2 MB files don't need a lot of memory.
  D. Consider concatenating files after ingesting.
  E. Use compression to free up space.
Correct answer: AE
Question 3
During provisioning, what can you create in order to integrate Big Data Cloud with other Oracle PaaS services?
  A. Attachments
  B. Associations
  C. Couplings
  D. Data Pipelines
Correct answer: B
Question 4
You have easily and successfully created clusters with the Oracle Big Data Cloud wizard. You want to create a cluster that will be very specific to the needs of your business. 
How would you customize Oracle Big Data Cloud clusters during provisioning?
  A. by using Stack Manager
  B. by using Oracle Enterprise Manager
  C. by using the Platform Service Manager UI
  D. by using a bootstrap script
Correct answer: D
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-compute-cloud/csspc/using-oracle-big-data-cloud.pdf
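A bootstrap script simply runs on the cluster nodes during provisioning, so it can perform whatever per-node setup the business needs. The sketch below is purely illustrative (shown in Python; the package names and the interpreter actually used by the provisioning wizard are assumptions, not taken from the Oracle documentation).

    # Illustrative only: the kind of per-node customization a bootstrap script performs.
    # Package names below are placeholders.
    import subprocess

    def bootstrap_node():
        # Install an extra OS package needed by later jobs (placeholder package).
        subprocess.run(["sudo", "yum", "install", "-y", "jq"], check=True)
        # Install a Python library used by Spark jobs or notebooks (placeholder).
        subprocess.run(["sudo", "pip", "install", "requests"], check=True)

    if __name__ == "__main__":
        bootstrap_node()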
Question 5
What is the optimal way in Event Hub Cloud Service to stream data into Object Storage?
  A. Use block storage as a temporary data landing zone.
  B. Use the external database system to push the data to the object store.
  C. Use Kafka connectors.
  D. It is not possible to stream data to the object store.
Correct answer: C
Explanation:
Reference: https://cloud.oracle.com/event-hub
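Kafka connectors are managed through Kafka Connect, so wiring a topic to an object store usually means registering a sink connector with a Connect worker. A hedged sketch via the Kafka Connect REST API follows; the worker URL, topic name, and connector class are placeholders, not the specific connector shipped with Event Hub Cloud Service.

    # Hedged sketch: register a sink connector via the Kafka Connect REST API.
    # Worker URL, topic, and connector class below are placeholders.
    import requests

    connector = {
        "name": "object-storage-sink",
        "config": {
            "connector.class": "com.example.ObjectStorageSinkConnector",  # placeholder class
            "topics": "clickstream",
            "tasks.max": "1",
        },
    }

    resp = requests.post("http://connect-host:8083/connectors", json=connector)
    resp.raise_for_status()
    print(resp.json())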
Question 6
Oracle Data Integrator for Big Data provides customers with enterprise big data integration. 
Which component does Oracle Data Integrator for Big Data use to give you the ability to solve your most complex and time-sensitive data transformation and data movement challenges?
  A. RDD
  B. Knowledge modules
  C. Predefined MapReduce job for data transformation
  D. Package scripts
Correct answer: B
Explanation:
Reference: http://www.oracle.com/us/products/middleware/data-integration/odieebd-ds-2464372.pdf
Question 7
What is the difference between permanent nodes and edge nodes? 
  A. Permanent nodes cannot be stopped, whereas you can start and stop edge nodes.
  B. Permanent nodes are for the life of the cluster, whereas edge nodes are temporary for the duration of processing the data.
  C. Permanent nodes contain your Hadoop data, but edge nodes do not have Hadoop data.
  D. Permanent nodes contain your Hadoop data, but edge nodes give you the “edge” in processing your data with more processors.
Correct answer: B
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/using-oracle-big-data-cloud-service.pdf
Question 8
What is the result of the flatMap() function in Spark?
  A. It always returns a new RDD by passing the supplied function used to filter the results.
  B. It always returns a new RDD that contains elements in the source dataset and the argument.
  C. It always returns an RDD with 0, 1, or more elements.
  D. It always returns an RDD with an identical size to the input RDD.
Correct answer: C
Explanation:
Reference: https://backtobazics.com/big-data/spark/apache-spark-flatmap-example/
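The behavior behind the correct answer is easy to see by contrasting map() with flatMap() on a small RDD. A minimal PySpark sketch (the input strings are arbitrary):

    # Minimal PySpark sketch contrasting map() and flatMap() on an RDD.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("flatmap-demo").getOrCreate()
    sc = spark.sparkContext

    lines = sc.parallelize(["big data", "", "oracle cloud platform"])

    # map() emits exactly one output element per input element (a list per line).
    mapped = lines.map(lambda line: line.split())
    # flatMap() flattens the results, so each input line yields 0, 1, or more elements.
    flat = lines.flatMap(lambda line: line.split())

    print(mapped.collect())  # [['big', 'data'], [], ['oracle', 'cloud', 'platform']]
    print(flat.collect())    # ['big', 'data', 'oracle', 'cloud', 'platform']

    spark.stop()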
Question 9
As the Big Data Cloud Service administrator, you need to access one of the cluster nodes. Using secure shell (SSH), which two things do you need in order to access the node?
  A. Name of the cluster node
  B. Private SSH key pair
  C. Name of the service instance
  D. IP address of the cluster node
  E. Public SSH key pair
Correct answer: BD
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/connect-cluster-node-secure-shell-ssh.html#GUID-29C53AFA-66ED-4649-9A3B-2B5480EC5B53
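In practice the two pieces come together in a single ssh call: the private key is passed with -i and the node is addressed by its IP. A hedged sketch, wrapped in Python for illustration; the key path, user name (opc is typical for Oracle Cloud compute nodes), and IP address are placeholders.

    # Hedged sketch: open an SSH session to a cluster node.
    # Key path, user name, and IP address below are placeholders.
    import subprocess

    subprocess.run(
        ["ssh", "-i", "/home/me/.ssh/bdcs_key", "opc@203.0.113.10"],
        check=True,
    )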
Question 10
You execute the command hdfs dfs -ls to view the directory listing on the Edge/Gateway node in a secured cluster, but get an error. You were able to execute the same command successfully 24 hours ago. No changes have been made to the user or the infrastructure. 
How do you fix the problem?
  A. You need to obtain a valid Kerberos ticket by executing the kinit command.
  B. You need to reboot the Hadoop cluster to reinstate access for users.
  C. A valid Kerberos ticket exists, but you need to initialize it using the kinit command.
  D. You are on the wrong node. The hdfs dfs -ls command can only be executed from the NameNode.
Correct answer: A
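The fix amounts to renewing the expired Kerberos ticket and retrying the listing. A minimal sketch, wrapped in Python for illustration; the Kerberos principal and HDFS path are placeholders.

    # Minimal sketch: renew the Kerberos ticket, confirm it, retry the listing.
    # Principal and HDFS path below are placeholders.
    import subprocess

    subprocess.run(["kinit", "bduser@EXAMPLE.COM"], check=True)  # prompts for the password
    subprocess.run(["klist"], check=True)                        # confirm a valid ticket exists
    subprocess.run(["hdfs", "dfs", "-ls", "/user/bduser"], check=True)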
HOW TO OPEN VCE FILES

Use VCE Exam Simulator to open VCE files

HOW TO OPEN VCEX AND EXAM FILES

Use ProfExam Simulator to open VCEX and EXAM files

Purchase ProfExam at a 20% discount

Get Now!