Download MuleSoft Certified Integration Architect - Level 1.MCIA-Level-1.CertDumps.2022-10-22.83q.vcex

Vendor: MuleSoft
Exam Code: MCIA-Level-1
Exam Name: MuleSoft Certified Integration Architect - Level 1
Date: Oct 22, 2022
File Size: 4 MB

How to open VCEX files?

Files with the VCEX extension can be opened with ProfExam Simulator.

Purchase
Coupon: EXAM_HUB

Discount: 20%

Demo Questions

Question 1
An organization is designing an integration solution to replicate financial transaction data from a legacy system into a data warehouse (DWH).
The DWH must contain a daily snapshot of financial transactions, to be delivered as a CSV file. Daily transaction volume exceeds tens of millions of records, with significant spikes in volume during popular shopping periods.
What is the most appropriate integration style for an integration solution that meets the organization's current requirements?
  1. API-led connectivity
  2. Batch-triggered ETL
  3. Event-driven architecture
  4. Microservice architecture
Correct answer: B
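To make the batch-triggered ETL style concrete, the following is a minimal Java sketch of a daily snapshot extract: it streams rows out of the source database into a CSV file so that tens of millions of records never have to fit in memory at once. The JDBC URL, credentials, table, and column names are hypothetical; a real pipeline would read them from secured configuration and be fired by a scheduler.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DailySnapshotExtract {

    public static void main(String[] args) throws SQLException, IOException {
        // Hypothetical source system; depending on the JDBC driver, streaming
        // may additionally require disabling autocommit.
        String jdbcUrl = "jdbc:postgresql://legacy-host:5432/finance";

        try (Connection conn = DriverManager.getConnection(jdbcUrl, "etl_user", "secret");
             Statement stmt = conn.createStatement();
             BufferedWriter out = Files.newBufferedWriter(Path.of("transactions_snapshot.csv"))) {

            // Stream the result set in chunks instead of buffering all rows in memory.
            stmt.setFetchSize(10_000);

            out.write("id,amount,booked_at");
            out.newLine();

            try (ResultSet rs = stmt.executeQuery(
                    "SELECT id, amount, booked_at FROM transactions"
                    + " WHERE booked_at >= CURRENT_DATE - 1")) {
                while (rs.next()) {
                    out.write(rs.getLong("id") + "," + rs.getBigDecimal("amount")
                            + "," + rs.getTimestamp("booked_at"));
                    out.newLine();
                }
            }
        }
    }
}
```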
Question 2
A set of integration Mule applications, some of which expose APIs, are being created to enable a new business process. Various stakeholders may be impacted by this. These stakeholders are a combination of semi-technical users (who understand basic integration terminology and concepts such as JSON and XML) and technically skilled potential consumers of the Mule applications and APIs.
What is an effective way for the project team responsible for the Mule applications and APIs being built to communicate with these stakeholders using Anypoint Platform and its supplied toolset?
  1. Create Anypoint Exchange entries with pages elaborating the integration design, including API notebooks (where applicable) to help the stakeholders understand and interact with the Mule applications and APIs at various levels of technical depth
  2. Capture documentation about the Mule applications and APIs inline within the Mule integration flows and use Anypoint Studio's Export Documentation feature to provide an HTML version of this documentation to the stakeholders
  3. Use Anypoint Design Center to implement the Mule applications and APIs and give the various stakeholders access to these Design Center projects, so they can collaborate and provide feedback
  4. Use Anypoint Exchange to register the various Mule applications and APIs and share the RAML definitions with the stakeholders, so they can be discovered
Correct answer: A
Question 3
A Mule application is being designed to do the following:
Step 1: Read a SalesOrder message from a JMS queue, where each SalesOrder consists of a header and a list of SalesOrderLineItems.
Step 2: Insert the SalesOrder header and each SalesOrderLineItem into different tables in an RDBMS.
Step 3: Insert the SalesOrder header and the sum of the prices of all its SalesOrderLineItems into a table in a different RDBMS.
No SalesOrder message can be lost, and the consistency of all SalesOrder-related information in both RDBMSs must be ensured at all times.
What design choice (including choice of transactions) and order of steps addresses these requirements?
  1. 1. Read the JMS message (NOT in an XA transaction)
    2. Perform EACH DB insert in a SEPARATE DB transaction
    3. Acknowledge the JMS message
  2. 1. Read and acknowledge the JMS message (NOT in an XA transaction) 
    2. In a NEW XA transaction, perform BOTH DB inserts
  3. 1. Read the JMS message in an XA transaction 
    2. In the SAME XA transaction, perform BOTH DB inserts but do NOT acknowledge the JMS message
  4. 1. Read the JMS message (NOT in an XA transaction) 
    2. Perform BOTH DB inserts in ONE DB transaction
    3. Acknowledge the JMS message
Correct answer: C
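To see why option C works, here is a minimal Java (JTA) sketch of the same design outside Mule: the JMS receive and both database inserts are enlisted in one XA transaction, so the message is acknowledged only when the transaction commits, and a failure rolls everything back with the message returned to the queue. A container-provided transaction manager and XA-capable resources are assumed; all JNDI names, table names, and the SalesOrder mapping are hypothetical.

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class SalesOrderXaConsumer {

    public void consumeOne() throws Exception {
        InitialContext jndi = new InitialContext();
        // Container-managed, XA-capable resources; the JNDI names are hypothetical.
        UserTransaction tx = (UserTransaction) jndi.lookup("java:comp/UserTransaction");
        ConnectionFactory jmsCf = (ConnectionFactory) jndi.lookup("jms/XAConnectionFactory");
        DataSource ordersDb = (DataSource) jndi.lookup("jdbc/OrdersXA");
        DataSource summaryDb = (DataSource) jndi.lookup("jdbc/SummaryXA");

        tx.begin();
        try (JMSContext jms = jmsCf.createContext();
             Connection orders = ordersDb.getConnection();
             Connection summary = summaryDb.getConnection()) {

            // Receiving inside the XA transaction means the message is acknowledged
            // only when the transaction commits -- there is no explicit acknowledge.
            Message msg = jms.createConsumer(jms.createQueue("SalesOrders")).receive(5_000);
            SalesOrder order = parse(msg);

            try (PreparedStatement ps = orders.prepareStatement(
                    "INSERT INTO sales_order_header (id, customer) VALUES (?, ?)")) {
                ps.setLong(1, order.id());
                ps.setString(2, order.customer());
                ps.executeUpdate();
            }
            // ... insert each SalesOrderLineItem into its own table the same way ...

            try (PreparedStatement ps = summary.prepareStatement(
                    "INSERT INTO sales_order_totals (id, total) VALUES (?, ?)")) {
                ps.setLong(1, order.id());
                ps.setBigDecimal(2, order.total());
                ps.executeUpdate();
            }

            tx.commit(); // both DB inserts AND the JMS consume succeed or fail together
        } catch (Exception e) {
            tx.rollback(); // message returns to the queue; no partial DB state remains
            throw e;
        }
    }

    record SalesOrder(long id, String customer, BigDecimal total) {}

    private SalesOrder parse(Message msg) {
        // Omitted: map the JMS message payload to a SalesOrder.
        return new SalesOrder(1L, "demo-customer", BigDecimal.TEN);
    }
}
```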
Question 4
Refer to the exhibit. A Mule application is being designed to be deployed to several CloudHub workers. The Mule application's integration logic is to replicate changed Accounts from Salesforce to a backend system every 5 minutes.
A watermark will be used to only retrieve those Salesforce Accounts that have been modified since the last time the integration logic ran.
What is the most appropriate way to implement persistence for the watermark in order to support the required data replication integration logic?
  1. Persistent Object Store
  2. Persistent Cache Scope
  3. Persistent Anypoint MQ Queue
  4. Persistent VM Queue
Correct answer: A
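The watermark pattern behind option A can be sketched in a few lines of Java. The file-backed Properties store below is only a stand-in for the persistent Object Store (which, on CloudHub, is shared across workers and survives restarts); the key name and the Salesforce query are hypothetical.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.Properties;

public class AccountReplicationJob {

    private static final Path STORE = Path.of("watermark.properties");
    private static final String KEY = "accounts.lastModifiedSince";

    // Invoked every 5 minutes by a scheduler (scheduling omitted).
    public void runOnce() throws IOException {
        Instant since = readWatermark();
        Instant now = Instant.now();

        // In the scenario this is a Salesforce query filtered on LastModifiedDate > since.
        replicateAccountsModifiedAfter(since);

        // Advance the watermark only after a successful run, so a failed run
        // is retried over the same window on the next poll.
        writeWatermark(now);
    }

    private Instant readWatermark() throws IOException {
        Properties p = new Properties();
        if (Files.exists(STORE)) {
            try (InputStream in = Files.newInputStream(STORE)) {
                p.load(in);
            }
        }
        String value = p.getProperty(KEY);
        return value != null ? Instant.parse(value) : Instant.EPOCH;
    }

    private void writeWatermark(Instant value) throws IOException {
        Properties p = new Properties();
        p.setProperty(KEY, value.toString());
        try (OutputStream out = Files.newOutputStream(STORE)) {
            p.store(out, "replication watermark");
        }
    }

    private void replicateAccountsModifiedAfter(Instant since) {
        // Omitted: fetch changed Accounts and push them to the backend system.
    }
}
```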
Question 5
Refer to the exhibit. A shopping cart checkout process consists of a web store backend sending a sequence of API invocations to an Experience API, which in turn invokes a Process API. All API invocations are over HTTPS POST. The Java web store backend executes in a Java EE application server, while all API implementations are Mule applications executing in a customer-hosted Mule runtime.
End-to-end correlation of all HTTP requests and responses belonging to each individual checkout instance is required. This is to be done through a common correlation ID, so that all log entries written by the web store backend, Experience API implementation, and Process API implementation include the same correlation ID for all requests and responses belonging to the same checkout instance.
What is the most efficient way (using the least amount of custom coding or configuration) for the web store backend and the implementations of the Experience API and Process API to participate in end-to-end correlation of the API invocations for each checkout instance?
  1. The Experience API implementation generates a correlation ID for each incoming HTTP request and passes it to the web store backend in the HTTP response, which includes it in all subsequent API invocations to the Experience API
    The Experience API implementation must be coded to also propagate the correlation ID to the Process API in a suitable HTTP request header
  2. The web store backend generates a new correlation ID value at the start of checkout and sets it on the X-CORRELATION-ID HTTP request header in each API invocation belonging to that checkout
    No special code or configuration is included in the Experience API and Process API implementations to generate and manage the correlation ID
  3. The web store backend, being a Java EE application, automatically makes use of the thread-local correlation ID generated by the Java EE application server and automatically transmits that to the Experience API using HTTP-standard headers
    No special code or configuration is included in the web store backend, Experience API, and Process API implementations to generate and manage the correlation ID
  4. The web store backend sends a correlation ID value in the HTTP request body in the way required by the Experience API
    The Experience API and Process API implementations must be coded to receive the custom correlation ID in the HTTP requests and propagate it in suitable HTTP request headers
Correct answer: B
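From the web store backend's side, option B amounts to very little code: generate one correlation ID per checkout and send it on the X-CORRELATION-ID header of every API invocation in that checkout. Mule's HTTP Listener picks the header up and the runtime propagates it to outbound requests and log entries, which is why the API implementations need no custom code. The endpoint URL below is hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

// One instance of this client is created per checkout, so every API
// invocation it makes carries the same correlation ID.
public class CheckoutClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String correlationId = UUID.randomUUID().toString();

    public String callExperienceApi(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://experience-api.example.com" + path))
                .header("X-CORRELATION-ID", correlationId)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```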
Question 6
Refer to the exhibit. A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. The Mule application has a flow that polls a database and another flow with an HTTP Listener.
HTTP clients send HTTP requests directly to individual cluster nodes.
What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has failed, but before that node is restarted?
  1. Database polling stops
    All HTTP requests are rejected
  2. Database polling stops
    All HTTP requests continue to be accepted
  3. Database polling continues
    Only HTTP requests sent to the remaining node continue to be accepted
  4. Database polling continues
    All HTTP requests continue to be accepted, but requests to the failed node incur increased latency
Correct answer: C
Question 7
What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?
  1. Import from API designer, compile, package, unit test, deploy, publish to Anypoint Exchange
  2. Compile, package, unit test, validate unit test coverage, deploy
  3. Compile, package, unit test, deploy, integration test
  4. Compile, package, unit test, deploy, create associated API instances in API Manager
Correct answer: B
Explanation:
Reference: http://workshop.tools.mulesoft.com/modules/module7_lab4#step-2-configure-the-mule-mavenplugin
Question 8
An integration Mule application is deployed to a customer-hosted multi-node Mule 4 runtime cluster. The Mule application uses a Listener operation of a JMS connector to receive incoming messages from a JMS queue.
How are the messages consumed by the Mule application?
  1. Regardless of the Listener operation configuration, all messages are consumed by ONLY the primary cluster node
  2. Depending on the JMS provider's configuration, either all messages are consumed by ONLY the primary cluster node or else ALL messages are consumed by ALL cluster nodes
  3. Regardless of the Listener operation configuration, all messages are consumed by ALL cluster nodes
  4. Depending on the Listener operation configuration, either all messages are consumed by ONLY the primary cluster node or else EACH message is consumed by ANY ONE cluster node
Correct answer: D
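The second half of option D is the classic "competing consumers" pattern, which a plain JMS sketch illustrates: when every cluster node opens a consumer on the same queue (i.e. the Listener is NOT restricted to the primary node), the broker delivers each message to exactly one of them. Obtaining the broker-specific ConnectionFactory and the consumer's lifecycle handling are omitted; the queue name is hypothetical.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;

public class ClusterNodeConsumer {

    // One instance of this consumer runs on EACH cluster node.
    public void listen(ConnectionFactory cf, String nodeName) {
        JMSContext ctx = cf.createContext(); // closed on node shutdown (omitted)
        ctx.createConsumer(ctx.createQueue("orders"))
           // Any ONE node receives a given message; the other nodes never see it.
           .setMessageListener(msg -> System.out.println(nodeName + " consumed " + msg));
    }
}
```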
Question 9
Additional nodes are being added to an existing customer-hosted Mule runtime cluster to improve performance. Mule applications deployed to this cluster are invoked by API clients through a load balancer.
What is also required to carry out this change?
  1. API implementations using an object store must be adjusted to recognize the new nodes and persist to them
  2. A new load balancer must be provisioned to allow traffic to the new nodes in a round-robin fashion
  3. External monitoring tools or log aggregators must be configured to recognize the new nodes
  4. New firewall rules must be configured to accommodate communication between API clients and the new nodes
Correct answer: C
Explanation:
Reference: https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
Question 10
Refer to the exhibit. One of the backend systems invoked by an API implementation enforces rate limits on the number of requests a particular client can make. Both the backend system and the API implementation are deployed to several non-production environments in addition to production.
Rate limiting of the backend system applies to all non-production environments. The production environment, however, does NOT have any rate limiting.
What is the most effective approach to conduct performance tests of the API implementation in a staging (non-production) environment?
  1. Use MUnit to simulate standard responses from the backend system
    Then conduct performance tests to identify other bottlenecks in the system
  2. Create a mocking service that replicates the backend system's production performance characteristics
    Then configure the API implementation to use the mocking service and conduct the performance tests
  3. Conduct scaled-down performance tests in the staging environment against the rate-limited backend system
    Then upscale performance results to full production scale
  4. Include logic within the API implementation that bypasses invocations of the backend system in a performance test situation, instead invoking local stubs that replicate typical backend system responses
    Then conduct performance tests using this API implementation
Correct answer: B
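A mocking service in the spirit of option B can be as small as an HTTP endpoint that imposes production-like latency and never rate-limits, so the performance test exercises the API implementation rather than the throttled staging backend. The port, path, latency figure, and payload below are all hypothetical stand-ins for measured production characteristics.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;

public class MockBackend {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8099), 0);
        // Thread pool so concurrent requests are not serialized behind the sleep.
        server.setExecutor(Executors.newFixedThreadPool(64));
        server.createContext("/backend", exchange -> {
            try {
                Thread.sleep(120); // assumed typical production response time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "{\"status\":\"OK\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Mock backend listening on http://localhost:8099/backend");
    }
}
```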