Download AWS Certified Developer - Associate.DVA-C02.Dump4Pass.2025-03-28.209q.tqb

Vendor: Amazon
Exam Code: DVA-C02
Exam Name: AWS Certified Developer - Associate
Date: Mar 28, 2025
File Size: 2 MB

How to open TQB files?

Files with TQB (Taurus Question Bank) extension can be opened by Taurus Exam Studio.

Demo Questions

Question 1
A company is offering APIs as a service over the internet to provide unauthenticated read access to statistical information that is updated daily. The company uses Amazon API Gateway and AWS Lambda to develop the APIs. The service has become popular, and the company wants to enhance the responsiveness of the APIs.
Which action can help the company achieve this goal?
  1. Enable API caching in API Gateway.
  2. Configure API Gateway to use an interface VPC endpoint.
  3. Enable cross-origin resource sharing (CORS) for the APIs.
  4. Configure usage plans and API keys in API Gateway.
Correct answer: A
Explanation:
Enable API caching in API Gateway.
Enabling API caching in API Gateway can help enhance the responsiveness of the APIs by reducing the need to repeatedly process the same requests and responses. When a client makes a request to an API, the API Gateway can cache the response, and subsequent identical requests can be served from the cache, saving processing time and reducing the load on backend resources like AWS Lambda.
This option makes the most sense in the context of improving responsiveness. While the other options (B, C, and D) are important considerations for various aspects of API development and security, they are not directly related to enhancing responsiveness in the same way that caching is.
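Caching is configured per stage. As a hedged sketch, the following shows the patch operations you would pass to the API Gateway update-stage call (for example via `aws apigateway update-stage`) to enable a cache and set a long TTL, which suits data that changes only daily. The cache size and TTL values here are illustrative choices, not values from the scenario.

```python
import json

# Patch operations for the API Gateway UpdateStage API to turn on
# response caching for a stage. "0.5" is the smallest cache size (GB);
# the wildcard path applies the TTL to every method on the stage.
patch_operations = [
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
    # Data is updated daily, so a long TTL (in seconds) is safe here.
    {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
]

print(json.dumps(patch_operations, indent=2))
```

With caching enabled, identical GET requests within the TTL are served from the cache instead of invoking the Lambda backend.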
Question 2
A developer is creating an application for a company. The application needs to read the file doc.txt that is placed in a folder of an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. The company's security team requires the principle of least privilege to be applied to the application's IAM policy.
Which IAM policy statement will meet these security requirements?
  1.  
  2.  
  3.  
  4.  
Correct answer: A
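The answer choices above were images that did not survive extraction. As a hedged illustration of what the correct, least-privilege statement would look like, the policy below allows only s3:GetObject on only the one object; the folder name ("folder") is a placeholder, since the real key prefix was in the missing images.

```python
import json

# Least-privilege IAM policy statement: one action, one object ARN.
# Broader variants (s3:* actions, or a Resource of the whole bucket)
# would violate the least-privilege requirement.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/folder/doc.txt",
        }
    ],
}

print(json.dumps(policy, indent=2))
```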
Question 3
A developer maintains an Amazon API Gateway REST API. Customers use the API through a frontend UI and Amazon authentication.
The developer has a new version of the API that contains new endpoints and backward-incompatible interface changes. The developer needs to provide access to other developers on the team without affecting customers.
Which solution will meet these requirements with the LEAST operational overhead?
  1. Define a development stage on the API Gateway API. Instruct the other developers to point to the development stage.
  2. Define a new API Gateway API that points to the new API application code. Instruct the other developers to point the endpoints to the new API.
  3. Implement a query parameter in the API application code that determines which version to call.
  4. Specify new API Gateway endpoints for the API endpoints that the developer wants to add.
Correct answer: A
Explanation:
Define a development stage on the API Gateway API. Instruct the other developers to point to the development stage.
Creating a separate development stage within the existing API Gateway REST API allows the other developers to work on the new version of the API without affecting the customers who are using the existing frontend UI and Amazon authentication. This approach provides isolation and flexibility for development while keeping the existing production version intact.
Option A minimizes operational overhead by allowing the new version to be developed and tested independently in a controlled environment (the development stage) without impacting the production stage that customers are using. It also avoids the need to create a completely new API or modify the existing one.
The other options (B, C, and D) involve more complex changes, such as creating entirely new APIs, implementing version selection mechanisms in the application code, or specifying new endpoints, which could introduce additional operational complexity and potential disruption to existing customers.
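A new stage is just a new deployment target on the same REST API. As a hedged sketch, the request below is roughly what you would pass to the API Gateway CreateDeployment API (or `aws apigateway create-deployment`) to publish the new version to a "development" stage; the REST API ID and the Region in the invoke URL are placeholders.

```python
# Deploying the new API version to a separate "development" stage.
# Customers keep calling the production stage's invoke URL; each stage
# has its own URL, so the team can test in isolation.
create_deployment_request = {
    "restApiId": "a1b2c3d4e5",   # placeholder REST API ID
    "stageName": "development",
    "description": "v2: new endpoints, backward-incompatible changes",
}

# Placeholder Region; each stage gets its own path on the invoke URL.
dev_url = "https://{restApiId}.execute-api.eu-west-1.amazonaws.com/{stageName}".format(
    **create_deployment_request
)
print(dev_url)
```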
Question 4
A developer is creating an application that will store personal health information (PHI). The PHI needs to be encrypted at all times. An encrypted Amazon RDS for MySQL DB instance is storing the data. The developer wants to increase the performance of the application by caching frequently accessed data while adding the ability to sort or rank the cached datasets. Which solution will meet these requirements?
  1. Create an Amazon ElastiCache for Redis instance. Enable encryption of data in transit and at rest. Store frequently accessed data in the cache.
  2. Create an Amazon ElastiCache for Memcached instance. Enable encryption of data in transit and at rest. Store frequently accessed data in the cache.
  3. Create an Amazon RDS for MySQL read replica. Connect to the read replica by using SSL. Configure the read replica to store frequently accessed data.
  4. Create an Amazon DynamoDB table and a DynamoDB Accelerator (DAX) cluster for the table. Store frequently accessed data in the DynamoDB table.
Correct answer: A
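Redis is the right choice here because its sorted sets give sorting and ranking natively, which Memcached lacks, and ElastiCache for Redis supports encryption in transit and at rest. The pure-Python sketch below is only a conceptual model of the ZADD/ZREVRANGE behavior a real Redis client would use, not real redis-py code.

```python
# Conceptual model of a Redis sorted set: every member carries a
# numeric score, and members can be retrieved in rank order. This is
# the "sort or rank the cached datasets" capability from the question.
def zadd(zset, mapping):
    """Add members with scores, like Redis ZADD."""
    zset.update(mapping)

def zrevrange(zset, start, stop):
    """Return members from highest to lowest score, like ZREVRANGE
    (stop is inclusive, as in Redis)."""
    ranked = sorted(zset.items(), key=lambda kv: kv[1], reverse=True)
    return [member for member, _ in ranked[start:stop + 1]]

scores = {}
zadd(scores, {"record-a": 42, "record-b": 97, "record-c": 15})
top_two = zrevrange(scores, 0, 1)
print(top_two)  # highest-scored members first
```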
Question 5
A company has an Amazon S3 bucket that contains sensitive data. The data must be encrypted in transit and at rest. The company encrypts the data in the S3 bucket by using an AWS Key Management Service (AWS KMS) key. A developer needs to grant several other AWS accounts the permission to use the S3 GetObject operation to retrieve the data from the S3 bucket. How can the developer enforce that all requests to retrieve the data provide encryption in transit?
  1. Define a resource-based policy on the S3 bucket to deny access when a request meets the condition "aws:SecureTransport": "false".
  2. Define a resource-based policy on the S3 bucket to allow access when a request meets the condition "aws:SecureTransport": "false".
  3. Define a role-based policy on the other accounts' roles to deny access when a request meets the condition "aws:SecureTransport": "false".
  4. Define a resource-based policy on the KMS key to deny access when a request meets the condition "aws:SecureTransport": "false".
Correct answer: A
Explanation:
This policy denies the s3:GetObject action for any request that is not made over a secure (encrypted) connection.  
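As a hedged sketch of what that bucket policy statement would look like (the bucket name is a placeholder, since the scenario does not name it): an explicit Deny always overrides any Allow granted to the other accounts, so any request arriving over plain HTTP is rejected.

```python
import json

# S3 bucket policy statement enforcing encryption in transit: the
# aws:SecureTransport condition key is "false" for non-TLS requests,
# and the explicit Deny overrides the cross-account Allow.
statement = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",  # placeholder bucket
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}

print(json.dumps(statement, indent=2))
```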
Question 6
An e-commerce web application that shares session state on-premises is being migrated to AWS. The application must be fault tolerant, natively highly scalable, and any service interruption should not affect the user experience.
What is the best option to store the session state?
  1. Store the session state in Amazon ElastiCache.
  2. Store the session state in Amazon CloudFront.
  3. Store the session state in Amazon S3.
  4. Enable session stickiness using elastic load balancers.
Correct answer: A
Explanation:
Store the session state in Amazon ElastiCache.
Amazon ElastiCache is a managed in-memory data store service provided by AWS. It's designed to enhance the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores. In the context of session state management, ElastiCache offers several benefits that align with the given requirements:
Fault Tolerance: ElastiCache provides fault tolerance by automatically replicating data across multiple Availability Zones, ensuring high availability even in the event of an infrastructure failure.
Natively Highly Scalable: ElastiCache is designed for scalability, allowing you to scale the cache as your application demands grow. It supports clustering, which enables you to distribute data across multiple nodes.
Service Interruption Mitigation: Storing session state in ElastiCache helps mitigate service interruptions because cached data is stored in-memory, which provides faster access than traditional databases. This can lead to a more responsive user experience even if there's a temporary interruption to other services.
Session Stickiness: ElastiCache can be used to manage session state for applications that require session stickiness. Elastic Load Balancers (ELBs) can be configured to route requests to the appropriate cache node based on session information.
Amazon CloudFront (Option B) is a content delivery network (CDN) service that helps distribute content globally with low latency. While it can enhance performance, it's not specifically designed for storing and managing session state.
Amazon S3 (Option C) is a scalable object storage service, but it's not typically used for storing dynamic session state due to the fact that read and write latencies can be higher compared to in-memory data stores like ElastiCache.
Enabling session stickiness using elastic load balancers (Option D) is a valid approach, but it doesn't address the need for a fault-tolerant, highly scalable, and natively responsive session state storage solution, which ElastiCache provides.
Thus, Option A (Store the session state in Amazon ElastiCache) is the best option for the given requirements.
Question 7
A developer is creating an AWS Serverless Application Model (AWS SAM) template. The AWS SAM template contains the definitions of multiple AWS Lambda functions, an Amazon S3 bucket, and an Amazon CloudFront distribution. One of the Lambda functions runs on Lambda@Edge in the CloudFront distribution. The S3 bucket is configured as an origin for the CloudFront distribution.
When the developer deploys the AWS SAM template in the eu-west-1 Region, the creation of the stack fails.
Which of the following could be the reason for this issue?
  1. CloudFront distributions can be created only in the us-east-1 Region.
  2. Lambda@Edge functions can be created only in the us-east-1 Region.
  3. A single AWS SAM template cannot contain multiple Lambda functions.
  4. The CloudFront distribution and the S3 bucket cannot be created in the same Region.
Correct answer: B
Explanation:
Lambda@Edge functions can be created only in the us-east-1 Region.
Lambda@Edge functions, which are designed to run in conjunction with Amazon CloudFront distributions to provide serverless compute capabilities closer to the user, can currently only be created in the us-east-1 Region. This restriction means that when using Lambda@Edge, you must create the Lambda@Edge functions in the us-east-1 Region, even if your other resources (like the CloudFront distribution, S3 bucket, etc.) are in a different region.
Given that the developer is creating a Lambda@Edge function in the CloudFront distribution, and the deployment is failing in the eu-west-1 Region, the most likely reason is that the Lambda@Edge function is not supported in the eu-west-1 Region. Therefore, Option B is the correct explanation for the issue.
The other options (A, C, and D) are not accurate explanations for the problem:
Option A is not true. CloudFront is a global service; a distribution is not created in any particular Region, let alone restricted to us-east-1.
Option C is not true. A single AWS SAM template can contain multiple Lambda functions.
Option D is not true. There is no restriction preventing a CloudFront distribution and an S3 bucket from being created in the same region.
Question 8
A company is planning to use AWS CodeDeploy to deploy an application to Amazon Elastic Container Service (Amazon ECS). During the deployment of a new version of the application, the company initially must expose only 10% of live traffic to the new version of the deployed application. Then, after 15 minutes elapse, the company must route all the remaining live traffic to the new version of the deployed application.
Which CodeDeploy predefined configuration will meet these requirements?
  1. CodeDeployDefault.ECSCanary10Percent15Minutes
  2. CodeDeployDefault.LambdaCanary10Percent5Minutes
  3. CodeDeployDefault.LambdaCanary10Percent15Minutes
  4. CodeDeployDefault.ECSLinear10PercentEvery1Minutes
Correct answer: A
Explanation:
The correct answer is:
CodeDeployDefault.ECSCanary10Percent15Minutes
In AWS CodeDeploy, predefined configurations help you define deployment strategies based on the platform you are deploying to. In this case, you are deploying an application to Amazon ECS and you have specific requirements for gradually exposing traffic to the new version.
"Canary" deployment strategy involves gradually shifting traffic from the old version to the new version.
"ECSCanary" specifies that you are deploying to Amazon ECS.
"10Percent" indicates that initially, only 10% of live traffic will be exposed to the new version.
"15Minutes" means that after 15 minutes have elapsed, all the remaining live traffic will be routed to the new version.
So, the correct predefined configuration that meets these requirements is CodeDeployDefault.ECSCanary10Percent15Minutes (Option A).
Question 9
A developer is implementing an AWS Cloud Development Kit (AWS CDK) serverless application. The developer will provision several AWS Lambda functions and Amazon API Gateway APIs during AWS CloudFormation stack creation. The developer's workstation has the AWS Serverless Application Model (AWS SAM) CLI and the AWS CDK installed locally. How can the developer test a specific Lambda function locally?
  1. Run the sam package and sam deploy commands. Create a Lambda test event from the AWS Management Console. Test the Lambda function.
  2. Run the cdk synth and cdk deploy commands. Create a Lambda test event from the AWS Management Console. Test the Lambda function.
  3. Run the cdk synth and sam local invoke commands with the function construct identifier and the path to the synthesized CloudFormation template.
  4. Run the cdk synth and sam local start-lambda commands with the function construct identifier and the path to the synthesized CloudFormation template.
Correct answer: C
Explanation:
Run the cdk synth and sam local invoke commands with the function construct identifier and the path to the synthesized CloudFormation template.
To test a specific Lambda function locally when using AWS CDK, you can follow these steps:
Use the cdk synth command to generate the CloudFormation template that represents your AWS CDK stack.
Use the sam local invoke command along with the function construct identifier to test the specific Lambda function. The sam local invoke command simulates the Lambda invocation environment locally.
Here's how you would do it:
  cdk synth --output cdk.out
  sam local invoke MyFunctionName -t cdk.out/MyStack.template.json
Replace MyFunctionName with the name of your Lambda function and MyStack with the name of your AWS CDK stack.
This approach leverages the sam local invoke command from AWS SAM to locally test a specific Lambda function defined in your AWS CDK stack.
Option A is incorrect because it mentions using SAM commands (sam package and sam deploy), which are related to AWS SAM, not AWS CDK.
Option B is incorrect because it mentions using CDK commands (cdk synth and cdk deploy), but it doesn't use the appropriate method for locally testing a specific Lambda function.
Option D is incorrect because sam local start-lambda starts a long-running local endpoint that emulates the Lambda invoke API for programmatic callers (such as the AWS CLI or SDKs); sam local invoke is the direct, lower-overhead way to run a single function once with a test event.
Question 10
A company is running a custom application on a set of on-premises Linux servers that are accessed using Amazon API Gateway. AWS X-Ray tracing has been enabled on the API test stage. How can a developer enable X-Ray tracing on the on-premises servers with the LEAST amount of configuration?
  1. Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.
  2. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.
  3. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.
  4. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.
Correct answer: B
Explanation:
Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.
The X-Ray daemon (Option B) is the correct option to enable X-Ray tracing on on-premises servers with the least amount of configuration. The X-Ray daemon simplifies the process of capturing and relaying tracing data from on-premises applications to the X-Ray service. It requires minimal configuration and can be quickly set up to send trace data to X-Ray.
The other options (A, C, and D) involve more complex setup and configuration:
Option A requires installing and integrating the X-Ray SDK into the application code running on the on-premises servers.
Option C and Option D involve setting up AWS Lambda functions to pull, process, and relay trace data to X-Ray, which introduces additional complexity compared to using the X-Ray daemon.
By using the X-Ray daemon, you can achieve X-Ray tracing with minimal configuration and quickly start capturing trace data from the on-premises servers.
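To make the daemon's role concrete: an instrumented application hands trace segments to the daemon over UDP port 2000 on localhost, prefixed with a small JSON header line, and the daemon batches and forwards them to the X-Ray service. The sketch below builds such a datagram; the segment field values are illustrative placeholders, and the actual send is shown only as a comment.

```python
import json

# The X-Ray daemon's UDP protocol: a one-line JSON header, a newline,
# then the segment document.
header = {"format": "json", "version": 1}
segment = {
    "name": "on-prem-app",                               # placeholder
    "id": "70de5b6f19ff9a0a",                            # placeholder
    "trace_id": "1-581cf771-a006649127e371903a2de979",   # placeholder
    "start_time": 1478293361.271,
    "end_time": 1478293361.449,
}
datagram = json.dumps(header) + "\n" + json.dumps(segment)

# A real sender would relay this to the local daemon:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(datagram.encode("utf-8"), ("127.0.0.1", 2000))
print(datagram.splitlines()[0])
```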