
Vendor: Amazon
Exam Code: SOA-C03
Exam Name: AWS Certified CloudOps Engineer-Associate
Date: Apr 02, 2026
File Size: 141 KB

How to open TQB files?

Files with TQB (Taurus Question Bank) extension can be opened by Taurus Exam Studio.

Demo Questions

Question 1
A SysOps administrator needs to give an existing AWS Lambda function access to an existing Amazon S3 bucket. Traffic between the Lambda function and the S3 bucket must not use public IP addresses. The Lambda function has been configured to run in a VPC.
Which solution will meet these requirements?
  A. Configure VPC sharing between the Lambda VPC and the S3 bucket.
  B. Attach a transit gateway to the Lambda VPC to allow the Lambda function to connect to the S3 bucket.
  C. Create a NAT gateway. Associate the NAT gateway with the subnet where the Lambda function is configured to run.
  D. Create an S3 interface endpoint. Change the Lambda function to use the new S3 DNS name.
Correct answer: D
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
The requirement is that traffic from a VPC-connected Lambda to Amazon S3 must not use public IP addresses. The AWS-native way to keep traffic private is to use VPC endpoints, which provide private connectivity to supported AWS services without traversing the public internet. Among the options, creating an S3 VPC endpoint is the only approach that satisfies "no public IP addresses" while allowing access to the bucket. Option D is the best match because it explicitly configures an S3 endpoint and directs the Lambda function to use the endpoint-specific DNS name for private routing.
Option C (NAT gateway) is incorrect for this requirement because NAT provides outbound internet access from private subnets and typically uses public IP addressing at the NAT gateway. That violates the intent to avoid public IP paths for S3 traffic. Option A is not applicable because S3 buckets are not placed "inside" a VPC and do not participate in VPC sharing in a way that provides private network paths. Option B (transit gateway) connects VPCs and on-prem networks, but it does not create private service connectivity to S3 by itself; you would still need the correct service endpoint solution for S3 access.
Using a VPC endpoint also aligns with CloudOps best practices: it reduces exposure, simplifies network egress controls, and supports least-privilege access via endpoint policies (where applicable) alongside IAM policies.
References:
* Amazon VPC User Guide - VPC endpoints for AWS services and private connectivity
* AWS Lambda Developer Guide - Lambda networking in a VPC
* Amazon S3 User Guide - Accessing S3 privately using VPC endpoints
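Option D relies on the endpoint-specific DNS name that an S3 interface endpoint exposes. As a sketch (the endpoint ID and Region below are placeholders; the bucket-style DNS format shown is the one documented for S3 interface endpoints), a small helper can build the URL that the Lambda function's S3 client would be pointed at:

```python
def s3_interface_endpoint_url(vpce_id: str, region: str) -> str:
    """Build the bucket-style URL for an S3 interface endpoint.

    S3 interface endpoints expose DNS names of the form
    bucket.<endpoint-id>.s3.<region>.vpce.amazonaws.com.
    """
    return f"https://bucket.{vpce_id}.s3.{region}.vpce.amazonaws.com"

# Placeholder endpoint ID; use the ID returned when the endpoint is created.
url = s3_interface_endpoint_url("vpce-0123456789abcdef0", "us-east-1")

# The Lambda function would then direct its S3 client at this URL, e.g.:
#   s3 = boto3.client("s3", endpoint_url=url)
print(url)
```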
Question 2
A CloudOps engineer created a VPC with a private subnet, a security group allowing all outbound traffic, and an endpoint for EC2 Instance Connect in the private subnet. The EC2 instance was launched without an SSH key pair, using the same subnet and security group. However, the engineer cannot connect via EC2 Instance Connect endpoint.
How can the CloudOps engineer connect to the instance?
  A. Create an inbound rule in the security group to allow HTTPS traffic on port 443 from the private subnet.
  B. Create an inbound rule in the security group to allow SSH traffic on port 22 from the private subnet.
  C. Create an IAM instance profile that allows AWS Systems Manager Session Manager to access the EC2 instance. Associate the instance profile with the instance.
  D. Recreate the EC2 instance. Associate an SSH key pair with the instance.
Correct answer: C
Explanation:
According to the AWS Cloud Operations and EC2 Connectivity documentation, EC2 Instance Connect Endpoint allows access to instances without internet exposure or open SSH ports. However, for successful connectivity, the EC2 instance must have Systems Manager permissions through an IAM instance profile.
If no IAM instance profile is attached, the instance cannot establish a control channel with the Systems Manager service, and EC2 Instance Connect cannot authenticate the session.
Opening port 22 (Option B) is unnecessary and contradicts the private subnet design. HTTPS rules (Option A) are irrelevant because EC2 Instance Connect communicates through AWS APIs, not direct HTTPS connections. Recreating the instance with a key pair (Option D) bypasses the intended keyless connection mechanism.
Therefore, Option C - attaching an IAM instance profile with Systems Manager permissions - enables secure, private access through EC2 Instance Connect Endpoint.
Reference: AWS Cloud Operations & EC2 Connectivity Guide - Enabling EC2 Instance Connect Endpoint Access via Systems Manager Permissions
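The role-based setup that option C describes can be sketched as plain data structures (the names are illustrative; in practice the role and instance profile are created through the IAM API or infrastructure as code and then associated with the instance):

```python
# Trust policy that lets EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# AWS-managed policy that grants the agent the Systems Manager
# permissions Session Manager needs.
managed_policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
```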
Question 3
A company has an application running on EC2 that stores data in an Amazon RDS for MySQL Single-AZ DB instance. The application requires both read and write operations, and the company needs failover capability with minimal downtime.
Which solution will meet these requirements?
  A. Modify the DB instance to be a Multi-AZ DB instance deployment.
  B. Add a read replica in the same Availability Zone where the DB instance is deployed.
  C. Add the DB instance to an Auto Scaling group that has a minimum capacity of 2 and a desired capacity of 2.
  D. Use RDS Proxy to configure a proxy in front of the DB instance.
Correct answer: A
Explanation:
According to the AWS Cloud Operations and Database Reliability documentation, Amazon RDS Multi-AZ deployments provide high availability and automatic failover by maintaining a synchronous standby replica in a different Availability Zone.
In the event of instance failure, planned maintenance, or Availability Zone outage, Amazon RDS automatically promotes the standby to primary with minimal downtime (typically less than 60 seconds). The failover is transparent to applications because the DB endpoint remains the same.
By contrast, read replicas (Option B) are asynchronous and do not provide automated failover. Auto Scaling (Option C) applies to EC2, not RDS. RDS Proxy (Option D) improves connection management but does not add redundancy.
Thus, Option A - converting the RDS instance into a Multi-AZ deployment - delivers the required high availability and business continuity with minimal operational effort.
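Converting to Multi-AZ is a single modification of the existing DB instance. A boto3 sketch of the call's parameters (the instance identifier is a placeholder):

```python
modify_params = {
    "DBInstanceIdentifier": "app-mysql-db",  # placeholder identifier
    "MultiAZ": True,
    # Defer the change to the next maintenance window; set True to
    # convert immediately instead.
    "ApplyImmediately": False,
}
# rds = boto3.client("rds")
# rds.modify_db_instance(**modify_params)
```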
Question 4
A company hosts a static website in Amazon S3 behind an Amazon CloudFront distribution. When new versions are deployed, users sometimes do not see updated content immediately. The company wants users to receive the latest content as soon as it is deployed.
Which solution will meet this requirement?
  A. Configure the CloudFront distribution to add a custom Cache-Control header to requests for content from the S3 bucket.
  B. Modify the distribution settings to specify the protocol as HTTPS only.
  C. Attach the CachingOptimized managed cache policy to the distribution.
  D. Create a CloudFront invalidation.
Correct answer: D
Explanation:
The AWS Cloud Operations and Content Delivery documentation explains that Amazon CloudFront caches objects in edge locations for a defined time based on TTL settings or origin headers. When new content is deployed to the S3 origin, previously cached versions remain in edge caches until they expire.
To immediately serve the new version, CloudOps engineers must initiate a CloudFront invalidation, which removes cached objects from all edge locations. This forces CloudFront to fetch the latest version from the origin (S3).
Invalidations can target individual objects (e.g., /index.html) or wildcard paths (e.g., /*) and are the AWS-recommended way to refresh stale cached content after static site updates.
Changing headers (Option A), enforcing HTTPS (Option B), or applying caching policies (Option C) do not directly refresh outdated cache content.
Thus, Option D - issuing a CloudFront invalidation - ensures users receive the latest website content immediately after deployment.
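An invalidation is a single API call. A boto3 sketch of the request body (the distribution ID in the comment is a placeholder; CallerReference must be unique per request):

```python
import time

invalidation_batch = {
    "Paths": {
        "Quantity": 1,
        "Items": ["/*"],  # invalidate all cached paths after a deploy
    },
    # A unique token so retries of the same request are idempotent.
    "CallerReference": str(int(time.time())),
}
# cloudfront = boto3.client("cloudfront")
# cloudfront.create_invalidation(DistributionId="E2EXAMPLE123",
#                                InvalidationBatch=invalidation_batch)
```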
Question 5
A multinational company uses an organization in AWS Organizations to manage over 200 member accounts across multiple AWS Regions. The company must ensure that all AWS resources meet specific security requirements.
* The company must not deploy any EC2 instances in the ap-southeast-2 Region.
* The company must completely block root user actions in all member accounts.
* The company must prevent any user from deleting AWS CloudTrail logs, including administrators.
* The company requires a centrally managed solution that the company can automatically apply to all existing and future accounts.

Which solution will meet these requirements?
  A. Create AWS Config rules with remediation actions in each account to detect policy violations. Implement IAM permissions boundaries for the account root users.
  B. Enable AWS Security Hub across the organization. Create custom security standards to enforce the security requirements. Use AWS CloudFormation StackSets to deploy the standards to all the accounts in the organization. Set up Security Hub automated remediation actions.
  C. Use AWS Control Tower for account governance. Configure Region deny controls. Use Service Control Policies (SCPs) to restrict root user access.
  D. Configure AWS Firewall Manager with security policies to meet the security requirements. Use an AWS Config aggregator with organization-wide conformance packs to detect security policy violations.
Correct answer: C
Explanation:
AWS CloudOps governance best practices emphasize centralized account management and preventive guardrails. AWS Control Tower integrates directly with AWS Organizations and provides "Region deny controls" and "Service Control Policies (SCPs)" that apply automatically to all existing and newly created member accounts. SCPs are organization-wide guardrails that define the maximum permissions for accounts. They can explicitly deny actions such as launching EC2 instances in a specific Region, or block root user access.
To prevent CloudTrail log deletion, SCPs can also include denies on cloudtrail:DeleteTrail and s3:DeleteObject actions targeting the CloudTrail log S3 bucket. These SCPs ensure that no user, including administrators, can violate the compliance requirements.
AWS documentation under the Security and Compliance domain for CloudOps states:
"Use AWS Control Tower to establish a secure, compliant, multi-account environment with preventive guardrails through service control policies and detective controls through AWS Config." This approach meets all stated needs: centralized enforcement, automatic propagation to new accounts, region-based restrictions, and immutable audit logs. Options A, B, and D either detect violations reactively or lack complete enforcement and automation across future accounts.
References (AWS CloudOps Documents / Study Guide):
* AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance
* AWS Control Tower - Preventive and Detective Guardrails
* AWS Organizations - Service Control Policies (SCPs)
* AWS Well-Architected Framework - Security Pillar (Governance and Centralized Controls)
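The three preventive controls can be combined into a single SCP. A sketch as a policy document (statement IDs are illustrative; a real policy would typically also exempt designated break-glass roles):

```python
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Block EC2 launches in ap-southeast-2.
            "Sid": "DenyEc2InApSoutheast2",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:RequestedRegion": "ap-southeast-2"}
            },
        },
        {   # Block all actions taken by member-account root users.
            "Sid": "DenyRootUser",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}
            },
        },
        {   # Prevent anyone, including admins, from removing trails.
            "Sid": "ProtectCloudTrail",
            "Effect": "Deny",
            "Action": ["cloudtrail:DeleteTrail", "cloudtrail:StopLogging"],
            "Resource": "*",
        },
    ],
}
```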
Question 6
An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A CloudOps engineer must ensure that the application can read, write, and delete messages from the SQS queues.
Which solution will meet these requirements in the MOST secure manner?
  A. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user's credentials in the application's configuration.
  B. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user's access key and secret access key as environment variables on the EC2 instance.
  C. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
  D. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.
Correct answer: D
Explanation:
The most secure pattern is to use an IAM role for Amazon EC2 with the minimum required permissions. AWS guidance states: "Use roles for applications that run on Amazon EC2 instances" and "grant least privilege by allowing only the actions required to perform a task." By attaching a role to the instance, short-lived credentials are automatically provided through the instance metadata service; this removes the need to create long-term access keys or embed secrets. Granting only sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage against the specific SQS queues enforces least privilege and aligns with CloudOps security controls. Options A and B rely on IAM user access keys, which contravene best practices for workloads on EC2 and increase credential-management risk. Option C uses a role but grants sqs:*, violating least-privilege principles. Therefore, Option D meets the security requirement with scoped, temporary credentials and precise permissions.
References (AWS CloudOps Documents / Study Guide):
* AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Security & Compliance
* IAM Best Practices - "Use roles instead of long-term access keys," "Grant least privilege"
* IAM Roles for Amazon EC2 - Temporary credentials for applications on EC2
* Amazon SQS - Identity and access management for Amazon SQS
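The least-privilege policy from option D, sketched as a policy document (the queue ARN is a placeholder for the application's actual queue):

```python
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "sqs:SendMessage",
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
        ],
        # Scope to the specific queue rather than "Resource": "*".
        "Resource": "arn:aws:sqs:us-east-1:123456789012:app-queue",
    }],
}
```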
Question 7
A company runs a website on Amazon EC2 instances. Users can upload images to an Amazon S3 bucket and publish the images to the website. The company wants to deploy a serverless image-processing application that uses an AWS Lambda function to resize the uploaded images.
The company's development team has created the Lambda function. A CloudOps engineer must implement a solution to invoke the Lambda function when users upload new images to the S3 bucket.
Which solution will meet this requirement?
  A. Configure an Amazon Simple Notification Service (Amazon SNS) topic to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  B. Configure an Amazon CloudWatch alarm to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  C. Configure S3 Event Notifications to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  D. Configure an Amazon Simple Queue Service (Amazon SQS) queue to invoke the Lambda function when a user uploads a new image to the S3 bucket.
Correct answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
Use Amazon S3 Event Notifications with AWS Lambda to trigger image processing on object creation. S3 natively supports invoking Lambda for events such as s3:ObjectCreated:*, providing a serverless, low-latency pipeline without managing additional services. AWS operational guidance states that "Amazon S3 can directly invoke a Lambda function in response to object-created events," allowing you to pass event metadata (bucket/key) to the function for resizing and writing results back to S3. This approach minimizes operational overhead, scales automatically with upload volume, and integrates with standard retry semantics. SNS or SQS can be added for fan-out or buffering patterns, but they are not required when the requirement is simply "invoke the Lambda function on upload." CloudWatch alarms do not detect individual S3 object uploads and cannot directly satisfy per-object triggers. Therefore, configuring S3 → Lambda event notifications meets the requirement most directly and aligns with CloudOps best practices for event-driven, serverless automation.
References (AWS CloudOps Documents / Study Guide):
* Using AWS Lambda with Amazon S3 (Lambda Developer Guide)
* Amazon S3 Event Notifications (S3 User Guide)
* AWS Well-Architected - Serverless Applications (Operational Excellence)
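The S3-to-Lambda wiring is a bucket notification configuration. A sketch (the function ARN, bucket name, and prefix are placeholders; S3 must also be granted lambda:InvokeFunction permission on the function):

```python
notification_config = {
    "LambdaFunctionConfigurations": [{
        "LambdaFunctionArn": (
            "arn:aws:lambda:us-east-1:123456789012:function:resize-image"
        ),
        "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
        "Filter": {  # optionally restrict to the upload prefix
            "Key": {"FilterRules": [{"Name": "prefix", "Value": "uploads/"}]}
        },
    }]
}
# s3 = boto3.client("s3")
# s3.put_bucket_notification_configuration(
#     Bucket="my-image-bucket",
#     NotificationConfiguration=notification_config)
```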
Question 8
An ecommerce company uses Amazon ElastiCache (Redis OSS) for caching product queries. The CloudOps engineer observes a large number of cache evictions in Amazon CloudWatch metrics and needs to reduce evictions while retaining popular data in cache.
Which solution meets these requirements with the least operational overhead?
  A. Add another node to the ElastiCache cluster.
  B. Increase the ElastiCache TTL value.
  C. Decrease the ElastiCache TTL value.
  D. Migrate to a new ElastiCache cluster with larger nodes.
Correct answer: D
Explanation:
According to the AWS Cloud Operations and ElastiCache documentation, cache evictions occur when the cache runs out of memory and must remove items to make space for new data.
To reduce evictions and retain frequently accessed items, AWS recommends increasing the total available memory - either by scaling up to larger node types or scaling out by adding shards/nodes. Migrating to a cluster with larger nodes is the simplest and most efficient solution because it immediately expands capacity without architectural changes.
Adjusting TTL (Options B and C) controls expiration timing, not memory allocation. Adding a single node (Option A) may help, but redistributing data requires resharding, introducing more complexity.
Thus, Option D provides the lowest operational overhead and ensures high cache hit rates by increasing total cache memory.
Reference: AWS Cloud Operations & Performance Optimization Guide - Reducing Evictions and Scaling Amazon ElastiCache Clusters
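Evictions are reported in the AWS/ElastiCache CloudWatch namespace, so capacity pressure can be alarmed on before it grows. A sketch of the alarm parameters (the cluster ID and threshold are illustrative):

```python
alarm_params = {
    "AlarmName": "elasticache-evictions-high",
    "Namespace": "AWS/ElastiCache",
    "MetricName": "Evictions",
    "Dimensions": [{"Name": "CacheClusterId", "Value": "product-cache-001"}],
    "Statistic": "Sum",
    "Period": 300,            # 5-minute buckets
    "EvaluationPeriods": 3,   # sustained for 15 minutes
    "Threshold": 1000,
    "ComparisonOperator": "GreaterThanThreshold",
}
# cloudwatch = boto3.client("cloudwatch")
# cloudwatch.put_metric_alarm(**alarm_params)
```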
Question 9
A company's architecture team must receive immediate email notifications whenever new Amazon EC2 instances are launched in the company's main AWS production account.
What should a CloudOps engineer do to meet this requirement?
  A. Create a user data script that sends an email message through a smart host connector. Include the architecture team's email address in the user data script as the recipient. Ensure that all new EC2 instances include the user data script as part of a standardized build process.
  B. Create an Amazon Simple Notification Service (Amazon SNS) topic and a subscription that uses the email protocol. Enter the architecture team's email address as the subscriber. Create an Amazon EventBridge rule that reacts when EC2 instances are launched. Specify the SNS topic as the rule's target.
  C. Create an Amazon Simple Queue Service (Amazon SQS) queue and a subscription that uses the email protocol. Enter the architecture team's email address as the subscriber. Create an Amazon EventBridge rule that reacts when EC2 instances are launched. Specify the SQS queue as the rule's target.
  D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Configure AWS Systems Manager to publish EC2 events to the SNS topic. Create an AWS Lambda function to poll the SNS topic. Configure the Lambda function to send any messages to the architecture team's email address.
Correct answer: B
Explanation:
As per the AWS Cloud Operations and Event Monitoring documentation, the most efficient method for event-driven notification is to use Amazon EventBridge to detect specific EC2 API events and trigger a Simple Notification Service (SNS) alert.
EventBridge continuously monitors AWS service events, including RunInstances, which signals the creation of new EC2 instances. When such an event occurs, EventBridge sends it to an SNS topic, which then immediately emails subscribed recipients - in this case, the architecture team.
This combination provides real-time, serverless notifications with minimal management. SQS (Option C) is designed for queue-based processing, not direct user alerts. User data scripts (Option A) and custom polling with Lambda (Option D) introduce unnecessary operational complexity and latency.
Hence, Option B is the correct and AWS-recommended CloudOps design for immediate launch notifications.
Reference: AWS Cloud Operations & Monitoring Guide - Section: EventBridge and SNS Integration for EC2 Event Notifications
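The EventBridge rule in option B matches EC2 state-change events. A sketch of the event pattern (matching the "running" state covers each new launch; the rule name and topic ARN in the comments are placeholders):

```python
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}
# events = boto3.client("events")
# events.put_rule(Name="notify-on-ec2-launch",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="notify-on-ec2-launch",
#                    Targets=[{"Id": "sns", "Arn": topic_arn}])
```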
Question 10
A company uses AWS Systems Manager Session Manager to manage EC2 instances in the eu-west-1 Region. The company wants private connectivity using VPC endpoints.
Which VPC endpoints are required to meet these requirements? (Select THREE.)
  A. com.amazonaws.eu-west-1.ssm
  B. com.amazonaws.eu-west-1.ec2messages
  C. com.amazonaws.eu-west-1.ec2
  D. com.amazonaws.eu-west-1.ssmmessages
  E. com.amazonaws.eu-west-1.s3
  F. com.amazonaws.eu-west-1.states
Correct answer: A, B, D
Explanation:
The AWS Cloud Operations and Systems Manager documentation states that to use Session Manager privately within a VPC (without internet access), three interface VPC endpoints must be configured:
* com.amazonaws.<region>.ssm - enables Systems Manager core API communication.
* com.amazonaws.<region>.ec2messages - allows the agent to send and receive messages between EC2 and Systems Manager.
* com.amazonaws.<region>.ssmmessages - enables real-time interactive communication for Session Manager connections.
These endpoints ensure secure, private connectivity over the AWS network, eliminating the need for public internet routing.
Endpoints for S3, Step Functions, or EC2 API (Options C, E, F) are not required for Session Manager functionality.
Thus, the correct combination is A, B, and D, aligning with AWS CloudOps best practices for secure, private Systems Manager access.
Reference: AWS Cloud Operations & Systems Manager Guide - Configuring VPC Endpoints for Session Manager Private Connectivity
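The three required endpoint service names follow a predictable pattern, so they can be generated for any Region (a small helper sketch):

```python
def session_manager_endpoints(region: str) -> list[str]:
    """Interface endpoint service names Session Manager requires."""
    return [f"com.amazonaws.{region}.{svc}"
            for svc in ("ssm", "ec2messages", "ssmmessages")]

endpoints = session_manager_endpoints("eu-west-1")
```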