Download AWS Certified SysOps Administrator - Associate.SOA-C02.VCEplus.2025-03-19.201q.tqb

Vendor: Amazon
Exam Code: SOA-C02
Exam Name: AWS Certified SysOps Administrator - Associate
Date: Mar 19, 2025
File Size: 2 MB

How to open TQB files?

Files with TQB (Taurus Question Bank) extension can be opened by Taurus Exam Studio.

Demo Questions

Question 1
A SysOps administrator needs to automate the invocation of an AWS Lambda function. The Lambda function must run at the end of each day to generate a report on data that is stored in an Amazon S3 bucket. What is the MOST operationally efficient solution that meets these requirements?
  1. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that has an event pattern for Amazon S3 and the Lambda function as a target.
  2. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that has a schedule and the Lambda function as a target.
  3. Create an S3 event notification to invoke the Lambda function whenever objects change in the S3 bucket.
  4. Deploy an Amazon EC2 instance with a cron job to invoke the Lambda function.
Correct answer: B
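A scheduled Amazon EventBridge (Amazon CloudWatch Events) rule is the most operationally efficient way to invoke the Lambda function once per day. As a rough sketch (the rule name, function name, account ID, Region, and schedule below are placeholder values), the rule can be created with the AWS CLI:
# Create a rule that fires once per day at 23:00 UTC.
aws events put-rule --name daily-report-rule --schedule-expression "cron(0 23 * * ? *)"
# Add the Lambda function as the rule's target.
aws events put-targets --rule daily-report-rule --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:111122223333:function:daily-report"
# Grant EventBridge permission to invoke the function.
aws lambda add-permission --function-name daily-report --statement-id eventbridge-invoke --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:111122223333:rule/daily-report-rule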
Question 2
A company deployed a new web application on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group. Users report that they are frequently being prompted to log in.
What should a SysOps administrator do to resolve this issue?
  1. Configure an Amazon CloudFront distribution with the ALB as the origin. 
  2. Enable sticky sessions (session affinity) for the target group of EC2 instances.
  3. Redeploy the EC2 instances in a spread placement group.
  4. Replace the ALB with a Network Load Balancer.
Correct answer: B
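Sticky sessions keep each user bound to the same target, so session state held on an individual EC2 instance is not lost between requests. As an illustrative sketch (the target group ARN and cookie duration are placeholders), stickiness can be enabled on the target group with the AWS CLI:
# Enable load-balancer-generated cookie stickiness on the target group.
aws elbv2 modify-target-group-attributes --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/73e2d6bc24d8a067 --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=86400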
Question 3
A SysOps administrator manages the caching of an Amazon CloudFront distribution that serves pages of a website. The SysOps administrator needs to configure the distribution so that the TTL of individual pages can vary. The TTL of the individual pages must remain within the maximum TTL and the minimum TTL that are set for the distribution. Which solution will meet these requirements?
  1. Create an AWS Lambda function that calls the CreateInvalidation API operation when a change in cache time is necessary.
  2. Add a Cache-Control: max-age directive to the object at the origin when content is being returned to CloudFront.
  3. Add a no-cache header through a Lambda@Edge function in response to the Viewer response.
  4. Add an Expires header through a CloudFront function in response to the Viewer response.
Correct answer: B
Explanation:
To allow the TTL (Time to Live) of individual pages to vary while adhering to the maximum and minimum TTL settings configured for the Amazon CloudFront distribution, setting cache behaviors directly at the origin is most effective:
Use Cache-Control Headers: By configuring the Cache-Control: max-age directive in the HTTP headers of the objects served from the origin, you can specify how long an object should be cached by CloudFront before it is considered stale.
Integration with CloudFront: When CloudFront receives a request for an object, it checks the cache-control header to determine the TTL for that specific object. This allows individual objects to have their own TTL settings, as long as they are within the globally set minimum and maximum TTL values for the distribution.
Operational Efficiency: This method does not require any additional AWS services or modifications to the distribution settings. It leverages HTTP standard practices, ensuring compatibility and ease of management.
Implementing the TTL management through cache-control headers at the origin provides precise control over caching behavior, aligning with varying content freshness requirements without complex configurations.
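As an illustrative sketch, if the origin content is served from an Amazon S3 bucket (the bucket and object names here are placeholders), the Cache-Control header can be set per object when it is uploaded, giving each page its own TTL within the distribution's minimum and maximum:
# A frequently changing page gets a short max-age.
aws s3 cp page.html s3://example-origin-bucket/page.html --cache-control "max-age=300"
# A mostly static page gets a longer max-age.
aws s3 cp about.html s3://example-origin-bucket/about.html --cache-control "max-age=86400"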
Question 4
A company is running Amazon EC2 On-Demand Instances in an Auto Scaling group. The instances process messages from an Amazon Simple Queue Service (Amazon SQS) queue. The Auto Scaling group is set to scale based on the number of messages in the queue. Messages can take up to 12 hours to process completely. A SysOps administrator must ensure that instances are not interrupted during message processing.
What should the SysOps administrator do to meet these requirements?
  1. Enable instance scale-in protection for the specific instance in the Auto Scaling group at the start of message processing by calling the Amazon EC2 Auto Scaling API from the processing script. Disable instance scale-in protection after message processing is complete by calling the Amazon EC2 Auto Scaling API from the processing script.
  2. Set the Auto Scaling group's termination policy to OldestInstance.
  3. Set the Auto Scaling group's termination policy to OldestLaunchConfiguration.
  4. Suspend the Launch and Terminate scaling processes for the specific instance in the Auto Scaling group at the start of message processing by calling the Amazon EC2 Auto Scaling API from the processing script. Resume the scaling processes after message processing is complete by calling the Amazon EC2 Auto Scaling API from the processing script.
Correct answer: A
Explanation:
# Enable instance scale-in protection for specific instance.
aws autoscaling set-instance-protection --instance-ids i-5f2e8a0d --auto-scaling-group-name my-asg --protected-from-scale-in
# Disable instance scale-in protection for the specified instance.
aws autoscaling set-instance-protection --instance-ids i-5f2e8a0d --auto-scaling-group-name my-asg --no-protected-from-scale-in
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-instance-protection.html
To ensure that EC2 instances in an Auto Scaling group are not interrupted during message processing, the most effective method is to implement scale-in protection for the instances while they are actively processing messages. This can be done programmatically by modifying the Auto Scaling group's settings using the Amazon EC2 Auto Scaling API.
Starting Message Processing: When an instance begins processing a message, your application should make an API call to enable scale-in protection. This is done using the SetInstanceProtection action, setting the ProtectedFromScaleIn parameter to true for that specific instance.
Completing Message Processing: Once the message has been processed, another API call should be made to disable scale-in protection. This is done by calling the SetInstanceProtection action again, but this time setting the ProtectedFromScaleIn parameter to false.
This method ensures that while messages are being processed, the instances are not terminated by the Auto Scaling group regardless of any scale-in activities that might be triggered by other parameters like CPU utilization or a decrease in the number of messages in the queue.
AWS Documentation
Reference: 
You can refer to the AWS documentation on managing instance scale-in protection in Auto Scaling groups for more details: Instance Scale-In Protection.
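A minimal wrapper around the processing step, assuming IMDSv1 access to instance metadata and placeholder names for the Auto Scaling group and the processing command, could look like this:
#!/bin/bash
# Look up this instance's ID from instance metadata.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Protect the instance before processing begins.
aws autoscaling set-instance-protection --instance-ids "$INSTANCE_ID" --auto-scaling-group-name my-asg --protected-from-scale-in
# Placeholder for the actual message-processing step.
./process_message.sh
# Remove protection once processing is complete.
aws autoscaling set-instance-protection --instance-ids "$INSTANCE_ID" --auto-scaling-group-name my-asg --no-protected-from-scale-in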
Question 5
A company is managing a website with a global user base hosted on Amazon EC2 with an Application Load Balancer (ALB). To reduce the load on the web servers, a SysOps administrator configures an Amazon CloudFront distribution with the ALB as the origin. After a week of monitoring the solution, the administrator notices that requests are still being served by the ALB and there is no change in the web server load.
What are possible causes for this problem? (Choose two.)
  1. CloudFront does not have the ALB configured as the origin access identity.
  2. The DNS is still pointing to the ALB instead of the CloudFront distribution.
  3. The ALB security group is not permitting inbound traffic from CloudFront.
  4. The default, minimum, and maximum Time to Live (TTL) are set to 0 seconds on the CloudFront distribution.
  5. The target groups associated with the ALB are configured for sticky sessions.      
Correct answer: BD
Explanation:
To effectively use Amazon CloudFront as a content delivery network for an application using an Application Load Balancer as the origin, several configuration steps need to be correctly implemented:
DNS Configuration: Ensure that the DNS records for the domain serving the content point to the CloudFront distribution's DNS name rather than directly to the ALB. If the DNS still points to the ALB, users' requests will bypass CloudFront, leading directly to the ALB and maintaining the existing load on your web servers.
TTL Settings: The Time to Live (TTL) settings in the CloudFront distribution dictate how long the content is cached in CloudFront edge locations before CloudFront fetches a fresh copy from the origin. If the TTL values are set to 0, it means that CloudFront does not cache the content at all, resulting in each user request being forwarded to the ALB, which does not reduce the load.
AWS Documentation
Reference: 
For more information on DNS and TTL configurations for CloudFront, you can refer to the following AWS documentation:
  • Configuring DNS
  • CloudFront TTL Settings.
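Two quick checks, sketched here with a placeholder domain name and distribution ID, can confirm both causes:
# The site's DNS record should resolve to the CloudFront domain name, not the ALB's DNS name.
dig +short www.example.com CNAME
# If the cache behavior uses legacy TTL settings, confirm the TTL values are not 0.
aws cloudfront get-distribution-config --id E1ABCDEFGHIJKL --query "DistributionConfig.DefaultCacheBehavior.{Min:MinTTL,Default:DefaultTTL,Max:MaxTTL}"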
Question 6
A company's SysOps administrator manages a fleet of hundreds of Amazon EC2 instances that run Windows-based workloads and Linux-based workloads. Each EC2 instance has a tag that identifies its operating system. All the EC2 instances run AWS Systems Manager Session Manager.
A zero-day vulnerability is reported, and no patches are available. The company's security team provides code for all the relevant operating systems to reduce the risk of the vulnerability. The SysOps administrator needs to implement the code on the EC2 instances and must provide a report that shows that the code has successfully run on all the instances.
What should the SysOps administrator do to meet these requirements as quickly as possible?
  1. Use Systems Manager Run Command. Choose either the AWS-RunShellScript document or the AWS-RunPowerShellScript document. Configure Run Command with the code from the security team. Specify the operating system tag in the Targets parameter. Run the command. Provide the command history's evidence to the security team.
  2. Create an AWS Lambda function that connects to the EC2 instances through Session Manager. Configure the Lambda function to identify the operating system, run the code from the security team, and return the results to an Amazon RDS DB instance. Query the DB instance for the results. Provide the results as evidence to the security team.
  3. Log on to each EC2 instance. Run the code from the security team on each EC2 instance. Copy and paste the results of each run into a single spreadsheet. Provide the spreadsheet as evidence to the security team.
  4. Update the launch templates of the EC2 instances to include the code from the security team in the user data. Relaunch the EC2 instances by using the updated launch templates. Retrieve the EC2 instance logs of each instance. Provide the EC2 instance logs as evidence to the security team.
Correct answer: A
Explanation:
AWS Systems Manager Run Command provides an efficient method to execute administrative tasks on EC2 instances. This solution will minimize the time and complexity involved:
Select Document: Choose AWS-RunShellScript for Linux-based instances or AWS-RunPowerShellScript for Windows-based instances.
Configure Command: Enter the mitigation script provided by the security team into the command document.
Target Instances: Use the tagging system to target only the instances that match the specific OS as identified by their tags.
Execute Command: Run the command across the targeted instances.
Verification and Reporting: The command history in Systems Manager will serve as evidence of execution and success, which can be reported back to the security team.
AWS Documentation
Reference: More about Run Command can be found here: AWS Systems Manager Run Command.
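An illustrative Run Command invocation, assuming the operating system tag key is OS and using a placeholder path for the mitigation script:
# Run the security team's script on all Linux-tagged instances.
aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=tag:OS,Values=Linux" --parameters 'commands=["/tmp/mitigation.sh"]' --comment "Zero-day mitigation"
# List per-instance invocation results as evidence of successful execution.
aws ssm list-command-invocations --command-id <command-id> --details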
Question 7
A company wants to monitor the number of Amazon EC2 instances that it is running. The company also wants to automate a service quota increase when the number of instances reaches a specific threshold. Which solution meets these requirements?
  1. Create an Amazon CloudWatch alarm to monitor Service Quotas. Configure the alarm to invoke an AWS Lambda function to request a quota increase when the alarm reaches the threshold.
  2. Create an AWS Config rule to monitor Service Quotas. Call an AWS Lambda function to remediate the action and increase the quota.
  3. Create an Amazon CloudWatch alarm to monitor the AWS Health Dashboard. Configure the alarm to invoke an AWS Lambda function to request a quota increase when the alarm reaches the threshold.
  4. Create an Amazon CloudWatch alarm to monitor AWS Trusted Advisor service quotas. Configure the alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to increase the quota.  
Correct answer: A
Explanation:
This approach uses CloudWatch for monitoring and Lambda for automation, allowing for quick and efficient quota management:
  • Setup CloudWatch Alarm: Monitor the usage of EC2 instances against the service quota using CloudWatch.
  • Lambda Function: Write a Lambda function that triggers a quota increase request via the Service Quotas API when the threshold is met.
  • Integration: Configure the CloudWatch alarm to trigger this Lambda function when the instance count approaches the service quota.
AWS Documentation
Reference: 
Information on monitoring with CloudWatch and automating actions with Lambda can be found in these guides: Amazon CloudWatch Alarms, AWS Lambda.
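The quota increase itself can be requested through the Service Quotas API from the Lambda function. As a sketch (the quota code shown is assumed to be the Running On-Demand Standard instances quota, and the desired value is a placeholder), the equivalent CLI call is:
# Request an increase for the EC2 On-Demand Standard instances quota.
aws service-quotas request-service-quota-increase --service-code ec2 --quota-code L-1216C47A --desired-value 256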
Question 8
A company uses AWS Organizations to manage its multi-account environment. The organization contains a dedicated account for security and a dedicated account for logging. A SysOps administrator needs to implement a centralized solution that provides alerts when a resource metric in any account crosses a standard defined threshold. Which solution will meet these requirements?
  1. Deploy an AWS CloudFormation stack set to the accounts in the organization. Use a template that creates the required Amazon CloudWatch alarms and references an Amazon Simple Notification Service (Amazon SNS) topic in the logging account with publish permissions for all the accounts.
  2. Deploy an AWS CloudFormation stack in each account. Use the stack to deploy the required Amazon CloudWatch alarms and the required Amazon Simple Notification Service (Amazon SNS) topic.
  3. Deploy an AWS Lambda function on a cron job in each account. Configure the Lambda function to read resources that are in the account and to invoke an Amazon Simple Notification Service (Amazon SNS) topic if any metrics cross the defined threshold.
  4. Deploy an AWS CloudFormation change set to the organization. Use a template to create the required Amazon CloudWatch alarms and to send alerts to a verified Amazon Simple Email Service (Amazon SES) identity.
Correct answer: A
Explanation:
Using AWS CloudFormation stack sets allows you to manage CloudWatch alarms across multiple accounts efficiently:
Create Stack Set: Use a CloudFormation template that defines the required CloudWatch alarms and configures them to publish alerts to an SNS topic.
Specify SNS Topic: Ensure the SNS topic is located in the logging account and has the necessary permissions set to receive publications from all accounts in the organization.
Deploy Across Organization: Implement the stack set across all accounts, ensuring centralized management and standardized deployment.
AWS Documentation
Reference: 
Learn more about deploying resources with CloudFormation StackSets: Working with AWS CloudFormation StackSets.
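An illustrative deployment, with placeholder stack set name, template path, organizational unit ID, and Region:
# Create the stack set from the template that defines the alarms and the SNS integration.
aws cloudformation create-stack-set --stack-set-name central-alarms --template-body file://alarms.yaml --permission-model SERVICE_MANAGED --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false
# Deploy stack instances to every account in the target organizational unit.
aws cloudformation create-stack-instances --stack-set-name central-alarms --deployment-targets OrganizationalUnitIds=ou-examplerootid111-exampleouid111 --regions us-east-1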
Question 9
A company has developed a service that is deployed on a fleet of Linux-based Amazon EC2 instances that are in an Auto Scaling group. The service occasionally fails unexpectedly because of an error in the application code.
The company's engineering team determines that resolving the underlying cause of the service failure could take several weeks.
A SysOps administrator needs to create a solution to automate recovery if the service crashes on any of the EC2 instances.
Which solutions will meet this requirement? (Select TWO.)
  1. Install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to monitor the service. Set the CloudWatch action to restart if the service health check fails.
  2. Tag the EC2 instances. Create an AWS Lambda function that uses AWS Systems Manager Session Manager to log in to the tagged EC2 instances and restart the service. Schedule the Lambda function to run every 5 minutes.
  3. Tag the EC2 instances. Use AWS Systems Manager State Manager to create an association that uses the AWS-RunShellScript document. Configure the association command with a script that checks if the service is running and that starts the service if the service is not running. For targets, specify the EC2 instance tag. Schedule the association to run every 5 minutes.
  4. Update the EC2 user data that is specified in the Auto Scaling group's launch template to include a script that runs on a cron schedule every 5 minutes.
  5. Update the EC2 user data that is specified in the Auto Scaling group's launch template to ensure that the service runs during startup. Redeploy all the EC2 instances in the Auto Scaling group with the updated launch template.
Correct answer: AC
Explanation:
The requirement is to automate recovery if the service crashes on any of the EC2 instances.
Option A: Install the Amazon CloudWatch agent on the EC2 instances and configure it to monitor the service, with a CloudWatch action that restarts the service if the health check fails. This is a valid solution because the CloudWatch agent can be configured to monitor the service and take action (restart the service) when the health check fails.
Option C: Tag the EC2 instances and use AWS Systems Manager State Manager to create an association that uses the AWS-RunShellScript document, with a script that checks whether the service is running and starts it if it is not, targeted by the EC2 instance tag and scheduled to run every 5 minutes. This is a valid solution because State Manager can be used to maintain a consistent state on the EC2 instances by repeatedly running the check-and-start script.
Option B: Creating an AWS Lambda function that uses AWS Systems Manager Session Manager to log in to the tagged EC2 instances and restart the service on a 5-minute schedule is not a valid solution. Lambda functions are not designed to log in to EC2 instances and restart services; they are intended for running serverless application code.
Option D: Updating the EC2 user data in the Auto Scaling group's launch template to include a script that runs on a cron schedule every 5 minutes is not a valid solution because user data scripts run only when an EC2 instance launches; they are not designed to run on a schedule.
Option E: Updating the EC2 user data to ensure that the service runs during startup and redeploying the instances is not a valid solution because, while user data can ensure the service starts at boot, it does not recover the service if it crashes after the instance has started.
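An illustrative State Manager association for Option C, with placeholder tag and service names; the 30-minute rate below is also a placeholder schedule, since State Manager may enforce a minimum rate interval:
# Create an association that checks the service and starts it if it is stopped.
aws ssm create-association --association-name restart-service --name "AWS-RunShellScript" --targets "Key=tag:Service,Values=my-service" --parameters 'commands=["systemctl is-active --quiet my-service || systemctl start my-service"]' --schedule-expression "rate(30 minutes)"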
Question 10
A SysOps administrator must analyze Amazon CloudWatch logs across 10 AWS Lambda functions for historical errors. The logs are in JSON format and are stored in Amazon S3. Errors sometimes do not appear in the same field, but all errors begin with the same string prefix.
What is the MOST operationally efficient way for the SysOps administrator to analyze the log files?
  1. Use S3 Select to write a query to search for errors. Run the query across all log groups of interest.
  2. Create an AWS Glue processing job to index the logs of interest. Run a query in Amazon Athena to search for errors.
  3. Use Amazon CloudWatch Logs Insights to write a query to search for errors. Run the query across all log groups of interest. 
  4. Use Amazon CloudWatch Contributor Insights to create a rule. Apply the rule across all log groups of interest.
Correct answer: C
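As an illustrative Logs Insights query (the log group names, time window, and error prefix are placeholders), the search can be run across all of the Lambda log groups at once:
# Start the query across the Lambda functions' log groups.
aws logs start-query --log-group-names "/aws/lambda/function-1" "/aws/lambda/function-2" --start-time 1700000000 --end-time 1700086400 --query-string 'fields @timestamp, @message | filter @message like "ERROR:" | sort @timestamp desc'
# Retrieve the results once the query completes.
aws logs get-query-results --query-id <query-id>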