Dumpscafe Amazon Web Services-SOA-C02 Exam Dumps

Dumpscafe Amazon Web Services-SOA-C02 Exam Dumps, updated 11/12/24, 6:44 AM


AWS Certified SysOps Administrator - Associate (SOA-C02)
Version: Demo
[Total Questions: 10]
Web: www.dumpscafe.com
Email: support@dumpscafe.com
IMPORTANT NOTICE
Feedback
We have developed a quality product and state-of-the-art service to protect our customers' interests. If you have any
suggestions, please feel free to contact us at feedback@dumpscafe.com
Support
If you have any questions about our product, please provide the following items:
exam code
screenshot of the question
login id/email
Please contact us at support@dumpscafe.com, and our technical experts will provide support within 24 hours.
Copyright
The product of each order has its own encryption code, so you should use it independently. Any unauthorized
modification may result in legal action. We reserve the right of final interpretation of this statement.

Amazon Web Services - SOA-C02
Pass Exam
1 of 15
Verified Solution - 100% Result
Exam Topic Breakdown
Topic 1: Mix Questions - 5 questions
Topic 2: Simulation - 3 questions
Total: 8 questions

Topic 1, Mix Questions
Question #:1 - (Exam Topic 1)
A company has a high-performance Windows workload. The workload requires a storage volume that
provides consistent performance of 10,000 IOPS. The company does not want to pay for additional unneeded
capacity to achieve this performance.
Which solution will meet these requirements with the LEAST cost?
A. Use a Provisioned IOPS SSD (io1) Amazon Elastic Block Store (Amazon EBS) volume that is configured with 10,000 provisioned IOPS.
B. Use a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume that is configured with 10,000 provisioned IOPS.
C. Use an Amazon Elastic File System (Amazon EFS) file system with Max I/O mode.
D. Use an Amazon FSx for Windows File Server file system that is configured with 10,000 IOPS.
Answer: B
Explanation
To meet the requirement of consistent performance of 10,000 IOPS with the least cost, the SysOps
administrator should use a General Purpose SSD (gp3) Amazon EBS volume:
General Purpose SSD (gp3) EBS Volume:
The gp3 volume type can be provisioned with the required IOPS without the need for additional
capacity, making it a cost-effective solution compared to Provisioned IOPS SSD (io1) volumes.
Reference: Amazon EBS Volume Types
Configuration Steps:
Create a gp3 EBS volume and specify 10,000 IOPS and the required throughput.
Attach the volume to the EC2 instance running the Windows workload.
Reference: Modifying the Size, IOPS, or Throughput of an EBS Volume
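The cost argument can be made concrete with a small back-of-the-envelope model. The prices below are illustrative approximations of us-east-1 list prices, not part of the question; check the current Amazon EBS pricing page before relying on them. The key structural point is real: gp3 includes a 3,000 IOPS baseline and bills only the provisioned IOPS above it, while io1 bills every provisioned IOPS at a higher rate on top of a higher per-GiB price.

```python
# Rough monthly cost comparison for a 500 GiB volume at 10,000 IOPS.
# All dollar figures are illustrative assumptions, not official pricing.

GP3_GB_MONTH = 0.08        # $/GiB-month (assumed)
GP3_IOPS_MONTH = 0.005     # $/IOPS-month above the 3,000 IOPS baseline (assumed)
GP3_BASELINE_IOPS = 3000   # included with every gp3 volume

IO1_GB_MONTH = 0.125       # $/GiB-month (assumed)
IO1_IOPS_MONTH = 0.065     # $/IOPS-month; io1 bills every provisioned IOPS (assumed)

def gp3_cost(size_gib: int, iops: int) -> float:
    # Only IOPS above the free baseline are billed.
    extra_iops = max(0, iops - GP3_BASELINE_IOPS)
    return size_gib * GP3_GB_MONTH + extra_iops * GP3_IOPS_MONTH

def io1_cost(size_gib: int, iops: int) -> float:
    # Every provisioned IOPS is billed.
    return size_gib * IO1_GB_MONTH + iops * IO1_IOPS_MONTH

print(f"gp3: ${gp3_cost(500, 10_000):.2f}/month")  # gp3: $75.00/month
print(f"io1: ${io1_cost(500, 10_000):.2f}/month")  # io1: $712.50/month
```

Even with approximate prices, the order-of-magnitude gap shows why gp3 is the least-cost way to hit a fixed IOPS target without buying extra capacity.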
Question #:2 - (Exam Topic 1)
A company is uploading important files as objects to Amazon S3. The company needs to be informed if an
object is corrupted during the upload.

What should a SysOps administrator do to meet this requirement?
A. Pass the Content-Disposition value as a request body during the object upload.
B. Pass the Content-MD5 value as a request header during the object upload.
C. Pass x-amz-object-lock-mode as a request header during the object upload.
D. Pass x-amz-server-side-encryption-customer-algorithm as a request body during the object upload.
Answer: B
Explanation
Content-MD5 Header:
The Content-MD5 header provides an MD5 checksum of the object being uploaded. Amazon S3 uses
this checksum to verify the integrity of the object.
Steps:
When uploading an object to S3, calculate the MD5 checksum of the object.
Include the Content-MD5 header with the base64-encoded MD5 checksum value in the upload
request.
This ensures that S3 can detect if the object is corrupted during the upload process.
Reference: PUT Object - Amazon Simple Storage Service
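The checksum computation described in the steps above can be sketched in a few lines. The Content-MD5 value is the base64 encoding (not the hex string) of the raw 16-byte MD5 digest; if the payload is altered in transit, S3's recomputed digest will not match and the PUT is rejected.

```python
# Compute the Content-MD5 header value for an S3 object upload:
# base64-encode the raw MD5 digest of the object bytes.
import base64
import hashlib

def content_md5(data: bytes) -> str:
    # .digest() returns the raw 16 bytes; b64encode turns them into the
    # header-safe ASCII form S3 expects in Content-MD5.
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

body = b"important file contents"
print(content_md5(body))
```

A common mistake is sending `hexdigest()` output instead; S3 will reject that as a malformed digest because it is 32 hex characters rather than 24 base64 characters.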
Question #:3 - (Exam Topic 1)
A development team recently deployed a new version of a web application to production. After the release,
penetration testing revealed a cross-site scripting vulnerability that could expose user data.
Which AWS service will mitigate this issue?
A. AWS Shield Standard
B. AWS WAF
C. Elastic Load Balancing
D. Amazon Cognito
Answer: B
Explanation

AWS WAF (Web Application Firewall) helps protect web applications from common web exploits that could
affect availability, compromise security, or consume excessive resources. It can be used to mitigate cross-site
scripting (XSS) vulnerabilities.
Set Up AWS WAF:
Open the AWS WAF console at AWS WAF Console.
Create a new Web ACL.
Add Rules for Protection:
Add managed rules that include protection against common vulnerabilities, including XSS.
AWS provides managed rule groups, such as the Core rule set
(AWSManagedRulesCommonRuleSet), which includes protections against XSS.
Associate WAF with the Application:
Associate the Web ACL with the resources you want to protect (e.g., CloudFront distribution,
Application Load Balancer).
References:
AWS WAF
AWS WAF Managed Rules
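The rule entry added in the steps above can be sketched as data. This is a minimal sketch following the WAFv2 CreateWebACL/UpdateWebACL request shape; the priority and metric name are assumptions, while AWSManagedRulesCommonRuleSet is the real managed rule group that carries the XSS rules.

```python
# Sketch of the Web ACL rule that attaches the AWS Managed Rules Common
# Rule Set (which includes cross-site scripting protections).
# Priority and MetricName are arbitrary choices for illustration.
managed_xss_rule = {
    "Name": "AWS-AWSManagedRulesCommonRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    # Managed rule groups use OverrideAction rather than Action.
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CommonRuleSet",
    },
}
```

This dictionary is what you would pass inside the `Rules` list of a `wafv2` `create_web_acl` call before associating the Web ACL with the ALB or CloudFront distribution.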
Question #:4 - (Exam Topic 1)
A SysOps administrator configures an application to run on Amazon EC2 instances behind an Application
Load Balancer (ALB) in a simple scaling Auto Scaling group with the default settings. The Auto Scaling
group is configured to use the RequestCountPerTarget metric for scaling. The SysOps administrator notices
that the RequestCountPerTarget metric exceeded the specified limit twice in 180 seconds.
How will the number of EC2 instances in this Auto Scaling group be affected in this scenario?
A. The Auto Scaling group will launch an additional EC2 instance every time the RequestCountPerTarget metric exceeds the predefined limit.
B. The Auto Scaling group will launch one EC2 instance and will wait for the default cooldown period before launching another instance.
C. The Auto Scaling group will send an alert to the ALB to rebalance the traffic and not add new EC2 instances until the load is normalized.
D. The Auto Scaling group will try to distribute the traffic among all EC2 instances before launching another instance.
Answer: B

Explanation
When using the RequestCountPerTarget metric for scaling in an Auto Scaling group, the behavior of
instance scaling follows specific rules set by Auto Scaling policies and cooldown periods:
Scaling Trigger: The Auto Scaling group triggers a scaling action whenever the RequestCountPerTarget
metric exceeds the predefined limit set in the scaling policy.
Cooldown Period: After launching an EC2 instance due to a scaling action, the Auto Scaling group
enters a cooldown period. During this period, despite further breaches of the threshold, no additional
instances will be launched. This is designed to give the newly launched instance time to start and begin
handling traffic, preventing the Auto Scaling group from launching too many instances too quickly.
This mechanism helps maintain efficient use of resources by adapting to changes in load while avoiding rapid,
unnecessary scaling actions.
Question #:5 - (Exam Topic 1)
A software development company has multiple developers who work on the same product. Each developer
must have their own development environment, and these development environments must be identical. Each
development environment consists of Amazon EC2 instances and an Amazon RDS DB instance. The
development environments should be created only when necessary, and they must be terminated each night to
minimize costs.
What is the MOST operationally efficient solution that meets these requirements?
A. Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly cron job on each development instance to stop all running processes to reduce CPU utilization to nearly zero.
B. Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to delete the AWS CloudFormation stacks.
C. Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to terminate all EC2 instances and the DB instance.
D. Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to cause AWS CloudFormation to delete all of the development environment resources.
Answer: B
Explanation
To efficiently manage and automate the creation and termination of development environments:
AWS CloudFormation Templates:
Provide a standardized CloudFormation template for developers to create identical development
environments.
Reference: AWS CloudFormation User Guide
Automate Termination:
Use Amazon EventBridge (CloudWatch Events) to schedule a nightly rule that invokes an AWS Lambda
function.
The Lambda function should be designed to delete the CloudFormation stacks created for development
environments.
Reference: Amazon EventBridge
This solution ensures operational efficiency and cost management.
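The nightly Lambda function from option B can be sketched as follows. The "dev-" stack-name prefix is an assumption made for illustration (the question does not specify how development stacks are identified); the CloudFormation calls are the standard boto3 `list_stacks`/`delete_stack` APIs.

```python
# Hypothetical sketch of the nightly cleanup Lambda. The "dev-" prefix is
# an assumed naming convention for development-environment stacks.
ACTIVE_STATUSES = ["CREATE_COMPLETE", "UPDATE_COMPLETE", "ROLLBACK_COMPLETE"]

def stacks_to_delete(stack_names, prefix="dev-"):
    """Pure helper: pick out the stacks that belong to dev environments."""
    return [name for name in stack_names if name.startswith(prefix)]

def handler(event, context):
    import boto3  # provided by the Lambda runtime
    cfn = boto3.client("cloudformation")
    # Collect the names of all live stacks, paginating through the results.
    names = [
        summary["StackName"]
        for page in cfn.get_paginator("list_stacks").paginate(
            StackStatusFilter=ACTIVE_STATUSES
        )
        for summary in page["StackSummaries"]
    ]
    # Delete every development stack; CloudFormation tears down the EC2
    # instances and the RDS DB instance each stack created.
    for name in stacks_to_delete(names):
        cfn.delete_stack(StackName=name)
```

Deleting the stack (rather than stopping processes, as in option A) removes the RDS DB instance as well, so no idle resources accrue charges overnight.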

Topic 2, Simulation
Question #:6 - (Exam Topic 2)
If your AWS Management Console browser does not show that you are logged in to an AWS account, close
the browser and relaunch the
console by using the AWS Management Console shortcut from the VM desktop.
If the copy-paste functionality is not working in your environment, refer to the instructions file on the VM
desktop and use Ctrl+C, Ctrl+V or Command-C, Command-V.
Configure Amazon EventBridge to meet the following requirements.
1. Use the us-east-2 Region for all resources.
2. Unless specified below, use the default configuration settings.
3. Use your own resource naming unless a resource name is specified below.
4. Ensure all Amazon EC2 events in the default event bus are replayable for the past 90 days.
5. Create a rule named RunFunction to send the exact message every 15 minutes to an existing AWS Lambda function named LogEventFunction.
6. Create a rule named SpotWarning to send a notification to a new standard Amazon SNS topic named TopicEvents whenever an Amazon EC2 Spot Instance is interrupted. Do NOT create any topic subscriptions. The notification must match the following structure:
Input Path:
{"instance" : "$.detail.instance-id"}
Input template:
"The EC2 Spot Instance has been interrupted on account."
See the Explanation for solution.
Explanation

Here are the steps to configure Amazon EventBridge to meet the above requirements:
Log in to the AWS Management Console by using the AWS Management Console shortcut from the
VM desktop. Make sure that you are logged in to the desired AWS account.
Go to the EventBridge service in the us-east-2 Region.
In the EventBridge service, navigate to the "Archives" page and create an archive on the default event
bus with a retention period of 90 days. The archive is what makes events on the bus, including
Amazon EC2 events, replayable for the past 90 days.
Navigate to "Rules" page and create a new rule named "RunFunction"
In the "Event pattern" section, select "Schedule" as the event source and set the schedule to run every
15 minutes.
In the "Actions" section, select "Send to Lambda" and choose the existing AWS Lambda function
named "LogEventFunction"
Create another rule named "SpotWarning"
In the "Event pattern" section, select "EC2" as the event source, and filter the events on "EC2 Spot
Instance interruption"
In the "Actions" section, select "Send to SNS topic" and create a new standard Amazon SNS topic
named "TopicEvents"
In the "Input Transformer" section, set the Input Path to {"instance" : "$.detail.instance-id"} and the
Input template to "The EC2 Spot Instance has been interrupted on account."
Now all Amazon EC2 events in the default event bus will be replayable for the past 90 days.
Note:
You can use the AWS Management Console, AWS CLI, or SDKs to create and manage EventBridge
resources.
You can use an EventBridge archive on the event bus to replay events from the past 90 days.
You can refer to the AWS EventBridge documentation for more information on how to configure and
use the service: https://aws.amazon.com/eventbridge/
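The two pieces of the SpotWarning rule can be written out as data. The event pattern below uses the documented source and detail-type for EC2 Spot interruption warnings; the input path and template are taken directly from the task.

```python
# Sketch of the SpotWarning rule configuration as EventBridge sees it.
import json

# Matches the two-minute Spot interruption warning events EC2 emits.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Spot Instance Interruption Warning"],
}

# Input transformer: extract the instance id from the event detail and
# substitute it into the fixed notification text from the task.
input_path = {"instance": "$.detail.instance-id"}
input_template = "The EC2 Spot Instance has been interrupted on account."

# EventBridge stores the pattern as a JSON document.
print(json.dumps(event_pattern))
```

Passing `json.dumps(event_pattern)` as the `EventPattern` argument of a `put_rule` call (or pasting it into the console's pattern editor) produces the same rule as the console steps above.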
Question #:7 - (Exam Topic 2)
A webpage is stored in an Amazon S3 bucket behind an Application Load Balancer (ALB). Configure the S3
bucket to serve a static error page in the event of a failure at the primary site.
1. Use the us-east-2 Region for all resources.
2. Unless specified below, use the default configuration settings.
3. There is an existing hosted zone named lab-751906329398-26023898.com that contains an A record with a simple routing policy that routes traffic to an existing ALB.
4. Configure the existing S3 bucket named lab-751906329398-26023898.com as a static hosted website using the object named index.html as the index document.
5. For the index.html object, configure the S3 ACL to allow for public read access. Ensure public access to the S3 bucket is allowed.
6. In Amazon Route 53, change the A record for domain lab-751906329398-26023898.com to a primary record for a failover routing policy. Configure the record so that it evaluates the health of the ALB to determine failover.
7. Create a new secondary failover alias record for the domain lab-751906329398-26023898.com that routes traffic to the existing S3 bucket.
See the Explanation for solution.
Explanation
Here are the steps to configure an Amazon S3 bucket to serve a static error page in the event of a failure at the
primary site:
Log in to the AWS Management Console and navigate to the S3 service in the us-east-2 Region.
Find the existing S3 bucket named lab-751906329398-26023898.com and click on it.
In the "Properties" tab, click on "Static website hosting" and select "Use this bucket to host a website".
In "Index Document" field, enter the name of the object that you want to use as the index document, in
this case, "index.html"
In the "Permissions" tab, click on "Block Public Access", and make sure that "Block all public access"
is turned OFF.
Click on "Bucket Policy" and add the following policy to allow public read access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::lab-751906329398-26023898.com/*"
        }
    ]
}
Now navigate to the Amazon Route 53 service, and find the existing hosted zone named
lab-751906329398-26023898.com.
Click on the "A record" and update the routing policy to "Primary - Failover" and add the existing ALB
as the primary record.
Click on the "Create Record" button and create a new secondary failover alias record for the domain
lab-751906329398-26023898.com that routes traffic to the existing S3 bucket.
Now, when the primary site (ALB) goes down, traffic will be automatically routed to the S3 bucket
serving the static error page.
Note:
You can use CloudWatch to monitor the health of your ALB.
You can use Amazon S3 to host a static website.
You can use Amazon Route 53 for routing traffic to different resources based on health checks.
You can refer to the AWS documentation for more information on how to configure and use these
services:
https://aws.amazon.com/s3/
https://aws.amazon.com/route53/
https://aws.amazon.com/cloudwatch/
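The Route 53 changes in steps 6 and 7 can be sketched as a single ChangeResourceRecordSets change batch. The hosted zone IDs and the ALB DNS name below are placeholders, not values from the lab; the structure (failover pair of alias A records, with EvaluateTargetHealth on the primary) follows the Route 53 API.

```python
# Sketch of the failover record pair for the hosted zone. All *_ZONE_ID
# values and the ALB DNS name are placeholders for illustration only.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",  # convert the existing simple A record
            "ResourceRecordSet": {
                "Name": "lab-751906329398-26023898.com.",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "ALB_ZONE_ID",  # placeholder
                    "DNSName": "my-alb.us-east-2.elb.amazonaws.com",  # placeholder
                    # Health of the ALB decides when to fail over.
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "CREATE",  # new secondary alias to the S3 website
            "ResourceRecordSet": {
                "Name": "lab-751906329398-26023898.com.",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "S3_WEBSITE_ZONE_ID",  # placeholder
                    "DNSName": "s3-website.us-east-2.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]
}
```

This is the same configuration the console steps produce: when the ALB's health check fails, Route 53 answers with the secondary record and traffic lands on the static error page in S3.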
Question #:8 - (Exam Topic 2)
You need to update an existing AWS CloudFormation stack. If needed, a copy of the CloudFormation
template is available in an Amazon S3 bucket named cloudformation-bucket.
1. Use the us-east-2 Region for all resources.
2. Unless specified below, use the default configuration settings.
3. Update the Amazon EC2 instance named DevInstance by making the following changes to the stack named
1700182:
a) Change the EC2 instance type to t2.nano.
b) Allow SSH to connect to the EC2 instance from the IP address range 192.168.100.0/30.
c) Replace the instance profile IAM role with IamRoleB.
4. Deploy the changes by updating the stack using the CFServiceRole role.
5. Edit the stack options to prevent accidental deletion.
6. Using the output from the stack, enter the value of the ProdInstanceId in the text box below:
See the Explanation for solution.
Explanation
Here are the steps to update an existing AWS CloudFormation stack:
Log in to the AWS Management Console and navigate to the CloudFormation service in the us-east-2
Region.
Find the existing stack named 1700182 and click on it.
Click on the "Update" button.
Choose "Replace current template" and upload the updated CloudFormation template from the Amazon
S3 bucket named "cloudformation-bucket"
In the "Parameters" section, update the EC2 instance type to t2.nano and add the IP address
range 192.168.100.0/30 for SSH access.
Replace the instance profile IAM role with IamRoleB.
In the "Capabilities" section, check the checkbox for "IAM Resources"
Choose the role CFServiceRole and click on "Update stack".
Wait for the stack to be updated.
Once the update is complete, navigate to the stack, open the stack options, and enable termination
protection to prevent accidental deletion.
To get the value of ProdInstanceId, navigate to the "Outputs" tab in the CloudFormation stack and
find the key "ProdInstanceId". The value corresponding to it is the value that you need to enter in the
text box.
Note:
You can use AWS CloudFormation to update an existing stack.
You can use the AWS CloudFormation service role to deploy updates.
You can refer to the AWS CloudFormation documentation for more information on how to update and
manage stacks: https://aws.amazon.com/cloudformation/
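The same update can be sketched as a boto3 `update_stack` call. The parameter keys (InstanceType, SSHLocation, InstanceRole), the template object key, and the account id in the role ARN are assumptions for illustration; the actual names depend on the lab template.

```python
# Hypothetical sketch of the stack update via the CloudFormation API.
# Parameter keys, template key, and the account id are assumed values.
update_kwargs = {
    "StackName": "1700182",
    "TemplateURL": (
        "https://cloudformation-bucket.s3.us-east-2.amazonaws.com/template.yaml"
    ),
    "Parameters": [
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.nano"},
        {"ParameterKey": "SSHLocation", "ParameterValue": "192.168.100.0/30"},
        {"ParameterKey": "InstanceRole", "ParameterValue": "IamRoleB"},
    ],
    # Required because the template manages IAM resources.
    "Capabilities": ["CAPABILITY_IAM"],
    # CloudFormation assumes this service role to deploy the update.
    "RoleARN": "arn:aws:iam::123456789012:role/CFServiceRole",  # placeholder
}

# import boto3
# cfn = boto3.client("cloudformation", region_name="us-east-2")
# cfn.update_stack(**update_kwargs)
# cfn.update_termination_protection(
#     StackName="1700182", EnableTerminationProtection=True)
```

The commented calls mirror steps 4 and 5 of the task: `update_stack` deploys the change with the service role, and `update_termination_protection` guards the stack against accidental deletion.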
