SAA-C03 Real Exam Questions No. 1-100 (Free), 2025-02-27

1. A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?
A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Correct answer: A. Explanation:
A. Turning on S3 Transfer Acceleration on the destination bucket and using multipart uploads to upload directly is the best solution: Transfer Acceleration speeds up long-distance transfers through AWS CloudFront edge locations, multipart upload transfers chunks in parallel, and it takes a single step (direct upload) with no extra data copying or migration, which minimizes operational complexity.
B. Uploading to the nearest Region's bucket and then using Cross-Region Replication works, but it adds an extra replication step and duplicate storage cost, increasing operational complexity.
C. Daily Snowball Edge jobs suit large offline transfers, but 500 GB per day can easily be sent directly over a high-speed connection; introducing physical devices adds scheduling and management overhead instead.
D. The EC2-plus-EBS-snapshot approach is far too complex, involving EC2 maintenance and multiple snapshot create/copy/restore steps; it completely fails the simplicity requirement.

2. A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture. What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Correct answer: C, query the data in S3 directly with Amazon Athena. Explanation:
A. Amazon Redshift can run SQL queries, but the data must first be loaded into a Redshift cluster, which adds operational overhead and migration work and does not meet "minimal changes to the existing architecture".
B.
Amazon CloudWatch Logs is designed mainly for monitoring logs, not for analyzing JSON log files already stored in S3, and its query capabilities are limited.
C. Amazon Athena is the best choice. Athena is a serverless query service that can query JSON data in S3 directly, with no data migration and no infrastructure to manage.
D. AWS Glue plus EMR is feasible, but maintaining Glue crawlers and transient EMR clusters carries more operational overhead than Athena.
Key point: Athena provides on-demand querying; its serverless architecture means zero management overhead and direct querying of JSON in S3, matching both the "least operational overhead" and "minimal architecture changes" requirements.

3. A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LEAST amount of operational overhead?
A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Correct answer: A. The aws:PrincipalOrgID global condition key references the AWS organization ID directly, restricting bucket access to accounts within the organization. This is the simplest approach: add a single condition to the S3 bucket policy, with no ongoing maintenance.
Option B also works via aws:PrincipalOrgPaths, but it requires creating an organizational unit (OU) per department, adding management complexity. Option C, monitoring account changes with CloudTrail and updating the policy accordingly, brings high operational and maintenance cost. Option D requires tagging, and maintaining tags for, every user who needs access, which is very tedious to manage.
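To make option A concrete, the bucket policy needs only one condition on the organization ID. Below is a minimal sketch in Python that builds such a policy document; the bucket name `project-reports` and organization ID `o-exampleorgid` are hypothetical placeholders, not values from the question.

```python
import json

def org_restricted_policy(bucket: str, org_id: str) -> str:
    """Build an S3 bucket policy that allows reads only from principals
    whose AWS account belongs to the given AWS Organizations organization."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowOrgMembersOnly",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # aws:PrincipalOrgID matches the organization of the calling
                # principal, so no per-account or per-user maintenance is needed.
                "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(org_restricted_policy("project-reports", "o-exampleorgid"))
```

The resulting JSON would be attached as the bucket policy; accounts joining or leaving the organization are picked up automatically, which is why this option has the least operational overhead.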
4. An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connectivity to Amazon S3?
A. Create a gateway VPC endpoint to the S3 bucket.
B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
C. Create an instance profile on Amazon EC2 to allow S3 access.
D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Correct answer: A. A gateway VPC endpoint connects to S3 over the AWS private network without traversing the internet.
Option B is wrong: streaming logs to CloudWatch and exporting them to S3 does not solve the private-connectivity problem and may still involve the public network.
Option C is wrong: an instance profile addresses permissions only; it does not provide private network connectivity.
Option D is wrong: API Gateway with a private link is for exposing API interfaces, not for connecting directly to S3, and it is complex to configure.
Gateway VPC endpoints are purpose-built for S3 and DynamoDB; they route traffic to the AWS service from within the VPC without NAT devices, internet gateways, or firewall proxies.

5. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents
B. Configure the Application Load Balancer to direct a user to the server with the documents
C.
Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server

Correct answer: C. Consolidating storage on Amazon EFS (Elastic File System) fixes the problem of documents being split across two EBS volumes. Amazon EFS is a managed NFS service that multiple EC2 instances can mount concurrently, with data available across Availability Zones, so whichever EC2 instance the load balancer routes a user to, the application can read the complete document set from EFS.
Option A is wrong: simply copying data between the two EBS volumes is inefficient, hard to keep consistent, and incurs extra storage cost.
Option B is wrong: pinning users to a specific server (sticky sessions) keeps a single user's session consistent but does not fix other users landing on a server with a different document subset.
Option D is wrong: a load balancer cannot send a single request to multiple servers, and such a design would break the application's normal logic.

6. A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements?
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket.
Transfer the data from the existing NFS file share to the S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Correct answer: B, migrate with AWS Snowball Edge. Reasons:
1. Large transfer volume: the company has 70 TB to migrate; uploading that over the internet would cost substantial time and bandwidth. Snowball Edge is a physical AWS device built for bulk data transfer that avoids network bandwidth limits.
2. Fast migration: Snowball Edge moves data by physically shipping the device, which in practice is much faster than internet upload, especially for individual files as large as 500 GB.
3. Bandwidth optimization: the question explicitly requires the least possible network bandwidth; Snowball Edge uses only minimal bandwidth for metadata, with the bulk of the data moved by shipping the device.
Analysis of the other options:
- Option A: uploading 70 TB via the CLI would consume the full network bandwidth and take a very long time, failing the requirement.
- Option C: S3 File Gateway preserves the NFS protocol, but the data still crosses the public network, so the bandwidth problem remains.
- Option D: Direct Connect is dedicated, but 70 TB still consumes substantial circuit bandwidth at high cost; it is not the optimal solution.

7. A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability. Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
B.
Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Correct answer: D. This question tests the best design for decoupling and scaling under wildly fluctuating message volume (peaks of 100,000 messages per second).
Option A is wrong: Kinesis Data Analytics is for real-time analytics, not message brokering, and consumers reading from it directly lose data-durability guarantees.
Option B is wrong: despite the auto scaling, the architecture remains tightly coupled; producers and consumers are not decoupled, and EC2 scaling can hardly absorb sudden bursts of 100,000 messages per second.
Option C is wrong: a single-shard Kinesis stream (default limit of 1,000 records per second) cannot possibly handle 100,000 per second, and the Lambda-plus-DynamoDB chain adds processing latency.
Option D is correct: the SNS-plus-SQS combination satisfies every requirement: (1) SNS can publish at very high throughput; (2) multiple SQS queue subscriptions fan out the messages; (3) consumers process their own queues independently, achieving full decoupling; (4) the queues buffer burst traffic; (5) each SQS queue supports at least 3,000 messages per second, and multiple queues scale easily to 100,000 per second.

8. A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability. How should a solutions architect design the architecture to meet these requirements?
A.
Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

Correct answer: B, because:
1. Role of the SQS queue: using SQS as the job destination decouples the primary server from the compute nodes, improving resiliency and scalability. Jobs are processed asynchronously; even if the primary server is briefly unavailable, the compute nodes keep running.
2. Scaling on queue size: using SQS queue depth as the Auto Scaling trigger adjusts the number of compute nodes dynamically to actual load, so capacity tracks demand and resource utilization is maximized.
3.
Problems with the other options: option A's scheduled scaling cannot adapt to variable load and risks resource shortage or waste; option C's use of CloudTrail as a job destination is simply wrong, as CloudTrail is for logging, not job distribution; option D's reliance on EventBridge and compute-node load adds architectural complexity and is less direct and effective than scaling on the SQS queue.

9. A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed. The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues. Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Correct answer: B, for the following reasons:
Option A, copying data older than 7 days to AWS with DataSync, frees local storage but provides no low-latency access to the most recently accessed files, so it does not meet the requirements.
Option B, creating an Amazon S3 File Gateway and an S3 Lifecycle policy that transitions data to S3 Glacier Deep Archive after 7 days, both solves the storage-capacity problem and preserves low-latency access through the file gateway, while providing automated lifecycle management.
Option C, an Amazon FSx for Windows File Server file system, extends storage but offers no automated lifecycle management, so it cannot prevent future storage issues.
Option D, installing a utility on every user's computer to access S3 directly, provides storage extension and lifecycle management, but this decentralized access pattern cannot guarantee a uniform low-latency experience and is more complex to deploy and manage.

10. A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received. Which solution will meet these requirements?
A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
C. Use an API Gateway authorizer to block any requests while the application processes an order.
D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

Correct answer: B. Explanation:
1.
An SQS FIFO queue (option B) is designed specifically to guarantee strict first-in, first-out message delivery, which is exactly what the question requires.
Why the other options fail: 2. Option A's SNS topic cannot guarantee ordering; it is a publish/subscribe model and messages may be processed asynchronously. 3. Option C's API Gateway authorizer only controls access permissions and has nothing to do with processing order; it is a conceptual mismatch. 4. Option D's standard SQS queue offers only best-effort ordering, not strict FIFO (a small amount of out-of-order delivery is possible), so it fails the strict requirement.
FIFO queues use message grouping and deduplication to ensure each order is processed strictly in arrival order; they are the AWS best practice for order-sensitive workloads.

11. A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management. What should a solutions architect do to accomplish this goal?
A. Use AWS Secrets Manager. Turn on automatic rotation.
B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Correct answer: A, use AWS Secrets Manager with automatic rotation turned on. Per option:
A. AWS Secrets Manager is purpose-built for storing and managing credentials securely and can rotate database credentials automatically, reducing the burden of manual management. This best fits the requirement.
B. Systems Manager Parameter Store can also hold credentials, but its rotation support is limited and far less complete than Secrets Manager's, so it is a poor fit for this scenario.
C. Storing the credential file in an S3 bucket is feasible, but credential rotation would still have to be managed manually, failing the minimal-overhead requirement.
D. An encrypted EBS volume for the credential file likewise requires manual management and offers no automatic rotation, so it also fails the goal.
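To make the Secrets Manager approach in question 11 concrete, here is a minimal sketch of how the application could fetch its Aurora credentials at connection time instead of reading a local file. The secret name `prod/aurora/app` is a hypothetical example, and `client` stands for any object exposing the boto3 Secrets Manager `get_secret_value(SecretId=...)` call, so a stub can be passed in tests; once automatic rotation is on, each fresh fetch picks up the rotated credentials without redeploying anything.

```python
import json

def fetch_db_credentials(client, secret_id):
    """Return (username, password) from a Secrets Manager secret.

    `client` must expose get_secret_value(SecretId=...) like a boto3
    "secretsmanager" client. The secret value is the JSON document that
    Secrets Manager stores for RDS/Aurora credentials.
    """
    resp = client.get_secret_value(SecretId=secret_id)
    secret = json.loads(resp["SecretString"])
    return secret["username"], secret["password"]
```

In production the caller would pass `boto3.client("secretsmanager")`. Because credentials are fetched on demand rather than cached in a file, a scheduled rotation never requires touching the EC2 instances, which is the "minimal operational overhead" the question is after.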
12. A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53. What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Correct answer: A. To improve performance and reduce latency for both static and dynamic data, the best solution is a single Amazon CloudFront distribution with the S3 bucket (static data) and the ALB (dynamic data) as origins, with Amazon Route 53 routing traffic to the distribution.
Option A does exactly this and is correct.
Option B is wrong: it fronts the ALB with CloudFront but wrongly introduces AWS Global Accelerator for the S3 bucket, which adds complexity and cannot cache static content efficiently the way CloudFront does.
Option C is wrong: it accelerates S3 with CloudFront but then redundantly layers Global Accelerator over the ALB and the CloudFront distribution, making the architecture over-complex and costlier with no necessary performance advantage.
Option D is wrong: routing static and dynamic content through two separate domain names adds configuration complexity and risks an inconsistent user experience; option A's unified distribution is simpler and more efficient.

13. A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions. Which solution will meet these requirements with the LEAST operational overhead?
A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.
Correct answer: A, because:
Option A stores the credentials in AWS Secrets Manager with multi-Region secret replication and scheduled automatic rotation: the least operational overhead, fully meeting the requirement.
Option B is wrong: Systems Manager secure string parameters do not provide automated rotation, so extra development and operations work would be needed.
Option C is wrong: S3 plus Lambda can implement rotation, but the rotation logic must be written and maintained in-house, with high operational overhead and weaker security than Secrets Manager.
Option D is wrong: DynamoDB global tables plus KMS multi-Region keys achieve multi-Region storage, but the entire rotation workflow must be built from scratch, the highest operational complexity of all.

14. A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics.
The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance. The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability. Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

Correct answer: C, Amazon Aurora with a Multi-AZ deployment and Aurora Auto Scaling with Aurora Replicas. Reasons:
Option A is wrong: Amazon Redshift is a data-warehouse solution for analytics workloads, not OLTP transaction processing, and the workload in question is a MySQL database.
Option B is wrong: a Single-AZ RDS deployment provides no high availability, and adding read replicas in another Availability Zone is a manual step, not automatic scaling.
Option C is correct: Amazon Aurora is MySQL-compatible, its Multi-AZ deployment provides high availability, and Aurora Auto Scaling automatically adds or removes read replicas to absorb the read-heavy workload.
Option D is wrong: ElastiCache for Memcached is an in-memory cache that cannot replace the primary database, does not auto-scale read replicas, and does not satisfy the high-availability requirement.

15. A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering.
The company wants to have the same functionalities in the AWS Cloud. Which solution will meet these requirements?
A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

Correct answer: C. AWS Network Firewall is a managed service purpose-built for VPC traffic inspection and filtering; it supports custom rule sets that inspect traffic entering and leaving the VPC and can perform deep packet inspection (DPI).
Option A is wrong: Amazon GuardDuty is a threat-detection service that analyzes AWS log data to identify potential security threats; it does not perform real-time traffic inspection or filtering.
Option B is wrong: Traffic Mirroring only copies EC2 network traffic for monitoring or analysis; it cannot filter traffic or provide complete inspection on its own.
Option D is wrong: AWS Firewall Manager centrally manages AWS WAF, Shield Advanced, and VPC security group rules; it does not itself provide network-layer traffic inspection and filtering.

16. A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access. Which solution will meet these requirements?
A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data.
Share the dashboards with the appropriate users and groups.
C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.

Correct answer: B. Detailed analysis:
1. Option A correctly uses Amazon QuickSight for analysis and visualization but wrongly suggests sharing dashboards via IAM roles; in practice, dashboard permissions are assigned to specific users and groups, not IAM roles.
2. Option B is the complete solution: QuickSight natively supports both data sources (S3 and RDS), offers fine-grained user- and group-based permission control, gives the management team full access while restricting other staff, and fully covers the visualization requirement.
3. Option C's problems: it handles only the S3 data and ignores the RDS data source, relies on low-level S3 bucket policies for access control, lacks a real visualization component, and overall does not fit a data-lake reporting need.
4. Option D's flaws: Athena federated queries add complexity, a dedicated visualization tool is still missing, report generation is overly technical, and the permission scheme does not provide user/group-level control.
In short, option B's QuickSight feature set and fine-grained permission management satisfy every requirement in the question.

17. A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket. What should the solutions architect do to meet this requirement?
A. Create an IAM role that grants access to the S3 bucket.
Attach the role to the EC2 instances. B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances. C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances. D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances. 一家公司正在实施一个新的业务应用程序。该应用程序运行在两个Amazon EC2实例上,并使用一个Amazon S3存储桶进行文档存储。解决方案架构师需要确保EC2实例能够访问S3存储桶。 解决方案架构师应该采取什么措施来满足这一要求? A. 创建一个授予访问S3存储桶权限的IAM角色。将该角色附加到EC2实例上。 B. 创建一个授予访问S3存储桶权限的IAM策略。将该策略附加到EC2实例上。 C. 创建一个授予访问S3存储桶权限的IAM组。将该组附加到EC2实例上。 D. 创建一个授予访问S3存储桶权限的IAM用户。将该用户账户附加到EC2实例上。 A. A B. B C. C D. D 正确的解决方案是创建一个IAM角色,授权访问S3存储桶,并将该角色附加到EC2实例上(选项A)。原因是:1. IAM角色是为AWS服务间访问设计的最佳实践,EC2实例可以通过实例元数据动态获取临时安全凭证,无需管理长期凭证。2. 直接附加IAM策略到EC2实例(选项B)是不可能的,因为策略必须附加到IAM身份(用户、组或角色)上。3. IAM组(选项C)是用于管理用户权限的,不能直接附加到EC2实例。4. 虽然可以创建IAM用户(选项D)并将其凭证存储在EC2实例上,但这种方法存在安全风险且不符合AWS最佳实践,因为长期凭证可能泄露。 18 / 100 分类: SAA-C03 18. An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket. A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically. Which combination of actions will meet these requirements? (Choose two.) A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket. B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue. C. Configure the Lambda function to monitor the S3 bucket for new uploads. 
When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed. D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function. E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner’s email address for further processing. 一个应用开发团队正在设计一个微服务,用于将大型图片转换为更小尺寸的压缩图片。当用户通过网页界面上传图片时,该微服务应将图片存储在一个亚马逊S3存储桶中,通过AWS Lambda函数处理并压缩图片,然后将压缩后的图片存储到另一个不同的S3存储桶。 解决方案架构师需要设计一个使用持久化、无状态组件来自动处理图片的解决方案。 下列哪两种操作组合能满足这些需求?(选择两项。) A. 创建一个亚马逊简单队列服务(Amazon SQS)队列。配置S3存储桶在上传图片到S3存储桶时向SQS队列发送通知。 B. 配置Lambda函数使用亚马逊简单队列服务(Amazon SQS)队列作为调用源。当SQS消息成功处理后,删除队列中的消息。 C. 配置Lambda函数监控S3存储桶中的新上传文件。当检测到上传的图片时,将文件名写入内存中的文本文件,并使用该文本文件跟踪已处理的图片。 D. 启动一个亚马逊EC2实例来监控亚马逊简单队列服务(Amazon SQS)队列。当队列中添加项目时,在EC2实例的文本文件中记录文件名并调用Lambda函数。 E. 配置一个亚马逊EventBridge(亚马逊CloudWatch事件)事件来监控S3存储桶。当图片上传时,向一个亚马逊简单通知服务(Amazon SNS)主题发送警报,其中包含应用程序所有者的电子邮件地址以供进一步处理。 A. A B. B C. C D. D E. E 本题考察使用AWS无状态组件自动处理图片上传的方案设计。正确答案AB组合原因: A正确 – S3事件通知+SQS队列是标准异步处理模式,可持久保存事件消息 B正确 – SQS作为Lambda触发器可自动处理消息并在成功后删除,符合无状态要求 C错误 – Lambda函数不应主动监控S3(有状态),且内存跟踪不符合持久性要求 D错误 – 使用EC2实例监控队列会引入有状态组件,与无状态架构原则冲突 E错误 – SNS邮件通知无法直接触发处理流程,不满足自动压缩的需求 19 / 100 分类: SAA-C03 19. A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server. Which solution will meet these requirements with the LEAST operational overhead? A. Create a Network Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection. B. Create an Application Load Balancer in the public subnet of the application’s VPC to route the traffic to the appliance for packet inspection. C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway. D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance. 一家公司在AWS上部署了一个三层网络应用程序。网络服务器部署在虚拟私有云(VPC)的公共子网中。 应用服务器和数据库服务器部署在同一VPC的私有子网中。该公司已在检查VPC中部署了来自AWS Marketplace的第三方虚拟防火墙设备。 该设备配置有一个可以接受IP数据包的IP接口。 解决方案架构师需要将该网络应用程序与该设备集成,以便在流量到达网络服务器之前检查所有流向该应用程序的流量。 哪种解决方案能在最低运维开销下满足这些要求? A. 在应用程序VPC的公共子网中创建网络负载均衡器,将流量路由到设备进行数据包检查。 B. 在应用程序VPC的公共子网中创建应用负载均衡器,将流量路由到设备进行数据包检查。 C. 在检查VPC中部署一个中转网关,配置路由表以通过中转网关路由传入的数据包。 D. 在检查VPC中部署一个网关负载均衡器,创建一个网关负载均衡器终端节点来接收传入的数据包并将数据包转发到设备。 A. A B. B C. C D. D 选项A(在网络应用的VPC的公共子网中创建网络负载均衡器,将流量路由到防火墙进行数据包检查)不正确,因为网络负载均衡器(Network Load Balancer)主要用于第4层流量路由,无法像Gateway Load Balancer那样直接集成第三方防火墙设备。选项B(在网络应用的VPC的公共子网中创建应用负载均衡器,将流量路由到防火墙进行数据包检查)不正确,因为应用负载均衡器(Application Load Balancer)主要用于第7层流量路由,同样无法直接集成防火墙进行数据包检查。选项C(在检查VPC中部署传输网关,配置路由表通过传输网关路由传入的数据包)不正确,虽然传输网关可以实现流量路由,但其配置复杂,且无法像Gateway Load Balancer那样直接提供防火墙集成功能。选项D(在检查VPC中部署Gateway Load Balancer,创建Gateway Load Balancer终端节点接收传入的数据包并转发到防火墙设备)是正确的,因为Gateway Load Balancer专门设计用于与第三方安全设备集成,可以轻松地将流量重定向到防火墙进行安全检查,且操作维护成本最低。 20 / 100 分类: SAA-C03 20. A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region.
The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance. A solutions architect needs to minimize the time that is required to clone the production data into the test environment. Which solution will meet these requirements? A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment. B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment. C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots. D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment. 一家公司希望提升将大量生产数据克隆至同一AWS区域测试环境的能力。这些数据存储在Amazon Elastic Block Store(Amazon EBS)卷上的Amazon EC2实例中。对克隆数据的修改不得影响生产环境。访问该数据的软件需要持续稳定的高I/O性能。 解决方案架构师需最大限度缩短将生产数据克隆至测试环境所需的时间。 下列哪个解决方案能满足这些要求? A. 对生产环境的EBS卷创建EBS快照,将快照恢复至测试环境中的EC2实例存储卷。 B. 将生产环境的EBS卷配置为使用EBS多挂载功能,对生产环境的EBS卷创建快照,然后将生产环境的EBS卷挂载到测试环境中的EC2实例。 C. 对生产环境的EBS卷创建快照,创建并初始化新的EBS卷,在从生产环境EBS快照恢复卷之前,将新EBS卷挂载到测试环境中的EC2实例。 D. 对生产环境的EBS卷创建快照,在EBS快照上启用EBS快速快照恢复功能,将快照恢复到新的EBS卷,然后将新EBS卷挂载到测试环境中的EC2实例。 A. A B. B C. C D. D 正确答案是D选项,因为EBS快速快照恢复(Fast Snapshot Restore)功能可以显著减少从快照创建新卷所需的时间,并立即提供最大性能,满足测试环境对高I/O性能的要求。 其他选项分析:A选项错误:EC2实例存储卷不是持久性存储,不能保证数据持久性,也无法提供与生产环境相同的数据保护级别。B选项错误:EBS Multi-Attach功能允许多个实例同时访问同一EBS卷,但这会直接修改生产数据,违反了题目要求不影响生产环境的条件。C选项错误:创建和初始化新EBS卷后再恢复快照的方式效率低下,不如直接使用Fast Snapshot Restore功能快速创建新卷。 21 / 100 分类: SAA-C03 21.
An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours. Which solution will meet these requirements with the LEAST operational overhead? A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3. B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL. C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL. D. Use an Amazon S3 bucket to host the website’s static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB. 一家电子商务公司希望在AWS上推出一个每日特价网站。每天将仅展示一款特价商品,持续24小时。该公司希望在高峰时段能够处理每小时数百万次请求,并保持毫秒级延迟。哪种解决方案能够以最小的运营开销满足这些需求? A. 使用Amazon S3在不同存储桶中托管完整网站。添加Amazon CloudFront分发。将S3存储桶设置为分发的源站。将订单数据存储在Amazon S3中。 B. 在跨多个可用区的自动扩展组中的Amazon EC2实例上部署完整网站。添加应用程序负载均衡器(ALB)来分发网站流量。为后端API再添加一个ALB。将数据存储在Amazon RDS for MySQL中。 C. 将整个应用程序迁移到容器中运行。在Amazon Elastic Kubernetes服务(Amazon EKS)上托管容器。使用Kubernetes集群自动扩展器来增减pod数量以处理流量突增。将数据存储在Amazon RDS for MySQL中。 D. 使用Amazon S3存储桶托管网站的静态内容。部署一个Amazon CloudFront分发。将S3存储桶设置为源站。使用Amazon API Gateway和AWS Lambda函数作为后端API。将数据存储在Amazon DynamoDB中。 A. A B. B C. C D. D 正确答案是D选项,原因如下: 1. 
**S3静态网站托管+CloudFront**:D方案使用S3托管静态内容并通过CloudFront分发,能够以极低延迟处理百万级请求,完全匹配题干要求的”millions of requests with millisecond latency”。2. **无服务器架构优势**:API Gateway+Lambda+DynamoDB的组合实现了全托管的后端服务,无需运维服务器,符合”LEAST operational overhead”要求。3. **对比分析错误选项**: – A选项错误:虽然S3+CloudFront可行,但将订单数据存储在S3不符合数据库最佳实践,S3不适合高频更新场景。 – B选项错误:EC2+ALB+RDS方案需要自行维护服务器和数据库扩展性,运维负担大。 – C选项错误:EKS容器方案虽然弹性但复杂度高,Kubernetes运维需要专业团队。4. **D方案技术匹配**: – CloudFront边缘缓存解决全球访问延迟 – DynamoDB可自动扩展应对秒杀场景 – Lambda按需执行实现零闲置成本该架构完美平衡了性能需求与运维简易性。 22 / 100 分类: SAA-C03 22. A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements? A. S3 Standard B. S3 Intelligent-Tiering C. S3 Standard-Infrequent Access (S3 Standard-IA) D. S3 One Zone-Infrequent Access (S3 One Zone-IA) 一位解决方案架构师正在使用亚马逊S3设计一个新的数字媒体应用程序的存储架构。 媒体文件必须能够抵御单个可用区的故障。 有些文件会被频繁访问,而其他文件的访问模式难以预测且很少被访问。 解决方案架构师必须最小化存储和检索媒体文件的成本。 哪种存储选项符合这些要求? A. S3标准存储 B. S3智能分层存储 C. S3标准-不频繁访问存储(S3 Standard-IA) D. S3单区-不频繁访问存储(S3 One Zone-IA) A. A B. B C. C D. D 正确答案是B(S3 Intelligent-Tiering)。原因如下:1. 题目要求媒体文件必须能承受单个可用区的故障(数据需要跨AZ冗余),因此排除了D选项(S3 One Zone 单可用区存储)。2. S3 Intelligent-Tiering智能分层存储能自动将数据在频繁访问层(Frequent)和不频繁访问层(Infrequent)之间自动迁移,完美匹配题目中’部分文件频繁访问、部分文件很少访问且模式不可预测’的需求。3. 相比于单独使用S3 Standard(只适合频繁访问)或S3 Standard-IA(只适合不频繁访问),智能分层的成本优化能力更强。4. 该存储方案在保持跨AZ冗余的同时,通过自动分层机制最小化了存储和检索成本。 其他选项分析:A(S3 Standard):虽然跨AZ冗余,但没有针对不频繁访问数据进行优化,成本较高C(S3 Standard-IA):针对不频繁数据优化,但对频繁访问数据收费更高D(S3 One Zone-IA):虽然成本低但不满足跨AZ冗余的需求 23 / 100 分类: SAA-C03 23. A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not accessed after 1 month. 
The company must keep the files indefinitely. Which storage solution will meet these requirements MOST cost-effectively? A. Configure S3 Intelligent-Tiering to automatically migrate objects. B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month. C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month. D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month. 一家公司正在使用亚马逊S3标准存储来存储备份文件。这些文件在1个月内会被频繁访问。然而,1个月后这些文件就不再被访问。公司必须无限期地保留这些文件。 哪种存储解决方案能以最具成本效益的方式满足这些需求? A. 配置S3智能分层来自动迁移对象。 B. 创建一个S3生命周期配置,在1个月后将对象从S3标准存储转移到S3 Glacier Deep Archive。 C. 创建一个S3生命周期配置,在1个月后将对象从S3标准存储转移到S3标准-不频繁访问(S3 Standard-IA)。 D. 创建一个S3生命周期配置,在1个月后将对象从S3标准存储转移到S3单区-不频繁访问(S3 One Zone-IA)。 A. A B. B C. C D. D 这个问题考察的是如何根据数据访问模式选择最具成本效益的Amazon S3存储方案。根据题目描述,文件在前1个月被频繁访问,之后不再访问但需要永久保留。 各选项分析:A. S3 Intelligent-Tiering虽然可以自动迁移对象,但不适用于长期不访问的数据(访问频率为0),会产生额外的监控费用。B. 正确答案。S3 Glacier Deep Archive是目前AWS存储成本最低的方案(每GB每月约0.001美元,即每TB约1美元),适合极少访问且需要长期保留的数据,30天后自动迁移可最大限度节省成本。C. S3 Standard-IA虽然比标准存储便宜,但仍比Glacier Deep Archive贵十倍以上,不适合完全不再访问的数据。D. S3 One Zone-IA更不适合,因为数据只保存在单可用区,且有永久性丢失风险,不符合题目永久保留的要求。 最佳实践是:高频访问阶段使用S3 Standard,之后自动转到超低成本的Glacier Deep Archive,这样既满足访问需求又最大程度节约成本。 24 / 100 分类: SAA-C03 24. A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical scaling. How should the solutions architect generate the information with the LEAST operational overhead? A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types. B.
Use Cost Explorer’s granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types. C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months. D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance types. 一家公司发现最近账单中亚马逊EC2成本有所上升。 计费团队注意到几台EC2实例的实例类型出现了不必要的垂直扩展。 解决方案架构师需要创建一个对比最近两个月EC2成本的图表,并进行深入分析以确定垂直扩展的根本原因。 解决方案架构师应该如何以最少的操作开销生成这些信息? A. 使用AWS预算创建预算报告,并根据实例类型比较EC2成本。 B. 使用成本管理器的精细过滤功能,基于实例类型对EC2成本进行深入分析。 C. 使用AWS计费和成本管理仪表板中的图表,根据实例类型比较最近两个月的EC2成本。 D. 使用AWS成本和使用报告创建报告并发送到亚马逊S3存储桶。将Amazon QuickSight与Amazon S3作为数据源,基于实例类型生成交互式图表。 A. A B. B C. C D. D 正确答案是B,使用Cost Explorer的精细过滤功能来分析基于实例类型的EC2成本。 解析:1. Cost Explorer (B) 是专门用于成本分析和可视化的AWS服务,提供了预构建的实例类型过滤功能,可以直接进行深入分析,无需额外操作。 2. 其他选项的不足:– A选项(AWS Budgets)主要用于预算警报而不是成本分析,不能创建详细的成本比较图表– C选项的Billing控制台图表不具备足够的精细过滤功能– D选项虽然能实现目标(通过CUR+QuickSight),但需要额外配置S3和QuickSight,操作复杂度最高 3. 关键考量:题目明确要求’最少操作开销’,Cost Explorer是开箱即用的服务,最能满足这个需求 25 / 100 分类: SAA-C03 25. A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database. During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort. Which solution will meet these requirements? A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers. B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster.
Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster. C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS). D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue. 一家公司正在设计一个应用程序。该应用程序使用AWS Lambda函数通过Amazon API Gateway接收信息,并将信息存储在Amazon Aurora PostgreSQL数据库中。 在概念验证阶段,公司必须大幅提高Lambda配额以处理需要加载到数据库中的大量数据。解决方案架构师必须推荐一种新设计来提高可扩展性并最小化配置工作。 哪种解决方案能满足这些要求? A. 将Lambda函数代码重构为运行在Amazon EC2实例上的Apache Tomcat代码。使用原生Java数据库连接(JDBC)驱动程序连接数据库。 B. 将平台从Aurora更改为Amazon DynamoDB。预置一个DynamoDB加速器(DAX)集群。使用DAX客户端SDK将现有DynamoDB API调用指向DAX集群。 C. 设置两个Lambda函数。配置一个函数来接收信息。配置另一个函数将信息加载到数据库中。使用Amazon Simple Notification Service(Amazon SNS)集成Lambda函数。 D. 设置两个Lambda函数。配置一个函数来接收信息。配置另一个函数将信息加载到数据库中。使用Amazon Simple Queue Service(Amazon SQS)队列集成Lambda函数。 A. A B. B C. C D. D 正确答案是D,原因如下: 1. **选项A**错误:重构Lambda代码为运行在EC2上的Tomcat应用并使用JDBC驱动连接数据库,这会增加管理EC2实例的复杂性,且无法解决Lambda配额限制的问题。EC2需要手动扩展,违背了最小化配置努力的要求。 2. **选项B**错误:虽然DynamoDB具备高扩展性,但迁移到DynamoDB需要重构数据模型和API,改动远超题目要求的最小化配置工作。DAX仅适用于DynamoDB的读加速,不适用此场景。 3. **选项C**错误:使用SNS集成两个Lambda函数。SNS是发布/订阅模型,消息无法持久化且缺乏重试机制。在数据量大时可能导致数据丢失,无法保证可靠的数据入库。 4. **选项D正确**: – 采用SQS队列解耦两个Lambda函数,自动缓冲突发流量 – SQS提供消息持久化和至少一次投递保证,确保数据不丢失 – 接收函数快速响应API Gateway,写入队列后立即返回 – 入库函数从队列按需处理,天然实现自动扩展 – 完全托管服务,无需额外配置即可实现高扩展性 – 保持现有Aurora数据库无需迁移,最小化改动成本 26 / 100 分类: SAA-C03 26. A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes. What should a solutions architect do to accomplish this goal? A. Turn on AWS Config with the appropriate rules. B.
Turn on AWS Trusted Advisor with the appropriate checks. C. Turn on Amazon Inspector with the appropriate assessment template. D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events). 一家公司需要审查其AWS云部署,以确保其Amazon S3存储桶没有未经授权的配置变更。 解决方案架构师应该做什么来实现这一目标? A. 开启AWS Config并配置适当的规则。 B. 开启AWS Trusted Advisor并配置适当的检查。 C. 开启Amazon Inspector并配置适当的评估模板。 D. 开启Amazon S3服务器访问日志记录。配置Amazon EventBridge(Amazon CloudWatch Events)。 A. A B. B C. C D. D 正确答案是A,启用AWS Config并配置适当规则。 解析: AWS Config是一项服务,可以监控和记录AWS资源的配置变更,帮助识别未经授权的变更。通过设置适当的规则(如s3-bucket-public-read-prohibited等),当S3存储桶的配置发生变更时,可以发出警报或触发自动修复。 错误选项分析: B. AWS Trusted Advisor主要用于成本优化、性能、安全性和容错方面的检查,不能持续监控资源配置变更。 C. Amazon Inspector用于评估EC2实例的安全漏洞和不符合标准的配置,不适用于监控S3存储桶的配置变更。 D. S3服务器访问日志仅记录对存储桶中对象的访问请求,而不是存储桶本身的配置变更。EventBridge虽然可以监控事件,但无法直接防止配置变更。 因此,AWS Config是唯一能够持续监控并检测S3存储桶未经授权配置变更的服务。 27 / 100 分类: SAA-C03 27. A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company’s product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege. Which solution will meet these requirements? A. Share the dashboard from the CloudWatch console. Enter the product manager’s email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager. B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager. C. Create an IAM user for the company’s employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager.
Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section. D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard. 一家公司正在推出一个新应用程序,并将在一个亚马逊CloudWatch仪表盘上展示应用指标。该公司的产品经理需要定期访问这个仪表盘。产品经理没有AWS账户。解决方案架构师必须按照最小权限原则为其提供访问权限。下列哪种解决方案能够满足这些需求? A. 从CloudWatch控制台分享仪表盘。输入产品经理的电子邮件地址,并完成分享步骤。向产品经理提供一个可分享的仪表盘链接。 B. 专门为产品经理创建一个IAM用户。给该用户附加CloudWatchReadOnlyAccess这个AWS托管策略。与产品经理分享新的登录凭据。向产品经理分享正确仪表盘的浏览器URL。 C. 为公司的员工创建一个IAM用户。给该IAM用户附加ViewOnlyAccess这个AWS托管策略。与产品经理分享新的登录凭据。让产品经理导航至CloudWatch控制台,并通过名称在仪表盘部分定位该仪表盘。 D. 在公有子网中部署一个堡垒服务器。当产品经理需要访问仪表盘时,启动服务器并分享RDP凭据。在堡垒服务器上,确保浏览器配置为使用具有查看仪表盘适当权限的缓存的AWS凭据打开仪表盘URL。 A. A B. B C. C D. D 根据最小权限原则和场景需求,正确答案是A选项。原因如下: A选项(正确):CloudWatch原生支持通过邮件分享仪表板链接给非AWS账户用户,无需创建IAM实体,完美实现了最小权限原则且操作最简便。 B选项(错误):虽然设置了只读权限,但不必要地创建了IAM用户,违反了题目中’没有AWS账户’的前提条件。 C选项(错误):与B选项类似的问题,且ViewOnlyAccess策略权限过广(包含所有AWS服务的只读权限),不符合最小权限原则。 D选项(错误):部署堡垒机是严重过度设计的方案,会产生不必要的运维成本和安全隐患,且RDP方式完全不符合云原生最佳实践。 28 / 100 分类: SAA-C03 28. A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company’s security team needs a single sign-on (SSO) solution across all the company’s accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory. Which solution will meet these requirements? A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory. B.
Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory. C. Use AWS Directory Service. Create a two-way trust relationship with the company’s self-managed Microsoft Active Directory. D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. 一家公司正在将应用程序迁移到AWS。这些应用程序部署在不同的账户中。该公司通过使用AWS组织集中管理这些账户。 公司的安全团队需要一个跨公司所有账户的单点登录(SSO)解决方案。公司必须继续在其本地自主管理的微软活动目录中管理用户和组。 哪种解决方案能满足这些需求? A. 从AWS SSO控制台启用AWS单点登录(AWS SSO)。使用AWS目录服务为微软活动目录创建单向林信任或单向域信任,将公司自主管理的微软活动目录与AWS SSO连接。 B. 从AWS SSO控制台启用AWS单点登录(AWS SSO)。使用AWS目录服务为微软活动目录创建双向林信任,将公司自主管理的微软活动目录与AWS SSO连接。 C. 使用AWS目录服务。与公司自主管理的微软活动目录创建双向信任关系。 D. 在本地部署身份提供商(IdP)。从AWS SSO控制台启用AWS单点登录(AWS SSO)。 A. A B. B C. C D. D 正确选项是 B。解释如下: 1. 题目要求通过AWS Organizations集中管理多个账户,并需要跨账户的SSO解决方案,同时需要继续使用本地自托管的Microsoft Active Directory管理用户和组。 2. 选项B:– 通过AWS SSO控制台启用AWS SSO服务– 使用AWS Directory Service for Microsoft Active Directory创建一个双向林信任关系– 双向林信任允许本地AD和AWS托管AD之间的完全身份验证互通– 这是推荐的企业级集成方案,可以保持现有的用户管理方式 3. 其他选项分析:A. 单向信任关系不足以实现题目要求的完整集成,且不如双向信任安全可靠C. 仅使用AWS Directory Service不足以提供跨账户SSO功能D.
部署本地IdP会增加复杂性和管理成本,不符合题目保持现有AD管理的要求 4. 关键点是:AWS SSO提供跨账户SSO能力,双向林信任确保本地AD可以完全集成,同时保持现有的用户管理方式。 29 / 100 分类: SAA-C03 29. A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions. The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions. Which solution will meet these requirements? A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region. B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region. C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin. D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin. 一家公司提供基于UDP连接的互联网语音协议(VoIP)服务。该服务由运行在自动扩展组中的亚马逊EC2实例组成。 公司在多个AWS区域部署了该服务。 公司需要将用户路由至延迟最低的区域,同时还需要实现跨区域的自动故障转移。 哪个解决方案能够满足这些需求? A. 部署网络负载均衡器(NLB)及关联的目标组。将目标组与自动扩展组关联。在每个区域使用NLB作为AWS全球加速器终端节点。 B. 部署应用负载均衡器(ALB)及关联的目标组。将目标组与自动扩展组关联。在每个区域使用ALB作为AWS全球加速器终端节点。 C. 部署网络负载均衡器(NLB)及关联的目标组。将目标组与自动扩展组关联。创建指向每个NLB别名的亚马逊Route 53延迟记录。创建使用该延迟记录作为源的亚马逊CloudFront分发。 D. 部署应用负载均衡器(ALB)及关联的目标组。将目标组与自动扩展组关联。创建指向每个ALB别名的亚马逊Route 53加权记录。部署使用该加权记录作为源的亚马逊CloudFront分发。 A. A B. B C. C D. 
D 这道题考察的是如何实现跨区域低延迟访问和自动故障转移的VoIP服务架构设计。正确答案是选项A,原因如下: 1. 题目要求使用UDP协议(VoIP常用协议),NLB支持UDP而ALB不支持(ALB仅支持HTTP/HTTPS),因此首先排除使用ALB的B/D选项 2. Global Accelerator专门用于优化跨区域流量,自动将用户路由到延迟最低的AWS区域,并内置跨区域故障转移能力 3. 选项C的Route 53延迟路由虽然也能实现区域选择,但需要额外配置CloudFront,且不如Global Accelerator对UDP流量的原生支持 4. 选项D的加权记录无法实现基于延迟的路由,也不适用于UDP流量 5. Global Accelerator可以与NLB直接集成,为UDP应用提供稳定的静态IP地址和自动的跨区域故障转移 错误选项分析: B:ALB不支持UDP协议,不适用于VoIP服务 C:虽然实现了延迟路由,但架构复杂且缺乏Global Accelerator的流量优化能力 D:加权记录不适合延迟敏感场景且ALB不支持UDP 30 / 100 分类: SAA-C03 30. A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance. Which solution meets these requirements MOST cost-effectively? A. Stop the DB instance when tests are completed. Restart the DB instance when required. B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed. C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required. D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required. 一个开发团队在其启用了Performance Insights的通用型Amazon RDS for MySQL数据库实例上每月运行资源密集型测试。 测试每月进行一次,持续48小时,并且是唯一使用数据库的过程。 团队希望在保持数据库实例的计算和内存属性不变的情况下降低测试运行成本。 哪种解决方案最经济有效地满足这些要求? A. 在测试完成后停止数据库实例。需要时重新启动数据库实例。 B. 对数据库实例使用自动扩展策略,以便在测试完成后自动扩展。 C. 在测试完成后创建快照。终止数据库实例并在需要时恢复快照。 D. 在测试完成后将数据库实例修改为低容量实例。需要时再次修改数据库实例。 A. A B. B C. C D. D 正确答案是C。 A选项(停止DB实例并在需要时重启)看似可行,但实际上RDS实例停止后仍然会收取存储费用,而且重启需要时间,可能影响测试计划。 B选项(使用自动扩展策略)不适用,因为题目明确要求不减少计算和内存属性,且RDS并不支持基于负载的自动扩展实例类型。 C选项(创建快照后终止实例)是最经济的方案:1. 快照存储费用远低于运行实例的费用2. 终止实例后不再产生计算费用3. 需要时可以快速从快照恢复4. 完全保留了原来的计算和内存配置 D选项(修改为低容量实例)虽然有一定节约效果,但仍然会产生计算费用,不如C方案彻底。同时题目要求不减少计算和内存属性,修改实例类型可能影响这些属性。 31 / 100 分类: SAA-C03 31. 
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check. What should a solutions architect do to accomplish this? A. Use AWS Config rules to define and detect resources that are not properly tagged. B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually. C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance. D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code. 一家在AWS上托管其网络应用程序的公司希望确保所有Amazon EC2实例、Amazon RDS数据库实例和Amazon Redshift集群都配置了标签。该公司希望最小化配置和操作此项检查的工作量。解决方案架构师应该采取什么措施来实现这一目标? A. 使用AWS Config规则来定义和检测未正确标记的资源。B. 使用Cost Explorer显示未正确标记的资源,并手动标记这些资源。C. 编写API调用来检查所有资源的标签分配情况,并定期在EC2实例上运行该代码。D. 编写API调用来检查所有资源的标签分配情况,并通过Amazon CloudWatch安排AWS Lambda函数定期运行该代码。 A. A B. B C. C D. D 正确答案解析:A选项是唯一完全正确的解决方案。AWS Config服务可以创建自定义规则来自动检查资源是否符合标签策略,并能持续监控资源配置变化,完全符合题目要求的自动化检查需求。 其他选项错误原因:B选项:Cost Explorer主要用于成本分析,不能自动化执行标签检查,且手动标记不符合题目最小化操作的要求。C选项:自行编写API调用虽然技术上可行,但需要维护代码并管理EC2实例,不符合最小化运维投入的要求。D选项:虽然使用了Lambda实现自动化,但仍需要自行开发维护检查逻辑,不如直接使用AWS Config规则简单高效。AWS Config作为专门的服务已经内置了这类合规性检查功能。 32 / 100 分类: SAA-C03 32. A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website? A. Containerize the website and host it in AWS Fargate. B. Create an Amazon S3 bucket and host the website there. C. Deploy a web server on an Amazon EC2 instance to host the website. D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
题目: 一个开发团队需要托管一个将被其他团队访问的网站。网站内容包括HTML、CSS、客户端JavaScript和图片。 以下哪种方法托管该网站最具成本效益? A. 将网站容器化并在AWS Fargate上托管。 B. 创建一个Amazon S3存储桶并在其中托管网站。 C. 在Amazon EC2实例上部署Web服务器来托管网站。 D. 配置一个使用Express.js框架的AWS Lambda目标的应用程序负载均衡器。 A. A B. B C. C D. D 正确答案B解析:亚马逊S3是最经济高效的静态网站托管方案,因为:1. S3专门为静态内容优化,无需管理服务器(零运维成本)2. 按实际存储量和访问量计费,无预置资源浪费3. 原生支持HTTP/HTTPS访问,自动处理扩展 其他选项的问题:A. Fargate需要为容器持续付费,适合动态应用但成本过高C. EC2需支付实例持续运行费用,且需自行维护web服务器D. ALB+Lambda架构复杂,适合API服务但成本远高于静态托管 33 / 100 分类: SAA-C03 33. A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval. What should a solutions architect recommend to meet these requirements? A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications. B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3. C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream. D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB.
Other applications can consume transaction files stored in Amazon S3. 一家公司在AWS上运行一个在线市场网络应用程序。该应用在高峰时段为数以十万计的用户提供服务。 公司需要一个可扩展的近实时解决方案,来与其他几个内部应用分享数百万笔金融交易的详细信息。这些交易还需经过处理以去除敏感数据,然后存入文档数据库以供低延迟检索。 解决方案架构师应推荐什么方案来满足这些需求? A. 将交易数据存储到Amazon DynamoDB。在DynamoDB中设置规则,在写入时从每笔交易中去除敏感数据。使用DynamoDB Streams与其他应用共享交易数据。 B. 将交易数据流式传输到Amazon Kinesis Data Firehose,将数据存入Amazon DynamoDB和Amazon S3。使用与Kinesis Data Firehose集成的AWS Lambda去除敏感数据。其他应用可以消费存储在Amazon S3中的数据。 C. 将交易数据流式传输到Amazon Kinesis Data Streams。使用AWS Lambda集成从每笔交易中去除敏感数据,然后将交易数据存储到Amazon DynamoDB。其他应用可以从Kinesis数据流中消费交易数据。 D. 将批量交易数据以文件形式存储在Amazon S3中。使用AWS Lambda处理每个文件,在更新Amazon S3中的文件前去除敏感数据。Lambda函数随后将数据存入Amazon DynamoDB。其他应用可以消费存储在Amazon S3中的交易文件。 A. A B. B C. C D. D
正确答案解析:C Kinesis Data Streams 是处理实时流数据的最佳选择,配合Lambda函数可以在数据存入DynamoDB前进行敏感信息过滤,同时允许其他应用通过Kinesis流实时消费数据。 选项A错误原因:DynamoDB Streams主要用于跟踪表变更,不具备实时处理能力,且直接操作数据库会影响性能。 选项B错误原因:虽然Kinesis Firehose可存储数据到S3/DynamoDB,但S3适合批量分析而非实时共享,且架构复杂度更高。 选项D错误原因:基于S3文件的批量处理无法满足近实时(near-real-time)需求,存在处理延迟问题。 34 / 100 分类: SAA-C03 34. A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources. What should a solutions architect do to meet these requirements? A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls. B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls. C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls. D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls. 一家公司在AWS上托管其多层应用程序。出于合规性、治理、审计和安全考虑,该公司必须跟踪其AWS资源的配置变更,并记录对这些资源进行的API调用历史记录。解决方案架构师应该采取什么措施来满足这些要求? A. 使用AWS CloudTrail跟踪配置变更,并使用AWS Config记录API调用。 B. 使用AWS Config跟踪配置变更,并使用AWS CloudTrail记录API调用。 C. 使用AWS Config跟踪配置变更,并使用Amazon CloudWatch记录API调用。 D. 使用AWS CloudTrail跟踪配置变更,并使用Amazon CloudWatch记录API调用。 A. A B. B C. C D. D 正确选项是B:使用AWS Config跟踪配置变更,使用AWS CloudTrail记录API调用。 解析: 1. AWS Config用于持续监控和记录AWS资源的配置变更,提供配置历史记录和变更通知,完全符合题目中『track configuration changes』的要求。 2. AWS CloudTrail专门用于记录账户级别的API调用活动(包括管理事件和数据事件),满足『record API calls』的要求。 错误选项分析: A. 将两个服务的功能弄反了,AWS Config不做API记录,CloudTrail不跟踪资源配置变更。 C. CloudWatch主要用于指标监控和日志收集,不具备原生API调用记录功能(需依赖CloudTrail日志导入)。 D.
同样错误地将CloudTrail用于配置变更跟踪,且CloudWatch不能直接记录原始API调用。 35 / 100 分类: SAA-C03 35. A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company’s solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks. Which solution meets these requirements? A. Enable Amazon GuardDuty on the account. B. Enable Amazon Inspector on the EC2 instances. C. Enable AWS Shield and assign Amazon Route 53 to it. D. Enable AWS Shield Advanced and assign the ELB to it. 一家公司正准备在AWS云中部署一个面向公众的Web应用程序。该架构包含位于弹性负载均衡器(ELB)后面的VPC内的Amazon EC2实例,并使用第三方服务进行DNS解析。公司的解决方案架构师必须推荐一个能够检测并防范大规模DDoS攻击的解决方案。以下哪个方案符合这些要求? A. 在账户上启用Amazon GuardDuty B. 在EC2实例上启用Amazon Inspector C. 启用AWS Shield并将其分配给Amazon Route 53 D. 启用AWS Shield Advanced并将其分配给ELB A. A B. B C. C D. D 此题涉及AWS DDoS防护解决方案的选择。关键要求是在ELB架构下防御大规模DDoS攻击。 A选项Amazon GuardDuty主要用于威胁检测而非DDoS防护,不符合题目需求;B选项Amazon Inspector是用于EC2实例的安全漏洞评估工具,与DDoS防护无关;C选项AWS Shield标准版提供基础DDoS防护,但题目要求防御”大规模”攻击需要高级防护能力,且Route 53并非题目现有架构组件;D选项AWS Shield Advanced专门针对大规模复杂DDoS攻击提供增强保护,且支持直接关联ELB资源进行防护,完全符合题目所有技术要求。其他选项要么防护层级不足,要么与架构不匹配,因此正确答案是D。 36 / 100 分类: SAA-C03 36. A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions. Which solution will meet these requirements with the LEAST operational overhead? A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets. B. 
Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption. C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets. D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets. 一家公司正在AWS云中构建一个应用程序。该应用程序将数据存储在两个AWS区域的Amazon S3存储桶中。 公司必须使用AWS密钥管理服务(AWS KMS)的客户托管密钥来加密所有存储在S3存储桶中的数据。 两个S3存储桶中的数据都必须使用相同的KMS密钥进行加密和解密。数据和密钥必须存储在这两个区域中。 哪种解决方案能以最小的操作开销满足这些需求? A. 在每个区域创建一个S3存储桶。配置S3存储桶使用Amazon S3托管加密密钥(SSE-S3)进行服务器端加密。配置S3存储桶之间的复制。 B. 创建一个客户托管的多区域KMS密钥。在每个区域创建一个S3存储桶。配置S3存储桶之间的复制。配置应用程序使用该KMS密钥进行客户端加密。 C. 在每个区域创建一个客户托管KMS密钥和一个S3存储桶。配置S3存储桶使用Amazon S3托管加密密钥(SSE-S3)进行服务器端加密。配置S3存储桶之间的复制。 D. 在每个区域创建一个客户托管KMS密钥和一个S3存储桶。配置S3存储桶使用AWS KMS密钥(SSE-KMS)进行服务器端加密。配置S3存储桶之间的复制。 A. A B. B C. C D. D 题目要求在两个AWS区域的S3存储桶中使用相同的客户管理KMS密钥加密数据,并要求数据和密钥都存储在两个区域中,同时操作开销最低。 选项A错误,因为它使用S3管理的密钥(SSE-S3)而不是客户管理的KMS密钥。选项B正确,因为它创建了一个多区域的客户管理KMS密钥,该密钥在两个区域中都可用,这样可以使用相同的密钥加密和解密数据,同时通过配置S3桶间复制来满足数据存储要求,且操作开销最低。选项C错误,与A类似,使用了S3管理的密钥而不是KMS客户管理密钥。选项D错误,因为在每个区域创建独立的KMS密钥,这不符合使用相同密钥的要求。 37 / 100 分类: SAA-C03 37. A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework. Which solution will meet these requirements with the LEAST operational overhead? A. Use the EC2 serial console to directly access the terminal interface of each instance for administration. B.
Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session. C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance. D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel. 一家公司最近在其AWS账户中的亚马逊EC2实例上推出了各种新工作负载。该公司需要制定一种策略来远程且安全地访问和管理这些实例。公司需要实施一个可重复的流程,该流程需使用原生AWS服务并遵循AWS完善架构框架。 哪种方案能够以最小的运维开销满足这些需求? A. 使用EC2串行控制台直接访问每个实例的终端界面进行管理。 B. 为每个现有实例和新实例附加适当的IAM角色。使用AWS系统管理器会话管理器建立远程SSH会话。 C. 创建一套管理型SSH密钥对。将公钥加载到每个EC2实例中。在公共子网中部署堡垒主机,为每个实例的管理提供隧道。 D. 建立AWS站点到站点VPN连接。指示管理员通过VPN隧道使用SSH密钥从其本地内部机器直接连接到实例。 A. A B. B C. C D. D 正确选项是B,使用AWS Systems Manager Session Manager通过附加IAM角色来远程管理EC2实例,这是最符合要求且运维开销最小的方案。 详细解析:A选项(使用EC2串行控制台)不正确:串行控制台主要用于故障排查场景,不支持批量管理,而且需要在实例级别配置特殊权限。 B选项(使用Session Manager)是正确答案:它完全符合AWS架构完善的框架,无需维护密钥或堡垒机,通过IAM集中管理权限,提供审计日志,并且是原生的AWS服务。 C选项(使用堡垒主机)不正确:虽然可行,但需要维护密钥和堡垒机基础设施,增加了运维复杂度,不符合最小运维开销的要求。 D选项(使用站点到站点VPN)不正确:虽然提供了安全连接,但需要维护VPN基础设施和SSH密钥,且不提供集中的访问审计功能。 38 / 100 分类: SAA-C03 38. A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website. Which solution meets these requirements MOST cost-effectively? A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries. B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators. C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution. D. 
Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint. 一家公司在Amazon S3上托管了一个静态网站,并使用Amazon Route 53进行DNS解析。该网站在全球范围内的访问需求正在增长。公司必须降低用户访问网站时的延迟。哪种解决方案能以最具成本效益的方式满足这些要求? A. 将包含网站的S3存储桶复制到所有AWS区域。添加Route 53的地理位置路由条目。 B. 在AWS全球加速器中配置加速器。将提供的IP地址与S3存储桶关联。编辑Route 53条目以指向加速器的IP地址。 C. 在S3存储桶前添加Amazon CloudFront分发。编辑Route 53条目以指向CloudFront分发。 D. 为存储桶启用S3传输加速。编辑Route 53条目以指向新端点。 A. A B. B C. C D. D 正确答案是C,使用Amazon CloudFront分发。解析如下: A方案不正确:将S3存储桶复制到所有AWS区域成本高昂且管理复杂,不符合成本效益要求。 B方案不正确:AWS全球加速器通常用于非HTTP/HTTPS流量或需要静态IP的场景,对于静态网站托管不是最优选择。 C方案正确:CloudFront是专为内容分发设计的CDN服务,可以缓存内容到边缘节点,显著降低全球用户访问延迟,且成本效益高。 D方案不正确:S3传输加速适用于大文件上传下载场景,对于静态网站访问延迟改善效果不如CloudFront明显。 39 / 100 分类: SAA-C03 39. A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company’s website. The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem. Which solution addresses this performance issue? A. Change the storage type to Provisioned IOPS SSD. B. Change the DB instance to a memory optimized instance class. C. Change the DB instance to a burstable performance instance class. D.
Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication. 一家公司在其网站上维护着一个可搜索的物品资料库。这些数据存储在一个包含超过1000万行的Amazon RDS for MySQL数据库表中。该数据库配备了2 TB通用型SSD存储。通过公司网站,每天都有数百万次针对这些数据的更新操作。 公司注意到部分插入操作耗时达到10秒甚至更长时间。经排查,公司确定数据库存储性能是问题的根源。 下列哪个解决方案能解决这个性能问题? A. 将存储类型更改为预配置IOPS SSD B. 将数据库实例更改为内存优化型实例类 C. 将数据库实例更改为可突发性能型实例类 D. 启用采用MySQL原生异步复制的多可用区RDS读取副本 A. A B. B C. C D. D 正确答案是A:将存储类型更改为预置IOPS SSD。 解析如下: 1. 选项A正确 – 题目明确说明数据库存储性能是问题所在(’determined that the database storage performance is the problem’)。General Purpose SSD(gp2)的IOPS会随存储容量线性变化,对于2TB的卷,最大只有6,000 IOPS。而Provisioned IOPS SSD(io1/io2)可以单独配置IOPS(最高256,000 IOPS),能直接解决高频率随机写入的性能瓶颈(每天数百万次更新)。 2. 选项B错误 – 内存优化实例类(如R5)主要解决的是内存或CPU瓶颈,而非存储I/O问题。题目已定位到存储性能问题,增加内存并不能缓解存储延迟。 3. 选项C错误 – 可突发性能实例类(如T3)的CPU credits机制适合间歇性工作负载,而题目描述的是持续高频率写入(millions of updates every day),突发性能无法提供稳定的高IOPS。 4. 选项D错误 – 多可用区只读副本通过异步复制分散读负载,但题目中的瓶颈是写入延迟(insert operations taking 10s),该方案既不能提升主实例的写入速度,还会因复制延迟增加写入开销。 40 / 100 分类: SAA-C03 40. A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. What is the MOST operationally efficient solution that meets these requirements? A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts.
Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days. D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue. 一家公司拥有数千台边缘设备,每天总共生成1TB的状态警报。每条警报的大小约为2KB。 解决方案架构师需要实施一个解决方案来接收和存储这些警报以供未来分析使用。 公司要求解决方案必须具有高可用性。同时,公司需要尽量降低成本,并且不希望管理额外的基础设施。 此外,公司希望保留14天的数据以供即时分析,并将超过14天的数据存档。 以下哪种方案在满足这些需求的同时最具运营效率? A. 创建一个Amazon Kinesis Data Firehose传输流来接收警报。配置该Kinesis Data Firehose流将警报传送到Amazon S3存储桶。设置S3生命周期配置,在14天后将数据转移到Amazon S3 Glacier。 B. 在两个可用区启动Amazon EC2实例,并将它们放置在弹性负载均衡器后面以接收警报。在EC2实例上创建脚本将警报存储在Amazon S3存储桶中。设置S3生命周期配置,在14天后将数据转移到Amazon S3 Glacier。 C. 创建一个Amazon Kinesis Data Firehose传输流来接收警报。配置该Kinesis Data Firehose流将警报传送到Amazon OpenSearch服务(Amazon Elasticsearch服务)集群。设置Amazon OpenSearch服务(Amazon Elasticsearch服务)集群每天进行手动快照,并删除超过14天的数据。 D. 创建一个Amazon简单队列服务(Amazon SQS)标准队列来接收警报,并将消息保留期设置为14天。配置消费者轮询SQS队列,检查消息的时效性并按需分析消息数据。如果消息已存在14天,则消费者应将消息复制到Amazon S3存储桶并从SQS队列中删除该消息。 A. A B. B C. C D. D 正确答案是A,原因如下: 1. Kinesis Data Firehose 是托管服务,无需额外管理基础设施,符合题干’minimize costs and does not want to manage additional infrastructure’的要求。2. 每天1TB的数据量(约5亿条记录)属于高吞吐场景,Kinesis Data Firehose可以自动扩展。3. 直接写入S3的方案(A)比经过EC2中转(B)更节省成本,且消除了EC2维护开销。4. 
S3生命周期管理可以完美实现14天热数据和后续冷归档的需求。 其他选项的问题:B:使用EC2集群的方案需要自主管理服务器、负载均衡器等资源,不符合’无需额外管理基础设施’的要求,且操作复杂度高。C:OpenSearch不适合直接存储原始告警数据(尤其是2KB的小文件),会产生高昂的存储成本,且手动快照不符合自动化要求。D:SQS标准队列消息最大保留期仅14天(无法延长),无法满足归档需求;且消费者轮询处理5亿条/天的消息效率极低。 41 / 100 分类: SAA-C03 41. A company’s application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible. Which solution will meet these requirements with the LEAST operational overhead? A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete. B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete. C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule’s target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule’s target. D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
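补充示意(非题目原文):上面几个选项都依赖「S3 事件通知」这一机制——对象上传完成后,S3 会发出一条 JSON 格式的事件。下面这段可本地运行的 Python 草图演示如何从该事件中取出桶名、对象键和大小;字段结构参照 S3 事件通知的 Records 格式,示例数据均为假设。

```python
import json

def parse_s3_event(message: str):
    """解析 S3 事件通知的 JSON,返回 (bucket, key, size) 的列表。"""
    event = json.loads(message)
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"],
                        s3["object"]["key"],
                        s3["object"].get("size")))
    return results
```

在选项B的架构中,这样的事件直接投递给 SNS 主题即可完成通知,无需任何 EC2 参与。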
一家公司的应用程序集成了多个软件即服务(SaaS)源以进行数据收集。该公司运行亚马逊EC2实例来接收数据并将数据上传到亚马逊S3存储桶进行分析。 同一个接收和上传数据的EC2实例在上传完成时还会向用户发送通知。公司注意到应用程序性能较慢,希望尽可能提高性能。 哪种解决方案能在最低运维成本下满足这些要求? A. 创建一个自动扩展组,以便EC2实例可以横向扩展。配置S3事件通知,在上传到S3存储桶完成时向亚马逊简单通知服务(Amazon SNS)主题发送事件。 B. 创建一个亚马逊AppFlow流,在每个SaaS源和S3存储桶间传输数据。配置S3事件通知,在上传到S3存储桶完成时向亚马逊简单通知服务(Amazon SNS)主题发送事件。 C. 为每个SaaS源创建一个亚马逊EventBridge(亚马逊CloudWatch事件)规则以发送输出数据。将S3存储桶配置为规则的目标。创建第二个EventBridge(CloudWatch事件)规则在上传到S3存储桶完成时发送事件。配置亚马逊简单通知服务(Amazon SNS)主题作为第二个规则的目标。 D. 创建一个Docker容器代替EC2实例。在亚马逊弹性容器服务(Amazon ECS)上托管容器化应用程序。配置亚马逊CloudWatch Container Insights,在上传到S3存储桶完成时向亚马逊简单通知服务(Amazon SNS)主题发送事件。 A. A B. B C. C D. D 选项B是最佳解决方案,因为它使用Amazon AppFlow直接从SaaS源传输数据到S3桶,减少了EC2实例的负担,并利用S3事件通知触发Amazon SNS发送完成通知,操作开销最小。解析如下: A选项虽然通过Auto Scaling提高了EC2的处理能力,但仍然依赖EC2处理数据上传和通知,操作开销较高。 C选项使用EventBridge规则直接传输数据虽可行,但需要为每个SaaS源创建规则,增加了配置复杂性和管理成本。 D选项通过容器化改造应用架构,虽然技术先进但实现复杂,改造成本高,且仍需额外配置CloudWatch Container Insights。 相比之下,B方案: 1. 通过AppFlow原生集成SaaS,绕开EC2数据传输瓶颈 2. 利用事件通知机制实现自动触发,无需维护额外计算资源 3. 完整保留原有通知功能的同时大幅简化架构 42 / 100 分类: SAA-C03 42. A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges. What is the MOST cost-effective way for the company to avoid Regional data transfer charges? A. Launch the NAT gateway in each Availability Zone. B. Replace the NAT gateway with a NAT instance. C. Deploy a gateway VPC endpoint for Amazon S3. D. Provision an EC2 Dedicated Host to run the EC2 instances. 一家公司在单个VPC中的亚马逊EC2实例上运行一个高可用性的图像处理应用。EC2实例分布在多个可用区的若干个子网中运行。 这些EC2实例之间不相互通信。但是,EC2实例通过单一NAT网关从亚马逊S3下载图像,并向亚马逊S3上传图像。公司担心数据传输费用问题。 对于该公司来说,避免区域数据传输费用的最具成本效益的方法是什么? A. 在每个可用区中启动NAT网关。B. 用NAT实例替代NAT网关。C. 为亚马逊S3部署网关VPC终端节点。D. 配置一个EC2专用主机来运行EC2实例。 A. 
A B. B C. C D. D 正确答案是C:为Amazon S3部署一个网关VPC端点。 解析:– 当前场景中EC2实例通过NAT网关与S3通信会产生跨可用区的数据传输费用。而通过网关VPC端点访问S3时,流量不会离开AWS网络且不经过NAT网关,因此不会产生额外的数据传输费用。 其他选项错误原因:– A:在每个可用区部署NAT网关虽能提高可用性,但仍会产生跨区域数据传输费用且增加了NAT网关成本。– B:使用NAT实例替代NAT网关在成本上没有优势,同样存在数据传输费用问题。– D:使用EC2专用主机专门运行实例无法解决数据传输费用问题,且会增加主机租赁成本。 43 / 100 分类: SAA-C03 43. A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for both timely backups to Amazon S3 and with minimal impact on internet connectivity for internal users. Which solution meets these requirements? A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint. B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection. C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day. D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account. 一家公司拥有一个本地应用程序,该程序会生成大量对时间敏感的数据,目前这些数据被备份至亚马逊S3。 随着应用程序规模扩大,用户开始抱怨互联网带宽限制问题。解决方案架构师需要设计一个长期解决方案,既能确保数据及时备份到亚马逊S3,又能将对内部用户互联网连接的影响降至最低。 以下哪个方案符合这些要求? A. 建立AWS VPN连接,并通过VPC网关终端节点代理所有流量。 B. 建立新的AWS Direct Connect专用连接,并通过该新连接传输备份流量。 C. 每天订购AWS Snowball设备,将数据加载到Snowball设备上并每日返回给AWS。 D. 通过AWS管理控制台提交支持工单,请求移除该账户的S3服务限制。 A. A B. B C. C D. D 正确答案是B,建立新的AWS Direct Connect连接并通过此新连接引导备份流量。 解析:1. 选项A(建立AWS VPN连接并通过VPC网关端点代理所有流量)不正确,因为VPN仍依赖公共互联网,无法从根本上解决带宽限制问题。虽然VPN提供了加密通道,但带宽仍受ISP限制。 2. 选项B(建立新的AWS Direct Connect连接)是最佳选择。Direct Connect提供专用网络连接,可以绕过公共互联网,提供稳定、高带宽的连接,能够满足大批量时效性数据传输需求,同时不影响内部用户的普通互联网访问。 3. 选项C(每天订购AWS Snowball设备)不适合,虽然Snowball可用于大容量数据传输,但题目要求的是及时备份(time-sensitive),每日人工操作Snowball无法满足及时性需求。 4. 选项D(提交支持票据请求移除S3服务限制)不正确,因为S3的服务限制并不是造成带宽问题的原因,且S3本身不存在会限制数据传输的预设限制。 44 / 100 分类: SAA-C03 44. 
A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.) A. Enable versioning on the S3 bucket. B. Enable MFA Delete on the S3 bucket. C. Create a bucket policy on the S3 bucket. D. Enable default encryption on the S3 bucket. E. Create a lifecycle policy for the objects in the S3 bucket. 一家公司拥有一个包含关键数据的亚马逊S3存储桶。公司必须保护这些数据以防意外删除。 解决方案架构师应采取哪些步骤组合来满足这些要求?(选择两项。) A. 在S3存储桶上启用版本控制。 B. 在S3存储桶上启用MFA删除。 C. 在S3存储桶上创建存储桶策略。 D. 在S3存储桶上启用默认加密。 E. 为S3存储桶中的对象创建生命周期策略。 A. A B. B C. C D. D E. E 正确答案是A和B。 解析: A选项(在S3存储桶上启用版本控制)是正确的。版本控制可以在对象被意外删除时恢复之前的版本,从而保护数据不被意外删除。 B选项(在S3存储桶上启用MFA删除)是正确的。MFA删除要求在执行删除操作时提供多重身份验证,大大降低了数据被意外删除的风险。 C选项(在S3存储桶上创建存储桶策略)错误。虽然存储桶策略可以控制访问权限,但它并不能防止数据被意外删除。 D选项(在S3存储桶上启用默认加密)错误。默认加密可以保护数据的机密性,但不能防止数据被意外删除。 E选项(为S3存储桶中的对象创建生命周期策略)错误。生命周期策略主要用于自动转移或删除旧数据以节省成本,与防止意外删除无关。 45 / 100 分类: SAA-C03 45. A company has a data ingestion workflow that consists of the following: • An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries • An AWS Lambda function to process the data and record metadata The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.) A. Deploy the Lambda function in multiple Availability Zones. B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic. C. Increase the CPU and memory that are allocated to the Lambda function. D. Increase provisioned throughput for the Lambda function. E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
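补充示意(非题目原文):选项B+E的组合是让 SNS 先把通知投递到 SQS 队列,Lambda 再从队列读取,消息因此得以持久化并可自动重试。下面是一段可本地运行的 Python 草图:未开启 raw message delivery 时,SQS 消息体是一个 SNS 信封,真实数据在其 Message 字段中;其中 ingest 为演示用的注入参数,属假设,并非 AWS 接口。

```python
import json

def handler(event, context=None, ingest=print):
    """SQS 触发的 Lambda:拆开 SNS 信封,取出原始通知数据。
    处理失败抛出异常时,消息会留在队列中按可见性超时自动重试,
    这正是该组合能避免数据丢失、无需人工重跑的原因。"""
    for record in event["Records"]:
        envelope = json.loads(record["body"])      # SQS 消息体 = SNS 信封
        payload = json.loads(envelope["Message"])  # 信封中的原始通知内容
        ingest(payload)
```

若在 SNS 订阅上开启了 raw message delivery,则 record["body"] 直接就是原始消息,无需再拆信封。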
一家公司拥有一个数据摄入工作流,包含以下组成部分: • 一个用于通知新数据交付的亚马逊简单通知服务(Amazon SNS)主题 • 一个用于处理数据并记录元数据的AWS Lambda函数 该公司观察到,由于网络连接问题,该摄入工作流偶尔会失败。当此类故障发生时,Lambda函数不会摄入相应数据,除非公司手动重新运行任务。 解决方案架构师应采取哪两项措施的组合,以确保Lambda函数将来能够摄入所有数据?(请选择两项。) A. 在多个可用区中部署Lambda函数。 B. 创建一个亚马逊简单队列服务(Amazon SQS)队列,并将其订阅到SNS主题。 C. 增加分配给Lambda函数的CPU和内存。 D. 提高Lambda函数的预置吞吐量。 E. 修改Lambda函数,让其从亚马逊简单队列服务(Amazon SQS)队列中读取数据。 A. A B. B C. C D. D E. E 本题考查使用SQS队列解决AWS数据摄取工作流可靠性问题的方案。 正确选项BE的分析:B选项正确 – 创建一个SQS队列并订阅SNS主题可以将通知消息持久化存储,即使Lambda暂时不可用,消息也不会丢失。SQS的队列特性可以保证消息不丢失并自动重试。E选项正确 – 修改Lambda函数从SQS队列读取数据可以利用SQS的消息可见性超时等机制,实现自动化的失败重试,无需人工干预。 错误选项分析:A选项错误 – Lambda函数本身已经是跨AZ部署的,多AZ部署不能解决消息丢失问题。C选项错误 – 增加Lambda的计算资源与网络连接问题无关,不能解决消息传输可靠性问题。D选项错误 – 预置吞吐量主要影响Lambda的并发能力,与消息可靠性无关,题目中的问题是消息传递而非处理能力不足。 46 / 100 分类: SAA-C03 46. A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size. Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation. What should a solutions architect do to meet these requirements with the LEAST development effort? A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII. B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII. C.
Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII. D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII. 一家公司拥有一个为商店提供营销服务的应用程序。这些服务基于商店顾客以往的购买记录。 商店通过SFTP向该公司上传交易数据,数据经过处理和分析后会生成新的营销优惠。 部分文件的大小可能超过200GB。 最近,公司发现一些商店上传的文件包含本不应包含的个人身份信息(PII)。公司希望在再次共享PII时向管理员发出警报,同时希望实现自动化补救。 解决方案架构师应如何以最少的开发工作量来满足这些需求? A. 使用Amazon S3存储桶作为安全传输点。使用Amazon Inspector扫描存储桶中的对象。如果对象包含PII,则触发S3生命周期策略删除包含PII的对象。 B. 使用Amazon S3存储桶作为安全传输点。使用Amazon Macie扫描存储桶中的对象。如果对象包含PII,则使用Amazon Simple Notification Service(Amazon SNS)触发通知,提醒管理员删除包含PII的对象。 C. 在AWS Lambda函数中实现自定义扫描算法。当对象被加载到存储桶时触发该函数。如果对象包含PII,则使用Amazon Simple Notification Service(Amazon SNS)触发通知,提醒管理员删除包含PII的对象。 D. 在AWS Lambda函数中实现自定义扫描算法。当对象被加载到存储桶时触发该函数。如果对象包含PII,则使用Amazon Simple Email Service(Amazon SES)触发通知提醒管理员,并触发S3生命周期策略删除包含PII的对象。 A. A B. B C. C D. D 正确答案是B。Amazon Macie是AWS专门用于数据安全和隐私的服务,能够自动发现、分类和保护敏感数据,包括PII。它可以扫描S3桶中的对象并识别PII,然后通过Amazon SNS通知管理员。这个方案最大限度地减少了开发工作,因为Macie已经内置了这些功能。 A选项虽然使用了Amazon Inspector,但Inspector主要用于EC2实例的安全评估,不适合识别PII。此外,自动删除含有PII的对象可能不符合某些合规要求。 C和D选项虽然技术上可行,但它们需要开发自定义扫描算法,这会增加开发工作量,不符合「最少开发努力」的要求。 因此,B选项是最符合题目要求的解决方案。 47 / 100 分类: SAA-C03 47. A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week. What should the company do to guarantee the EC2 capacity? A. Purchase Reserved Instances that specify the Region needed. B. Create an On-Demand Capacity Reservation that specifies the Region needed. C.
Purchase Reserved Instances that specify the Region and three Availability Zones needed. D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed. 一家公司需要在一个特定的亚马逊云计算服务区域中,为即将持续一周的活动预留三个特定可用区的亚马逊弹性计算云服务容量。该公司应该如何做才能保证弹性计算云服务容量?A. 购买指定所需区域的预留实例。B. 创建指定所需区域的按需容量预留。C. 购买指定所需区域和三个可用区的预留实例。D. 创建指定所需区域和三个可用区的按需容量预留。 A. A B. B C. C D. D 正确答案是D。该公司需要为未来1周的活动在特定AWS区域的三个特定可用区中保证EC2容量。 选项D是正确的,因为它使用了按需容量预留(On-Demand Capacity Reservation),可以精确指定区域和三个可用区,确保在需要时获得所需的EC2容量。 选项A和C的错误原因:购买预留实例(Reserved Instances)只能提供成本节约的折扣,但不保证容量可用性,特别是当指定多个可用区时,不能保证所有三个可用区同时都有容量。 选项B的错误原因:虽然使用了按需容量预留,但没有指定具体的可用区,这可能无法满足三个特定可用区的容量需求。 48 / 100 分类: SAA-C03 48. A company’s website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location. What should a solutions architect do to meet these requirements? A. Move the catalog to Amazon ElastiCache for Redis. B. Deploy a larger EC2 instance with a larger instance store. C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive. D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system. 一家公司的网站使用亚马逊EC2实例存储来存放其商品目录。公司希望确保该目录具有高可用性,并能存储在持久的位置。为了满足这些要求,解决方案架构师应该怎么做? A. 将商品目录迁移到Amazon ElastiCache for Redis。 B. 部署具有更大实例存储的更大EC2实例。 C. 将商品目录从实例存储迁移到Amazon S3 Glacier Deep Archive。 D. 将商品目录迁移到Amazon弹性文件系统(Amazon EFS)文件系统。 A. A B. B C. C D. D 正确答案是D,将目录移动到Amazon Elastic File System (Amazon EFS) 文件系统。解析如下: A. 将目录移动到Amazon ElastiCache for Redis – 错误。ElastiCache是一种内存缓存服务,不适合存储持久化数据,因为数据在实例停止或故障时会丢失。 B. 部署具有更大实例存储的更大EC2实例 – 错误。虽然更大的实例存储可以提供更多空间,但实例存储本质上是临时的,EC2实例终止时数据会丢失,无法满足持久性要求。 C. 将目录从实例存储移动到Amazon S3 Glacier Deep Archive – 错误。Glacier Deep Archive适用于长期存档数据,访问延迟非常高(需要数小时恢复),不适合需要高可用性的目录服务。 D. 将目录移动到Amazon EFS – 正确。EFS是完全托管的、高可用的网络文件系统,特点是:1) 数据跨多AZ持久存储 2) 支持多EC2实例同时访问 3) 自动扩展存储容量 4) 按实际使用量付费,完美满足高可用和持久性需求。 49 / 100 分类: SAA-C03 49.
A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1 year old as quickly as possible. A delay in retrieving older files is acceptable. Which solution will meet these requirements MOST cost-effectively? A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval. B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select. C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3. D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive. 一家公司每月存储通话记录文本文件。用户在通话后1年内会随机访问这些文件,但1年后就很少访问。公司希望优化解决方案,让用户能够尽可能快速地查询和检索1年内的文件,对于旧文件的检索延迟是可以接受的。 哪种解决方案最能符合这些要求且最具成本效益? A. 将单个文件与标签一起存储在Amazon S3 Glacier即时检索中。通过查询标签从S3 Glacier即时检索中获取文件。 B. 将单个文件存储在Amazon S3智能分层中。使用S3生命周期策略在1年后将文件移至S3 Glacier灵活检索。使用Amazon Athena查询和检索仍位于Amazon S3中的文件。使用S3 Glacier Select查询和检索位于S3 Glacier中的文件。 C. 将单个文件与标签一起存储在Amazon S3标准存储中。在Amazon S3标准存储中为每个存档存储搜索元数据。使用S3生命周期策略在1年后将文件移至S3 Glacier即时检索。通过从Amazon S3中搜索元数据来查询和检索文件。 D.
将单个文件存储在Amazon S3标准存储中。使用S3生命周期策略在1年后将文件移至S3 Glacier深度归档。将搜索元数据存储在Amazon RDS中。从Amazon RDS查询文件。从S3 Glacier深度归档中检索文件。 A. A B. B C. C D. D 解析: 题目要求一个成本效益最高的解决方案,既能快速检索1年内的文件,又允许延迟检索1年以上的文件。 正确答案B的分析:使用Amazon S3 Intelligent-Tiering可以自动将频繁访问的数据放在低延迟层,而对于不频繁访问的数据会自动移动到成本更低的存储层。结合S3 Lifecycle策略,1年后的文件可自动迁移到S3 Glacier Flexible Retrieval(成本更低但检索延迟较高),完全符合题目要求。 为什么其他选项不正确: A选项:S3 Glacier Instant Retrieval虽然是低延迟归档存储,但成本高于Intelligent-Tiering,且没有利用到生命周期策略分层存储的优势,不符合“最成本效益”的要求。 C选项:1年后将文件移动到S3 Glacier Instant Retrieval(即时检索层)虽然能保证低延迟,但长期存储成本仍高于Flexible Retrieval(灵活检索层),且题目允许旧文件延迟检索,因此这不是最优方案。 D选项:S3 Glacier Deep Archive检索延迟最高(数小时),虽然存储成本最低,但题目仅要求1年以上文件“可接受延迟”,并未要求极端低成本。此外,Amazon RDS作为关系型数据库用于存储检索元数据属于过度设计,增加了复杂度。 50 / 100 分类: SAA-C03 50. A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability. What should a solutions architect do to meet these requirements? A. Create an AWS Lambda function to apply the patch to all EC2 instances. B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances. C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances. D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances. 一家公司在1,000台亚马逊EC2 Linux实例上运行生产工作负载。该工作负载由第三方软件驱动。 公司需要尽快在所有EC2实例上修补该第三方软件,以修复一个关键安全漏洞。 解决方案架构师应采取什么措施来满足这些要求? A. 创建一个AWS Lambda函数来为所有EC2实例应用补丁。 B. 配置AWS Systems Manager补丁管理器来为所有EC2实例应用补丁。 C. 安排一个AWS Systems Manager维护窗口来为所有EC2实例应用补丁。 D. 使用AWS Systems Manager运行命令来执行自定义命令,为所有EC2实例应用补丁。 A. A B. B C. C D. 
D 正确答案是D,使用AWS Systems Manager Run Command运行自定义命令来为所有EC2实例打补丁。原因如下: A选项(使用AWS Lambda函数)不合适,因为Lambda主要用于事件驱动的无服务器计算,不适合直接管理EC2实例的补丁操作,且缺乏集中管理能力。 B选项(配置AWS Systems Manager Patch Manager)不完全正确,因为Patch Manager主要用于操作系统级别的补丁,而题目涉及的是第三方软件补丁。 C选项(安排AWS Systems Manager维护窗口)虽然可以在维护时段执行操作,但题目要求尽快修复,而维护窗口需要预先计划时间,不够敏捷。 D选项是最佳方案,因为Run Command可以:1) 大规模批量执行命令 2) 直接针对第三方软件操作 3) 无需停机或预排时间 4) 提供执行状态跟踪。它能最快响应安全漏洞,满足题目”as quickly as possible”的要求。 51 / 100 分类: SAA-C03 51. A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.) A. Configure the application to send the data to Amazon Kinesis Data Firehose. B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email. C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application’s API for the data. D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data. E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email. 一家公司正在开发一个应用程序,该程序提供订单运输统计数据,可通过REST API检索。公司希望提取运输统计数据,将数据组织成易于阅读的HTML格式,并每天早晨同一时间将报告发送到多个电子邮件地址。 为了解决这一需求,解决方案架构师应采取哪两项步骤组合?(选择两项。) A. 配置应用程序将数据发送到Amazon Kinesis Data Firehose。 B. 使用Amazon Simple Email Service(Amazon SES)格式化数据并通过电子邮件发送报告。 C. 创建一个Amazon EventBridge(Amazon CloudWatch Events)定时事件,触发AWS Glue作业查询应用程序的API获取数据。 D. 创建一个Amazon EventBridge(Amazon CloudWatch Events)定时事件,触发AWS Lambda函数查询应用程序的API获取数据。 E. 
将应用程序数据存储在Amazon S3中。创建一个Amazon Simple Notification Service(Amazon SNS)主题作为S3事件目的地,通过电子邮件发送报告。 A. A B. B C. C D. D E. E 题目要求每天早晨定时从REST API提取订单运输统计数据,组织成易读的HTML格式并发送给多个邮箱地址。正确解决方案是: B选项:使用Amazon SES可以格式化数据(包括生成HTML)并通过邮件发送报告,这是专业的邮件发送服务。 D选项:创建EventBridge定时事件触发Lambda函数查询API获取数据,这是最轻量级的定时触发方案(相比Glue作业更简单经济)。 其他选项错误原因: A选项:Kinesis Data Firehose用于实时流数据处理,不符合定时批量场景需求。 C选项:Glue作业更适合大数据ETL场景,此需求用Lambda更轻量。 E选项:S3事件通知无法直接生成HTML报告,且SNS不支持邮件格式化功能。 52 / 100 分类: SAA-C03 52. A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead. Which solution will meet these requirements? A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage. B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage. C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage. D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage. 一家公司希望将其内部部署的应用程序迁移到AWS。该应用程序生成的文件大小从数十GB到数百TB不等。 应用程序数据必须存储在标准文件系统结构中。公司希望找到一个能够自动扩展、高度可用且运维开销最小的解决方案。 以下哪种方案能满足这些需求? A. 将应用程序迁移到Amazon Elastic Container Service (Amazon ECS)上以容器方式运行,并使用Amazon S3进行存储。 B. 将应用程序迁移到Amazon Elastic Kubernetes Service (Amazon EKS)上以容器方式运行,并使用亚马逊弹性块存储(Amazon EBS)进行存储。 C. 将应用程序迁移到多可用区Auto Scaling组中的Amazon EC2实例,并使用Amazon Elastic File System (Amazon EFS)进行存储。 D. 将应用程序迁移到多可用区Auto Scaling组中的Amazon EC2实例,并使用亚马逊弹性块存储(Amazon EBS)进行存储。 A. A B. B C. C D. D
正确答案是C,因为题目中提到了需要自动扩展、高可用性和最小操作开销的解决方案。 A选项使用Amazon S3作为存储,但S3是对象存储,不支持标准的文件系统结构,不符合要求。 B选项使用Amazon EKS和EBS,虽然EKS适合容器化应用程序,但EBS是块存储,不具有自动扩展和高可用性,不符合要求。 D选项使用EC2实例和EBS存储,虽然Multi-AZ Auto Scaling组提供了高可用性,但EBS不具备自动扩展能力,且需要手动管理,操作开销较大,不符合要求。 C选项的Amazon EFS是一个完全托管的、自动扩展的文件存储服务,支持标准的文件系统结构,并且与Multi-AZ Auto Scaling组结合使用,可以提供高可用性和最小操作开销,完全符合题目要求。 53 / 100 分类: SAA-C03 53. A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, can delete the records during the entire 10-year period. The records must be stored with maximum resiliency. Which solution will meet these requirements? A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years. B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion. C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years. D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year.
Use S3 Object Lock in governance mode for a period of 10 years. 一家公司需要将其会计记录存储在亚马逊简单存储服务(Amazon S3)中。这些记录在1年内必须能够立即访问,然后需要再存档9年。在公司内部,包括管理用户和根用户在内的所有人员,在整个10年期间都不得删除这些记录。这些记录必须以最高恢复力进行存储。 哪个解决方案能够满足这些要求? A. 将记录在整个10年期间都存储在S3 Glacier中。使用访问控制策略拒绝在10年期限内删除记录。 B. 使用S3智能分层存储记录。使用IAM策略拒绝删除记录。10年后,更改IAM策略以允许删除。 C. 使用S3生命周期策略在1年后将记录从S3标准存储转换为S3 Glacier深度归档。使用合规模式的S3对象锁定功能,期限为10年。 D. 使用S3生命周期策略在1年后将记录从S3标准存储转换为S3单区低频访问(S3 One Zone-IA)存储。使用治理模式的S3对象锁定功能,期限为10年。 A. A B. B C. C D. D 正确解决方案必须满足三个核心要求:1. 即时访问1年+归档9年(需要生命周期策略实现存储层级转换);2. 10年内绝对禁止删除(需要S3 Object Lock的compliance模式,该模式下连root用户都无法覆盖保留设置);3. 最高的数据弹性(resiliency,S3 Standard和Glacier Deep Archive都提供多可用区存储)。 选项分析:A. Glacier无法满足第一年即时访问要求,且Glacier的访问控制策略不能防止管理员删除。 B. Intelligent-Tiering无法防止管理员删除,IAM策略可能被更高权限覆盖,不符合绝对防删除要求。 C. 完全正确:S3 Standard满足第一年即时访问;Lifecycle策略实现1年后自动转存Glacier Deep Archive;compliance模式Object Lock确保10年强制保留期(法律合规模式);两种存储类都提供11个9的持久性。 D. One Zone-IA是单可用区存储,不符合最高弹性要求;governance模式允许具备特定权限的用户覆盖保留设置。 54 / 100 分类: SAA-C03 54. A company runs multiple Windows workloads on AWS. The company’s employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files. What should a solutions architect do to meet these requirements? A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files. B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances. C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server. D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
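上文第53题解析中提到的"生命周期转换 + 合规模式对象锁定"组合,可以用boto3的参数形式大致示意如下(仅为示意性草图:函数名、规则ID均为假设,参数结构对应S3的put_bucket_lifecycle_configuration与put_object_lock_configuration接口,未实际调用AWS):

```python
# 示意:第53题选项C所描述的两段配置(假设场景,不调用AWS)

def build_lifecycle_config():
    # 1年(365天)后将对象转入S3 Glacier Deep Archive的生命周期规则
    return {
        "Rules": [
            {
                "ID": "archive-accounting-records",   # 假设的规则ID
                "Filter": {"Prefix": ""},             # 对整个桶生效
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    }

def build_object_lock_config():
    # 合规(COMPLIANCE)模式的默认保留期10年:
    # 保留期内,包括root用户在内的任何人都无法删除对象或缩短保留期
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    }
```

实际使用时,这两个字典分别作为 s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...) 和 s3.put_object_lock_configuration(Bucket=..., ObjectLockConfiguration=...) 的参数传入;注意对象锁定通常需要在创建存储桶时启用。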
一家公司在AWS上运行多个Windows工作负载。公司员工使用托管在两个Amazon EC2实例上的Windows文件共享。 这些文件共享在彼此之间同步数据,并保留重复副本。公司希望获得一个高可用且耐久的存储解决方案,同时保留用户当前访问文件的方式。 解决方案架构师应当如何满足这些需求? A. 将所有数据迁移到Amazon S3,并设置IAM认证以供用户访问文件。 B. 设置Amazon S3文件网关,将其挂载到现有的EC2实例上。 C. 使用多可用区配置将文件共享环境扩展到Amazon FSx for Windows文件服务器,并将所有数据迁移到FSx for Windows文件服务器。 D. 使用多可用区配置将文件共享环境扩展到Amazon弹性文件系统(Amazon EFS),并将所有数据迁移到Amazon EFS。 A. A B. B C. C D. D 正确答案是C,原因是:1. 题目要求保持用户现有的文件访问方式(Windows文件共享),而Amazon FSx for Windows File Server正是为Windows环境设计的原生兼容服务,完全满足需求。2. Multi-AZ配置可以提供高可用性,自动故障转移和数据耐久性。 其他选项分析:A. 将数据迁移到Amazon S3和设置IAM认证虽然能够提供耐久性,但完全改变了用户访问文件的方式(不再是Windows文件共享),且S3不适合直接作为文件系统使用。B. S3 File Gateway虽然可以提供NFS/SMB接口,但本质上仍然是基于对象存储的解决方案,不能完全替代Windows文件共享的完整功能。D. Amazon EFS主要是为Linux设计的NFS文件共享服务,与Windows原生文件共享兼容性较差,且性能特性不适合Windows工作负载。 55 / 100 分类: SAA-C03 55. A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2 instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases. Which solution will meet these requirements? A. Create a new route table that excludes the route to the public subnets’ CIDR blocks. Associate the route table with the database subnets. B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the security group to the DB instances. C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances. D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets. 
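上面第55题的选项中多次出现"以某个安全组(而非CIDR)作为入站流量来源"这类规则。这种安全组相互引用的写法可以用boto3的参数形式大致示意如下(仅为示意性草图:安全组ID、端口均为假设值,参数结构对应EC2的authorize_security_group_ingress接口,未实际调用AWS):

```python
# 示意:允许"私有子网应用安全组"访问"数据库安全组"指定端口的入站规则参数
# (sg-0db.../sg-0app... 均为假设的示例ID,不调用AWS)

def build_db_ingress_params(db_sg_id, app_sg_id, port=3306):
    # 对应 ec2.authorize_security_group_ingress(**params) 的参数形式;
    # UserIdGroupPairs 表示流量来源是另一个安全组,而不是IP地址段
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": port,
                "ToPort": port,
                "UserIdGroupPairs": [{"GroupId": app_sg_id}],
            }
        ],
    }

params = build_db_ingress_params("sg-0db0000000000001", "sg-0app000000000001")
```

这种"安全组引用安全组"的规则会随着私有子网中实例的增减自动生效,无需维护IP地址列表。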
一位解决方案架构师正在开发一个包含多个子网的VPC架构。该架构将托管使用Amazon EC2实例和Amazon RDS数据库实例的应用程序。架构由分布在两个可用区的六个子网组成。每个可用区包含一个公有子网、一个私有子网和一个专门用于数据库的子网。 只有运行在私有子网中的EC2实例才能访问RDS数据库。 以下哪种解决方案满足这些要求? A. 创建一个排除公有子网CIDR块路由的新路由表。将该路由表与数据库子网关联。 B. 创建一个拒绝来自公有子网实例所分配安全组的入站流量的安全组。将该安全组附加到数据库实例上。 C. 创建一个允许来自私有子网实例所分配安全组的入站流量的安全组。将该安全组附加到数据库实例上。 D. 在公有子网和私有子网之间创建新的对等连接。在私有子网和数据库子网之间创建不同的对等连接。 A. A B. B C. C D. D 正确答案解析: 选项C是正确的,因为题目要求只有私有子网中的EC2实例能够访问RDS数据库。通过创建一个允许来自私有子网实例所在安全组的入站流量的安全组,并将其附加到RDS实例上,可以实现这一访问控制要求。这是AWS推荐的最佳实践,基于安全组的”最小权限原则”,只允许必要的流量进入数据库。 其他选项分析: 选项A(创建排除公共子网CIDR块路由的新路由表)错误,因为路由表控制的是网络路由方向,不能替代安全组提供的精细访问控制。 选项B(创建拒绝公共子网安全组流量的安全组)错误,这虽然可以阻止公共子网的访问,但没有明确允许私有子网的访问,属于”黑名单”思维,不符合安全最佳实践。 选项D(创建对等连接)错误,因为对等连接主要用于不同VPC间的通信,而题目中的需求可以通过安全组在同一个VPC内实现,且对等连接会增加网络复杂性和潜在的安全风险。 56 / 100 分类: SAA-C03 56. A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL with the company’s domain name and corresponding certificate so that the third-party services can use HTTPS. Which solution will meet these requirements? A. Create stage variables in API Gateway with Name=”Endpoint-URL” and Value=”Company Domain Name” to overwrite the default URL. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM). B. Create Route 53 DNS records with the company’s domain name. Point the alias record to the Regional API Gateway stage endpoint. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company’s domain name. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. 
Configure Route 53 to route traffic to the API Gateway endpoint. D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company’s domain name. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to the API Gateway APIs. Create Route 53 DNS records with the company’s domain name. Point an A record to the company’s domain name. 一家公司已在亚马逊Route 53上注册了其域名。该公司在ca-central-1区域使用亚马逊API网关作为其后端微服务API的公共接口。第三方服务通过安全方式使用这些API。该公司希望使用其域名和相应证书设计API网关URL,以便第三方服务能够使用HTTPS。哪种解决方案可以满足这些要求? A. 在API网关中创建阶段变量,设置Name=”Endpoint-URL”和Value=”公司域名”以覆盖默认URL。将与公司域名关联的公开证书导入AWS证书管理器(ACM)。 B. 使用公司域名创建Route 53 DNS记录。将别名记录指向区域API网关阶段端点。将与公司域名关联的公开证书导入us-east-1区域的AWS证书管理器(ACM)。 C. 创建一个区域API网关端点。将API网关端点与公司域名关联。在相同区域将公司域名关联的公开证书导入AWS证书管理器(ACM)。将证书附加到API网关端点。配置Route 53以将流量路由到API网关端点。 D. 创建一个区域API网关端点。将API网关端点与公司域名关联。将公司域名关联的公开证书导入us-east-1区域的AWS证书管理器(ACM)。将证书附加到API网关API。使用公司域名创建Route 53 DNS记录。将A记录指向公司域名。 A. A B. B C. C D. D 正确选项C的解析如下: 1. 创建Regional API Gateway端点,这是使用自定义域名的基础。2. 将API Gateway端点与公司域名关联,实现通过公司域名访问API。3. 在同一区域(即ca-central-1)的ACM中导入与公司域名关联的证书,证书必须在API Gateway所在区域导入才能使用。4. 将证书附加到API Gateway端点,启用HTTPS安全连接。5. 配置Route 53将流量路由到API Gateway端点,完成DNS解析设置。 其他选项错误原因:A. API Gateway没有’Endpoint-URL’这个stage变量,此方案不可行。B. 虽然部分正确,但证书必须在API Gateway所在区域(ca-central-1)导入,而不是us-east-1区域。D. 错误有二:(1)证书应该在API Gateway所在区域(ca-central-1)导入而非us-east-1;(2)应该使用别名(Alias)记录指向API Gateway端点,而不是简单的A记录。 57 / 100 分类: SAA-C03 57. A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development effort. What should a solutions architect do to meet these requirements? A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions. B.
Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions. C. Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions. D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence predictions. 一家公司运营着一个受欢迎的社交媒体网站。该网站允许用户上传图片与其他用户分享。公司希望确保这些图片不包含不适当内容。公司需要一个能够最大限度减少开发工作量的解决方案。 解决方案架构师应该采取什么措施来满足这些需求? A. 使用Amazon Comprehend检测不适当内容。对低置信度预测进行人工审核。 B. 使用Amazon Rekognition检测不适当内容。对低置信度预测进行人工审核。 C. 使用Amazon SageMaker检测不适当内容。使用Ground Truth标注低置信度预测。 D. 使用AWS Fargate部署自定义机器学习模型检测不适当内容。使用Ground Truth标注低置信度预测。 A. A B. B C. C D. D 正确答案是B,因为Amazon Rekognition是AWS专门用于图像和视频分析的托管服务,可以直接检测图像中的不适当内容,这极大减少了开发工作量。对于置信度较低的预测结果,可以通过人工审核进一步确认。 A选项错误,因为Amazon Comprehend是用于文本分析的服务,不适合用于图像内容检测。 C选项错误,虽然Amazon SageMaker可以训练自定义的机器学习模型,但需要大量开发工作来构建和训练模型,不符合题目中最小化开发工作的要求。 D选项错误,使用AWS Fargate部署自定义机器学习模型同样需要大量开发工作,不符合题目要求。 58 / 100 分类: SAA-C03 58. A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload. What should a solutions architect do to meet these requirements? A. Use Amazon EC2 instances, and install Docker on the instances. B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes. C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI). 一家公司希望将其关键应用运行在容器中,以满足可扩展性和可用性需求。该公司更倾向于专注于维护这些关键应用。公司不希望负责为运行容器化工作负载的底层基础设施进行配置和管理。 解决方案架构师应该采取什么措施来满足这些需求? A. 使用亚马逊EC2实例,并在实例上安装Docker。 B. 在亚马逊EC2工作节点上使用亚马逊弹性容器服务(Amazon ECS)。 C. 在AWS Fargate上使用亚马逊弹性容器服务(Amazon ECS)。 D. 使用来自亚马逊弹性容器服务(Amazon ECS)优化的亚马逊机器镜像(AMI)的亚马逊EC2实例。 A. A B.
B C. C D. D 正确答案是C,使用AWS Fargate上的Amazon Elastic Container Service (Amazon ECS)。AWS Fargate是一种无服务器计算引擎,允许用户运行容器而无需管理底层的基础设施(如EC2实例)。这完美匹配题目中公司不希望负责基础设施配置和管理的需求,同时可以专注于应用程序的维护。 其他选项分析: A选项错误: 使用EC2实例并安装Docker需要公司自行管理和维护底层基础设施,与题意不符。 B选项错误: 虽然在EC2工作节点上使用Amazon ECS提供了一定程度的容器管理,但仍需要公司配置和维护EC2实例。 D选项错误: 使用ECS优化的AMI虽然简化了容器环境的设置,但仍需管理EC2实例,无法完全免除基础设施维护责任。 59 / 100 分类: SAA-C03 59. A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day. What should a solutions architect do to transmit and process the clickstream data? A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics. B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis. C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis. D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis. 一家公司托管着超过300个全球网站和应用程序。该公司需要一个平台来每天分析30TB以上的点击流数据。 解决方案架构师应该采取什么措施来传输和处理这些点击流数据? A. 设计一个AWS数据管道将数据存档到亚马逊S3存储桶,并运行一个亚马逊EMR集群处理这些数据以生成分析报告。 B. 创建一个亚马逊EC2实例的自动扩展组来处理数据,并将其发送到亚马逊S3数据湖供亚马逊Redshift进行分析使用。 C. 将数据缓存到亚马逊CloudFront。将数据存储在亚马逊S3存储桶中。当有对象添加到S3存储桶时,运行一个AWS Lambda函数来处理这些数据以进行分析。 D. 从Amazon Kinesis Data Streams收集数据。使用Amazon Kinesis Data Firehose将数据传输到亚马逊S3数据湖。将数据加载到亚马逊Redshift中进行分析。 A. A B. B C. C D.
D 本题考察大规模点击流数据的实时传输和处理方案。以下是对各选项的分析: 正确选项D:使用Kinesis Data Streams收集数据,通过Kinesis Data Firehose传输到S3数据湖,最后用Redshift分析是最佳方案。因为:1) Kinesis Data Streams专为实时大数据流设计,支持高吞吐;2) Firehose提供自动缩放和到S3的无缝传输;3) Redshift适合PB级数据分析;4) 全托管服务减少运维负担。 选项A错误原因:AWS Data Pipeline主要用于批量数据处理,不适合实时性要求高的点击流分析,且EMR集群需要手动管理。 选项B错误原因:用Auto Scaling EC2实例自建处理系统运维复杂,成本高,难保证30TB/天的数据处理稳定性。 选项C错误原因:CloudFront是CDN服务,不适合作为数据处理管道,且Lambda不适合直接处理原始点击流数据。 60 / 100 分类: SAA-C03 60. A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS. What should a solutions architect do to meet this requirement? A. Update the ALB’s network ACL to accept only HTTPS traffic. B. Create a rule that replaces the HTTP in the URL with HTTPS. C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS. D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI). 一家公司在AWS上托管了一个网站。该网站位于一个应用负载均衡器(ALB)后面,该负载均衡器被配置为分别处理HTTP和HTTPS流量。公司希望将所有请求转发到网站,以确保请求使用HTTPS。 解决方案架构师应该采取什么措施来满足这一需求? A. 更新ALB的网络ACL,使其仅接受HTTPS流量。 B. 创建一个规则,将URL中的HTTP替换为HTTPS。 C. 在ALB上创建一个监听器规则,将HTTP流量重定向到HTTPS。 D. 将ALB替换为配置为使用服务器名称指示(SNI)的网络负载均衡器。 A. A B. B C. C D. D 要实现将所有HTTP请求重定向到HTTPS的需求,解决方案架构师应使用ALB的监听器规则功能。具体解析如下: C选项正确:在ALB上创建监听器规则将HTTP流量重定向到HTTPS,这是AWS推荐的标准做法。ALB支持基于端口的监听器,可以在HTTP(80端口)监听器上配置规则,通过返回HTTP 301/302状态码实现协议跳转。 A选项错误:网络ACL只能基于IP和端口过滤流量,无法实现协议转换,且只在网络层工作,不能处理应用层的HTTP到HTTPS转换。 B选项错误:直接修改URL中的协议部分不可靠,这需要应用层处理每个请求,且不能确保所有客户端都遵循。AWS标准实践应使用监听器重定向功能。 D选项错误:网络负载均衡器(NLB)工作在第四层,不支持应用层协议重定向,且SNI(服务器名称指示)仅用于在TLS握手中传达主机名,与HTTP/HTTPS重定向无关。 61 / 100 分类: SAA-C03 61. A company is developing a two-tier web application on AWS. The company’s developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application.
The company must also implement a solution to automatically rotate the database credentials on a regular basis. Which solution will meet these requirements with the LEAST operational overhead? A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and instance metadata at the same time. B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the same time. Use S3 Versioning to ensure the ability to fall back to previous values. C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret. D. Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters. 一家公司正在AWS上开发一个两层式Web应用程序。该公司的开发人员已将应用程序部署在直接连接到后端Amazon RDS数据库的Amazon EC2实例上。 公司不得在应用程序中硬编码数据库凭据。公司还必须实施一个解决方案来定期自动轮换数据库凭据。 哪种方案能够以最少的运维开销满足这些需求? A. 将数据库凭据存储在实例元数据中。使用Amazon EventBridge(Amazon CloudWatch Events)规则运行一个预定的AWS Lambda函数,该函数同时更新RDS凭据和实例元数据。 B. 将数据库凭据存储在加密的Amazon S3桶中的配置文件里。使用Amazon EventBridge(Amazon CloudWatch Events)规则运行一个预定的AWS Lambda函数,该函数同时更新RDS凭据和配置文件中的凭据。使用S3版本控制以确保能够回退到先前的值。 C. 将数据库凭据作为机密存储在AWS Secrets Manager中。开启机密的自动轮换功能。将所需的权限附加到EC2角色以授予对机密的访问权限。 D. 将数据库凭据作为加密参数存储在AWS Systems Manager Parameter Store中。开启加密参数的自动轮换功能。将所需的权限附加到EC2角色以授予对加密参数的访问权限。 A. A B. B C. C D. 
D 正确答案是C,使用AWS Secrets Manager存储数据库凭证并启用自动轮换功能。 详细解析: A选项不正确,因为EC2实例元数据不适合存储敏感凭证。虽然可以通过Lambda更新,但这种方法缺乏集中管理和自动轮换的内置支持,且操作复杂性较高。 B选项部分解决了凭证存储问题,但需要自行开发轮换逻辑,通过S3版本控制回滚也增加了操作复杂性。这不是AWS推荐的安全凭证管理方式。 C选项完全符合要求,因为AWS Secrets Manager是专为密钥管理设计的服务,提供自动轮换功能,且可以直接集成IAM角色权限控制。这是操作开销最小的解决方案。 D选项虽然Parameter Store可以存储加密参数,但其自动轮换功能实际上是调用Secrets Manager实现的,不如直接使用Secrets Manager简单高效。 62 / 100 分类: SAA-C03 62. A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires. What should a solutions architect do to meet these requirements? A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually. 一家公司正在AWS上部署一个新的公共网络应用程序。该应用程序将在应用负载均衡器(ALB)后运行。应用程序需要使用外部证书颁发机构(CA)颁发的SSL/TLS证书在边缘进行加密。该证书必须在每年到期前进行轮换。 为了满足这些要求,解决方案架构师应该采取什么措施? A. 使用AWS证书管理器(ACM)颁发SSL/TLS证书。将该证书应用于ALB。使用托管续订功能自动轮换证书。 B. 使用AWS证书管理器(ACM)颁发SSL/TLS证书。从证书导入密钥材料。将该证书应用于ALB。使用托管续订功能自动轮换证书。 C.
使用AWS证书管理器(ACM)私有证书颁发机构从根CA颁发SSL/TLS证书。将该证书应用于ALB。使用托管续订功能自动轮换证书。 D. 使用AWS证书管理器(ACM)导入SSL/TLS证书。将该证书应用于ALB。使用Amazon EventBridge(Amazon CloudWatch Events)在证书即将过期时发送通知。手动轮换证书。 A. A B. B C. C D. D 正确答案是D,因为题目要求使用由外部证书颁发机构(CA)颁发的SSL/TLS证书,而ACM默认只能颁发AWS托管的证书,所以需要导入外部证书。选项A错误,因为ACM颁发的证书不符合外部CA的要求。选项B错误,虽然可以导入密钥材料,但仍然使用的是ACM证书而非外部CA证书。选项C错误,因为ACM Private CA颁发的是私有证书,不符合使用公共CA的要求。D选项正确描述了导入外部证书、应用到ALB并通过EventBridge监控证书到期时间然后手动轮换的完整流程。 63 / 100 分类: SAA-C03 63. A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time. Which solution meets these requirements MOST cost-effectively? A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3. B.
Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB. C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS store. D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the file to .jpg format. Save the .pdf files and the .jpg files in the EBS store. 一家公司在AWS上运行其基础设施,其文档管理应用拥有70万注册用户。该公司计划开发一个产品,用于将大型.pdf文件转换为.jpg图片文件。这些.pdf文件平均大小为5MB。公司需要存储原始文件和转换后的文件。解决方案架构师必须设计一个可扩展的解决方案,以满足随时间快速增长的需求。 哪种解决方案最具成本效益地满足这些要求? A. 将.pdf文件保存到Amazon S3。配置一个S3 PUT事件来触发AWS Lambda函数,将这些文件转换为.jpg格式并重新存储到Amazon S3。 B. 将.pdf文件保存到Amazon DynamoDB。使用DynamoDB Streams功能触发AWS Lambda函数,将文件转换为.jpg格式并重新存储到DynamoDB。 C. 将.pdf文件上传到包含Amazon EC2实例、Amazon Elastic Block Store (Amazon EBS)存储和Auto Scaling组的AWS Elastic Beanstalk应用。使用EC2实例中的程序将文件转换为.jpg格式。将.pdf和.jpg文件保存在EBS存储中。 D. 将.pdf文件上传到包含Amazon EC2实例、Amazon Elastic File System (Amazon EFS)存储和Auto Scaling组的AWS Elastic Beanstalk应用。使用EC2实例中的程序将文件转换为.jpg格式。将.pdf和.jpg文件保存在EBS存储中。 A. A B. B C. C D. D 正确答案是A,原因如下: 1. 选项A:使用Amazon S3存储原始PDF文件,并通过S3 PUT事件触发AWS Lambda函数进行格式转换后存回S3。这是一个无服务器架构方案,具备高扩展性和成本效益。S3适合存储大量文件,Lambda按需执行且自动扩展,避免了资源闲置和额外管理成本。 2. 选项B错误原因:DynamoDB是NoSQL数据库,不适用于存储大文件(如5MB的PDF)。其流功能虽能触发Lambda,但存储和读取大文件会浪费吞吐量(RCU/WCU)且费用高昂。 3. 选项C错误原因:使用Elastic Beanstalk+EC2+EBS会引入不必要的复杂性。EC2实例需持续运行或管理伸缩,EBS存储难以独立扩展,且需要额外运维成本。 4. 选项D错误原因:EFS虽可共享存储,但题目中提到最终将文件存入EBS(矛盾描述),且EFS对高频小文件更优。整体方案仍依赖EC2的运维开销,成本高于无服务器方案。 总结:A方案完全利用托管服务,按用量付费,是唯一兼顾扩展性、成本与维护效率的选项。 64 / 100 分类: SAA-C03 64.
A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day. The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS. What should a solutions architect do to meet these requirements? A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows File Server on AWS. B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway. C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload’s location. D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway. 一家公司在本地运行的Windows文件服务器上拥有超过5TB的文件数据。用户和应用程序每天都会与这些数据进行交互。 该公司正在将其Windows工作负载迁移到AWS。在迁移过程中,公司需要能够以最低延迟同时访问AWS和本地文件存储。公司需要的解决方案应当最小化运营开销,并且无需对现有文件访问模式进行重大更改。该公司使用AWS站点到站点VPN连接来实现与AWS的连接。 解决方案架构师应当采取什么措施来满足这些需求? A. 在AWS上部署并配置Amazon FSx for Windows File Server。将本地文件数据迁移至FSx for Windows File Server。重新配置工作负载以使用AWS上的FSx for Windows File Server。 B. 在本地部署并配置Amazon S3文件网关。将本地文件数据迁移至S3文件网关。重新配置本地工作负载和云工作负载以使用S3文件网关。 C.
在本地部署并配置Amazon S3文件网关。将本地文件数据迁移至Amazon S3。根据每个工作负载的位置,重新配置工作负载以直接使用Amazon S3或S3文件网关。 D. 在AWS上部署并配置Amazon FSx for Windows File Server。在本地部署并配置Amazon FSx文件网关。将本地文件数据迁移至FSx文件网关。配置云工作负载使用AWS上的FSx for Windows File Server,配置本地工作负载使用FSx文件网关。 A. A B. B C. C D. D 正确答案是D,因为部署Amazon FSx for Windows File Server结合FSx File Gateway是最佳解决方案,原因如下: – Amazon FSx for Windows File Server提供了完全托管的Windows文件服务器,兼容本地Windows文件服务器,可以最小化操作负担。 – FSx File Gateway能够本地缓存频繁访问的文件,减少延迟,同时通过AWS Site-to-Site VPN连接将数据传输到FSx for Windows File Server上。 – 这种配置允许云工作负载直接使用AWS上的FSx for Windows File Server,本地工作负载通过FSx File Gateway访问,无需显著改变现有的文件访问模式。 其他选项不满足需求的原因: – 选项A中直接将文件移动到FSx for Windows File Server,但没有考虑本地工作负载访问的高延迟问题。 – 选项B和C使用S3 File Gateway和Amazon S3,但S3不提供与Windows文件服务器完全兼容的功能,可能导致现有应用程序的兼容性问题。 65 / 100 分类: SAA-C03 65. A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in the reports. Which solution will meet these requirements with the LEAST operational overhead? A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text. B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text. C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text. D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text. 一家医院最近使用Amazon API Gateway和AWS Lambda部署了一个RESTful API。医院利用API Gateway和Lambda上传PDF格式和JPEG格式的报告。 医院需要修改Lambda代码以识别报告中的受保护健康信息(PHI)。 哪种方案能够在最低运营开销下满足这些需求? A. 使用现有的Python库从报告中提取文本,并从提取的文本中识别PHI。 B. 使用Amazon Textract从报告中提取文本,使用Amazon SageMaker从提取的文本中识别PHI。 C. 
使用Amazon Textract从报告中提取文本,使用Amazon Comprehend Medical从提取的文本中识别PHI。 D. 使用Amazon Rekognition从报告中提取文本,使用Amazon Comprehend Medical从提取的文本中识别PHI。 A. A B. B C. C D. D 正确答案是C,因为Amazon Textract专用于从PDF和JPEG等文档中提取文本,而Amazon Comprehend Medical专门设计用于识别医疗健康信息(PHI),这两个服务的组合可以以最小的运维开销满足需求。A选项使用Python库虽然可行,但需要自行开发和维护文本提取和PHI识别的逻辑,增加了运维复杂度。B选项虽然使用了Amazon Textract,但使用Amazon SageMaker来识别PHI需要自行训练和部署模型,成本高且复杂。D选项使用Amazon Rekognition提取文本并不合适,因为Rekognition主要面向图像和视频分析,对文档文本提取的效果和效率不如Textract。 66 / 100 分类: SAA-C03 66. A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarely accessed after the first 30 days. Which storage solution is MOST cost-effective? A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation. B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation. C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation. D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation. 一家公司有一个应用程序会生成大量文件,每个文件大小约为5MB。这些文件存储在亚马逊S3中。 公司政策要求这些文件必须存储4年后才能删除。由于这些文件包含难以重新生成的关键业务数据,因此需要始终保持即时访问。文件在创建后的前30天内被频繁访问,但在30天后就很少被访问。 哪种存储解决方案最具成本效益? A.
创建一个S3存储桶生命周期策略,在对象创建30天后将文件从S3标准存储移动到S3 Glacier,并在对象创建4年后删除文件。 B. 创建一个S3存储桶生命周期策略,在对象创建30天后将文件从S3标准存储移动到S3单区-不频繁访问(S3 One Zone-IA),并在对象创建4年后删除文件。 C. 创建一个S3存储桶生命周期策略,在对象创建30天后将文件从S3标准存储移动到S3标准-不频繁访问(S3 Standard-IA),并在对象创建4年后删除文件。 D. 创建一个S3存储桶生命周期策略,在对象创建30天后将文件从S3标准存储移动到S3标准-不频繁访问(S3 Standard-IA),并在对象创建4年后将文件移动到S3 Glacier。 A. A B. B C. C D. D 该问题考察的是如何为符合特定访问模式的S3对象选择最具成本效益的存储方案。根据题目描述: 1. 文件需要保留4年且始终要求即时访问能力(排除Glacier选项,因为Glacier检索需要时间)2. 文件在前30天频繁访问(适合Standard存储类)3. 30天后很少访问(适合Infrequent Access存储类) 选项分析:A: 错误 – Glacier不符合即时访问要求B: 错误 – One Zone-IA虽然便宜但可靠性较低(单可用区存储),不适合关键业务数据D: 错误 – 4年后转Glacier违反即时访问要求C: 正确 – Standard-IA既满足即时访问要求,又比Standard节省约40%存储成本,且保持高持久性(11个9) 67 / 100 分类: SAA-C03 67. A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages. What should a solutions architect do to ensure messages are being processed once only? A. Use the CreateQueue API call to create a new queue. B. Use the AddPermission API call to add appropriate permissions. C. Use the ReceiveMessage API call to set an appropriate wait time. D. Use the ChangeMessageVisibility API call to increase the visibility timeout. 一家公司在多台亚马逊EC2实例上托管了一个应用程序。 该应用程序处理来自亚马逊SQS队列的消息,写入亚马逊RDS表,并从队列中删除消息。 在RDS表中偶尔会发现重复记录。SQS队列不包含任何重复消息。 解决方案架构师应如何确保消息仅被处理一次? A. 使用CreateQueue API调用创建新队列。 B. 使用AddPermission API调用添加适当权限。 C. 使用ReceiveMessage API调用设置适当的等待时间。 D. 使用ChangeMessageVisibility API调用来增加可见性超时时间。 A. A B. B C. C D. 
D 本题考察如何避免SQS消息被重复处理的问题。正确答案是D(使用ChangeMessageVisibility API调用来增加可见性超时)。解析如下: D选项正确:当EC2实例处理消息时间超过默认的可见性超时(Visibility Timeout,默认30秒),消息会重新变为可见并可能被其他实例重复处理。通过增加Visibility Timeout(最长12小时),可以确保消息在被处理完成前不会被其他消费者再次接收。 其他选项错误原因:A选项错误:创建新队列不能解决现有队列的消息重复处理问题。B选项错误:添加权限与消息处理的幂等性无关。C选项错误:设置ReceiveMessage等待时间(WaitTimeSeconds)仅影响长轮询行为,不影响消息可见性超时。 68 / 100 分类: SAA-C03 68. A solutions architect is designing a new hybrid architecture to extend a company’s on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails. What should the solutions architect do to meet these requirements? A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails. B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails. C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails. D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails. 一位解决方案架构师正在设计一个新的混合架构,以将公司的本地基础设施扩展到AWS。 公司要求与AWS区域建立一个高可用性连接,并保持持续低延迟。 公司需要最小化成本,并且愿意在主连接故障时接受较慢的流量。 解决方案架构师应采取什么措施来满足这些需求? A. 配置一条AWS Direct Connect连接到某个区域。在主Direct Connect连接故障时,配置一个VPN连接作为备份。 B. 配置一个VPN隧道连接到某个区域以实现私有连接。配置第二个VPN隧道作为私有连接,并在主VPN连接故障时作为备份。 C. 配置一条AWS Direct Connect连接到某个区域。配置第二条Direct Connect连接到同一区域,作为主Direct Connect连接故障时的备份。 D. 配置一条AWS Direct Connect连接到某个区域。使用AWS CLI中的Direct Connect故障转移属性在主Direct Connect连接故障时自动创建备份连接。 A. A. Provision an AWS Direct Connect connection to a Region. 
Provision a VPN connection as a backup if the primary Direct Connect connection fails. B. B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails. C. C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails. D. D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails. 正确答案是A。使用AWS Direct Connect可以提供一个高可用、低延迟的连接,满足公司对稳定性的需求。同时,配置VPN作为备用连接可以在Direct Connect主连接故障时提供备份,虽然VPN的延迟较高,但成本低于第二条Direct Connect连接,符合公司最小化成本的要求。 B选项使用两个VPN隧道虽然成本较低,但VPN的延迟和稳定性不如Direct Connect,无法满足主要连接对低延迟的要求。 C选项虽然提供了高可用性,但配置两条Direct Connect连接的成本过高,不符合成本最小化的需求。 D选项中Direct Connect的failover属性并不存在,这是一个错误的选项,AWS CLI没有这样的功能。 69 / 100 分类: SAA-C03 69. A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data. Which solution will meet these requirements with the LEAST operational effort? A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross Region Replication. B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database. C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure. D. 
Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database. 一家公司在应用程序负载均衡器后端的亚马逊EC2实例上运行一个业务关键的Web应用程序。EC2实例位于一个自动扩缩组中。该应用程序使用部署在单一可用区的亚马逊Aurora PostgreSQL数据库。 公司希望该应用程序具备高可用性,实现最小化停机时间和最少的数据丢失。 哪种方案能以最低的操作工作量满足这些需求? A. 将EC2实例部署在不同AWS区域。使用Amazon Route 53健康检查来重定向流量。使用Aurora PostgreSQL跨区域复制。 B. 配置自动扩缩组使用多个可用区。将数据库配置为多可用区。为数据库配置一个Amazon RDS代理实例。 C. 配置自动扩缩组使用一个可用区。生成数据库的每小时快照。在发生故障时从快照恢复数据库。 D. 配置自动扩缩组使用多个AWS区域。将应用程序数据写入Amazon S3。使用S3事件通知启动AWS Lambda函数将数据写入数据库。 A. A B. B C. C D. D 正确答案解析:选项B是最佳解决方案,原因如下:
1. Auto Scaling组跨多可用区部署EC2实例:提升应用层的高可用性,当单一可用区故障时其他可用区仍可提供服务
2. 多可用区(Multi-AZ)的Aurora数据库:主数据库故障时可自动故障转移到备用副本,保证数据库高可用且不丢失提交事务
3. RDS Proxy:管理数据库连接池,提高应用在故障转移时的连接弹性
其他选项分析:
选项A:跨区域方案虽然可用性更高,但运营复杂度大幅增加(需要管理跨区域路由、数据同步等),不符合“最小运营工作量”要求
选项C:单可用区方案不具备高可用性,且基于快照恢复会导致数据丢失(自上次快照后的数据)和较长停机时间
选项D:跨区域+S3方案引入复杂架构(Lambda/S3等),需要修改应用代码且维护多个组件,运营成本最高
70 / 100 分类: SAA-C03 70. A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service. The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code. What should a solutions architect do to meet these requirements? A. Enable HTTP health checks on the NLB, supplying the URL of the company’s application. B. Add a cron job to the EC2 instances to check the local application’s logs once each minute. If HTTP errors are detected, the application will restart. C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances. D.
Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state. 一家公司的HTTP应用程序位于网络负载均衡器(NLB)之后。 该NLB的目标组配置为使用一个包含多台运行Web服务的EC2实例的Amazon EC2自动扩展组。 公司发现NLB未能检测到该应用程序的HTTP错误。这些错误需要手动重启运行Web服务的EC2实例。 公司需要在不编写自定义脚本或代码的情况下提高应用程序的可用性。 解决方案架构师应采取什么措施来满足这些需求? A. 在NLB上启用HTTP健康检查,并提供公司应用程序的URL。 B. 在EC2实例上添加cron作业,每分钟检查一次本地应用程序的日志。如果检测到HTTP错误,应用程序将重新启动。 C. 将NLB替换为应用程序负载均衡器(ALB)。通过提供公司应用程序的URL启用HTTP健康检查,并配置自动扩展操作以替换不健康的实例。 D. 创建一个Amazon CloudWatch警报,监控NLB的UnhealthyHostCount指标。当警报处于ALARM状态时,配置自动扩展操作以替换不健康的实例。 A. A B. B C. C D. D 正确的解决方案是C。Network Load Balancer (NLB) 工作在OSI模型的第4层(传输层),无法检测HTTP应用层错误。而Application Load Balancer (ALB) 工作在第7层(应用层),可以配置HTTP健康检查来检测后端EC2实例上运行的应用程序的健康状态。当ALB检测到HTTP错误时,能够自动将不健康实例从目标组中移除,并触发Auto Scaling组替换这些不健康的实例。 A选项不正确:NLB工作在第4层,默认健康检查基于TCP连通性;即使为其目标组配置了HTTP协议的健康检查,也无法像选项C那样在应用层检测错误的同时配合Auto Scaling自动替换不健康实例。 B选项不正确,题目明确要求不编写自定义脚本或代码,而使用cron job属于自定义解决方案。 D选项不正确:UnhealthyHostCount只反映NLB健康检查(默认TCP连通性)判定的结果,不能直接检测HTTP应用层错误。 71 / 100 分类: SAA-C03 71. A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour. What should the solutions architect recommend to meet these requirements? A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region. B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time. C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB. D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the DynamoDB table by using the EBS snapshot.
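第71题中选项B提到的DynamoDB时间点恢复(PITR),在实践中通过 `UpdateContinuousBackups` 接口开启。下面是一个最小示意(Python):仅在本地构造传给 boto3 `dynamodb.update_continuous_backups` 的参数并打印,不实际调用AWS;表名 `CustomerTransactions` 为假设值。

```python
# 示意:构造开启DynamoDB时间点恢复(PITR)的请求参数(不调用AWS)
import json

def build_pitr_request(table_name: str) -> dict:
    """返回可传给 boto3 dynamodb.update_continuous_backups 的参数字典。"""
    return {
        "TableName": table_name,  # 假设的表名
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": True,  # 开启PITR
        },
    }

request = build_pitr_request("CustomerTransactions")
print(json.dumps(request, ensure_ascii=False))
```

实际使用时可执行 `boto3.client("dynamodb").update_continuous_backups(**request)`;开启后PITR可恢复到过去35天内任意一秒,远优于15分钟RPO的要求。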
一家公司运营着一个购物应用程序,该程序使用Amazon DynamoDB存储客户信息。在发生数据损坏的情况下,解决方案架构师需要设计一个满足15分钟恢复点目标(RPO)和1小时恢复时间目标(RTO)的解决方案。 解决方案架构师应该推荐哪种方案来满足上述要求?
A. 配置DynamoDB全局表。为满足RPO恢复需求,将应用程序指向不同的AWS区域。
B. 配置DynamoDB时间点恢复。为满足RPO恢复需求,恢复到所需的时间点。
C. 每天将DynamoDB数据导出到Amazon S3 Glacier。为满足RPO恢复需求,从S3 Glacier将数据导入回DynamoDB。
D. 每15分钟为DynamoDB表安排一次Amazon弹性块存储(Amazon EBS)快照。为满足RPO恢复需求,使用EBS快照恢复DynamoDB表。
A. A B. B C. C D. D 答案解析: 选项B是正确答案,因为DynamoDB的时间点恢复(PITR)功能可以实现15分钟的恢复点目标(RPO)。PITR允许将表恢复到过去35天内的任意时间点(精确到秒级),完全满足RPO要求。恢复时间通常在几分钟内完成,也满足1小时RTO要求。 选项A错误:DynamoDB全局表主要用于跨区域复制和灾难恢复,无法实现精细到15分钟的RPO。 选项C错误:S3 Glacier的导入/导出过程耗时过长(通常需要数小时),无法满足15分钟RPO和1小时RTO的要求。 选项D错误:DynamoDB作为托管服务不使用EBS存储,不能通过EBS快照进行备份和恢复。 72 / 100 分类: SAA-C03 72. A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs. How can the solutions architect meet this requirement? A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it. B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets. C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets. D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets. 一家公司运营着一个照片处理应用程序,该程序需要频繁地从位于同一AWS区域的Amazon S3存储桶上传和下载图片。解决方案架构师发现数据传输费用有所增加,需要实施一项解决方案来降低这些成本。 解决方案架构师如何满足这一需求? A. 将Amazon API Gateway部署到公共子网中,并调整路由表以通过它路由S3调用。 B. 在公共子网中部署NAT网关,并附加允许访问S3存储桶的终端节点策略。 C. 将应用程序部署到公共子网中,并允许它通过互联网网关路由以访问S3存储桶。 D. 在VPC中部署S3 VPC网关终端节点,并附加允许访问S3存储桶的终端节点策略。 A. A B. B C. C D.
D 为了减少同区域S3数据传输费用,最佳实践是使用S3 VPC网关端点(VPC Gateway Endpoint)。 解析各选项:
A) 错误 – API网关不适用于优化S3数据传输成本,且会增加复杂性和潜在费用
B) 错误 – NAT网关会产生额外费用,且不能免除S3同区域数据传输费
C) 错误 – 通过互联网网关访问会产生公网数据传输费用
D) 正确 – VPC端点可以:1) 无需经过公网直接访问S3;2) 免除同区域S3数据传输费;3) 通过端点策略精细控制访问权限。这是AWS官方推荐的最经济高效的解决方案
73 / 100 分类: SAA-C03 73. A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company’s internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access. Which combination of steps should the solutions architect take to meet these requirements? (Choose two.) A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances. B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company. C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company. D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host. E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host. 题目: 一家公司最近在私有子网中的亚马逊EC2上启动了基于Linux的应用程序实例,并在一个VPC的公有子网中的亚马逊EC2实例上启动了基于Linux的堡垒主机。 解决方案架构师需要从内部部署网络,通过公司的互联网连接,连接到堡垒主机,然后再连接到应用程序服务器。 解决方案架构师必须确保所有EC2实例的安全组都允许该访问。 解决方案架构师应采取以下哪两种步骤组合来满足这些要求?(选择两个) A. 将堡垒主机的当前安全组替换为仅允许来自应用程序实例入站访问的安全组。 B. 将堡垒主机的当前安全组替换为仅允许来自公司内部IP范围入站访问的安全组。 C. 将堡垒主机的当前安全组替换为仅允许来自公司外部IP范围入站访问的安全组。 D. 将应用程序实例的当前安全组替换为仅允许来自堡垒主机私有IP地址的入站SSH访问的安全组。 E.
将应用程序实例的当前安全组替换为仅允许来自堡垒主机公有IP地址的入站SSH访问的安全组。 A. A B. B C. C D. D E. E 为了满足题目要求,解决方案架构师需要采取以下两个步骤的组合: 1. 选项C正确:堡垒主机的安全组应该只允许来自公司外部IP范围的入站访问,这样可以确保只有通过公司互联网连接的流量才能访问堡垒主机。 2. 选项D正确:应用程序实例的安全组应该只允许来自堡垒主机私有IP地址的SSH入站访问,这样可以确保只有通过堡垒主机才能访问应用程序实例。 其他选项错误原因: – 选项A错误:堡垒主机的安全组只允许来自应用程序实例的入站访问,无法解决从本地网络通过公司互联网连接访问堡垒主机的需求。 – 选项B错误:堡垒主机的安全组只允许来自公司内部IP范围的入站访问,无法解决通过互联网连接的需求。 – 选项E错误:应用程序实例的安全组允许来自堡垒主机公有IP地址的SSH入站访问,不安全且会因公有IP变化导致配置失效。 74 / 100 分类: SAA-C03 74. A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured in this situation? (Choose two.) A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0. C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier. D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier. E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier. 一名解决方案架构师正在设计一个双层网络应用程序。该应用程序由面向公众的网络层组成,托管在公共子网中的亚马逊EC2上。 数据库层由运行在私有子网中亚马逊EC2上的微软SQL Server组成。安全性对该公司来说至关重要。 在这种情况下应如何配置安全组?(选择两项) A. 为网络层配置安全组,允许从0.0.0.0/0进入的443端口流量。 B. 为网络层配置安全组,允许从0.0.0.0/0出去的443端口流量。 C. 为数据库层配置安全组,允许从网络层安全组进入的1433端口流量。 D. 为数据库层配置安全组,允许通往网络层安全组的443和1433端口出站流量。 E. 为数据库层配置安全组,允许从网络层安全组进入的443和1433端口流量。 A. A B. B C. C D. D E.
E 在构建两层Web应用程序架构时,安全组配置应遵循最小权限原则: A选项正确:面向公众的Web层需要通过HTTPS(443端口)接收外部流量,0.0.0.0/0表示允许所有IP访问Web层。 C选项正确:数据库层只需接收来自Web层的SQL Server(1433端口)连接请求,这样配置既保证应用功能又确保数据库不会直接暴露在Internet。 错误分析:
B选项错误:Web层的出站规则无需特别配置443端口,默认允许所有出站流量。
D选项错误:数据库层不需要配置到Web层的出站规则(443和1433),重点是限制入站流量。
E选项错误:数据库层只需开放1433端口(SQL Server),不需要443端口(HTTPS)。
75 / 100 分类: SAA-C03 75. A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application’s performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application. Which solution meets these requirements and is the MOST operationally efficient? A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services. B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers’ peak utilization during the performance failures. Increase the size of the application server’s Amazon EC2 instances to meet the peak requirements. C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required. D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected. 一家公司希望将多层应用程序从本地迁移到AWS云,以提高应用程序的性能。该应用程序包含多个通过RESTful服务相互通信的应用层。当一个层级过载时,交易就会被丢弃。解决方案架构师必须设计一个能够解决这些问题并实现应用程序现代化的解决方案。以下哪个解决方案既能满足这些需求,又最具操作效率? A.
使用Amazon API Gateway并将交易指向AWS Lambda函数作为应用层。使用Amazon Simple Queue Service (Amazon SQS)作为应用服务之间的通信层。 B. 使用Amazon CloudWatch指标分析应用程序的性能历史,以确定性能故障期间服务器的峰值利用率。增加应用程序服务器Amazon EC2实例的规模以满足峰值需求。 C. 使用Amazon Simple Notification Service (Amazon SNS)处理在Auto Scaling组中运行于Amazon EC2上的应用服务器之间的消息传递。使用Amazon CloudWatch监控SNS队列长度并根据需求扩展或缩减。 D. 使用Amazon Simple Queue Service (Amazon SQS)处理在Auto Scaling组中运行于Amazon EC2上的应用服务器之间的消息传递。使用Amazon CloudWatch监控SQS队列长度并在检测到通信故障时扩展。 A. A B. B C. C D. D 正确答案是A,因为题目要求解决层级间通信问题和性能优化,同时实现应用现代化。选项A使用Amazon API Gateway和AWS Lambda构建无服务器架构,这种设计完全解耦了应用层级,通过Lambda自动扩展来处理负载,避免了因单层过载导致的交易丢失。Amazon SQS作为消息队列可以缓冲请求,处理异步通信,实现更可靠的消息传递。 错误选项解析:B选项仅通过增加EC2实例大小来处理峰值负载,这不能解决层级间通信的根本问题,且不够现代化;C选项虽然使用了Auto Scaling和SNS,但SNS是发布/订阅服务,不适合处理层级间的请求/响应通信模式;D选项使用SQS是正确的,但仍依赖EC2实例而非无服务器架构,不够现代化也不够高效。 76 / 100 分类: SAA-C03 76. A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive. Which solution offers the MOST reliable data transfer? A. AWS DataSync over public internet B. AWS DataSync over AWS Direct Connect C. AWS Database Migration Service (AWS DMS) over public internet D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect 一家公司每天从位于同一工厂的多台机器接收10 TB的检测数据。这些数据由存储在工厂内部本地数据中心存储区域网络(SAN)上的JSON文件组成。 公司希望将这些数据发送到亚马逊S3,以便多个提供关键近实时分析的其他系统可以访问这些数据。由于数据被视为敏感信息,安全传输非常重要。 哪种解决方案能提供最可靠的数据传输? A. 通过公共互联网的AWS DataSync B. 通过AWS Direct Connect的AWS DataSync C. 通过公共互联网的AWS数据库迁移服务(AWS DMS) D. 通过AWS Direct Connect的AWS数据库迁移服务(AWS DMS) A. A B. B C. C D. D 正确的解决方案是B. AWS DataSync over AWS Direct Connect。 解析:1. 
AWS DataSync是一种专为大规模数据传输优化的服务,支持从本地存储(如SAN)到Amazon S3的高效数据传输,是处理10 TB每日数据的理想选择。 2. 使用AWS Direct Connect可以在本地数据中心和AWS之间建立专用网络连接,相比公共互联网更安全可靠,尤其适合敏感数据的传输需求。 3. 选项A虽然使用了DataSync,但通过公共互联网传输仍存在安全风险,不适合敏感数据。 4. AWS DMS(选项C和D)主要用于数据库迁移而非文件传输,在这里不适用。 5. B选项结合了DataSync的高效数据传输能力和Direct Connect的安全可靠,是满足所有需求的最佳解决方案。 77 / 100 分类: SAA-C03 77. A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data. Which solution will meet these requirements with the LEAST operational overhead? A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3. B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3. C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3. D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3. 一家公司需要为其应用程序配置实时数据摄取架构。该公司需要一个API、一个在数据流传输时转换数据的过程,以及一个数据存储解决方案。 哪种方案能够以最少的运维开销满足这些需求? A. 部署一个Amazon EC2实例来托管API,该API将数据发送到Amazon Kinesis数据流。创建一个以Kinesis数据流为数据源的Amazon Kinesis Data Firehose传输流。使用AWS Lambda函数转换数据。使用Kinesis Data Firehose传输流将数据发送到Amazon S3。 B. 部署一个Amazon EC2实例来托管API,该API将数据发送到AWS Glue。停止EC2实例上的源/目标检查。使用AWS Glue转换数据并将数据发送到Amazon S3。 C.
配置一个Amazon API Gateway API将数据发送到Amazon Kinesis数据流。创建一个以Kinesis数据流为数据源的Amazon Kinesis Data Firehose传输流。使用AWS Lambda函数转换数据。使用Kinesis Data Firehose传输流将数据发送到Amazon S3。 D. 配置一个Amazon API Gateway API将数据发送到AWS Glue。使用AWS Lambda函数转换数据。使用AWS Glue将数据发送到Amazon S3。 A. A B. B C. C D. D 正确答案是C,因为该方案完全满足了公司的需求并且具有最低的操作开销。 具体解析如下: A选项:虽然使用了Amazon Kinesis Data Stream和Kinesis Data Firehose来实时处理和存储数据,但是使用了Amazon EC2实例来托管API,这会引入更多的管理开销,比如需要维护和扩展EC2实例。 B选项:使用了AWS Glue来转换数据并将其发送到Amazon S3,但是AWS Glue主要用于ETL(Extract, Transform, Load)批处理作业,不适用于实时数据流处理。此外,EC2实例的管理也会增加操作开销。 C选项:使用了Amazon API Gateway来提供API服务,这是一种全托管的服务,无需管理基础设施。数据通过Kinesis Data Stream实时处理,并通过Kinesis Data Firehose和Lambda函数进行转换和存储到Amazon S3,整个流程高效且操作开销最低。 D选项:同样使用了Amazon API Gateway,但错误地选择了AWS Glue来处理实时数据流(Glue不适合实时场景),尽管使用了Lambda函数,但整体架构不够优化。 78 / 100 分类: SAA-C03 78. A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years. What is the MOST operationally efficient solution that meets these requirements? A. Use DynamoDB point-in-time recovery to back up the table continuously. B. Use AWS Backup to create backup schedules and retention policies for the table. C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket. D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket. 一家公司需要将用户交易数据保存在亚马逊DynamoDB表中,且必须将这些数据保留7年。在这些要求下,以下哪个是最具操作效率的解决方案?
A. 使用DynamoDB时间点恢复功能持续备份表。
B. 使用AWS Backup为表创建备份计划和保留策略。
C. 通过DynamoDB控制台创建表的按需备份,将备份存储在亚马逊S3存储桶中,并为S3存储桶设置S3生命周期配置。
D. 创建亚马逊EventBridge(亚马逊CloudWatch事件)规则来调用AWS Lambda函数,配置该Lambda函数以备份表并将备份存储在亚马逊S3存储桶中,同时为S3存储桶设置S3生命周期配置。
A. A B. B C. C D.
D 正确的答案是B,使用AWS Backup为表创建备份计划和保留策略。 解析如下: 选项A:DynamoDB时间点恢复(PITR)只能提供35天的连续备份,无法满足7年的保留要求。 选项B(正确答案):AWS Backup是专门为长期数据保留设计的服务,可以设置自定义的备份计划和保留策略(最长达100年),完全满足7年的合规要求。 选项C:手动创建按需备份并存储在S3中虽然可行,但缺乏自动化管理,且需要手动配置生命周期策略,操作效率低。 选项D:使用Lambda函数自定义备份方案虽然技术上可行,但增加了开发和维护成本,不是最优的运维方案。 AWS Backup提供了最完整、最易于管理且符合企业级标准的备份方案,既能满足7年的数据保留要求,又能最大限度地降低运维复杂度。 79 / 100 分类: SAA-C03 79. A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly. What should a solutions architect recommend? A. Create a DynamoDB table in on-demand capacity mode. B. Create a DynamoDB table with a global secondary index. C. Create a DynamoDB table with provisioned capacity and auto scaling. D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table. 一家公司计划使用Amazon DynamoDB表进行数据存储。该公司对成本优化十分关注。该表在大多数早晨不会被使用。 在晚间时段,读写流量往往会变得难以预测。当出现流量高峰时,它们会非常迅速地发生。 解决方案架构师应该推荐什么方案? A. 创建按需容量模式的DynamoDB表。 B. 创建带有全局二级索引的DynamoDB表。 C. 创建具有预置容量和自动扩展功能的DynamoDB表。 D. 创建预置容量模式的DynamoDB表,并将其配置为全局表。 A. A B. B C. C D. D 正确答案是A(按需容量模式创建DynamoDB表),因为:
1. 题目描述明确指出表在多数早晨不会使用,晚上流量不可预测且会突然激增,这非常符合按需模式的适用场景——流量不可预测且波动剧烈
2. 按需模式会自动即时扩展以应对流量高峰,无需预置容量或配置自动扩展(选项C的不足)
3. 全局二级索引(选项B)和全局表(选项D)主要解决数据访问模式和跨区域复制问题,与成本优化无直接关系
4. 预置容量模式(选项C和D)需要对流量进行预估,在题目描述的不可预测流量场景下容易造成资源浪费或性能不足
80 / 100 分类: SAA-C03 80. A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner’s AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner’s AWS account? A. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner’s AWS account to use the key. B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the key policy to allow the MSP Partner’s AWS account to use the key. C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the key policy to trust a new KMS key that is owned by the MSP Partner for encryption. D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner’s AWS account. Encrypt the S3 bucket with a new KMS key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner’s AWS account. 一家公司最近与一家AWS管理服务提供商(MSP)合作伙伴签订了合同,以协助应用迁移计划。一位解决方案架构师需要将现有的AWS账户中的亚马逊机器镜像(AMI)共享给MSP合作伙伴的AWS账户。该AMI由亚马逊弹性块存储(Amazon EBS)支持,并使用AWS密钥管理服务(AWS KMS)客户管理的密钥来加密EBS卷快照。解决方案架构师与MSP合作伙伴的AWS账户共享该AMI的最安全方式是什么?
A. 将加密的AMI和快照公开可用。修改密钥策略以允许MSP合作伙伴的AWS账户使用该密钥。
B. 修改AMI的launchPermission属性。仅与MSP合作伙伴的AWS账户共享AMI。修改密钥策略以允许MSP合作伙伴的AWS账户使用该密钥。
C. 修改AMI的launchPermission属性。仅与MSP合作伙伴的AWS账户共享AMI。修改密钥策略以信任由MSP合作伙伴拥有的新KMS密钥进行加密。
D. 将AMI从源账户导出到MSP合作伙伴的AWS账户中的亚马逊S3存储桶。使用由MSP合作伙伴拥有的新KMS密钥对S3存储桶进行加密。在MSP合作伙伴的AWS账户中复制并启动AMI。
A. A B. B C. C D. D 正确答案是B,因为这是共享加密AMI及其KMS密钥的最安全方式。 选项A错误:将加密的AMI和快照公开可用会严重降低安全性,暴露给不必要的访问。 选项B正确:通过修改AMI的launchPermission属性,仅与MSP Partner的AWS账户共享AMI,并修改密钥策略以允许MSP Partner使用密钥,确保了共享的最小权限和安全性。 选项C错误:信任MSP Partner拥有的新KMS密钥进行加密,虽然可以共享AMI,但没有必要更改KMS密钥,增加了不必要的复杂性。 选项D错误:将AMI导出到MSP Partner账户的S3存储桶并加密,虽然可行,但步骤繁琐且不是最直接安全的方法。 81 / 100 分类: SAA-C03 81. A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless.
The solutions architect must ensure that the application is loosely coupled and the job items are durably stored. Which design should the solutions architect use? A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage. B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage. C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue. D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic. 一位解决方案架构师正在为将部署在AWS上的新应用程序设计云架构。该流程需要并行运行,同时根据待处理任务的数量来动态添加和移除应用节点。该处理器应用程序是无状态的。解决方案架构师必须确保应用程序是松耦合的,且任务项目被持久化存储。以下哪种设计应该被解决方案架构师采用? A. 创建一个Amazon SNS主题来发送需要处理的任务。创建一个包含处理器应用程序的Amazon机器镜像(AMI)。创建一个使用该AMI的启动配置。使用该启动配置创建一个自动扩展组。设置自动扩展组的扩展策略,根据CPU使用率来添加和移除节点。 B. 
创建一个Amazon SQS队列来保存需要处理的任务。创建一个包含处理器应用程序的Amazon机器镜像(AMI)。创建一个使用该AMI的启动配置。使用该启动配置创建一个自动扩展组。设置自动扩展组的扩展策略,根据网络使用率来添加和移除节点。 C. 创建一个Amazon SQS队列来保存需要处理的任务。创建一个包含处理器应用程序的Amazon机器镜像(AMI)。创建一个使用该AMI的启动模板。使用该启动模板创建一个自动扩展组。设置自动扩展组的扩展策略,根据SQS队列中的项目数量来添加和移除节点。 D. 创建一个Amazon SNS主题来发送需要处理的任务。创建一个包含处理器应用程序的Amazon机器镜像(AMI)。创建一个使用该AMI的启动模板。使用该启动模板创建一个自动扩展组。设置自动扩展组的扩展策略,根据发布到SNS主题的消息数量来添加和移除节点。 A. A B. B C. C D. D 正确答案是C,原因如下: 1. **使用Amazon SQS队列**:题目要求作业项需要被持久化存储,而SQS提供了持久的消息队列服务,能够确保消息不会丢失。SNS则更适合广播消息,不保证持久存储。 2. **基于队列长度扩展**:题目明确要求根据待处理作业的数量动态扩展节点。选项C中Auto Scaling组基于SQS队列中的消息数量扩展,这是最直接的方式。而选项A和B分别基于CPU和网络使用率,这些指标无法直接反映作业数量的变化。 3. **启动模板**:选项C使用了启动模板(launch template),这是AWS推荐的新方式,比启动配置(launch configuration)更灵活。 其他选项的问题:– A和D使用了SNS,无法保证作业的持久存储。– B虽然使用了SQS,但扩展策略基于网络使用率,与作业数量无关。– D的扩展策略基于SNS消息发布数量,同样无法反映实际待处理作业量。 82 / 100 分类: SAA-C03 82. A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into AWS Certificate Manager (ACM). The company’s security team must be notified 30 days before the expiration of each certificate. What should a solutions architect recommend to meet this requirement? A. Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30 days before any certificate will expire. B. Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant resource. C. Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS). D.
Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS). 一家公司在AWS云中托管其网络应用程序。该公司配置弹性负载均衡器以使用导入到AWS证书管理器(ACM)中的证书。公司的安全团队必须在每个证书到期前30天收到通知。 解决方案架构师应推荐什么来满足这一要求? A. 在ACM中添加一条规则,从任何证书到期前30天开始,每天向Amazon Simple Notification Service(Amazon SNS)主题发布一条自定义消息。 B. 创建一个AWS Config规则,用于检查将在30天内到期的证书。配置Amazon EventBridge(Amazon CloudWatch Events)在AWS Config报告不合规资源时,通过Amazon Simple Notification Service(Amazon SNS)触发自定义警报。 C. 使用AWS Trusted Advisor检查将在30天内到期的证书。创建一个基于Trusted Advisor指标(用于检查状态变更)的Amazon CloudWatch警报。配置该警报通过Amazon Simple Notification Service(Amazon SNS)发送自定义警报。 D. 创建一个Amazon EventBridge(Amazon CloudWatch Events)规则来检测将在30天内到期的任何证书。配置该规则调用一个AWS Lambda函数。配置该Lambda函数通过Amazon Simple Notification Service(Amazon SNS)发送自定义警报。 A. A B. B C. C D. D 正确答案是B。A选项的ACM本身不提供自动通知功能,需要手动设置或借助其他服务,无法直接满足需求。B选项使用AWS Config规则检查30天内到期的证书,并通过Amazon EventBridge触发Amazon SNS发送警报,是标准的证书到期监控方案,完全符合题目要求。C选项的Trusted Advisor虽然能检查证书到期情况,但该服务不提供自动通知功能,需要通过手动检查,不适合自动化监测场景。D选项虽然理论上可行,但相比B选项不够直接和标准,AWS Config专为资源合规性检查设计,更适合证书到期监控场景。 83 / 100 分类: SAA-C03 83. A company’s dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site’s backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed. What should the solutions architect recommend? A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it. B. Move the website to Amazon S3. Use Cross-Region Replication between Regions. C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers. D. Use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers. 
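上面第82题答案B中的AWS Config规则,在实践中可直接使用AWS托管规则 `acm-certificate-expiration-check`。下面是一个最小示意(Python):仅在本地构造传给 boto3 `config.put_config_rule` 的参数(规则名为假设值),不实际调用AWS。

```python
# 示意:构造AWS Config托管规则参数,检查30天内到期的ACM证书(不调用AWS)
import json

config_rule = {
    "ConfigRuleName": "acm-cert-expiry-30d",  # 假设的规则名
    "Source": {
        "Owner": "AWS",  # 托管规则由AWS提供
        "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
    },
    # 到期前30天即判定证书资源不合规
    "InputParameters": json.dumps({"daysToExpiration": "30"}),
}

print(config_rule["InputParameters"])
```

不合规结果再经EventBridge规则转发到SNS主题,即题中B选项描述的完整告警链路。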
一家公司的动态网站目前托管在美国的本地服务器上。该公司正在欧洲推出其产品,并希望为新欧洲用户优化网站加载速度。网站后端必须保留在美国。产品将在几天内推出,需要一个立即生效的解决方案。解决方案架构师应该建议采取什么方案?
A. 在美东-1区域启动Amazon EC2实例,并将网站迁移至该实例。
B. 将网站迁移至Amazon S3,并在区域间使用跨区域复制功能。
C. 使用Amazon CloudFront并设置指向本地服务器的自定义源站。
D. 使用Amazon Route 53地理接近度路由策略指向本地服务器。
A. A B. B C. C D. D 这是一个需要优化欧洲用户访问美国本地服务器托管动态网站加载速度的场景,同时需要保持后端在美国。以下是各选项的详细分析: A选项错误:在us-east-1(美国东部)启动EC2实例并不能解决欧洲用户访问速度慢的问题,只是将服务器从本地迁移到AWS美国区域,没有解决跨大西洋网络延迟的核心问题。 B选项错误:将网站迁移到S3并使用跨区域复制虽然可以加速静态内容分发,但对于动态网站而言,S3不适合托管后端应用逻辑,且跨区域复制无法满足后端必须保留在美国的要求。 C选项正确:使用CloudFront分发网络是理想选择。CloudFront边缘节点可以缓存动态内容,将响应时间从几百毫秒降低到几十毫秒。自定义源站可指向美国本地服务器,既保持后端在美国,又通过边缘节点加速欧洲用户访问。且CloudFront部署快速,能满足立即上线需求。 D选项错误:Route 53基于地理位置的路由策略虽可将用户定向到最优终端节点,但题目中使用的是本地服务器而非全球分布的端点,无法实际减少欧洲用户访问美国服务器的网络延迟。 84 / 100 分类: SAA-C03 84. A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-peak hours. The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement automation to stop the development and test EC2 instances when they are not in use. Which EC2 instance purchasing solution will meet the company’s requirements MOST cost-effectively? A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances. B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances. C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances. D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.
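上面第83题答案C的核心是为CloudFront分发配置指向本地服务器的自定义源站。下面是一个最小示意(Python):仅在本地构造 boto3 `cloudfront.create_distribution` 中 `DistributionConfig` 的源站片段(域名、ID均为假设值),不实际调用AWS。

```python
# 示意:CloudFront自定义源站(custom origin)配置片段(不调用AWS)
origin = {
    "Id": "onprem-origin",                   # 假设的源站ID
    "DomainName": "www.example-onprem.com",  # 假设的美国本地服务器域名
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",  # 回源一律走HTTPS
    },
}

distribution_config_fragment = {
    # 完整的DistributionConfig还需CallerReference、DefaultCacheBehavior等字段
    "Origins": {"Quantity": 1, "Items": [origin]},
}

print(origin["DomainName"])
```

动态内容可通过较短的TTL或按请求头/Cookie转发实现,欧洲用户仍可受益于边缘网络的TCP/TLS就近接入加速。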
一家公司希望降低其现有三层网络架构的成本。该系统目前为开发、测试和生产环境在亚马逊EC2实例上运行网络服务器、应用服务器和数据库服务器。在高峰时段,EC2实例的平均CPU利用率为30%,非高峰时段则为10%。生产环境的EC2实例每天24小时运行。开发与测试环境的EC2实例每天至少运行8小时。公司计划通过自动化手段在非使用时段停止开发与测试环境的EC2实例。哪种EC2实例购买方案能以最具成本效益的方式满足该公司的需求? A. 对生产环境EC2实例使用竞价实例(Spot Instances),对开发与测试环境EC2实例使用预留实例(Reserved Instances)。 B. 对生产环境EC2实例使用预留实例(Reserved Instances),对开发与测试环境EC2实例使用按需实例(On-Demand Instances)。 C. 对生产环境EC2实例使用竞价块(Spot blocks),对开发与测试环境EC2实例使用预留实例(Reserved Instances)。 D. 对生产环境EC2实例使用按需实例(On-Demand Instances),对开发与测试环境EC2实例使用竞价块(Spot blocks)。 A. A B. B C. C D. D 解析: 1. **生产环境(24小时运行)**:选择预留实例(Reserved Instances)最经济,因为长期稳定运行的负载使用预留实例可获得最高达75%的折扣。2. **开发测试环境(每天8小时运行)**:按需实例(On-Demand)更适合,因为: – 虽然有自动启停机制,但运行时长远低于预留实例要求(1年/3年) – 无法预测具体运行时段,不符合Spot实例适用场景 – 按需实例可随时启停且无长期承诺 错误选项分析: A. 生产环境使用Spot实例不可靠,可能被中断 C. Spot blocks最长只能运行6小时,不适合24小时生产环境 D. 生产环境使用按需实例成本高于预留实例,开发测试用Spot blocks无法保证稳定性 85 / 100 分类: SAA-C03 85. A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requirement, new documents cannot be modified or deleted after they are stored. What should a solutions architect do to meet this requirement? A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled. B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically. C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only. D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-only mode. 一家公司拥有一个生产型网络应用程序,用户通过网页界面或移动应用上传文档。根据新的监管要求,存储后的新文档不能被修改或删除。 解决方案架构师应采取什么措施来满足此要求? A. 将上传的文档存储在启用了S3版本控制和S3对象锁的Amazon S3存储桶中。 B. 将上传的文档存储在Amazon S3存储桶中。配置S3生命周期策略以定期归档文档。 C. 将上传的文档存储在启用了S3版本控制的Amazon S3存储桶中。配置访问控制列表(ACL)将所有访问限制为只读。 D. 将上传的文档存储在Amazon弹性文件系统(Amazon EFS)卷上。通过以只读模式挂载卷来访问数据。 A. A B. B C. C D.
D 根据题目要求,新存储的文档不能被修改或删除。 选项A正确:启用S3版本控制和S3对象锁定功能后,可以防止文件被覆盖或删除,强制保留对象的不可变状态,完全符合监管要求。 选项B错误:生命周期策略只能定期转换存储类别或过期删除对象,无法防止文件被修改或立即删除,不满足要求。 选项C错误:虽然版本控制可以保留旧版本,但仅设置ACL为只读不能防止文件被覆盖(通过版本ID仍可修改),而且ACL不控制删除操作。 选项D错误:EFS虽然可以挂载为只读,但本质上仍允许有写入权限的用户修改源文件,且无法防止删除操作。 86 / 100 分类: SAA-C03 86. A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently. Which solution meets these requirements? A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager. B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter. C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database. D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database. 一家公司拥有多台需要频繁访问同一个Amazon RDS MySQL多可用区数据库实例的Web服务器。 该公司希望为Web服务器提供一种安全的数据库连接方法,同时满足频繁轮换用户凭证的安全要求。 哪种解决方案符合这些要求? A. 将数据库用户凭证存储在AWS Secrets Manager中。授予必要的IAM权限以允许Web服务器访问AWS Secrets Manager。 B. 将数据库用户凭证存储在AWS Systems Manager OpsCenter中。授予必要的IAM权限以允许Web服务器访问OpsCenter。 C. 将数据库用户凭证存储在安全的Amazon S3存储桶中。授予必要的IAM权限以允许Web服务器检索凭证并访问数据库。 D. 将数据库用户凭证存储在Web服务器文件系统上使用AWS Key Management Service(AWS KMS)加密的文件中。Web服务器应能够解密这些文件并访问数据库。 A. A B. B C. C D. 
D 该题目考察的是AWS中安全存储并自动轮换数据库凭证的最佳实践。 A选项(正确): AWS Secrets Manager专门用于安全存储和管理敏感信息(如数据库凭证),支持自动轮换凭证功能,且可以通过IAM权限精细控制访问。这完全符合题目中’安全访问’和’频繁轮换凭证’的需求。 B选项错误: Systems Manager OpsCenter主要用于运营问题的修复和自动化,不是专为凭证管理设计的服务,缺乏自动轮换功能。 C选项错误: 虽然S3可以存储加密的凭证文件,但需要自行实现轮换机制,且每次访问都需要下载文件,不如Secrets Manager直接集成RDS来得安全高效。 D选项错误: 在本地文件系统存储加密文件需要自行管理密钥和轮换过程,不符合云服务的最佳实践,且难以实现自动化的凭证轮换。 87 / 100 分类: SAA-C03 87. A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events. A solutions architect needs to design a solution that stores customer data that is created during database upgrades. Which solution will meet these requirements? A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy. B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database. C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database. D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database. 一家公司在由亚马逊API网关API调用的AWS Lambda函数上托管了一个应用程序。这些Lambda函数将客户数据保存到亚马逊Aurora MySQL数据库中。每当公司升级数据库时,在升级完成之前,Lambda函数都无法建立数据库连接。结果是部分事件的客户数据未被记录。解决方案架构师需要设计一个解决方案,以存储在数据库升级期间创建的客户数据。哪种解决方案能够满足这些需求? A. 配置一个位于Lambda函数和数据库之间的亚马逊RDS代理。将Lambda函数配置为连接到RDS代理。B. 将Lambda函数的运行时间增加到最大值。在代码中创建重试机制以将客户数据存储到数据库中。C. 将客户数据持久化到Lambda本地存储。配置新的Lambda函数以扫描本地存储并将客户数据保存到数据库。D. 将客户数据存储在亚马逊简单队列服务(Amazon SQS)FIFO队列中。创建一个新的Lambda函数来轮询队列并将客户数据存储在数据库中。 A.
Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy. B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database. C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database. D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database. 正确答案是D。解析如下: 选项A(配置Amazon RDS代理):RDS代理确实可以帮助管理数据库连接池并提高可用性,但在数据库升级期间,代理本身也无法访问底层数据库,因此无法解决数据丢失问题。 选项B(增加Lambda运行时间并创建重试机制):虽然增加运行时间和重试可以延长尝试时间,但如果数据库升级持续时间超过Lambda最大超时期限(15分钟),或者期间多次重试都失败,仍会导致数据丢失。 选项C(使用Lambda本地存储):Lambda本地存储是临时性的,当函数实例被回收时数据会丢失,不适合用作持久化方案。此外这种方法还增加了额外的复杂性。 选项D(使用SQS FIFO队列):这是最佳解决方案,原因:1) SQS队列可持久保存消息长达14天;2) 通过队列解耦后,即使数据库升级期间也可以安全存储客户数据;3) 独立的消费者函数可以等到数据库可用后再处理堆积的消息;4) FIFO队列可以保证消息顺序性。此方案既确保了数据持久性,又保持了系统可靠性。 88 / 100 分类: SAA-C03 88. A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible. Which solution will meet these requirements? A. Configure the Requester Pays feature on the company’s S3 bucket. B. Configure S3 Cross-Region Replication from the company’s S3 bucket to one of the marketing firm’s S3 buckets. C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company’s S3 bucket. D. Configure the company’s S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm’s S3 buckets.
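第87题选D的要点可以用一个极简的内存队列模拟:数据库升级期间消息留在队列里不丢失,数据库恢复后再统一落库。真实方案应使用SQS(消息默认保留4天,最长14天),下面的类名与结构均为演示用的假设:

```python
from collections import deque

class DurableBuffer:
    """模拟 SQS 在 Lambda 与数据库之间的解耦作用(仅为示意)。"""

    def __init__(self):
        self.queue = deque()

    def send(self, record):
        """生产者侧:相当于 sqs.send_message,无论数据库是否可用都能成功。"""
        self.queue.append(record)

    def drain(self, db_available, write_fn):
        """消费者侧:数据库可用时依次写库并返回写入条数;
        不可用时什么都不做,消息原样保留在队列中。"""
        written = 0
        while db_available and self.queue:
            write_fn(self.queue.popleft())
            written += 1
        return written
```

用法示意:升级窗口内 drain 不写库且队列不丢数据;升级结束后一次性补写全部积压消息,这正是"独立消费者等数据库可用后再处理堆积消息"的含义。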
一家调查公司多年来从美国各地区收集数据。该公司将数据存储在大小为3TB且不断增长的亚马逊S3存储桶中。 该公司开始与一家拥有S3存储桶的欧洲营销公司共享数据。该公司希望确保其数据传输成本尽可能低。 哪种解决方案能够满足这些要求? A. 在该公司的S3存储桶上配置"请求者付费"功能。 B. 配置从该公司S3存储桶到营销公司某个S3存储桶的跨区域复制。 C. 为营销公司配置跨账户访问权限,使其能够访问该公司的S3存储桶。 D. 将该公司的S3存储桶配置为使用S3智能分层存储。将该S3存储桶同步到营销公司的某个S3存储桶。 A. A B. B C. C D. D 正确答案是A。题目要求的是让该公司"自己的"数据传输成本尽可能低。 选项A(正确):在公司的S3存储桶上启用"请求者付费"(Requester Pays)后,下载产生的请求与数据传输费用由发起请求的营销公司承担,公司只需继续支付存储费用,其自身的数据传输成本降到最低,正好满足题目要求。 选项B:跨区域复制产生的区域间数据传输费用由源存储桶所有者(公司)支付,且数据不断增长会带来持续的复制流量和目标桶的重复存储成本。 选项C:普通跨账户访问下,数据传出费用默认仍由存储桶所有者承担,营销公司反复跨区域读取3TB且不断增长的数据,公司的传输成本会持续上升。 选项D:S3智能分层优化的是存储成本而非传输成本,同步操作仍会产生由公司支付的跨区域数据传输费用。 89 / 100 分类: SAA-C03 89. A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution. What should a solutions architect do to secure the audit documents? A. Enable the versioning and MFA Delete features on the S3 bucket. B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. C. Add an S3 Lifecycle policy to the audit team’s IAM user accounts to deny the s3:DeleteObject action during audit dates. D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key. 一家公司使用亚马逊简单存储服务(Amazon S3)存储其机密审计文件。该S3存储桶采用最小权限原则,通过存储桶策略限制仅审计团队IAM用户凭证可访问。 公司管理层担心S3存储桶中的文件被意外删除,希望获得更安全的解决方案。 解决方案架构师应采取什么措施来保护审计文件? A. 在S3存储桶上启用版本控制和MFA删除功能。 B. 为每个审计团队IAM用户账户的IAM用户凭证启用多因素认证(MFA)。 C. 向审计团队的IAM用户账户添加S3生命周期策略,在审计日期期间拒绝s3:DeleteObject操作。 D. 使用AWS密钥管理服务(AWS KMS)加密S3存储桶,并限制审计团队IAM用户账户访问KMS密钥。 A. Enable the versioning and MFA Delete features on the S3 bucket. B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. C.
Add an S3 Lifecycle policy to the audit team’s IAM user accounts to deny the s3:DeleteObject action during audit dates. D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key. 正确答案是A,因为在S3存储桶上启用版本控制和MFA Delete功能可以有效防止文档被意外删除。版本控制可以保留对象的不同版本,即使删除了一个对象也可以恢复之前的版本。MFA Delete则需要额外的身份验证才能删除对象,增加了安全性。 选项B不正确,因为在IAM用户凭证上启用MFA虽然增加了账户安全性,但无法直接防止S3存储桶中的文档被意外删除。 选项C不正确,因为S3生命周期策略用于管理对象的存储类别和过期时间,而不是用于控制删除权限。此外,生命周期策略是应用于存储桶而不是IAM用户账户的。 选项D不正确,因为虽然使用AWS KMS加密S3存储桶可以增加数据的安全性,但并不能防止文档被意外删除。加密更多是针对数据保密性而非防止删除。 90 / 100 分类: SAA-C03 90. A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance. A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must report a final total during business hours. The company’s development team notices that the database performance is inadequate for development tasks when the script is running. A solutions architect must recommend a solution to resolve this issue. Which solution will meet this requirement with the LEAST operational overhead? A. Modify the DB instance to be a Multi-AZ deployment. B. Create a read replica of the database. Configure the script to query only the read replica. C. Instruct the development team to manually export the entries in the database at the end of each day. D. Use Amazon ElastiCache to cache the common queries that the script runs against the database. 一家公司正在使用一个可公开访问的SQL数据库来存储电影数据。该数据库运行在亚马逊RDS单可用区数据库实例上。 每天有一个脚本在随机时间运行查询,记录数据库中新增的电影数量。该脚本必须在工作时间报告最终总数。 公司的开发团队注意到,当脚本运行时,数据库性能无法满足开发任务需求。解决方案架构师必须推荐一个解决方案来解决这个问题。 哪种方案能以最小的运维开销满足这一需求? A. 将数据库实例修改为多可用区部署。B. 创建数据库的只读副本。配置脚本使其只查询该只读副本。C. 指导开发团队每天结束时手动导出数据库中的条目。D. 使用Amazon ElastiCache来缓存脚本对数据库运行的常见查询。 A. A B. B C. C D.
D 正确答案是B,为数据库创建一个只读副本,并配置脚本仅查询该只读副本。以下是详细解析: A选项(将数据库实例修改为多可用区部署):多可用区部署主要提供高可用性和故障转移能力,但不能直接解决读写分离的问题,对减轻主数据库的性能压力帮助有限。此外,多可用区部署会增加成本和操作复杂性,并不是最优解。 B选项(创建只读副本):1. 只读副本可以分担主数据库的读取负载,避免脚本查询对开发任务的干扰2. RDS的只读副本设置简单,维护成本低,符合题目要求的最小运维开销3. 该方案实现了读写分离,让主数据库专注于开发团队的写入操作 C选项(手动导出数据):1. 完全依赖人工操作,不可靠且效率低下2. 无法满足脚本需要随机查询和实时报告的需求3. 增加了人工操作成本,违反最小运维开销原则 D选项(使用ElastiCache缓存查询):1. 适用于重复性高的查询,但题目中脚本是随机查询新增电影数2. 对于这种需要获取最新数据的情况,缓存可能导致返回过时结果3. 需要额外的缓存管理和维护工作 91 / 100 分类: SAA-C03 91. A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company’s security regulations, no traffic from the applications is allowed to travel across the internet. Which solution will meet these requirements? A. Configure an S3 gateway endpoint. B. Create an S3 bucket in a private subnet. C. Create an S3 bucket in the same AWS Region as the EC2 instances. D. Configure a NAT gateway in the same subnet as the EC2 instances. 一家公司在虚拟私有云(VPC)中的亚马逊EC2实例上运行应用程序。其中一个应用程序需要调用亚马逊S3应用程序编程接口(API)来存储和读取对象。根据该公司的安全规定,不允许任何来自应用程序的流量通过互联网传输。 哪种解决方案能够满足这些要求? A. 配置一个S3网关终端节点。 B. 在私有子网中创建一个S3存储桶。 C. 在与EC2实例相同的亚马逊云科技(AWS)区域中创建一个S3存储桶。 D. 在与EC2实例相同的子网中配置一个网络地址转换(NAT)网关。 A. A B. B C. C D. D 正确答案是A,配置S3网关端点。因为S3网关端点允许VPC中的EC2实例通过AWS内部网络直接访问S3服务,无需经过互联网,符合公司不允许流量通过互联网的安全规定。 选项B错误,因为S3存储桶本身没有子网的概念,S3是一个区域级别的服务,无法直接部署在私有子网中。 选项C错误,仅仅将S3存储桶创建在与EC2实例相同的AWS区域中并不能解决流量必须通过互联网的问题。 选项D错误,NAT网关虽然可以提供互联网访问能力,但仍然需要经过互联网来访问S3服务,这违反公司的安全规定。 92 / 100 分类: SAA-C03 92. A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC. Which combination of steps should a solutions architect take to accomplish this? (Choose two.) A. Configure a VPC gateway endpoint for Amazon S3 within the VPC. B. Create a bucket policy to make the objects in the S3 bucket public. C. 
Create a bucket policy that limits access to only the application tier running in the VPC. D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance. E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket. 一家公司正在将敏感用户信息存储在亚马逊S3存储桶中。该公司希望从虚拟私有云(VPC)内运行在亚马逊EC2实例上的应用层安全地访问该存储桶。 解决方案架构师应采取哪两种步骤组合来实现这一目标?(选择两项。) A. 在VPC内为亚马逊S3配置VPC网关终端节点。 B. 创建存储桶策略以使S3存储桶中的对象公开。 C. 创建存储桶策略,将访问权限限制为仅允许在VPC中运行的应用层。 D. 创建一个具有S3访问策略的IAM用户,并将IAM凭证复制到EC2实例上。 E. 创建一个NAT实例,并让EC2实例使用该NAT实例来访问S3存储桶。 A. A B. B C. C D. D E. E 要实现从VPC内的EC2实例安全访问存储敏感用户信息的S3存储桶,最佳组合方案是: A. 在VPC内配置S3的VPC网关终端节点 – 正确。VPC网关终端节点允许从VPC内直接访问S3而无需经过Internet,提供更安全的私有连接通道。 C. 创建桶策略限制仅允许VPC内的应用程序层访问 – 正确。通过添加基于VPC端点ID的条件限制,可以精细控制仅特定VPC内的资源能访问存储桶。 错误选项分析:B. 使存储桶对象公开 – 完全违背安全需求,会暴露敏感数据。D. 将IAM凭证复制到EC2实例 – 违反安全最佳实践,存在凭证泄露风险,应使用IAM角色而非复制凭证。E. 使用NAT实例访问 – 效率低下且增加不必要的中转环节,VPC端点方案更直接安全。 93 / 100 分类: SAA-C03 93. A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application’s elasticity and availability. The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company’s development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes. A solutions architect must recommend replacement architecture that alleviates the application latency issue. The replacement architecture also must give the development team the ability to continue using the staging environment without delay. Which solution meets these requirements? A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility. B.
Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on demand. C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database. D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility. 一家公司运行着一个基于MySQL数据库的本地应用程序。该公司正在将该应用程序迁移到AWS,以提高应用程序的弹性和可用性。 当前的架构在正常运营期间显示出数据库上沉重的读取活动。每4小时,公司的开发团队会拉取生产数据库的完整导出,以填充到暂存环境中的数据库。在此期间,用户会经历无法接受的应用程序延迟。开发团队在该过程完成前无法使用暂存环境。 解决方案架构师必须推荐一种替换架构,以缓解应用程序延迟问题。替换架构还必须使开发团队能够无延迟地继续使用暂存环境。 哪种解决方案满足这些要求? A. 使用具有多可用区Aurora副本的Amazon Aurora MySQL作为生产环境。通过实现使用mysqldump工具的备份和恢复过程来填充暂存数据库。 B. 使用具有多可用区Aurora副本的Amazon Aurora MySQL作为生产环境。按需使用数据库克隆来创建暂存数据库。 C. 使用具有多可用区部署和读取副本的Amazon RDS for MySQL作为生产环境。将备用实例用于暂存数据库。 D. 使用具有多可用区部署和读取副本的Amazon RDS for MySQL作为生产环境。通过实现使用mysqldump工具的备份和恢复过程来填充暂存数据库。 A. A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility. B. B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on demand. C. C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database. D. D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility. 正确答案是B,因为Amazon Aurora MySQL的数据库克隆功能能够立即创建完整的数据库副本,而不会对生产数据库的性能造成影响。这解决了生产环境在数据导出期间出现的延迟问题,并允许开发团队无需等待即可使用staging环境。选项A和D中的mysqldump工具在进行全量导出时会锁定表或导致性能下降,不能解决延迟问题。选项C中虽然使用了备用实例,但RDS的备用实例是用于高可用性的故障转移,其数据是实时的且与主实例保持一致,不能作为独立的staging环境使用,开发团队使用时可能影响生产性能。 94 / 100 分类: SAA-C03 94. 
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis. Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files. Which solution meets these requirements with the LEAST operational overhead? A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster. B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB. C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB. D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster. 一家公司正在设计一款应用程序,用户可将小文件上传至亚马逊S3。用户上传文件后,需要对文件进行一次性简单处理,以转换数据并将数据保存为JSON格式供后续分析。每个文件上传后必须尽快处理。需求量会有波动——某些日子用户会批量上传文件,而其他日子可能仅上传少量或不上传文件。 哪种解决方案能以最少的运维开销满足这些需求? A. 配置亚马逊EMR从亚马逊S3读取文本文件,运行处理脚本转换数据,并将生成的JSON文件存储于亚马逊Aurora数据库集群中 B. 配置亚马逊S3向亚马逊简单队列服务(Amazon SQS)发送事件通知,使用亚马逊EC2实例从队列读取并处理数据,将生成的JSON文件存储于亚马逊DynamoDB C. 配置亚马逊S3向亚马逊简单队列服务(Amazon SQS)发送事件通知,使用AWS Lambda函数从队列读取并处理数据,将生成的JSON文件存储于亚马逊DynamoDB D. 配置亚马逊EventBridge(亚马逊CloudWatch Events)在上传新文件时向Amazon Kinesis数据流发送事件,使用AWS Lambda函数从流中消费事件并处理数据,将生成的JSON文件存储于亚马逊Aurora数据库集群 A. A B. B C. C D.
D 正确答案是C,原因如下: A选项使用Amazon EMR处理数据,虽然EMR可以处理大量数据,但它主要用于大数据处理场景,不适合处理小文件,且配置和维护EMR集群会增加操作复杂性。 B选项使用Amazon EC2实例从SQS队列中读取和处理数据,虽然可以实现需求,但需要手动管理EC2实例的扩展和缩减,增加操作负担。 C选项使用AWS Lambda函数从SQS队列中读取和处理数据,Lambda可以自动扩缩容,无需管理服务器,完全无服务器架构,操作开销最低,能够快速处理文件,且S3事件通知和Lambda集成良好。 D选项使用Amazon EventBridge和Kinesis Data Streams,虽然也能实现需求,但相比SQS和Lambda的组合,架构更复杂,且Kinesis Data Streams会增加额外成本。 95 / 100 分类: SAA-C03 95. An application allows users at a company’s headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions architect needs to optimize the application’s performance quickly. What should the solutions architect recommend? A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone. B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone. C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database. D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database. 一个应用程序允许公司总部的用户访问产品数据。产品数据存储在亚马逊RDS MySQL数据库实例中。运营团队已经发现应用程序性能下降的问题,希望将读取流量与写入流量分离。解决方案架构师需要快速优化应用程序性能。 解决方案架构师应该推荐什么? A. 将现有数据库更改为多可用区部署。从主可用区提供读取请求。 B. 将现有数据库更改为多可用区部署。从次要可用区提供读取请求。 C. 为数据库创建读取副本。将读取副本配置为源数据库一半的计算和存储资源。 D. 为数据库创建读取副本。将读取副本配置为与源数据库相同的计算和存储资源。 A. A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone. B. B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone. C. C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database. D. D. Create read replicas for the database. 
Configure the read replicas with the same compute and storage resources as the source database. 正确答案是D,创建与源数据库具有相同计算和存储资源的读取副本。 解析:1. A选项错误:Multi-AZ部署主要用于提高数据库可用性而非性能,且读取请求仍由主可用区处理,无法分离读写流量。2. B选项错误:虽然Multi-AZ部署的次要可用区可用于故障转移,但RDS的次要可用区不自动处理读取请求,需要手动设置才能作为读取副本使用。3. C选项错误:配置一半资源的读取副本可能导致性能瓶颈,无法有效分担读取负载。4. D选项正确:创建与源数据库资源匹配的读取副本可以有效分担读取流量,快速解决性能问题,是AWS推荐的最佳实践。 因此,最优解决方案是创建与源数据库配置相同的读取副本来分离读写流量,同时确保读性能不受资源限制。 96 / 100 分类: SAA-C03 96. A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control. Which solution will satisfy these requirements? A. Configure Amazon EFS storage and set the Active Directory domain for authentication. B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones. C. Create an Amazon S3 bucket and Configure Microsoft Windows Server to mount it as a volume. D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication. 一家公司拥有一个大型的微软SharePoint部署,该部署在本地运行,需要使用微软Windows共享文件存储。 该公司希望将此工作负载迁移到AWS云,并正在考虑各种存储选项。存储解决方案必须具有高可用性,并与Active Directory集成以实现访问控制。 哪种解决方案能够满足这些需求? A. 配置Amazon EFS存储并设置Active Directory域进行身份验证。 B. 在AWS存储网关文件网关的两个可用区中创建一个SMB文件共享。 C. 创建一个Amazon S3存储桶并配置微软Windows服务器将其挂载为一个卷。 D. 在AWS上创建一个Amazon FSx for Windows文件服务器文件系统,并设置Active Directory域进行身份验证。 A. A. Configure Amazon EFS storage and set the Active Directory domain for authentication. B. B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones. C. C. Create an Amazon S3 bucket and Configure Microsoft Windows Server to mount it as a volume. D. D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication. 
正确答案是D,因为Amazon FSx for Windows File Server是专为Windows环境设计的完全托管的文件存储服务,原生支持SMB协议并与Active Directory无缝集成,满足高可用性和AD集成的要求。 A选项错误:Amazon EFS主要针对基于Linux的工作负载,不原生支持Windows文件共享或Active Directory认证。 B选项错误:虽然Storage Gateway文件网关支持SMB协议,但其主要设计用于混合云场景而非完整迁移,且实现跨AZ高可用性需要额外配置。 C选项错误:Amazon S3不能直接作为Windows文件系统挂载,需要通过第三方工具转换访问方式,且缺乏原生的Active Directory集成能力。 97 / 100 分类: SAA-C03 97. An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email. Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages. What should the solutions architect do to resolve this issue with the LEAST operational overhead? A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds. B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages. C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout. D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing. 一家图像处理公司拥有一个供用户上传图片的网页应用程序。该程序将图片上传至亚马逊简单存储服务(Amazon S3)存储桶。 公司已配置S3事件通知功能,将对象创建事件发布至亚马逊简单队列服务(Amazon SQS)标准队列。该SQS队列作为AWS Lambda函数的事件源,由Lambda处理图片并通过电子邮件将结果发送给用户。 用户反映他们每次上传图片都会收到多封电子邮件。解决方案架构师发现SQS消息多次触发Lambda函数,导致重复发送邮件。 解决方案架构师应如何以最低运营成本解决此问题? A. 在SQS队列中设置长轮询,将接收消息等待时间增加至30秒 B. 将SQS标准队列更改为SQS先进先出(FIFO)队列,使用消息去重ID来丢弃重复消息 C. 增加SQS队列中的可见性超时时间,使其超过函数超时时间和批处理窗口超时的总和 D. 
修改Lambda函数,使其在处理前立即从SQS队列中删除每条已读取的消息 A. A B. B C. C D. D 本题考察的是如何解决SQS标准队列中消息重复处理导致Lambda函数多次触发的问题。 正确答案C(增加SQS队列的可见性超时时间)的分析:1. 标准SQS队列设计上允许偶尔的消息重复(at-least-once delivery)2. 当Lambda处理消息时间超过当前visibility timeout时,消息会重新变为可见状态并被其他消费者再次处理3. 将visibility timeout设置为大于Lambda函数执行时间(包括重试时间)可以确保消息在被处理完前不会被重新投递 其他选项分析:A(设置长轮询)不正确:长轮询虽然可以减少空轮询请求,但无法解决消息重复问题B(改用FIFO队列)不正确:虽然FIFO队列支持去重,但需要修改现有架构(需要添加去重ID),操作开销较大D(立即删除消息)不正确:这样无法保证消息至少被处理一次,如果处理失败会导致消息丢失 98 / 100 分类: SAA-C03 98. A company is implementing a shared storage solution for a gaming application that is hosted in an on-premises data center. The company needs the ability to use Lustre clients to access data. The solution must be fully managed. Which solution meets these requirements? A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the file share. B. Create an Amazon EC2 Windows instance. Install and Configure a Windows file share role on the instance. Connect the application server to the file share. C. Create an Amazon Elastic File System (Amazon EFS) file system, and Configure it to support Lustre. Attach the file system to the origin server. Connect the application server to the file system. D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system. 一家公司正在为部署在本地数据中心的游戏应用程序实施共享存储解决方案。 公司需要使用Lustre客户端访问数据的能力。该解决方案必须完全托管。 哪个解决方案满足这些要求? A. 创建一个AWS Storage Gateway文件网关。创建一个使用所需客户端协议的文件共享。将应用程序服务器连接到文件共享。 B. 创建一个Amazon EC2 Windows实例。在实例上安装并配置Windows文件共享角色。将应用程序服务器连接到文件共享。 C. 创建一个Amazon Elastic File System(Amazon EFS)文件系统,并将其配置为支持Lustre。将文件系统挂载到源服务器。将应用程序服务器连接到文件系统。 D. 创建一个Amazon FSx for Lustre文件系统。将文件系统挂载到源服务器。将应用程序服务器连接到文件系统。 A. A. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the file share. B. B. Create an Amazon EC2 Windows instance.
Install and Configure a Windows file share role on the instance. Connect the application server to the file share. C. C. Create an Amazon Elastic File System (Amazon EFS) file system, and Configure it to support Lustre. Attach the file system to the origin server. Connect the application server to the file system. D. D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system. 正确答案是D,因为Amazon FSx for Lustre是AWS提供的完全托管的Lustre文件系统服务,专门支持高性能计算工作负载,如游戏应用程序。它允许使用Lustre客户端访问数据,并且AWS负责所有底层管理任务。 A选项错误,因为AWS Storage Gateway文件网关不支持Lustre协议,它主要用于通过NFS或SMB协议访问数据。 B选项错误,因为在Amazon EC2 Windows实例上安装Windows文件共享角色是一个自管理解决方案,不符合’完全托管’的要求,也不支持Lustre协议。 C选项错误,因为Amazon EFS虽然是一个完全托管的文件系统服务,但它不支持Lustre协议,而是使用自己的专有协议。 99 / 100 分类: SAA-C03 99. A company’s containerized application runs on an Amazon EC2 instance. The application needs to download security certificates before it can communicate with other business applications. The company wants a highly secure solution to encrypt and decrypt the certificates in near real time. The solution also needs to store data in highly available storage after the data is encrypted. Which solution will meet these requirements with the LEAST operational overhead? A. Create AWS Secrets Manager secrets for encrypted certificates. Manually update the certificates as needed. Control access to the data by using fine-grained IAM access. B. Create an AWS Lambda function that uses the Python cryptography library to receive and perform encryption operations. Store the function in an Amazon S3 bucket. C. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption operations. Store the encrypted data on Amazon S3. D. Create an AWS Key Management Service (AWS KMS) customer managed key. Allow the EC2 role to use the KMS key for encryption operations. Store the encrypted data on Amazon Elastic Block Store (Amazon EBS) volumes.
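补充第97题C选项的量化方法:可见性超时必须大于"函数超时 + 批处理窗口"之和,AWS 文档还建议将其设为函数超时的约6倍以容纳重试。下面的计算函数按这一思路写成,是演示用的草图(系数6取自文档建议,函数本身为假设):

```python
def recommended_visibility_timeout(function_timeout_s, batch_window_s=0):
    """返回建议的 SQS 可见性超时(秒):
    既要严格大于函数超时与批处理窗口之和,
    也不低于函数超时的6倍(AWS 文档对 Lambda 事件源的建议值)。"""
    floor = function_timeout_s + batch_window_s + 1  # 严格大于两者之和
    return max(floor, 6 * function_timeout_s)
```

例如函数超时30秒、无批处理窗口时,建议值为180秒;若批处理窗口长达200秒,则取231秒。只要消息在处理完成前不重新可见,标准队列就不会把同一封邮件发两次。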
一家公司的容器化应用程序运行在亚马逊EC2实例上。该应用程序需要先下载安全证书才能与其他业务应用程序通信。 公司希望采用高安全性解决方案来近乎实时地加密和解密证书。该解决方案还需要在数据加密后将数据存储在高度可用的存储中。 哪种方案能够以最少的管理工作量满足这些要求? A. 为加密证书创建AWS Secrets Manager密钥。根据需要手动更新证书。通过细粒度的IAM访问控制来控制数据访问权限。 B. 创建一个使用Python加密库来接收和执行加密操作的AWS Lambda函数。将该函数存储在Amazon S3存储桶中。 C. 创建一个AWS密钥管理服务(AWS KMS)客户托管密钥。允许EC2角色使用KMS密钥进行加密操作。将加密数据存储在Amazon S3上。 D. 创建一个AWS密钥管理服务(AWS KMS)客户托管密钥。允许EC2角色使用KMS密钥进行加密操作。将加密数据存储在Amazon弹性块存储(Amazon EBS)卷上。 A. A B. B C. C D. D 正确答案是C。原因如下:1. AWS KMS提供了高安全性的密钥管理服务,能够实时加密解密证书文件,符合企业要求(A选项使用Secrets Manager虽然可存储证书,但主要适用于API密钥等小数据,不适合证书文件)。2. 通过EC2角色授权使用KMS密钥,实现了最小权限原则,操作开销最低(B选项需要开发维护Lambda函数,运营成本较高)。3. 采用Amazon S3存储加密数据,具有11个9的持久性和跨AZ高可用性,优于EBS的单AZ存储(D选项使用EBS不符合高可用要求)。4. 其他选项问题:A选项需要手动更新证书不符合自动化要求;B选项的加密方案需要自行实现安全性不如KMS;D选项的EBS存储可用性不足。 100 / 100 分类: SAA-C03 100. A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates. What should the solutions architect do to enable Internet access for the private subnets? A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ. C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway. D. Create an egress-only internet gateway on one of the public subnets. 
Update the route table for the private subnets that forward non-VPC traffic to the egress-only Internet gateway. 一位解决方案架构师正在设计一个包含公有子网和私有子网的VPC。该VPC及子网使用IPv4 CIDR地址块。 为了实现高可用性,在三个可用区(AZ)中各部署了一个公有子网和一个私有子网。互联网网关用于为公有子网提供互联网访问。 私有子网需要访问互联网以允许亚马逊EC2实例下载软件更新。 解决方案架构师应该采取什么措施来为私有子网启用互联网访问? A. 创建三个NAT网关,每个可用区的公有子网中各部署一个。为每个可用区创建私有路由表,将非VPC流量转发至该可用区的NAT网关。 B. 创建三个NAT实例,每个可用区的私有子网中各部署一个。为每个可用区创建私有路由表,将非VPC流量转发至该可用区的NAT实例。 C. 在某个私有子网上创建第二个互联网网关。更新私有子网的路由表,将非VPC流量转发至私有互联网网关。 D. 在某个公有子网上创建仅出口互联网网关。更新私有子网的路由表,将非VPC流量转发至仅出口互联网网关。 A. A B. B C. C D. D 要为私有子网提供互联网访问权限,最佳实践是为每个可用区(AZ)创建一个NAT网关,并将其放在对应的公有子网中。然后为每个AZ创建一个私有路由表,将非VPC流量路由到该AZ的NAT网关。这样设计既能保证高可用性(每个AZ独立),又能确保私有子网的安全访问(通过NAT网关而不是直接连接互联网)。 A是正确答案,因为:1. NAT网关是按AZ部署的高可用托管服务 2. 每个AZ独立的NAT网关避免了跨AZ流量 3. 路由配置符合AWS最佳实践 B选项不正确因为:1. 应该用NAT网关而不是NAT实例(NAT实例需要自行管理)2. NAT实例应该部署在公有子网而不是私有子网 C选项不正确因为:1. 不能在私有子网创建互联网网关(IGW只能用于公有子网)2. 这种做法会直接暴露私有实例到互联网 D选项不正确因为:1. 仅出口互联网网关(Egress-only IGW)用于IPv6,而题目明确使用IPv4 2. 这无法提供私有子网所需的互联网访问能力 本文地址:https://www.neiwangchuantou.com/2025/02/saa-c03-no-1-100/,禁止转载
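第100题A选项"每个可用区一张私有路由表、各自指向本可用区的NAT网关"可以用一小段Python把路由结构摆出来看;入参与返回结构均为演示用的假设,真实环境对应 EC2 的 CreateRouteTable/CreateRoute 调用:

```python
def private_route_tables(az_nat_gateways, vpc_cidr='10.0.0.0/16'):
    """为每个可用区生成一张私有路由表:VPC 内部流量走 local,
    其余流量(0.0.0.0/0)指向同一可用区公有子网中的 NAT 网关,
    从而避免跨可用区流量并保持每个 AZ 独立的高可用性。
    az_nat_gateways 形如 {'us-east-1a': 'nat-a', ...}(假设的示例 ID)。"""
    tables = {}
    for az, nat_gw_id in az_nat_gateways.items():
        tables[az] = [
            {'DestinationCidrBlock': vpc_cidr, 'Target': 'local'},
            {'DestinationCidrBlock': '0.0.0.0/0', 'NatGatewayId': nat_gw_id},
        ]
    return tables
```

三个可用区各自独立:任何一个 NAT 网关或可用区故障,都不影响其余两个可用区私有子网的软件更新下载。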