Certified - AWS Certified Cloud Practitioner

In this episode, we introduce Amazon SageMaker, AWS’s fully managed service for building, training, and deploying machine learning (ML) models. SageMaker simplifies ML development with a wide range of tools and frameworks that streamline everything from data preparation to model deployment. We’ll walk you through how SageMaker helps you build ML models faster with built-in algorithms, pre-built notebook environments, and automated model tuning.
We’ll also discuss how SageMaker integrates with other AWS services like S3 for data storage and EC2 for compute, allowing you to scale your machine learning workloads easily. By the end of this episode, you’ll have a comprehensive understanding of Amazon SageMaker and how it enables you to quickly create and deploy ML models without having to manage the underlying infrastructure. Whether you’re a data scientist or just getting started with machine learning, SageMaker offers the tools you need to accelerate your ML workflows. Produced by BareMetalCyber.com, your trusted resource for expert-driven cybersecurity education.

What is Certified - AWS Certified Cloud Practitioner?

Ready to earn your AWS Certified Cloud Practitioner credential? Our prepcast is your ultimate guide to mastering the fundamentals of AWS Cloud, including security, cost management, core services, and cloud economics. Whether you're new to IT or looking to expand your cloud knowledge, this series will help you confidently prepare for the exam and take the next step in your career. Produced by BareMetalCyber.com, your trusted resource for expert-driven cybersecurity education.

Domain 3 of the Cloud Practitioner exam covers core AWS technology services, giving you a foundation for how cloud workloads are designed and operated. This domain is wide in scope, spanning infrastructure, networking, compute, storage, databases, integration, and observability. The key is not to memorize every possible service, but to understand the patterns and trade-offs—when to choose one option over another, how services complement each other, and how AWS’s managed model reduces operational burden. By the end of this domain, you should feel comfortable recognizing AWS’s building blocks and matching them to high-level requirements. This wrap-up consolidates the key takeaways, reinforcing both technical understanding and exam readiness.
The AWS global infrastructure is the backbone that supports all services. Regions are geographically isolated areas, each containing multiple Availability Zones (AZs). AZs are physically separate data centers designed for fault isolation, allowing resilient architectures across them. At the network edge, AWS provides edge locations for caching, acceleration, and DNS resolution, improving performance worldwide. For example, an application deployed in two AZs can survive the loss of one, while CloudFront distributes content globally from edge locations. Understanding Regions, AZs, and edge locations is fundamental because every service maps onto this structure, whether you’re building highly available applications or ensuring compliance through regional data residency.
Virtual Private Cloud (VPC) basics underpin AWS networking. Subnets carve out IP ranges within AZs, separating resources into public and private tiers. Route tables define how traffic flows, while gateways provide external connectivity. The Internet Gateway allows public access, the NAT Gateway supports outbound-only private access, and VPC endpoints provide private connections to AWS services without leaving the AWS backbone. For example, you might place web servers in a public subnet with an Internet Gateway, while databases live in private subnets accessed only through internal routes. The exam often tests whether you can map requirements like “private access to S3 without the internet” to the correct construct—in this case, a gateway endpoint.
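To make that last pattern concrete, here is a minimal sketch (well beyond what the exam requires) using Python and the boto3 SDK; the VPC ID, route table ID, and Region in the service name are placeholders you would replace:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint so resources in the VPC reach S3 privately,
# without an internet gateway or NAT. Routes are added automatically
# to the listed route tables.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```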
Security controls in VPCs are enforced through security groups (SGs) and network ACLs (NACLs). SGs act as stateful firewalls attached to resources, allowing only explicitly permitted inbound and outbound traffic. NACLs, by contrast, are stateless and apply at the subnet level, with ordered rules that can allow or deny. For example, SGs might allow HTTPS traffic from the internet to a web server, while NACLs block traffic from known malicious IP ranges. The key takeaway is that SGs are the preferred fine-grained control for applications, while NACLs serve as broader subnet guardrails. On the exam, cues like “stateful” point to SGs, while “stateless” or “deny rules” point to NACLs.
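A quick boto3 sketch of the stateful SG side, with a placeholder group ID. Note that only the inbound HTTPS rule is needed; because SGs are stateful, the response traffic is permitted automatically:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from anywhere to a web-server security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder web-server SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```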
Compute is one of AWS’s central pillars, with multiple choices tailored to workload needs. EC2 provides virtual machines for flexible, traditional compute. Lambda offers serverless execution for event-driven workloads, scaling automatically with no server management. Containers run on ECS or EKS, with Fargate providing serverless compute for container tasks. For example, a legacy application may need EC2, while a microservice backend could thrive on Fargate. Lambda fits when workloads are intermittent or event-driven, like processing S3 uploads. The exam emphasizes aligning compute choice with business goals: predictability and control with EC2, portability with containers, and agility with Lambda.
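For the S3-upload case, a Lambda handler can be only a few lines. This is an illustrative sketch of the event shape S3 delivers, not a production function:

```python
import urllib.parse

def lambda_handler(event, context):
    """Invoked by S3 each time an object is uploaded."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"Processing new upload: s3://{bucket}/{key}")
    return {"processed": len(event["Records"])}
```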
Storage services revolve around matching access patterns to cost. Amazon S3 offers object storage with multiple classes: Standard for frequent access, Intelligent-Tiering for automated optimization, Glacier for archival, and One Zone-IA for lower-cost storage of easily re-created data held in a single AZ rather than across several. EBS provides block storage for EC2 instances, with SSD-backed volumes for performance and HDD-backed for throughput. EFS provides managed, elastic file storage, shared across multiple instances. For example, S3 might store logs indefinitely, EBS would back a database volume, and EFS would serve content to a fleet of web servers. Knowing which storage fits each pattern is critical for both architecture and cost optimization.
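As a small illustration, the storage class can be chosen per object at upload time; the bucket and key here are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Intelligent-Tiering moves the object between access tiers
# automatically based on how often it is read.
s3.put_object(
    Bucket="example-log-bucket",  # placeholder bucket
    Key="logs/2025/01/app.log",
    Body=b"log line\n",
    StorageClass="INTELLIGENT_TIERING",
)
```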
Databases in AWS come in many flavors. Relational databases are delivered through RDS and Aurora, handling structured, transactional workloads with automated backups and scaling. DynamoDB provides managed NoSQL with high throughput and low latency, ideal for session data or user profiles. ElastiCache offers in-memory storage through Redis or Memcached, supporting leaderboards or caching hot data. For example, an e-commerce site might use RDS for orders, DynamoDB for carts, and ElastiCache to speed up queries. The exam often tests whether you can identify which engine fits the described workload, with keywords like “structured and transactional” pointing to RDS, or “millisecond latency at scale” pointing to DynamoDB.
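A hypothetical cart table (partition key user_id, sort key sku) makes the DynamoDB model tangible; all names are invented for illustration:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
carts = dynamodb.Table("Carts")  # placeholder table

# Key-based writes and reads return in single-digit milliseconds,
# with no capacity planning needed in on-demand mode.
carts.put_item(Item={"user_id": "u-1001", "sku": "WIDGET-42", "qty": 2})

item = carts.get_item(Key={"user_id": "u-1001", "sku": "WIDGET-42"})["Item"]
print(item["qty"])
```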
Networking includes Elastic Load Balancers (ELBs), which distribute traffic across resources. Application Load Balancers handle HTTP and HTTPS with path-based routing, Network Load Balancers manage ultra-low-latency TCP and UDP, and Gateway Load Balancers integrate with security appliances. For example, a microservices architecture might rely on an ALB to route traffic based on URLs, while a gaming backend might use NLBs for real-time protocols. Load balancing is central to resilience and scalability, ensuring workloads adapt to demand. The exam often uses cues like “layer 7 routing” or “millions of connections per second” to differentiate ALB from NLB.
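Path-based routing is configured as listener rules on the ALB. A hedged boto3 sketch, with placeholder listener and target group ARNs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Layer-7 routing: send anything under /api/ to a dedicated target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc123/def456",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "PathPatternConfig": {"Values": ["/api/*"]}}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/0123456789abcdef",  # placeholder
    }],
)
```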
Auto Scaling ensures elasticity, adjusting resources based on demand and health. For EC2, Auto Scaling groups scale instance counts up or down. For serverless, Lambda scales automatically in response to events, while containers can auto-scale through ECS and EKS. Auto Scaling also provides health remediation, replacing failed resources automatically. For example, a website under load might add EC2 instances behind an ALB, then scale them back when traffic subsides. The principle is to match capacity with demand, minimizing waste while ensuring performance. On the exam, words like “elasticity” or “replace failed instances” often point to Auto Scaling.
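The common EC2 pattern is a target tracking policy; this sketch assumes an existing Auto Scaling group named web-asg:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the group adds or removes instances to hold average
# CPU near 50%, and replaces unhealthy instances automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name
    PolicyName="hold-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```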
Edge services accelerate and protect applications closer to users. CloudFront is AWS’s content delivery network, caching static and dynamic content at edge locations, improving performance and reducing origin load. Global Accelerator provides static anycast IPs, routing user traffic through AWS’s backbone to the nearest healthy endpoint, ensuring consistent performance across TCP and UDP applications. For example, CloudFront would deliver media files globally with low latency, while Global Accelerator would enhance a multiplayer gaming experience by routing players to the closest server. The exam highlights the distinction: CloudFront is for caching HTTP/S content, Global Accelerator for accelerating general application traffic.
Private access patterns use VPC endpoints and PrivateLink to keep traffic within AWS networks. Gateway endpoints support S3 and DynamoDB, routing traffic without NAT. Interface endpoints provide ENIs for connecting privately to many AWS services and SaaS providers. PrivateLink allows providers to publish services privately, consumed over VPC endpoints. For example, an enterprise might use an S3 gateway endpoint to restrict access to buckets without internet, or a SaaS vendor might expose APIs through PrivateLink for customers to consume privately. These patterns reduce attack surface and are common exam cues when questions emphasize “no internet access” or “private connectivity.”
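One way to enforce the “no internet access” requirement on the data side is a bucket policy that denies requests arriving any way other than the expected endpoint, pairing with the gateway endpoint created earlier. A sketch with placeholder bucket and endpoint IDs:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any S3 request to this bucket that does not come through the
# named gateway endpoint, keeping traffic off the public internet.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::internal-data",    # placeholder bucket
            "arn:aws:s3:::internal-data/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},  # placeholder endpoint
    }],
}
s3.put_bucket_policy(Bucket="internal-data", Policy=json.dumps(policy))
```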
Application integration is achieved through managed services. API Gateway provides a secure front door for APIs, enforcing auth and throttling. EventBridge routes events based on patterns, enabling decoupled architectures. SQS buffers workloads, ensuring resilience under bursty loads, while SNS broadcasts to multiple subscribers. For example, an order-processing app might use API Gateway to accept requests, EventBridge to route events, SQS to queue fulfillment tasks, and SNS to notify multiple systems. The exam often asks you to map these tools to integration scenarios, so recognizing keywords like “broadcast,” “buffer,” or “routing” is essential.
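The buffer-versus-broadcast distinction in code, with a placeholder queue URL and topic ARN:

```python
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

order = {"order_id": "o-2001", "total": 49.99}

# Buffer: enqueue the fulfillment task; a worker polls and processes it
# at its own pace, absorbing bursts.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/fulfillment",  # placeholder
    MessageBody=json.dumps(order),
)

# Broadcast: one publish fans out to every subscribed system.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",  # placeholder
    Message=json.dumps(order),
)
```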
Observability is delivered by CloudWatch, which collects metrics, logs, and alarms for virtually every AWS service. Metrics provide visibility into performance, logs capture detailed activity, and alarms notify administrators of anomalies. For example, a CloudWatch alarm might notify when CPU utilization exceeds 80% on an EC2 instance, triggering an Auto Scaling policy. Observability transforms systems from black boxes into monitored, measurable environments. For the exam, “metrics and alarms” nearly always map to CloudWatch, especially when monitoring is in focus.
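The CPU alarm from the example, sketched in boto3 with placeholder instance and topic identifiers; the action here notifies an SNS topic, though it could equally invoke a scaling policy:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```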
Audit and compliance tracking are provided by CloudTrail and AWS Config. CloudTrail records all API calls, enabling visibility into who did what and when, supporting both forensics and compliance audits. AWS Config evaluates resource configurations against rules, ensuring compliance with policies like “all S3 buckets must be encrypted.” For example, if a developer accidentally creates a publicly accessible bucket, Config flags it and can even remediate automatically. These tools ensure governance is not left to chance, embedding accountability into AWS operations. Exam cues like “record API activity” point to CloudTrail, while “evaluate compliance against rules” points to Config.
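CloudTrail’s “who did what and when” can also be queried directly; a small illustrative lookup:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# "Who created buckets recently?" -- fetch the last matching API calls.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateBucket"}],
    MaxResults=10,
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username", "unknown"), e["EventName"])
```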
Encryption and key management are critical to nearly every AWS service. AWS Key Management Service (KMS) provides centralized control of encryption keys, integrating with services like S3, EBS, RDS, and DynamoDB. With KMS, you can choose between AWS-managed keys for simplicity or customer-managed keys for tighter control, including rotation policies and audit logging. For example, enabling server-side encryption on an S3 bucket with a KMS CMK ensures that all data is protected at rest with keys managed according to compliance requirements. On the exam, when you see “encryption at rest” or “integrating keys across services,” the likely answer involves KMS.
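Default bucket encryption with a customer-managed key looks roughly like this; the bucket name and key ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Every object written to the bucket is now encrypted at rest with
# the named customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket="compliance-data",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",  # placeholder
            },
        }],
    },
)
```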
Identity and Access Management (IAM) remains the foundation for authorization. IAM users, groups, and roles define who can access resources, and least-privilege access ensures they receive only the permissions required. Roles are particularly important for workloads, as they grant temporary credentials to applications or services. For example, an EC2 instance running with an IAM role can access an S3 bucket without storing static keys. On the exam, watch for scenarios where temporary or cross-service access is required—these nearly always involve IAM roles. Understanding least privilege, policies, and the difference between identity-based and resource-based permissions is central to security.
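The EC2-role example, sketched with boto3; the role, policy, and bucket names are invented for illustration:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role, so the
# instance gets temporary credentials instead of static keys.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-s3-reader", AssumeRolePolicyDocument=json.dumps(trust))

# Least privilege: read access to one bucket's objects, nothing more.
iam.put_role_policy(
    RoleName="app-s3-reader",
    PolicyName="read-app-data",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::app-data/*",  # placeholder bucket
        }],
    }),
)
```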
Serverless patterns simplify operations by removing the need to manage infrastructure. Event-driven architectures combine Lambda with services like S3, DynamoDB, or EventBridge to react automatically to events. Step Functions orchestrate long-running or multi-step workflows, handling retries and branching logic. For example, uploading a file to S3 could trigger a Lambda to process it, with Step Functions coordinating a multi-step pipeline for enrichment and storage. Serverless architectures reduce operational overhead and scale seamlessly, making them a frequent answer on the exam when the requirement is “minimal management” or “event-driven automation.”
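A minimal Step Functions sketch of such a pipeline, written in Amazon States Language; the Lambda ARNs and execution role are placeholders:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Two-step pipeline: enrich the file, then store the result.
# Retries on task failure are declared, not hand-coded.
definition = {
    "StartAt": "Enrich",
    "States": {
        "Enrich": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:enrich",  # placeholder
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "Next": "Store",
        },
        "Store": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:store",  # placeholder
            "End": True,
        },
    },
}
sfn.create_state_machine(
    name="file-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec",  # placeholder
)
```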
Containers offer a middle ground between full servers and serverless. Amazon Elastic Container Registry (ECR) stores container images securely, while ECS and EKS provide orchestration for running those containers. Fargate extends this by offering serverless compute for containers, where you only specify resources and AWS handles the infrastructure. IAM roles for tasks in ECS allow fine-grained access for containers, ensuring each runs with the minimum required permissions. For example, a microservice cluster might use EKS with IAM roles for pods, granting each service unique privileges. Exam cues like “portability,” “Kubernetes,” or “scaling containers” point toward ECS/EKS and Fargate.
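A Fargate task definition in boto3, with placeholder roles and a placeholder ECR image, shows how little infrastructure you actually declare:

```python
import boto3

ecs = boto3.client("ecs")

# Declare CPU, memory, image, and roles; AWS provisions the compute.
# The task role scopes what the container itself may call.
ecs.register_task_definition(
    family="web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    taskRoleArn="arn:aws:iam::123456789012:role/web-task-role",              # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder ECR image
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)
```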
Resilience is achieved through designs that leverage Multi-AZ deployments, backups, and cross-Region strategies. RDS Multi-AZ ensures databases fail over automatically to standby instances in another Availability Zone. S3 provides durability across AZs, while DynamoDB global tables replicate data across Regions. Cross-Region replication supports disaster recovery and data sovereignty requirements. For example, a compliance-sensitive workload might replicate data across U.S. and EU Regions to meet local laws. On the exam, words like “high availability,” “business continuity,” and “disaster recovery” point toward Multi-AZ and cross-Region architectures.
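Multi-AZ is a single flag at provisioning time. A hedged sketch in which the identifiers, sizing, and credentials are all placeholders (real secrets belong in a secrets manager, not source code):

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby in another AZ;
# failover is automatic and the endpoint name does not change.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="appadmin",
    MasterUserPassword="use-secrets-manager-instead",  # placeholder only
    MultiAZ=True,
)
```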
Cost-aware architecture requires aligning services with usage patterns. Right-sizing ensures EC2 instances match workload needs, while lifecycle policies move S3 data into cheaper tiers like Glacier when no longer accessed frequently. Caching with ElastiCache or CloudFront reduces expensive database calls and data transfer. For example, an analytics workload might tier logs into Glacier for archival while caching frequently accessed dashboards in ElastiCache. The exam often asks which choice reduces cost without sacrificing requirements; the answer is typically lifecycle management, right-sizing, or caching.
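The log-tiering example as a lifecycle rule, with a placeholder bucket; the retention periods are arbitrary choices for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Tier logs to Glacier after 90 days, delete them after 3 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-logs",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 1095},
        }],
    },
)
```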
Governance at scale is supported by AWS Organizations, Organizational Units (OUs), and Service Control Policies (SCPs). Control Tower builds on this, automating the setup of multi-account environments with guardrails for security and compliance. For example, an enterprise might use SCPs to deny the creation of unencrypted S3 buckets across all accounts. Governance ensures consistent policies without relying on manual enforcement. Exam keywords like “multi-account,” “centralized policy,” or “govern compliance” map to Organizations, SCPs, and Control Tower.
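SCPs are plain JSON policies attached through Organizations. This sketch uses a close variant of the example above, denying unencrypted S3 uploads rather than bucket creation, since that is the commonly cited condition-key pattern:

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny any s3:PutObject request that omits server-side encryption.
# Attached to an OU, this is inherited by every account beneath it.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedS3Uploads",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
organizations.create_policy(
    Name="deny-unencrypted-s3",
    Description="Block unencrypted S3 uploads org-wide",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```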
Infrastructure as code ensures repeatability and reduces human error. CloudFormation allows you to define stacks declaratively, while the AWS Cloud Development Kit (CDK) offers programmatic, code-driven templates. These tools make deployments consistent across environments, reducing drift. For example, deploying the same VPC, IAM roles, and S3 buckets in dev, test, and prod becomes automated with a CloudFormation template. On the exam, “repeatable deployments” or “consistent infrastructure” point to CloudFormation or CDK.
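A tiny CDK (Python) sketch of the idea: one stack class, instantiated once per environment, so dev and prod are guaranteed to match; all names are illustrative:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataStack(Stack):
    """One definition, deployed identically to every environment."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "LogBucket", versioned=True)

app = App()
DataStack(app, "dev-data")   # same template, different environment
DataStack(app, "prod-data")
app.synth()  # emits the CloudFormation templates for deployment
```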
Data movement services support hybrid and migration scenarios. DataSync automates transfers between on-premises and AWS, providing monitoring and error handling. The AWS Transfer Family supports protocols like SFTP, FTPS, and FTP for secure file exchanges. Snow Family devices, like the petabyte-scale Snowball and the exabyte-scale Snowmobile, enable bulk migrations into AWS when network transfers are impractical. For example, a media company might use Snowball to ship 500 TB of video archives to AWS. When the exam mentions “large data migration” or “offline transfer,” Snow devices are the correct answer.
Threat detection is delivered through Amazon GuardDuty, which continuously analyzes logs, DNS queries, and network flows for suspicious behavior. GuardDuty alerts on issues like compromised credentials or unusual access patterns. For example, detecting API calls from an unexpected geographic region may indicate an account breach. Vulnerability scanning and posture management are provided by Amazon Inspector, which evaluates workloads against known vulnerabilities, and AWS Security Hub, which aggregates security findings across accounts. On the exam, cues like “threat detection” map to GuardDuty, while “compliance posture” points to Inspector and Security Hub.
Documentation and evidence for compliance are supported by AWS Artifact. Artifact provides reports and agreements, such as SOC, ISO, or HIPAA compliance documents, which customers can download for audit purposes. For example, a healthcare organization might provide auditors with Artifact evidence that AWS meets HIPAA controls. The exam often uses keywords like “compliance documents” or “download reports” to cue AWS Artifact as the answer.
From the exam lens, the overarching principle is to map requirements to the simplest viable service. If a workload only requires storage for objects, S3 is the simplest fit. If you need event-driven compute, Lambda is preferable over provisioning EC2. If data must remain private, VPC endpoints provide the most direct answer. Complexity is rarely rewarded—choose the service that meets the requirement with minimal overhead. This mindset not only supports exam success but also reflects real-world cloud best practices.
In conclusion, Domain 3 reinforces AWS’s building blocks: compute, storage, databases, networking, and integration, all framed by security, observability, and governance. The lesson is to understand what each tool is for, recognize when resilience or cost optimization matters, and apply the least complex option that still meets requirements. By mastering these fundamentals, you’re well-prepared to both pass the exam and design cloud systems that are robust, efficient, and aligned with business needs.