
Amazon Web Services (AWS) stands out as the cloud platform of choice for many companies, offering an array of services that power millions of businesses globally. As enterprises migrate to the cloud, one of their main concerns is security. This article dives into how AWS’s architecture is typically secure by default, and how that foundational principle is reinforced by the features and options available when making architectural decisions.


Understanding AWS Architecture

AWS provides a robust infrastructure with core services like Elastic Compute Cloud (EC2), Simple Storage Service (S3), Relational Database Service (RDS), and Virtual Private Cloud (VPC), each serving a unique role in a cloud ecosystem. With data centres organised into Regions and Availability Zones worldwide, including two Regions in Australia (Sydney and Melbourne), AWS ensures high availability and data redundancy. Central to AWS’s approach to security is its shared responsibility model, which delineates the security obligations of AWS and of its customers: AWS secures the underlying cloud infrastructure, while customers secure what they build and run in the cloud.
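
As a quick illustration of that Region structure, the short Python sketch below lists the Regions visible to an account with boto3. It assumes the AWS SDK for Python is installed and credentials are configured; using the Sydney Region for the client is just an illustrative choice.

```python
# List the AWS Regions visible to this account, including opt-in status.
# Requires boto3 and configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # Sydney

response = ec2.describe_regions(AllRegions=True)
for region in sorted(response["Regions"], key=lambda r: r["RegionName"]):
    # OptInStatus shows whether a Region is enabled for this account.
    print(region["RegionName"], "-", region["OptInStatus"])
```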


Security by Default in AWS

AWS embeds security at the heart of its services. This “secure by default” philosophy means that the default configurations of AWS services offer robust security measures. For instance, data in new S3 buckets is encrypted by default, IAM roles provide granular access controls, and services generally sit within private networks (VPCs), so customers must intentionally configure ingress and egress before a service is reachable. AWS integrates security deeply into its services, from network access controls to data encryption, ensuring that security is not an afterthought but a foundational element.
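
As a small sketch of checking these defaults in practice, the snippet below uses boto3 to inspect a bucket’s default encryption and public access settings. The bucket name is a placeholder, and it assumes credentials with the relevant read permissions.

```python
# Check an S3 bucket's default encryption and public access block.
# "my-example-bucket" is a placeholder; supply your own bucket name.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-example-bucket"

try:
    enc = s3.get_bucket_encryption(Bucket=bucket)
    rules = enc["ServerSideEncryptionConfiguration"]["Rules"]
    print("Default encryption:", rules)
except ClientError as err:
    # New buckets have been encrypted (SSE-S3) by default since early
    # 2023, so this branch mostly applies to very old buckets.
    print("No default encryption configured:", err)

try:
    pab = s3.get_public_access_block(Bucket=bucket)
    print("Public access block:", pab["PublicAccessBlockConfiguration"])
except ClientError as err:
    print("No public access block configured:", err)
```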


Architectural Decisions Influenced by Security

Security considerations directly influence the selection and configuration of AWS services. Architectural patterns, whether serverless or microservices, are chosen with security in mind. AWS encourages architectures that adhere to the principle of least privilege and separation of concerns, ensuring that each component operates with the minimum access necessary for its function. These decisions are crucial for constructing secure and resilient systems. This starts with AWS Organizations and multi-account designs, which reduce the potential blast radius in the event of a breach, and extends to network design, where segmentation through VPCs and subnets is effectively the default on AWS.
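
To make the least-privilege idea concrete, here is a minimal sketch of an IAM policy granting read-only access to a single S3 prefix and nothing else. The bucket, prefix and policy names are hypothetical.

```python
# Create a minimal least-privilege IAM policy: read-only access to one
# S3 prefix. Bucket, prefix and policy names are illustrative only.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/reports/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```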


Enhancing Security with AWS Tools and Best Practices

AWS offers an array of tools like AWS Shield (DDoS protection), AWS Web Application Firewall (WAF), Amazon GuardDuty for intelligent threat detection, and Amazon Inspector for automated vulnerability scanning to bolster security. Adhering to best practices such as conducting continuous audits of your cloud infrastructure, encrypting data at rest and in transit, and employing advanced security services can significantly enhance your security posture. The AWS Well-Architected Framework (another WAF) is a great framework to follow to implement good security practices across your cloud infrastructure.
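
As one example of putting these tools to work, the sketch below pulls recent high-severity GuardDuty findings via boto3. It assumes GuardDuty is already enabled in the Region, which is what creates the detector being queried.

```python
# List high-severity GuardDuty findings. Assumes GuardDuty is enabled
# in the Region (enabling it creates the detector queried here).
import boto3

guardduty = boto3.client("guardduty", region_name="ap-southeast-2")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        # Severity 7 and above corresponds to "High" findings.
        FindingCriteria={
            "Criterion": {"severity": {"GreaterThanOrEqual": 7}}
        },
    )["FindingIds"]
    if finding_ids:
        details = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids[:50]
        )
        for finding in details["Findings"]:
            print(finding["Severity"], finding["Type"], finding["Title"])
```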


Navigating Challenges and Common Misconceptions

Despite AWS’s robust security features, users may encounter pitfalls. Understanding common security mistakes and how to avoid them is crucial. It is also important to dispel myths about cloud security, such as the misconception that cloud environments are inherently less secure than on-premises solutions. At the same time, remember that the responsibility for securing a cloud environment is shared between AWS and the customer, and that a cloud environment, no matter the provider, certainly can be configured less securely than an on-premises one. AWS provides guidance for maintaining security and compliance in hybrid environments, addressing these challenges head-on.


Conclusion

AWS architecture is designed with security as a foundational element, influencing all architectural decisions. Adopting a security-first approach in cloud architecture planning is essential for leveraging the full potential of AWS’s offerings. As AWS continues to evolve, staying informed about the latest security features and practices will be key to maintaining robust cloud infrastructures.


Selecting a public cloud provider can sometimes be more challenging than initially anticipated – similar to moving house (I’m sure many can relate), where you believe you have every consideration covered and answered for, only to realise on moving day that you’ve underestimated the mammoth task ahead.


Unlike moving house, picking the right cloud provider can often involve wading through extensive marketing material to understand the pros and cons of each potential supplier. While this article doesn't strive to be a definitive guide for choosing the perfect cloud provider, it aims to present valuable questions that are worth considering when determining where to host your next service (or removalist). Below are some key points to ponder:


Services/Technologies Offered – Are the services provided by the cloud provider aligned with your requirements for Big Data, Microservices, Kubernetes, AI, and other essential technologies?


Technology Compatibility – Does the cloud support the versions and flavours of the programming languages and microservices in your current stack? Will your existing technology integrate seamlessly with a particular cloud, or are there features unavailable in a specific cloud environment?


Innovations – Do you seek to be at the cutting edge, or do you prefer adopting technologies after they have proven themselves, thereby minimising potential disruptions and addressing bugs?


Uptime, Stability, Redundancy – Has the cloud provider experienced recent major outages or issues without proper explanations or implemented mitigations?


Locations for Points of Presence and/or Data Centres – Are the cloud provider's servers located in regions compliant with your data storage requirements? Do they have points of presence in proximity to your users or consumers?


Cultural/Licensing – Considering your comfort level with a Microsoft, IBM, or Oracle software stack, do you need to factor in licence repurchasing or potential lack of support for a technology on a rival cloud?


Cost – Are the services you intend to use or purchase the most cost-effective, or do they deliver value for money through other means?


Payment Types and Discounts – Do the payment types and discount options align with your company's operational preferences?


Compliance – Does the cloud provider offer specific regulatory compliance certificates or adhere to data sovereignty rules relevant to your needs?


Operational Support, Migration Support, Vendor Lock-ins – Do you require assistance with application and data migration? Are there measures in place to mitigate vendor lock-ins?


So, when you’re next choosing a location to host your cloud workloads, feel free to consider the points above. Hopefully this article will help make the move somewhat smoother.

In this digital age, we are surrounded by data. It’s everywhere, collected by all sorts of systems and stored in all sorts of places. However, it is easy to overlook a fundamental principle when planning work – ensure you have good information, not just a whole heap of data.


Quinticon helps customers with all forms of initiatives, and an important focus is finding the right information to use when planning and executing projects. Using the example of a server migration initiative, here are some considerations below that we find key to transforming the initial seed of data into genuinely useful information.


Keep a record of your data sources: typically, the data required to plan and execute a server migration will come from a variety of sources such as a CMDB, extracts from tools, operational reports, scanners and design documentation. It is vital to keep track of where the data came from and any transformations required to make it usable for your purposes; this helps determine the best inputs, show overlaps and clarify gaps.
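
One lightweight way to do this is a simple source register kept alongside the plan; the Python sketch below shows the idea (all source names and transformation notes are made-up examples):

```python
# A minimal provenance register for migration data sources; every
# field value here is a made-up example.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str                 # e.g. "CMDB export"
    owner: str                # the team or person who provided it
    extracted_on: str         # when this extract was taken
    transformations: list[str] = field(default_factory=list)

sources = [
    DataSource("CMDB export", "Service Management", "2023-11-01",
               ["dropped decommissioned servers", "lower-cased hostnames"]),
    DataSource("Network scan", "Infrastructure", "2023-11-03",
               ["merged duplicate IP entries"]),
]

for src in sources:
    print(f"{src.name} ({src.owner}, {src.extracted_on}): "
          + "; ".join(src.transformations))
```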


Understand the format of your data: having lots of different sources means that sometimes the data may not line up as easily as you would like. Some servers may have multiple roles or multiple names that need to be accommodated; different extracts won’t always list data fields in the same order; spelling changes/typos can make it hard to match everything. Not understanding what you see can lead to misalignment of key grouping metrics or duplication of entries.
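
A small sketch of the kind of cross-checking this implies: load two extracts whose column headings differ, key both on a cleaned-up hostname, and report what appears in only one. The file names and column headings are hypothetical.

```python
# Align two extracts with different column headings, keyed on hostname.
# File names and column headings are hypothetical.
import csv

def load_extract(path, hostname_column):
    """Read a CSV extract into a dict keyed by a cleaned-up hostname."""
    with open(path, newline="") as fh:
        return {
            row[hostname_column].strip().lower(): row
            for row in csv.DictReader(fh)
        }

cmdb = load_extract("cmdb_export.csv", "Server Name")
scan = load_extract("network_scan.csv", "hostname")

# Servers appearing in only one source point to gaps or stale records.
print("In CMDB only:", sorted(cmdb.keys() - scan.keys()))
print("In scan only:", sorted(scan.keys() - cmdb.keys()))
```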


Watch out for changing naming conventions: systems and applications may have many names across the different data sources. The vendor’s name may have changed over time, the application could have been sold to a different vendor, or the application was first known by something other than its current name. Names can also vary across multiple environments like Development, Test, System Integration Testing, User Acceptance Testing, etc., including potentially multiple instances of each. Paying attention here can help avoid replicated activity or even misalignment in migrated systems such as backups.


Ensure repeatability of extraction and collection: it is common to have to draw from a data source multiple times; even after testing phases, there will need to be refreshes and updates applied to your planning. Clearly documenting the process for extracting the data from each source, and any post-processing applied, is vital. Save the queries where possible, especially where other teams provide you with the data, so the columns come out in the same order every time. Have a system to keep track of all the different information you collect and how it fits together.
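
The pattern below is one way to bake that repeatability in: the query lives in a versioned file, the extract is date-stamped, and the column order is fixed on every run. The database, file paths and query file are all hypothetical; the pattern is the point.

```python
# Re-run a saved, version-controlled query the same way on every
# refresh. The database, paths and query file are hypothetical.
import csv
import sqlite3
from datetime import date

QUERY_FILE = "queries/cmdb_servers.sql"   # saved and version-controlled

with open(QUERY_FILE) as fh:
    query = fh.read()

conn = sqlite3.connect("cmdb_snapshot.db")
cursor = conn.execute(query)
columns = [desc[0] for desc in cursor.description]

# Date-stamp each refresh so earlier planning inputs stay reproducible.
out_path = f"extracts/servers_{date.today().isoformat()}.csv"
with open(out_path, "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(columns)              # fixed column order, every run
    writer.writerows(cursor)
```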


Normalise the data: an important step in creating an efficient and reliable migration plan is to normalise the data. Give some thought to the best approach for normalising the data and making it meaningful, whether through spreadsheet formulas or code written in Perl, PowerShell or Python – go the extra step after sourcing data to get it into a usable form.
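
As a final sketch, a tiny normalisation pass in Python: consistent case, domain suffixes stripped, and an alias map for systems that have been renamed over time. The alias entries are invented examples.

```python
# A minimal normalisation pass: consistent case, stripped domain
# suffixes, and an alias map for renamed systems. Aliases are invented.
ALIASES = {
    "finace-app": "finance-app",   # long-standing typo in one source
    "oldvendor-crm": "acme-crm",   # product renamed after an acquisition
}

def normalise_hostname(raw: str) -> str:
    name = raw.strip().lower()
    name = name.split(".")[0]      # drop the domain suffix if present
    return ALIASES.get(name, name)

assert normalise_hostname(" FINACE-APP.example.com ") == "finance-app"
assert normalise_hostname("db01.internal") == "db01"
```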


Validated data made into reliable information facilitates successful initiatives

Hopefully this helps you think about some areas that require thought before you just jump into your next complex project; thorough discovery, assessment and planning are key elements of the Quinticon approach.
