
Explain the Five Pillars of the AWS Well-Architected Framework


Do You Know About AWS Pillars?

Creating a software system is a lot like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. When architecting technology solutions, neglecting the five pillars of operational excellence, security, reliability, performance efficiency, and cost optimization can make it challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture will help you produce stable and efficient systems, freeing you to focus on other aspects of design, such as functional requirements.


Below are the Five Pillars:


  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

The Five Pillars of the AWS Well-Architected Framework


Operational Excellence:



This pillar was introduced to encourage cloud architects to continually re-evaluate their existing environments and the processes around them; in other words, let's not get idle and complacent! It also encourages teams to build good process habits, such as annotating changes for audit trails, making only small, easily reversible changes, and always considering potential failure when building.

A great way to start with this pillar is to increase your use of automation and get into the routine of using CloudFormation for all operations and configuration. The benefit of using Infrastructure as Code (IaC) is its consistency, speed, and the lower cost of creating and deploying projects. To help with this pillar and instill more confidence in using IaC, we launched the CloudFormation Template Scanner in beta for Cloud Conformity customers. The tool tests your CloudFormation templates before deployment so only the cleanest and most secure templates make it to your environments.
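
To make the IaC idea concrete, here is a minimal sketch of a CloudFormation template built as a Python dict and serialized to JSON. The logical ID `LogsBucket` and the versioning setting are illustrative assumptions, not something from this article; a real template would carry your own resources and properties.

```python
import json

# A minimal CloudFormation template expressed as a Python dict and
# serialized to the JSON that CloudFormation accepts. The bucket's
# logical ID and versioning configuration are illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal IaC example: one S3 bucket with versioning.",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"}
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Because the template is plain data, it can be linted or scanned (as the Template Scanner does) before anything is deployed.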

The Operational Excellence pillar suggests six principles, defined below:

  • Perform operations with code

  • Align operations processes to business objectives

  • Make regular, small, incremental changes

  • Test for responses to unexpected events

  • Learn from operational events and failures

  • Keep operations procedures current



Security:

AWS wants to keep security high on the agenda and does this through the Shared Responsibility Model: AWS is responsible for security of the cloud, while the user is responsible for security in the cloud. It's important that security is looked at from all angles and on multiple levels: before construction with security-led design, during use with proactive risk assessments, and after incidents with well-rehearsed and practised plans.

AWS CloudTrail is also key for this pillar, as it records all AWS API calls made in your account; how thoroughly has this been enabled throughout your infrastructure?
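
As a sketch of the traceability CloudTrail gives you, the snippet below parses a trimmed-down CloudTrail record into a one-line audit summary. The record is a hand-made example with only a few of the real fields, and the user name `alice` is hypothetical.

```python
import json

# A trimmed CloudTrail record (illustrative fields only) and a helper
# that pulls out who called which API, and when.
record = json.loads("""
{
  "eventTime": "2021-03-01T12:00:00Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "DeleteBucket",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

def summarize(event):
    """Return a one-line audit summary of a CloudTrail event."""
    who = event["userIdentity"].get("userName", "unknown")
    return (f'{event["eventTime"]}: {who} called '
            f'{event["eventName"]} on {event["eventSource"]}')

print(summarize(record))
```

Feeding real CloudTrail logs through a summary like this is one simple way to answer "who deleted that bucket?" during an incident review.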

As more security breaches hit the news and data protection has become a key focus, meeting this pillar’s standard should always be in mind. I’m quite sure everyone could do without a hefty GDPR penalty!


AWS suggests the following design principles for Security:
  • Apply security at all levels

  • Enable traceability

  • Implement a principle of least privilege

  • Secure the system at the application, data, and OS level

  • Automate security best practices
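
The least-privilege principle above can be illustrated with an IAM policy document: grant only the specific actions on the specific resources a workload needs. The bucket name `example-reports-bucket` is a made-up placeholder.

```python
import json

# An illustrative least-privilege IAM policy: read-only access to one
# (hypothetical) bucket, instead of a broad s3:* grant on all resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```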




Reliability:


The reliability pillar seems like a bit of a no-brainer, but you'd be surprised at how often it's not thought about in its entirety. Not only does it involve recovery from failure or service disruptions, it also covers capacity management and scalability. Once again, AWS wants to encourage architects to start from a solid foundation from which changes can be made easily and dynamically.

The use of CloudFormation scripts can help in recovery by creating a clean room for deeper and more secure investigation, as can scheduling time to practice and test these very processes.

When it comes to capacity and availability, this is the part of the pillar that is most easily overlooked. We can fall into the trap of not wanting to overspend on resources; however, by utilizing AWS CloudWatch alarms and setting limits, you can be sure that what you have is entirely sufficient.
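
To show the idea behind a CloudWatch alarm, here is a toy evaluation function: the alarm only fires when several consecutive datapoints breach the threshold, so a single spike doesn't page anyone. The metric values and the three-period rule are assumptions for illustration, not the CloudWatch default for every alarm.

```python
# A toy version of how a CloudWatch alarm evaluates a metric: the alarm
# fires only when `periods` consecutive datapoints breach the threshold.
def alarm_state(datapoints, threshold, periods=3):
    """Return 'ALARM' if the last `periods` datapoints all exceed
    `threshold`, else 'OK'."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu_utilization = [42.0, 55.0, 81.0, 86.0, 90.0]  # percent, illustrative
print(alarm_state(cpu_utilization, threshold=80.0))  # three breaches in a row
```

Tuning `periods` is the trade-off between catching problems quickly and ignoring harmless blips.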



Reliability suggests the following design principles:
  • Test recovery procedures

  • Automatically recover from failure

  • Use horizontal scalability to increase system availability

  • Automatically add/remove resources as needed to avoid capacity saturation

  • Manage change in automation
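
The "automatically recover from failure" principle above can be sketched as a retry loop with exponential backoff, the pattern AWS SDKs commonly apply to transient errors. The `flaky` function below is a stand-in for any call that fails intermittently.

```python
import time

# Sketch of automatic recovery: retry a flaky call with exponential
# backoff instead of failing on the first transient error.
def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(); on exception, wait base_delay * 2**attempt and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds -- stands in for a transient AWS error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```

In production you would also add jitter to the delay so that many clients retrying at once don't hammer the service in lock-step.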



Performance Efficiency:



This pillar is all about computing resources, their ability to meet requirements and to evolve as needs change. Allowing your architecture to be flexible and creative will open up more possibilities, and more than likely you’ll find yourself employing various approaches to suit different workloads.

It's important to collect data for frequent review to check that your infrastructure is working as efficiently as it can. Using any of the AWS monitoring services will help you know when performance falls below expectations and an alarm needs immediate action. Setting limits here is another great way to heighten performance.

Serverless architecture can be a great win for this pillar, as can the use of AWS Lambda and Amazon CloudFront to reduce latency. Experiment often to see what works best where; it's through this continuous review and testing that you'll discover where easy compromises can be made for the benefit of the entire infrastructure.
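
As a taste of the serverless model, here is a minimal Lambda-style handler, invoked locally for illustration. In a real deployment it would sit behind API Gateway or CloudFront; the event shape and the `name` field are simplifying assumptions, not a fixed AWS format.

```python
import json

# A minimal AWS Lambda handler. The event shape here is an assumption
# for illustration; real events depend on the triggering service.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke the handler locally, the way a unit test would.
response = handler({"name": "builder"}, context=None)
print(response["body"])
```

Because the handler is just a function, you pay only while it runs, and there is no server to patch or scale yourself.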



The recommended Performance Efficiency principles are:
  • Democratize advanced technologies

  • Deploy your system globally at minimal cost for lower latency

  • Use serverless architectures to avoid operational burden

  • Try various comparative testing and configurations to find out what performs better




Cost Optimization:




One of the greatest benefits of using the AWS Cloud is the lower cost versus on-prem or data centre setups. However, as we've often seen, this doesn't always play out in reality, simply because of oversights and short-term planning.

The best cost optimization model is the utilization and consumption approach. With this you'll be better equipped to understand what a realistic and economical spend should look like for your projects and workloads. Once again, taking the time to monitor and allocate costs will be your friend in the long term.

While there may be times of compromise or trade-offs, such as lengthier processing times for lower costs (or vice versa), understanding how services like Amazon S3 Glacier (archived data) and CloudFormation (automation) can ultimately deliver the greater economic impact will help you prioritize more easily. It's also hugely beneficial to be aware of the various instance types available, as AWS continues to introduce new versions with cost benefits depending on your workloads.
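
The Glacier trade-off above comes down to simple arithmetic. The per-GB prices in this sketch are illustrative placeholders, not current AWS pricing; always check the AWS pricing pages for real numbers.

```python
# Back-of-envelope storage cost comparison. The per-GB prices are
# illustrative placeholders, NOT current AWS pricing.
STANDARD_PER_GB = 0.023   # hypothetical S3 Standard $/GB-month
GLACIER_PER_GB = 0.004    # hypothetical Glacier $/GB-month

def monthly_cost(gb, per_gb):
    """Monthly storage cost in dollars, rounded to cents."""
    return round(gb * per_gb, 2)

archive_gb = 5000  # 5 TB of rarely accessed data
standard = monthly_cost(archive_gb, STANDARD_PER_GB)
glacier = monthly_cost(archive_gb, GLACIER_PER_GB)
print(f"S3 Standard: ${standard}/month, Glacier: ${glacier}/month, "
      f"saving ${round(standard - glacier, 2)}")
```

The catch, of course, is retrieval time and retrieval fees: Glacier is only the cheaper option for data you rarely need back.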


Cost Optimization can be achieved by the following principles:
  • Adopt a consumption model

  • Benefit from economies of scale

  • Stop spending money on data center operations

  • Analyze and attribute expenditure

  • Use managed services to reduce cost of ownership



