Ask SAILee!

Do you have a question about software architecture, cloud computing, application modernization, or IT complexity? Ask SAILee! SAILee is the AI voice of Lee Atchison, the noted cloud architect, author, and leader in architecting scalable applications. Ask any question, and you'll get answers based on the books, articles, and other content created by Lee Atchison.


In the world of software development, the role of a software architect often appears to stand at the top of the technical leadership ladder. With that responsibility comes an even greater need for continuous learning and adaptation. As someone who's spent years in the trenches of cloud architecture, I can tell you that the moment you stop learning is the moment you start becoming obsolete.

The Shifting Sands of Technology

Let's face it: the tech landscape is changing faster than ever before. What was cutting-edge yesterday is run-of-the-mill today and potentially outdated tomorrow. As a software architect, you're not just responsible for understanding these changes – you're expected to anticipate them, evaluate their impact, and guide your organization through the stormy seas of technological evolution.

Consider this: five years ago, who would have predicted the current explosion in AI and machine learning capabilities? Or the rapid adoption of serverless architectures? As an architect, it's not enough to simply keep pace; you need to stay ahead of the curve.

Learning: Not Just Nice-to-Have, But Mission-Critical

Here's a hard truth: self-study and advancement aren't just nice-to-haves for an architect – they're critical responsibilities. Your ability to learn and expand your knowledge directly impacts not just your own growth, but the growth of your entire project, team, and company.

Think about it this way: how can you make recommendations on technologies to meet new business requirements if you're not up-to-date on the latest available options? Your learning isn't just about personal development; it's about ensuring your company remains competitive and innovative.

Strategies for Continuous Learning

So, how do you stay on top of this never-ending wave of change? Here are some strategies I've found effective:

Hands-On Experimentation: One of the best ways to learn is by doing. Set aside time to write software using new tools and techniques. Try out new cloud services…
Migrating to the cloud can be daunting, especially when dealing with complex applications, which can have a life of their own. These applications can act in seemingly random ways when exposed to unexpected stimuli, such as moving from a stable data center environment to a more chaotic cloud environment. This inherent complexity makes migrating to the cloud risky, but there are ways to mitigate the risk.

Piecemeal Migration

Proper pre-migration preparation is critical to a successful cloud migration. You can often make simple—or more complex—changes to your application to prepare it for the migration. Common changes include reducing the dependency on specific networking topology, changing how you establish database connections, changing data and caching strategies, removing reliance on server-local data, refactoring configuration mechanisms and strategies, and changing how firewalls and other security components interact with the application. Often, a large monolithic application is split into smaller applications or services before it is considered safe to move to the cloud.

Smaller pieces are easier to move to the cloud, especially when each is moved independently. By performing a migration one component at a time, you limit the risk of each migration step and simplify the inherent complexity of each migrated module by limiting the scope of the migration. This piecemeal strategy, often called a service-by-service migration, is common for applications composed of multiple services or microservices. However, it can also be used for monolithic applications by performing pre-migration application architecture changes. Using this technique can assist in migrating large monoliths to a more service-based architecture during the migration process, yet the complexity of such a migration can still be extensive.

Post-Migration Complexity

The journey doesn't end once the migration to the cloud is complete. Post-migration optimization is crucial to ensure that the applic…
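One of the pre-migration changes mentioned above, refactoring configuration mechanisms, can be as simple as replacing server-local configuration with values injected from the environment at deploy time. Here is a minimal sketch of that idea; the variable names and defaults are purely hypothetical:

```python
import os

# Hypothetical illustration: read configuration from environment variables
# (cloud-friendly) rather than a file stored on the local server.
def load_config():
    return {
        # Database connection no longer assumes a fixed host on the local network.
        "db_url": os.environ.get("APP_DB_URL", "postgres://localhost:5432/app"),
        # Cache endpoint is injected at deploy time rather than hard-coded.
        "cache_url": os.environ.get("APP_CACHE_URL", "redis://localhost:6379"),
    }

config = load_config()
print(config["db_url"])
```

A service configured this way can move between a data center and a cloud environment without code changes, which is exactly the kind of decoupling that makes a piecemeal migration safer.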

Welcome to Software Architecture Insights!

Software Architecture Insights is your go-to resource for empowering software architects and aspiring professionals with the knowledge and tools required to navigate the complex landscape of modern software design. SAI provides invaluable insights into crucial aspects of software architecture, including cloud computing, application security, scalability, availability, and more.

Whether you're a seasoned architect looking to stay up-to-date with the latest industry trends or a prospective software architect eager to build a strong foundation in this dynamic field, our platform is here to guide you in making informed decisions that will shape the success of your software projects. Join us on a journey of discovery, learning, and mastery as we delve deep into the architectural principles that drive innovation and excellence in the world of software.


Another critical principle in maintaining a safe and secure application running in the cloud is understanding the Principle of Least Privilege. The Principle of Least Privilege is an industry-standard security principle widely known to reduce the impact of bad actor attacks. The idea behind the principle of least privilege is to:

- Grant an entity the minimum permission it absolutely needs to perform its operations.
- Grant no more permissions than that.

This principle applies to cloud infrastructures and on-premises infrastructures alike. It is a critical best practice for building a safe and secure cloud infrastructure.

Let’s take a look at an example. Figure 1 shows two services communicating via a message queue. Service A inserts messages into the queue, and those messages are read off the queue by Service B.

Figure 1. Two services with two permissions.

We see that Service A has a set of permissions assigned to it to access the queue, and Service B has a set of permissions assigned to it to access the queue. What should those permissions be for these two services?

For Service A, the permissions granted should be:

- Access to this single queue. No other queue should be accessible from this service.
- The ability to write messages into the queue, but not to read or remove messages from the queue.

For Service B, the permissions granted should be:

- Access to this single queue. No other queue should be accessible from this service.
- The ability to read and remove messages from the queue, but not to write messages into the queue.

No other permissions should be granted to either service. Why don’t we just grant both services full access to the queue? We don’t because if the service has an error or failure, or if it gets compromised by a bad actor, we don’t want it to be able to damage any more of our system than necessary. Suppose we, for instance, give S…
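As a sketch of how this permission split might look in practice, here is a hypothetical pair of AWS-IAM-style policies for an SQS queue, built as Python dicts for illustration. The queue ARN is made up, and treat the details as an assumption about one particular cloud provider rather than a general prescription:

```python
# Hypothetical least-privilege policies for the two services in Figure 1,
# expressed in AWS-IAM-style JSON (built as Python dicts for illustration).
QUEUE_ARN = "arn:aws:sqs:us-west-2:123456789012:example-queue"  # hypothetical ARN

# Service A may only write messages into this one queue.
service_a_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:SendMessage"],
        "Resource": QUEUE_ARN,
    }],
}

# Service B may only read and remove messages from this one queue.
service_b_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
        "Resource": QUEUE_ARN,
    }],
}
```

Note that neither policy grants access to any other queue, and neither grants the other service's operations: exactly the two rules of least privilege stated above.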
If you’ve ever spoken to your iPhone and said “Hey Siri, what’s the weather in Seattle, WA”, or spoken to an Amazon Echo device and said “Alexa, turn on the bedroom lights”, you’ve given an AI a set of instructions for it to execute. You’ve given an AI a prompt.

In the Alexa case, the phrase is converted from an audio snippet into text, and that text is sent to an AI system — Amazon Alexa in this case — for it to determine the desired actions. The output of that process is a set of actions that must be executed (in this case, turn on the bedroom lights), and those tasks are then carried out by your home automation integration. In the Siri iPhone case, the phrase is similarly converted to text and sent to the Apple Siri AI system for processing. Siri determines it needs to figure out the weather in Seattle and issues the appropriate commands to get that information. It then converts that weather information into an appropriate response and responds with “The weather in Seattle is rainy and 52 degrees”.

Yet sometimes you don’t get the answers you want. If you’ve ever said the following to your Echo device: “Hey Alexa, what’s the color of chocolate?” and gotten the response: “Nestle’s chocolate bars are wrapped in a blue wrapper with gold foil”, you know the frustrations involved in a misunderstood request. You thought you gave the AI a clear query, but the AI was thinking something totally unrelated to what you were looking for. You were looking for an answer about the different types of chocolate and the colors available for those different types, yet you received information about the brand colors of a particular chocolate bar wrapper.

These are all examples of AI prompts, but they are all extremely simple examples. In sophisticated AI systems, longer and more complex prompts can be hundreds, thousands, or even tens of thousands of words long.
As modern applications grow more complex and valuable, their security becomes increasingly critical. Yet many organizations still operate with a flat security model—one breach and attackers gain access to everything. This approach is like building a house with a reinforced front door but paper-thin walls.

How can you improve your application security to reduce your risk of attack? Use isolation zones. Isolation zones aren’t just a best practice—they’re the difference between a minor security incident and a catastrophic breach that makes headlines.

How Do You Use Security Zones?

When creating the production operational backend infrastructure for a modern application, it’s generally considered best practice for security purposes to split the application infrastructure into multiple security zones. This way, a security breach in one area can be limited to impact only the resources within that one zone. Done correctly, this can take a security breach that might otherwise have a massive impact on your application integrity and turn it into a much smaller, perhaps insignificant breach with minimal impact.

There are many ways to architect your security zones, but a typical model involves three standard zones: the public zone, the private zone, and the DMZ. The three zones have the following purposes:

Public zone. This is the zone that is connected directly to the internet. It’s exposed to traffic coming from the internet and, as such, is the least secure zone and the most vulnerable to compromises. The only services that exist in this zone are the services that absolutely must be connected directly to the internet to give access to your application. Examples include API gateways, traffic managers, firewalls, load balancers, and similar services.

Private zone. This is the zone where the vast majority of your backend application exists. All of your data is stored in databas…
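The zone model above can be thought of as a small set of allowed traffic flows. Here is a minimal sketch of that idea, assuming one common arrangement in which the DMZ sits between the public and private zones; real architectures vary, so treat the specific flow rules as hypothetical:

```python
# Hypothetical sketch of a three-zone model: traffic may only flow one hop
# toward the more-private zones, never directly from the internet to private.
ALLOWED_FLOWS = {
    ("internet", "public"),   # internet traffic terminates in the public zone
    ("public", "dmz"),        # public-zone services hand requests to the DMZ
    ("dmz", "private"),       # only the DMZ may reach backend services and data
}

def flow_allowed(src: str, dst: str) -> bool:
    """Return True only if traffic from src zone to dst zone is permitted."""
    return (src, dst) in ALLOWED_FLOWS

print(flow_allowed("internet", "private"))  # False: no direct path to backend
```

The payoff is exactly the containment the article describes: an attacker who compromises a public-zone service still has no direct route to the private zone, so the blast radius of the breach stays limited.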
Both you and your cloud provider have a critical role in keeping your application safe. It’s a principle of security known as the Principle of Shared Responsibility. It describes a model for assigning ownership of various security aspects between the cloud provider and you. It’s a principle that has been championed heavily by AWS, but it applies to all cloud providers in all situations.

The key to the principle is an agreed-upon set of responsibilities for keeping the application safe for each cloud service you use within an application. Each individual service and each individual service provider have a different set of responsibilities — and hence a different set of assumptions — about who is responsible for each action. To build a secure application, you must understand the assumptions made by the cloud provider, what they are responsible for, and what you are responsible for.

To illustrate this, let’s look at an example: the AWS EC2 service. The AWS EC2 service provides virtual servers for your application to utilize as it needs. Keeping the virtual servers secure involves a multi-layer security model. The different layers of security responsibility for this service are shown in Figure 1.

Figure 1. Principle of Shared Responsibility as applied to the AWS EC2 service.

When you operate an application running on one or more EC2 instances, you create an agreement between you and AWS to manage the application’s security on those services. For the EC2 service, AWS is responsible for the following security aspects of operating your application:

Facilities security. AWS is responsible for maintaining the security of the physical data center and the physical plant that operates that data center.

Hardware security. AWS is responsible for securing all of the hardware necessary for running the EC2 service.

Network infrastructure security. AWS is responsible for the networking between systems and the hardware…
The software development landscape has shifted dramatically. In 2024, a whopping 62% of professional developers used AI in their development process. This has done wonders for the productivity of the average software developer, and has led many people to assume that either we need fewer software developers or, more likely, we can get more and better software developed with existing staff. However, there are issues with this shift.

Last year, I read and loved the 2024 GitClear AI report on the downward pressure that AI has put on code quality—and the corresponding increase in code complexity. I wrote on this topic last year in my newsletter. This year, the 2025 GitClear AI Code Quality Research report analyzed 211 million lines of code across five years. This report shows that as AI usage has increased this last year, the issues with the developed code in large and complex systems have continued to increase as well.

In short, as AI is used more and more in the software development process, we’ve seen a corresponding increase in complexity and code size. The result? Developers are spending more time refactoring code and less time writing new code.

The Growing Epidemic of Code Duplication

The 2025 GitClear report discovered something unprecedented—for the first time in recorded history, the frequency of “Copy/Pasted” lines exceeded “Moved” lines in code repositories. This represents a fundamental change in how our industry builds software.

Why, exactly, is this true? When a developer “moves” code, they’re typically refactoring it—rearranging existing functionality into more reusable modules. This is the hallmark of good software architecture and something that all but the greenest developers do regularly. Unfortunately, it’s also something that AI software agents almost never do. You see, code that is “copy/pasted” represents duplication, which directly violates the DRY…
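To make the "moved" versus "copy/pasted" distinction concrete, here is a hypothetical before-and-after sketch. The duplicated validation check is what copy/paste produces; the shared helper is what a "move" (refactor) typically looks like. The function names and the toy validation rule are illustrative only:

```python
# Copy/pasted style: the same validation logic duplicated in two places.
def create_user(email: str) -> dict:
    if "@" not in email or email.startswith("@"):
        raise ValueError("invalid email")
    return {"email": email}

def update_user(user: dict, email: str) -> dict:
    if "@" not in email or email.startswith("@"):  # duplicated: violates DRY
        raise ValueError("invalid email")
    user["email"] = email
    return user

# "Moved" style: the shared logic refactored into one reusable helper.
def validate_email(email: str) -> str:
    if "@" not in email or email.startswith("@"):
        raise ValueError("invalid email")
    return email

def create_user_dry(email: str) -> dict:
    return {"email": validate_email(email)}

def update_user_dry(user: dict, email: str) -> dict:
    user["email"] = validate_email(email)
    return user
```

In the duplicated version, a future fix to the validation rule must be made twice, and missing one copy introduces an inconsistency; that maintenance cost is exactly why a rising copy/paste rate is a warning sign.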
Cloud-native applications make heavy use of services and microservice architectures. Distributed applications provide many benefits to modern application development processes and lend themselves particularly well to applications deployed in the public cloud. But microservices can also create additional and unwanted vulnerability points that bad actors can leverage to compromise your application.

A single compromised service, no matter how small, can lead to vulnerabilities that can be exploited in neighboring services, ultimately compromising them as well. A single small service can be the entry point to a massive attack that compromises your entire application. Even if your services are in a private network—behind a cloud firewall—they should not assume the network is safe. Services within the application can still be compromised. And, like the infamous Greek Trojan Horse, a compromised service in an otherwise secure network can cause untold damage to your application.

There are many things you can do to keep your service and microservice-based application safe and secure. But two critical and often overlooked security strategies are absolutely necessary.

Overlooked Strategy #1. Authenticate all communications between services

In microservice-based applications, inter-service communications are crucial. But authentication between services is often deferred or ignored. After all, if you are inside a private, secure area (such as a cloud VPN), why do you need to authenticate communications between services? All the services are part of the same application and support each other. Why would you need to perform authentication on all requests in such an environment?

Consider a simple multi-service application. If service A wants to talk to service B, it’s tempting to let service A send a message to service B and let service B process that message. Service B assumes the message came from service A. After all, who e…
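One lightweight way to authenticate inter-service messages is to sign each one with a shared secret, so the receiver can verify who sent it rather than trusting the network. The sketch below uses HMAC from the Python standard library; the secret value is hypothetical, and in production you would more likely use mutual TLS or signed tokens issued by an identity service:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret, distributed securely and rotated regularly.
SHARED_SECRET = b"example-secret-rotated-regularly"

def sign_message(payload: dict) -> dict:
    """Service A attaches an HMAC signature to each outgoing message."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "signature": signature}

def verify_message(message: dict) -> bool:
    """Service B verifies the signature before trusting the message."""
    expected = hmac.new(
        SHARED_SECRET, message["body"].encode(), hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, message["signature"])

msg = sign_message({"action": "charge", "amount": 100})
print(verify_message(msg))  # True

tampered = {**msg, "body": msg["body"].replace("100", "999")}
print(verify_message(tampered))  # False: altered in transit, so rejected
```

With this check in place, Service B no longer has to assume a message came from Service A; a forged or tampered message from a compromised neighbor fails verification instead of being processed.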
Software Architecture Insights is a regular newsletter providing insights into architecting your modern applications at scale. Modern digital businesses continually struggle to build new and innovative applications that also maintain high availability at cloud scale. Software Architecture Insights gives you, well, insights into how tech leaders and software architects function effectively. Learn how to build, operate, and maintain applications at scale, innovate new features and capabilities, and keep teams fully engaged. All while effectively managing IT complexity and technical debt.

Insightful Knowledge and Experience

All of the articles presented in Software Architecture Insights come from thought leader Lee Atchison. Lee has over 36 years of industry experience, including seven years working at Amazon and AWS building the framework that modern digital applications use today. Lee has been widely quoted in multiple technology publications, including InfoWorld, Diginomica, Cloud Native Now, DevOps Journal, CIO Review, Software Engineering Daily, and Techstrong Live. Lee has written several books, and he's been a featured speaker at events across the globe. Lee has helped companies from Amazon to New Relic, Disney to Major League Baseball, Uptycs to Ory. And he can help you, too.

How do I start?

It's quite simple. Just click the subscribe button below, and you'll start receiving content from Lee regularly. We respect your privacy, and you can unsubscribe at any time.

Subscribe
© 2025 Atchison Technology LLC, All Rights Reserved.
 
This page was created by The Creator Central.