Why Use Microservices for Legacy System Transformation?


Between 1974 and 1996, German-born professor Meir Lehman and his Hungarian colleague László Bélády formulated eight principles, now known as Lehman's laws, about the evolution of software. Their observations seem obvious to us today. For example, the law of declining quality states that software quality deteriorates unless it is actively maintained and adapted to changes in its environment. Nonetheless, their observations now serve as guidelines that help define legacy systems and understand when to modernize them.

Some modernization approaches are more popular than others. Automated migration, commercial off-the-shelf (COTS) software, rehosting, code refactoring, architecture reviews, and, of course, microservices are the topic of discussion today.

Microservices are one of the most popular approaches if you recognize the signs that it’s time to modernize your software and are looking for a way to do it. There are numerous reasons why they have become so popular. Let’s take a look at them.

1) Small autonomous teams allow for better communication

By definition, a microservice is an independent element and must be operated that way. Therefore, each one is usually developed and managed by a small team of up to eight people (often referred to as a two-pizza team). This structure helps developers understand the codebase, solve issues rapidly, and streamlines communication within the group.

2) Independent deployment doesn’t require synchronization of processes

If you deploy each service individually, it is far less likely that the entire application will go down when one segment fails. Even if some services are down, most clients will not notice, and the team can resolve the issue quickly. Besides, developers do not have to synchronize their changes with other teams during deployment, which enables continuous deployment and saves time and resources.
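To make the failure-isolation point concrete, here is a minimal sketch, not from the article, of how a client can degrade gracefully when one service is down. The service name and URL are hypothetical.

```python
# If the hypothetical "recommendations" service is down, the page still
# renders with a fallback instead of the whole application failing.
import urllib.request
import urllib.error

RECOMMENDATIONS_URL = "http://recommendations.internal/api/v1/top"  # hypothetical endpoint

def fetch_recommendations() -> list[str]:
    try:
        with urllib.request.urlopen(RECOMMENDATIONS_URL, timeout=0.5) as resp:
            return resp.read().decode().splitlines()
    except (urllib.error.URLError, TimeoutError):
        # The service is down or slow: degrade gracefully instead of crashing.
        return []

def render_home_page() -> str:
    body = "Welcome!\n"
    recs = fetch_recommendations()
    if recs:
        body += "Recommended: " + ", ".join(recs)
    return body

if __name__ == "__main__":
    print(render_home_page())  # still works while "recommendations" is offline
```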

3) Elements can be scaled separately

In a monolithic architecture, you have to compromise on your hardware choices and scale all your components together. With microservices, you can address only the actual bottlenecks, scale just the parts with performance issues, and pick the best hardware for each service’s requirements. Besides, autoscaling lets you scale back down during off-peak hours, improving customer experience while saving costs.
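A toy sketch of this idea, under assumed service names and load numbers: each service’s replica count follows its own load, similar in spirit to the proportional rule Kubernetes’ horizontal autoscaler uses, so only the bottleneck scales up.

```python
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_r: int = 1, max_r: int = 10) -> int:
    # Proportional scaling rule: grow when utilization exceeds the target.
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, wanted))

# Hypothetical per-service CPU load; only "search" is a bottleneck.
load = {"search": 0.95, "billing": 0.20, "profile": 0.35}
replicas = {svc: desired_replicas(3, cpu) for svc, cpu in load.items()}
print(replicas)  # {'search': 5, 'billing': 1, 'profile': 2}
```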

4) Each microservice is designed using the most suitable technologies

Developers can choose the best language or technology for each service. If a service is small, it’s easy to rewrite it using newer technology. With this continuous renewal, your system doesn’t become outdated quickly. “The good thing about microservices from a corporate perspective is that they can increase developer satisfaction and developer retention. Microservices are self-encapsulated, so developers have more freedom in choosing the frameworks, libraries, programming languages, tools, and more that they work with,” says Renat Zubairov, CEO of elastic.io, a microservices-based hybrid integration platform.

5) Phased implementation helps escape complete rewriting

The microservices architecture allows you to separate small pieces from your existing application and replace them with microservices. By subdividing the monolith this way, you don’t have to reimplement it completely; instead, you can identify one or more well-defined functional chunks and pull them out as microservices. However, while this approach works well, it poses many problems for developers when they first start implementing microservices. This is discussed in the next section.
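A minimal sketch of this phased, strangler-fig style extraction, with hypothetical service names and URLs: a routing façade sends requests for already-extracted functionality to the new microservice, while everything else still reaches the monolith.

```python
# Requests for extracted functionality go to the new service;
# the rest still hit the legacy monolith. All names are illustrative.
MONOLITH_URL = "http://legacy-monolith.internal"
EXTRACTED = {
    "/invoices": "http://invoice-service.internal",  # first chunk pulled out
}

def route(path: str) -> str:
    """Return the backend base URL that should serve this request path."""
    for prefix, service_url in EXTRACTED.items():
        if path.startswith(prefix):
            return service_url
    return MONOLITH_URL

assert route("/invoices/42") == "http://invoice-service.internal"
assert route("/orders/7") == MONOLITH_URL  # not migrated yet
```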

6) Adapting the Service Façade

After defining the service operations, you need to provide their implementation. Although it would have been possible to implement the microservices immediately, we chose to first adapt the existing system to provide the implementation, as shown in Figure 1c. This way, the two considerable risks of migrating clients to the service façade and migrating the platform were split into separate steps, at the expense of creating a disposable implementation.
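A hedged sketch of this adaptation step, with all names invented for illustration: the new service operation is a thin, disposable wrapper that adapts an existing legacy entry point to the improved service contract, so clients can migrate before the platform does.

```python
def legacy_lookup_customer(cust_no: str) -> tuple:
    # Stand-in for an existing legacy entry point returning a raw record.
    return (cust_no, "ACME Corp", "DE", 1)

class CustomerServiceFacade:
    """New, well-defined service operation backed by the old system for now."""

    def get_customer(self, customer_id: str) -> dict:
        number, name, country, active = legacy_lookup_customer(customer_id)
        # Adapt the legacy record to the improved service contract.
        return {
            "id": number,
            "name": name,
            "country": country,
            "active": bool(active),
        }

facade = CustomerServiceFacade()
print(facade.get_customer("C-1001"))
```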

The critical challenge in this step is finding the right candidates for adaptation. We found the results of the entry-point analysis from Step 1 very helpful for this task. However, because the service operations had been improved, some of them had to be implemented from scratch.

Ensuring sufficient testing is essential for successful adaptation. This can be difficult because legacy environments may provide little or no support for standard testing techniques such as mocking. New microservices are usually much easier to test because you can set up their environment in an ad hoc way using technologies such as Docker Compose. In our modernization project, the inability to mock the legacy database became a particular challenge. Therefore, all tests were designed so that changes to the test data are either rolled back or explicitly reverted.
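One way to realize that rollback discipline, sketched here with sqlite3 standing in for the legacy database (the schema and names are invented): every test runs inside a transaction that is rolled back afterwards, so the shared test data stays unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
conn.execute("CREATE TABLE customers (id TEXT, name TEXT)")
conn.execute("INSERT INTO customers VALUES ('C-1', 'ACME')")

def run_in_rollback(connection: sqlite3.Connection, test) -> None:
    connection.execute("BEGIN")
    try:
        test(connection)
    finally:
        connection.rollback()  # undo whatever the test changed

def test_rename(c):
    c.execute("UPDATE customers SET name = 'Globex' WHERE id = 'C-1'")
    assert c.execute("SELECT name FROM customers").fetchone()[0] == "Globex"

run_in_rollback(conn, test_rename)
# After the rollback, the original test data is intact:
assert conn.execute("SELECT name FROM customers").fetchone()[0] == "ACME"
```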

7) Migrating Clients to the Service Façade

Once the service operations are implemented, the client applications can begin migrating to the new façade by replacing their existing accesses with service calls (see Figure 1d). This step raises both organizational and technical challenges: most of the client applications have to be modified and tested, which typically consumes the majority of the project’s time and budget.

To support the development teams during the migration, we created a migration document. It contained instructions on how to replace each entry point identified in Step 1 with one or more service operations. For each of them, detailed instructions and code snippets were provided to make the migration as easy as possible. The developers found this document very useful.
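An illustrative sketch of what one such migration-document entry might look like: the old direct entry-point access next to its service-façade replacement. All names (`legacy_db_query`, `CustomerService`) are hypothetical stand-ins, not the project’s real API.

```python
def legacy_db_query(sql: str, *params) -> tuple:
    return ("ACME Corp",)  # stub for the old direct database access

class CustomerService:
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "ACME Corp"}  # stub for the façade

customer_service = CustomerService()

# Before: the client module used the legacy entry point directly.
def get_customer_name_old(cust_no: str) -> str:
    return legacy_db_query("SELECT name FROM customers WHERE nr = ?", cust_no)[0]

# After: the same lookup goes through the new service operation.
def get_customer_name_new(cust_no: str) -> str:
    return customer_service.get_customer(cust_no)["name"]

assert get_customer_name_old("C-1") == get_customer_name_new("C-1")
```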

In the actual migration, many client applications successfully adopted client-side adapters that emulate certain parts of the old interface on top of the new service façade. This significantly reduced the changes required in existing modules. Following the idea of the Tolerant Reader pattern, these adapters depended only on the fields and operations that each application actually needed, preventing interface changes from rippling through the client applications. However, the idea of creating a shared adapter for all client applications was abandoned, as it would have preserved the very complex interface structure that the modernization sought to improve.
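A minimal sketch of such a client-side adapter in the Tolerant Reader style, with an invented payload shape: it picks out only the fields this particular application needs and ignores everything else, so unrelated interface changes don’t ripple in.

```python
def adapt_customer(payload: dict) -> dict:
    # Only "id" and "name" matter to this client; tolerate missing or extra fields.
    return {
        "id": payload.get("id"),
        "name": payload.get("name", "<unknown>"),
    }

response = {"id": "C-1", "name": "ACME", "country": "DE", "newField": 42}
print(adapt_customer(response))  # extra fields are simply ignored
```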

During the migration, some service methods turned out to be too fine-grained, resulting in poor performance due to call overhead. These methods had to be redesigned during the migration.
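A hedged sketch of that coarsening, with hypothetical operations: instead of one remote call per item (fine-grained, one round trip each), the redesigned operation accepts a batch and needs a single round trip.

```python
def remote_call(op: str, ids: list[str]) -> list[float]:
    # Stub standing in for the network hop; the point is the number of hops.
    return [9.99 for _ in ids]

def get_price(item_id: str) -> float:
    # Fine-grained: one round trip per item.
    return remote_call("price", [item_id])[0]

def get_prices(item_ids: list[str]) -> list[float]:
    # Coarse-grained: one round trip for the whole batch.
    return remote_call("price", item_ids)

# 100 items: 100 round trips with get_price, a single one with get_prices.
prices = get_prices([f"I-{n}" for n in range(100)])
assert len(prices) == 100
```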

In our project, the client migration took almost two years. To track its overall progress, we regularly ran the static analysis toolset from Step 1 to report on modules that were still using the old entry points. We then compared this report with the client applications’ migration plans to ensure that the migration proceeded as planned.
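A rough sketch of such a progress report, not the project’s actual toolset: scan the client modules for references to old entry points and list which modules still need migration. The entry-point names and source path are assumptions.

```python
from pathlib import Path

OLD_ENTRY_POINTS = ["legacy_db_query", "legacy_lookup_customer"]  # assumed names

def modules_still_on_old_api(source_root: str) -> dict[str, list[str]]:
    """Map each module path to the old entry points it still references."""
    report: dict[str, list[str]] = {}
    for path in Path(source_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = [ep for ep in OLD_ENTRY_POINTS if ep in text]
        if hits:
            report[str(path)] = hits
    return report

if __name__ == "__main__":
    for module, entry_points in modules_still_on_old_api("src").items():
        print(f"{module}: still uses {', '.join(entry_points)}")
```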