
Microservices – Vertical Slicing

The whole idea of microservices revolves around building a large application from a set of small modules composed around business functions. Let me share one concrete example to make this clearer. Figure 1 shows the classic layered architecture I was introduced to back in 2001, when I was building a large product for insurance providers.
Figure 1 - Classic Architecture of the Insurance Product

Only three key modules from the product are shown here for simplicity; it was a very large product with several other modules. The UI was developed using JSPs (HTML, stylesheets, JavaScript) and packaged in a WAR file. The business logic was written using stateless session beans, and the data access layer was all entity beans. The EJBs and the WAR file were then packaged inside an EAR file and finally deployed on a Java application server.

This application was also modular, but horizontally. That EAR file contained everything: the UI, the business logic and the data access logic for every insurance business function (underwriting, claims, product, accounting, party, agents, reinsurance and so on). We had one big “earball” that would be built and thrown into the application server.

Now, in India people tend to buy a lot of life insurance towards the end of the financial year (in the month of March), as it allows them to save taxes. So, for the last few weeks of March, there would be a lot of stress on the underwriters. This, in turn, means the underwriting module (and a few related ones like product, accounting and party) was heavily used in those weeks, while the volume of claims stayed roughly normal. To handle the high load, the application had to be scaled horizontally: new nodes would be procured in the data center, the application server installed on each of them, and the same “earball” thrown at them. In summary, we would scale the entire application, not just the underwriting components.

Given that this product’s codebase was very large and complex, a significant amount of hardware was required across the server nodes to scale. For the sake of simplicity, let’s call this hardware size ‘X’. The scaling demand on the system would drop after 31st March, leaving the additional servers and nodes redundant. However, it is not so easy to get rid of this hardware. Some companies would try to re-use the boxes for some other purpose, but that is also a time-consuming affair. In most cases, the company bleeds financially as hardware sits in the data center without any use.

To ensure that only the underwriting component is scaled, it must be deployed separately from the rest of the application components. Now, what if an unforeseen situation causes increased activity in the claims department? Fine, let us break out the claims module as well. But the product management module is common to both underwriting and claims. What would you do? Package the product module with both the underwriting and claims modules? That would not be very prudent. Instead, split it out as well and let the other two modules talk to it via some communication mechanism. In other words, we now have three loosely coupled modules to deal with. If you manage to deploy them separately, you can scale them separately, and you will truly leverage cloud resources. The hardware now required to scale, say, the underwriting module is ‘Y’, which is much, much less than the ‘X’ required earlier. That also means a smaller bill from the cloud provider at the end of the month.
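To make the split concrete, here is a toy sketch in plain Java (all class and method names are hypothetical). Instead of packaging the product code into both underwriting and claims, each module depends only on a contract; the product module fulfils that contract from behind whatever communication mechanism is chosen:

```java
// Contract that both underwriting and claims depend on. The separately
// deployed product module implements it behind the chosen mechanism
// (HTTP, messaging, ...).
interface ProductCatalog {
    String describe(String productCode);
}

// Local stand-in implementation; in a real deployment this would be a
// remote client talking to the product module over the network.
class InMemoryProductCatalog implements ProductCatalog {
    public String describe(String productCode) {
        return "Product[" + productCode + "]";
    }
}

// The underwriting module knows only the ProductCatalog contract,
// not where or how the product module is deployed.
class UnderwritingModule {
    private final ProductCatalog catalog;

    UnderwritingModule(ProductCatalog catalog) {
        this.catalog = catalog;
    }

    String quote(String productCode) {
        return "Quote for " + catalog.describe(productCode);
    }
}

public class VerticalSlicingDemo {
    public static void main(String[] args) {
        ProductCatalog catalog = new InMemoryProductCatalog();
        System.out.println(new UnderwritingModule(catalog).quote("TERM-LIFE"));
        // prints: Quote for Product[TERM-LIFE]
    }
}
```

Because underwriting and claims depend only on the interface, the product module can be redeployed and scaled on its own without rebuilding either of them.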

The exercise or operation described above is called “vertical slicing”. It makes microservices autonomous: they can be changed, deployed and scaled independently of each other. Let us now see how Figure 1 gets transformed by all this.

Figure 2 - Microservices emerging from Vertical slicing.

In Figure 2, vertical columns appear instead of the horizontal layers of Figure 1. Each vertical column is a microservice, as it is aligned with a business context/capability.

It is interesting to note that even with microservices the layered cake is still present, but it is very lean now. Each microservice has its own UI, business logic, data access logic and database arranged in layers. Microservices can thus be thought of as vertically sliced, lean, layered-cake components that work together to compose a bigger application. This makes me wonder whether “microservices” is a misnomer; a better name might have been “mini-apps”, because each vertical pillar is a self-contained application in itself.
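A minimal sketch of what one such lean slice might look like inside, using plain Java and hypothetical class names (the in-memory map stands in for the slice’s own database):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Data access layer of the underwriting slice; an in-memory stand-in
// for the slice's own database.
class PolicyRepository {
    private final Map<String, Double> premiums = new HashMap<>();

    void save(String policyId, double premium) {
        premiums.put(policyId, premium);
    }

    Optional<Double> findPremium(String policyId) {
        return Optional.ofNullable(premiums.get(policyId));
    }
}

// Business logic layer of the same slice.
class UnderwritingService {
    private final PolicyRepository repository;

    UnderwritingService(PolicyRepository repository) {
        this.repository = repository;
    }

    void underwrite(String policyId, double sumAssured) {
        // Toy premium calculation; real underwriting rules would live here.
        repository.save(policyId, sumAssured * 0.02);
    }

    double premiumFor(String policyId) {
        return repository.findPremium(policyId).orElseThrow();
    }
}

public class UnderwritingSlice {
    public static void main(String[] args) {
        UnderwritingService service = new UnderwritingService(new PolicyRepository());
        service.underwrite("POL-1", 100_000);
        System.out.println(service.premiumFor("POL-1")); // prints 2000.0
    }
}
```

The UI layer (a controller exposing REST endpoints, say) would sit on top of `UnderwritingService`; the point is that all the layers belong to this one slice and ship together as one deployable.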

Now the underwriting and claims microservices will definitely need to query the product microservice. The claims module also needs to query the underwriting module, and the underwriting module, in turn, needs to communicate with the party microservice (not shown in the image for simplicity) whenever a new policy is created. In short, microservices cannot function in silos; they must communicate. Since microservices widely expose REST endpoints over HTTP, that can be used as the protocol of communication. Microservices can also communicate over asynchronous channels such as a messaging system.
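As a sketch of the HTTP option, the claims service could call the product service using Java’s built-in HTTP client (Java 11+). The service name and path here are illustrative; in practice the base URL would come from service discovery or external configuration:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ProductClient {
    // Base URL of the product microservice (hypothetical).
    private final String baseUrl;

    public ProductClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Builds the GET request for a product by id. Actually sending it
    // (and handling timeouts, retries and failures) is where the real
    // complexity of inter-service communication lives.
    public HttpRequest requestFor(long productId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/products/" + productId))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        ProductClient client = new ProductClient("http://product-service");
        System.out.println(client.requestFor(42L).uri());
        // HttpClient.newHttpClient().send(request, ...) would perform the call
    }
}
```

Swapping this for an asynchronous channel (publishing a "policy created" event to a message broker, for instance) changes the coupling model but not the underlying point: the slices talk over the network, not through in-process calls.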

So, if deploying microservices as individual components in separate processes was not challenging enough, inter-service communication over the network escalates that challenge even further. You need solid business alignment and DevOps support on the cloud (public or private) to successfully build applications based on a microservices architecture. Luckily, in 2017, we have exceptional cloud providers like AWS, Azure, Google and DigitalOcean, to name a few. Add to that the plethora of excellent continuous delivery tools and containerization options. These complement each other to help implement microservices-based applications. You might say the stars have finally aligned for building the loosely coupled applications we have wanted for years.

It’s not just scaling needs or the support of cloud infrastructure and DevOps tools that is fueling microservices-based applications; there are other factors as well. Computers have exploded in many forms and shapes in the recent past. If laptops and desktops were not enough, you have tablets and mobiles. We are already in the age of wearables (smartphones will soon be a thing of the past, in my opinion), and the best is yet to come with IoT fuelled by 5G, possibly ushering in a new industrial revolution. This means applications will not just be accessed from laptops, desktops or phones; there will be a plethora of other clients including robots, drones and maybe even your teeth and hair. This will put tremendous demand and strain on software applications in the future. Hence, it is imperative for software applications to switch to a loosely coupled microservices architecture and gain extreme agility and flexibility.

