
Primer on 12-factor app

The 12-factor app guidelines were published by Heroku, the platform-as-a-service (PaaS) provider (now part of Salesforce). They started as a set of principles and best practices one needed to follow to build applications on the Heroku PaaS. Over time, they have become the de-facto standard for building web applications on the cloud, or more precisely, cloud-native applications. The original 12-factor guidelines were not really meant for microservices. In this post, I will go over the twelve factors and check how each fits into a microservices architecture.

1 – Codebase
One codebase tracked in revision control, many deploys

The application codebase should be maintained in a version control system (VCS). All developers should have easy access to the VCS. There is only one codebase per application in the VCS, but there can be multiple deployments of this codebase. For example, an application can be deployed in different environments in a typical CI pipeline - pre-production, user acceptance, production, etc. These environments hold the codebase of the same application, but they can be in different states or versions. Pre-production can be a few commits ahead of the code that is currently running in production.

2 – Dependencies
Explicitly declare and isolate dependencies

All web applications rely on external libraries (notably framework libraries/jars) to run. There is a high chance that the target deployment environment (say, your web server) does not have the required libraries/jars. Hence the web application must declare all its dependencies with their correct versions. These dependencies can then be bundled into the deployable unit.
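As a sketch of what explicit declaration looks like, here is a Gradle fragment in the style of the build.gradle used later in this blog for the lead microservice. The artifact coordinates and versions are purely illustrative:

```groovy
dependencies {
    // framework jars the app needs at runtime, pinned to explicit versions
    implementation 'org.springframework.boot:spring-boot-starter-web:1.5.3.RELEASE'
    runtimeOnly 'org.mariadb.jdbc:mariadb-java-client:2.0.1'
    testImplementation 'org.springframework.boot:spring-boot-starter-test:1.5.3.RELEASE'
}
```

The build tool resolves these into the fat jar, so the target server needs nothing pre-installed beyond a JVM.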

3 – Configuration
Store config in the environment

Configuration information must be kept separate from the source code. This may seem obvious, but often we are guilty of leaving critical configuration parameters scattered in the code. Instead, applications should have environment-specific configuration. Sensitive information like database passwords or API keys should be stored in these environment configurations in encrypted form.
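A minimal sketch in plain Java of reading settings from the environment with a sensible local fallback. The variable names (DB_URL and the MariaDB URL) are my own illustration, not from any real deployment:

```java
// Sketch: configuration comes from the environment, not from the code.
// DB_URL is a hypothetical variable name used for illustration.
class EnvConfig {
    static String get(String key, String fallback) {
        String value = System.getenv(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // each environment (pre-production, UAT, production) sets its own values
        String dbUrl = get("DB_URL", "jdbc:mariadb://localhost:3306/leads");
        System.out.println("Connecting to " + dbUrl);
    }
}
```

The same binary then runs unchanged in every environment; only the environment variables differ.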

4 – Backing Services
Treat backing services as attached resources

An application connects to backing services over the network. A backing service can be a database like MySQL or MariaDB, a distributed cache based on Redis or Hazelcast, or a NoSQL store like MongoDB. Applications typically use connection strings/URLs to reach these systems. If one of these servers is moved to a different node, or a new node comes up, the connection details of that backing service change. The application should be able to handle such changes without code changes, rebuilds, or redeployments; the connection settings should be part of the configuration. Another example is moving the application from pre-production to production: in this case too, the connection settings for the database server will certainly differ. The code should detect the environment profile and function as expected without any change.
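To make this concrete, here is a small sketch: the whole backing service is described by a single connection URL held in configuration, so moving the service to a new node only changes that string. The Redis-style URL below is purely illustrative:

```java
import java.net.URI;

// Sketch: a backing service is just an attached resource identified by a URL.
class BackingService {
    final String host;
    final int port;

    BackingService(String url) {
        URI uri = URI.create(url);   // e.g. "redis://cache-2.internal:6379"
        this.host = uri.getHost();
        this.port = uri.getPort();
    }
}
```

Swapping the cache node, or pointing at a production database instead of a staging one, is then a pure configuration change.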

5 – Build, release, run
Strictly separate build and run stages

This principle states that build, release and run stages should be treated separately.
During the build stage, the developer is in charge. This is where feature/capability branches are created, development is done, tests are run, and code is finally merged to the integration/develop branch.
In the release stage, the software is prepared for a possible release to production - maybe a release candidate or a general availability version. Regression and other tests are run to verify that the software behaves as defined in the specification and can be pushed to production. The actual production release is also tagged.
Finally, in the run stage, the application is deployed to production. It should then run without any intervention or modification.

If a bug is detected in production, or a new feature comes along, it has to be addressed all the way back in the build stage after detailed analysis. This disciplined approach minimizes risk, creates traceability, and establishes a well-oiled process. It is evident that automation around an agile CI/CD process is key to implementing this guideline.

6 – Processes
Execute the app as one or more stateless processes

This guideline suggests building stateless web applications. These applications are easy to scale and upgrade. Application state is stored only in backing stores like databases. This is also a kind of warning that keeping too much session state and sharing it across a cluster is not a best practice.
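A tiny sketch of what statelessness means in code: the handler itself holds no conversational state, so any instance behind the load balancer can serve any request. The in-memory map below just stands in for a real backing store such as a database or Redis:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a stateless handler; all state lives in the backing store it is given.
class VisitCounter {
    private final Map<String, Integer> store; // stand-in for a database/Redis

    VisitCounter(Map<String, Integer> store) {
        this.store = store;
    }

    // no instance fields are mutated, so any copy of this process gives the same answer
    int recordVisit(String user) {
        return store.merge(user, 1, Integer::sum);
    }
}
```

Because two handler instances sharing the same store are interchangeable, scaling out is just a matter of starting more of them.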

7 – Port Binding
Export services via port binding

The application is self-contained and exports its services by binding to a port itself; they are then accessible (securely, to the external world) as URLs over the HTTP protocol.
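As a sketch using only the JDK's built-in HTTP server, the application binds to a port on its own and exports its service over HTTP, instead of being dropped into an external web server. The /health path and port are my own illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch: the app is self-contained and exports HTTP by binding to a port.
class PortBoundApp {
    static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        HttpServer server = start(8080); // the port would normally come from config
        System.out.println("Listening on port " + server.getAddress().getPort());
    }
}
```

Spring Boot follows the same idea with its embedded Tomcat: the jar is the server, and the port is just configuration.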

8 – Concurrency
Scale out via process model

The application should run as a single process on the OS. That process can then be scaled out horizontally across multiple nodes.

9 – Disposability
Maximize robustness with fast startup and graceful shutdown

The application must start up fast. A new deployment or upgrade should not take hours. In the case of a crash, spawning a new node and bringing the application up on it should not take more than a few minutes. The application instance must be capable of shutting down gracefully without impacting overall scalability (#8) or jeopardizing the backing stores (#4). Also, while starting up and shutting down, the application should send notifications to the monitoring tools on the network.
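A minimal Java sketch of the graceful-shutdown side, using a JVM shutdown hook; the service class and its behaviour are illustrative:

```java
// Sketch: stop accepting work and release backing resources on shutdown.
class DisposableService implements AutoCloseable {
    private volatile boolean running = true;

    boolean isRunning() {
        return running;
    }

    // invoked on SIGTERM via the shutdown hook, or directly by callers
    @Override
    public void close() {
        running = false;
        // real code would drain in-flight requests and close DB/cache connections here
    }

    public static void main(String[] args) {
        DisposableService service = new DisposableService();
        Runtime.getRuntime().addShutdownHook(new Thread(service::close));
        System.out.println("Service started; running=" + service.isRunning());
    }
}
```

Fast startup plus this kind of clean exit is what lets an orchestrator kill and respawn instances freely.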

It is evident that this principle indirectly points to cloud deployments backed by a CI/CD pipeline with continuous deployment.

10 – Dev/Prod parity
Keep development, staging, and production as similar as possible

I would put it differently. I do not think your development environment needs to mimic staging and production, provided the application is packaged with a good build tool and robust dependency management is in place.

Instead, your build and staging environments should resemble production as closely as possible.
That implies having the exact same versions of the operating system, virtualization, databases, and any other relevant services. This shortens the cycles needed to unearth infrastructure- and environment-related issues. Such bugs can be nasty, tend to be discovered late, and can cause a lot of frustration and consume a lot of time to fix. Also, there is a lot more confidence going to production when you know you have already run the application in exactly the same kind of environment during staging.

This guideline again has deep consequences for classical applications. I will explain this in the next post.

11 – Logs
Treat logs as event streams

Logging is critical for debugging failures in a running application. The application must generate logs, but it should not bother with the storage, management, or analysis of log entries; that is a separate concern for specialized tools. The application generates the log event stream and routes it to these specialized services for analysis, for emitting critical alarms, and for archival for future reference (for example, in some cases regulations or laws require logs to be retained for a certain duration).
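A small sketch of the application's side of the contract: each log entry is one event written to stdout, and everything downstream (routing, storage, alerting) belongs to the platform. The line format below is an illustration, not a standard:

```java
// Sketch: the app only emits a stream of one-line log events on stdout;
// collection, storage and analysis are the platform's job.
class LogStream {
    static String event(String level, String message) {
        return System.currentTimeMillis() + " " + level + " " + message;
    }

    static void log(String level, String message) {
        System.out.println(event(level, message)); // the platform captures stdout
    }

    public static void main(String[] args) {
        log("INFO", "lead created");
        log("ERROR", "backing store unreachable");
    }
}
```

Tools like the ELK stack can then tail this stream without the application knowing they exist.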

12 – Admin Processes
Run admin/management tasks as one-off processes

After an application is deployed and goes live in production, it needs to be monitored and managed. You may want to change certain internal parameters (via JMX, for example), restart one backing server instance, delete some stale files, etc. In production, such tasks should be run separately, in a different process from the actual application.
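As a sketch, such an admin task ships with the same codebase and reads the same configuration, but runs as its own short-lived process with its own main. The purge task below is a stand-in for real work like deleting files or running a migration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: a one-off admin task, run as a separate process, not inside the app.
class PurgeStaleFiles {
    // stands in for real work such as deleting files or running a DB migration
    static int purge(List<String> staleFiles) {
        int count = staleFiles.size();
        staleFiles.clear();
        return count;
    }

    public static void main(String[] args) {
        List<String> files = new ArrayList<>(Arrays.asList("a.tmp", "b.tmp"));
        System.out.println("purged " + purge(files) + " files");
        // the process simply exits when the task is done
    }
}
```

Running it as `java PurgeStaleFiles` against the production environment keeps the task auditable and keeps its failures isolated from the serving application.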

