This is the third and final part of our article series on what a real-life migration from a monolithic application to a multi- or microservices system can look like. In the previous parts we talked about
- why we thought we’d need a change,
- what we changed in the past,
- what we didn’t change, and
- which problems came along with the changes.
This article will summarise our thoughts on the improvements and the drawbacks.
At the time of writing, approximately one year has passed since we took the first step to change our architecture, and there are still a lot of things we’d like to change. In this time our team grew quickly from two to nine developers and then shrank back to five. We created new services for parts of the application that are largely independent from the old main service. We also added components like Elastic and Redis and stabilized our database by running it in a high-availability setup.
Despite all the changes we made, at its core it is still more of a multiservice than a microservice architecture, especially since one service remains a lot bigger than all the others. The services we added since then are probably closer to the expected size of a microservice, and with clearly separated functional topics they also fulfill the idea of separation of concerns better.
We did not end up with 100 services, but with eight over a period of four years. They all differ in size, and none of them is cut around a single concern; each covers a broader collection of concerns.
What Made It Easy, What Made It Hard (External Influences)
What helped us a lot in making the change was the agile development process we applied. It allowed us to make incremental changes and add features over time. What would have helped us even more within this process are early user tests. Most of the time we only worked with our own vision of the product. This perspective might have changed if we had had input from actual users on the new parts of the application.
Sadly, the hosting infrastructure was also one of the limiting factors. Every new service means new virtual machines that have to be set up initially and managed afterwards. But this setup is also the devil we know very well in this project. A switch from this static type of infrastructure to a more dynamic one comes at the price of many unknowns and maybe even more work than before, so we decided against it. To this day we can only estimate the difference in effort and cost between those setups.
As the changes to the system progressed, more things popped up that we would have liked to address. But since time and resources were limited, we did not always do so. Technical tasks were prioritized by how much improvement they would bring to users, owners, and sometimes developers. We did not put a lot of work into items that would only have a small impact on any of the named parties. Of course, this only works as long as we expect them not to become an impediment further down the road. This applies particularly now that the team has shrunk considerably while new features keep being pushed.
Many things changed on our way towards multiple smaller services. In a nutshell, here are the improvements it brought us:
- Changing parts of the codebase is less likely to break the overall application
- Deployments are less risky since they impact a smaller portion of the system
- Less technical debt is being built up
- The new services allowed testing new major versions of libraries like Spring
- Fewer dependencies that cannot be updated due to application-internal version requirements
- Better separation of technical and functional concerns that also makes the overall application more resilient
- New team members can start working more quickly on independent features
- Components of the system that are more likely to fail can be isolated
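One way the isolation of failure-prone components can pay off is by wrapping calls to them with a timeout and a fallback, so an outage of that one service degrades a single feature instead of the whole application. Here is a minimal sketch of the idea; all names, including `fetch_recommendations` and the fallback content, are hypothetical and not taken from our actual system:

```python
import socket

DEFAULT_RECOMMENDATIONS = ["bestsellers"]  # static fallback content

def fetch_recommendations(user_id: int) -> list:
    """Stand-in for an HTTP call to a (hypothetical) recommendation service.

    Even user ids simulate the remote service timing out.
    """
    if user_id % 2 == 0:
        raise socket.timeout("recommendation service did not answer in time")
    return ["based-on-history-1", "based-on-history-2"]

def recommendations_for(user_id: int) -> list:
    # Shield the caller: a failure of the isolated service yields a
    # degraded default instead of an error for the whole page.
    try:
        return fetch_recommendations(user_id)
    except (socket.timeout, ConnectionError):
        return DEFAULT_RECOMMENDATIONS
```

In a monolith, such a failing component can more easily drag everything down with it; behind a network boundary, the failure mode becomes an explicit, handleable error.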
What Got Worse
All these improvements come with some tradeoffs, of course. We want to make no bones about them either:
- Communications: The separation of concerns is not perfect and some information has to be shared
- Some types of information made us spin up a Redis cache, which is now an additional part of the system that has to be maintained
- Components have to partially rely on others in order to work properly and are not completely independent
- These dependencies also extend to versioning: backward compatibility is important
- The timing of deployments therefore has to be considered
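The backward-compatibility and deployment-ordering concerns can be softened with a tolerant style of parsing: a consumer only reads the fields it needs and supplies defaults for the rest, so the producing service can add fields, or be deployed first, without breaking its consumers. A minimal sketch (the payload shape and field names are hypothetical):

```python
import json

def parse_user(payload: str) -> dict:
    """Tolerant reader: pick only the needed fields, default the rest.

    Unknown fields in the payload are simply ignored, so a newer
    producer does not break an older consumer.
    """
    raw = json.loads(payload)
    return {
        "id": raw["id"],                       # required field
        "name": raw.get("name", "unknown"),    # optional, with default
        "status": raw.get("status", "active"), # added in a later API version
    }

# A v1 payload (no "status") and a v2 payload (extra "plan" field) both parse:
v1 = parse_user('{"id": 1, "name": "Ada"}')
v2 = parse_user('{"id": 2, "name": "Bob", "status": "blocked", "plan": "pro"}')
```

This does not remove the need to coordinate breaking changes, but it widens the window in which old and new service versions can coexist.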
The coupling in particular could probably be reduced with concepts like event sourcing and/or message queues. But we decided against them because the problems are still small compared to the estimated effort and added complexity.
Would We Do It Again?
The path we took is not the only way forward. But still: yes. Our customer will scale out further in the future, and this still seems to be the safer bet for us, both in terms of development complexity and the more predictable performance of separate service parts.
Should You Do It?
There surely is no single answer to this question. What we have instead for you is a collection of more questions. By answering these, you might get closer to your solution:
- Do you only want to get rid of a monolith because you heard it’s a bad decision?
- What is currently limiting your monolithic application? Will a multiservice solution solve this?
- Have you thought about the additional complexity multiple services will bring along?
- Can you scale your monolith in other ways that are less complex and will still be sufficient in the future?
- Does it make sense for your current team setup to support the development of multiple services?
- Do you need multiple differently scaled services or microservices™?
Whether you have answers for these questions or still hunger for some abstract recommendation, here you go:
If you are sure your application will grow considerably in the future, you want to address this right now, and you have the time and resources for it, then microservices are probably worth considering. Growth, in this case, can mean new features or even the load generated by many applications or users accessing the application. Since a multiservice application also brings complexity, we still recommend weighing the additional effort you will meet along the way. Don’t do it “just because” or because “it’s the way things are done today”, but make a reasoned decision weighing the pros and cons. And remember, there is a middle ground between the two architectures.