Before you begin your journey from monolith to microservices, there are some basic steps anyone can take to ensure success. Here’s the approach I like to take to “dismantle the monolith.”
It may be tempting to adopt microservices just because it’s now a buzzword.
However, I would caution against this, as it’s more important to spend time recognizing what your specific organisational needs are to operate more efficiently.
Also, whether your application is a monolith or a set of beautifully crafted microservices, the code in either one would likely be very similar.
So what are the advantages of “moving code around” from the monolith into microservices?
Rapid releases – You can update front-end and backend code almost instantaneously, releasing at whatever frequency you choose (e.g. several times a day, weekly or monthly) and independently of the rest of the monolith. Remember, this does not mean the code itself automatically runs faster, just that you can release it faster.
Flexible stack – Gives developers flexibility in their choice of front-end or backend stack; each team can use its favourite programming language, library or framework.
Simplified code – As previously stated, the code is likely to look very similar to what’s in the monolith, even if it ends up written in a different programming language or style. However, rewriting it now offers opportunities for simplification of both business and technical logic.
However, when it comes to migrating large monolithic applications (e.g. JSP, ASP.NET) into a microservices architecture (e.g. Spring Boot, Flask, Seneca, Koa), your organisation will need the following minimum set of capabilities:
Reserve setup time – Setting up the foundations of this architecture requires a significant upfront investment of time including tooling and infrastructure, for example, Kubernetes, Docker, AWS or Azure
Adopt an Agile mindset – A certain level of maturity towards Agile working practices needs to be in place before attempting a microservices architecture, including Agile development with a recommended two-week sprint length, a strong automated testing culture (ideally with unit tests and end-to-end regression tests) and a DevOps/DevSecOps culture, ideally with both Continuous Integration (CI) and Continuous Delivery (CD) in place.
Enforce backwards compatibility – If other teams or organisations are going to rely on your microservices, the API needs to be carefully thought out for ease of use from the start, versioned, and its backwards compatibility maintained strictly (it’s good to follow a versioning scheme like semver).
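To make the semver point concrete, here is a minimal sketch of the compatibility rule a client team might apply when deciding whether it can safely call a newer release of your service. The function names are illustrative, not part of any real library:

```python
# Hypothetical sketch: under semver, MAJOR.MINOR.PATCH, only a major
# version bump is allowed to contain breaking changes.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into a tuple of integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backwards_compatible(client: str, service: str) -> bool:
    """A service release is safe for an existing client when the major
    versions match and the service is not older than the client
    (minor/patch releases only add or fix, never break)."""
    c, s = parse_semver(client), parse_semver(service)
    return c[0] == s[0] and s >= c

# A client built against 1.2.0 can call a 1.4.1 service...
print(is_backwards_compatible("1.2.0", "1.4.1"))  # True
# ...but not a 2.0.0 service, which may contain breaking changes.
print(is_backwards_compatible("1.2.0", "2.0.0"))  # False
```

In practice this check would sit in your client’s dependency tooling rather than application code, but the contract it encodes is the same one your API consumers will assume.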
As you can see there are many capabilities which need to be present in an organisation wanting to make the shift to microservices.
If you feel your company can confidently meet these capabilities, then please proceed to the next step.
However, if you’re feeling hesitant then spend some time developing these capabilities first within your organisation to better prepare yourselves for a future digital transformation.
Once you’ve decided to commit to this journey, you’ll need to analyse the monolith in terms of the various customer journeys it offers.
We want to pick one or more of these journeys to re-create in a microservices world.
The key here is to pick apart the monolith piece by piece, eventually replacing it altogether over time, rather than attempting a huge one-off migration project.
An analogy may be of trying to revamp one room in a house at a time, rather than renovating the entire house in one go.
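This “one room at a time” approach is usually implemented with a routing layer, often called the strangler fig pattern: traffic for the migrated journey goes to the new service while everything else still hits the monolith. A minimal sketch, using a hypothetical checkout journey and illustrative service names:

```python
# Strangler-style routing sketch: the list of migrated path prefixes
# grows as journeys are carved out of the monolith, one at a time.
# "/checkout" and the service names below are hypothetical examples.
MIGRATED_PREFIXES = ["/checkout"]

def route(path: str) -> str:
    """Return which upstream should handle this request."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return "checkout-microservice"
    return "monolith"

print(route("/checkout/payment"))  # checkout-microservice
print(route("/account/settings"))  # monolith
```

In production this logic typically lives in a reverse proxy or API gateway rather than application code, but the principle is identical: the monolith shrinks prefix by prefix until nothing routes to it.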
We want to pick a journey which is end-to-end, but what exactly do we mean by this? An end-to-end journey:
Has a clear entry and exit point within the application which can be replaced whilst keeping the rest of the monolith the same
Is a vertically rather than horizontally sliced journey, meaning it involves both backend and front-end effort rather than being a front-end-only or backend-only effort
This end-to-end journey could be broken down into several user stories spread across several sprints if necessary.
Cross-functional teams help to improve the speed and quality of communication between various disciplines e.g. design, QA, backend, frontend, thus helping deliver stories faster than if each member had stayed within their own functional area.
Ideally, a cross-functional team sits together physically in an office environment. In a remote working scenario this of course isn’t possible, but it’s not a problem as long as team members are available to answer questions at any moment via an instant messaging or video conferencing tool.
The key here is to reduce the lag time in getting an answer to a question a team member may have, in order to unblock them from making progress on a given user story. The faster we can unblock a team member, the faster a user story can be completed.
The ideal size of a cross-functional team would vary depending on the size of the project. Look out for a future Bitbucket Community article on this topic.
Now we can get our new or existing team started on development of our selected customer journey.
What is going to be very important here is having stringent automated testing in place, and as a business, allowing your team the necessary time to write these tests from the start.
Both unit tests and end-to-end or integration tests help ensure the code can be updated now and in the future without worrying about introducing regressions – which is in line with our goals of enabling rapid releases whilst enforcing backwards compatibility.
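As a minimal sketch of that unit-test safety net, here is a pytest-style test file. The `order_total` function is a hypothetical piece of business logic carved out of the monolith; in practice it would live in the service’s own module and be imported here:

```python
# test_order_total.py — run with `pytest`.
# order_total is a hypothetical example of extracted business logic.

def order_total(prices, discount=0.0):
    """Sum line prices and apply a fractional discount (0.0–1.0)."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)

def test_plain_total():
    assert order_total([10.0, 5.0]) == 15.0

def test_discount_applied():
    assert order_total([10.0, 10.0], discount=0.25) == 15.0

def test_invalid_discount_rejected():
    # Regression guard: bad input must keep failing loudly.
    try:
        order_total([10.0], discount=1.5)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Running this suite in CI on every commit is what lets you release several times a day without fear: a breaking change to `order_total` fails the build before it ever reaches a customer.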
Finally, I would highly recommend making documentation part of your definition of done for any given story or feature. By encouraging your development team to document within their code and also as part of a more detailed set of notes, you’ll ensure that documentation becomes a natural part of your team’s routine.
Let’s face it, even a simple toaster comes with a detailed user guide. So why shouldn’t the complex code and systems our developers build come with one too?
This is especially important since we’re now in a microservices world where we can and should expect services to fail. The independent nature of each microservice means that any service could fail at any given time and we need to be able to recover from this quickly. Automated tests will help with this as will readily available documentation.
We’ve picked one or a few customer journeys from our monolith and now we’re ready to make these live to customers as microservices. There may also be a separate or updated front-end application that links to these new microservices, which will have to go live as well.
It will be important to ensure that you have a way to roll back to a previous release and/or be able to switch back to the monolithic journey in case things go wrong.
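One lightweight way to get that escape hatch is a feature flag, a kill switch that sends the journey’s traffic back to the monolith without a redeploy. A minimal sketch, where the flag name and service names are hypothetical and, in production, the flag would live in a config service rather than a plain environment variable:

```python
import os

def checkout_backend() -> str:
    """Pick the upstream for the (hypothetical) checkout journey.
    Flipping USE_CHECKOUT_MICROSERVICE to "false" rolls the journey
    back to the monolith instantly, with no redeploy."""
    use_new = os.environ.get("USE_CHECKOUT_MICROSERVICE", "true") == "true"
    return "checkout-microservice" if use_new else "monolith"

os.environ["USE_CHECKOUT_MICROSERVICE"] = "false"  # simulate a rollback
print(checkout_backend())  # monolith
```

The same flag also enables gradual rollouts: instead of a boolean you can route a percentage of traffic to the new service and ramp it up as confidence grows.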
A lot of companies I see fiddle about with provisioning servers manually and trying to calculate the load they’ll receive.
Instead, I recommend factoring in the premium of a cloud service like AWS, Azure or GCP from the start, so you can go live instantly and gain access to crucial automatic scaling and self-healing capabilities. These features are especially useful in a microservices world.
Order definitely matters when going live. We want the microservices active before pushing the updated front-end that links to them; otherwise customers may experience failures straight away.
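A deploy pipeline can enforce this ordering with a health-check gate: the front-end release proceeds only once the new services report healthy. A minimal sketch with an injectable probe (in a real pipeline the probe would be an HTTP GET against the service’s health endpoint; the fake probe here just stands in for it):

```python
import time

def wait_for_healthy(probe, attempts=30, delay=1.0) -> bool:
    """Poll a health probe until it reports healthy or we give up.
    The front-end release should proceed only if this returns True."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Fake probe that succeeds on the third call, simulating a service
# that takes a moment to come up after deployment.
calls = {"n": 0}
def fake_probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_healthy(fake_probe, attempts=5, delay=0.01))  # True
```

If the gate returns False, the pipeline stops before the front-end ships, and customers never see a page that calls a dead service.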
Once the microservices and then the front-end have been made live, monitoring and analytics will become important to ensure customers are experiencing a smooth journey.
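Monitoring can start as simply as timing every handler and shipping the numbers to your logging stack, where dashboards and alerts can spot a degrading journey early. A minimal sketch using the standard library, with a hypothetical service and handler name:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout-microservice")  # hypothetical name

def timed(fn):
    """Log how long each handler call takes, even when it raises."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@timed
def handle_checkout(order_id: str) -> str:
    return f"order {order_id} accepted"

print(handle_checkout("A123"))  # order A123 accepted
```

In a mature setup you would swap the log line for a metrics client (Prometheus, CloudWatch and the like), but the discipline of instrumenting every journey from day one is what matters.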
If there’s any recurring theme within microservices – it’s that failures should be expected, but by anticipating them, you can recover as quickly as possible.
If you can become successful at this with the few journeys you’ve just made live, you can just rinse and repeat the same process for taking apart the rest of the monolith.
Originally published on https://bitbucket.org/blog/5-steps-from-monolith-to-microservices.
Please do get in touch if you would like to arrange an independent and impartial review on how your digital transformation project is going.