Introduction to Software Testing: What is it, why do we need it, and what are the different types?

Software testing is an important step in the software development lifecycle. It ensures that the software functions as the stakeholders intended, which contributes to the perception of quality software. Failing to test software properly can lead to disastrous consequences, especially if the software runs in an environment where lives are at stake or where it is responsible for the financial health of an organisation or nation.

What is testing?

Software testing is the process in which a piece of software, be it a module, component or application, is verified and validated. In other words, it is the process of making sure a piece of software is:

  1. Built right (verification)
  2. The right thing that the users actually want to use (validation)

There are two ways to go about software testing: automated testing and manual testing.

Automated testing automates certain repetitive but necessary tasks in a formalised testing process, or performs additional tests that would be difficult to carry out manually.

Manual testing, on the other hand, is done by testers playing the role of the users to identify defects that automated testing missed. The tester follows a written test plan to ensure the completeness of a test.

In the latter part of this article, we will look at the different types of testing and how they are done.

Why testing is important

Now you know what testing is, but you may be wondering why it is important. Let us use a simple scenario to illustrate the importance of testing.

A highly reputable medical device manufacturer, AXT, has designed and now sells a new surgical robot equipped with a laser scalpel mounted on an arm. The scalpel can cut through human skin and tissue with precision. The robot's control panel has two joysticks: the one on the left moves the robotic arm along the vertical plane (up or down), while the one on the right moves it along the horizontal plane (forwards, backwards, left and right).

AXT specified that every 1-degree tilt of the left joystick, forwards or backwards, shall move the arm down or up by one centimetre. It also specified that every 5-degree tilt of the right joystick in any direction shall move the arm in that direction by one-fifth of an inch.

After seeing several live demonstrations performed on dummies and receiving good feedback from trials involving some of its surgeons, Tea General Hospital bought one of the surgical robots from AXT for its new operating theatre. Technicians from AXT went to the hospital, installed the robot in the new operating theatre, and indicated on the official checklist that they had verified it was working.

Three days later, a surgeon trained to use the surgical robot decided to use it to perform brain surgery on a young patient. With the patient lying on the operating bed, the surgeon powered on the robot and started manipulating the joysticks. He moved the right joystick to bring the scalpel end of the arm above the patient. It worked as intended. Then the surgeon pushed the left joystick forward to lower the arm. Even though the joystick was tilted by only five degrees, the arm plunged downwards and the laser scalpel struck the patient in the face, punching through the skull into the brain.

The patient died on the spot, leaving the parents extremely distraught and the surgeon traumatised. The surgeon quit his job the next day and was found dead on the ground floor of his apartment building two days later, having jumped from his kitchen window on the ninth floor.

An investigation later revealed that the technicians had not tested the robot and had checked off the checklist confidently, assuming they had done everything correctly. They trusted their installation and setup skills, having done the job many times for other hospitals. Had they tested the robot in the first place, they would have found that they had failed to connect the signal regulator for the module controlling the robot arm's vertical movement.

The scenario described above may seem like it came from a horror movie, but it reflects what can happen when a software system, or any system for that matter, is not tested thoroughly.

Different types of software testing

Software testing can be divided into two categories: functional and non-functional testing.

Functional testing is a quality assurance process that checks that each software component does what it is supposed to do. For example, if a calculator application says it can determine the sum of two numbers, then a check is performed to verify that it returns the correct sum for any two numbers.

On the other hand, non-functional testing checks the way the software operates. Using the calculator example, a non-functional requirement might specify that the calculator has to return a result within one second. If the calculator takes 20 seconds to return the correct result, it is technically functional. However, who would use a calculator that takes longer than a human to add two numbers?

Functional testing

Unit testing

Unit testing is a type of functional testing that exercises a piece of software using unit tests: automated tests written and run by software developers to verify that a section of the software meets its design and behaves correctly. Generally, they are written to cover specific core functions within the application and to ensure the functions return the correct response for a given set of inputs.

With continuous delivery and continuous testing, unit tests form a big part of the process, since they verify that every section of the software they cover behaves correctly. Failing tests indicate that certain functionality within the application has not been implemented properly.
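
Here is a minimal sketch of such a test, reusing the calculator example from earlier. The Calculator class and the use of JUnit 5 are assumptions made for illustration, not a prescription:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {
    // Verifies the function returns the correct response for given inputs.
    @Test
    void addReturnsTheSumOfTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
        assertEquals(0, calculator.add(-1, 1));
    }
}
```

In a continuous delivery pipeline, a failure in a test like this flags the offending change before the build moves any further.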

Smoke testing

Smoke testing verifies that a software build is put together correctly and can actually run. It is commonly used to reveal simple but severe failures early, allowing a prospective software release to be rejected quickly.

Unlike other types of testing, smoke tests are supposed to run quickly, giving the benefit of faster feedback. This way, developers can quickly fix what went wrong and get the next build ready.
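
A smoke test can be as small as checking that a freshly deployed build responds at all. The /health endpoint and port below are assumptions made for this sketch; any cheap end-to-end signal would do:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/health"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Any answer other than 200 OK is a simple failure severe enough
        // to reject the build.
        if (response.statusCode() != 200) {
            throw new IllegalStateException(
                    "Smoke test failed with status " + response.statusCode());
        }
        System.out.println("Smoke test passed");
    }
}
```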

Integration testing

Applications have grown increasingly complex, with many moving parts. Integration testing verifies that the different parts come together and work as a whole. One way it does this is by ensuring that the interfaces between the different software components are defect-free and that the components communicate with each other correctly.
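
For example, an integration test wires real components together rather than mocking one side, so the interface between them is actually exercised. Both classes below are hypothetical stand-ins:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical components: a price catalogue and a checkout service.
class PriceCatalogue {
    int priceOf(String item) {
        return "book".equals(item) ? 10 : 0;
    }
}

class CheckoutService {
    private final PriceCatalogue catalogue;

    CheckoutService(PriceCatalogue catalogue) {
        this.catalogue = catalogue;
    }

    int total(String... items) {
        int sum = 0;
        for (String item : items) {
            sum += catalogue.priceOf(item);
        }
        return sum;
    }
}

class CheckoutIntegrationTest {
    // The real PriceCatalogue is wired in, not a mock, so the test
    // exercises the interface between the two components.
    @Test
    void checkoutAndCatalogueWorkTogether() {
        CheckoutService checkout = new CheckoutService(new PriceCatalogue());
        assertEquals(20, checkout.total("book", "book"));
    }
}
```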

Exploratory testing

In contrast to other types of functional testing, exploratory testing is informal, ad hoc and freestyle, relying more on the tester's creativity than on scripted test cases. The term exploratory testing was coined by Cem Kaner in 1984.

Exploratory testing is all about discovery, investigation and learning while the test is happening. It is up to the testers to come up with new test cases as they navigate through an application. This helps ensure that bugs missed by other types of testing are identified and resolved.

Non-functional testing

Usability testing

Usability testing measures the ease of use of an application by testing it on users who have never seen or used it before. If an application is intuitively designed, users are less likely to be confused by it and therefore more likely to use it.

To conduct usability testing, a scenario or realistic situation is set up in which users perform a series of tasks on the application being tested. Observers watch and take notes. In addition, other test instruments such as scripted instructions, paper prototypes and questionnaires are used to gather feedback. Another popular method is the think-aloud protocol, where users vocalise what they are thinking and how they intend to perform an action as they navigate through the application.

Performance Testing

Performance testing determines how well an application performs. A non-functional requirement given by the users might specify that the application must execute an action and return a result, or give a response to the user, within some time limit. Verifying such a requirement falls under performance test coverage.

Using the calculator example mentioned earlier, a simple performance test can be conducted with a stopwatch: the stopwatch starts counting the moment the tester presses the "=" button and stops the moment the calculator screen shows the result. The time taken can then be recorded as part of a performance test report.
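
The same stopwatch idea can be expressed in code. This sketch reuses the hypothetical Calculator from the unit-testing example and checks the one-second requirement:

```java
public class CalculatorPerformanceTest {
    // Hypothetical unit under test, as before.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) {
        Calculator calculator = new Calculator();
        long start = System.nanoTime();          // the stopwatch starts on "="
        int result = calculator.add(2, 3);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        // Record the timing as part of the performance test report.
        System.out.println("Result " + result + " in " + elapsedMillis + " ms");
        if (elapsedMillis > 1000) {              // the one-second requirement
            throw new AssertionError("Too slow: " + elapsedMillis + " ms");
        }
    }
}
```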

Stress testing

A modern application generally performs quite well on modern machines and can handle several dozen people using it. However, when the number grows to several hundred or even several thousand users per minute, the application might stop functioning altogether, crashing due to limited hardware resources.

Stress testing is about putting the application under heavy load and finding its breaking point. With that information, the amount and type of resources to provision can be worked out more effectively to ensure the availability of the application, and developers can improve its error handling so that it does not crash when computational resources run out, thus improving its robustness.
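
A crude way to find the breaking point is to ramp up concurrent load until errors appear. The endpoint below is hypothetical, and in practice a dedicated load-testing tool would usually replace hand-rolled code like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StressTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/sum?a=2&b=3"))
                .build();
        // Ramp up the number of concurrent users until failures appear.
        for (int users = 100; users <= 10_000; users *= 10) {
            AtomicInteger failures = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int i = 0; i < users; i++) {
                pool.submit(() -> {
                    try {
                        HttpResponse<Void> response = client.send(
                                request, HttpResponse.BodyHandlers.discarding());
                        if (response.statusCode() != 200) failures.incrementAndGet();
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(users + " users -> " + failures.get() + " failures");
        }
    }
}
```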

Agile Software Development Process – Simplifying Domain Driven Design

There are many software development approaches that have been thought up and practised by software engineers around the world.

Domain Driven Design (DDD) is one such approach, introduced and popularised by Eric Evans in the "blue book" of the same name, published in 2004. It uses and expands on the principles and concepts defined in Object Oriented Analysis and Design (OOAD), and it is considered a type of agile software development process because it focuses on connecting the implementation to an evolving model.

Anyone who has picked up and read the Domain Driven Design book will know that it can be a heavy and mostly theoretical read, making it hard to get started on applying the concepts.

In this article, we will attempt to make DDD easier for developers to understand and apply.

What does domain mean in Domain Driven Design?

If you pick up a dictionary and look up the word "domain", you will come across an explanation that goes like this:

A specified sphere of activity or knowledge.

But it still does not help us answer the question: what does this mean in the context of software engineering?

It refers to the subject area in which the software is intended to be used. For developers, you can think of it as the business logic of an application: the rules that define how the objects in the system relate to and interact with each other to create and modify modelled data.

What is Domain Driven Design?

Continuing from where we left off, Domain Driven Design is basically an approach to software development where the "business logic" of an application is king of the hill; not the RESTful APIs that other applications need to interface with, nor the databases needed to store the data. The business logic is modelled out in the form of objects, properties and behaviours.

But that is not all of it.

Domain Driven Design is also about the software team collaborating with domain experts or subject-matter experts to improve the application model and resolve any domain-related issues.

There are also several terms introduced in the Domain Driven Design book by Eric Evans that are useful when describing and discussing DDD practices:

  • Context
    It refers to the setting in which a word or statement appears and which determines its meaning. For example, the word "flight" can take on different meanings depending on when and where it is used, even within the airline industry.
  • Model
    The domain model is a representation of the concepts and their relationships in a given domain. As a system of abstractions, it describes selected aspects of the domain and can be used to solve problems within it.
  • Bounded Context
    A bounded context is a logical boundary within which a particular model is defined and applicable. We can think of a bounded context the same way each nation within the Association of Southeast Asian Nations (ASEAN) or the European Union (EU) has its own official language and policies that do not necessarily apply to its neighbours.
  • Ubiquitous Language
    It refers to the language structured around the domain model that simplifies and standardises the vocabulary used. The software team can use it to connect their activities with the software.

Building blocks of Domain Driven Design

We cannot implement Domain Driven Design without first understanding the high-level concepts defined for creating and modifying the domain model. In other words, these building blocks are tools, each meant for solving a particular kind of problem within the domain. A short code sketch follows the list below.

  • Entity
    An entity is an object that is identified by its consistent thread of continuity and has a unique identifier (e.g. a person or user).
  • Value Object
    An immutable object that has attributes but no identity. (e.g. Money or Currency)
  • Domain Event
    An object used to record a discrete event related to model activity within the system.
  • Aggregate Root
    An aggregate root is a type of entity that groups a cluster of entities and value objects within a given bounded context. It serves as the main entry point through which external objects and client code access and/or modify the various entities and value objects. Ideally, each aggregate root has its own matching repository.
  • Service
    Not to be confused with application service, a service is an operation or a form of business logic that does not naturally fit within the realm of an object.
  • Repositories
    In DDD, a repository is a service that uses a global interface to provide access to the entities and value objects within an aggregate. It should come with methods that allow for the creation, modification and deletion of objects within the aggregate.
  • Factories
    Factories encapsulate the logic of creating complex objects and aggregates, ensuring the client has no knowledge of the inner workings of object creation.
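
To make these building blocks concrete, here is a minimal Java sketch of an entity, a value object and an aggregate root. The Order domain and all names are hypothetical illustrations, not canonical DDD examples:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Value object: immutable, compared by its attributes, no identity.
record Money(String currency, long amountInCents) { }

// Entity: identified by a unique identifier rather than its attributes.
class OrderLine {
    private final UUID id = UUID.randomUUID();
    private final String product;
    private final Money price;

    OrderLine(String product, Money price) {
        this.product = product;
        this.price = price;
    }

    UUID id() { return id; }
    Money price() { return price; }
}

// Aggregate root: the single entry point through which client code
// accesses and modifies the entities and value objects it groups.
class Order {
    private final UUID id = UUID.randomUUID();
    private final List<OrderLine> lines = new ArrayList<>();

    void addLine(String product, Money price) {
        lines.add(new OrderLine(product, price));
    }

    Money total() {
        long cents = lines.stream().mapToLong(l -> l.price().amountInCents()).sum();
        return new Money("SGD", cents); // assumes a single currency for brevity
    }
}
```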

Advantages of Domain Driven Design

Better User Experience

When the whole software team utilises domain driven design, domain terms are captured at the code level, such as in the naming of classes and their methods. Furthermore, the team can ensure that the software frontend reflects the domain model by using the same terms from the ubiquitous language and implementing the same behaviours described by the APIs. This way, the users of the software have a better and easier time using it to achieve their goals.

Improves Flexibility

DDD is heavily based on OOAD concepts: domain models are mostly objects, which makes them highly modular and encapsulated. This means the domain models can be changed and improved much more easily throughout the software lifecycle.

Ease of Communication

Domain driven design emphasises the early development of a common, ubiquitous language related to the domain model. This reduces the need for jargon, which makes communication between domain experts and developers easier and minimises confusion.

Disadvantages of Domain Driven Design

Requires robust domain expertise

It is not enough for a software project to have a team of technically proficient people working on it. If these people do not know the intimate details of the subject area in which the application will be used, there is a high chance the final product will fail to meet the business requirements. The project team needs to collaborate with domain experts, or have team members who can act as subject-matter experts, throughout the development lifecycle.

Encourages iterative practices

In a software project, being able to develop iteratively is an advantage because requirements change all the time. However, this becomes a disadvantage for organisations that have always run software projects using waterfall methods and are unable to change their processes due to resource or talent limitations.

Not suitable for highly technical projects

DDD places a heavy emphasis on the importance of having domain experts to create proper ubiquitous language and domain models for the project. This makes it useful for situations where the business logic is extremely complex and convoluted.

However, it is not suitable for projects that are technically complex but have marginal complexity in terms of business logic. For such projects, the domain experts may not be able to contribute effectively since they might not be able to grasp the problem.

Basic Rules for effective Onion Architecture

Onion architecture is one of the two well-known "clean" software architectures, the other being the Ports and Adapters pattern, also known as Hexagonal architecture. Both make an explicit separation between what belongs in the application core and what belongs outside, such as databases, user interfaces and third-party APIs.

It is a software architecture introduced by Jeffrey Palermo back in 2008 in his four-part series called The Onion Architecture. Like the Layered and Hexagonal architectures, it uses the concept of layers, but the difference lies in the following:

  • Domain Model layer – part of the domain layer where our entities and classes closely related to them e.g. value objects reside
  • Domain Services layer – part of the domain layer where domain-defined processes reside
  • Application Services layer – where application-specific logic i.e. our use cases reside
  • Outer layer (Infrastructure, Interfaces, Tests) – which keeps peripheral concerns like UI, databases or tests

In this article, you will see a set of rules that have been very helpful to me when applying onion architecture in my software projects. The rules are categorised by the layer they apply to. They allow me to focus on solving the domain problem and reduce the need to think about which code should go where, giving me increased productivity while keeping structured flexibility in my codebase.

Some of these rules are derived from my own understanding of the architecture, while others were developed by other expert software developers. A few are not specific to onion architecture at all but are innate to the underlying software patterns.

Let us dive in…

General

These rules are applicable to the whole application or software module.

Rule 1

Do not skip layers when calling methods or functions that live in the deeper layers. The typical flow of execution is as follows:

Interfaces -> Application -> Domain or Infrastructure

Rule 2

Use static methods and classes as a last resort.

Rule 3

Use a dependency injection framework to implement the onion architecture.

Interface Layer

Rule 1

The interface layer only contains code that handles the following:

  1. Deserialisation of incoming objects sent via API request/call.
  2. Serialisation of objects or messages for the purpose of responding to an API request/call.
  3. Exposure and implementation of RESTful APIs and SOAP-based web services.

Rule 2

This is the topmost layer in the onion architecture that can work directly with domain objects such as aggregate roots or entities.

Rule 3

Data transfer objects (DTOs) are to be used when receiving data via an API or responding to an API request, as they define the data contracts. Never use domain objects (e.g. aggregate roots and value objects) to receive or return data via an API.
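
A minimal sketch of this rule, with hypothetical names throughout: the endpoint accepts and returns a DTO, and mapping to and from the domain object happens inside the interface layer.

```java
// Data contract exposed by the API; deliberately separate from the domain.
record CustomerDto(String name, String email) { }

// Domain entity, simplified for the sketch.
class Customer {
    private final String name;
    private final String email;

    Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    String name() { return name; }
    String email() { return email; }
}

// Application service stub; persistence is omitted in this sketch.
class CustomerApplicationService {
    Customer register(String name, String email) {
        return new Customer(name, email);
    }
}

// Interface layer: only (de)serialisation and mapping happen here.
class CustomerEndpoint {
    private final CustomerApplicationService service;

    CustomerEndpoint(CustomerApplicationService service) {
        this.service = service;
    }

    CustomerDto register(CustomerDto request) {
        Customer customer = service.register(request.name(), request.email());
        return new CustomerDto(customer.name(), customer.email()); // never the entity
    }
}
```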

Rule 4

Use a facade to provide a common entry point for multiple endpoints (RESTful API, SOAP and direct function call) if they need to consume a service provided by the application layer.

Rule 5

Facades do not contain any business or domain logic.

Application Service Layer

Rule 1

The Application layer contains only code related to the following:

  1. Coordination between domain objects, services and utilities.
  2. Database transaction control
  3. Logging
  4. Establishing connections to databases
  5. Application control or startup (e.g. main function/main class)

Rule 2

The Application Service layer is only concerned with the software use cases. Each method or function in a service class typically represents one use case.
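
For illustration, a sketch of this rule with hypothetical names: each public method of the application service corresponds to one use case, coordinating domain objects without containing the business logic itself.

```java
// Hypothetical repository interface; per the domain layer rules below,
// it is declared in the domain and implemented in the infrastructure.
interface OrderRepository {
    Order findById(String orderId);
    void save(Order order);
}

// Minimal domain stub for the sketch; the business rule lives here.
class Order {
    private boolean cancelled;

    void cancel() {
        cancelled = true;
    }
}

class OrderApplicationService {
    private final OrderRepository orders;

    OrderApplicationService(OrderRepository orders) {
        this.orders = orders;
    }

    // Use case: cancel an order. The service only coordinates;
    // the rule itself sits inside the Order aggregate.
    void cancelOrder(String orderId) {
        Order order = orders.findById(orderId);
        order.cancel();
        orders.save(order);
    }
}
```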

Rule 3

Classes in the application layer never hold or maintain the state of any domain entity. The only type of state allowed in the application service layer is transaction state.

Rule 4

The application services handle the injection of repositories into the domain services that need them in order to function.

Rule 5

Application services do not contain any business logic.

Rule 6

Application services typically do not return anything, with the exception of query services.

Domain Layer

Rule 1

Domain objects such as aggregate roots and entities do not know anything about storage, and they never work directly with repositories, even if repositories are injected as parameters into an entity's methods.

Rule 2

Aggregate roots and entities are not allowed to leave the application through the Interface or Infrastructure layer.

Rule 3

Repositories generally deal with storage such as files or databases, but in the domain layer they exist only as interfaces whose methods follow the ubiquitous language. The implementations are done in the infrastructure layer.
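
A sketch of such an interface, with hypothetical names: the domain layer declares what it needs in the language of the domain and says nothing about how it is stored.

```java
// Domain entity, simplified for the sketch.
class Customer { }

// Lives in the domain layer: method names follow the ubiquitous
// language, with no hint of files, SQL or ORM frameworks.
interface CustomerRepository {
    Customer customerOfEmail(String email);
    void add(Customer customer);
}
```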

Rule 4

Services in the domain layer exist if and only if there is a need for operations that do not quite fit into an aggregate root or entity. Never create unnecessary services when a domain entity or aggregate root can handle the work internally.

Rule 5

Value objects are to be used to return immutable data or represent a state change in the domain.

Infrastructure Layer

Rule 1

The infrastructure layer contains code related to the following:

  1. Actual implementation of repositories using ORM frameworks such as Hibernate or Entity Framework, or calling databases directly (a sketch follows this list).
  2. Consumption of external or third-party APIs, and the mapping and translation of external models to domain models.
  3. Highly technical implementation of services that are required by the domain such as encryption, document processing and image processing.
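
Continuing the repository sketch from the domain layer rules, the infrastructure layer supplies the actual implementation. An in-memory version is shown here purely as a hypothetical stand-in for an ORM-backed or direct-database one:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified domain types, repeated so the sketch is self-contained.
class Customer {
    final String email;

    Customer(String email) {
        this.email = email;
    }
}

interface CustomerRepository {
    Customer customerOfEmail(String email);
    void add(Customer customer);
}

// Infrastructure implementation; a real one would delegate to Hibernate,
// Entity Framework or direct database calls instead of a map.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<String, Customer> store = new HashMap<>();

    @Override
    public Customer customerOfEmail(String email) {
        return store.get(email);
    }

    @Override
    public void add(Customer customer) {
        store.put(customer.email, customer);
    }
}
```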

Rule 2

This is the bottommost layer that can work directly with domain objects.

Conclusion

These rules serve as guidelines for software developers working with onion architecture and are by no means exhaustive. They are only effective if the developers themselves are disciplined about applying them in their work.

More experienced developers may have differing opinions, or additional rules and principles that they have found helpful during development and implementation. If there are corrections to be made, do leave a comment below and I will update the information here. This way, all of us benefit and the general quality of software improves.

Software Architectures – Microservices

In the world of software development, many software architecture patterns have emerged as expert developers figured out the best ways to solve recurring problems in their line of work.

In this multipart series on Introduction to Software Architecture Patterns, we will be looking at some of the common patterns such as:

  1. Event-driven
  2. Hexagonal
  3. Multitier
  4. Peer-to-peer
  5. Service-oriented
  6. Broker patterns
  7. Microservices
  8. Monolithic
  9. Serverless

For this first article, we will look at microservices in detail: what the pattern is, what it is not, its pros and cons, and when to use it.

What a microservice is and what it is not

Microservices are now among the most hyped software architecture patterns in the tech industry. Maybe you read about them in some tech news or articles. Or maybe you heard about them from colleagues who happened to read about them. Maybe your boss asked you to design the company's next software project as microservices and you are scratching your head.

So what is it really?

In its simplest form, it is a variant of the service-oriented architecture style that structures a software system as a collection of loosely coupled services.

It can also be said that microservices take the single responsibility principle coined by Robert C. Martin to the next level by applying it to loosely coupled services that can be developed, deployed and maintained independently. Each service is built to work on a discrete task and communicates with the other services through simple APIs to solve complex problems.

From the description above, a microservice seems simple enough to understand. Yet there is no universal definition of what it is. Different industry experts have differing opinions, but over time they have come to a consensus on some of the defining characteristics of a microservice:

  1. Services in a microservice architecture are processes that communicate over a network using technology-agnostic protocols such as HTTP (a minimal sketch follows this list).
  2. Services are independently deployable.
  3. Services are organised around business capabilities.
  4. Services can be implemented using different programming languages, databases, and hardware and software environments.
  5. Services are small in size, built with messaging enabled, bounded by context, autonomously developed, decentralised, and built and released through automated processes.
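
As a minimal sketch of the first characteristic, here is a tiny self-contained service that does one discrete task and communicates over plain HTTP. Everything here, from the endpoint to the port, is a hypothetical illustration using only the JDK's built-in server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SumService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // One discrete task: add the numbers passed as query parameters,
        // e.g. GET /sum?a=2&b=3 returns 5.
        server.createContext("/sum", exchange -> {
            String query = exchange.getRequestURI().getQuery();
            long sum = 0;
            for (String pair : query.split("&")) {
                sum += Long.parseLong(pair.split("=")[1]);
            }
            byte[] body = String.valueOf(sum).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("SumService listening on http://localhost:8080/sum");
    }
}
```

A peer service written in an entirely different language could consume this endpoint just as easily, which is exactly the technology-agnostic property the first characteristic describes.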

Now that we know what a microservice is, let us take a look at some of the advantages of using one.

Advantages

The microservice architecture comes with several advantages that can make it the better option for developing applications, but only if it is done correctly and properly.

Resilient to partial service failure

The biggest advantage the microservice architecture offers over other architectures is that any given service should be able to continue operating even if others go down due to software bugs or crashes. This is because each service is designed and built to be mostly self-contained and autonomous.

Highly maintainable and testable

The modular nature of a microservice architecture means that each service is small and specialised. A service can then be easily replaced, changed or updated without affecting the rest of the application.

Furthermore, the developers responsible for a service can easily test it. They do not have to deal with the large and unwieldy application that inevitably develops as capabilities expand, or dig into other areas of the application just to understand the business and test their changes.

Loosely coupled

One of the biggest problems with the traditional monolithic architecture is that the application can suffer from vendor lock-in due to the kind of technology used. This creates problems when the vendor goes out of business or the price of continuing to use the vendor's product rises beyond what makes sense for the application. Changing the application to use an alternative technology or platform can be very costly.

With microservices, services are loosely coupled from each other. Each service can use a different technology or run on a different platform. If a particular service needs to change its technology stack, the team responsible for it can do so without affecting the others. The loose coupling also means that source code changes stay within each service and do not propagate to the others.

Independently deployable

Since services in a microservice architecture are loosely coupled and each can come with its own storage mechanism, there is no real need for a service to wait for the others to go offline or come online before it can be deployed.

Organised around business capabilities

When done properly during the initial phase, each microservice is designed, developed and deployed to solve problems in a specific business domain (e.g. customer relationship management, sales order management, invoicing). Developers can be hired and organised so that they focus on solving problems in that business domain, creating products instead of projects and glue code. Developers responsible for one or a few services develop expertise in that area of the business, leading to faster turnaround of new features. The final product or service can also be reused in different processes, contexts or channels, which leads to cost savings for the organisation since there is no need to green-light new projects and hire new developers.

Owned by a small team

By breaking an application down into smaller but autonomous services, teams can become smaller and more efficient. There is no need for dozens of software developers, managers and support staff to run a service. A service that is smaller than a monolithic application also starts up faster, both in production and in the development environments the developers use. With that, teams can be more productive and focus on delivering bug fixes, features and improvements.

Furthermore, each team is responsible for understanding the requirements, developing the necessary features, testing, deployment and support. This gives them more ownership and lets them see how their work affects the users and the business.

Next up, we will take a look at the disadvantages of using microservices.

Disadvantages

Increased Complexity

For microservices to work together to solve a larger or more complex problem, they communicate with each other via API calls over the network. This alone is complex to manage and implement. In addition, domains such as e-commerce and finance typically feature workloads that require transactional processing. Such workloads can be difficult to implement with microservices, since APIs are stateless by default. To ensure data integrity and consistency across the different systems, additional workarounds or software components have to be introduced.

Furthermore, different services have different demands on their runtime environments. Different servers may have to be introduced, configured and maintained, which ultimately adds more points of failure to the whole system.

Lastly, there is the complexity of collaboration amongst the different stakeholders. Ideally, the different parts of the system will have their respective owners, and these developers have to coordinate amongst themselves on the best approach to solving a problem, deploying and maintaining the system. When there is miscommunication, parts or even the whole of the application may not work properly, resulting in lost productivity and cost.

Requires cultural changes

Traditional applications built using the monolithic approach have a single codebase that developers work on together. Organisations tend to split the project team based on technical specialties (e.g. infrastructure, networking, UI/UX, backend). This can create a situation where the development team has no access to the production environment, leading to a disconnect between the developers and the customers.

On the other hand, microservices generally require the developers to have ownership of the full lifecycle of what they build. Organisations would need to think about and implement major changes in terms of the following:

  1. The size and the responsibilities of the team.
  2. The development and quality assurance process.
  3. The structure of the software and how it aligns to the business capabilities so that it can be broken down into small and autonomous services connected via APIs.
  4. How to deploy and maintain the services.

Depending on the organisation, this restructuring and cultural shift may impact business operations and cause productivity to drop as people shift into a different mode of operating. Some employees may choose to leave if they are unable to adapt to such a change or are unwilling to be responsible for the full lifecycle of an application.

More expensive

Microservices are more expensive to run and maintain over time because of the number of moving parts. In the event of a major software fault that brings down multiple services, multiple teams of developers and engineers have to be involved to resolve the issues, which translates to hundreds if not thousands of man-hours.

Furthermore, due to the complexity of microservices, it may take longer to restore the services to operation than it would for a traditional monolithic application. This can translate into monetary losses if the services are part of an on-demand product offered by a company or a realtime processing system.

More resources may also be required if the services need to scale up in realtime, by adding more virtual machine instances or servers, to handle huge workloads or sudden spikes in activity.

Pose security challenges

In a microservice architecture, an application is split into multiple independently running services. For the different services to coordinate their actions in support of the business operation, they communicate with each other via APIs that are independent of machine architecture and programming language. This creates a large attack surface that cybercriminals can use to disrupt and bring down services, which means the security team needs to be hyper-vigilant about any possible interruption.

Organisations, in order to maintain their competitiveness and growth, may prefer to convert existing monolithic applications into microservices. Different programming languages and frameworks may be used by different teams to build the services. As no programming language or framework is completely safe and free of vulnerabilities, each new language or framework introduced to build new services, and each new service added to the existing mix, can increase instability and the number of security loopholes that can be exploited.

Furthermore, due to the distributed nature of microservices, traditional logs are not as effective for tracing what is going on in the application, and more logs are generated concurrently. The logs therefore need to be consolidated and their events correlated to build a good picture of what is happening. Otherwise, there is a high probability that issues will be masked or covered up, preventing effective mitigation.

With this, we now know what a microservice is, along with its advantages and disadvantages. Next up, we will see what is not a microservice.

What is not a microservice

After reading through the advantages and disadvantages, maybe you have come to the conclusion that microservices are easy to do and you feel confident about them. Maybe you think you could just ask your software engineers to turn individual functions into microservices for other parts of the software to use.

But that would be a huge mistake. The benefits of microservices would immediately be overwhelmed by the runtime overhead and operational complexity, and your project would suffer from over-engineering and timeline slip.

Justin Etheredge wrote an article called You’re not actually building microservices in which he describes the possible symptoms of a software system that is not really built as microservices:

  1. Any changes to one microservice often require changes to others.
  2. Deploying one microservice requires other microservices to be deployed at the same time.
  3. Excessive communication between microservices.
  4. Multiple microservices sharing the same datastore.
  5. Microservices sharing the same code or models.

However, even if an application shows one of the above symptoms, it could still qualify as a microservice architecture, because there will always be exceptional cases. But alarm bells should go off if a service or application displays more than one of the above symptoms; that could mean it is not a true microservice.

So when do you use it?

Even though microservice architecture is the trendy thing to do right now, it is generally advisable not to adopt it for a new software project or a proof of concept, due to the complexity, cost and cultural changes needed. It may even slow down the development process.

But that is not to say we should stay away from microservices. Rather, consider converting an application to a microservice architecture only when the following criteria are met:

  1. The software has grown too big to be managed as a monolith, as described by Martin Fowler in his article.
  2. The organisation has the resources in place and is ready to get on board with it.
  3. The software needs to adapt quickly to market needs.
  4. Parts of the application need to be extremely efficient.

Jake Lumetta wrote an article titled Monolith vs microservices: which architecture is right for your team? that reinforces points 2-4 above.

In the same article, Jake also lists the scenarios that are not suitable for microservices:

  1. Your team is at the founding stage and has only a few members.
  2. You are building an unproven product or proof of concept.
  3. You have no microservice experience.