Emotet now hacks nearby Wi-Fi networks to spread like a worm

Emotet has evolved multiple times since its initial discovery in 2014 by security researchers. Recently, a sample of Emotet malware was found to have gained the ability to spread itself through insecure Wi-Fi networks that are near an infected device.

Once the malware gains access to a nearby Wi-Fi network, it attempts to infect all the connected devices, a tactic that can dramatically escalate Emotet’s spread.

Researchers first observed the Wi-Fi spreading binary being delivered on 23 January 2020, but further analysis showed that the executable carries a timestamp of 16 April 2018, which suggests the behaviour may have gone unnoticed for almost two years.

This Wi-Fi spreading capability further raises the threat level of the already-prevalent Emotet.

Before this discovery, the malware was found to have gained new obfuscation and anti-virus evasion capabilities in November 2019. These capabilities enable Emotet to better escape detection. Meanwhile, its authors have also changed their social engineering tactics to keep in line with current events, sending out malicious emails that claimed to contain Edward Snowden’s new memoir or that used Halloween-themed lures.

What is Emotet?

Emotet is malware that began life as a banking trojan in 2014. Its primary goal is to sneak into your computer in order to steal sensitive and private information.

It has gone through a few iterations. Early versions arrived as malicious JavaScript files. It subsequently evolved to use macro-enabled documents that retrieve the malware payload from command and control (C&C) servers run by the attackers.

Malware is mostly useless to attackers if it is detected early or if security researchers can analyse it to determine how it works. To prevent that, Emotet comes with a few tricks up its sleeve.

Most notably, it knows if it is running inside a virtual machine (VM) and will lie dormant when that happens. This is because cybersecurity researchers use VMs to observe malware within a safe and controlled space.

Emotet can also use its C&C servers to receive updates, much like the operating system (OS) updates on your PC, and this can happen seamlessly and without any outward signs. This way, attackers can install updated versions of the malware or deliver and install additional malware on the target. In addition, the C&C servers serve as a dumping ground for stolen information such as financial credentials, usernames and passwords, and email addresses.

How does Emotet spread?

Emotet spreads itself primarily through spam emails (malspam). It goes through your contact lists and sends itself to your friends, family, coworkers and clients. Because the emails come from your hijacked account, they look less like spam, which makes recipients feel safe and more likely to click on the malicious URLs and download infected files.

To increase the likelihood that recipients click on the malicious URLs or open the attachments, the emails may contain familiar branding or tempting language such as ‘Your Invoice’ or ‘Payment Details’. In some cases, the content may be about an upcoming shipment from well-known delivery companies.

Furthermore, if the infected device is connected to a network, Emotet attempts to spread through it, gaining access to other connected systems by brute forcing its way in with a list of common passwords.

For Emotet to spread via Wi-Fi, it first infects the initial system with a self-extracting RAR file containing two binaries (worm.exe and service.exe). Once the RAR file is extracted, worm.exe executes automatically.

The main purpose of worm.exe is to profile nearby wireless networks, enumerating each Wi-Fi network’s SSID, signal strength, encryption and authentication methods. The malware then attempts to connect to each network by brute forcing its password.

Once the malware gains access to a network, it makes a request to its command and control (C2) server and establishes the Wi-Fi connection. Next, it attempts to brute-force the passwords of user accounts on devices in the newly accessed network. If the brute force succeeds and the malware gains access to a device, worm.exe installs service.exe onto it.

Finally, once service.exe is installed on the newly infected device, it communicates back to the C2 server and begins dropping the embedded Emotet executable. The whole spreading and infection process then repeats in an attempt to infect as many devices as possible.

How do you protect your devices from Emotet?

To prevent Emotet from using its Wi-Fi spreading capability to infect connected devices, it is recommended that wireless networks be secured with longer and more complex passwords.

Preventing infection by Emotet is only one part of the solution. Actively monitoring endpoints for newly installed services, and investigating suspicious services or processes running from temporary folders and application data folders within user profiles, is equally important. This way, Emotet and its associated malware can be identified early and eliminated before they cause further damage to the rest of the systems.
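
As a rough illustration of what such monitoring could look like, the sketch below flags processes running out of temporary or profile data folders so that they can be investigated. It assumes the third-party psutil library is installed, and the folder paths are example Windows locations only, not an exhaustive list:

```python
# Minimal sketch: flag processes running from temp/AppData folders.
# Assumes the third-party psutil library (pip install psutil); the paths
# below are illustrative Windows locations, not a complete watch list.
import psutil

SUSPICIOUS_DIRS = ("\\appdata\\local\\temp\\", "\\appdata\\roaming\\", "\\windows\\temp\\")

def find_suspicious_processes():
    hits = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = (proc.info.get("exe") or "").lower()
        # Flag anything executing out of a temporary or profile data folder
        if any(d in exe for d in SUSPICIOUS_DIRS):
            hits.append((proc.info["pid"], proc.info["name"], exe))
    return hits

if __name__ == "__main__":
    for pid, name, exe in find_suspicious_processes():
        print(f"Investigate PID {pid}: {name} -> {exe}")
```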

Furthermore, computers and endpoints should be kept up to date with the latest software patches to eliminate as many vulnerabilities as possible. This prevents other malware associated with an Emotet infection, such as TrickBot, from exploiting those vulnerabilities.

Last but not least, it is important not to open suspicious attachments or click on suspicious links. This way, Emotet cannot gain an initial foothold in the system or network.

IoT and surveillance devices that use Xiongmai Tech firmware discovered to have zero-day backdoor mechanism

Russian security researcher Vladislav Yarmak discovered a backdoor mechanism integrated into DVR/NVR devices built on top of HiSilicon SoCs. He published a full-disclosure report on Habr, a Russian IT and computer science blog.

The backdoor mechanism is implemented using a mix of exploits that take advantage of bugs discovered years ago, with some dating as far back as March 2013.

HiSilicon, a Chinese fabless semiconductor company fully owned by Huawei, was initially inferred to be responsible for the backdoor mechanism. An earlier version of the HiSilicon firmware came with telnet access enabled, using a static root password that can be easily recovered from the firmware image.

In 2017, Istvan Toth did a comprehensive and detailed analysis of the firmware and discovered multiple vulnerabilities in the firmware and its built-in web server.

He also published a list of brands with the affected firmware on this GitHub page: https://github.com/tothi/pwn-hisilicon-dvr#summary. The list covers hundreds of products across at least a dozen brands.

Subsequent versions of the firmware had telnet access and the debug port (9527/tcp) disabled by default. Instead, another port, 9530/tcp, was opened to receive a special command that starts the telnet daemon and enables shell access with the same static password. This was intentionally baked into the firmware.

Huawei published an official media statement stating that it is not responsible for the discovered vulnerabilities. It added that it and its affiliates, including HiSilicon, have long committed that they will not and have not installed backdoors, nor will they allow their vendors to do so.

It was later determined by other security researchers that only devices using Xiongmai firmware are affected by the vulnerabilities.

Xiongmai (Hangzhou Xiongmai Technology Co, XMtech) is a Chinese technology company founded in 2009 that develops IoT and surveillance devices such as DVRs, NVRs and IP cameras.

Given that the vulnerabilities remain unpatched and the company is not responding to the disclosure, it is advised that devices using Xiongmai software be replaced. If replacing these devices is not possible, then network access to them should be restricted to trusted users only. The ports involved in this vulnerability are 23/tcp, 9530/tcp and 9527/tcp, and they should be blocked from external access.

What is the difference between Authentication and Authorisation?

If you have been working as a member of the tech community (System Administrator, Software Engineer, etc.), you might have heard of the terms Authentication and Authorisation. Even though they are often used together when the security of a computer system or application is involved, they are two completely different security processes.

What is Authentication?

Authentication in the security context refers to the act or process of validating that a user of a software, computer or system is who they claim to be. The most common way to do this is via the use of a password. If the user enters the correct password, the system assumes the identity is valid and grants access.

The use of password-based authentication is also known as single-factor authentication.
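
As a minimal sketch of what single-factor authentication boils down to, the example below hashes a password with a random salt at registration time and later compares the stored digest against whatever the user typed. It uses only the Python standard library, and the function names are purely illustrative:

```python
# Minimal sketch of single-factor (password-only) authentication.
# store_user/verify_user are hypothetical names used for illustration.
import hashlib, hmac, os

def store_user(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest              # persist both alongside the username

def verify_user(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison
```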

However, it is no longer sufficient to rely on passwords alone to validate a user’s identity. Improvements in computer performance have reduced the time needed to brute force a password (in layman’s terms, trying out every combination of letters, numbers and symbols) and gain access to a system. Furthermore, it is human nature to use something short and/or familiar such as birthdays, social security numbers, national identity numbers and names as passwords.

In order to increase the level of security of a system, multi-factor authentication is becoming the norm and is highly recommended for systems that process sensitive information.

Two-factor authentication is one of the more common multi-factor authentication schemes, employed by companies such as Apple, Google and Microsoft. Under this scheme, the following two factors are commonly used for authentication:

  1. Something that you know (e.g. password)
  2. Something you own (e.g. smart card, smart phone)

This is based on the premise that even if malicious actors manage to get hold of a password to a system, they remain unable to log in because they do not have access to registered hardware such as a smart card, security token or smartphone to further prove they are a valid user.
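
As an illustration of the “something you own” factor, the sketch below uses a time-based one-time password (TOTP). It assumes the third-party pyotp library; in practice, the shared secret would be provisioned to the user’s authenticator app (usually via a QR code) during enrolment:

```python
# Sketch of the "something you own" factor using a time-based one-time password (TOTP).
# Assumes the third-party pyotp library (pip install pyotp).
import pyotp

secret = pyotp.random_base32()     # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

def second_factor_ok(user_supplied_code: str) -> bool:
    # Passes only if the code was generated by the enrolled device within the time window
    return totp.verify(user_supplied_code)
```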

What is Authorisation?

Authorisation takes place after the user has verified their identity. It refers to the act or process of verifying that the authenticated user has the rights or permissions to access or use a particular resource. In this context, a resource can be a file, a folder, or a particular room or area within a building.

The most common implementation of authorisation is role-based access control (RBAC). It is based on the premise that different users play different roles in a given organisation. Their roles ultimately determine the type of information they can access and the amount of responsibility they have.

However, RBAC may not be fine-grained enough to control access to specific resources, since a role typically comes with a fixed set of permissions. This is where attribute-based access control (ABAC) comes into the picture. In addition to the roles and groups a user belongs to, additional attributes such as the user’s citizenship, the action being performed or the time at which access is requested can be used to control access.
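
To make the contrast concrete, here is a minimal sketch of an RBAC check next to an ABAC check. The roles, attributes and actions are invented purely for illustration:

```python
# Contrast between RBAC and ABAC checks; roles, attributes and actions are made up.
from datetime import datetime

ROLE_PERMISSIONS = {
    "finance_manager": {"view_invoice", "approve_invoice"},
    "clerk": {"view_invoice"},
}

def rbac_allows(role: str, action: str) -> bool:
    # RBAC: the role alone decides what the user may do
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(user: dict, action: str, now: datetime) -> bool:
    # ABAC: attributes of the user and the request refine the decision,
    # e.g. only citizens may approve invoices, and only during office hours
    return (
        rbac_allows(user["role"], action)
        and user.get("citizenship") == "SG"
        and 9 <= now.hour < 18
    )
```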

Conclusion

Both authentication and authorisation are fundamental to system and information security. Without them, or when they are implemented poorly, malicious actors could gain access to a system and easily extract sensitive information such as personal data and company secrets. These actors could then use what they acquired to mount further attacks, such as identity fraud, or to help a business’s competitors gain an edge. Even if attacks do not come from outside the organisation, employees within it could accidentally or intentionally access or change information that they are not permitted to.

An opinion on improving voice user interface while ensuring privacy

Voice user interfaces are going to be one of the ways we interact with our devices as we go about our daily lives. It is a very intuitive way for us because we communicate primarily via voice, with text and images to complement it.

But there are still various problems that need work to improve the overall experience. One of them is how the AI behind a voice user interface can interact with us more naturally, the way we interact with fellow human beings.

This article written by Cheryl Platz got me thinking about that. It also touched a little on privacy and why it is a contributing factor that makes it difficult for the current generation of AIs to speak more naturally and understand the context when we speak. Unless, of course, companies don’t give a shit about our privacy and simply collect even more data.

In this article, I am going to share what I think could help improve the AI and ensure user privacy.

Current Implementations and Limitations

To understand us and respond in ways most useful to us, an AI needs processing power, a good neural network that allows it to self-learn, and a database to store and retrieve whatever it has learnt.

The cloud is the best way for an AI to gain access to a huge amount of processing power and a large enough database. Companies like Amazon and Microsoft offer cloud computing and storage services via their AWS and Azure platforms respectively at very low cost. Google offers similar services via its Compute Engine.

The problem with the cloud is a reduced level of confidence where privacy is involved. Anything you store up there is vulnerable and available for wholesale retrieval through security flaws or misconfigurations. Companies could encrypt that data end-to-end to help protect users’ privacy, but the problem is that the master keys are owned by those same companies. They could decrypt the data whenever they want.

Or you could do what Apple did with Siri: store data locally and use differential privacy to help ensure anonymity. But that reduces the AI’s capabilities because it does not have access to a sufficient amount of personal data. Moreover, Siri runs on devices like the Apple Watch, iPhone and iPad, which can be a problem when it comes to processing and compute capabilities, and to having enough information to understand the user.

Although those devices have more processing power than the room-sized mainframes of decades ago, it is still not enough, in terms of energy efficiency and capability, to handle the highly complex neural networks needed for a better voice user interface experience.

Apple did try to change that with its A11 Bionic SoC, which has a neural engine. Companies like Qualcomm, Imagination Technologies and even NVIDIA are also working to increase energy-efficient local processing power for AI through their respective CPU and GPU products.

Possible Solution

Companies should continue their work on hardware so that there will be even more powerful and energy-efficient processors for AI to use.

In addition to that, what we need is a standard wireless protocol (maybe Bluetooth) that lets the AI on our devices, irrespective of vendor, talk to each other when they are near one another and on our home network. This way, the AI on each of those devices can share information and perform distributed computing, thereby improving its accuracy and overall understanding of the user, and respond accordingly.

A common software kernel is also necessary to give different implementations of neural networks a standardised way of doing distributed computing efficiently and effectively.

So now, imagine Siri talking to Alexa, Google Assistant or even Cortana via this protocol and vice versa.

Taking privacy into account, information exchanged via this protocol should be encrypted by default with keys owned only by the user. Any data created or stored should reside only on the device, also encrypted, and nowhere else. Taking a page out of Apple’s playbook, the keys should be generated by some kind of hardware-based “Secure Enclave”.

To further improve the neural network, differential privacy should be applied to any query or information the AI sends to the cloud for processing.
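
For a rough idea of what that could look like, the sketch below applies the Laplace mechanism, one common way of achieving differential privacy, to a simple count before it leaves the device. It assumes numpy is available, and the epsilon and sensitivity values are illustrative only:

```python
# Minimal sketch of the Laplace mechanism often used for differential privacy.
# Assumes numpy; epsilon and sensitivity values are illustrative only.
import numpy as np

def privatise_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    # Smaller epsilon = more noise = stronger privacy guarantee
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. report roughly how often a phrase was spoken without revealing the exact count
print(privatise_count(42))
```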

Conclusion

The above is really just my personal take on how the current AIs powering voice user interfaces can be improved.

In the end, it’s really up to the companies to decide if they want to come together and improve all our lives while taking our privacy and security into account.

Agile Software Development Process – Simplifying Domain Driven Design

There are many software development approaches that have been thought up and practised by software engineers around the world.

Domain Driven Design (DDD) is one such approach that was introduced and popularised by Eric Evans in the blue book of the same name that was published in 2004. It uses and expands on the principles and concepts defined in Object Oriented Analysis and Design (OOAD). And, it is considered to be a type of agile software development process because it focuses on connecting the implementation to an evolving model.

Anyone who has picked up and read the Domain Driven Design book would know that it can be a heavy read and is mostly theoretical, making it very hard for people to get started on applying the concepts.

For this article, we will attempt to make DDD easy to understand and applicable to developers.

What does domain mean in Domain Driven Design?

If you pick up a dictionary and look up the word ‘domain’, you will come across an explanation that goes like this:

A specified sphere of activity or knowledge.

But it still does not help us answer the question: what does this mean in the context of software engineering?

It refers to the subject area in which the software is intended to be used. Developers can think of it as the business logic of an application: the rules that define how the objects in the system relate to and interact with each other to create and modify modelled data.

What is Domain Driven Design?

Continuing from where we left off, Domain Driven Design is basically an approach to software development where the “business logic” of an application is the king of the hill; not the RESTful APIs that other applications need to interface with, nor the databases needed to store the data. The business logic is modelled out in the form of objects, with their properties and behaviours.

But that is not all of it.

Domain Driven Design is also about the software team collaborating with domain experts or subject-matter experts to improve the application model and resolve any domain-related issues.

There are also several terms introduced in the Domain Driven Design book by Eric Evans that are useful when describing and discussing DDD practices:

  • Context
    It refers to the setting in which a word or statement appears and that determines its meaning. For example, the word ‘flight’ can take on different meanings depending on when and where it is used, even within the airline industry.
  • Model
    The domain model is a representation of the concepts and their relationships in a given domain. As a system of abstractions, it describes selected aspects of the domain and can be used to solve problems within it.
  • Bounded Context
    A bounded context is a logical boundary within which a particular model is defined and applicable. We can think of a bounded context in the same way as each nation within the Association of Southeast Asian Nations (ASEAN) or the European Union (EU) having its own official language and policies that do not necessarily apply to its neighbours.
  • Ubiquitous Language
    It refers to the language structured around the domain model that simplifies and standardises the vocabulary used. The software team can use it to connect their activities with the software.

Building blocks of Domain Driven Design

We cannot implement Domain Driven Design without first understanding the high-level concepts that were defined for the purpose of creating and modifying the domain model. In other words, these are tools, each suited to solving a particular kind of issue within the domain (a minimal code sketch follows the list below).

  • Entity
    An entity is an object that is identified by its consistent thread of continuity and has a unique identifier (e.g. A person or user).
  • Value Object
    An immutable object that has attributes but no identity. (e.g. Money or Currency)
  • Domain Event
    An object that is used to record a discrete event related to model activity within the system.
  • Aggregate Root
    An aggregate root is a type of entity that groups a cluster of entities and value objects within a given bounded context. It serves as the main entry point through which external objects and client code access and/or modify the various entities and value objects. Ideally, an aggregate root has its own matching repository.
  • Service
    Not to be confused with application service, a service is an operation or a form of business logic that does not naturally fit within the realm of an object.
  • Repositories
    In DDD, a repository is a service that uses a global interface to provide access to the entities and value objects within an aggregate. It should come with methods that allow for the creation, modification and deletion of objects within the aggregate.
  • Factories
    Factories encapsulate the logic of creating complex objects and aggregates, which ensures the client has no knowledge of the inner workings of object manipulation.
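
To make these building blocks a little more concrete, here is a minimal Python sketch. The Order, OrderLine and Money names are invented for illustration and are not prescribed by DDD:

```python
# Minimal sketch of a few DDD building blocks; Order/OrderLine/Money are invented names.
from dataclasses import dataclass, field
from uuid import UUID, uuid4

@dataclass(frozen=True)
class Money:                       # Value Object: immutable, no identity
    amount: int                    # amount in cents to avoid float rounding
    currency: str

@dataclass
class OrderLine:                   # Entity living inside the aggregate
    line_id: UUID
    product: str
    price: Money

@dataclass
class Order:                       # Aggregate Root: the sole entry point for changes
    order_id: UUID = field(default_factory=uuid4)
    lines: list = field(default_factory=list)

    def add_line(self, product: str, price: Money) -> None:
        # External code never manipulates OrderLine objects directly
        self.lines.append(OrderLine(uuid4(), product, price))
```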

Advantages of Domain Driven Design

Better User Experience

When the whole software team utilises domain driven design, domain terms can be captured at the code level, for example in the naming of software classes and their methods. Furthermore, the team can ensure the software frontend reflects the domain model by using the same terms from the ubiquitous language and implementing the same behaviours described by the APIs. This way, users of the software have a better and easier time using it to achieve their goals.

Improves Flexibility

DDD is heavily based on OOAD concepts. Domain models are mostly objects, which makes them highly modular and encapsulated. This means the domain models can be changed and improved upon much more easily throughout the software lifecycle.

Ease of Communication

Domain driven design emphasises the early development of a common, ubiquitous language related to the domain model. This reduces the need for jargon, which makes communication between domain experts and developers easier and minimises confusion.

Disadvantages of Domain Driven Design

Requires robust domain expertise

It is not enough for a software project to have a team of technically proficient people working on it. If these people do not know the intimate details of the subject area on which the application will be used, there is a high chance the final product will fail to meet the business requirements. The project team would need to collaborate with domain experts or have team members who can act as the subject matter experts during the development lifecycle.

Encourages iterative practices

In a software project, being able to do iterative development is an advantage because requirements change all the time. However, it is a disadvantage for organisations that have been running software projects using waterfall methods and are unable to change their processes due to resource or talent limitations.

Not suitable for highly-technical projects

DDD places a heavy emphasis on the importance of having domain experts to create proper ubiquitous language and domain models for the project. This makes it useful for situations where the business logic is extremely complex and convoluted.

However, it is not suitable for projects that are technically complex but have marginal complexity in terms of business logic. For such projects, the domain experts may not be able to contribute effectively since they might not be able to grasp the problem.

Basic Rules for effective Onion Architecture

Onion architecture is one of the two well-known “clean” software architectures. The other is widely known as the Ports and Adapters pattern, or hexagonal architecture. Both make an explicit separation between what belongs in the application core and what belongs outside of it, such as databases, user interfaces and third-party APIs.

It is a software architecture introduced by Jeffrey Palermo back in 2008 in his four-part series called The Onion Architecture. Like the layered architecture and hexagonal architecture, it uses the concept of layers, but the difference lies in the following:

  • Domain Model layer – part of the domain layer where our entities and classes closely related to them e.g. value objects reside
  • Domain Services layer – part of the domain layer where domain-defined processes reside
  • Application Services layer – where application-specific logic i.e. our use cases reside
  • Outer layer (Infrastructure, Interfaces, Tests) – which keeps peripheral concerns like UI, databases or tests

In this article, you will find a set of rules that have been very helpful to me when applying onion architecture in my software projects. Every layer has its respective rules, and the rules are categorised as such. These rules allow me to focus on solving the domain problem and reduce the need to think about what code should go where, giving me increased productivity while keeping structured flexibility in my codebase.

Now, some of these rules are derived from my own understanding of the architecture, while others have been developed by more experienced software developers. There are also some rules that are not specific to onion architecture but are innate to the specific software pattern being used.

Let us dive in…

General

These rules are applicable to the whole application or software module.

Rule 1

Do not skip layers when calling methods or utilising functions that sit in the deeper layers. The typical flow of execution is as follows:

Interfaces -> Application -> Domain or Infrastructure

Rule 2

Use static methods and classes as a last resort.

Rule 3

Use a dependency injection framework to implement the onion architecture.

Interface Layer

Rule 1

The interface layer only contains code that handles the following:

  1. Deserialisation of incoming objects sent via API request/call.
  2. Serialisation of objects or messages for the purpose of responding to an API request/call.
  3. Exposure and implementation of RESTful APIs and SOAP-based web services.

Rule 2

This is the topmost layer in the onion architecture that can work with a domain object such as aggregate root or entities directly.

Rule 3

Data transfer objects (DTOs) are to be used when receiving data from an API call or responding to one, as they define the data contracts. Never use domain entities (e.g. aggregate roots and value objects) to receive or return data via an API.
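
As a minimal sketch of this rule, the DTOs below define the data contracts for a hypothetical order API; the domain entities themselves never appear in the request or response:

```python
# Sketch of Rule 3: the API receives and returns DTOs, never domain entities.
# CreateOrderDto / OrderDto are hypothetical names used purely for illustration.
from dataclasses import dataclass

@dataclass
class CreateOrderDto:          # shape of the incoming API payload (the data contract)
    customer_id: str
    product: str
    amount_cents: int
    currency: str

@dataclass
class OrderDto:                # shape of the outgoing API response
    order_id: str
    status: str

# The interface layer deserialises the request into CreateOrderDto, hands it to an
# application service, and maps the result into an OrderDto for the response.
```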

Rule 4

Use a facade to provide a common entry point for multiple endpoints (RESTful API, SOAP and direct function call) if they need to consume a service provided by the application layer.

Rule 5

Facades do not contain any business or domain logic.

Application Service Layer

Rule 1

The Application layer contains only code related to the following:

  1. Coordination between domain objects, services and utilities.
  2. Database transaction control
  3. Logging
  4. Establishing connections to databases
  5. Application control or startup (e.g. main function/main class)

Rule 2

The Application Service layer is only concerned with the software use cases. Each method or function in a service class typically represents one use case.
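
As a rough sketch of what such a service could look like, the example below models one use case per method and keeps only coordination and transaction control in the application layer. All class and method names are hypothetical:

```python
# Sketch of an application service: each public method is one use case, and the class
# only coordinates work while the business rules stay in the domain objects it calls.
import uuid

class PlaceOrderService:
    def __init__(self, order_repository, unit_of_work):
        self._orders = order_repository      # injected repository (see Rule 4)
        self._uow = unit_of_work             # transaction control belongs to this layer

    def place_order(self, create_order_dto) -> None:   # one method == one use case
        with self._uow:                                 # begin/commit the transaction
            order_id = str(uuid.uuid4())
            self._orders.add(order_id, create_order_dto)   # persistence via the repository
        # nothing is returned, in line with Rule 6 (this is not a query service)
```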

Rule 3

Classes in the application layer never hold or maintain the state of any domain entity. The only type of state allowed in the application service layer is transaction state.

Rule 4

Application services handle the injection of repositories into domain services that need them to function.

Rule 5

Application services do not contain any business logic.

Rule 6

Application services typically do not return anything, with the exception of query services.

Domain Layer

Rule 1

Domain entities such as aggregate roots and entities do not know anything about storage and do not work directly with repositories, even if repositories are injected as parameters into an entity’s methods.

Rule 2

Aggregate roots and entities are not allowed to exit the application through the Interface or Infrastructure layer.

Rule 3

Repositories generally deal with storage such as files or databases, but within the domain layer they exist only as interfaces whose methods follow the ubiquitous language. The implementations are done in the infrastructure layer.
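
A minimal sketch of this rule could look like the following, with a hypothetical OrderRepository interface declared in the domain layer and an equally hypothetical implementation living in the infrastructure layer:

```python
# Sketch of Rule 3: the domain declares the repository interface in the ubiquitous
# language; the infrastructure layer supplies the concrete implementation.
from abc import ABC, abstractmethod

# --- domain layer ---
class OrderRepository(ABC):
    @abstractmethod
    def find_by_id(self, order_id: str): ...

    @abstractmethod
    def add(self, order) -> None: ...

# --- infrastructure layer ---
class SqlOrderRepository(OrderRepository):
    def __init__(self, session):
        self._session = session                         # e.g. an ORM session

    def find_by_id(self, order_id: str):
        return self._session.get("orders", order_id)    # illustrative call only

    def add(self, order) -> None:
        self._session.insert("orders", order)           # illustrative call only
```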

Rule 4

Services in the domain layer exist if and only if there is a need for operations that do not quite fit into an aggregate root or entity. Never create unnecessary services when a domain entity or aggregate root can handle the behaviour internally.

Rule 5

Value objects are to be used to return immutable data or represent a state change in the domain.

Infrastructure Layer

Rule 1

The infrastructure layer contains code related to the following:

  1. Actual implementation of repositories, using ORM frameworks such as Hibernate or Entity Framework, or calling databases directly.
  2. Consumption or utilisation of external or 3rd party APIs, and the mapping and translation of external models to domain models.
  3. Highly technical implementation of services that are required by the domain such as encryption, document processing and image processing.

Rule 2

This is the bottommost layer that can work directly with domain objects.

Conclusion

These rules serve as guidelines for software developers who need to work with onion architecture, and they are by no means exhaustive. They are only effective if the developers themselves are disciplined when applying them in their work.

And, more experienced developers may have differing opinions, or additional rules and principles that they have found very helpful during development and implementation. Therefore, if there are corrections to be made, do leave a comment below and I will update the information here. This way, all of us get to benefit and improve the general quality of software.

Software Architectures – Microservices

In the world of software development, there are many software architecture patterns that emerged as expert developers figured out the best ways to solve recurring problems in their line of work.

In this multipart series on Introduction to Software Architecture Patterns, we will be looking at some of the common patterns such as:

  1. Event-driven
  2. Hexagonal
  3. Multitier
  4. Peer-to-peer
  5. Service-oriented
  6. Broker patterns
  7. Microservices
  8. Monolithic
  9. Serverless

For the first article, we will look at microservices in detail: what they are, what they are not, their pros and cons, and when to use them.

What is a microservice and what it is not

Microservices are now one of the most hyped software architecture patterns in the tech industry. Maybe you read about them in some tech news or articles. Or maybe you heard about them from colleagues who happen to have read about them. Maybe your boss asked you to design the company’s next software project as microservices and you are scratching your head.

So what is it really?

In its simplest form, it is a variant of the service-oriented architecture style that arranges and structures a software system as a collection of loosely coupled services.

It can also be said that microservices take the single responsibility principle coined by Robert C. Martin to the next level by applying it to these loosely coupled services, which can be developed, deployed and maintained independently. Each service is built specifically to work on a discrete task and can communicate with the other services through simple APIs to solve complex problems.

From the description above, a microservice seems simple enough to understand. Yet there is no universal definition of what one is. Different industry experts have differing opinions, but over time they have come to a consensus on some of the defining characteristics of a microservice (a minimal code sketch follows the list):

  1. Services in a microservice architecture are processes that communicate over a network using technology-agnostic protocols such as HTTP.
  2. Services are independently deployable
  3. Services are organised around business capabilities
  4. Services can be implemented using different programming languages, databases, hardware and software environment
  5. Services are small in size, built with messaging enabled, bounded by context, autonomously developed, decentralised, built and released with automated processes.
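
As a minimal sketch of the first characteristic, the example below exposes a tiny, independently deployable capability over HTTP. It assumes the third-party Flask library, and the invoice endpoint and data are invented purely for illustration:

```python
# Minimal sketch of a small service exposing one business capability over HTTP.
# Assumes the third-party Flask library (pip install flask); data is illustrative.
from flask import Flask, jsonify

app = Flask(__name__)
INVOICES = {"1001": {"customer": "ACME", "total_cents": 25_000}}

@app.route("/invoices/<invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return jsonify(error="not found"), 404
    return jsonify(invoice)            # other services consume this over plain HTTP

if __name__ == "__main__":
    app.run(port=5001)                 # deployed and scaled independently of its callers
```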

Now that we know what a microservice is, let us take a look at some of the advantages of using one.

Advantages

The microservice architecture comes with several advantages that make it a better option for developing applications but only if it is done correctly and properly.

Resilient to partial service failure

The biggest advantage the microservice architecture offers over other architectures is that any given service should be able to continue operating even if other services go down due to software bugs or crashes. This is because each service is designed and built to be mostly self-contained and autonomous.

Highly maintainable and testable

The modular nature of a microservice architecture means that each service is small and specialised. A service can then be easily replaced, changed or updated without affecting the rest of the application.

Furthermore, the developers responsible for a service can easily test it. They do not have to deal with the large and unwieldy codebase that inevitably results as an application expands in its capabilities, or dig into other areas of the application to understand the business just to test their changes.

Loosely coupled

One of the biggest problems with the traditional monolithic architecture is that the application could suffer from vendor lock-in due to the type or kind of technology used. This creates problems when the vendor goes out of business or the price of continuing to use the vendor’s product rises beyond what makes sense for the application. Attempts to change the application to use an alternative technology or platform can be very costly.

With microservices, services are loosely coupled from each other. Each service can use different technology or run on a different platform. If a particular service needs to change its technology stack, the team responsible for it can do so without affecting the others. Furthermore, the loose coupling also means that source code changes stay within each service and do not propagate to other services.

Independently deployable

Since services in a microservice architecture are loosely coupled and each can come with its own storage mechanism, there is no real need for a service to wait for the others to go offline or come online before it can be deployed.

Organised around business capabilities

When done properly during the initial phase, each microservice is designed, developed and deployed to solve problems in a specific business domain (e.g. customer relationship management, sales order management, invoicing). Developers can be hired and organised so that they focus on solving problems in that business domain, creating products instead of projects and glue code. In practice, developers become responsible for one or a few services and develop expertise in that area of the business, leading to faster turnaround of new features. The final product or service can also be reused in different processes, contexts or channels, which leads to cost savings for the organisation since there is no need to green-light new projects and hire new developers.

Owned by a small team

By breaking down an application into smaller but autonomous services, teams can become smaller and more efficient. There is no need for dozens of software developers, managers and support staff to run a service. With the service smaller than a monolithic application, its startup time in the production environment will be faster, and so will the development environment the developers use. With that, they can be more productive and focus on delivering bug fixes, features and improvements.

Furthermore, the team is responsible for understanding the requirements, developing the necessary features, testing, deployment and support. This gives them more ownership and allows them to see how their work affects the users and the business.

Next up, we will take a look at the disadvantages of using microservices.

Disadvantages

Increased Complexity

For microservices to work together to solve a larger or more complex problem, they communicate with each other via API calls over the network. This alone is complex to manage and implement. In addition, domains such as e-commerce and finance typically feature workloads that require transactional processing. Such workloads can be difficult to implement with microservices since APIs are stateless by default. To ensure data integrity and consistency across the different systems, additional workarounds or software components have to be introduced.

Furthermore, different services have different demands on their runtime environments. Different servers may have to be introduced, configured and maintained, which ultimately adds more points of failure to the whole system.

Lastly, there is also complexity in terms of collaboration amongst the different stakeholders. Ideally, different parts of a microservice-based system will have their respective owners. These developers have to coordinate amongst themselves on the best approach to solve a problem, deploy and maintain the system. When there is miscommunication, parts, or maybe the whole, of the application may not work properly, resulting in lost productivity and cost.

Requires cultural changes

Traditional applications built using the monolithic approach have a single codebase that developers work on together. Organisations tend to split the project team based on technical specialties (e.g. infrastructure, networking, UI/UX, backend). This can create a situation where the development team has no access to the production environment, leading to a disconnect between the developers and the customers.

On the other hand, microservices generally require the developers to have ownership of the full lifecycle of what they build. Organisations would need to think about and implement major changes in terms of the following:

  1. The size and the responsibilities of the team.
  2. The development and quality assurance process.
  3. The structure of the software and how it aligns to the business capabilities so that it can be broken down into small and autonomous services connected via APIs.
  4. How to deploy and maintain the services.

Depending on the organisation, this restructuring and cultural shift may impact the business operations and may cause productivity to drop as the people shift into a different mode of operating. Some employees may choose to leave if they are unable to adapt to such a change or unwilling to be responsible for the full lifecycle of an application.

More expensive

When it comes to microservices, they are more expensive to run and maintain over time because of the number of moving parts. In the event of a major software fault that brings down multiple services, multiple teams of developers and engineers have to be involved to resolve the issues, which translates to hundreds if not thousands of man-hours.

Furthermore, due to the complexity of microservices, it may take longer to restore the services to operation compared to a traditional monolithic application. This can translate into monetary losses if the services are part of an on-demand product offered by a company or a realtime processing system.

More resources may also be required if there is a need to scale up the services by adding more virtual machine instances or servers in real time to handle huge workloads or sudden spikes in activity.

Pose security challenges

In a microservice architecture, an application is split into multiple independently running services. For the different services to coordinate their actions in support of the business operation, they communicate with each other via APIs that are independent of machine architecture and programming language. This creates a large attack surface that cybercriminals can use to disrupt and bring down services, which means the security team needs to be hyper-vigilant about any possible interruption.

Organisations, in order to maintain their competitiveness and growth, often prefer to convert existing monolithic applications into microservices. Different programming languages and frameworks may be used by different teams to build the services. As no programming language or framework is completely safe and without vulnerabilities, each new language or framework introduced to build new services, and each new service added to the existing mix, can increase instability and the number of security loopholes that can be exploited.

Furthermore, due to the distributed nature of microservices, traditional logs are not as effective for tracing what is going on with the application. In addition, more logs are generated concurrently. This means the logs must be consolidated and the events correlated to build a good picture of what is happening. Otherwise, there is a high probability that issues will be masked or covered up, preventing effective mitigation.

With this, we now know what a microservice is, along with its advantages and disadvantages. Next up, we will see what is not a microservice.

What is not a microservice

After reading through the advantages and disadvantages, maybe you have come to the conclusion that microservices are easy to do and you feel confident about them. Maybe you think you could just ask your software engineers to turn individual functions into microservices for other parts of the software to use.

But that would be a huge mistake. The benefits of microservices will immediately be overwhelmed by the runtime overhead and operational complexity. Your project will suffer from over-engineering and timeline slip.

Justin Etheredge wrote an article called You’re not actually building microservices where he talks about the possible symptoms of a software system that is not built using microservices.

  1. Any changes to one microservice often require changes to others.
  2. Deploying one microservice requires other microservices to be deployed at the same time.
  3. Excessive communication between microservices.
  4. Sharing of the same datastore by multiple microservices.
  5. Multiple microservices share the same code or models.

However, even if an application shows one of the above symptoms, it could still be a microservice architecture, because there will always be exceptional cases. But alarm bells should go off if a service or application displays more than one of the above symptoms; that could mean it is not a true microservice.

So when do you use it?

Even though microservice architecture is the trendy thing to do now, it is generally advisable not to do it for a new software project or a proof of concept due to the complexity, cost and cultural changes needed. It may even slow down the development process.

But that is not to say we should stay away from microservices. Rather, the time to consider converting an application to a microservice architecture is when the following criteria are met:

  1. The software has grown too big to be managed as a monolith as described by Martin Fowler in his article.
  2. The organisation has the resources in place and is ready to go onboard with it.
  3. The software needs to adapt quickly to market needs.
  4. Parts of the application need to be extremely efficient.

Jake Lumetta wrote an article titled Monolith vs microservices: which architecture is right for your team? that reinforced the above points 2-4.

In the same article, Jake also lists the scenarios that are not suitable for microservices:

  1. Your team is at the founding stage and has only a few members.
  2. You are building an unproven product or proof of concept.
  3. You have no microservice experience.