An electronic project: Designing and creating your own Sensor Hub using multiplexers

The Smart Gardening system project was my attempt at exploring IoT and edge computing with Lego to see what I could create from scratch. Shortly after the project went live, I realised that the Arduino has only a limited number of analog pins available for analog sensors. That inspired me to create my own sensor hub to expand the number of available pins.

You might be thinking that I could always buy such a hub off the shelf if I searched hard enough. Yes, I could do that, but that's not what I want. I also wanted to learn more about electronics and how components can be put together to achieve something. For me, it's about learning the basics, and that is best done by actually doing it. Just as with programming, you can't get around learning and applying fundamentals such as control flow, functions, classes and variables if you want to be good at it.

There are many approaches to this.

You can start by designing the circuit in CAD software such as Autodesk Eagle, Fritzing or EasyEDA. However, some of these are paid software, and if you are a hobbyist, the price tag might not be something you want to swallow. There is a free and open-source alternative called KiCad, which is what I used to design my circuit.

For me, my preference is to do exploratory prototyping straight on the breadboard. With exploratory prototyping, I get to see for myself in real time whether an idea works or my understanding is correct. If there's a gap in my knowledge, I go online to learn the concept and then immediately apply it.

Preparing the Multiplexer

For this hub to work, we will need an analog multiplexer instead of a digital one.

Sensors such as the soil moisture sensor output their readings in the form of a variable voltage, which is a type of analog signal. Depending on the conductivity of the soil (based on the amount of water), the voltage passing through will differ. This voltage level is then passed to a reader such as a multimeter or, in our case, an Arduino.

Note: If a digital multiplexer is used, the variable output of the soil moisture sensor will inadvertently be converted to either a “1” or a “0”, which is not what we want if we are trying to determine how much moisture is in the soil.

The multiplexer that we will be using for this project is the 74HCT4052D. This is a dual 4:1 multiplexer/demultiplexer with two sets of four independent inputs/outputs, a pair of common inputs/outputs and two select inputs.

I made the mistake of getting the surface-mount (SMD) version instead of the dual in-line package (DIP). The former is smaller and does not quite fit on a breadboard. However, I did get an SMD-to-DIP adapter. It was part of a box of DIP sockets that I bought because I didn’t want to solder the chips directly onto the stripboards.

The SMD adapter comes with two sides: one for SOP (or SOIC) and another for TSOP. The image below shows the SOP side, which is what we will use.

Note: SOP is short for small-outline package. Sometimes it is also abbreviated as SO.

First, we will solder some header pins to it. Place the pins on a breadboard for support and place the adapter on top, letting the pins pass through the side holes.

Use a lead-free solder with flux and start soldering away at 350 degrees Celsius.

Note: To any critics out there, I’m aware that my soldering job could be better. The flux is all over the place.

Place a multiplexer on top of the adapter and ensure the pins are aligned with the solder pads.

Quick Tip: Since the chip is so small, it can be difficult to solder with a traditional soldering iron and solder wire.

Before you get started, you could solder one of the connection points first. Then, place and align the chip on top before using the soldering iron to reflow the solder. Use the smallest tip possible and only apply heat for one second. Any longer and you risk destroying the chip. The chip should “snap” into place. Let the solder cool and it will provide an anchor for you to finish the soldering job.

Alternatively, you could always get solder paste and apply it to all the points before placing the chip on top. The paste should provide a little bit of “adhesion”. Then use a heat gun and blow hot air at 350 degrees Celsius to get the solder to melt and solidify.

Beware that you can destroy the chip much more easily with the heat gun than with the soldering iron if you leave it blowing on the chip for too long.

Then, it was soldering time. Before I came across the above tips, I used the smallest soldering iron tip I had and gently applied the solder and soldering iron to the first pin. Once the first pin was secure, I went on to do the rest.

Here is the final result. And yes, it could be better with more practice.

Once the multiplexer was done, it was time for actual testing and prototyping.

Initial Exploratory Prototyping

The multiplexer is placed on a breadboard to facilitate our testing. A breadboard power supply unit is added to provide a 5v supply.

Based on the datasheet provided here, we can easily identify the different pins and their purpose:

  1. Pins 16 and 8 are VCC and GND respectively.
  2. Pin 6 is the chip enable pin and is active low.

As for the input and output of the multiplexer, the pins are as follows (the order they are listed in refers to their logical position):

  1. Pins 12, 14, 15 and 11 are the 1st set of independent input/output pins. (They are also known as 1Y0, 1Y1, 1Y2 and 1Y3 respectively.)
  2. Pins 1, 5, 2 and 4 are the 2nd set of independent input/output pins. (They are also known as 2Y0, 2Y1, 2Y2 and 2Y3 respectively.)
  3. Pins 3 and 13 are the common input/output pins. (They are also known as 2Z and 1Z respectively.)
  4. Pins 9 and 10 are the select pins.

To test and validate my understanding of the multiplexer, I used LEDs and connected them to pins 12, 14, 15 and 11. Then, pin 13 was pulled high by connecting it to VCC.

The rationale is that if there is a constant input through the common input pin, the right LED should light up when I manipulate the select pins. The LEDs are also going to be part of the final build, providing a way to indicate to the user (me) which sensor the hub has activated.

This was when I thought about how to programmatically disable or enable the chip. Given that the chip is enabled when pin 6 is low/grounded, I got the idea of using a combination of a resistor, a toggle switch, some wires and an NPN transistor to simulate a programmable on/off switch. And yes, you might be thinking I could use one of the Arduino pins to turn the chip on/off. I didn’t go that route because I wanted to minimise confusion during programming: I want to send a “1” to indicate turn on and a “0” to turn off. The extra components allow me to invert the active-low requirement.

I connected the transistor’s collector to VCC. The transistor’s base is connected to a slide switch, which is used to turn the transistor on or off. The emitter is connected to pin 6 and to ground via a resistor.

Below is a video demonstrating a combined test of the chip enable and LED select operation.

Now that we have validated our understanding, it’s time to start looking at how we want to route the sensor’s signal back to the host.

Since at most one sensor is active at any point in time, we could connect all the sensors to a common point and pass that on. I thought of two options. The first was to use diodes to create diode logic. The other was to use an OR gate.

Some quick research showed that diode logic has one major issue: it causes the voltage level to drop. In the case of a soil moisture sensor, that return voltage is exactly what we need in order to determine the soil moisture level. We can’t have it dropping. Therefore, the OR gate was selected. In hindsight, it proved to be a bad choice, but more on that later.

Below is an image showing how I tested the OR gate. I hooked up one of the output pins to pin 1 on the OR gate. Since I’m going to support up to four sensors, I will need a few dual-input OR gates to consolidate the return signal. The rest of the OR gate inputs are hooked up to VCC. An LED is added on the “final” output of the OR gate to simulate reading the sensor value.

Here is the video showing how I tested and validated the OR gate solution.

Circuit Design using KiCad

Now that we have a good idea of how we want to do the circuitry, it’s time to put it down in a schematic. It is also my way of formalising the design.

I spent about six hours learning how to use the tool (KiCad) and designing the circuit. While designing this circuit, I watched some videos recommended by YouTube on circuitry. This was when I realised I needed some bypass capacitors.

Shortly after this, I realised that the OR gate does not work at all. The OR gate is a digital device, whereas the multiplexer and the sensors are analog. This means the OR gate will effectively convert the variable voltage into either a “0” or a “1”.

And that is not what we want.

Further Enhancement

Upon realising the OR gate won’t work, it was time to make some modifications to the circuit.

After much thinking, another multiplexer was deemed the best choice. This time, instead of connecting the common input/output to VCC, it will be connected to a screw terminal or a header pin so that we can pass the signal back to the host.

The 1Yn pins will be used to receive the sensor values and based on the select pins, we will pass the corresponding signal on.

After finishing the above design, I went ahead and prepared another multiplexer. Since the initial breadboard was too small for me to do any meaningful test, I moved the original prototype to a larger breadboard and started wiring the two multiplexers together.

Despite the datasheet indicating the 1Yn pins are independent, I couldn’t get the output LED to light up even after selecting and passing the output from the first multiplexer into the 1Y0 pin. I even tried all four pins independently. The output LED connected to 1Z refused to light.

Then, I decided to connect the remaining three pins (1Y1, 1Y2 and 1Y3) to VCC, pulling them high. This time, the output LED lit up. I repeated the test on the remaining three pins by switching the connections to VCC around. This told me that all 1Yn pins needed power in order for the multiplexer to output anything on the 1Z pin.

So this brings us to an updated design. We will need to supply power to the pins while also allowing the sensor value to pass through. This is where an inverter comes into play.

Since only one sensor will be powered up at any given time, that signal from the first multiplexer should also be used to turn off the VCC supply to the corresponding pin on the second multiplexer.

And so this is what we end up with. Although I have an inverter IC for prototyping, I can’t use it as it is also an SOIC/SOP part and has only 14 pins. The SOP adapters that I have are either for 8-pin or 16-pin packages. I did not want to waste an adapter since I only have two left, which means I have to make an inverter by hand.

To do that, we will need an NPN transistor, a few resistors and some wires. I also got myself another soil moisture sensor for testing purposes.

Once I powered up the circuit, I dipped the sensor probes into water. Then, I used a multimeter to check the signal the sensor returned. The value was around 2.8 volts. When the sensor was dry, it returned 0 volts. This further validated my understanding and idea.

The schematic is then updated again.

With this updated design, it is finally time to actually implement it. And that will be for another day.

An IoT project: Monitoring soil moisture (Phase 3) – Adding Watering Capability (Part 2)

In the previous articles, we covered the implementation of a water pump station as a separate module and did some discovery work on adding WiFi capability to the Soil Moisture Monitoring system. They can be found here and here respectively.

In this article, we will continue to look at integrating the water pumping system with the main system and hook up the pipes to the pots.

Building an irrigation system

Before we start connecting the water pump station to the main system, we need to first build the pipe network that will ensure water is delivered to the right place.

At first, I had the idea of using both silicone tubing and Lego for this. The silicone tubes would serve as the water delivery medium while Lego would be used to build the emitters installed around the pots.

For the Lego water emitters, I’ll admit I didn’t sketch out a design to guide me in the construction process. Just like any creative endeavour that I partake in, which includes writing, my process is akin to that of a “gardener” or “pantser”. You can read up more on what these mean here. Put simply, I have some rough ideas in my head, and then set out to explore and experiment, trying to achieve a desired outcome.

But I digress.

So, I had the bricks laid out in front of me and I went exploring. It took me several tries before I settled on a final design that I’m happy with.

The idea behind the above design is that the silicone tube goes into the orange cone-shaped bricks. The good thing about those bricks is that the tube stays relatively secure. This emitter will be attached at a height. Once water exits the tube, it should follow the slope down onto the soil below.

However, my experiments with trying to get water to flow down the slope failed. Either the water jetted out when there was slight water pressure, or it bunched up and rolled off the slope as a globule.

I gave up on using Lego to build the water emitter and went to a hardware store to get actual water emitters. At the store, I could only find a 3-way coupling and a 0-6 l/h line-end dripper by Claber in stock. I got both. In hindsight, the dripper is actually redundant.

Once home, I built a ring-shaped irrigation system and then tested it out.

I used the dripper first to build the irrigation system, which you can see in the picture above sitting on top of the bowl. During testing, I realised it was the wrong component. The water pressure from the pump was too much and water was bursting out of any opening it found. I had to shut the system down immediately because there were electronics around me. In hindsight, I could have reduced the speed of the pump to a fraction of 100% and it might have worked.

Anyway, the next day I switched out the dripper for the coupling and switched to another plastic container, since we do eat from the bowl. It’s not very hygienic to use it for testing.

This time, it worked a lot better. Even at 100% pump speed, water didn’t burst out at the seams and dripped into the container gracefully.

And, this is where the integration with the main system comes in. The remaining part of the article will focus mostly on the software aspect of the integration.

Water Pump Integration

Previously, we were able to get the water pump station to work with an Arduino Uno. We figured out how much water the pump could move in a given number of seconds. This gave us the confidence to start deploying it into the real world.

First, we will connect the Grove cable to the I2C port on the Arduino shield before we do anything else.

Connect the Arduino to the computer and load the original soil moisture monitoring Sketch into the Arduino IDE.

We will get the Sketch ready to support the motor driver.

Add the following #include directive.

#include "Grove_I2C_Motor_Driver.h"

And the following #define directive.

#define I2C_ADDRESS 0x0F

Then, we add the following to the setup function after Oled.begin().

 Motor.begin(I2C_ADDRESS);

The following is how the setup function should look.

void setup()
{
  Serial.begin(115200); //Open serial over USB

  pinMode(sensorVccPin, OUTPUT); //Enable D#7 with power
  digitalWrite(sensorVccPin, LOW); //Set D#7 pin to LOW to cut off power

  pinMode(sensor2VccPin, OUTPUT); //Enable D#8 with power
  digitalWrite(sensor2VccPin, LOW); //Set D#8 pin to LOW to cut off power

  pinMode(alarmVccPin, OUTPUT); //Enable D#5 with power
  noTone(alarmVccPin); //Set the buzzer voltage low and make no noise

  pinMode(button, INPUT);

  if (Oled.begin())
  {
    Oled.setFlipMode(true);
    Oled.setFont(u8x8_font_chroma48medium8_r);
  }
  else
  {
    Serial.print("Fail to initialise OLED");
  }
  Motor.begin(I2C_ADDRESS);
}

After the above, the Sketch is now ready to trigger the pumps when the conditions are right. We will first add a few global variables and functions to handle the water pumping process and stopping the pump.

For global variables, we will need the following declared at the top of the Sketch together with the rest. Note: Motor and pump are used interchangeably in this project because the pump is just a special type of motor.

bool motorRunning = false; //Keep track if the pump is running.
unsigned long motorRunTime = 0; //Keep track of when the pump starts running.
unsigned long motorToRun = 1000 * 5; //5 seconds for a more meaningful test run

The following function determines if the pumps should activate.

For now, since we are just testing, we will go with a simple check to see if the moisture level is below a certain value. If it is, then pump water. Given what we know about the soil moisture sensor, dipping it into a bowl of water will give us a reading of more than 800, so taking it out of the water will cause the following function to trigger the pump.

void waterPlantIfNeeded()
{
    if (moistureLevels[1] < 800)
    {
        Serial.println("Pumping water now...");
        pumpWater();        
    }
}

The following function handles the actual water pumping process. The reason for running motor 2/pump 2 in testing is that it is the pump I have connected the irrigation system to.

void pumpWater()
{
    if (!motorRunning)
    {
        Serial.println("Running pump 2");
        motorRunTime = millis();
        Motor.speed(MOTOR2, 100);
        motorRunning = true;
    }
}

The following function stops the pump when it has run for more than the specified period.

void stopPumpingWater()
{
    if (motorRunning && millis() > motorRunTime + motorToRun)
    {
        Serial.println("Stop pumping water");
        Motor.stop(MOTOR2);
        motorRunning = false;
    }   
}

Then, in the loop function, we added a call to the waterPlantIfNeeded() function inside the if statement that does the sensor reading, as shown in the screenshot below.

After that, we added the call to stopPumpingWater() at the end of the loop function.
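For reference, here is a minimal sketch of how the loop function might be structured after both changes; the sensor-reading helper and timer variable names are placeholders for the existing code, not the exact contents of the screenshot.

void loop()
{
    if (millis() - lastReadTime >= readInterval) //the existing non-blocking sensor-read check
    {
        lastReadTime = millis();
        readMoistureLevels(); //placeholder for the existing sensor-reading code
        waterPlantIfNeeded(); //newly added: pump water if the soil is too dry
    }

    stopPumpingWater(); //newly added: stop the pump once it has run long enough
}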

And now it was time for testing.

However, the testing didn’t go so well. The motor driver failed to initialise quite often, even after multiple restarts. And when it did finally run, the OLED display would fail to run. That was when I was reminded of the bug in the motor driver’s firmware that I mentioned previously.

It was time for firmware update.

Updating Motor Driver Firmware

I came prepared for this step. I had ordered the Pololu AVR Programmer v2.1 a few days earlier, which could be used for updating the firmware.

So, I went ahead and took apart the housing to gain access to the motor driver. I hooked up the motor driver to the programmer and connected the programmer to the computer. For programming the new firmware, we can use avrdude.

On the Mac, you can download and install avrdude via Homebrew.
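Assuming Homebrew is already set up, it is a one-line command:

brew install avrdude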

Also, do connect the motor driver to a separate power source, since the AVR programmer does not supply sufficient power. I made the mistake of not connecting the motor driver to a power supply and wasted half an hour trying to figure out why I couldn’t flash the new firmware.

You can download the latest firmware for the motor driver here at the official repository on Github.

You should also download and install the Pololu AVR Programmer v2 app onto your computer, because before we can flash the firmware on the motor driver, we need to know which port we are using. The app can tell us that.

Once you have the firmware downloaded, avrdude installed and the AVR programmer running properly, open Terminal and navigate to the folder where the firmware resides.

Then, use the Arduino IDE to find out which ports on the AVR programmer are available for you to use.

In my case, the port I have to use for firmware flashing is /dev/cu.usbmodem003268172 as it is the programming port identified in the AVR Programmer app.

To flash the firmware, use the following command. Do supply the programming port to use before pressing the ‘Return’ key.

avrdude -p m8 -c avrisp2 -P <the programming port to use> -U flash:w:./mega8motor_v4.hex

If the command is accepted, you should see something in the terminal that is similar to the screenshot below.

Now that the firmware is updated, we could continue our testing and do the rest of the implementation…

More troubleshooting…

It turns out updating the firmware on the motor driver wasn’t enough, even though the Arduino was now able to detect the motor driver and trigger the pumps. After the motor driver was initialised, the Oled display refused to work. Even though the Oled display initialised correctly without any error, any commands sent to it didn’t yield anything.

Below is a screenshot of one of my attempts to investigate what’s going on.

I thought initialising the Wire library manually would help. The Wire library is basically an Arduino library that allows us to work with I2C connections/devices.
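That attempt amounted to something like the following. This is a sketch of the idea rather than the exact code in the screenshot; Wire.begin() is the standard call to initialise the I2C bus.

#include <Wire.h>

void setup()
{
    Wire.begin(); //manually initialise the I2C bus before any I2C device is configured
    //...the rest of the setup code follows
}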

There were times when it worked. I could use the Oled display after the motor driver was initialised. The first time it worked, I thought everything was fine.

Subsequently, the Oled display continued to fail to work and I spent half a day trying to understand why.

A chance Google search led me to this page on displaying data on an OLED that is part of the Beginner Kit for Arduino.

At the end of that section was a Breakout guide. It describes what to do when the Oled display is broken out from the kit.

With that, I went searching for the library on my computer and made the change to U8x8lib.cpp.

Then, to test that the Oled worked this time, I added a function called displayMessage(String message) to handle the display of text regarding the state of the Sketch. In the setup function, I added calls to this function with the relevant text.
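A minimal version of that function, assuming the same U8x8-style Oled object used earlier in the Sketch, might look like this:

void displayMessage(String message)
{
    Oled.clearDisplay(); //wipe whatever was shown previously
    Oled.setCursor(0, 0); //start printing from the top-left corner
    Oled.print(message); //show the status text
}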

After that, I recompiled the Sketch and uploaded it to the Arduino. Once the deployment was completed, I kept a close watch on the Oled display. Soon, it became clear that it worked. The Oled display was showing me the text I set when calling the displayMessage function.

A further test of pressing the button I implemented previously to turn on the display revealed that it was indeed working properly now. And I also knew that the motor driver was initialised properly, because I heard the pumps doing test spins as implemented in the setup function.

With that, it would appear that a bug in the U8x8 display library gave us so much woe.

Going Live!

With the Oled display and motor driver working, it was time to go live with the automatic water pumping. If you are wondering why the tubes are not in a water bottle/container, it’s because the picture was taken shortly after the successful test run. I didn’t want any water involved just yet.

What’s next?

Even though the water pumping station is integrated, this is not the end of the project yet. We still need to implement the WiFi connectivity, and this will be covered in another article soon.

We also need a web-based dashboard and some other quality-of-life improvements, such as buttons or switches to turn specific pumps on/off while the rest of the system is running, or an API for us to manually trigger the water pumping via the internet.

If you would like to have a fuller picture of what I did, the source code for the Sketch can be found here.

An IoT project: Monitoring soil moisture (Phase 3) – Adding Watering Capability (Part 1)

In the previous articles, we covered the development of a soil moisture sensing system and the addition of a WiFi module so that the soil moisture information could be sent out. Phase 1 of the project is covered here while Phase 2 (Part 1) is covered here. If you are wondering why there’s no Part 2 of Phase 2, it’s taking longer than anticipated to implement, whereas Part 1 of Phase 3 was much easier.

And as mentioned in our first article, we will be adding plant watering capabilities in Phase 3 of the project. We will call this the water pumping station.

For this phase of the project, we will need a few additional pieces of hardware to build the station.

Below is the list:

  1. 2x 12v peristaltic pump (also known as Dosing pump)
  2. Grove I2C Motor Driver
  3. Wires – Solid core 22AWG and Stranded (11 to 16 strands)
  4. 12v DC power supply
  5. Stripboard
  6. Soldering kit
  7. 12v barrel connector plugs (5.5mm x 2.1 mm female and male version)

Now that we have the hardware established, let’s get started.

Setting up the power module

This water pumping station will be connected to the existing soil moisture monitoring system. Given that the Arduino Uno accepts a 12v supply through its barrel connector, the water pumping station could share the same power source.

To achieve that, we can use a combination of stripboard, wires and barrel connector plugs. Sadly, there were no pictures of how I developed the power module other than the final picture when it was already installed in the housing. I had to take apart the housing just to take a picture of it.

As you can see, two different types of barrel connectors are hooked up, plus a pair of wires that go to the motor driver board. The female barrel connector receives the 12v power from the power supply while the male barrel connector plug connects to the Arduino.

On the other side of the stripboard, the wires are connected in series with no other fancy electrical components. In hindsight, it might have been better if I had added a few components such as switches and an LED, but that would be for some future project.

Implementing the motor driver and adding motors/pumps

An Arduino Uno alone can be used to turn on a motor with a combination of:

  1. Digital and analog pins
  2. Electrical components such as resistors, diodes and transistors
  3. External power supply
  4. Breadboard or stripboard

You can even find a tutorial for the above here. What this does is use the Arduino to generate a signal that controls the transistor to run the motor at full speed or turn it off. It does not allow for reversing the motor’s spin or controlling its speed.

Do not power and drive motors directly from any of the Arduino pins. They do not provide sufficient power to drive motors nor do they have the necessary protection against counter EMF from the motor.

Alternatively, we could go with a motor driver board that comes with all the necessary components installed, allowing us to control multiple motors with ease. It also allows us to control the speed of the motors and their direction of rotation.

And the second option is what we will go with.

The Grove I2C Motor Driver v1.3 by Seeed is a good choice as it supports driving two motors from its twin channels. The motor driver comes with the L298 chip, a dual full-bridge that supports high voltage and current, designed to drive inductive loads such as relays, solenoids, DC and stepping motors. It also has an ATmega8L chip to enable the I2C connection, making it very compatible with Arduino.

We can connect the motor driver to an Arduino using the I2C connection via the Grove port and some jumper wires. But since I don’t have a spare Grove shield, I had to go with the alternative connection. The SCL and SDA pins on the Arduino Uno are A5 and A4 respectively. With that in mind, we can connect the yellow and white wires to those pins, then connect the 5v and ground pins.

Now we are ready to test the driver with an actual motor or pump.

But first, we will need to connect the wires to the terminals on the motor.

We could also solder the wires to the terminals to ensure a more secure and stable connection. You can see the result below, after soldering and with the pump installed in the housing.

After connecting the pump to the motor driver, we could start testing whether the driver works and understand better how to use the hardware.

First, connect the Arduino to the computer. Then upload a BareMinimum sketch to the Arduino before we connect the motor driver. This will remove any existing programs on the Arduino and ensure there is nothing interfering with the driver.
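For reference, the BareMinimum example that ships with the Arduino IDE is nothing more than an empty setup and loop:

void setup() {
  // put your setup code here, to run once:
}

void loop() {
  // put your main code here, to run repeatedly:
}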

Once the upload is completed, remove the black jumper on the driver. This step is necessary before we power up the driver with the 12v supply, as failure to do so will mean 12v going into the Arduino via the 5v pin. Since there’s no electrical protection on that pin, we could fry the Arduino. Once the jumper is removed, we can connect the motor driver to the 12v supply. Alternatively, you could always use a battery-to-barrel-jack adapter such as this one and hook up a battery with a voltage anywhere between 9v and 12v.

The image below shows the motor driver hooked up to the Arduino with the black jumper removed. The pump is also connected but it’s still in its bubble wrap.

Experimenting with and testing the pump

Based on the specification of the pump, it can do 100ml/min. However, it’s always better to test it and see the performance of the pump for yourself. But before we can do any experimentation, we need to get the motor driver running.

The green LEDs should light up once the driver is powered and running.

But if you see a red LED light up alongside the green, it means that the driver was not initialised properly.

To fix it, simply press the white RESET button on the driver itself while it is connected to a powered Arduino. Everything should work after that.

During the various testing sessions, I noticed that the motor driver always has some initialisation problem after I deploy a Sketch to the Arduino or when it is freshly powered up. That meant I would need to press the RESET button at least once if I wanted the driver to work properly. This is a problem, but we will cover it later as it also affects the housing design. For now, we will continue with figuring out the water flow rate.

To do that, we will need a container filled with water, a measuring beaker and a small medicine cup. After connecting the silicone tubes to the pump, insert the suction tube into the water container.

But how do we know which is the suction end?

For the Gikfun pump that I’m using, the suction end is on the right side if the two tube connectors are facing you. Alternatively, you can determine the suction end by running the pump straight from a 12v supply and checking which tube takes in the water. Always ensure the other tube is placed in the measuring beaker. We don’t want to spill water all over the place, especially not when electronics and electricity are involved.

Once we know which end of the pump takes in water and which outputs water, we can start testing how much water is being output.

We can upload a simple Sketch that spins the motor in the pump for a certain amount of time.

#include "Grove_I2C_Motor_Driver.h"

#define I2C_ADDRESS 0x0f

void setup()
{
  Serial.begin(9600);
  Serial.println("Starting motor");

  Motor.begin(I2C_ADDRESS);
  Serial.println("Motor Initialised.");
  delay(2000);
}

void loop() {
 
  Serial.println("Running pump 1");
  Motor.speed(MOTOR1, 100);
  delay(15000); //15 seconds
  Motor.stop(MOTOR1);
  Serial.println("Motor stopped.");
  
  delay(1000UL * 60UL * 5UL);
}

For the test, I ran the pump at 100% for 15 seconds in my first experiment. The 15 seconds was based on a gut feeling that anything less would not be sufficient to water the plant and was therefore pointless to measure. The 5-minute delay is there to give us time to measure how much water was dumped into the beaker.

Since the smallest graduation on the beaker was 100ml, there wasn’t sufficient water in the beaker to measure. This is where the medicine cup comes in, since its smallest graduation is 1ml and its largest is 15ml.

I poured the water from the beaker into the medicine cup until it reached the 15ml mark and checked the beaker. There was no water left in the beaker.

The run time was then increased to 25 seconds and I repeated the measuring process. It turns out the pump put out about 25ml in 25 seconds. This is completely different from what is specified.

If the pump could do 100ml/min, that means it could do about 1.67ml per second. At 25 seconds, I should have seen about 41ml of water, not 25ml. This just proves that taking actual measurements is always necessary.

With this discovery, I could then determine how long to run the pump based on the size of the pot, the type of soil and how dry the soil is. That shall be the topic for another article.

Housing the pump station and getting ready for integration

For continuity in design, I will be using Lego bricks again. Previously, I got the Lego Bricks Bricks Plates set, which comes with about 1,500 pieces and some base plates, with the intention of doing this project. I also had leftover Lego bricks from my other IoT project that I will be using first.

After choosing the right base plate, the power module was installed first. The bricks were installed with the idea that they should hide the power module since it’s relatively ugly. Once that’s done, the motor driver goes in and the bricks are added.

Below is the initial design that I had before I realised it couldn’t support two pumps.

After some more hours of work, the first pump is now installed and walls went up. The motor driver was moved to the center of the base plate.

This is the side view. As you can see, cable ties are used since the pump’s dimensions are incompatible with the Lego bricks.

After adding the second pump, the pumping station is now mostly completed.

It was only after the housing was done that I realised I needed a quick way to access the RESET button, as I would be integrating the station with the soil moisture monitoring system and would need to deploy new Sketches every now and then. I also needed to take into account that whenever there is a lightning storm, the circuit breaker in my house trips. In that scenario, the motor driver will probably need a reset.

The good news is there are ISP headers on the board that we can use to perform the reset.

Now, the RST pin is pulled high by default. To perform the reset, we need to connect it to ground, thereby pulling it low. We can achieve that with a push button connecting the RST pin to ground.

So, it was time to take out the motor driver from the housing and solder on the pin headers.

Then, we can prepare a push button and solder the wires on.

Once everything is hooked up, we shall have a quick access reset button.

Then, we power up the motor driver and test the reset button to make sure it works. As usual, the red LEDs on the driver light up once it is powered on, because the driver fails to establish the I2C connection. This is when I connected a powered Arduino to the driver and then pressed the reset button. After a while, the red LEDs went off, which means the I2C connection was established.

It was discovered later that there is a bug in the motor driver’s firmware. It crashes when something that is not hardcoded in the firmware is sent to the driver, causing the I2C connection to fail. The good news is that a new firmware has since been made available and we can update it by following the instructions here.

Now that I had proven the push-button reset works, the housing was modified to incorporate the button. I had to use several layers of paper to get the button to sit tightly in the window brick.

With that, we can start integrating with the soil moisture monitoring system. The two Lego housings are first combined, but we won’t be connecting the motor driver or powering it up, as we still need to update the Sketch currently running in production.

Here are some pictures of the upgraded system.

This marks a good point to stop this article. In the article covering Part 2 of Phase 3, we will look at how we set up the tubes around the plant and update the existing Sketch. And let’s not forget that the article for Part 2 of Phase 2 is also in the works. In that article, we will look at how to upload telemetry data from the Arduino to a dashboard.

An IoT project: Monitoring soil moisture (Phase 2) – Adding WiFi capability (Part 1)

In Phase 1 of the project, we implemented a soil moisture monitoring device that was only capable of telling us the moisture level of the soil through an OLED display and a buzzer.

For Phase 2, we will be implementing WiFi connectivity for the Arduino so that it can start sending data to a central location. We can then build a dashboard with the data to give us a better picture of the soil status.

To achieve this, we need a WiFi module that will work with the Arduino. There are a few out there on the market, but for this home project, we can go with the ESP-01 ESP8266 WiFi module.

We will be splitting the topic of adding WiFi capability into multiple articles to make it easier to follow. In this article, we will discuss the process of understanding and getting the WiFi module ready for the actual implementation.

Setting up the ESP-01 ESP8266 WiFi module

In Phase 1, we used an Arduino Uno for soil moisture monitoring, and we will leave it alone since it’s considered to be running in “production”. Instead, we will be using another Arduino Uno for the development of the WiFi connectivity feature.

To test whether the WiFi module is working, we will first upload a BareMinimum sketch to the Arduino to get rid of any pre-existing program running on it. Once that is completed, we can start hooking up the ESP-01 to the Arduino.

Let us first disconnect the Arduino from the computer to prevent any accidents such as short-circuiting.

Connecting the WiFi module

Then, we will get a breadboard and some male-male and female-male jumper wires. In my case, I used four male-male and five female-male jumper wires. You can also choose to colour-code the wires so that you know which colour is used for what purpose. We connect the 3.3v and ground from the Arduino to the breadboard’s power rails with the jumper wires. The reason for 3.3V instead of 5V is that the module is designed to run at the former voltage level and has a maximum tolerance of 3.6V. Running the module on 5V will destroy it.

Next, we connect the Arduino’s TX/RX pins to the breadboard’s vertical rails with another pair of jumper wires. Then, connect the 3V3 and GND from the WiFi module to the power rails on the breadboard with the female-male wires. Following that, the EN pin (aka CH_PD) is connected to the 3V3 rail. This is the chip enable pin, and pulling it high enables the WiFi module.

This next part is where it gets a little tricky. Getting it wrong could prove fatal for the WiFi module.

This is where colour-coding the wires could prove very useful.

Connect the TX pin from the WiFi module to the TX pin from the Arduino on the breadboard. In my case, I used a blue wire to indicate that this is the data receiving line from the perspective of the Arduino.

The next line to connect is the WiFi module’s RX pin. However, before we can connect it to the Arduino’s RX pin, we need to do something extra. Most of the Arduino digital pins output 5V instead of 3.3V when enabled. This is a problem for the WiFi module since 5V could destroy the chip. To get around this, we will use a voltage divider.

A voltage divider is a passive linear circuit that produces an output voltage that is a fraction of the input. In its most basic form, we could use two resistors connected in series with the input voltage applied across the resistor pair. The desired output voltage emerges from the connection between the resistor pair.

The output voltage is determined by the following formula:

Vout = (Vin x R2) / (R1 + R2)

Where:
 R1 and R2 are the resistance values in ohms
 Vin is the input voltage
 Vout is the output voltage

I used this website to help me make a quick determination of the resistance values I need.

For R1, I used 1.2 kiloohm (Resistor Colour: Brown, Red, Black, Brown, Brown) while R2 is 1.8 kiloohm (Resistor Colour: Brown, Grey, Black, Brown, Brown). This combination gives us a Vout of 3V which is more than sufficient for our purpose.

On the breadboard, R1 connects the Arduino’s RX pin to an empty vertical rail and R2 connects R1 to ground. Then, we connect the WiFi module’s RX pin to the junction between the resistor pair. I used orange wires for the RX pins.

The final result looks like this:

Once the WiFi module is connected, we can power up the Arduino by connecting it to the computer over USB.

Testing the WiFi module

Once the Arduino is turned on, open the Arduino IDE (if you haven’t already) and then open the Serial Monitor.

By default, the WiFi module comes with the AT firmware. The documentation for the available commands can be found here.

You can choose to use other types of firmware like NodeMCU, but for this project, we won’t be using them.

For the Serial Monitor, ensure the baud rate is set to 115200 and the dropdown option Both NL & CR is selected, as the AT firmware depends on the newline and carriage return to determine if a message is complete.

In the textfield, type “AT” and press Enter. This will send the AT command to the WiFi module. It should return an “OK” response.

Note: All AT commands have to be uppercase.

Next, let’s check the version of the firmware running on the WiFi module by typing the command: AT+GMR

From the response, it appears that the WiFi module has a really old version of the firmware. We will need to update it to use newer commands.

If we go to the download section for the module on the Espressif website, we can find the latest version of the AT firmware. However, that may not be the best firmware version for us to use, at least according to the information found in this forum topic. Another user had attempted to flash the firmware to the latest as of the topic post (ESP8266 IDF AT Bin V2.0), but the WiFi module stopped responding to AT commands. Instead, v1.7.4 is the better version to use.

Since I’m using a Mac, I can’t run the default Flash Download Tool from the Espressif website as it is Windows-only. What we can do instead is use a Python-based firmware flashing application called esptool. It was originally created by Fredrik Ahlberg as an unofficial community project, and Espressif has since supported it.

We can install esptool via pip according to the README found here. After that, we will download the AT v1.7.4 firmware from here.
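The installation itself is a one-liner:

pip install esptool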

Unzip it to a folder of your choice and open Terminal on the Mac. Navigate to the folder containing the unzipped firmware and ensure you are in the ‘bin’ folder. For me, I’ve placed it in my Downloads folder.

Now, we are ready to start flashing the firmware.

Flashing the ESP-01 ESP8266 Firmware

Before the firmware flashing process, we need to do two things:

  1. Determine the port of the WiFi module.
  2. Get the module into a programmable state.

To determine the port, we can use the Arduino IDE. Navigate to Tools menu and under Ports, we should see the port the Arduino is connected to.

Take note of the port and then we can proceed to get the WiFi module into a programmable state.

The steps are as follows:

  1. Connect the RESET pin on the Arduino to Ground with a jumper wire.
  2. Connect the WiFi module’s GPIO_0 pin to Ground on the breadboard. Keep this pin connected throughout the whole flashing process.
  3. Connect the WiFi module’s RESET pin to Ground on the breadboard for exactly one second and disconnect it.

The WiFi module should now be ready for programming.

From the Terminal, type the following command. The value for the --port parameter should be the same port you determined earlier. Then, press Enter to start the flashing process.

esptool.py --port /dev/cu.usbmodem1432101 write_flash --flash_mode dout --flash_size 1MB 0x0 boot_v1.7.bin 0x01000 at/512+512/user1.1024.new.2.bin 0xfb000 blank.bin 0xfc000 esp_init_data_default_v08.bin 0xfe000 blank.bin 0x7e000 blank.bin

You should see the following during the flashing process.

After the flashing is done, let’s check the WiFi module to ensure it’s still working.

We will send the “AT” command and we should get an “OK” response. After that, we send the AT+GMR command. We should get the following response:

Also, the baud rate by default runs at 115200 and that’s too fast for our purposes later on. We will need to slow it down to 9600. To do so, we will use the AT+UART_DEF command. Information about the command can be found here.

To set the baud rate, type the command AT+UART_DEF=9600,8,1,0,0 and then press Enter.

After running the command, change the baud rate on the Serial Monitor to 9600.

Then, run the AT+UART_DEF? command.

The above looks good. The WiFi module is now updated and the baud rate is set. We are ready to go to the next stage.

Scanning for WiFi

In order for the program on the Arduino to communicate with the WiFi module, we will need to use another set of pins. By default, the TX and RX pins on the Arduino are used for serial communication (e.g. USB). These are the pins that allow us to upload Sketches to the device and are also used by the Serial Monitor. When other devices are connected to the Arduino via the TX and RX pins, we are using the Arduino as a USB-to-TTL serial adapter, which could interfere with the communication with the WiFi module in our case.

To use other digital pins on the Arduino for serial communication, we can use the SoftwareSerial library, which emulates serial communication on the remaining digital pins.

First, let’s disconnect the WiFi’s TX and RX pins from the Arduino. Then, create a new Sketch and define the following.

#include "SoftwareSerial.h"

const byte rxPin = 2;
const byte txPin = 3;

SoftwareSerial wifiSerial(rxPin, txPin); //PIN 2 to receive data from WiFi, PIN 3 to transmit data to WiFi module

The above code is setting Pin 2 as the RX Pin and Pin 3 as the TX Pin from the perspective of the Arduino. What that means is that Pin 2 will be used to receive data from the WiFi module whereas Pin 3 will be used to send data to the WiFi module for transmission to the outside world.

Then, in the setup function of the Sketch, we will establish the USB serial connection with a baud rate of 115200. After that, set the pinMode for the rxPin and txPin to INPUT and OUTPUT respectively. Then, we also initialise the SoftwareSerial library to run at 9600 baud.

void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);

  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);
  
  wifiSerial.begin(9600);
}

Now, we are ready to connect the WiFi module to the Arduino. Unlike the earlier connection, the TX and RX pins are swapped around. We will connect the RX pin from the WiFi module to Pin 3 while the TX pin goes to Pin 2. As mentioned before, we have to be careful not to connect the WiFi TX pin to the Arduino’s Pin 3, since that pin will be outputting 5V once the above Sketch is deployed to the Arduino.

To check that the WiFi module is working well with the SoftwareSerial, we will use it to query for available access points.

For this, I tried to send the AT+CWLAP command via the println function and then read the response. After several tries, I realised it’s actually very tedious and error-prone to work with raw serial communication.
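Inside the loop function, the manual approach looked roughly like this (a simplified sketch, not the exact code I wrote):

wifiSerial.println("AT+CWLAP"); //ask the module to list nearby access points

//echo whatever the module sends back to the Serial Monitor;
//the reply arrives in chunks and can overflow SoftwareSerial's small buffer,
//which is part of what makes this approach so error-prone
unsigned long start = millis();
while (millis() - start < 10000)
{
    if (wifiSerial.available())
    {
        Serial.write(wifiSerial.read());
    }
}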

I needed a better solution.

During my research, I found that the easiest library to use is WiFiEsp. This library can be found via the Library Manager of the Arduino IDE. After installing the latest version, we will proceed with implementing the WiFi access point scanning capability.

First, let’s modify the start of the Sketch to use the WiFiEsp library.

#include "WiFiEsp.h"

const byte rxPin = 2;
const byte txPin = 3;

#ifndef HAVE_HWSERIAL1
    #include "SoftwareSerial.h"
    SoftwareSerial wifiSerial(rxPin, txPin); //PIN 2 to receive data from WiFi, PIN 3 to transmit data to WiFi module
#endif

Then, modify the setup function as follows:


void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);

  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);

  wifiSerial.begin(9600);
  WiFi.init(&wifiSerial);

}

Next, in the loop function, we do the following:

  Serial.println();
  Serial.println("Scanning available networks ... ");

  printNetworkScanResult();
  delay(10000);

For the printNetworkScanResult function, we do the following:


void printNetworkScanResult()
{
  int numSsid = WiFi.scanNetworks();
  while (numSsid == -1)
  {
    Serial.println("Couldn't get a WiFi connection");
    delay(3000);
    numSsid = WiFi.scanNetworks();
  }


  Serial.print("Number of available networks: ");
  Serial.println(numSsid);

  for (int i = 0; i < numSsid; i++)
  {
    Serial.print(i + 1);
    Serial.print(")");
    Serial.print(WiFi.SSID(i));
    Serial.println();
  }
}

Once the Sketch is deployed to the Arduino and it starts running, look at the response returned in the Serial Monitor. We should see a list of WiFi access points.

Finally, now we know that the WiFi module is working.

And this is a good point to end this article before I run the risk of droning on. In the next article, we will look at implementing the actual WiFi communication and integrating it with our existing soil moisture monitoring program.

An IoT project: Monitoring soil moisture (Phase 1)

I have several pots of Dracaena Sanderiana (Lucky Bamboo) at home and I struggle with watering them.

The genus loves moderately moist soil. Overwatering will lead to root rot and the leaves turning yellow, while under-watering leads to dry soil and the plant can die too.

This is why I got the idea to build an IoT solution that will tell me when the soil is too dry or too moist. This way, I would know when to water and have a rough gauge of how much to water.

Hardware and the plan

This project is split into three different phases.

For the first phase, it’s about being able to read the soil moisture information and present it. It should also notify me about low soil moisture.

To get started, we will be using the following hardware:

  1. Arduino Uno
  2. Sparkfun Soil Moisture Sensor (SEN-13322)

Rather than getting other types of sensors individually, I went with the Seeed Arduino Sensor Kit. It comes with a collection of common modules such as a button, an accelerometer, and humidity and temperature sensors. It also comes with a Grove Base Shield that allows me to connect the various sensors easily via the Grove connectors.

From this kit, I decided to use the buzzer as the notification device. Whenever the moisture level drops below a certain value, it should buzz to let me know. To present the moisture information, I went with the small 128 x 64 OLED display.

As I don’t want to burn out the display by keeping it turned on 24×7, I also hooked up the button. When pressed, the button generates a signal that turns on the display for a few seconds before it switches off.

For power, I got a power supply with a 5v output and barrel connector instead of powering it via battery.

For the second phase, I will look into implementing WiFi communication for the Arduino so that it can send soil moisture data back to a database. This way, I can build a dashboard to see the history of the soil moisture.

For the third and last phase, I will incorporate an automatic water pump system that will help me water the plant when the soil moisture dips below a certain level. This way, I can free up more time to do more projects.

Implementation (Writing the code)

With the documentation and code provided by Sparkfun for the soil moisture sensor, I managed to get the sensor to start reporting its first set of values.

The sensor is powered by a digital 5v signal on Arduino digital pin 7 (D7). The data pin, also known as the signal pin, is hooked up to A0 of the Arduino analog pins.

To read a value from the sensor, it must first be powered up. This is done by setting up digital pin 7 as an OUTPUT on the Arduino, in the setup function.

Then, the moisture value is read from A0 with the use of analogRead in a separate function, after which D7 is turned off.
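A minimal sketch of that read sequence could look like the following. The sensorVccPin name matches the pin comments in the Sketch shown elsewhere in this series, while the settle delay is illustrative:

int readSoilMoisture()
{
    digitalWrite(sensorVccPin, HIGH); //power up the sensor
    delay(10); //brief pause to let the reading settle
    int value = analogRead(A0); //read the analog signal from the sensor
    digitalWrite(sensorVccPin, LOW); //cut power to reduce corrosion of the probes
    return value;
}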

To get a better picture of the values that the sensor returns, Serial.println() is used to send data back to the Serial Monitor of the Arduino IDE over the UART. When the sensor is not touching anything, the values returned were between 0 and 10. Then, I touched the sensor’s prongs with my moist hands and the moisture reading went anywhere between 800 and 895. I even placed the sensor into a bowl of water and got the same range of 800 to 895. A dry hand returned values between 300 and 500.

With those values, I worked on implementing the buzzer, which is connected to D5 on the Grove Base Shield. Based on my understanding of the plant, if the soil moisture is between 300 and 600, the soil is probably too dry.

In that case, the program will call the buzzUrgent function and the buzzer will go off twice.

Initially, I had it go off three times with a 1.5-second delay using a while loop. However, the Arduino community advises that delays should be used sparingly since they are blocking actions and will prevent the Arduino from doing other things. Thus, I made the change to use a shorter delay to create the rapid beeping sound that should catch my attention.

The program also records the time (in milliseconds) the buzzer last went off. And, if the soil remains dry, the buzzer continues to go off twice, once every half an hour (in milliseconds), as defined by the following variables.
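The variables in question look something like this (the names here are illustrative, not copied from the Sketch):

unsigned long lastBuzzTime = 0; //when the buzzer last went off, in milliseconds
const unsigned long buzzInterval = 1000UL * 60UL * 30UL; //half an hour in milliseconds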

On the other hand, if the soil is too wet, the buzzer will also go off. Initially, I went with the value of 850 because that was the highest I got when the sensor was in a bowl of water. Several days later, the buzzer kept going off while the sensor was in the soil, and the Arduino had restarted after I powered it off to switch power sockets. When I checked the value read by the sensor, it was hovering between 850 and 890. With that, I changed the threshold from 850 to 900.

During the development process, the Arduino, sensor and the OLED display are kept in a plastic container. Once the main development is done, we will look into building a housing.

The next thing I implemented was the button. I needed a way to turn the display on only when I need it and turn it off after a certain amount of time. This was to protect the display, preventing it from burning out.

I connected the button to the D4 Grove socket on the Grove Base Shield. Then, the code to detect whether the button is pressed was added to the loop function.

The isButtonPressed function simply checks whether the button state is high. If so, it returns true; otherwise, false.
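In essence, it is a one-liner, assuming button holds the pin number as it does in the Sketch:

bool isButtonPressed()
{
    return digitalRead(button) == HIGH; //HIGH while the button is held down
}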

With the button implementation, the program will cause the OLED to activate for 5 seconds every time I press it. This should give me enough time to read the value.

One last thing to mention is that the loop function previously had a blocking delay at the bottom. I removed that in favour of a non-blocking timer. What it does is basically take a soil moisture reading once every 30 seconds. For the initial prototype, this is probably a good amount of time, since I don’t want to wait ages for a new value. Once the system is stable, I would increase the time between sensor reads further. Once every thirty minutes is not out of the question, since soil moisture levels don’t change that fast (unless we are in a heatwave and drought).
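The non-blocking pattern boils down to comparing millis() against the time of the last reading instead of calling delay(). The variable names below are illustrative:

unsigned long lastReadTime = 0;
const unsigned long readInterval = 30000UL; //30 seconds between sensor reads

void loop()
{
    if (millis() - lastReadTime >= readInterval)
    {
        lastReadTime = millis();
        //read the sensor and update the display and buzzer here
    }
    //the loop keeps spinning, so button presses are still handled promptly
}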

The code to the project can be found here.

Problems and Troubleshooting

During the implementation, I came across an issue that got me wondering if there was an issue with the hardware.

When I first connected the OLED display, I couldn’t get it to work. After nearly one day of research and trial and error, it turned out to be because I’m using an Arduino Uno. Based on the README provided here: https://github.com/Seeed-Studio/Seeed_Learning_Space/tree/master/Grove%20-%20OLED%20Display%200.96”(SSD1315)V1.0, software I2C should be used instead.

However, since I’m using the Arduino_Sensorkit library, there’s no direct way of using software I2C. More research pointed me to this commit history at the Arduino_SensorKit repository on Github.

With that, I located version 1.0.5, which had support for software I2C, and installed it. I retested the program and the OLED display turned on. I could then display information on it.

Housing and deployment

Like the previous IoT project that I did, I went with using Lego for the housing.

Fun Fact: The button was a last-minute addition after the housing was done, when I realised I needed a way to turn the display on after it had gone to sleep. That’s why it’s dangling outside. A re-design of the sensor housing on the right is necessary.

As you can see below, the buzzer, OLED display and button are connected to the Grove Base Shield via the Grove connectors while the moisture sensor is connected using jumper cables.

This is the finished product. The Arduino is hidden away within the box. The top part is actually a moveable flap that can give me quick access to the hardware. The OLED display is placed on the side with the use of double-sided tape to stick it to two transparent panel bricks. A roof is added to “protect” the display from water splashing on it from the top. On top of the display is the button that allows me to press and turn on the display for a quick glance of the moisture value.

At the back of the housing, there’s a compartment for the buzzer. Lego bricks that look like a window are used to create an opening for the sound to exit instead of sounding muffled.

Once the housing was done, the project was finally deployed. But first, the sensitive part of the moisture sensor had to be protected from water. So I modified an anti-static bag and wrapped it around the top part of the sensor, secured with a decent amount of sticky tape.

Here is the OLED display in action after the button is pressed. It shows the current soil moisture level. Based on the value, looks like I don’t need to water the plant just yet.

An IoT project: Location check-in and temperature recording using facial recognition

With the COVID-19 pandemic, temperature checking before you enter a premises is a necessary step to ensure you are well and to help keep others safe. In Singapore, it is a requirement from the government.

One of the most common ways is to use a handheld digital thermometer to find out your own temperature and record it in some kind of logbook placed near the front door. For a workplace, employees are encouraged to record their temperature twice a day.

This has two potential issues.

The first issue is the number of surfaces a person comes into contact with: the thermometer, the logbook and the stationery used for recording.

The second issue is the logbook itself. It is a collection of papers printed with tables for people to put their entries in. It can get very unwieldy, as people need to flip through the pages to locate an empty slot for their first entry of the day, or to find their earlier entry when recording their temperature for the second time.

How are we going to solve this?

The general idea goes like this:

  1. Detect a face
  2. Check that the person is in the database
  3. If the person is in the system, record their temperature and create an entry with the current timestamp in the system.
  4. Otherwise, notify the admin of the unknown person via email with an attached form to register the person
  5. Once the person is registered, they will be checked in.
  6. The data will be stored locally for future retrieval.

With the general idea in mind, the next thing to decide on is the hardware that we will be using to implement this solution.

There are many great options these days as computing technology has come so far: a tablet such as an iPad, a Raspberry Pi, an Nvidia Jetson or even a decent laptop.

An iPad is an expensive solution. The cheapest model costs US$329. Furthermore, there’s only a limited set of hardware we can use with the iPad to capture and record temperatures. One such accessory is the FLIR ONE for iOS Personal Thermal Imager. This accessory is expensive, costing US$260, and is not necessarily available for purchase in Singapore without some kind of import restriction. However, it probably requires the least amount of boilerplate work, since Apple has created some of the best APIs for facial recognition and machine learning.

The Nvidia Jetson is another possible option. It costs about the same as a Raspberry Pi and comes with good software support. The hardware includes a 128-core GPU that could easily churn through images and video feeds without any issue. There’s also strong community support, which makes it easier to search for information and troubleshoot issues.

The Raspberry Pi ranks about the same as the Nvidia Jetson in terms of price and purpose. However, there are a few aspects where the Raspberry Pi, especially version 4, edges out the Jetson. The first is its faster, newer and more efficient CPU. The second is power consumption: the Raspberry Pi 4 consumes at most 3.4 watts compared to 10 watts on the Nvidia Jetson. One could attribute that to the less powerful but efficient Broadcom VideoCore on the Raspberry Pi. Lastly, there is the size. The Nvidia Jetson is rather big due to its need to support a large heatsink and will need a bigger case to house it.

The Hardware and Software

The Raspberry Pi 4 won out in the end because we don’t need the heavy AI or robotics work the Nvidia Jetson is intended for. Furthermore, I could always reuse the Raspberry Pi for future projects that are more general purpose. Lastly, it is also cheaper than an iPad with the FLIR camera accessory, even after taking into account that we have to buy the camera, sensor and housing.

Since I got the Raspberry Pi 4 as a hobbyist set from Labists, it came with Raspberry Pi OS. A quick setup process was all it needed to get started.

For the camera, I went with the Raspberry Pi NoIR Camera module to capture the video feed and do facial recognition. The initial reason was that I had assumed it could pick up on temperature differences. I was proven wrong when further research showed that the lack of an IR filter on the camera is meant to let it see better in low-light conditions. Still, I saw an opportunity here: I could deploy this solution for use at night or in areas with poor lighting.

I still needed a way to pick up temperatures accurately, and several hours of research pointed me to the AMG8833 thermal sensor. I found one from SparkFun that was readily available locally. There are other more powerful and probably more accurate thermal sensors and cameras, such as the SparkFun IR Array Breakout – MLX90640, but they cost more and some are out of stock.

Now that we’ve got the hardware figured out, we need to determine what kind of framework or software we can use for the facial recognition part.

I decided upon OpenCV, as it was something I’m familiar with and it comes with good support for the Raspberry Pi. A quick Google search will give you a number of results.

The latest version of Python 3 (v3.9 at the time of writing) was used.

The following libraries were also installed to reduce the need to write boilerplate code just to get the facial recognition functionality working:

Implementation of the facial recognition, thermal reading and check-in

You can see the architecture diagram for the solution below.

From the architecture, we can see that there are two facial recognition-related modules running on the Raspberry Pi. I implemented them based on the solutions provided by Adrian Rosebrock at his website.

For face detection, the Haar Cascade classifier is used, with the KNN algorithm for classifying the detected faces.
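The project itself is written in Python, but the detection step maps directly onto OpenCV’s API in any language. A minimal C++ sketch of the idea, using the stock frontal-face cascade that ships with OpenCV:

#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/videoio.hpp>
#include <vector>

int main() {
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
    cv::VideoCapture camera(0);             // default camera
    cv::Mat frame, gray;
    while (camera.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 5);  // candidate face boxes
        // each box in faces would then go to the recogniser / check-in logic
    }
    return 0;
}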

The Face/Name Detector is responsible for building up the face/name store, which is then used for check-in via face identification. It scans a specific folder containing sub-folders named after each person, each holding his or her pictures. Below is an example of the folder structure.
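(The folder and file names here are made up for illustration; the layout is the point.)

faces/
├── alice/
│   ├── 001.jpg
│   └── 002.jpg
└── bob/
    ├── 001.jpg
    └── 002.jpg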

The Face Recognition with Check-in module does as its name suggests. It pulls the feed from the camera and checks each frame for a face. Once it finds one, it checks it against the face/name store. Only then will it read the temperature detected by the thermal sensor and record the person in the temperature store, which is nothing more than a .csv file.

When it comes to the thermal sensor, I relied on information gleaned from this site. The GitHub repo provided the source code necessary to work with the sensor and formed the basis of my implementation.

Once the system detects a known face and records the person’s temperature, it stores the entry in the temperature store. Below is a flowchart describing the execution.

And for those who like to read source code, the project’s code can also be found here.

Building the housing

I had a rough plan for how to house the hardware. It goes like this:

  1. Need a compartment to house the Raspberry Pi
  2. Camera should be placed high enough
  3. Thermal sensor should have an unobstructed view
  4. It should hold a display.

There is always the option of using a 3D printer but I knew it would probably take me way too long to figure out how to put everything together. So I went with the one thing I know: LEGO. It gives me the flexibility to change my design whenever the situation demands it while also giving me the power to make whatever I want. All I needed was the right kind of bricks and the right amount.

Below is the first prototype. Its sole purpose was to house the camera and Raspberry Pi.

The above design cannot scale, but it gave me the foundation to go further. I changed the Raspberry Pi housing and added more compartments to house the 7-inch touchscreen display and thermal sensor.

It was during this time that I realised I was lacking the right type of jumper cables to wire the display up to the Raspberry Pi. I brought out a breadboard I had purchased before to use as the connection hub, and built a separate compartment to house it, with the breadboard connecting all the components together via wires. This is how the following design came about.

Testing

Since this was meant as a prototype and for learning purposes, I didn’t get around to unit testing and all that stuff. My focus was only on getting things to work, so I did some application testing to make sure whatever I’d implemented was working.

Since I was developing primarily on my MacBook Pro, I tested the face identification implementation on it, with the webcam as the source of the video feed, before deploying it onto the Raspberry Pi.

After the thermal sensor was installed, I got to see for myself the effectiveness of the thermal sensor. The reference repo contains two example Python modules (IR_cam_interp.py and IR_cam_test.py) that I could use to check if the thermal sensor was working.

Using the modules, the sensor could pick up my skin temperature as around 34 degrees Celsius at about 10 cm. Going slightly further away, at 15 cm, the detected temperature dropped to about 33 degrees Celsius. Any further, and it becomes hard to get an accurate reading without some kind of offset value added. Thus far, I have tried offsets of 2.5, 6.0 and 8.0; the last gave the best accuracy. But this also means that placing the thermal sensor on top of the camera isn’t really a good implementation.

What’s next?

Since the thermal sensor doesn’t give very precise readings for faraway objects like a human body, another approach is required. Below you can see a drawing of the improvement that I could make.

Other than that, the solution also lacks the ability to register a new or unknown person into the system via some type of web interface or from another computer. What I did was manually upload the pictures and run the Face/Name Detector to build the database.

Last but not least, I could also consider adding backend support via Microsoft Azure to collect the recorded temperatures, alert the admin to new unknown persons and enable downloading of the stored records.

Learning to use Micronaut Framework to develop a software pet project

Software engineers who have worked extensively with Java technology might have come across and/or used the Spring Framework for their projects (professional or personal). In 2013, the Spring Boot project was started to simplify the deployment of Spring-based web applications.

Personally, I have used it since the beginning of my programming career back in 2015, after being introduced to it by my supervisor/senior. Before that, I had used JSP/Servlets for my web-based school projects and Java Swing for desktop applications. I was also exposed to EJB as part of my diploma course back in 2006/2007. Through all that, I became aware of how fast Spring Boot is and how the whole Spring Framework allowed me to be more productive. Annotations were the silver bullet for me then.

And I had plans to use Spring Boot again for my pet projects since it was something familiar…

Some Background

One of my pet projects was to build a public transport dashboard that could tell me when the buses for specific bus stops are arriving and, if I missed them, how much longer I have to wait for the next ones.

Yes, I know there are apps I could use on my phone, but they are cumbersome. Face masks are also a requirement in Singapore, which makes it difficult to unlock my phone in a split second… yes, I’m impatient.

So, here are my requirements:

  1. I’m only interested in four specific bus stops that are around my apartment block and four specific bus services.
  2. I want the information displayed near my apartment’s main door in a succinct manner.
  3. I don’t want to unlock my phone, open the public transport app and look for the buses.
  4. The app will display the information with live updates every minute.
  5. The tablet/computer running the app will always keep it in the foreground for instant access.

What have I done so far?

With the requirements in hand, I went about building a web app that could run on a Raspberry Pi with a touchscreen display.

For the frontend, I went with Angular, and it did not matter which version. The goal was just for me to learn how to set up a new Angular project and build something functional with it. The only other time I used Angular was on an actual project, but I wasn’t the one who set it up. I simply continued developing on it.

I spent a few hours getting started with Angular and finding my way around, developing the app as I learnt.

Here is the prototype Angular application running to display the buses at specific bus stops.

Nothing fancy here, as I have not nailed down the final design. Behind the scenes, I was using plain Bootstrap with Angular 11 to build this app.

While building the app, I came across a situation where the Angular HttpClient was unable to connect properly to the LTA API.

It took me an hour of troubleshooting before I realised it was due to CORS enforcement by the browser: for a cross-origin call, the browser first sends a preflight OPTIONS request, and the API the app was calling is not configured to accept OPTIONS, so it returned a 401 error.

Recommendations from smarter developers indicated that I should use a proxy server or a separate backend to make the call to such an API.

This is when I decided I will build the backend using Spring Boot…

Here comes Micronaut!

When my colleagues and I were heading out for lunch, one of them mentioned something about Micronaut.

A quick Google search revealed that it is an alternative to Spring Boot. It is lightweight and starts up much faster.

It sounded good to me, as I know how much time a Spring Boot application takes to start when there are a lot of dependencies. And considering that I am running the application on a Raspberry Pi, having something lightweight would be good for performance and power consumption.

But I can’t just take the word of some article. I had to see it for myself. And I also wanted to learn something new in the process.

Using Micronaut

Setting it up to use in a project was a relatively painless process.

I installed SDKMAN! on my development machine, as recommended on the download page of micronaut.io, following the SDKMAN! setup process mentioned here. After that, I installed Micronaut 2.3.0.

Creating a Micronaut app was as easy as ABC. Follow through this article and you will have a project environment running.

Once the IntelliJ project was ready, I got down to developing the backend following TDD. Here is some code from the test class for the API controller.

@Test
void testBusStopResourceShouldReturn200() {
    HttpResponse response = client.toBlocking().exchange(GET("/api"));
    assertThat(response.code()).isEqualTo(200);
}

@Test
void testApiShouldReturnValidResponseWhenGivenValidQueryString() {
    HttpResponse<BusStopDTO> response = client.toBlocking()
            .exchange(GET("/api/busstops?busStopCode=12345"), BusStopDTO.class);
    assertThat(response.code()).isEqualTo(200);
    assertThat(response.body()).isNotNull();
}

@Test
void testApiShouldReturnErrorResponseWhenNotGivenValidQueryString() {
    assertThatThrownBy(() ->
            client.toBlocking().exchange(GET("/api/busstops"), BusStopDTO.class))
            .isInstanceOf(HttpClientResponseException.class);
}

Compared with a typical JUnit test, there’s nothing really fancy or interesting here.

On the other hand, I discovered something interesting with how Micronaut supports URL templates. Below is a snippet of the controller class that will serve the bus stop information.

import io.micronaut.core.util.StringUtils;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Produces;
import javax.annotation.Nullable;

@Controller("/api")
public class BusStopResource {

    @Get
    @Produces(MediaType.TEXT_PLAIN)
    public String index() {
        return "";
    }

    @Get("/busstops{?busStopCode}")
    @Produces(MediaType.APPLICATION_JSON)
    public HttpResponse<BusStopDTO> getBusStop(@Nullable String busStopCode) {
        if (StringUtils.isEmpty(busStopCode)) {
            return HttpResponse.badRequest();
        }

        return HttpResponse.ok(new BusStopDTO("12345", "TestName"));
    }

}

The difference here is that query string parameters are all optional in Micronaut by default. @Nullable is necessary for each optional query parameter, as the app will not compile otherwise. If a particular parameter is required, validation of that parameter has to be handled in the code. Spring MVC, on the other hand, allows you to define an optional parameter with the use of Java 8’s Optional.

Micronaut comes with support for mocking out of the box.

As far as I know, Spring Framework does not come with mocking support out of the box. Mockito seems to be the way to go but I could be wrong. Do let me know if that’s the case.

If you want to mock a service class that is used to connect to an external API, you can do so with @MockBean. To test whether the mock is called, I use Mockito’s when and thenReturn methods.


import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import io.micronaut.context.annotation.Primary;
import io.micronaut.test.annotation.MockBean;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import javax.inject.Inject;
import org.junit.jupiter.api.Test;

@MicronautTest
public class PublicTransportServiceTest {

    @Inject
    LtaServiceAdapter ltaServiceAdapter;

    @Test
    void testShouldReturnBusStopDetailsWhenGivenValidBusStopCode() {
        when(ltaServiceAdapter.getBusArrival("12345")).thenReturn(new BusStop("12345", "TestBusStop"));
        PublicTransportService publicTransportService = new PublicTransportService(ltaServiceAdapter);
        BusStop busStop = publicTransportService.getBusServiceForBusStop("12345");
        assertThat(busStop).isNotNull();
        assertThat(busStop.getBusStopCode()).isEqualTo("12345");
    }

    @Primary
    @MockBean(MockLtaServiceAdapter.class)
    LtaServiceAdapter ltaServiceAdapter() {
        return mock(LtaServiceAdapter.class);
    }

}

As seen above, @Primary is a Micronaut annotation that indicates an injectable service/class is the main one to use. In the app’s case, it was necessary because there was another class implementing the same interface and Micronaut wasn’t able to differentiate between them. There’s nothing new about this, as the same annotation can also be found in the Spring Framework.

The other difference between Micronaut and Spring is the use of @Inject to perform dependency injection for Micronaut, whereas for Spring Framework the default annotation for dependency injection is @Autowired.

@Inject is more portable since it is part of the JSR-330 specification (javax.inject). Various other dependency injection frameworks, such as Google Guice, also use it. In turn, this allows developers to change their dependency injection framework much more easily.

But, it is also important to note that you can configure your Spring-based projects to use @Inject.

Last but not least, Micronaut’s startup time is indeed faster. My unscientific, subjective judgement puts it at around 2 seconds. If my memory serves, Spring Boot takes about five to ten seconds on average to complete startup. It might take even longer if the application is huge and has lots of dependencies to manage.

Other than what I have mentioned, I’ll admit I haven’t gone far enough in to be able to tell whether Micronaut is superior to Spring Boot.

What’s next?

I will continue to build on my existing knowledge of Micronaut and Angular and finish the pet project. You can follow along what I’m doing at the following repositories:

  1. Angular Frontend
  2. Java/Micronaut Backend

Introduction to Software testing: What is it, Why do we need it and what are the different types?

Software testing is an important step in the software development lifecycle. It ensures that the software functions or operates as the stakeholders want, which contributes to the perception of quality software. Failure to properly test software can lead to disastrous consequences, especially if the software runs in an environment where lives are at stake or where it is responsible for the financial health of an organisation or nation.

What is testing?

Software testing is the process in which a piece of software, be it a module, component or application, is verified and validated. In other words, it is the process of making sure a piece of software is:

  1. Built right (Verified)
  2. The right thing that the user will want to use (Validated)

There are two ways to go about software testing: one is automated testing and the other is manual testing.

Automated testing is done to automate certain repetitive but necessary tasks in a formalised testing process, or to perform additional tests that would be difficult to do manually.

On the other hand, manual testing is done by testers playing the roles of the users to identify defects in the software that the automated testing missed. A written test plan is followed by the tester to ensure completeness of a test.

In the latter part of this article, we will look at the different types of testing and how they are done.

Why testing is important

Now you know what testing is, but you may be wondering why it is important. Let us use a simple scenario to illustrate.

A highly reputable medical device manufacturer, AXT, designed and sells a new surgical robot equipped with a laser scalpel mounted on an arm. The scalpel can cut through human skin and tissue with precision. The control panel for the robot comes with two joysticks. The one on the left moves the robotic arm along the vertical plane (up or down), while the one on the right moves the robotic arm along the horizontal plane (forwards, backwards, left and right).

For the left joystick, AXT stated that every 1-degree tilt forwards or backwards shall move the arm down or up by the same number of centimetres. AXT also stated that every 5-degree tilt of the right joystick in any direction shall move the arm in that direction by one-fifth of an inch.

After seeing several live demonstrations done on dummies and receiving good feedback from trials involving some of the surgeons from Tea General Hospital, the hospital finally bought one such surgical robot from AXT for its new operating theatre. Technicians from AXT went to the hospital, installed it in the new operating theatre and indicated on the official checklist that they had verified the robot was working.

Three days later, a surgeon trained to use the surgical robot decided to use it to perform brain surgery on a young patient. With the patient lying on the operating bed, the surgeon powered on the robot and started manipulating the joysticks. He moved the right joystick to bring the scalpel end of the arm above the patient. It worked as intended. Then the surgeon moved the left joystick, shifting it forward to lower the arm. Even though the joystick was tilted only five degrees, the arm plunged downwards, and the laser scalpel hit the patient in the face, punching through the skull into the brain.

The patient died on the spot, leaving the parents extremely distraught and the surgeon traumatised. The surgeon quit his job a day later and was found dead on the ground floor of his apartment building two days after that, having jumped from his kitchen window on the ninth floor.

An investigation later revealed that the technicians did not test the robot and had checked off the checklist confidently, assuming they had done everything correctly. They believed in their installation and setup skills, as they had done it many times for other hospitals. If they had tested the robot in the first place, they would have found that they had failed to connect the signal regulator for the module that controlled the robot arm’s vertical movement.

The scenario described above may seem like it came from some horror movie, but it does reflect what can happen when a software system, or any system for that matter, is not tested thoroughly.

Different types of software testing

Software testing can be divided into two categories: functional and non-functional testing.

Functional testing is a quality assurance process that checks that an individual software component does what it is supposed to. For example, if a calculator application says it can determine the sum of two numbers, then a check will be performed to verify that it returns the correct sum for any two numbers.

On the other hand, non-functional testing checks the way the software operates. Using the calculator example, a non-functional requirement might specify that the calculator has to return a result within a second. If the calculator takes up to 20 seconds to return the correct result, it is technically functional. However, who would use a calculator that takes longer than a human to calculate the sum of two numbers?
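To make the distinction concrete, here is a tiny C++ sketch that checks both sides for a hypothetical add function: correctness (functional) and the one-second response budget from the example above (non-functional).

#include <cassert>
#include <chrono>

int add(int a, int b) { return a + b; }

int main() {
    // Functional: the calculator must return the correct sum
    assert(add(2, 3) == 5);

    // Non-functional: the result must come back within one second
    auto start = std::chrono::steady_clock::now();
    int result = add(40, 2);
    auto elapsed = std::chrono::steady_clock::now() - start;
    assert(result == 42);
    assert(elapsed < std::chrono::seconds(1));
    return 0;
}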

Functional testing

Unit testing

Unit testing is a type of functional testing that exercises a piece of software using unit tests: automated tests written and run by software developers to verify that a section of the software meets its design and behaves correctly. Generally, they are written to cover specific core functions within the application and to ensure the functions return the correct response for a given set of inputs.

With continuous delivery and continuous testing, unit tests form a big part of the process, since they verify that every section of the software they cover behaves correctly. Failing tests indicate that certain functionality within the application has not been implemented properly.

Smoke testing

Smoke testing is a type of testing that verifies a piece of software is built correctly and can run. It is commonly used to reveal simple failures, allowing a prospective software release to be rejected early.

Unlike the other types of testing, smoke testing is supposed to run quickly to give the benefit of faster feedback. This way, developers can quickly fix what went wrong and get the next build ready.

Integration testing

Applications have grown increasingly complex, with a lot of moving parts. Integration testing is a type of testing that verifies the different parts are able to come together and work. One way it does that is by ensuring the interfaces between the different software components are defect-free and that the components can communicate with each other correctly.

Exploratory testing

In contrast to other types of functional testing, exploratory testing is informal, ad hoc and freestyle, relying on the tester’s creativity instead of scripted test cases. The term exploratory testing was coined by Cem Kaner in 1984.

Exploratory testing is all about discovery, investigation and learning while the test is happening. It is up to the tester to come up with new test cases as they navigate through an application. This helps ensure that bugs not picked up by other types of testing are identified and resolved.

Non-functional testing

Usability testing

Usability testing measures the ease of use of an application by testing it on users who have never seen or used it before. If an application is intuitively designed, users are less likely to be confused by it and thus more likely to use it.

To do usability testing, a scenario or realistic situation needs to be set up in which the user performs a series of tasks on the application being tested. Observers watch and take notes. In addition, other test instruments such as scripted instructions, paper prototypes and questionnaires are used to gather feedback. Another popular method is the Think Aloud Protocol, where users vocalise what they are thinking as they navigate through the application and how they will perform an action.

Performance Testing

Performance testing determines how well an application performs. A non-functional requirement given by the users could specify that the application must be able to execute an action and return a result, or give a response to the user, within some time limit. Such a requirement would fall under performance test coverage.

Using the calculator example mentioned earlier, a simple performance test can be conducted with a stopwatch and a tester using the calculator to find the sum of two numbers. The stopwatch starts counting once the user presses the “=” button. When the calculator screen shows the result, the stopwatch is stopped immediately. The time taken can then be recorded as part of a performance test report.

Stress testing

A modern application generally performs quite well on modern machines and can handle several dozen people using it. However, when the number increases to several hundred or even several thousand users per minute, the application might stop functioning altogether. It might start crashing due to limited hardware resources.

Stress testing is about putting the application under heavy load and finding its breaking point. With that information, the amount and type of resources to provision can be determined more effectively to ensure the application’s availability. Alternatively, developers can improve the application’s error handling to prevent it from crashing when computational resources run out, thus improving its robustness.

10 Wi-Fi terms that you should know

Have you ever taken a look at the Wi-Fi logs generated by your router?

Or if you are on a Mac computer, have you seen the details of the Wi-Fi connection by pressing and holding the Option key while you click on the Wi-Fi icon?

Do you wonder what those terms mean? In this article, we will look at 10 Wi-Fi terms that you may come across.

1. HT

HT is short for High Throughput and is the alternative name for 802.11n (Wi-Fi 4). The name comes from the speed improvements: data rates range from 72 Mbps to 600 Mbps, making it a lot faster than 802.11g (Wi-Fi 3).

The new technologies introduced with Wi-Fi 4 include support for more antennas (which in turn enables higher data rates), the addition of the 40 MHz channel width and the 5 GHz band, and the standardisation of Multiple Input, Multiple Output (MIMO).

2. VHT

VHT or Very High Throughput is the alternative name for 802.11ac (Wi-Fi 5). It is designed to be the successor to HT. With Wi-Fi 5, wireless communication over the 5 GHz band is improved with new technologies, enabling speeds ranging from 433 Mbps to 6933 Mbps.

Some of the new technologies in Wi-Fi 5 include optional 160 MHz and mandatory 80 MHz channel widths, an increase in the number of MIMO streams from 4 to 8, and 256-QAM support.

3. HE

HE is short for High Efficiency and is the alternative name for 802.11ax (Wi-Fi 6). The name stems from new technologies that improve efficiency and performance, such as OFDMA and MU-MIMO. For more information about Wi-Fi 6, check out this explainer.

4. MCS Index

MCS Index or Modulation and Coding Scheme Index is a unique reference value that identifies the combination of the following:

  1. Number of Spatial Streams
  2. Modulation Type
  3. Coding Rate

When this value is combined with the Wi-Fi channel width, it allows you to quickly calculate the likely data rate of a given connection. Naturally, the larger the MCS index value, the better, as it indicates a faster Wi-Fi connection.
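As a rough illustration, the data rate works out to spatial streams × data subcarriers × bits per subcarrier × coding rate ÷ symbol duration. The C++ sketch below plugs in the 802.11ac figures for MCS 9 on an 80 MHz channel with one spatial stream and a short guard interval, which lands on the familiar 433 Mbps:

#include <cstdio>

// Approximate PHY data rate in Mbps from MCS parameters
// (bits per microsecond is the same as megabits per second).
double dataRateMbps(int streams, int dataSubcarriers, int bitsPerSubcarrier,
                    double codingRate, double symbolDurationUs) {
    return streams * dataSubcarriers * bitsPerSubcarrier * codingRate
           / symbolDurationUs;
}

int main() {
    // 1 stream, 234 data subcarriers (80 MHz), 256-QAM (8 bits/subcarrier),
    // 5/6 coding rate, 3.6 us symbol (short guard interval) -> ~433 Mbps
    std::printf("%.1f Mbps\n", dataRateMbps(1, 234, 8, 5.0 / 6.0, 3.6));
    return 0;
}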

5. NSS

NSS or Number of Spatial Streams refers to the independently and separately coded data signals transmitted from the multiple antennas of an access point (AP). MIMO wireless communication uses this technique to increase the throughput of a communication channel by sending and receiving multiple data signals simultaneously.

6. RSSI

RSSI or Received Signal Strength Indication in the Wi-Fi context refers to the relative received signal strength in arbitrary units, calculated from the perspective of the receiving radio. Generally, the greater the value, the stronger the signal. RSSI is commonly reported as a negative number, so the closer the value is to zero, the stronger the signal.

7. Tx Rate

Tx Rate or Transmission Rate refers to the transmission speed of the wireless communication channel from the perspective of the client device. Naturally, the higher the value, the faster the connection since more data can be sent from the client.

8. Rx Rate

Rx Rate or Receive Rate refers to the receiving speed of the wireless communication channel from the perspective of the client device. Naturally, the higher the value, the faster the connection since more data can be received by the client.

9. DFS

DFS or Dynamic Frequency Selection allows a wireless network to use 5 GHz frequencies that are otherwise reserved for radar stations. Without this feature, APs are limited to the following 20 MHz channels:

  1. Channel 36
  2. Channel 40
  3. Channel 44
  4. Channel 48
  5. Channel 149
  6. Channel 153
  7. Channel 157
  8. Channel 161
  9. Channel 165

In environments such as an apartment building where multiple APs can be deployed, this can slow down network performance due to the increased wait time brought on by congestion.

With DFS, the issue of congestion is mostly resolved as APs can use 16 additional channels on the 5 GHz band, thus leading to improved performance. These 16 channels are known as DFS channels.

However, if there is a radar station nearby using any of the DFS channels, the AP will detect that and switch to one of the non-DFS channels. When that happens, client devices will temporarily lose their internet connection while they reconnect.

10. MUBF

MUBF or Multi-User Beamforming is an extension of beamforming to support multiple receiver devices.

And what is beamforming then?

Beamforming is a technique that allows an AP to focus radio signals towards a receiver. The AP does this by transmitting multiple radio signals from its antenna array in a manner that creates both constructive and destructive interference. The destructive interference cancels the transmission in directions where there is no receiver, while the constructive interference increases the power of the transmission towards the receiver, improving transmission quality and range.

Asus ZenWiFi AX (XT8) Tri-Band Mesh System Review

In the time between the announcement of the Wi-Fi 6 (IEEE 802.11ax) standard in October 2018 and now, the market has seen a variety of Wi-Fi 6 capable devices released by various vendors, ranging from networking devices to smartphones. If you would like to know more about Wi-Fi 6 and the benefits it brings, here is an explainer.

Even though Wi-Fi 6 is more secure and performant than Wi-Fi 5, wave 2 Wi-Fi 5 routers and mesh systems remain a good choice for the majority of households with a small or medium-sized home and a few devices.

Background

Personally, I had been using the D-Link COVR-2202 WiFi mesh system for the past year. During the early stages of the work-from-home arrangement caused by the COVID-19 pandemic, I could participate in video/conference calls with minimal issues. You can find the review of this mesh system that I wrote previously here.

However, the mesh system started having performance and stability issues earlier this month. It was due to a change in my home network environment: the number of networked devices had grown to 24, nearly half of them smart home devices. My video calls started suffering from connectivity issues with stuttering video, and sometimes I could not hear what my colleagues were saying. Even the smart home devices were suffering from connectivity issues.

Therefore, my next networking gear purchase had to fulfil the following conditions:

  1. Is a mesh system
  2. More control over the Wi-Fi configuration
  3. Future-proof for WiFi 6
  4. More powerful hardware that provides good WiFi coverage and a stable connection for many devices

When I was looking for a new WiFi 6 mesh system, I narrowed my choices down to the Netgear Orbi RBK852 (3-pack) and the Asus ZenWiFi. I did not consider other brands, as they do not have a good track record when it comes to keeping their products up to date. Furthermore, their product designs leave much to be desired.

In the end, I went with the Asus ZenWiFi AX6600 (XT8), as I had used an Asus router (RT-AC68U) before and my experience with it was good. The RT-AC68U was stable in terms of performance and connectivity even after going three or four months without a reboot. And Asus routers do come with a lot more configuration options in their web interface compared to the others.

Hardware

The Asus ZenWiFi comes in two colours: black and white. I went with the black version because it fits better with the overall theme of the house. In terms of pricing, the hardware itself cost SG$775 from Challenger.

You may be wondering why it costs that much. Unlike other mesh systems, the ZenWiFi mesh system consists of two full-featured wireless routers that can be configured to run independently or operate together as a mesh through the ASUS AiMesh technology.

In terms of design, it is minimalistic and does not stand out. It comes with a single LED light at the front that indicates the state of the router. It has specially designed vents on the sides to help keep the routers cool.

Physical appearance aside, we shall take a look at the specifications.

Below is the specification for each router:

  1. 1.5 GHz quad-core processor
  2. 512 MB RAM
  3. 256 MB flash storage
  4. Tri-band: 2×2 2.4 GHz, 2×2 5 GHz-1, 4×4 5 GHz-2
  5. 6 internal antennas positioned to give maximum WiFi coverage
  6. 3x gigabit Ethernet LAN ports and 1x 2.5G WAN port. The latter can be used as a LAN port on the satellite node

From the above, we can see that the Asus ZenWiFi mesh system is a tri-band mesh system.

With that, the 2.4 GHz and 5 GHz-1 Wi-Fi bands are freed up for our devices to connect to, while the separate 5 GHz-2 band is used for the wireless backhaul. This wireless backhaul is used by the satellite node and the main router to communicate with each other.

From the product’s official site, the device is capable of the following:

  • 802.11a: 6, 9, 12, 18, 24, 36, 48, 54 Mbps
  • 802.11b: 1, 2, 5.5, 11 Mbps
  • 802.11g: 6, 9, 12, 18, 24, 36, 48, 54 Mbps
  • 802.11n: up to 300 Mbps
  • 802.11ac (5 GHz-1): up to 867 Mbps
  • 802.11ac (5 GHz-2): up to 3466 Mbps
  • 802.11ax (2.4 GHz): up to 574 Mbps
  • 802.11ax (5 GHz-1): up to 1201 Mbps
  • 802.11ax (5 GHz-2): up to 4804 Mbps

However, the above are theoretical numbers that are hardly achievable due to various factors, such as interference from neighbouring Wi-Fi networks, physical obstacles like walls, and the distance between the mesh system and our devices.

If we use the 5 GHz-2 band as an example, the speed indicated is achievable only if the mesh system is able to utilise all 4 streams to send and receive data using the 160 MHz channel width.

However, there is only one 160 MHz channel available, and that assumes there is no interference from your neighbours and that you can use DFS channels. The latter is important because eight 20 MHz channels need to be combined into one 160 MHz channel, and in Singapore most of those 20 MHz channels are DFS channels whose availability depends on whether you live near a radar station. Furthermore, if you live in an HDB apartment with a lot of neighbours, the mesh system will find itself dealing with a lot of interference and will likely fall back to the 80 MHz channel width. At least when using 80 MHz, there are 5 channels to choose from.

But all of the above is just theory. We will need to test the mesh system in the real world.

Performance

As described earlier, the mesh system comes with a dedicated wireless backhaul. However, using the wireless backhaul means you will not be able to get higher WiFi speeds, since the total available wireless bandwidth is divided between the backhaul and the other connected devices.

If you do need a higher backhaul speed, an Ethernet backhaul connection from the mesh node to the main router is supported. With this, the 5 GHz-2 band can be freed up for use by devices.

For me, I decided to go with the wireless backhaul due to two considerations:

  1. The mesh node should stay as close to the centre of the house as possible, since the main router is at the corner of the house in the living room. This is so the remaining half of the house can get WiFi with no dead spots.
  2. Removing the need to route additional Ethernet cables from the main router to the node.

After spending time tweaking the configurations, I was able to achieve a decent WiFi speed on the 5 GHz-1 band with my MacBook Pro connected to the satellite node. In this case, there was a direct line of sight between the node and the MacBook.

This is the result of the first test.

Other than the WiFi performance for devices with direct line of sight to the router, it is equally important to have good performance for devices that are behind walls or further away from the nodes.

Another test of the connection speed was conducted. This time it was between my iPhone X and the satellite node, with me in the laundry area of the house, the furthest possible point from the satellite node, with at least one wall in between.

The phone being able to achieve 101 Mbps in download speed is nothing short of impressive. Keep in mind that there were 23 other devices connected to the mesh system and at least one wall sitting between the phone and the satellite node.

To achieve the above speeds, the following configurations were used for the mesh system.

Basic Configurations

2.4 GHz and 5 GHz-1 front-haul Wi-Fi configuration

5 GHz-2 dedicated wireless backhaul configuration.

Advanced Configurations

5 GHz-1 advanced configurations

5 GHz-2 advanced configurations

To achieve higher speeds, a device should ideally establish a WiFi connection using the 80 MHz channel width. In my case, my laptop was able to do that.

However, there is no guarantee your devices will be able to get that, since it depends on whether the Wi-Fi hardware supports the higher bandwidth and can negotiate it with the router. In addition, there is also a higher chance of interference due to channel overlap with your neighbours’ WiFi routers, since a wider channel is nothing more than a combination of multiple smaller channels; this can cause connectivity or performance issues.

Over the week that followed, I made more changes to the advanced configurations.

5 GHz-1 advanced configurations

5 GHz-2 advanced configurations

A second internet speed test was done from my 2018 15-inch MacBook Pro using the updated configuration.

Stability

Compared to the D-Link Covr-2202 mesh system, the Asus ZenWiFi has been stable for 8 days now since the last restart (which was due to configuration changes). Devices remain connected to the mesh system and can access the internet without any issues. Again, keep in mind that there are a constant 22 to 24 devices connected.

Furthermore, I did not find myself dealing with stuttering video and audio during Microsoft Teams/Google Meet/Zoom calls. The longest of those calls went up to an hour and a half.

However, I could not say the same for the Covr-2202. When I first got it to replace the Asus RT-AC68U, my devices would fail to access the internet from time to time. An investigation revealed that the routers would either drop connections or refuse to issue IP addresses. This tended to crop up after a week of use, for reasons that remain unknown to me. So, to prevent the dropped connections from happening again, I scheduled a weekly restart at the stroke of midnight on Monday.

Wi-Fi Coverage

Asus states that the ZenWiFi mesh system is able to cover up to 5500 square feet (or 6 rooms) when using both routers in mesh mode, while a single ZenWiFi router is able to cover up to 2750 square feet or 4 rooms. With that in mind, a single router is enough for the majority of households in Singapore, since we live in HDB apartments, which have an average size of 1027 square feet.

However, this does not take into account the many concrete walls and solid objects, such as cabinets, in an HDB apartment. Solid objects like concrete walls can block or weaken WiFi signals, causing connectivity issues, low speeds and high latencies. The 5 GHz band is more severely affected than the 2.4 GHz band.

During my unscientific tests, the ZenWiFi did surprise me. My phone was able to stay connected and achieve about 30 Mbps of download speed even when I was standing in the kitchen, near the common toilet. At least two concrete walls stood between my phone and the mesh node.

The next test was done with me walking around the house. My phone was able to stay connected to WiFi and I was able to stream video without any visible issues.

Furthermore, I live on the eighth floor, and when I was on the first floor, my phone was still able to hold a connection to the mesh system. I suspect this is because the main router is placed near the window in the living room. Nonetheless, I find it impressive that I can continue to use my WiFi even when I’m outside the house.

Conclusion

The Asus ZenWiFi AX6600 (XT8) is expensive, but not as expensive as the Netgear Orbi RBK852 WiFi 6 mesh system, which costs an additional SG$200 to SG$300 depending on where you get it.

In terms of hardware specifications, the Asus ZenWiFi comes with only 6 internal antennas compared to the 8 on the Netgear Orbi RBK852. More antennas means the router can provide more bandwidth for devices, which translates to better performance. The Asus model also comes with a slower quad-core processor and one less gigabit LAN port.

But for that price, what you are getting is two Wi-Fi 6 capable, fully featured wireless routers, one of which you could even choose to give away to family or friends. AsusWRT, the operating system on all Asus-made routers, tends to be more stable in my personal experience and comes with more configuration options. The latter is a consideration if you want to improve the mesh system’s compatibility with older wireless devices or the smart home devices you might have.

For example, I have a few LIFX light bulbs that operate on the 2.4 GHz band with a 20 MHz channel width. I was able to set that explicitly in the router and ensure the light bulbs stay connected. Previously, on the D-Link Covr-2202, the LIFX light bulbs tended to lose their connection, leaving me unable to control them from my phone.

Lastly, you are also future-proofing your home network, as more Wi-Fi 6 capable smartphones and laptops will be coming out in the latter half of 2020 and throughout 2021.