An IoT project: Monitoring soil moisture (Phase 1)

I have several pots of Dracaena sanderiana (lucky bamboo) at home and I struggle with watering them.

The genus loves moderately moist soil. Overwatering leads to root rot and yellowing leaves, while under-watering dries out the soil and can kill the plant too.

This is why I got the idea to build an IoT solution that will tell me when the soil is too dry or too moist. This way, I would know when to water and have a rough gauge of how much to water.

Hardware and the plan

This project is split into three different phases.

The first phase is about reading the soil moisture information and presenting it. The system should also notify me when the soil moisture is low.

To get started, we will be using the following hardware:

  1. Arduino Uno
  2. SparkFun Soil Moisture Sensor (SEN-13322)

Rather than getting the other sensors individually, I went with the Seeed Arduino Sensor Kit. It comes with a collection of common sensors such as a button, an accelerometer, and humidity and temperature sensors. It also comes with a Grove Base Shield that lets me connect the various sensors easily via the Grove connectors.

From this kit, I decided to use the buzzer as the notification device. Whenever the moisture level drops below a certain value, it should buzz to let me know. To present the moisture information, I went with the small 128 x 64 OLED display.

As I don’t want to burn out the display by keeping it turned on 24×7, I also hooked up the button. When pressed, it turns the display on for a few seconds before the display switches off again.

For power, I got a power supply with a 5 V output and a barrel connector instead of powering it via batteries.

For the second phase, I will look into implementing WiFi communication for the Arduino so that it can send soil moisture data back to a database. This way, I can build a dashboard to see the history of the soil moisture.

For the third and final phase, I will incorporate an automatic water pump system that waters the plant when the soil moisture dips below a certain level. This way, I can free up more time for other projects.

Implementation (Writing the code)

With the documentation and code provided by SparkFun for the soil moisture sensor, I managed to get it to report its first set of values.

The sensor is powered from Arduino digital pin 7 (D7), which supplies 5 V when driven high. The data pin, also known as the signal pin, is hooked up to analog pin A0.

To read a value from the sensor, it must first be powered up. This is done by configuring digital pin 7 as an OUTPUT, which happens in the setup function.

Then, the moisture value is read from A0 using analogRead in a separate function, after which D7 is turned off.
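
Here is a minimal sketch of that read cycle, assuming the pin assignments above; the helper name readSoil is mine and the actual code in the repo may differ:

    const int sensorPower = 7;  // D7 supplies power to the sensor
    const int sensorPin = A0;   // signal pin of the moisture sensor

    void setup() {
      pinMode(sensorPower, OUTPUT);
      digitalWrite(sensorPower, LOW);  // keep the sensor off between reads
      Serial.begin(9600);
    }

    int readSoil() {
      digitalWrite(sensorPower, HIGH);  // power the sensor up
      delay(10);                        // give the reading a moment to settle
      int value = analogRead(sensorPin);
      digitalWrite(sensorPower, LOW);   // power down to limit probe corrosion
      return value;
    }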

To get a better picture of the values the sensor returns, Serial.println() is used to send data back to the Serial Monitor of the Arduino IDE over UART. When the sensor is not touching anything, the values returned were between 0 and 10. When I touched the sensor’s prongs with moist hands, the reading went anywhere between 800 and 895. I even placed the sensor into a bowl of water and got the same range of 800 to 895. A dry hand returned values between 300 and 500.

With those values, I worked on implementing the buzzer, which is connected to D5 on the Grove Base Shield. Based on my understanding of the plant, if the soil moisture reading is between 300 and 600, the soil is probably too dry.

In that case, the program will call the buzzUrgent function and the buzzer will go off twice.

Initially, I had it go off three times with a 1.5-second delay using a while loop. However, the Arduino community advises that delay() be used sparingly since it is a blocking call and prevents the Arduino from doing anything else. Thus, I changed to shorter delays to create a rapid beeping sound that should catch my attention.
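
A sketch of what the rapid double beep in buzzUrgent might look like; the D5 pin assignment comes from the text above, while the exact timings are illustrative:

    const int buzzerPin = 5;  // Grove buzzer on D5, set as OUTPUT in setup()

    void buzzUrgent() {
      for (int i = 0; i < 2; i++) {   // beep twice
        digitalWrite(buzzerPin, HIGH);
        delay(100);                   // short delays keep the blocking brief
        digitalWrite(buzzerPin, LOW);
        delay(100);
      }
    }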

The program also records the time (in milliseconds) at which the buzzer went off. If the soil remains dry, the buzzer will continue to go off twice, once every half an hour, as defined by the following variables.
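
The original variable definitions are not reproduced here, but they would look something like this; the names and the checkMoisture helper are my own sketch, with the 600 and 900 thresholds taken from the surrounding text:

    const unsigned long BUZZ_INTERVAL = 30UL * 60UL * 1000UL;  // half an hour in milliseconds
    unsigned long lastBuzzTime = 0;                            // when the buzzer last went off

    // called from loop() with the latest sensor reading
    void checkMoisture(int moisture) {
      bool tooDry = (moisture >= 300 && moisture <= 600);
      bool tooWet = (moisture > 900);
      // lastBuzzTime == 0 lets the first alert fire immediately
      bool dueForBuzz = (lastBuzzTime == 0) || (millis() - lastBuzzTime >= BUZZ_INTERVAL);
      if ((tooDry || tooWet) && dueForBuzz) {
        buzzUrgent();
        lastBuzzTime = millis();
      }
    }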

On the other hand, if the soil is too wet, the buzzer will go off too. Initially, I went with the value of 850 because that was the highest I got when the sensor was in a bowl of water. Several days later, the buzzer kept going off while the sensor was in the soil, and the Arduino restarted after I powered it off while switching power sockets. When I checked the value read by the sensor, it was hovering between 850 and 890. With that, I changed the threshold from 850 to 900.

During the development process, the Arduino, sensor and OLED display were kept in a plastic container. Once the main development is done, we will look into building a housing.

The next thing I implemented was the button. I needed a way to turn the display on only when I need it and to turn it off again after a certain amount of time, to protect the display from burning out.

I connected the button to the D4 Grove socket on the Grove Base Shield. Then, the code to detect whether the button is pressed was added to the loop function.

isButtonPressed is a simple function that checks if the button state is high: if so, it returns true, otherwise false.
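
In code, that could be as simple as the following, assuming D4 is configured as an INPUT in setup; the Grove button reads HIGH while held down:

    const int buttonPin = 4;  // Grove button on D4, set as INPUT in setup()

    bool isButtonPressed() {
      return digitalRead(buttonPin) == HIGH;  // HIGH while the button is held down
    }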

With the button implemented, the program turns the OLED on for 5 seconds every time I press it. This should give me enough time to read the value.

One last thing to mention is that the loop function previously had a blocking delay at the bottom. I removed that in favour of a non-blocking timer that takes a soil moisture reading once every 30 seconds. For the initial prototype, this is probably a good interval since I don’t want to wait ages for a new value. Once the system is stable, I would increase the interval between sensor reads further. Once every thirty minutes is not out of the question, since soil moisture doesn’t change that fast (unless we are in a heatwave and drought).
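
Putting the pieces together, the loop might be structured roughly like this; the 30-second and 5-second windows come from the text, while the variable names and structure are my own sketch:

    const unsigned long READ_INTERVAL = 30UL * 1000UL;   // read the sensor every 30 seconds
    const unsigned long DISPLAY_TIMEOUT = 5UL * 1000UL;  // keep the display on for 5 seconds

    unsigned long lastReadTime = 0;
    unsigned long displayOnTime = 0;
    bool displayOn = false;
    int moisture = 0;

    void loop() {
      unsigned long now = millis();

      if (now - lastReadTime >= READ_INTERVAL) {  // non-blocking timer instead of delay()
        moisture = readSoil();
        checkMoisture(moisture);
        lastReadTime = now;
      }

      if (isButtonPressed() && !displayOn) {
        // turn the OLED on and remember when (display calls omitted)
        displayOn = true;
        displayOnTime = now;
      }

      if (displayOn && now - displayOnTime >= DISPLAY_TIMEOUT) {
        // switch the OLED back off
        displayOn = false;
      }
    }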

The code for the project can be found here.

Problems and Troubleshooting

During the implementation, I came across an issue that made me wonder if there was a problem with the hardware.

When I first connected the OLED display, I couldn’t get it to work. After nearly a day of research and then trial and error, it turned out to be because I was using an Arduino Uno. Based on the README provided here: https://github.com/Seeed-Studio/Seeed_Learning_Space/tree/master/Grove%20-%20OLED%20Display%200.96”(SSD1315)V1.0, software I2C should be used instead.

However, since I’m using the Arduino_SensorKit library, there’s no direct way of using software I2C. More research pointed me to the commit history of the Arduino_SensorKit repository on GitHub.

With that, I located version 1.0.5, which still had support for software I2C, and installed it. I retested the program and the OLED display turned on. I could then display information on it.
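
With that version installed, printing the reading is short; something along these lines using the library's Oled object, though the exact calls may differ between versions:

    #include "Arduino_SensorKit.h"

    // Oled.begin() is called once in setup(); this helper just draws the value
    void showMoisture(int moisture) {
      Oled.setFont(u8x8_font_chroma48medium8_r);
      Oled.setCursor(0, 0);
      Oled.print("Moisture: ");
      Oled.print(moisture);
    }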

Housing and deployment

Like the previous IoT project I did, I went with Lego for the housing.

Fun fact: the button was a last-minute addition after the housing was done, when I realised I needed a way to turn the display on after it has gone to sleep. That’s why it’s dangling outside. A redesign of the sensor housing on the right is necessary.

As you can see below, the buzzer, OLED display and button are connected to the Grove Base Shield via the Grove connectors while the moisture sensor is connected using jumper cables.

This is the finished product. The Arduino is hidden away within the box. The top part is actually a movable flap that gives me quick access to the hardware. The OLED display is placed on the side, stuck to two transparent panel bricks with double-sided tape. A roof was added to “protect” the display from water splashing on it from the top. Above the display is the button that lets me turn the display on for a quick glance at the moisture value.

At the back of the housing, there’s a compartment for the buzzer. Lego bricks that look like a window create an opening for the sound to exit instead of sounding muffled.

Once the housing was done, the project was finally deployed. But first, the sensitive part of the moisture sensor had to be protected from water, so I modified an anti-static bag and wrapped it around the top part of the sensor with a decent amount of sticky tape.

Here is the OLED display in action after the button is pressed, showing the current soil moisture level. Based on the value, it looks like I don’t need to water the plant just yet.

An IoT project: Location check-in and temperature recording using facial recognition

With the COVID-19 pandemic, temperature checking before you enter a premises is a necessary step to ensure you are well and to help keep others safe. In Singapore, it is a government requirement.

One of the most common approaches is to use a handheld digital thermometer to take your own temperature and record it in a logbook placed near the front door. In a workplace, employees are encouraged to record their temperature twice a day.

This has two potential issues.

The first issue is the number of surfaces a person comes into contact with: the thermometer, the logbook and the stationery used for recording.

The second issue is the logbook itself. It is a collection of papers printed with tables for people to put their entries in. It can get very unwieldy as people flip through the pages to locate an empty slot for their first entry of the day, or to find their earlier entry when recording their temperature for the second time.

How are we going to solve this?

The general idea goes like this:

  1. Detect a face
  2. Check that the person is in the database
  3. If the person is in the system, record their temperature and create an entry with the current timestamp in the system.
  4. Otherwise, notify the admin of the unknown person via email with an attached form to register the person
  5. Once the person is registered, they will be checked in.
  6. The data will be stored locally for future retrieval.

With the general idea in mind, the next thing to decide on is the hardware that we will be using to implement this solution.

There are many great options these days, as computing technology has come so far: a tablet such as an iPad, a Raspberry Pi, an Nvidia Jetson, or even a decent laptop.

An iPad is an expensive solution. The cheapest model costs US$329. Furthermore, there’s only a limited set of hardware we can use with the iPad to capture and record temperatures. One such accessory is the FLIR ONE for iOS Personal Thermal Imager. This accessory is expensive, costing US$260, and is not necessarily available for purchase in Singapore without some kind of import restriction. However, it is probably the option that requires the least amount of boilerplate work, since Apple has created some of the best APIs for facial recognition and machine learning.

The Nvidia Jetson is another possible option. It costs about the same as a Raspberry Pi and comes with good software support. The hardware comes with a 128-core GPU that can easily churn through images and video feeds without any issue. There’s also strong community support, which makes it easier to find information and troubleshoot issues.

The Raspberry Pi ranks about the same as the Nvidia Jetson in terms of price and purpose. However, there are a few areas where the Raspberry Pi, especially version 4, edges out the Jetson. The first is its faster, newer and more efficient CPU. The second is power consumption: the Raspberry Pi 4 consumes at most 3.4 watts, compared to 10 watts on the Nvidia Jetson. One could attribute that to the less powerful but efficient Broadcom VideoCore on the Raspberry Pi. Lastly, there is the size: the Nvidia Jetson is rather big because it needs to support a large heatsink, and it needs a bigger case to house it.

The Hardware and Software

The Raspberry Pi 4 won out in the end because we don’t need the heavy AI or robotics work the Nvidia Jetson is intended for. Furthermore, I can always reuse the Raspberry Pi for future projects that are more general purpose. Lastly, it is also cheaper than an iPad with the FLIR camera accessory, even after taking into account the camera, sensor and housing we have to buy.

Since I got the Raspberry Pi 4 as a hobbyist set from Labists, it came with Raspberry Pi OS. A quick setup process was all it needed to get started.

For the camera, I went with the Raspberry Pi NoIR Camera module to capture the video feed and do facial recognition. The initial reason was that I had assumed it could pick up on temperature differences. However, I was proven wrong when further research showed that the lack of an IR filter on the camera is to let it see better in low-light conditions. Still, I saw an opportunity here: I could deploy this solution for use at night or in areas with poor lighting.

Since I still needed a way to pick up temperatures accurately, several hours of research pointed me to the AMG8833 thermal sensor. I found one from SparkFun and it was readily available locally. There are other more powerful and probably more accurate thermal sensors and cameras, such as the SparkFun IR Array Breakout – MLX90640, but they cost more and some are out of stock.

Now that we’ve got the hardware figured out, we need to determine what kind of framework or software we can use for the facial recognition part.

I decided upon OpenCV as it was something I’m familiar with, and it comes with good support for the Raspberry Pi. A quick Google search will give you a number of results.

The latest version of Python 3 (v3.9 at the time of writing) was used.

The following libraries were also installed to reduce the need to write boilerplate code just to get the facial recognition functionality working:

Implementation of the facial recognition, thermal reading and check-in

You can see the architecture diagram for the solution below.

From the architecture, we can see that there are two facial recognition-related modules running on the Raspberry Pi. I implemented them based on the solutions provided by Adrian Rosebrock on his website.

For detecting faces, the Haar cascade classifier is used, with the k-NN algorithm for classification.
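
As a rough illustration of the detection step with OpenCV's bundled Haar cascade (the k-NN identification step against the face/name store is separate and not shown):

    import cv2

    # load OpenCV's bundled frontal-face Haar cascade
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame):
        """Return bounding boxes (x, y, w, h) for faces in a BGR frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)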

The Face/Name Detector is responsible for building up the face/name store, which is then used for check-in via face identification. It scans a specific folder. Within this folder are sub-folders, each named after a person and containing his or her pictures. Below is an example of the folder structure.
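
(The folder and person names here are made up for illustration.)

    dataset/
      alice/
        photo_01.jpg
        photo_02.jpg
      bob/
        photo_01.jpg
        photo_02.jpg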

The Face Recognition with Check-in module does as its name suggests. It pulls the feed from the camera and checks each frame for a face. Once it finds one, it checks it against the face/name store. Only then does it read the temperature detected by the thermal sensor and record the person in the temperature store, which is nothing more than a .csv file.
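
Appending a check-in entry to that .csv file could be as simple as the following sketch; the column layout is an assumption:

    import csv
    from datetime import datetime

    def record_checkin(path, name, temperature):
        """Append one check-in row: name, temperature, timestamp."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(
                [name, f"{temperature:.1f}", datetime.now().isoformat()])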

When it comes to the thermal sensor, I relied on information gleaned from this site. The GitHub repo provided the source code necessary to work with the sensor and formed the basis of my implementation.

Once the system detects a known face and records the person's temperature, it stores the entry in the temperature store. Below you can find the flowchart that describes the execution.

And for those who like to read source code, the project's code can also be found here.

Building the housing

I had a rough plan for how to house the hardware. It goes like this:

  1. Need a compartment to house the Raspberry Pi
  2. Camera should be placed high enough
  3. Thermal sensor should have a clear view
  4. It should hold a display.

There is always the option of using a 3D printer, but I knew it would probably take me way too long to figure out how to put everything together. So I went with the one thing I know: LEGO. It gives me the flexibility to change my design whenever the situation demands it, while also giving me the power to make whatever I want. All I needed was the right kind of bricks in the right amount.

Below is the first prototype. Its sole purpose was to house the camera and Raspberry Pi.

The above design cannot scale, but it gave me a foundation to build on. I changed the Raspberry Pi housing and added more compartments to house the 7-inch touchscreen display and the thermal sensor.

It was during this time that I realised I was lacking the right type of jumper cables to wire the display to the Raspberry Pi. I brought out a breadboard I had purchased earlier to use as the connection hub and built a separate compartment to house it, connecting all the components together via wires. This is how the following design came about.

Testing

Since this is meant as a prototype and for learning purposes, I didn’t get around to unit testing and all that stuff. My focus was only on getting things to work, so I only did some application testing to make sure whatever I implemented was working.

Since I was developing primarily on my MacBook Pro, I tested the face identification implementation there, with the webcam as the source of the video feed, before deploying it onto the Raspberry Pi.

After the thermal sensor was installed, I got to see for myself how effective it was. The reference repo contains two example Python modules (IR_cam_interp.py and IR_cam_test.py) that I could use to check that the thermal sensor was working.

Using the modules, the sensor picked up my skin temperature as around 34 degrees Celsius at about 10 cm. Going slightly further away, at 15 cm, the detected temperature dropped to about 33 degrees Celsius. Any further and it becomes hard to get an accurate reading without adding some kind of offset value. Thus far, I have tried 2.5, 6.0 and 8.0; the last gave the best accuracy. But this also means that placing the thermal sensor on top of the camera isn’t really a good implementation.
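
For illustration, applying such an offset to the hottest pixel of the sensor's 8x8 frame might look like this. Note that this uses Adafruit's adafruit-circuitpython-amg88xx library rather than the repo's modules, and 8.0 is simply the offset that worked best for me:

    import board
    import busio
    import adafruit_amg88xx

    OFFSET = 8.0  # degrees Celsius added to compensate for distance

    i2c = busio.I2C(board.SCL, board.SDA)
    sensor = adafruit_amg88xx.AMG88XX(i2c)

    def read_skin_temperature():
        """Return the hottest pixel in the 8x8 frame, offset-corrected."""
        hottest = max(max(row) for row in sensor.pixels)
        return hottest + OFFSET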

What’s next?

Since the thermal sensor doesn’t give very precise readings for faraway objects like a human body, another approach is required. Below you can see a drawing of the improvement that I could make.

Other than that, the solution also lacks the ability to register a new or unknown person into the system via some type of web interface or from another computer. For now, I manually upload the pictures and run the Face/Name Detector to build the database.

Last but not least, I could also consider adding backend support via Microsoft Azure to collect the recorded temperatures, alert the admin to new unknown persons, and enable downloading of the stored records.