Thursday, June 25, 2020

Using the VisionAI DevKit to capture photos to Azure regularly

Overview

I've been playing lately with several IoT devices, from the Raspberry Pi to the Jetson Nano, and one (very simple) pet project I wanted to set up was to have one of these devices capture photos and upload them to an Azure Storage Account, so I could then build time lapses or use the photos for home security monitoring.
The issue with these small devices is that they're fiddly - they need enclosures and power, have to be positioned correctly, can be fragile, etc. - so I decided to use my Vision AI DevKit for this instead.

The camera is sturdy and has a wide-angle lens, the position is adjustable, the base is heavy and stable, it has built-in Wifi, runs Azure IoT Edge and can be powered over USB-C. It also has built-in ML capabilities (it can run neural network models quite fast on a Qualcomm Snapdragon chip), but I don't actually want to run any ML here, just upload photos regularly. The main downside is that it uses more power than my Raspberry Pi Zero with a camera.

For my time-lapse use case I need photos at regular intervals, while for the security one I want photos uploaded as soon as they are taken (for both I assume power and Wifi are always on). For this reason I decided not to do any local processing: just upload the photos ASAP and process them in Azure later. I would save bandwidth by doing the processing on the camera, but bandwidth isn't really an issue here.

Starting point

I started with one of the Community Projects available for the camera, the Intelligent Alarm by my Microsoft colleague Marek Lani. His project is entirely on GitHub, and it's a more complex setup than what I need: he does object recognition on the edge as a trigger for a photo upload, which I don't want to do. Helpfully, he has a separate repo on GitHub for the image capture part of his project: https://github.com/MarekLani/VisionDevKit_CaptureModule . This is relevant because he captures images from the camera using ffmpeg over the built-in RTSP video feed (calling it from NodeJS), instead of using the SDK's capabilities to take photos. Doing the latter can mess up local ML processing and require a reboot of the camera. So my work was reduced to adapting his code to my scenario.
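
To illustrate the approach (my sketch, not Marek's exact code), grabbing a single frame from the RTSP feed with ffmpeg from NodeJS looks roughly like this; the RTSP_* environment variables and the -rtsp_transport tcp flag come up later in this post, while the output path is illustrative:

// Sketch: capture one frame from the camera's RTSP feed by shelling out to ffmpeg
var execFile = require('child_process').execFile;

var rtspUrl = `rtsp://${process.env.RTSP_IP}:${process.env.RTSP_PORT}/${process.env.RTSP_PATH}`;

// -rtsp_transport tcp is needed to obtain all the packets properly;
// -vframes 1 stops after a single video frame, -y overwrites the output file
execFile('ffmpeg', ['-rtsp_transport', 'tcp', '-i', rtspUrl, '-vframes', '1', '-y', '/tmp/capture.jpg'],
    function (error) {
        if (error) console.error('capture failed:', error);
        else console.log('frame saved to /tmp/capture.jpg');
    });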

Code changes

- Modify the capture code

The first thing I did was to look at Marek's app.js file. His code captures a photo whenever a message is received from IoT Hub (more precisely, from another container/module running on the camera). I commented out all of this block (the pipeMessage function and the whole IoT Hub block starting with Client.fromEnvironment) and replaced it with a direct call to the function that invokes ffmpeg to capture the photo: TakeAndUploadCaptureFromStream(); . The result is sketched below.
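
As a rough sketch (the exact code is in Marek's repo), the top-level flow of my modified app.js now boils down to something like this:

// app.js (sketch): with the IoT Hub plumbing commented out, each run of the
// script just captures one photo and uploads it.

// function pipeMessage(client, inputName, msg) { ... }        // commented out
// Client.fromEnvironment(Transport, function (err, client) {  // commented out
//     ...
// });

// TakeAndUploadCaptureFromStream() is Marek's function that shells out to
// ffmpeg against the RTSP feed and uploads the result to blob storage.
TakeAndUploadCaptureFromStream();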

The second thing was to find a way to call this code regularly. The classic solution is to use cron, and that's what I did, following some hints from this thread on Stackoverflow. So here are the steps:

- Created a cronjobs file with content:

* * * * * node /app/app.js > /dev/stdout

This schedule means the command is called once per minute. The redirect isn't actually doing what I want: I'd like the output to go to the docker log, but I get an error when I use "> /proc/1/fd/1 2>/proc/1/fd/2". Something to come back to.
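
For context, this is standard cron syntax (the five fields are minute, hour, day of month, month and day of week), so capturing less often is just a matter of changing the first field, e.g.:

# every 5 minutes instead of every minute:
*/5 * * * * node /app/app.js > /dev/stdout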

- Modified the Dockerfile.arm32v7 to contain:

FROM arm32v7/node:12-alpine3.11

WORKDIR /app/

RUN apk add --no-cache ffmpeg

COPY package*.json ./

RUN npm install --production

COPY app.js ./

# copy crontabs for root user
COPY cronjobs /etc/crontabs/root

USER root

# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]


The changes were:
  • changed the base image to an alpine-based one (which includes crond)
  • used apk to install ffmpeg instead of apt-get
  • changed the startup command to run crond in the foreground.
And that was it. I already had an IoT Hub provisioned and the camera registered as an IoT Edge device, as well as an Azure Container Registry to host my container images and a Storage Account to drop the photos in, so I just had to do the following (rough commands sketched after the list):
  1. Build the container (docker build)
  2. Tag it with the ACR URL (docker tag)
  3. Push to my ACR (docker push)
  4. Add a module to the edge device configuration (Azure Portal > IoT Hub > my IoT Edge device > Set Modules), remembering to specify the required environment variables: RTSP_IP, RTSP_PORT, RTSP_PATH, STORAGE_CONTAINER and AZURE_STORAGE_CONNECTION_STRING.
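
For reference, steps 1-3 look roughly like this; myacr and capturemodule are placeholder names, not the ones I actually used:

# assumes a prior 'az acr login --name myacr' or 'docker login myacr.azurecr.io'
docker build -f Dockerfile.arm32v7 -t capturemodule .
docker tag capturemodule myacr.azurecr.io/capturemodule:1.0
docker push myacr.azurecr.io/capturemodule:1.0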
After giving the IoT Edge runtime a few minutes to download the image and start the container, my Azure Storage Account started showing the incoming photos:

And this is what is running on the device:

Which matches the configuration on Azure IoT Edge:

Next Steps

After this base setup, my next step is to trigger the execution of an Azure Function or Azure Logic App on a schedule to compare the last two images and check for deltas, or to check whether a photo is missing (indicating the camera is possibly off), triggering an email alert in that case. I already have some code to do image processing in an Azure Function (GitHub repo here), which will help. A rough sketch of the missing-photo check follows.
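
This is only my assumption of how that check could look, using the same azure-storage package as the capture module; the 5-minute threshold is a placeholder, and the folder naming matches the scheme described in the EDIT section below:

// Sketch: list today's blobs and alert if the newest one is too old
var azure = require('azure-storage');

var blobService = azure.createBlobService(process.env.AZURE_STORAGE_CONNECTION_STRING);
var container = process.env.STORAGE_CONTAINER;

// today's folder prefix, e.g. "20200627"
var prefix = new Date().toISOString().replace(/-/g, '').split('T')[0];

blobService.listBlobsSegmentedWithPrefix(container, prefix, null, function (error, result) {
    if (error) { console.error(error); return; }
    // newest upload time among today's photos
    var newest = result.entries
        .map(function (e) { return new Date(e.lastModified); })
        .sort(function (a, b) { return b - a; })[0];
    if (!newest || (Date.now() - newest) > 5 * 60 * 1000) {
        console.log('No photo in the last 5 minutes - camera may be off, trigger the alert here');
    }
});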

Hope this helps someone, and thanks to Marek Lani for his work on the Intelligent Alarm sample.

**EDIT**

It turns out I had to iron out a couple of glitches over the last few days.

The first was this: after 6-7 hours of image capturing, the AIVisionDevKitGetStartedModule module would stop working; the container would die and restarting it didn't change the situation. Because the capture module depends on the RTSP stream this module exposes, it would also stop. The problem, it turns out, was disk space: something is filling up the /tmp folder with core* files. My first thought was to again use a cronjob, but cron is read-only on the device, so I went the manual way:

- created a file remove_temp_files.sh in folder /run/user/0, with this content:

#!/bin/bash

while true
do
    rm -f /var/volatile/tmp/core*
    sleep 5m
done

This simply deletes the core* files every 5 minutes. I then did a chmod +x on the file and ran it in the background with ./remove_temp_files.sh & . This is not perfect, however: I'll have to run it again every time the device reboots.

The second change I made was to organize the files into a folder per day and include the date (in the format yyyymmdd-hhmmss) in their names. The changes here were in app.js and included:

- in function TakeAndUploadCaptureFromStream() use the following for the first four lines:

function TakeAndUploadCaptureFromStream()
{
  var rightnow = new Date();
  var folder = rightnow.toISOString().replace('-','').replace('-','').replace('T','-').replace(':','').replace(':','').split('.')[0]; // returns e.g. 20200627-112802

  // -rtsp_transport tcp parameter needed to obtain all the packets properly
  var fileName = `${folder}.jpg`;

and in uploadImageToBlob() modify this block to calculate the right folder name and use it:

....
if (!error) {
      var justdate = fileName.split('-')[0]; // returns e.g. 20200627 from 20200627-112802.jpg

      blobService.createBlockBlobFromLocalFile(storageContainer, justdate + "/" + fileName, fileName, function(error, result, response) {
...

With these changes, I now have a new folder per day, and the files have names like 20200627-120401.jpg, much simpler to read and understand.
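
The resulting layout in the Storage Account looks something like this (the container name and exact file names are illustrative):

photos/                      <- STORAGE_CONTAINER
    20200627/
        20200627-112802.jpg
        20200627-120401.jpg
    20200628/
        20200628-090000.jpg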
