My apologies. I seem to have totally ignored the existence of the Disqus comments on this blog. Yesterday, I stumbled upon an article showing all the tracking Disqus does on people, and I didn’t like that. I was only using it because I thought it was an easy way to have comments that looked like a forum. So, until I find a new way to add comments to this blog without the risk of people being tracked, it will stay comment-free. If you have a suggestion, please get in touch.
I’m a big fan of the Raspberry Pi Foundation and a user of their single-board computers as well. For the past two years, I worked on developing tiny, under-250 g, collision-resilient quadcopters that used a Raspberry Pi Zero W (RPI Zero W) as their main computer. The reasons I chose the RPI Zero W were size/weight, power consumption, price and the huge community of users. I even considered using the Banana Pi Zero because it had a faster CPU with more cores, but I gave up in favor of the RPI after talking to a friend who was struggling to set it up. Nowadays, I’m starting a new project on smart IoT sensors that, I hope, will help businesses in the tourism sector recover faster by understanding the flow of tourists while respecting people’s privacy. For that reason, I will need hardware that is low power, small, reasonably priced and well supported… the RPI Zero W was the first thing that came to my mind, but it is not powerful enough for some of the on-the-edge image processing I’m planning to do. One way to speed things up is to compile directly for (and on) the RPI Zero W. It’s possible to use cross-compilers, but I was having trouble cross-compiling the TensorFlow Lite runtime library, and that’s why I’m writing this post.
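To give an idea of what the natively built library buys you: once a tflite_runtime wheel is installed on the Pi, running inference takes only a few lines. This is just a minimal sketch; the model file name and the random input are placeholders of mine, not something from the actual build instructions.

```python
# Minimal sketch: inference with a natively compiled tflite_runtime on the Pi.
# "model.tflite" is a placeholder for whatever model you converted.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random tensor with the model's expected shape and dtype.
dummy = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```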
TL; DR: I like Google Colab (Colaboratory) and I use it quite a lot because that way I can work during the night without waking up my wife with my laptop’s crazily loud GPU fan noises. Not so long ago I wrote a post where I shared two notebooks that let you save images and sounds directly from your webcam / mic for use inside a Colab notebook.
Now, I have put everything together in a Python module and added a super cool way to label images directly from a Colab notebook! I’m not 100% sure, but I couldn’t find anything like it even after a lot of googling.
Here is an example where I added some labels to an image captured from my webcam using a Colab notebook:
For more details, I suggest you go straight to the colab_utils repo.
TL; DR: Singularity containers are like Docker containers that don’t force you to be root to run them. Ok, if you want a better explanation, I suggest this presentation or just try searching for it.
The very first example used to introduce neural nets to students nowadays is almost always something based on MNIST handwritten digits. Therefore, I decided to create an interactive notebook where you can draw your own digits to test your freshly trained neural net.
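For context, the network the drawn digits get fed to doesn’t need to be anything fancy. Here is a minimal Keras sketch of the kind of classifier I mean; the layer sizes and number of epochs are illustrative assumptions, not necessarily what the notebook uses.

```python
# Minimal sketch: a tiny MNIST classifier whose predict() you could call
# on a digit drawn inside the notebook.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

# A drawn digit would arrive as a 28x28 grayscale array scaled to [0, 1]:
drawn = np.zeros((1, 28, 28), dtype=np.float32)  # placeholder for the canvas
print("Predicted digit:", model.predict(drawn).argmax())
```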
I may be getting used to short posts, but here it comes: this will be another zippy one! The other day, I realized something quite interesting about the Jupyter notebook magic %load (which, in fact, comes from IPython…): you can use it with a URL!
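As a quick illustration (the URL below is only a placeholder; point it at any raw Python file reachable over HTTP), running a cell like this pulls the remote source straight into the cell:

```python
# In a Jupyter/IPython cell, %load also accepts a URL, not only a local path.
# After running the cell once, the magic comments itself out and the cell is
# filled with the downloaded source code.
%load https://raw.githubusercontent.com/user/repo/master/example.py
```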
Ok, this is a straight-to-the-point post! In previous posts I explained how to save an image directly from your webcam. However, that method used OpenCV, which can only access hardware connected to the host (where the Jupyter notebook server is running). One classic example where you can’t access a webcam directly is Google Colaboratory. As I said at the beginning, you can only reach the host’s hardware, so the microphone will not be available either. JavaScript to the rescue!
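The core trick is to run JavaScript in the browser (where the webcam actually lives) and ship the result back to the Python kernel. Below is a minimal sketch of that idea for a single snapshot; the function name, image quality and filename are my own choices, not necessarily what the post’s notebooks use.

```python
# Minimal sketch: grab one webcam frame from the browser in Google Colab
# and save it on the (remote) host. Names here are illustrative.
from base64 import b64decode
from IPython.display import Javascript, display
from google.colab.output import eval_js

def take_photo(filename='photo.jpg', quality=0.8):
    display(Javascript('''
      async function takePhoto(quality) {
        const video = document.createElement('video');
        const stream = await navigator.mediaDevices.getUserMedia({video: true});
        document.body.appendChild(video);
        video.srcObject = stream;
        await video.play();
        // Give the camera a moment, then draw the frame onto a canvas.
        await new Promise(resolve => setTimeout(resolve, 1000));
        const canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        stream.getVideoTracks()[0].stop();
        video.remove();
        return canvas.toDataURL('image/jpeg', quality);
      }
    '''))
    data_url = eval_js('takePhoto({})'.format(quality))
    binary = b64decode(data_url.split(',')[1])
    with open(filename, 'wb') as f:
        f.write(binary)
    return filename

# take_photo()  # run in a Colab cell; the browser will ask for camera access
```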
My last post was all about creating a TensorFlow Docker image that would work with OpenCV inside a Jupyter notebook, create external windows, access the webcam, save files as the host’s current user, etc. All that hard work had a reason: using the newest version of TensorFlow for computer vision. So, let’s try it!
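A quick sanity check inside that container (just a sketch; the exact versions obviously depend on the image you built) is to confirm that both libraries import and report their versions:

```python
# Sanity check inside the container: TensorFlow and OpenCV are both available.
import tensorflow as tf
import cv2

print("TensorFlow:", tf.__version__)
print("OpenCV:", cv2.__version__)
```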