Today I was trying to implement, using PyTorch, the Focal Loss (paperswithcode, original paper) for my semantic segmentation model. Focal Loss is “just” Cross Entropy Loss with some extra sauce: a parameter (γ) that lets you adjust how much weight you give to examples that are harder to classify; otherwise, your optimiser will focus on the easy examples because they have more impact on the loss. To save time, I didn’t even consider writing my own code (although the focal loss is fairly simple), and I went directly to google, where I found this nice example:
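The core of it boils down to something like this (a minimal sketch of my own, not the exact snippet I found; the function name and signature are mine): rescale the per-example cross entropy by (1 − pₜ)^γ, where pₜ is the probability the model assigns to the true class.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, reduction="mean"):
    """Focal loss as modulated cross entropy.

    logits:  (N, C) or (N, C, H, W) raw scores
    targets: (N,) or (N, H, W) integer class labels
    """
    # Per-element cross entropy, no reduction yet.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # pt is the probability of the true class (ce = -log(pt)).
    pt = torch.exp(-ce)
    # Down-weight easy examples (pt close to 1); gamma=0 recovers plain CE.
    loss = (1.0 - pt) ** gamma * ce
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss
```

A quick sanity check: with γ=0 it should match `F.cross_entropy` exactly, and with γ>0 the result can only shrink, since (1 − pₜ)^γ ≤ 1.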
Today I needed to check some connections on the PCB of my ST-Link V2 clone because I wanted to add the trace support following the nice explanations from here. However, my old brandless digital microscope (you know, they all look the same and come in a blue box…) refused to work. dmesg helped me find some repeated error messages (Failed to query (GET_DEF) UVC control 4 on unit 1 or Failed to query (GET_MIN) UVC control 4 on unit 1), and a little bit of google-fu did the rest. I found out my system was suffering from a problem with libwebcam0 and uvcdynctrl, and the log file /var/log/uvcdynctrl-udev.log was already at 68GB (?!?).
I learned this is a super old bug (first message is from 2011!) and it can slow down your system to a halt. Using apt show and the very useful apt-rdepends I noticed libwebcam0 and uvcdynctrl just depended on each other… so following the suggestion and removing libwebcam0, uvcdynctrl and uvcdynctrl-data solved my problem (sudo apt remove libwebcam0 uvcdynctrl uvcdynctrl-data).
I hope this blog post can help other people avoid spending time on google to solve the same 11-year-old problem…
UPDATE (29/01/2023): Ubuntu’s Cheese is sometimes too picky and stops the stream with an error message, so I suggest using ffplay instead. First, connect your microscope (webcam) and check the available devices (v4l2-ctl --list-devices):
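Something along these lines (the device node /dev/video2, the resolution and the pixel format below are just examples; use whatever --list-devices reports for your microscope):

```shell
# List the video devices to find the microscope's node
v4l2-ctl --list-devices

# Open the stream with ffplay (adjust the device node,
# format and size to match your camera)
ffplay -f v4l2 -input_format mjpeg -video_size 640x480 /dev/video2
```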
I use Google Colab all the time, but in many cases I also need to install something using apt-get. The problem is that sometimes you need to add a new repository, update, install… and then your notebook becomes full of text, which eats your memory (locally too, since the browser needs to render all of it). So, today I found a nice post explaining why the -qq argument may still leave some bits of text behind. You should go there and read it yourself, but I will copy some info here in case that website disappears.
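If I got the gist right, -qq quiets apt itself but dpkg (and any package prompts) can still write to the terminal, so you also need to redirect the streams. A sketch of what I ended up using (the package name is a placeholder):

```shell
# -qq silences apt's own progress output; redirecting stdout hides
# what dpkg and maintainer scripts still print, and redirecting stdin
# avoids anything waiting for interactive input
apt-get -qq update < /dev/null > /dev/null
apt-get -qq install -y some-package < /dev/null > /dev/null
```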
Today, I decided to close my LinkedIn account. I’d been mulling over the idea for a while, trying to weigh the PROS/CONS. The final result was that I couldn’t see much value in having that account, and I also realized I should be posting things on my own website instead.
I still have my twitter account (@ricarbotics), but I’m thinking about shutting it down as well.
After all my efforts, I couldn’t find another place where I can easily keep my professional contacts up to date (not everybody is on Twitter or has a personal website, a GitHub profile, or maybe even an email). So, for the time being, I will keep my LinkedIn account as an old business card binder…
I decided to re-open my account because it was still too visible on search engines and I became afraid someone could take over the name to create a fake account. So, now I’m back on LinkedIn, but without any contacts and with almost zero personal info available for the time being.
To be honest, after Musk’s takeover of Twitter, I started using LinkedIn more and it has been useful for learning about new papers, technologies, etc.
From time to time I have a project with some electronics that need testing. This weekend I was checking how to power my Maple Syrup Pi Camera with a solar panel. However, prototypes always have a chance of releasing the magic smoke, so it’s nice to be able to limit the current to avoid that fate. In addition to that, I already have a fancy soldering iron that is powered by USB-C, so why not a cordless power supply powered by USB-C too? Below you can see the result of my weekend tinkering.
I may write another post about this in the near future, but for now it will be yet another very-short-post™. I’m working with Tiny ML (or Edge AI, or simply trying to run complex stuff on not-so-great hardware) and, currently, my focus is on the Google Coral EdgeTPU. In general, I like Google, TensorFlow, etc., but a lot of the things they release are badly documented (or the documentation is just plain outdated) and others are simply overcomplicated (ok, that may be useful when many people work on the same codebase…). Sometimes I even think this is some sort of business strategy, because a gigantic company like Google couldn’t do these things by mistake, but who knows. So, back to TFLite models: most users know they are Flatbuffers, but it’s annoyingly hard to do simple things with them because you can’t find proper documentation (a Google search should ALWAYS return perfect results for Google’s own stuff, shouldn’t it????).
This is yet another very-short-post™. I really like VSCode because I think it speeds up lots of things. However, when I’m developing stuff on the Raspberry Pi, I would keep moving files back & forth, or I would just use vim. So, today I decided to google a little bit and I found a simple solution: sshfs
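The setup is basically one command (the host name and paths below are examples; use your Pi’s address and whichever directory you want to work on):

```shell
# Create a local mount point and mount the Pi's home directory over SSH
mkdir -p ~/rpi
sshfs pi@raspberrypi.local:/home/pi ~/rpi

# ...now open ~/rpi in VSCode and edit as if the files were local...

# Unmount when you are done
fusermount -u ~/rpi
```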
After all that story about the lawyer cat, I decided to try to make something interesting to use during webinars, virtual meetings, etc. With the help of the Google Coral Edge TPU USB Accelerator, it’s possible to run deep neural models at a very high framerate without the need of a GPU (and without all the noise coming from the cooling fans). Above, I’m using segmentation to transform myself into some sort of semi-invisible blob while showing the results from PoseNet.
My apologies. I seem to have totally ignored the existence of the Disqus comments in this blog. Yesterday, I randomly read some article showing all the tracking Disqus does on people and I didn’t like that. I was using it just because I thought it was an easy way to have comments that looked like a forum. So, until I find a new way to add comments to this blog without the risk of people being tracked, I will keep this blog without comments. If you have a suggestion, please, get in touch.
I’m a big fan of the Raspberry Pi Foundation and a user of their single-board computers as well. In the past two years, I worked on developing tiny, under-250g, collision-resilient quadcopters that had a Raspberry Pi Zero W (RPI Zero W) as their main computer. The reasons why I chose the RPI Zero W were size/weight, power consumption, price and the huge community of users. I even considered using the Banana Pi Zero because it had a faster CPU with more cores, but I gave up in favor of the RPI after talking to a friend who was struggling to set it up. Nowadays, I’m starting a new project on smart IoT sensors that, I hope, will help businesses in the tourism sector recover faster by understanding the flow of tourists while respecting people’s privacy. For that reason, I will need hardware that is low power, small, reasonably priced and with good support… the RPI Zero W was the first thing that came to my mind, but it is not powerful enough for some of the on-the-edge image processing I’m planning to do. One way to speed things up is to compile them directly for (or on) the RPI Zero W. Currently, it’s possible to use cross-compilers, but I was having trouble cross-compiling the TensorFlow Lite runtime library, and that’s why I’m writing this post.