Mmmm, there seems to be a shortcut to valuable insights. A lot faster than deep learning, and potentially very powerful. Its origins lie not in cognitive biology or neural nets, but in politics.
The short cut is called “alternative facts”: https://www.theguardian.com/us-news/2017/jan/22/donald-trump-kellyanne-conway-inauguration-alternative-facts
Making facts great again? Better not post too much on this competing practice.
Back pressure is often mentioned in the context of data streams (TCP, for example). It refers to a mechanism by which a receiving component signals that the incoming data stream should slow down. Apache Flink, for example, handles back pressure well. But that is not what this post is about. It is about adding extra loss functions to neural nets, and how that can improve overall performance.
Consider the picture at the end of the post. This picture is part of a Kaggle competition. (Note: I heavily cropped and blurred the image to FUBAR it and avoid any rights or recognition issues.) The goal of the competition is to use machine learning to classify the fish being caught.
Here is the remarkable thing. One could straightforwardly classify the picture as is using deep learning. The net would figure out that the fish is the relevant part, and it would then figure out what type of fish it is, i.e. classify it. Alternatively, one could first extract the fish from the picture, and then classify the extracted, hopefully fishy, image segment using a separate neural net.
Here it comes. Adding extra bounding-box parameters to the initial fish classification net actually improves the classification rate. Yes, a net that only classifies fish classifies less accurately than a net that both classifies the fish and learns the bounding box.
By providing richer feedback about the subject, the internal structure of the net seems to get better. For lack of a better word, I call it back pressure. Counterintuitive. More is less; error, that is.
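The idea can be sketched as a multi-task net: a shared backbone feeding two heads, one for the species logits and one for the bounding box, with the two losses simply added. This is a minimal illustrative sketch in PyTorch, not the competition code; the tiny backbone, the class count, the dummy data, and the 0.5 loss weight are all assumptions for the example.

```python
import torch
import torch.nn as nn

class FishNet(nn.Module):
    """Shared backbone with two heads: species logits and a bounding box."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)  # which fish
        self.bbox_head = nn.Linear(32, 4)             # x, y, w, h (normalized)

    def forward(self, x):
        features = self.backbone(x)
        return self.classifier(features), self.bbox_head(features)

model = FishNet()
images = torch.randn(4, 3, 64, 64)   # dummy batch of images
labels = torch.randint(0, 8, (4,))   # dummy species labels
boxes = torch.rand(4, 4)             # dummy ground-truth boxes

logits, pred_boxes = model(images)
# The bounding-box loss is the extra "back pressure": a second error
# signal flowing into the shared backbone alongside the class loss.
loss = nn.functional.cross_entropy(logits, labels) \
     + 0.5 * nn.functional.smooth_l1_loss(pred_boxes, boxes)
loss.backward()
```

Both losses backpropagate through the same backbone, so the bounding-box targets push the shared features toward the fish even when the classifier alone would not.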
So sometimes you know you shouldn't do stuff. Like buy a big PC. I mean, that is totally not online and not virtual. So why did I do it? Well, I usually work on a 13″ PowerBook. Good for a lot of stuff, but not if you want to run VMs. Apple brought out a new Mac Pro, which is kind of cool. But then I would somewhat miss Linux, or more precisely the Debian package system. Installing binaries on a Mac is not always that smooth. Ports, brew, by hand: sometimes there are chains of dependencies that require a lot of work. So what to do?
I say: bring the beast. And for those that do not know, that is: bring the HP Z800.
Of course it looks and sounds like Darth Vader. But then tool-less. This workhorse is the minion of many a quant or video editor. With a bit of haggling, this is what I got for the price of a big iPad.
Yes, you are right: that is no spaghetti there (I will post on that later). Would I not rather have the iPad? Nah, I have real needs that need satisfying. To cut a long story short: I can now run a 4-VM Hadoop cluster with no sweat. To be honest, it is a bit disappointing. If you have a Lamborghini, you want to hear the engine roar. Starting up a cluster of 4 nodes: not a hitch. Nothing, just some spaghetti if you ask for it. Plenty of room for Eclipse, NetBeans, databases, Chrome, whatever.
I am going to say it: 24 cores will always be enough. Well, for development, that is.