Monthly Archives: January 2017

Did I just get side-tracked?

Mmmm, there seems to be a short-cut to valuable insights. A lot faster than deep learning, and potentially very powerful. Its origins lie not in cognitive biology and neural nets, but rather in politics.

The short-cut is called “alternative facts”: https://www.theguardian.com/us-news/2017/jan/22/donald-trump-kellyanne-conway-inauguration-alternative-facts

Making facts great again? Better not post too much on this competing practice.

More is less: NN back pressure

Back pressure is often mentioned in the context of data streams (TCP, for example). It refers to a mechanism by which a data-receiving component signals that the incoming stream should slow down or speed up. Apache Flink, for instance, handles back pressure well. But that is not what this post is about. It is about adding extra loss functions to neural nets, and how that can improve overall performance.

Consider the picture at the end of the post. It is part of a Kaggle competition. (Note: I heavily cropped and blurred the image to FUBAR it and avoid any rights or recognition issues.) The goal of the competition is to use machine learning to classify the fish that is being caught.

Here is the remarkable thing. One could straightforwardly classify the picture as-is using deep learning. The net would figure out that the fish is the relevant part, and it would then figure out what type of fish it is, i.e. classify it. Alternatively, one could first extract the fish from the picture, and then classify the extracted, hopefully fishy, image segment using a separate neural net.

Here it comes. Adding extra bounding box parameters to the initial fish classification net actually improves the classification rate. Yes, a net that only classifies fish classifies less accurately than a net that classifies the fish and learns the bounding box.

By providing richer feedback about the subject, the internal structure of the net seems to become better. For lack of a better word, I call it back-pressure. Counter-intuitive. More is less; error, that is.
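To make the idea concrete, here is a minimal sketch in Keras of such a two-headed net: a shared convolutional trunk feeding both a fish-class softmax and a bounding-box regression head, trained jointly. The layer sizes, input shape, 8-class output and loss weights are illustrative assumptions, not the exact competition model.

# Sketch of a classification net with an extra bounding-box loss.
# Sizes and class count are assumptions, not the competition model.
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model

inp = Input(shape=(224, 224, 3))
x = Conv2D(32, (3, 3), activation='relu')(inp)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)

# Head 1: which fish is it (assumed 8 classes).
fish_class = Dense(8, activation='softmax', name='fish_class')(x)
# Head 2: where is it (x, y, width, height of the bounding box).
bbox = Dense(4, activation='linear', name='bbox')(x)

model = Model(inputs=inp, outputs=[fish_class, bbox])
# Both losses are minimised together; the bbox loss acts as the
# extra "back pressure" on the shared layers.
model.compile(optimizer='adam',
              loss={'fish_class': 'categorical_crossentropy',
                    'bbox': 'mse'},
              loss_weights={'fish_class': 1.0, 'bbox': 0.001})

Because both losses flow back through the shared trunk, the bounding-box target nudges those layers towards features that locate the fish, which is exactly the richer feedback described above.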

[Image: heavily cropped and blurred photo of a caught fish, from the Kaggle competition]

Having fun with VGG16

A long long time ago in deep learning time, and about two years ago in human years, I created a convolutional neural net for the Kaggle National Data Science Bowl. In a brave effort I ported a convolutional net for MNIST to the plankton dataset using Python and Theano. More or less a mano in Theano. It worked, and I ended up somewhere halfway down the field.

I remember being somewhat proud and somewhat confused. I spent a lot of time learning deep learning concepts, and a lot of time coding in Theano. There was not really time or grit left to improve the working model. Deep learning proved to be a lot of work.

Flash forward to today. Within a few hours I am running a better convolutional net on the same dataset. Nothing fancy yet, but it works; and I have good hopes because of VGG16. VGG16 is a one-time winner of the ImageNet competition. As it turns out, one can create a Franken-VGG16 by chopping off layers and retraining parts of the model specifically for the plankton dataset. Ergo, the feature learning based on approximately 1.5M images is reused. Just like word embeddings are available for download, feature filters will become available for all types of datasets. Progress.
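As a rough sketch of that Franken-VGG16 idea, the snippet below (written against the Keras 2 API) keeps the pretrained ImageNet filters, chops off the fully connected top, and trains a fresh head for the plankton classes. The input size, the 121-class output and the head layers are assumptions for illustration, not the exact setup I used.

# Sketch: reuse VGG16's ImageNet feature filters, train a new head
# for the plankton classes. Sizes and class count are assumptions.
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.models import Model

base = VGG16(weights='imagenet', include_top=False,
             input_shape=(224, 224, 3))

# Freeze the pretrained convolutional filters; only the new head trains.
for layer in base.layers:
    layer.trainable = False

x = Flatten()(base.output)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
out = Dense(121, activation='softmax')(x)  # assumed plankton class count

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Freezing the convolutional base means only the new dense head is trained at first; individual VGG16 blocks can be unfrozen later for fine-tuning on the plankton images.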

I added one of the plankton images as an example. The contest is aimed at identifying plankton species to assess the biodiversity of the oceans.

[Image: example plankton image 88393 from the dataset]

As mentioned before, Keras is just kick-ass for creating neural nets. The Keras API lets you quickly build a deep neural net and just get on with it. Very impressive. Also, many thanks to Jeremy Howard for democratizing deep learning.