Tag Archives: Deep Learning

Title to image search for improved thumbnail selection

Introduction

One of the key creative aspects of an advertisement is choosing the image that will appear alongside the advertisement text. The advertiser’s aim is to select an image that will draw the attention of users and get them to click, while remaining relevant to the advertisement text (naturally, an image of a cute puppy next to an advertisement about insurance doesn’t make much sense). Below are some examples of advertisements appearing in Taboola’s “Promoted Links” box. Notice that each advertisement contains both a title and an appealing image.


Examples of advertisements placed in Taboola’s “Promoted Links” box. Notice that each advertisement contains both a title and an appealing image.

Given an advertisement title (for example “15 healthy dishes you must try”), the advertiser has endless possibilities when choosing the image thumbnail to accompany it, some clearly more clickable than others. One can apply best practices in choosing the thumbnail, but manually searching for the best image (out of possibly thousands that fit a given title) is time consuming and impractical. Moreover, there is no clear way of quantifying how related a given image is to a title and, more importantly, how clickable the image is compared to other options.

To alleviate this problem, we developed a text-to-image search algorithm that, given a proposed title, scans an image gallery to find the most suitable images and estimates their expected click-through rate (a common marketing metric: the number of user clicks per a fixed number of advertisement displays).
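To give a rough sense of how such a system can be put together (this is a simplified, hypothetical sketch, not Taboola’s production algorithm), one common approach is to embed titles and images into a shared vector space and rank gallery images by their similarity to the title embedding; a separate regression model over the same features could then supply the predicted CTR. The encoders below are trivial placeholders; in practice the title encoder would be a trained text model and the image encoder a CNN.

```python
import zlib
import numpy as np

# Hypothetical sketch. In a real system encode_title would be a trained
# text model and encode_image a CNN, trained jointly so that matching
# title/image pairs land close together in the shared embedding space.
EMB_DIM = 128
rng = np.random.default_rng(0)

def encode_title(title):
    """Placeholder text encoder: hashed bag-of-words, L2-normalized."""
    vec = np.zeros(EMB_DIM)
    for token in title.lower().split():
        vec[zlib.crc32(token.encode()) % EMB_DIM] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

def encode_image(features):
    """Placeholder image encoder: assumes precomputed CNN features."""
    return features / (np.linalg.norm(features) + 1e-8)

def search(title, gallery, top_k=3):
    """Rank gallery images by cosine similarity to the title embedding."""
    q = encode_title(title)
    scored = [(name, float(q @ encode_image(f))) for name, f in gallery.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy gallery of precomputed "image features" (random stand-ins here).
gallery = {"img_%04d.jpg" % i: rng.normal(size=EMB_DIM) for i in range(1000)}
print(search("15 healthy dishes you must try", gallery))

# CTR itself is simply clicks / impressions: e.g. 30 clicks over
# 10,000 displays gives a CTR of 30 / 10000 = 0.3%.
```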

For example, here are the images returned by our algorithm for the query title “15 healthy dishes you must try” along with their predicted Click Through Rate (CTR):


Examples of images returned by our algorithm for the title query “15 healthy dishes you must try” along with their predicted Click Through Rate (CTR). Notice that the images nicely fit the semantics of the title.

Continue reading


Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns

In the last post we talked about age and gender classification from face images using deep convolutional neural networks. In this post we will show a similar approach for emotion recognition from face images, one that also makes use of a novel image representation based on mapping Local Binary Patterns to a 3D space suitable for fine-tuning deep convolutional neural networks [8]:


Local Binary Patterns (LBP) Mapping
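As a rough illustration of the representation (a simplified stand-in, not the exact mapping from the paper), the idea is to compute Local Binary Pattern codes over the face image and embed each of the 256 possible codes into a 3D point so that codes differing in few bits land close together; replacing every per-pixel code with its 3D coordinate then yields a three-channel image that an RGB-pretrained CNN can be fine-tuned on. The sketch below uses a plain Hamming distance between codes and off-the-shelf scikit-image/scikit-learn utilities, which may differ from the paper’s choices.

```python
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.manifold import MDS

# 1) Compute 8-neighbour LBP codes for a grayscale image.
#    (A stock scikit-image photo stands in for a face crop here.)
gray = rgb2gray(data.astronaut())
codes = local_binary_pattern(gray, P=8, R=1, method="default").astype(int)  # values 0..255

# 2) Embed the 256 possible LBP codes into 3D so that codes that differ
#    in few bits end up close together (simplified stand-in for the
#    paper's mapping; the distance and embedding may differ).
bits = (np.arange(256)[:, None] >> np.arange(8)) & 1            # 256 x 8 bit table
hamming = (bits[:, None, :] != bits[None, :, :]).sum(-1)        # pairwise code distances
code_to_rgb = MDS(n_components=3, dissimilarity="precomputed",
                  random_state=0).fit_transform(hamming)

# 3) Replace every LBP code with its 3D coordinate, giving a 3-channel
#    "image" that an RGB-pretrained CNN can be fine-tuned on.
mapped = code_to_rgb[codes]                                      # H x W x 3
print(mapped.shape)
```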

Our method was presented in the following paper:

Gil Levi and Tal Hassner, Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns, Proc. ACM International Conference on Multimodal Interaction (ICMI), Seattle, Nov. 2015

For code, models and examples, please see our project page.

Continue reading

Deep Learning 101 talk at DevCon 2016

At the recent DevCon conference I had the pleasure of giving an introductory talk on Deep Learning. The talk starts with a short theoretical overview, followed by a technical deep dive on how to train deep networks, with a few demos, practical examples and tips.
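As a taste of the “how to train” part, here is a minimal, framework-free sketch of the basic recipe the demos build on: a forward pass, a loss, backpropagated gradients and mini-batch SGD updates. The architecture and hyperparameters below are arbitrary illustrative choices and are not taken from the talk itself.

```python
import numpy as np

# Toy two-layer network trained with mini-batch SGD on a synthetic
# binary-classification problem (XOR-like labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]

W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr, batch = 0.1, 32

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    perm = rng.permutation(len(X))
    for i in range(0, len(X), batch):
        xb, yb = X[perm[i:i + batch]], y[perm[i:i + batch]]
        # forward pass
        h = np.tanh(xb @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass (gradient of the cross-entropy loss)
        dp = (p - yb) / len(xb)
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = (dp @ W2.T) * (1 - h ** 2)
        dW1, db1 = xb.T @ dh, dh.sum(0)
        # SGD parameter update
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

acc = ((sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5) == y).mean()
print("training accuracy: %.2f" % acc)
```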


The notebook used in the demo is available here and the various deep networks and definition files used to run the demo are available here.

Age and Gender Classification using Deep Convolutional Neural Networks

In the last few posts we mostly talked about binary image descriptors; the previous post in this line of work described our very own LATCH descriptor [1] and presented an evaluation of various binary and floating-point image descriptors. In the current post we will shift our attention to the field of Deep Learning and present our work on Age and Gender classification from face images using Deep Convolutional Neural Networks [2].


Example images from the AdienceFaces benchmark

Our method was presented in the following paper:

Gil Levi and Tal Hassner, Age and Gender Classification using Convolutional Neural Networks, IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG), at the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, June 2015.

For code, models and examples, please see our project page.

New! TensorFlow implementation of our method.

Continue reading