Affective Robots: Emotionally Intelligent Machines

Automatic emotion recognition is an emerging area that leverages and combines knowledge from multiple fields such as machine learning, computer vision, and signal processing. It has potential applications in many areas, including healthcare, robotic assistance, education, market research, and advertising. Emotional information can also improve human-computer interaction, giving rise to Affective Computing, an interdisciplinary field that draws on psychology and cognitive science. The concept of "affective robots" refers to leveraging these emotional capabilities in humanoid robots so that they respond appropriately to the user's current mood and personality traits. In this article, we explore the emotion recognition capabilities of Pepper the robot and how they perform in contrast to other cutting-edge approaches.

Migrating an embedded Android setup: What could possibly go wrong? (Part 1)

Android updates are rare, especially for development boards. We were running such a deprecated board, once built to demonstrate our expertise in embedded Android. Since we didn't want to rely on a deprecated showcase, we decided to build a completely new setup that brings together the old showcase, a stock Android extended with a line LCD display (usable via an SDK add-on), and an integrated sensor, previously described here.

The Feedback Loop: Core Element of a Profitable Data Product

In my article on success factors, I identified five decisive elements for implementing data products. The most important one is the feedback loop: using the user's interaction with the service to improve the service itself or to generate input for new offerings. In this article, I would like to explain in more detail why this approach makes sense.

Powering a Data Hub at Otto Group BI with Schedoscope

In order to build data services or advanced machine learning models, organizations must integrate large amounts of information from diverse sources. As a central place to consolidate as many data sources as possible we often find what is fashionably called a data lake. Building a data lake usually starts by collecting as much data in raw form as possible. The idea is to give data scientists simple access to all available data so that they can combine information in ways not yet anticipated. Hadoop is the preferred choice for such a system because it is able to store vast amounts of data in a cost-efficient manner and is largely agnostic to structure.

Catch the inovex Tram [Giveaway]

Hooray, the time has come: for a few days now, the inovex tram has been riding through Karlsruhe (lines 1 to 6) – and it will continue to do so for a whole year. Reason enough to start a new giveaway in which you can win a digital gadget every month. Read on to find out how the game works and which prize awaits in the third round.

Data Products: 5 Success Factors Make the Difference

"Data-rich companies are not an economic threat, but rather are an important source of innovation."

Both in the US and in Europe there are considerations to regulate companies more strictly whose business models are built on data. Evidently, some companies have understood particularly well how to monetize data. There are, however, also many companies that try and are less successful. We therefore looked for the commonalities of successful data-centric business models and found five central success factors.

Causal Inference and Propensity Score Methods

In the field of machine learning, and particularly in supervised learning, correlation is crucial for predicting the target variable from the feature variables. Rarely do we think about causation and the actual effect of a single feature variable or covariate on the target or response. Some even go so far as to say that "correlation trumps causation", as in the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. Following their reasoning, with Big Data there is no need to think about causation anymore, since nonparametric models will do just fine using correlation alone. For many practical use cases, this point of view may seem acceptable — but surely not for all.
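To make the idea behind propensity score methods concrete, here is a minimal sketch of inverse probability weighting (IPW), one common propensity-score-based estimator of the average treatment effect. The data and propensity scores below are hypothetical toy values; in practice the scores e(x) would themselves be estimated from the covariates, e.g. with a logistic regression.

```python
# Toy data: each unit is (t, y, e) -- treatment indicator, observed
# outcome, and (here assumed known) propensity score e = P(t=1 | x).
data = [
    (1, 10.0, 0.8),
    (1, 12.0, 0.6),
    (0, 5.0, 0.8),
    (0, 6.0, 0.4),
    (1, 11.0, 0.4),
    (0, 4.0, 0.6),
]

def ipw_ate(rows):
    """Inverse-probability-weighted estimate of the average treatment effect.

    Treated outcomes are up-weighted by 1/e, control outcomes by 1/(1-e),
    so that both groups mimic the full population's covariate mix.
    """
    n = len(rows)
    treated = sum(t * y / e for t, y, e in rows) / n
    control = sum((1 - t) * y / (1 - e) for t, y, e in rows) / n
    return treated - control

print(ipw_ate(data))
```

Unlike a naive difference of group means, this estimator corrects for the fact that units with certain covariates are more likely to receive treatment, which is exactly the confounding that a purely correlational model ignores.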