
Android Sensor Integration Part 1: Sensor Stack and Kernel Module


This first part of a four-part series takes you on a walk through the integration of a proximity sensor into Android. We use the ultrasonic range sensor SRF02, connected to the I2C bus of a Pandaboard ES. Google has released some high-level documentation of how the sensor stack operates, while everything done by the hardware manufacturers is largely undocumented. This includes the Hardware Abstraction Layer (HAL) and the kernel driver that we want to look at. In this first part we start investigating what happens in kernel space.

Disclaimer: The integration described here was implemented for the Pandaboard ES Rev B3 with a Linaro Android 4.4.4 and the ultrasonic range sensor SRF02 connected to an I2C bus. The full source code is available on GitHub. However, this is written as a guideline for any standard sensor integration into Android. It should be considered a work in progress with several ugly details that I know have to be fixed. And they will be in the future – probably. Only the files I really had to change have been uploaded.

The Android Sensor Stack

Integrating an additional sensor touches only a small part of the complex processes in the Android sensor stack. We merely have to use an existing framework, but to really understand what happens between an application and the hardware we should take a look at the big picture.

A schematic explanation of the Android sensor stack
Source: https://source.android.com/devices/sensors/sensor-stack.html

As an app developer you may know the SensorManager. It is your one-stop shop for getting access to all available sensors and registering listeners to pick up their data. The SensorManager is written in Java and is part of the application framework. It forwards the Java calls via the Java Native Interface (JNI) to a native C++ SensorManager library. This library accesses the SensorService, one of the system-level services: long-running background operations such as controlling our sensor through the methods provided by the HAL. In the HAL two areas are relevant: sensors.h/cpp, where all available sensors are listed together with methods to poll their data, and the sensor class itself, which implements these methods by accessing the interface a kernel module provides. So let's start our journey at the kernel module level.

A schematic explanation of the Android Sensor Sub System
Source: http://processors.wiki.ti.com/index.php/Android_Sensor_PortingGuide

Diving into Kernel Space

First we'll need a driver to communicate with the sensor over I2C. This includes an option to enable or disable the sensor, start a cyclic measurement and read values. For this, the virtual file system sysfs (everything under /sys/) will be used. It exposes information about hardware devices and their drivers to userspace and lets us configure the devices from there. To start with, we'll create a sysfs entry for the sensor. When we write '1' to this file, a cyclic routine starts triggering the sensor and saves the last value, so reading this file always returns the latest measurement. When we write '0', the routine simply stops. This cyclic routine is driven by a delayed work queue: when the sensor is enabled, a work item is queued that calls the measurement function after a defined delay, and the ranging function writes itself back into the queue until we disable this mode. With that we are nearly done with the implementation, but the HAL is designed to look for input events, so we have to provide them. We'll use the input subsystem to generate an event each time a measurement is completed.
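
As a rough sketch of that cycle – the names srf02_work, srf02_ranging(), srf02_set_enable() and the 100 ms period are illustrative assumptions, not taken from the actual driver – it could look like this:

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/input.h>

static struct delayed_work srf02_work;     /* drives the cyclic measurement */
static struct input_dev *srf02_input_dev;  /* registered with the input subsystem later */
static bool srf02_enabled;

#define SRF02_PERIOD msecs_to_jiffies(100) /* illustrative measurement period */

/* work handler: triggers one measurement and re-queues itself */
static void srf02_ranging(struct work_struct *work)
{
    int distance_cm = 0;

    /* trigger a ranging over I2C and read back the result (omitted here) */

    /* report the value as an input event so it can be picked up later */
    input_report_abs(srf02_input_dev, ABS_DISTANCE, distance_cm);
    input_sync(srf02_input_dev);

    if (srf02_enabled)
        schedule_delayed_work(&srf02_work, SRF02_PERIOD);
}

/* called when '1' or '0' is written to the sysfs entry;
 * INIT_DELAYED_WORK(&srf02_work, srf02_ranging) is done once during setup */
static void srf02_set_enable(bool enable)
{
    srf02_enabled = enable;
    if (enable)
        schedule_delayed_work(&srf02_work, SRF02_PERIOD);
    else
        cancel_delayed_work_sync(&srf02_work);
}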

Initializing the Kernel Module

Kernel modules can be dynamically loaded into and removed from the kernel. That is why we need to define one function that is called when the module is loaded and initializes everything the driver needs, and one that cleans up when the module is removed. With module_init() and module_exit() we declare which functions do these jobs.
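
As a minimal skeleton – the function names srf02_init and srf02_exit are placeholders, not necessarily those used in the actual module:

#include <linux/module.h>
#include <linux/init.h>

static int __init srf02_init(void)
{
    /* allocate the character device, create sysfs entries,
     * register the I2C client driver, ... */
    return 0;
}

static void __exit srf02_exit(void)
{
    /* unregister the I2C driver, remove sysfs entries,
     * free the character device, ... */
}

module_init(srf02_init);
module_exit(srf02_exit);
MODULE_LICENSE("GPL");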

The driver we initialize in this init function is a client driver for the I2C core driver, so we do not need to implement the protocol ourselves. We do, however, have to allocate space for the character device using an automatically assigned device number (alloc_chrdev_region(&dev_num, 0, 1, DEVICE_NAME);), which is split into major/minor numbers by major_number = MAJOR(dev_num) and minor_number = MINOR(dev_num) respectively. In the long run, both are supposed to be replaced by plain dev_t device numbers.
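
In the init function this amounts to something like the following; dev_num, DEVICE_NAME and the error handling are illustrative:

#include <linux/fs.h>      /* alloc_chrdev_region */
#include <linux/kdev_t.h>  /* MAJOR, MINOR */

#define DEVICE_NAME "srf02"

static dev_t dev_num;          /* dynamically assigned device number */
static int major_number;
static int minor_number;

static int srf02_alloc_dev_number(void)
{
    /* ask the kernel for one free device number for DEVICE_NAME */
    int ret = alloc_chrdev_region(&dev_num, 0, 1, DEVICE_NAME);
    if (ret < 0)
        return ret;

    major_number = MAJOR(dev_num);
    minor_number = MINOR(dev_num);
    return 0;
}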

The major and minor number of a device identify a file in /dev through which userspace reaches the right driver; such a file is a virtual layer between userspace and the device drivers in the kernel. This file is created automatically when we add our character device to the system. To do so, a cdev structure, which represents a character device within the kernel, is allocated, and with cdev_add() the device this structure represents is added to the system.
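
Sketched with the names from above – srf02_fops is the file_operations structure discussed at the end of this post:

#include <linux/cdev.h>
#include <linux/module.h>

static struct cdev srf02_cdev;                   /* kernel-side representation of our char device */
static const struct file_operations srf02_fops;  /* defined further down in the driver */

static int srf02_register_cdev(void)
{
    cdev_init(&srf02_cdev, &srf02_fops);
    srf02_cdev.owner = THIS_MODULE;

    /* make the device known to the kernel under the allocated device number */
    return cdev_add(&srf02_cdev, dev_num, 1);
}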

Now the client driver has to be registered with the I2C core driver via i2c_add_driver(&srf02_i2c_driver); so that we can access the device through the existing methods of this protocol.
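
Extending the init and exit sketches from above, this is a single call in each; srf02_i2c_driver is the structure we look at in a moment:

#include <linux/i2c.h>
#include <linux/module.h>

static struct i2c_driver srf02_i2c_driver;   /* defined further down */

static int __init srf02_init(void)
{
    /* ... character device setup from above ... */

    /* register ourselves as a client driver with the I2C core */
    return i2c_add_driver(&srf02_i2c_driver);
}

static void __exit srf02_exit(void)
{
    /* unregister from the I2C core before tearing everything else down */
    i2c_del_driver(&srf02_i2c_driver);
}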

At this point we'll take a short detour to the board description file, where the device is registered under the address at which it shows up on the I2C bus. The I2C core driver learns about all devices registered there at system start time.
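
In the board file this typically looks like the sketch below. Bus number 4 and the 7-bit address 0x70 (the SRF02 default) are assumptions here and have to match the bus and address the sensor is actually wired to:

#include <linux/i2c.h>
#include <linux/kernel.h>   /* ARRAY_SIZE */
#include <linux/init.h>

/* the name "srf02" is what the driver will later match against */
static struct i2c_board_info srf02_board_info[] __initdata = {
    {
        I2C_BOARD_INFO("srf02", 0x70),
    },
};

static void __init board_add_srf02(void)
{
    /* tell the I2C core which bus the sensor sits on; evaluated at system start */
    i2c_register_board_info(4, srf02_board_info,
                            ARRAY_SIZE(srf02_board_info));
}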


After that, let's return to the driver and take a look at its i2c_driver structure. The routines for the I2C core driver are defined here, e.g. the probe() function. To successfully bind the driver to the device, the name both in this structure and in the id_table structure has to be equal to the name in the i2c_board_info structure. The address in the id_table and i2c_board_info structures must also be identical; otherwise the probe() function, in which the driver gets access to the hardware and important parts of sysfs are initialized, is never called.
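
A sketch of these structures; the field values shown ("srf02", the probe/remove handlers) follow the usual pattern, the authoritative version is in the GitHub sources:

#include <linux/i2c.h>
#include <linux/module.h>

static int srf02_probe(struct i2c_client *client,
                       const struct i2c_device_id *id)
{
    /* gain access to the hardware and create the sysfs attribute group */
    return 0;
}

static int srf02_remove(struct i2c_client *client)
{
    /* undo everything that was set up in probe() */
    return 0;
}

/* the name here must be identical to the one in the i2c_board_info entry */
static const struct i2c_device_id srf02_id[] = {
    { "srf02", 0 },
    { }
};
MODULE_DEVICE_TABLE(i2c, srf02_id);

static struct i2c_driver srf02_i2c_driver = {
    .driver = {
        .name  = "srf02",
        .owner = THIS_MODULE,
    },
    .probe    = srf02_probe,
    .remove   = srf02_remove,
    .id_table = srf02_id,
};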


Later on, almost all of the driver logic is controlled via sysfs, so we need to create entries for the driver there. We do so by creating an entry for the class with class_create(THIS_MODULE, DEVICE_NAME); and a device entry with device_create(srf02_class, NULL, dev_num, NULL, DEVICE_NAME);. However, these entries are not visible before a sysfs attribute group is initialized in the probe() function with sysfs_create_group(&client->dev.kobj, &srf02_attr_group);.
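
Put together, this part could look roughly like the following. The attribute name enable and its handlers are simplified placeholders; srf02_set_enable() and srf02_enabled come from the delayed-work sketch further up, dev_num and DEVICE_NAME from the character device setup:

#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/sysfs.h>
#include <linux/err.h>
#include <linux/i2c.h>

static struct class *srf02_class;

static ssize_t srf02_enable_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
{
    return sprintf(buf, "%d\n", srf02_enabled);
}

static ssize_t srf02_enable_store(struct device *dev,
                                  struct device_attribute *attr,
                                  const char *buf, size_t count)
{
    /* writing '1' starts the cyclic measurement, '0' stops it */
    srf02_set_enable(buf[0] == '1');
    return count;
}

static DEVICE_ATTR(enable, 0644, srf02_enable_show, srf02_enable_store);

static struct attribute *srf02_attributes[] = {
    &dev_attr_enable.attr,
    NULL,
};

static const struct attribute_group srf02_attr_group = {
    .attrs = srf02_attributes,
};

/* called from the module init function */
static int srf02_create_sysfs_entries(void)
{
    srf02_class = class_create(THIS_MODULE, DEVICE_NAME);
    if (IS_ERR(srf02_class))
        return PTR_ERR(srf02_class);

    device_create(srf02_class, NULL, dev_num, NULL, DEVICE_NAME);
    return 0;
}

/* called from probe(): only now do the attributes become visible in sysfs */
static int srf02_create_attr_group(struct i2c_client *client)
{
    return sysfs_create_group(&client->dev.kobj, &srf02_attr_group);
}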

Before creating a file in this sysfs group that represents the interface through which the HAL interacts with the sensor, we have to initialize the input subsystem for generating input events. We have to remember the name we give to this input device, because in the HAL we will look for events coming from a device with exactly that name. With input_set_abs_params() we set some parameters of the input device srf02_input_dev such as the event code, the minimum and maximum distance the sensor can capture, and the fuzz and flat values.
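
Setting up the input device could look like this. The name "srf02" and the 15–600 cm range are assumptions based on how the SRF02 is typically specified; srf02_input_dev is the pointer declared in the work-queue sketch above, and the important point is that the HAL must look for the same device name:

#include <linux/input.h>

static int srf02_init_input(void)
{
    int err;

    srf02_input_dev = input_allocate_device();
    if (!srf02_input_dev)
        return -ENOMEM;

    /* the HAL later scans the input devices for exactly this name */
    srf02_input_dev->name = "srf02";

    set_bit(EV_ABS, srf02_input_dev->evbit);
    /* event code, minimum, maximum, fuzz, flat – range in cm as an example */
    input_set_abs_params(srf02_input_dev, ABS_DISTANCE, 15, 600, 0, 0);

    err = input_register_device(srf02_input_dev);
    if (err) {
        input_free_device(srf02_input_dev);
        srf02_input_dev = NULL;
    }
    return err;
}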

Each driver needs functions to perform the various operations on the device; the file_operations structure holds pointers to them. Not every driver needs to implement all of them, though. In our case we have a character device, so only functions for reading from and writing to it, as well as for opening and releasing the device, are required.
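
A sketch with placeholder handlers; the actual read/write logic is part of the driver on GitHub:

#include <linux/fs.h>
#include <linux/module.h>

static int srf02_open(struct inode *inode, struct file *file)
{
    return 0;
}

static int srf02_release(struct inode *inode, struct file *file)
{
    return 0;
}

static ssize_t srf02_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
    /* copy the most recent distance value to userspace (omitted) */
    return 0;
}

static ssize_t srf02_write(struct file *file, const char __user *buf,
                           size_t count, loff_t *ppos)
{
    /* e.g. allow starting/stopping the measurement from here as well */
    return count;
}

static const struct file_operations srf02_fops = {
    .owner   = THIS_MODULE,
    .open    = srf02_open,
    .release = srf02_release,
    .read    = srf02_read,
    .write   = srf02_write,
};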

Stay tuned

This concludes Part 1 of our series on Android Sensor Integration. Return later this week for instructions on how to actually use the sensor for range measurement and process its output. In the meantime have a look at our homepage for more information on Android and IoT development.

