{"id":38815,"date":"2022-12-16T10:16:19","date_gmt":"2022-12-16T09:16:19","guid":{"rendered":"https:\/\/www.inovex.de\/?p=38815"},"modified":"2026-02-18T07:41:53","modified_gmt":"2026-02-18T06:41:53","slug":"how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/","title":{"rendered":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2)"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">Welcome to the second part of this blog series \u201cHow to use Google\u2019s ML Kit to enhance Pepper with AI\u201c! In case you missed the first part, I recommend you start reading <\/span><a href=\"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-1\"><span style=\"font-weight: 400;\">here<\/span><\/a><span style=\"font-weight: 400;\"> for an introduction on what we\u2019re building and how to start.<\/span><!--more--><\/p>\n<p><span style=\"font-weight: 400;\">In this article, we are going to look at something cool we can build by combining <a href=\"https:\/\/developers.google.com\/ml-kit\" target=\"_blank\" rel=\"noopener\">Google\u2019s ML Kit <\/a>with the humanoid-shaped social robot Pepper. Imagine you can ask the robot via natural language to go pick something up for you. For some reason, an all-time favorite and the most-asked question ever about Pepper is \u201cCan it bring me a coffee?\u201c While that is a considerably challenging endeavor that encompasses complex tasks in several areas of AI and robotics for which Pepper is not quite ready yet, we can start going a step in this direction by embedding image classification to recognize the objects around it or, better yet, object detection to identify the position of those objects in the room and also point at them.\u00a0 <\/span><span style=\"font-weight: 400;\">Have a look at this video illustrating what that looks like:<\/span><\/p>\n<div style=\"width: 640px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-38815-1\" width=\"640\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/see.mp4?_=1\" \/><a href=\"https:\/\/www.inovex.de\/wp-content\/uploads\/see.mp4\">https:\/\/www.inovex.de\/wp-content\/uploads\/see.mp4<\/a><\/video><\/div>\n<h2><\/h2>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#ML-Kits-Object-Detection-API\" >ML Kit&#8217;s Object Detection API<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#Custom-models\" >Custom models<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#Implementation\" >Implementation<\/a><ul class='ez-toc-list-level-3' ><li 
<h2>ML Kit's Object Detection API</h2>
<p>With this demo, our idea is to leverage object detection to recognize objects in an image together with their position, so that Pepper can localize them in a room. Pepper should be able to answer the question of which objects it can see and even point at them when asked to.</p>
<p>To simply categorize everyday objects, the base model of the Image Labeling API returns pretty good results. It is a general-purpose classifier that can identify objects, places, activities, animal species, and products out of more than 400 categories, and it takes approximately 200 ms for inference when run on Pepper. However, since we also want to know the position of the objects, Image Labeling is not enough. The <a href="https://developers.google.com/ml-kit/vision/object-detection" target="_blank" rel="noopener">Object Detection and Tracking API</a> is the right choice.</p>
<h3>Custom models</h3>
<p>The API offers two modes that are optimized for two core use cases: tracking the most prominent object across images and detecting multiple objects in a static image. Although it can optionally classify the detected objects, the base coarse classifier used by default and trained by Google is not enough for our use case, as it only distinguishes five broad categories: Place, Fashion good, Home good, Plant, and Food. As with Image Labeling, you can use the API either with the base models or with more targeted custom TensorFlow Lite models, which can be bundled with the app or downloaded from the cloud using Firebase. The APIs are compatible with a selection of pre-trained models published on TensorFlow Hub or with a custom model trained with TensorFlow, AutoML Vision Edge, or TensorFlow Lite Model Maker, provided it meets certain requirements.</p>
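<p>Bundling the model in the app's assets is what we do below; downloading it from Firebase only changes how the model object is created, while the detector options stay the same. A minimal sketch of what the Firebase variant could look like, assuming the linkfirebase dependency is added and the model is hosted under the (made-up) name "mobile_object_labeler":</p>
<pre class="theme:classic lang:java decode:true">// Hedged sketch: use a custom TFLite model hosted in Firebase ML instead of a bundled one.
// The model name and download conditions are assumptions, not taken from the original app.
val remoteModel = CustomRemoteModel.Builder(
    FirebaseModelSource.Builder("mobile_object_labeler").build()
).build()

val conditions = DownloadConditions.Builder()
    .requireWifi()
    .build()

RemoteModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // Once the download has finished, the remote model can replace the LocalModel
        val options = CustomObjectDetectorOptions.Builder(remoteModel)
            .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
            .enableMultipleObjects()
            .enableClassification()
            .build()
        val objectDetector = ObjectDetection.getClient(options)
    }</pre>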
<p>Google released a family of image classification models called <a href="https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html" target="_blank" rel="noopener">EfficientNet</a> in May 2019, which achieved state-of-the-art accuracy with an order of magnitude fewer computations and parameters, and later followed up with EfficientNet-Lite, which runs on <a href="https://www.tensorflow.org/lite" target="_blank" rel="noopener">TensorFlow Lite</a> and is designed for performance on mobile CPUs, GPUs, and the EdgeTPU. It brings the power of EfficientNet to edge devices and comes in five variants, from the low-latency, small-model-size option (EfficientNet-Lite0) to the high-accuracy option (EfficientNet-Lite4). The largest variant, integer-only quantized EfficientNet-Lite4, achieves 80.4 % ImageNet top-1 accuracy. However, running this model on Pepper's processor takes around 15 seconds per inference! Unfortunately, that is too long for any kind of interactive application, which is why we have to trade off accuracy and settle for one of the lower-accuracy (and smaller) variants. Even the B0 variant still has a latency of over one second, higher than earlier mobile models such as MobileNet V2, which makes the interaction less fluid. So although EfficientNet-Lite would also make a very good candidate for the job, we use an object labeler based on MobileNet V2, optimized for TFLite and trained by Google with quantization-aware training, as our custom model for the Object Detector. It yields pretty good results in about 0.8 seconds and can be found on <a href="https://tfhub.dev/google/lite-model/object_detection/mobile_object_labeler_v1/1" target="_blank" rel="noopener">TensorFlow Hub</a> as the "Google Mobile Object Labeler".</p>
<h2>Implementation</h2>
<p><a href="https://github.com/SilviaSantano/Pepper-and-MLKit" target="_blank" rel="noopener">Here</a> you can find the full code of the application we are building throughout this series.</p>
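<p>Before any of the snippets below compile, the ML Kit artifacts need to be on the classpath. As a hedged sketch (the exact versions used in the repository may differ), the module-level Gradle file would contain something like:</p>
<pre class="theme:classic lang:java decode:true">// build.gradle.kts (module level), hedged sketch; version numbers are assumptions
dependencies {
    // Object Detection and Tracking with custom TensorFlow Lite models
    implementation("com.google.mlkit:object-detection-custom:17.0.0")
    // On-device translation, used later to translate the English labels
    implementation("com.google.mlkit:translate:17.0.1")
}</pre>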
<p>When this demo has been selected, either via voice or via touch, the activity replaces the menu with this fragment. Its layout includes a <i>PreviewView</i> with the camera images currently being processed, on top of which the predicted information will be drawn, the home button to go back, and a button to repeat the rules. For our demo purposes, whenever this fragment is shown, the analyzer runs continuously in the background and updates the results on the screen, in the form of bounding boxes and text labels, even if no question was asked. Once the view is created, Pepper briefly explains how it works.</p>
<p>With regard to the architecture, we have a fragment in which we use data binding to access the views, a <i>ViewModel</i> to store the data, and an <em>Analyzer</em> helper class for the recognizer.</p>
<h3>How to build the model</h3>
<p>In the <i>onCreate</i> method of our fragment, after inflating and initializing the views, we start by building the <i>LocalModel</i> we are going to use with the analyzer. So that it can be found, our custom TFLite model needs to be located in the <em>assets</em> folder of the project.</p>
<pre class="theme:classic lang:java decode:true">val localModel = LocalModel.Builder()
    .setAssetFilePath("lite-model_object_detection_mobile_object_labeler_v1_1")
    .build()</pre>
<h3>How to analyze the images with an object detector</h3>
<p>Let's start with our recognizer, which is quite simple. It takes the image, the model, and a lambda that we use as a callback and that receives the list of detected objects. That way, we are informed asynchronously when the results are ready.</p>
<p>We create a <em>CustomObjectDetector</em> with our <em>LocalModel</em> and, in the options, we enable the recognition of multiple objects and their classification. We select <i>SINGLE_IMAGE_MODE</i>, which analyzes each image independently.</p>
<p>Preparing the input to our detector is also straightforward, since converting it from a bitmap, the format in which we get the camera image from the QiSDK action that takes a picture, is done in one line of code.</p>
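<p>The QiSDK call that produces this bitmap is wrapped in the ViewModel's <em>takeImage</em> method and not shown in this post; roughly, it follows the usual take-picture pattern of the QiSDK. A minimal sketch, with error handling and threading left out (this must not run on the UI thread):</p>
<pre class="theme:classic lang:java decode:true">// Hedged sketch: take a picture with Pepper's head camera and decode it into a Bitmap.
fun takePictureAsBitmap(qiContext: QiContext): Bitmap {
    val takePicture = TakePictureBuilder.with(qiContext).build()
    val timestampedImageHandle = takePicture.run()
    // Extract the encoded image bytes from the handle
    val buffer = timestampedImageHandle.image.value.data
    buffer.rewind()
    val bytes = ByteArray(buffer.remaining())
    buffer.get(bytes)
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}</pre>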
<p>On completion, we return the detected objects, sorted by the confidence of their labels, to be processed in the fragment.</p>
<pre class="theme:classic lang:java decode:true">class ImageAnalyzer {
    fun analyzeImageWithMLKitObjectDetector(
        picture: Bitmap,
        localModel: LocalModel,
        completion: (List&lt;DetectedObject&gt;?) -&gt; Unit
    ) {
        val image = InputImage.fromBitmap(picture, 0)
        val options = CustomObjectDetectorOptions.Builder(localModel)
            .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
            .enableMultipleObjects()
            .enableClassification()
            .build()
        val objectDetector = ObjectDetection.getClient(options)

        // Extract the recognition results
        objectDetector.process(image)
            .addOnSuccessListener { detectedObjects -&gt;
                Timber.i("ImageAnalyzer found ${detectedObjects.size} objects")
                for (o in detectedObjects) {
                    Timber.i("ImageAnalyzer  Object: ${detectedObjects.indexOf(o)}")
                    for (l in o.labels) {
                        Timber.i("ImageAnalyzer    ${l.text}")
                    }
                }
                completion(
                    detectedObjects.onEach { detectedObject -&gt;
                        detectedObject.labels.sortByDescending { it.confidence }
                    }.take(MAX_RESULT_DISPLAY).toList()
                )
            }
            .addOnFailureListener { e -&gt;
                Timber.e("Error processing the image: $e")
                completion(null)
            }
    }
}</pre>
<h3>How to process and show the results</h3>
<p>The process works asynchronously in the following way: in the fragment, every time a new image is taken, we set it as the source of the preview and start the analyzer, passing the created local model and the image as arguments. We then observe the results of the analyzer in order to process them and present them to the user via tablet and voice.</p>
<p>For that purpose, we set up the image observer and the analyzer observer and take the first image. The results from the analyzer come as a list of objects of type <i>DetectedObject</i> (part of the ML Kit vision package) that enclose a bounding box, a tracking id, and the labels for each object. The labels, in turn, each have fields for the text, its confidence, and its index.</p>
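<p>The code below only shows the observer on the results side; the one feeding images into the analyzer is similar and could look roughly like the following sketch. The accessor and view names (<em>getImage</em>, <em>setMLKitRecognitionObjects</em>, <em>previewImage</em>) are assumptions and not necessarily the ones used in the repository:</p>
<pre class="theme:classic lang:java decode:true">// Hedged sketch: whenever the ViewModel publishes a new camera Bitmap, show it in the
// preview and pass it to the analyzer; the callback stores the detections, which in
// turn triggers the results observer shown below.
viewModel.getImage().observe(viewLifecycleOwner) { bitmap -&gt;
    binding.previewImage.setImageBitmap(bitmap) // assumed to be an ImageView-style preview
    imageAnalyzer.analyzeImageWithMLKitObjectDetector(bitmap, localModel) { detectedObjects -&gt;
        detectedObjects?.let { viewModel.setMLKitRecognitionObjects(it) }
    }
}</pre>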
<p>We save the labels to a chat variable so that they are available in the voice interaction. We also use the labels to update the results on the screen, where they might need to be translated depending on the language of the robot, because the model returns its recognitions in English. To that end, we use ML Kit's Translation API if necessary.</p>
<p>The next thing to do is to calculate roughly in which area of the image the objects are situated. We do that by simply dividing the image into six areas and calculating in which of them the center of the object lies (which we can determine since we know the bounding box). We then combine the labels, the areas, and the bounding boxes and show them on the screen whenever the confidence is higher than the recommended threshold of 0.35, drawing them with a helper class over the preview in our <i>resultsView</i>.</p>
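<p>The helper that performs this mapping, <em>calculateObjectArea</em>, is not listed in this post. A minimal sketch of the idea follows; it assumes that <em>Recognition</em> exposes its bounding box as <em>boundingBox</em> and a mutable <em>area</em> field, that the boxes are already in view coordinates, and that the areas are numbered row by row from the top left. All of these are assumptions rather than details taken from the repository:</p>
<pre class="theme:classic lang:java decode:true">// Hedged sketch: assign each recognition to one cell of a 3x2 grid laid over the preview.
// Numbering assumption:  1 2 3  (top row)
//                        4 5 6  (bottom row)
private fun calculateObjectArea(recognitions: MutableList&lt;Recognition&gt;): List&lt;Recognition&gt; {
    val cellWidth = binding.seeingResultsView.width / 3f
    val cellHeight = binding.seeingResultsView.height / 2f
    recognitions.forEach { recognition -&gt;
        val column = (recognition.boundingBox.centerX() / cellWidth).toInt().coerceIn(0, 2)
        val row = (recognition.boundingBox.centerY() / cellHeight).toInt().coerceIn(0, 1)
        recognition.area = when (row * 3 + column + 1) {
            1 -&gt; Area.ONE
            2 -&gt; Area.TWO
            3 -&gt; Area.THREE
            4 -&gt; Area.FOUR
            5 -&gt; Area.FIVE
            6 -&gt; Area.SIX
            else -&gt; Area.NONE
        }
    }
    return recognitions
}</pre>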
$e\")\r\n    }\r\n\r\n    \/\/ Take a new image\r\n    this.viewModel.takeImage(mainViewModel.qiContext)\r\n}<\/pre>\n<p>The results, including bounding boxes, labels, and confidence, drawn over the preview look something like this:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-38564\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/demo.jpg\" alt=\"Roboter Pepper with tablet on chest showing face recognition via squares around face\" width=\"784\" height=\"510\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/demo.jpg 1289w, https:\/\/www.inovex.de\/wp-content\/uploads\/demo-300x195.jpg 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/demo-1024x666.jpg 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/demo-768x499.jpg 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/demo-400x260.jpg 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/demo-360x234.jpg 360w\" sizes=\"auto, (max-width: 784px) 100vw, 784px\" \/><\/p>\n<p>If you see that the bounding boxes do not match 100% to the objects and you&#8217;re wondering if it is an issue with the object detector: it is not. This is caused by Pepper&#8217;s constant lively movement. Do not forget it is a social robot imitating natural human movements. Therefore, sometimes he might move just a little too fast before the image is updated with the new content and cause these small differences.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Voice-interaction-%E2%80%9CPepper-what-do-you-see%E2%80%9C\"><\/span>Voice interaction: \u201cPepper, what do you see?\u201c<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Now, to the voice interaction part: When asked, we want Pepper to be able to respond to the question of what is around it or which objects it can see. To make it robust, we include many different ways how you can phrase the question. Whenever heard, it will check the contents of the chat variable updated with the last results and, if it is available, tell its content. If nothing was detected, it will adapt its answer according to that.\u00a0<\/span><\/p>\n<pre class=\"theme:inlellij-idea lang:default decode:true\">concept:(askedwhatdoyousee) [\"what do you see\" \"what can you see\" \"what is this\" \"what's [that this]\" \"what's [\"in the room\" \"around you\"] \" \"do you see [something anything]\" \"can you see [something anything]\" \"[\"what\" \"what [sort kind] of\"] object is [this that]\" \"do you know what [this that] is\" \"do you recognize this\" \"tell me what [this that] is\"]\r\n\r\nu:(~askedwhatdoyousee) %recognizedInImageBookmark I see \\pau=500\\ [\"^exist(recognizedInImage) $recognizedInImage\" \"nothing\"]<\/pre>\n<h3><span class=\"ez-toc-section\" id=\"Voice-interaction-%E2%80%9CPepper-where-is-it%E2%80%9C\"><\/span>Voice interaction: \u201cPepper, where is it?\u201c<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><span style=\"font-weight: 400;\">In this demo, we also want Pepper to point to an object when asked about its location. As the SDK does not currently provide a method to point in a specific direction, we will simplify he task and approximate it by defining areas from 1 to 6. The area we determine by dividing the image in six parts by a 3&#215;2 grid and calculating to which one the center of each object belongs. For each of those areas, we programmed short animations using the Animation Editor tool included in the plugin. 
<h3>Voice interaction: "Pepper, where is it?"</h3>
<p>In this demo, we also want Pepper to point at an object when asked about its location. As the SDK does not currently provide a method to point in a specific direction, we simplify the task and approximate it by defining areas 1 to 6. We determine the area by dividing the image into six parts with a 3×2 grid and calculating to which cell the center of each object belongs. For each of those areas, we programmed a short animation using the Animation Editor tool included in the plugin. The Editor allows defining a series of movements of all the robot's movable parts and their positions over a period of time. In our case, we want Pepper to point with either the right or the left arm towards the wanted area.</p>
<p>Although the recognition is continuously running, the pointing needs to be triggered by the user by asking where an object is to be found:</p>
<pre class="theme:inlellij-idea lang:default decode:true">concept:(whereis) ["where is the _*" "where do you see the _*" "do you know where the _* ["is" "is at" "is located"]" "where do i find the _*" "where is there a _*"]

u:(~whereis) $objectToLocate=$1 %askedWhereItIsBookmark $1 is there
u:(^empty) %notFoundBookmark Oops, something happened, I can't show you where that is because I can not run the animation</pre>
<p>Once again, we make use of a bookmark to connect with the logic and reach the method in the fragment from its listener in the activity. Using the variable, we search the current results for the object with the mentioned name and get its area.</p>
<pre class="theme:classic lang:java decode:true">fun locateObject() {
    var name = mainViewModel.getQiChatVariable(getString(R.string.objectToLocate))

    // Translate if necessary
    if (mainViewModel.language != Language.ENGLISH) {
        mainViewModel.translate(
            Language.ENGLISH,
            mainViewModel.language,
            name
        ).addOnSuccessListener { name = it }
    }

    // Get the area of the asked object by finding the object in the list
    val area =
        items.find {
            it.label == name.lowercase()
        }?.area ?: Area.NONE
    Timber.d("Asked to locate object: $name which is in area: $area")

    doAnimationForTheArea(area)
}</pre>
<p>The only thing left is playing the animation to point in the right direction:</p>
<pre class="theme:classic lang:java decode:true">private fun doAnimationForTheArea(area: Area) {
    val animation = when (area) {
        Area.ONE -&gt; R.raw.raise_left_hand_b006
        Area.TWO -&gt; R.raw.raise_right_hand_b007
        Area.THREE -&gt; R.raw.raise_right_hand_b006
        Area.FOUR -&gt; R.raw.raise_left_hand_a003
        Area.FIVE -&gt; R.raw.raise_both_hands_b001
        Area.SIX -&gt; R.raw.raise_right_hand_a001
        Area.NONE -&gt; null
    }

    animation?.let {
        Timber.d("Doing animation for area: $area")
        mainViewModel.pepperActions.doAnimationAsync(
            requireContext(),
            mainViewModel.qiContext,
            animation
        )
    } ?: run { mainViewModel.goToQiChatBookmark(getString(R.string.notFoundBookmark)) }
}</pre>
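<p><em>doAnimationAsync</em> wraps the usual QiSDK animation calls. If you have not used them before, the essential steps look roughly like this sketch (not the exact helper from the repository; the unused <em>context</em> parameter is only kept to match the call above):</p>
<pre class="theme:classic lang:java decode:true">// Hedged sketch: build an Animation from a raw resource and run it asynchronously.
fun doAnimationAsync(context: Context, qiContext: QiContext, animationRes: Int) {
    AnimationBuilder.with(qiContext)
        .withResources(animationRes)
        .buildAsync()
        .andThenCompose { animation -&gt;
            AnimateBuilder.with(qiContext)
                .withAnimation(animation)
                .buildAsync()
        }
        .andThenCompose { animate -&gt; animate.async().run() }
}</pre>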
<h2>Conclusion and next steps</h2>
<p>That is it! That is how we can make a robot point at an object identified by object detection. Building on this, it could be extended to much more sophisticated and precise pointing. Another possible use of object detection with the robot is tracking: one could make Pepper follow a presented object with its head or even with the entire body, by walking towards it, similar to how it follows humans. That gets us a little closer to Pepper fetching a coffee, the wish of many 🙂</p>
<p>I hope you enjoyed this demo! Check out the other articles of this series, where we are going to see more use cases and how to implement them in our ML-Kit-powered Android app for the Pepper robot!</p>
<ol class="ol-styled">
<li><strong><a href="https://www.inovex.de/de/blog/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-1/">Introduction</a></strong></li>
<li><strong>Demo with ML Kit's Object Detection API (this article)</strong></li>
</ol>
\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/\" \/>\n<meta property=\"og:site_name\" content=\"inovex GmbH\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/inovexde\" \/>\n<meta property=\"article:published_time\" content=\"2022-12-16T09:16:19+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-18T06:41:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1921\" \/>\n\t<meta property=\"og:image:height\" content=\"1081\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Silvia Santano\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5-1024x576.png\" \/>\n<meta name=\"twitter:creator\" content=\"@inovexgmbh\" \/>\n<meta name=\"twitter:site\" content=\"@inovexgmbh\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Silvia Santano\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"13\u00a0Minuten\" \/>\n\t<meta name=\"twitter:label3\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data3\" content=\"Silvia Santano\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/\"},\"author\":{\"name\":\"Silvia Santano\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/person\\\/f2d17a49f3a806bc6cdb4902a32a0ef9\"},\"headline\":\"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2)\",\"datePublished\":\"2022-12-16T09:16:19+00:00\",\"dateModified\":\"2026-02-18T06:41:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/\"},\"wordCount\":1913,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/enhancing-pepper-with-ai-part-2-of-5.png\",\"keywords\":[\"Android\",\"Apps\",\"Artificial Intelligence\",\"Computer Vision\",\"Deep Learning\",\"Human-Computer-Interaction\",\"Kotlin\",\"Machine Learning\",\"Pepper\",\"Robotics\"],\"articleSection\":[\"Analytics\",\"Applications\",\"English 
Content\",\"General\"],\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/\",\"name\":\"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2) - inovex GmbH\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/enhancing-pepper-with-ai-part-2-of-5.png\",\"datePublished\":\"2022-12-16T09:16:19+00:00\",\"dateModified\":\"2026-02-18T06:41:53+00:00\",\"description\":\"This article shows how to set up better object detection to identify the position of objects in the room for the social robot Pepper. \u00a0\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/enhancing-pepper-with-ai-part-2-of-5.png\",\"contentUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/enhancing-pepper-with-ai-part-2-of-5.png\",\"width\":1921,\"height\":1081,\"caption\":\"Illustration of Pepper the robot with a superimposed brain\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#website\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/\",\"name\":\"inovex GmbH\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#organization\",\"name\":\"inovex 
GmbH\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/2021\\\/03\\\/inovex-logo-16-9-1.png\",\"contentUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/2021\\\/03\\\/inovex-logo-16-9-1.png\",\"width\":1921,\"height\":1081,\"caption\":\"inovex GmbH\"},\"image\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/inovexde\",\"https:\\\/\\\/x.com\\\/inovexgmbh\",\"https:\\\/\\\/www.instagram.com\\\/inovexlife\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/inovex\",\"https:\\\/\\\/www.youtube.com\\\/channel\\\/UC7r66GT14hROB_RQsQBAQUQ\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/person\\\/f2d17a49f3a806bc6cdb4902a32a0ef9\",\"name\":\"Silvia Santano\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpegbd867eef3c3c053b3e6e1634ffac9ffc\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpeg\",\"contentUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpeg\",\"caption\":\"Silvia Santano\"},\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/author\\\/ssantano\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2) - inovex GmbH","description":"This article shows how to set up better object detection to identify the position of objects in the room for the social robot Pepper. \u00a0","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/","og_locale":"de_DE","og_type":"article","og_title":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2) - inovex GmbH","og_description":"This article shows how to set up better object detection to identify the position of objects in the room for the social robot Pepper. 
\u00a0","og_url":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/","og_site_name":"inovex GmbH","article_publisher":"https:\/\/www.facebook.com\/inovexde","article_published_time":"2022-12-16T09:16:19+00:00","article_modified_time":"2026-02-18T06:41:53+00:00","og_image":[{"width":1921,"height":1081,"url":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png","type":"image\/png"}],"author":"Silvia Santano","twitter_card":"summary_large_image","twitter_image":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5-1024x576.png","twitter_creator":"@inovexgmbh","twitter_site":"@inovexgmbh","twitter_misc":{"Verfasst von":"Silvia Santano","Gesch\u00e4tzte Lesezeit":"13\u00a0Minuten","Written by":"Silvia Santano"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#article","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/"},"author":{"name":"Silvia Santano","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/f2d17a49f3a806bc6cdb4902a32a0ef9"},"headline":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2)","datePublished":"2022-12-16T09:16:19+00:00","dateModified":"2026-02-18T06:41:53+00:00","mainEntityOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/"},"wordCount":1913,"commentCount":0,"publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png","keywords":["Android","Apps","Artificial Intelligence","Computer Vision","Deep Learning","Human-Computer-Interaction","Kotlin","Machine Learning","Pepper","Robotics"],"articleSection":["Analytics","Applications","English Content","General"],"inLanguage":"de","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/","url":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/","name":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2) - inovex GmbH","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#primaryimage"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png","datePublished":"2022-12-16T09:16:19+00:00","dateModified":"2026-02-18T06:41:53+00:00","description":"This article shows how to set up better object detection to identify the position of objects in the room for the social robot Pepper. 
\u00a0","breadcrumb":{"@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#primaryimage","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/enhancing-pepper-with-ai-part-2-of-5.png","width":1921,"height":1081,"caption":"Illustration of Pepper the robot with a superimposed brain"},{"@type":"BreadcrumbList","@id":"https:\/\/www.inovex.de\/de\/blog\/how-to-use-googles-ml-kit-to-enhance-pepper-with-ai-part-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.inovex.de\/de\/"},{"@type":"ListItem","position":2,"name":"How to Use Google\u2019s ML Kit to Enhance Pepper With AI (Part 2)"}]},{"@type":"WebSite","@id":"https:\/\/www.inovex.de\/de\/#website","url":"https:\/\/www.inovex.de\/de\/","name":"inovex GmbH","description":"","publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.inovex.de\/de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/www.inovex.de\/de\/#organization","name":"inovex GmbH","url":"https:\/\/www.inovex.de\/de\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","width":1921,"height":1081,"caption":"inovex GmbH"},"image":{"@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/inovexde","https:\/\/x.com\/inovexgmbh","https:\/\/www.instagram.com\/inovexlife\/","https:\/\/www.linkedin.com\/company\/inovex","https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ"]},{"@type":"Person","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/f2d17a49f3a806bc6cdb4902a32a0ef9","name":"Silvia Santano","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/wp-content\/uploads\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpegbd867eef3c3c053b3e6e1634ffac9ffc","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpeg","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/cropped-IMG_6952-2-scaled-e1665488822192-1-96x96.jpeg","caption":"Silvia 
Santano"},"url":"https:\/\/www.inovex.de\/de\/blog\/author\/ssantano\/"}]}},"_links":{"self":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/38815","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/users\/57"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/comments?post=38815"}],"version-history":[{"count":5,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/38815\/revisions"}],"predecessor-version":[{"id":66229,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/38815\/revisions\/66229"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media\/40402"}],"wp:attachment":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media?parent=38815"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/tags?post=38815"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/service?post=38815"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/coauthors?post=38815"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}