An android pointing a smartphone at a plant
Artificial Intelligence

Use Your TensorFlow Mobile Model in an Android App

8 min

This blog post is more than five years old – its contents may be outdated.

Modern mobile devices are extremely powerful and enable new approaches. Even if it sounds like a platitude, some of these approaches are genuinely interesting and are already used in apps. One of them is machine learning. In this post we show how to use TensorFlow Mobile to recognize houseplants in an Android app. Running a network directly on the device has several advantages. First, the app can process data offline, without requiring server-side processing power or bandwidth. Second, no sensitive data has to be sent over the network.

Previously on this blog …

This blog post is part of a TensorFlow Mobile series. The first post, TensorFlow Mobile: Training and Deploying a Neural Network, focused on how to deploy a neural network and train it for our use case. In this post we’ll show how to import and integrate such a model into the app.

So, as a short summary, what happened in the previous post?

  • We chose TensorFlow Mobile to create the neural network, for two reasons: TensorFlow Lite was still in developer preview at the time, and TensorFlow Mobile has a bigger feature set.
  • With TensorFlow Mobile we trained a model to classify images of houseplants. Our model is able to distinguish between 26 different houseplants.
  • This model was further processed for mobile use. The result is an optimized model, available as a Protocol Buffers (protobuf) file, that we can use in our app.

Houseplant app

The job of the application is quite simple: recognize houseplants on Android devices using a trained TensorFlow model. To do so, the application should be able to take a picture of a plant and then indicate which plant it most likely is. The animated GIF below shows what the end result looks like.

Getting started

Now that the goal is clear, let’s get going with the implementation. At first glance, you might think that we have to write a lot of boilerplate code to develop such an app. Fortunately, Google offers four different TensorFlow examples for Android. These examples are good starting points, so we don’t have to start from scratch. The TF Classify example, which uses the Android camera to classify images in real time, roughly matches the requirements of our app. That is why we build the app on top of this example.

A disadvantage of the four TensorFlow examples is that they are all packed into one repository. Instead of pulling the needed example out of this repository, we use the more minimal repository provided for the TensorFlow for Poets tutorial. It only includes the TF Classify example and is an ideal starting point for our houseplant app.

Once we have checked out the TF Classify project, the model and labels file have to be replaced with ours. These files are located at src/main/assets/graph.pb and src/main/assets/labels.txt.

Next we need to adjust the field variables in the ClassifierActivity to match our model.

We use 224 as our input size, because the ImageNet-based network we retrained in part one of this series expects images of 224 × 224 pixels. It is also important to set the image mean and standard deviation to the values matching the used model.
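To illustrate what the mean and standard deviation are used for, here is a standalone sketch of the per-pixel normalization; in the example app this happens when the bitmap pixels are converted into the float input array for the network. The class and method names are our own, and 128/128 are the typical values for retrained graphs, which may differ for your model.

```java
// Sketch of pixel preprocessing: each RGB channel of every pixel is
// shifted by the image mean and scaled by the standard deviation
// before being fed to the network. The values 128/128 are typical
// for retrained graphs and may differ for your model.
class Preprocess {
    static final float IMAGE_MEAN = 128f;
    static final float IMAGE_STD = 128f;

    // Convert one packed ARGB pixel into three normalized floats.
    static float[] normalize(int argbPixel) {
        int r = (argbPixel >> 16) & 0xFF;
        int g = (argbPixel >> 8) & 0xFF;
        int b = argbPixel & 0xFF;
        return new float[] {
            (r - IMAGE_MEAN) / IMAGE_STD,
            (g - IMAGE_MEAN) / IMAGE_STD,
            (b - IMAGE_MEAN) / IMAGE_STD,
        };
    }
}
```

A mid-gray pixel (128, 128, 128) maps to (0, 0, 0), so the network input is roughly centered around zero.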

The input and output names are the names of the input and output nodes of the model.

If you changed the name or the location of the model or labels file, you will have to adjust the corresponding file path fields as well.
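Put together, the fields to adjust could look like the sketch below. The field names follow the TF Classify example; the concrete values (mean, standard deviation, node names) are the typical ones for a retrained graph and must be replaced with those matching your own model.

```java
// Sketch of the field values to adjust in ClassifierActivity.
// IMAGE_MEAN/IMAGE_STD and the node names must match the model
// exported in part one; the values below are typical for a
// retrained graph and may differ for your model.
public class ClassifierConfig {
    // The retrained network expects 224x224 pixel input images.
    static final int INPUT_SIZE = 224;
    // Normalization parameters used during training.
    static final int IMAGE_MEAN = 128;
    static final float IMAGE_STD = 128.0f;
    // Names of the input and output nodes of the graph.
    static final String INPUT_NAME = "input";
    static final String OUTPUT_NAME = "final_result";
    // Location of the model and labels in the APK assets.
    static final String MODEL_FILE = "file:///android_asset/graph.pb";
    static final String LABEL_FILE = "file:///android_asset/labels.txt";
}
```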

Classifying plants with TensorFlow Mobile

The classification is based on the image data received from the device’s camera. Therefore, the application uses Android’s Camera2 API. The API is quite extensive, but luckily all the needed API code is packed into the abstract class CameraActivity. This class takes care of the permission handling needed for accessing the camera and sets up the CameraConnectionFragment. This fragment handles all camera-related configuration, opens and closes the camera connection, and performs the image capturing. It also registers the parent activity as a listener for new images.

All the classification is handled in the ClassifierActivity, which extends the CameraActivity. The ClassifierActivity receives the image data through the onImageAvailable method, the callback of the OnImageAvailableListener for which the activity has been registered by the CameraConnectionFragment.

Whenever an image is available, the callback method is called and the ClassifierActivity can process the image data. The data is acquired from the ImageReader passed as a parameter. Before the image data can be passed to a Classifier, it is converted into a Bitmap. As a result, the classification provides a list of Recognitions, each with a confidence value.
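The result type can be pictured as a small value class plus a sort-and-trim step before the results are displayed. The following is a simplified, Android-free sketch; the name Recognition follows the example’s Classifier interface, but this standalone version is our own illustration.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Simplified stand-in for the example's Recognition result type:
// a label plus the model's confidence for that label.
class Recognition {
    final String title;
    final float confidence;

    Recognition(String title, float confidence) {
        this.title = title;
        this.confidence = confidence;
    }
}

class RecognitionUtil {
    // Sort the raw results by descending confidence and keep only
    // the top entries, as done before presenting them to the user.
    static List<Recognition> topK(List<Recognition> results, int k) {
        List<Recognition> sorted = new ArrayList<>(results);
        sorted.sort(Comparator.comparingDouble((Recognition r) -> r.confidence).reversed());
        return sorted.subList(0, Math.min(k, sorted.size()));
    }
}
```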

In our case, the Classifier which recognizes the image is the TensorFlowImageClassifier. This Classifier uses the TensorFlow Android API and our model to predict what is displayed in the captured image.

Useful improvements

The example application from Google constantly classifies the input from the camera. Because we wanted to give the user time to position the camera, we added a capture button and analyze the image only on click.
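The gating logic behind this can be sketched as a tiny helper: the camera callback keeps delivering frames, but classification only runs for the first frame after the button was pressed. The class and method names here are our own illustration, not the app’s actual code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the "classify only on button click" idea:
// frames keep arriving, but classification fires exactly once per
// capture request.
class CaptureGate {
    private final AtomicBoolean captureRequested = new AtomicBoolean(false);

    // Called from the capture button's onClick handler.
    void requestCapture() {
        captureRequested.set(true);
    }

    // Called from onImageAvailable for every camera frame; returns
    // true exactly once after each button click.
    boolean shouldClassify() {
        return captureRequested.compareAndSet(true, false);
    }
}
```

Using an AtomicBoolean keeps the check race-free, since the button click and the camera callback can run on different threads.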

Also, to improve our model in the future, we added a feature to collect feedback from the user. We present the top results on a separate page where the user can confirm whether one of the suggestions is right. When the user confirms a result, the image is uploaded to cloud storage. For the upload we use Firebase Cloud Storage, because it is easy to use within Android applications. Before uploading, the image is annotated by adding the model’s probability result to its EXIF tags. Furthermore, the images are uploaded into different folders named after the detected plants, with the categorization based on the result confirmed by the user. These collected images can be used in the future to train a better model; the folder structure and annotations will help to filter and improve the image base. Keep in mind that you should ask the user for permission before uploading images to cloud storage!
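The naming scheme for the upload could be derived as in the sketch below. Both the path layout and the annotation format are illustrative assumptions, not the app’s real code; in the app itself the annotation would be written into the image’s EXIF tags before the Firebase upload.

```java
import java.util.Locale;

// Illustrative sketch of deriving the upload destination and the
// annotation text for a confirmed result. Path scheme and method
// names are assumptions, not the app's actual code.
class UploadNaming {
    // One folder per confirmed plant label, e.g. "plants/aloe_vera/img_1.jpg".
    static String storagePath(String confirmedLabel, String fileName) {
        String folder = confirmedLabel.toLowerCase(Locale.ROOT).replace(' ', '_');
        return "plants/" + folder + "/" + fileName;
    }

    // Annotation text combining the label with the model's
    // confidence, suitable for storing in an EXIF comment field.
    static String annotation(String label, float confidence) {
        return String.format(Locale.ROOT, "%s;confidence=%.2f", label, confidence);
    }
}
```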


Overall, it is impressive how quickly and easily a model can be integrated into an Android application. The tutorials and examples from Google are a good way to get started and learn more about using TensorFlow. Once you have a model prepared and optimized for mobile, integrating it into an Android application is not a big deal. In our example, the most challenging part was getting to grips with the Camera2 API; because of its extensiveness, it can take some time to understand.

If you decide to use TensorFlow Lite instead of TensorFlow Mobile (which we used in this series), you should take a look at Google’s ML Kit. With ML Kit the whole process gets easier: it offers an easy-to-understand API, you can upload custom models directly via the web interface, and it even has a model update mechanism that doesn’t require the user to update the whole application. Although TensorFlow Lite has a smaller feature set, if it suffices for your use case it is a great alternative, and ML Kit simplifies the model handling enormously.

Join us!

Whether you’re interested in Android development or machine learning, take a look at our current job offers and find what best suits your interests. Join us and implement cutting-edge technology in production-level projects with our broad selection of customers!

15 comments

    1. Thank you, I’m happy you enjoyed the blog post.
      The Camera2 API from Android is a little hard to get into at the beginning, since you can (and have to) adjust a lot of parameters. If you want to add a capture button as shown in the example, I recommend looking into Google’s Camera2 API example, which shows how to capture images. I implemented the capture process as it is done in the Camera2BasicFragment of the Google example.
      If you are using the TensorFlow example (which evaluates images continuously), you can do the following in the CameraConnectionFragment to only capture a single image:
      – Lock the focus and capture the image on button click, as the lockFocus() function in Camera2BasicFragment does.
      – Use a captureCallback like the one in Camera2BasicFragment, which handles onCaptureProgressed and onCaptureCompleted. (If you copy the callback from the Camera2BasicFragment, you need to copy all the methods and static fields used by the callback methods as well.)
      I hope the description helps, if you have more questions, let me know.
      Note: Also have a look into “TensorFlow Lite”, since a few problems we had back then, which led us to the decision to use “TensorFlow Mobile”, might be resolved by now.

      1. Hi Simon,
        I assume you speak German! First of all, thanks for the hint. Unfortunately, Java is still pretty much rocket science for me, and I have no idea which lines I have to replace in the code that ships with the TensorFlow example. Could you help me out?
        Greetings from Karlsruhe

        1. Hi Felix,
          I’ll try to explain it in a bit more detail. I start from the CameraConnectionFragment from the TensorFlow example. Camera2BasicFragment from the Camera2 example mentioned above is the reference to copy from.
          1. Add a button to the layout XML and define an onClick method for the button in the fragment.
          2. Copy the lockFocus method and call it from the button’s onClick method.
          3. Copy the captureCallback and swap it with the current one.
          4. Copy all methods that are called in the CaptureCallback
          (captureStillPicture, runPrecaptureSequence, unlockFocus, setAutoFlash).
          5. Copy all variables used in those methods (STATE_ constants, mState, mFlashSupported).
          6. mFlashSupported still has to be set; this can be done in setUpCameraOutputs.
          7. The CaptureRequest is now issued in lockFocus(), so captureRequestBuilder and the places where this variable is used can be removed.
          8. Move the initialization of the imageReader and the setting of the OnImageAvailableListener into the setUpCameraOutputs method.
          I hope I did not forget anything.
          Of course, you learn most if you also understand what exactly happens here. Since my answer only describes how to copy the functionality together, I recommend also having a look at a tutorial that summarizes quite well how an image is captured. The official Android guides are also always a good source for learning Android development.
          Happy Coding 🙂

  1. Thanks for the tutorial! I’m new to both TensorFlow and Android Studio. I have the working demo app, but I want to add the capture button to my project. Please explain how you did it in a bit more detail. Thanks!

    1. Hey Thomas,
      It’s cool to see that you are interested in reimplementing this demo. Since the person in the comment below asked nearly the same question, I will translate my last answer (where we switched to German) for you. I hope this answers your question; see the start of the conversation for context.
      Start with the CameraConnectionFragment from the TensorFlow example as a base. The following steps show which parts from Camera2BasicFragment out of Google’s Camera2 API example have to be copied to capture a single image (see the other comment for links to the examples).
      1. Add a button to the Android layout xml and add an onClick method to the Fragment
      2. Copy lockFocus method and call it from the onClick
      3. Copy captureCallback and swap it with the current captureCallback
      4. Copy all methods called in the CaptureCallback (captureStillPicture, runPrecaptureSequence, unlockFocus, setAutoFlash)
      5. Copy all variables used in those methods (STATE_ constants, mState, mFlashSupported).
      6. mFlashSupported has to be set; you can set it in setUpCameraOutputs.
      7. The CaptureRequest is now set in lockFocus(), thus you can remove captureRequestBuilder and all places where this variable is used.
      8. Move the initialization of the imageReader and the OnImageAvailableListener to setUpCameraOutputs.
      I hope I did not forget anything.
      Copying those pieces of code may help you get the desired result; nevertheless, you learn most if you understand what happens here. Thus, I recommend having a look at a tutorial on how to use the Camera2 API, which describes how the capture of an image works. The official guides from Android are also a good reference to learn how to develop Android apps.
      Happy Coding 🙂

    1. Unfortunately, there is no GitHub repository. I’m sorry; since my code is outdated by now, I do not plan to upload it. You may build your own example by following the instructions I gave in the answer to the other comment.

  2. Can you please share some documentation or a video on how to do it?
    I am having many difficulties understanding it.

    1. Hey Sumair,
      The IT environment is changing fast, and so is TensorFlow. I haven’t worked with TensorFlow for a while, but as far as I know “TensorFlow Mobile” is outdated and the way to go now is “TensorFlow Lite”. Since this blog post is about the rather old “TensorFlow Mobile”, I would recommend checking out “TensorFlow Lite”.

    1. Thanks! I cannot see where I mentioned having two “fragment_camera2_basic.xml” files. Can you please elaborate on what you mean?
