{"id":21085,"date":"2018-05-08T11:16:57","date_gmt":"2018-05-08T09:16:57","guid":{"rendered":"http:\/\/www.inovex.de\/blog\/?p=12819"},"modified":"2025-03-19T07:29:16","modified_gmt":"2025-03-19T06:29:16","slug":"tensorflow-mobile-training-and-deploying-a-neural-network","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/","title":{"rendered":"TensorFlow Mobile: Training and Deploying a Neural Network"},"content":{"rendered":"<p>Smart Assistants, fancy image filters in Snapchat and apps like Prisma all have one thing in common\u2014they are powered by Machine Learning. The use of Machine Learning in mobile apps is growing and new mobile apps are developed with Machine Learning based services as business models. In this blog series we want to give you hands-on advice on how you can train and deploy a convolutional neural network for image classification to a mobile app using the popular machine learning framework <a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<\/a>\u00a0Mobile.<!--more--><\/p>\n<p>Our task will be to classify images of houseplants which we have collected ourselves. You don&#8217;t have to go and snap pictures of plants, however, because our approach is generic and can be used for training and deploying a convolutional neural network for image classification, independent of their subject.\u00a0If you&#8217;d also like to go with houseplants, however, we have written an image crawler to save you the manual labor. 
You&#8217;ll find the instructions\u00a0<a href=\"https:\/\/github.com\/inovex\/tensorflow-train-to-mobile-tutorial\/tree\/master\/hp_dataset\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>For your convenience we have published a <a href=\"https:\/\/github.com\/inovex\/tensorflow-train-to-mobile-tutorial\" target=\"_blank\" rel=\"noopener\">repository<\/a> containing all necessary files and source code used in this tutorial.<\/p>\n<p>As a concrete implementation of a convolutional neural network we&#8217;ll use one of the\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1704.04861\" target=\"_blank\" rel=\"noopener\">MobileNets<\/a>, a\u00a0class of efficient convolutional neural networks for mobile and embedded vision applications. These are already implemented in one of the high-level APIs of TensorFlow, which is called TF-Slim. You can find the TF-Slim models in the <a href=\"https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/slim\" target=\"_blank\" rel=\"noopener\">model repository<\/a> of TensorFlow. In this blog series we will use TF-Slim for the training of the MobileNet.<\/p>\n<p>For the deployment of neural networks to a mobile device there are currently two solutions:<\/p>\n<ul>\n<li>TensorFlow Mobile:\u00a0TensorFlow was designed from the ground up to be a good deep learning solution for mobile platforms such as Android and iOS. TensorFlow Mobile represents the mobile version of the framework which you can use in your mobile apps. It also contains multiple guides and scripts for the deployment of a model into a mobile app.<\/li>\n<li><a href=\"https:\/\/www.tensorflow.org\/lite\/guide\">TensorFlow Lite<\/a>: This is an evolution of TensorFlow Mobile. In most cases, apps developed with TensorFlow Lite will have a smaller binary size, fewer dependencies, and better performance. 
Currently TensorFlow Lite is in developer preview: not all use cases are covered yet, and it only supports a limited set of operators, so not all models will work with it by default.<\/li>\n<\/ul>\n<p>Because TensorFlow Lite is still in developer preview while TensorFlow Mobile offers a greater feature set, we will use TensorFlow Mobile in this blog series. As mentioned before, we will use images of houseplants as our dataset. In total there are 9364 images across 26 classes available.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_83 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#The-Setup\" >The Setup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#Training\" >Training<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#Mobile-Deployment\" >Mobile Deployment<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"The-Setup\"><\/span>The Setup<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>First off we need to install TensorFlow. The easiest way is to follow the official\u00a0<a href=\"https:\/\/www.tensorflow.org\/install\/\">installation guide<\/a>, which works for different platforms and operating systems. 
Just use the current version of TensorFlow.<\/p>\n<p>To get started we clone the TensorFlow model repository:\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">git clone https:\/\/github.com\/tensorflow\/models.git<\/span>\u00a0and switch to the TF-Slim models directory with\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">cd models\/research\/slim<\/span>. There we find the following tree structure:<\/p>\n<pre class=\"nums:false lang:default highlight:0 decode:true\">.\r\n\r\n\u251c\u2500\u2500 BUILD\r\n\r\n\u251c\u2500\u2500 README.md\r\n\r\n\u251c\u2500\u2500 WORKSPACE\r\n\r\n\u251c\u2500\u2500 __init__.py\r\n\r\n\u251c\u2500\u2500 datasets\r\n\r\n\u251c\u2500\u2500 deployment\r\n\r\n\u251c\u2500\u2500 download_and_convert_data.py\r\n\r\n\u251c\u2500\u2500 eval_image_classifier.py\r\n\r\n\u251c\u2500\u2500 export_inference_graph.py\r\n\r\n\u251c\u2500\u2500 export_inference_graph_test.py\r\n\r\n\u251c\u2500\u2500 nets\r\n\r\n\u251c\u2500\u2500 preprocessing\r\n\r\n\u251c\u2500\u2500 scripts\r\n\r\n\u251c\u2500\u2500 setup.py\r\n\r\n\u251c\u2500\u2500 slim_walkthrough.ipynb\r\n\r\n\u2514\u2500\u2500 train_image_classifier.py<\/pre>\n<p>Next we need to define our dataset by creating a Python description file in the datasets directory, alongside the existing dataset descriptions. With\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">cd datasets<\/span>\u00a0we switch to the directory and with\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">touch hp_plants.py<\/span> we create the required Python file, to which we add the following code:<\/p>\n<pre class=\"lang:python decode:true\"># Copyright 2016 The TensorFlow Authors. 
All Rights Reserved.\r\n\r\n#\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n\r\n# you may not use this file except in compliance with the License.\r\n\r\n# You may obtain a copy of the License at\r\n\r\n#\r\n\r\n# http:\/\/www.apache.org\/licenses\/LICENSE-2.0\r\n\r\n#\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n\r\n# See the License for the specific language governing permissions and\r\n\r\n# limitations under the License.\r\n\r\n# ==============================================================================\r\n\r\n\"\"\"Provides data for the Houseplants dataset.\"\"\"\r\n\r\nfrom __future__ import absolute_import\r\n\r\nfrom __future__ import division\r\n\r\nfrom __future__ import print_function\r\n\r\nimport os\r\n\r\nimport tensorflow as tf\r\n\r\nfrom datasets import dataset_utils\r\n\r\nslim = tf.contrib.slim\r\n\r\n# DATASET-VARIABLE: TFRecord file pattern\r\n\r\n_FILE_PATTERN = 'hp_plants_%s_*.tfrecord'\r\n\r\n# DATASET-VARIABLE: splits the dataset into 80 % for training and 20 % for evaluation\r\n\r\nSPLITS_TO_SIZES = {'train': 7532, 'validation': 1883}\r\n\r\n# DATASET-VARIABLE: num classes of the houseplants dataset\r\n\r\n_NUM_CLASSES = 26\r\n\r\n_ITEMS_TO_DESCRIPTIONS = {\r\n\r\n    'image': 'A color image.',\r\n\r\n    'label': 'A single integer between 0 and 25',\r\n\r\n}\r\n\r\ndef get_split(split_name, dataset_dir, file_pattern=None, reader=None):\r\n\r\n  \"\"\"Gets a dataset tuple with instructions for reading the Houseplants dataset.\r\n\r\n  Args:\r\n\r\n    split_name: A train\/validation split name.\r\n\r\n    dataset_dir: The base directory of the dataset sources.\r\n\r\n    file_pattern: The file pattern to use when matching the dataset sources.\r\n\r\n      It is assumed that the pattern contains a '%s' string so that the split\r\n\r\n   
   name can be inserted.\r\n\r\n    reader: The TensorFlow reader type.\r\n\r\n  Returns:\r\n\r\n    A `Dataset` namedtuple.\r\n\r\n  Raises:\r\n\r\n    ValueError: if `split_name` is not a valid train\/test split.\r\n\r\n  \"\"\"\r\n\r\n  if split_name not in SPLITS_TO_SIZES:\r\n\r\n    raise ValueError('split name %s was not recognized.' % split_name)\r\n\r\n  if not file_pattern:\r\n\r\n    file_pattern = _FILE_PATTERN\r\n\r\n  file_pattern = os.path.join(dataset_dir, file_pattern % split_name)\r\n\r\n  # Allowing None in the signature so that dataset_factory can use the default.\r\n\r\n  if not reader:\r\n\r\n    reader = tf.TFRecordReader\r\n\r\n  keys_to_features = {\r\n\r\n      'image\/encoded': tf.FixedLenFeature((), tf.string, default_value=''),\r\n\r\n      'image\/format': tf.FixedLenFeature((), tf.string, default_value='jpg'),\r\n\r\n      'image\/class\/label': tf.FixedLenFeature(\r\n\r\n          [], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),\r\n\r\n  }\r\n\r\n  items_to_handlers = {\r\n\r\n      'image': slim.tfexample_decoder.Image(),\r\n\r\n      'label': slim.tfexample_decoder.Tensor('image\/class\/label'),\r\n\r\n  }\r\n\r\n  decoder = slim.tfexample_decoder.TFExampleDecoder(\r\n\r\n      keys_to_features, items_to_handlers)\r\n\r\n  labels_to_names = None\r\n\r\n  if dataset_utils.has_labels(dataset_dir):\r\n\r\n    labels_to_names = dataset_utils.read_label_file(dataset_dir)\r\n\r\n  return slim.dataset.Dataset(\r\n\r\n      data_sources=file_pattern,\r\n\r\n      reader=reader,\r\n\r\n      decoder=decoder,\r\n\r\n      num_samples=SPLITS_TO_SIZES[split_name],\r\n\r\n      items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,\r\n\r\n      num_classes=_NUM_CLASSES,\r\n\r\n      labels_to_names=labels_to_names)<\/pre>\n<p>You can use every dataset you want, you just have to change the name of the python file and of the dataset variables inside the python file. The dataset variables are marked in the code above. 
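For instance, the split name is substituted into `_FILE_PATTERN` and joined with the dataset directory. This small stand-alone sketch (plain Python, no TensorFlow needed; the helper name `resolve_pattern` is ours) shows what `get_split` ends up globbing for:

```python
import os

# Mirrors the _FILE_PATTERN logic in hp_plants.py: the split name is
# substituted into the pattern, which is then joined with the dataset directory.
_FILE_PATTERN = 'hp_plants_%s_*.tfrecord'
SPLITS_TO_SIZES = {'train': 7532, 'validation': 1883}

def resolve_pattern(split_name, dataset_dir):
    # Same guard as get_split: only known split names are accepted.
    if split_name not in SPLITS_TO_SIZES:
        raise ValueError('split name %s was not recognized.' % split_name)
    return os.path.join(dataset_dir, _FILE_PATTERN % split_name)

# On a POSIX system this yields /data/hp_dataset/hp_plants_train_*.tfrecord,
# i.e. the pattern the TFRecord reader matches against all training shards.
print(resolve_pattern('train', '/data/hp_dataset'))
```

This is why the `tfrecord_filename` used during conversion below must match the pattern chosen here.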
Next we need to make a reference to our dataset in the dataset factory. Open\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_factory.py<\/span>\u00a0in the datasets folder, import the new module at the top of the file (<span class=\"lang:default highlight:0 decode:true crayon-inline\">from datasets import hp_plants<\/span>) and add the name of the dataset to the datasets map.<\/p>\n<pre class=\"start-line:27 lang:python decode:true\">datasets_map = {\r\n\r\n    'cifar10': cifar10,\r\n\r\n    'flowers': flowers,\r\n\r\n    'imagenet': imagenet,\r\n\r\n    'mnist': mnist,\r\n\r\n    # added hp_plants as dataset\r\n\r\n    'hp_plants': hp_plants\r\n\r\n}<\/pre>\n<p>To convert our image data to an appropriate binary file format (TFRecord) we use a script provided\u00a0by\u00a0Kwotsin in a <a href=\"https:\/\/github.com\/kwotsin\/create_tfrecords\" target=\"_blank\" rel=\"noopener\">GitHub repository<\/a>. Clone the repository and copy\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">create_tfrecord.py<\/span>\u00a0and\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_utils.py<\/span>\u00a0to the slim folder. To create the TFRecord files run the following command in your terminal:<\/p>\n<pre class=\"nums:false lang:default decode:true\">python create_tfrecord.py \\\r\n\r\n    --dataset_dir=..\/..\/..\/hp_dataset \\\r\n\r\n    --tfrecord_filename=hp_plants \\\r\n\r\n    --validation_size=0.2<\/pre>\n<p>With the\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_dir<\/span>\u00a0parameter we define where our dataset is stored and with the\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">tfrecord_filename<\/span>\u00a0parameter we define the pattern of the TFRecord files. This pattern must match the pattern we defined in our dataset description and the dataset factory. The last parameter defines what fraction of the dataset should be used for validation. This step will create TFRecord files for training and validation. 
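Under the hood the train/validation split is just a cut of the shuffled file list. A minimal sketch of that logic (the helper name `split_dataset` is ours; it mirrors the script's `int(validation_size * n)` computation):

```python
import random

def split_dataset(filenames, validation_size, seed=0):
    """Shuffle the file list, then cut off the first chunk for validation --
    schematically what create_tfrecord.py does for --validation_size."""
    files = list(filenames)
    random.Random(seed).shuffle(files)  # fixed seed for reproducibility
    n_val = int(validation_size * len(files))
    return files[n_val:], files[:n_val]  # (training files, validation files)

train, val = split_dataset(['img%03d.jpg' % i for i in range(100)], 0.2)
print(len(train), len(val))  # -> 80 20
```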
With the current setting of the\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">validation_size<\/span>\u00a0parameter, 80 % of the data will be used for training and 20 % for validation. The results can be viewed in the houseplants dataset folder.<\/p>\n<pre class=\"nums:false lang:sh highlight:0 decode:true\">.\r\n\r\n\u251c\u2500\u2500 hp_plants_train_00000-of-00002.tfrecord\r\n\r\n\u251c\u2500\u2500 hp_plants_train_00001-of-00002.tfrecord\r\n\r\n\u251c\u2500\u2500 hp_plants_validation_00000-of-00002.tfrecord\r\n\r\n\u251c\u2500\u2500 hp_plants_validation_00001-of-00002.tfrecord\r\n\r\n\u251c\u2500\u2500 images\r\n\r\n\u2514\u2500\u2500 labels.txt<\/pre>\n<p>The script has created two TFRecord files for training and two for validation. It also created a labels file containing the 26 class names of the dataset. The images folder holds the houseplant images sorted by class: for every class there is a folder, named after the class, which contains the images of that class.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Training\"><\/span>Training<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Until now we have done general preparation and pre-processing. In the next steps we will set up our training, using\u00a0<a href=\"http:\/\/cs231n.github.io\/transfer-learning\/\" target=\"_blank\" rel=\"noopener\">Transfer Learning<\/a>. In practice\u00a0an entire convolutional neural network is rarely trained from scratch, because it is rare to have a dataset of sufficient size. With Transfer Learning, however, we can train a convolutional neural network on a small dataset, because we start from the pre-trained weights of the convolutional neural network. 
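The idea of fine-tuning can be sketched in plain Python: restore every pre-trained variable except those of the final classification layer. The variable names below are shortened, hypothetical examples; the filtering mirrors what the `checkpoint_exclude_scopes` argument of the training script expresses:

```python
# Hypothetical variable names standing in for the real MobileNet checkpoint.
checkpoint_vars = [
    'MobilenetV1/Conv2d_0/weights',
    'MobilenetV1/Conv2d_1_depthwise/depthwise_weights',
    'MobilenetV1/Logits/Conv2d_1c_1x1/weights',
    'MobilenetV1/Logits/Conv2d_1c_1x1/biases',
]

def vars_to_restore(all_vars, exclude_scopes):
    # Keep every variable whose name starts with none of the excluded scopes,
    # i.e. the selection expressed by --checkpoint_exclude_scopes.
    return [v for v in all_vars
            if not any(v.startswith(scope) for scope in exclude_scopes)]

restored = vars_to_restore(checkpoint_vars, ['MobilenetV1/Logits'])
print(restored)  # only the convolutional layers keep their pre-trained weights
```

The excluded logits variables are then initialized randomly and trained from scratch on our 26 classes.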
We just have to fine-tune it on our dataset.<\/p>\n<p>In the model repository of TensorFlow you can download multiple pre-trained weights of several different convolutional neural networks trained on ImageNet data. As mentioned above we are using a MobileNet in this blogpost series, whose pre-trained weights we have to download. We can find them in the <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/slim\/nets\/mobilenet_v1.md#pre-trained-models\" target=\"_blank\" rel=\"noopener\">MobileNet v1 description<\/a>\u00a0where we have to download\u00a0<a href=\"http:\/\/download.tensorflow.org\/models\/mobilenet_v1_2018_02_22\/mobilenet_v1_1.0_224.tgz\" target=\"_blank\" rel=\"nofollow noopener\">MobileNet_v1_1.0_224<\/a>. Copy the downloaded .tgz file to the slim folder, create a subfolder with the name\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">mobilenet_v1_1.0_224<\/span>\u00a0and extract it with\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">tar xf mobilenet_v1_1.0_224.tgz -C .\/mobilenet_v1_1.0_224<\/span>\u00a0to the subfolder. In the subfolder you can see multiple files.<\/p>\n<pre class=\"nums:false lang:sh highlight:0 decode:true\">.\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224.ckpt.data-00000-of-00001\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224.ckpt.index\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224.ckpt.meta\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224.tflite\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224_eval.pbtxt\r\n\r\n\u251c\u2500\u2500 mobilenet_v1_1.0_224_frozen.pb\r\n\r\n\u2514\u2500\u2500 mobilenet_v1_1.0_224_info.txt<\/pre>\n<p>The weights for transfer learning are stored in the .ckpt files.<\/p>\n<p>TensorFlow uses a dataflow graph to represent computations in terms of the dependencies between individual operations. 
Dataflow is a common programming model for parallel computing where the nodes represent units of computation and the edges represent the data consumed or produced,\u00a0which also applies to neural networks in TensorFlow. In the subfolder we have a whole graph of the MobileNet, which is stored in the provided .pb file. We&#8217;ll need this later to provision our model for mobile.<\/p>\n<p>To start our training we need to run\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">train_image_classifier.py<\/span>\u00a0with some arguments. We recommend training on a GPU\u00a0to speed up the process considerably. Run this command in your terminal to start training on the GPU:<\/p>\n<pre class=\"nums:false lang:sh decode:true\">python train_image_classifier.py \\\r\n\r\n    --train_dir=.\/train_dir \\\r\n\r\n    --dataset_dir=..\/..\/..\/hp_dataset \\\r\n\r\n    --dataset_name=hp_plants \\\r\n\r\n    --dataset_split_name=train \\\r\n\r\n    --model_name=mobilenet_v1 \\\r\n\r\n    --train_image_size=224 \\\r\n\r\n    --checkpoint_path=.\/mobilenet_v1_1.0_224\/mobilenet_v1_1.0_224.ckpt \\\r\n\r\n    --max_number_of_steps=30000 \\\r\n\r\n    --checkpoint_exclude_scopes=MobilenetV1\/Logits<\/pre>\n<p>If you don&#8217;t have a GPU available, you have to use the following command. 
With the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">clone_on_cpu=True<\/span>\u00a0all computations will be executed on the CPU.<\/p>\n<pre class=\"nums:false lang:sh decode:true\">python train_image_classifier.py \\\r\n\r\n    --train_dir=.\/train_dir \\\r\n\r\n    --dataset_dir=..\/..\/..\/hp_dataset \\\r\n\r\n    --dataset_name=hp_plants \\\r\n\r\n    --dataset_split_name=train \\\r\n\r\n    --model_name=mobilenet_v1 \\\r\n\r\n    --train_image_size=224 \\\r\n\r\n    --checkpoint_path=.\/mobilenet_v1_1.0_224\/mobilenet_v1_1.0_224.ckpt \\\r\n\r\n    --max_number_of_steps=30000 \\\r\n\r\n    --clone_on_cpu=True \\\r\n\r\n    --checkpoint_exclude_scopes=MobilenetV1\/Logits<\/pre>\n<p>With the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">train_dir<\/span>\u00a0we specify where checkpoints and training logs are written, while\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_dir<\/span>\u00a0points to the TFRecord files we created beforehand. As a further argument we have\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_name<\/span>\u00a0to select the dataset we want to use for training. Here we choose the houseplants dataset previously specified in the dataset description. The\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">dataset_split_name<\/span>\u00a0argument specifies which TFRecord files are used for training, so we select those from above. With the next argument,\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">model_name<\/span>, we specify which model we want to train. Here we choose the MobileNet as our model, setting the input image size to 224x224x3 with the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">train_image_size<\/span>. With the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">checkpoint_path<\/span>\u00a0we refer to the downloaded checkpoint of the MobileNet while also enabling Transfer Learning as training method. 
The argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">max_number_of_steps<\/span>\u00a0defines the number of training steps, which we set to 30,000.<\/p>\n<p>Finally we need to exclude some weights of the checkpoint with\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">checkpoint_exclude_scopes<\/span>, because the model was pre-trained on the ImageNet dataset. The weights of the fully connected layer, which does the classification in the end, were trained on the ImageNet dataset with its 1000 classes, while our houseplants dataset has 26 classes. So we cannot use the pre-trained weights for this layer and have to train it completely from scratch. During the training you see the loss converge, with the output looking like this:<\/p>\n<pre class=\"nums:false lang:default highlight:0 decode:true\">INFO:tensorflow:global step 29790: loss = 0.3194 (0.236 sec\/step)\r\n\r\nINFO:tensorflow:global step 29800: loss = 0.1820 (0.175 sec\/step)\r\n\r\nINFO:tensorflow:global step 29810: loss = 0.1972 (0.230 sec\/step)\r\n\r\nINFO:tensorflow:global step 29820: loss = 0.2426 (0.232 sec\/step)\r\n\r\nINFO:tensorflow:global step 29830: loss = 0.2625 (0.241 sec\/step)\r\n\r\nINFO:tensorflow:global step 29840: loss = 0.1558 (0.188 sec\/step)\r\n\r\nINFO:tensorflow:global step 29850: loss = 0.1601 (0.230 sec\/step)\r\n\r\nINFO:tensorflow:global step 29860: loss = 0.2257 (0.245 sec\/step)\r\n\r\nINFO:tensorflow:global step 29870: loss = 0.3663 (0.269 sec\/step)\r\n\r\nINFO:tensorflow:global step 29880: loss = 0.1686 (0.198 sec\/step)\r\n\r\nINFO:tensorflow:global step 29890: loss = 0.3222 (0.216 sec\/step)\r\n\r\nINFO:tensorflow:global step 29900: loss = 0.2520 (0.217 sec\/step)\r\n\r\nINFO:tensorflow:global step 29910: loss = 0.3735 (0.243 sec\/step)\r\n\r\nINFO:tensorflow:global step 29920: loss = 0.2633 (0.204 sec\/step)\r\n\r\nINFO:tensorflow:global step 29930: loss = 0.2714 (0.185 sec\/step)\r\n\r\nINFO:tensorflow:global step 29940: loss = 0.3153 (0.194 sec\/step)\r\n\r\nINFO:tensorflow:global step 29950: loss = 0.1891 (0.215 
sec\/step)\r\n\r\nINFO:tensorflow:global step 29960: loss = 0.2570 (0.197 sec\/step)\r\n\r\nINFO:tensorflow:global step 29970: loss = 0.1911 (0.203 sec\/step)\r\n\r\nINFO:tensorflow:global step 29980: loss = 0.1798 (0.222 sec\/step)\r\n\r\nINFO:tensorflow:global step 29990: loss = 0.1881 (0.218 sec\/step)\r\n\r\nINFO:tensorflow:global step 30000: loss = 0.1761 (0.226 sec\/step)\r\n\r\nINFO:tensorflow:Stopping Training.\r\n\r\nINFO:tensorflow:Finished training! Saving model to disk.<\/pre>\n<p>The model will be automatically saved in the specified train directory. After that we need to evaluate the trained model using the provided python script\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">eval_image_classifier.py<\/span>. Run the script with the following command:<\/p>\n<pre class=\"nums:false lang:sh decode:true\">python eval_image_classifier.py \\\r\n\r\n    --alsologtostderr \\\r\n\r\n    --checkpoint_path=.\/train_dir\/model.ckpt-30000 \\\r\n\r\n    --dataset_dir=..\/..\/..\/hp_dataset \\\r\n\r\n    --dataset_name=hp_plants \\\r\n\r\n    --dataset_split_name=validation \\\r\n\r\n    --model_name=mobilenet_v1 \\\r\n\r\n    --eval_image_size=224<\/pre>\n<p>It&#8217;s very important to refer to the right checkpoint for evaluation. This we can specify with the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">checkpoint_path<\/span>\u00a0where we refer to the checkpoint of the 30000 training steps. 
The output looks like this:<\/p>\n<pre class=\"nums:false lang:default highlight:0 decode:true\">INFO:tensorflow:Evaluation [5\/19]\r\n\r\nINFO:tensorflow:Evaluation [6\/19]\r\n\r\nINFO:tensorflow:Evaluation [7\/19]\r\n\r\nINFO:tensorflow:Evaluation [8\/19]\r\n\r\nINFO:tensorflow:Evaluation [9\/19]\r\n\r\nINFO:tensorflow:Evaluation [10\/19]\r\n\r\nINFO:tensorflow:Evaluation [11\/19]\r\n\r\nINFO:tensorflow:Evaluation [12\/19]\r\n\r\nINFO:tensorflow:Evaluation [13\/19]\r\n\r\nINFO:tensorflow:Evaluation [14\/19]\r\n\r\nINFO:tensorflow:Evaluation [15\/19]\r\n\r\nINFO:tensorflow:Evaluation [16\/19]\r\n\r\nINFO:tensorflow:Evaluation [17\/19]\r\n\r\nINFO:tensorflow:Evaluation [18\/19]\r\n\r\nINFO:tensorflow:Evaluation [19\/19]\r\n\r\neval\/Accuracy[0.757894754]eval\/Recall_5[0.928947389]<\/pre>\n<p>We can see that our MobileNet trained with Transfer Learning in 30,000 steps achieves an accuracy of about 75 % and a top-5 recall of about 93 %. That&#8217;s quite good for such a short training. To boost the performance we could raise the number of training steps, but this may lead to overfitting. Another approach is called <a href=\"http:\/\/cs231n.github.io\/neural-networks-3\/#hyper\" target=\"_blank\" rel=\"noopener\">Hyperparameter Optimization<\/a>,\u00a0which automatically searches for good parameters like the learning rate or\u00a0regularization strength for our neural network.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Mobile-Deployment\"><\/span>Mobile Deployment<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>After we have finished training and evaluating our MobileNet, we can now start preparing the mobile deployment. For this we first need to create an inference graph, which represents the architecture of the MobileNet and is used to map our trained weights onto the correct graph. 
To create the\u00a0inference graph we need to run this command:<\/p>\n<pre class=\"nums:false lang:sh decode:true\">python export_inference_graph.py \\\r\n\r\n  --alsologtostderr \\\r\n\r\n  --model_name=mobilenet_v1 \\\r\n\r\n  --output_file=.\/inference_graph_mobilenet.pb \\\r\n\r\n  --dataset_name=hp_plants<\/pre>\n<p>You can now find the\u00a0correct graph representation of the MobileNet in the slim folder as a .pb file with the name\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">inference_graph_mobilenet.pb<\/span>. Now we need to freeze our trained MobileNet, mapping the trained weights onto the correct graph representation of the MobileNet:<\/p>\n<pre class=\"nums:false lang:sh decode:true\">python freeze_graph.py \\\r\n\r\n    --input_graph=.\/inference_graph_mobilenet.pb \\\r\n\r\n    --input_binary=true \\\r\n\r\n    --input_checkpoint=.\/train_dir\/model.ckpt-30000 \\\r\n\r\n    --output_graph=.\/frozen_mobilenet.pb \\\r\n\r\n    --output_node_names=MobilenetV1\/Predictions\/Reshape_1<\/pre>\n<p>As you can see, our previously generated inference graph is used as input for the freezing. Also we are using the trained weights from our latest checkpoint. With the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">output_graph<\/span>\u00a0we specify the output name of the frozen graph. Furthermore we need to provide the argument\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">output_node_names<\/span>\u00a0with the right output node name. The information about input and output nodes of the MobileNet can be found in the previously downloaded files at\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline\">.\/mobilenet_v1_1.0_224\/mobilenet_v1_1.0_224_info.txt<\/span>.<\/p>\n<p>As a last step we need to optimize our graph for mobile. 
This step will reduce the binary size of the graph by removing operations that are unnecessary for classification and by rounding the provided weights of the model. Rounding the weights leads to a small accuracy loss but greatly improves the classification speed of the model, which is very important for mobile devices. To optimize our graph we need to run the following command:<\/p>\n<pre class=\"nums:false lang:sh decode:true \">python optimize_for_inference.py \\\r\n\r\n    --input=.\/frozen_mobilenet.pb \\\r\n\r\n    --output=.\/opt_frozen_mobilenet.pb \\\r\n\r\n    --input_names=input \\\r\n\r\n    --output_names=MobilenetV1\/Predictions\/Reshape_1<\/pre>\n<p>As you can see, the input for the optimization is our previously frozen graph, which will be optimized for mobile and saved as a .pb file at\u00a0<span class=\"lang:default highlight:0 decode:true crayon-inline \">opt_frozen_mobilenet.pb<\/span>.<\/p>\n<p>Now we have a fully functional and mobile-optimized graph which we can deploy to our Android or iOS app\u2014we&#8217;ll show you how in future articles!<\/p>\n<p>Read on to learn how to <a href=\"https:\/\/www.inovex.de\/blog\/tensorflow-mobile-android-app\/\" target=\"_blank\" rel=\"noopener\">integrate the graph with your Android app<\/a>!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Smart Assistants, fancy image filters in Snapchat and apps like Prisma all have one thing in common\u2014they are powered by Machine Learning. The use of Machine Learning in mobile apps is growing and new mobile apps are developed with Machine Learning based services as business models. 
In this blog series we want to give you [&hellip;]<\/p>\n","protected":false},"author":69,"featured_media":13428,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"ep_exclude_from_search":false,"footnotes":""},"tags":[509,510,151],"service":[420,76],"coauthors":[{"id":69,"display_name":"Zoran Cupic","user_nicename":"zcupic"}],"class_list":["post-21085","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-ai-2","tag-apps-2","tag-deep-learning","service-apps","service-artificial-intelligence"],"acf":[]}
GmbH\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#organization\",\"name\":\"inovex GmbH\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/2021\\\/03\\\/inovex-logo-16-9-1.png\",\"contentUrl\":\"https:\\\/\\\/www.inovex.de\\\/wp-content\\\/uploads\\\/2021\\\/03\\\/inovex-logo-16-9-1.png\",\"width\":1921,\"height\":1081,\"caption\":\"inovex GmbH\"},\"image\":{\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/inovexde\",\"https:\\\/\\\/x.com\\\/inovexgmbh\",\"https:\\\/\\\/www.instagram.com\\\/inovexlife\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/inovex\",\"https:\\\/\\\/www.youtube.com\\\/channel\\\/UC7r66GT14hROB_RQsQBAQUQ\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/#\\\/schema\\\/person\\\/5a5b0a7151b99bffe15b0fd126d30718\",\"name\":\"Zoran 
Cupic\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=gb79481e5f5244f7cb00dcfb7f9ef5a4c\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=g\",\"caption\":\"Zoran Cupic\"},\"url\":\"https:\\\/\\\/www.inovex.de\\\/de\\\/blog\\\/author\\\/zcupic\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"TensorFlow Mobile: Training and Deploying a Neural Network - inovex GmbH","description":"In this blog series we explain how you can train and deploy a convolutional neural network for image classification to a mobile app using TensorFlow\u00a0Mobile.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/","og_locale":"de_DE","og_type":"article","og_title":"TensorFlow Mobile: Training and Deploying a Neural Network - inovex GmbH","og_description":"In this blog series we explain how you can train and deploy a convolutional neural network for image classification to a mobile app using TensorFlow\u00a0Mobile.","og_url":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/","og_site_name":"inovex 
GmbH","article_publisher":"https:\/\/www.facebook.com\/inovexde","article_published_time":"2018-05-08T09:16:57+00:00","article_modified_time":"2025-03-19T06:29:16+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080.png","type":"image\/png"}],"author":"Zoran Cupic","twitter_card":"summary_large_image","twitter_image":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080-1024x576.png","twitter_creator":"@inovexgmbh","twitter_site":"@inovexgmbh","twitter_misc":{"Verfasst von":"Zoran Cupic","Gesch\u00e4tzte Lesezeit":"15\u00a0Minuten","Written by":"Zoran Cupic"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#article","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/"},"author":{"name":"Zoran Cupic","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/5a5b0a7151b99bffe15b0fd126d30718"},"headline":"TensorFlow Mobile: Training and Deploying a Neural Network","datePublished":"2018-05-08T09:16:57+00:00","dateModified":"2025-03-19T06:29:16+00:00","mainEntityOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/"},"wordCount":1889,"commentCount":7,"publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080.png","keywords":["Ai","Apps","Deep Learning"],"articleSection":["Analytics","Applications","English 
Content","General"],"inLanguage":"de","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/","url":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/","name":"TensorFlow Mobile: Training and Deploying a Neural Network - inovex GmbH","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#primaryimage"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080.png","datePublished":"2018-05-08T09:16:57+00:00","dateModified":"2025-03-19T06:29:16+00:00","description":"In this blog series we explain how you can train and deploy a convolutional neural network for image classification to a mobile app using TensorFlow\u00a0Mobile.","breadcrumb":{"@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#primaryimage","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/04\/Zeichenfla\u0308che-11080.png","width":1920,"height":1080,"caption":"Das Tensorflow Logo auf zwei Smartphone 
Bildschirmen."},{"@type":"BreadcrumbList","@id":"https:\/\/www.inovex.de\/de\/blog\/tensorflow-mobile-training-and-deploying-a-neural-network\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.inovex.de\/de\/"},{"@type":"ListItem","position":2,"name":"TensorFlow Mobile: Training and Deploying a Neural Network"}]},{"@type":"WebSite","@id":"https:\/\/www.inovex.de\/de\/#website","url":"https:\/\/www.inovex.de\/de\/","name":"inovex GmbH","description":"","publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.inovex.de\/de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/www.inovex.de\/de\/#organization","name":"inovex GmbH","url":"https:\/\/www.inovex.de\/de\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","width":1921,"height":1081,"caption":"inovex GmbH"},"image":{"@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/inovexde","https:\/\/x.com\/inovexgmbh","https:\/\/www.instagram.com\/inovexlife\/","https:\/\/www.linkedin.com\/company\/inovex","https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ"]},{"@type":"Person","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/5a5b0a7151b99bffe15b0fd126d30718","name":"Zoran 
Cupic","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/secure.gravatar.com\/avatar\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=gb79481e5f5244f7cb00dcfb7f9ef5a4c","url":"https:\/\/secure.gravatar.com\/avatar\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6d80d21bc62d93641adc813829da719e355f8aa4655031939b4cef5f2053c5e3?s=96&d=retro&r=g","caption":"Zoran Cupic"},"url":"https:\/\/www.inovex.de\/de\/blog\/author\/zcupic\/"}]}},"_links":{"self":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/21085","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/users\/69"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/comments?post=21085"}],"version-history":[{"count":4,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/21085\/revisions"}],"predecessor-version":[{"id":61293,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/21085\/revisions\/61293"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media\/13428"}],"wp:attachment":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media?parent=21085"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/tags?post=21085"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/service?post=21085"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/coauthors?post=21085"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}