{"id":20842,"date":"2021-03-01T09:21:29","date_gmt":"2021-03-01T08:21:29","guid":{"rendered":"https:\/\/www.inovex.de\/blog\/?p=20842"},"modified":"2022-12-01T12:22:37","modified_gmt":"2022-12-01T11:22:37","slug":"data-processing-dask-rapids-installing-data-science-app-dask-client","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/","title":{"rendered":"Data Processing Scaled Up and Out with Dask and RAPIDS: Installing a Data Science App as Dask Client (2\/3)"},"content":{"rendered":"<p>This blog post tutorial shows how a scalable and high-performance environment for machine learning can be set up using the ingredients GPUs, Kubernetes clusters, Dask and Jupyter.\u00a0In the <a href=\"https:\/\/www.inovex.de\/blog\/data-processing-dask-rapids-kubernetes-cluster-gpu-support-13\/\">first article of our blog series<\/a> we have set up a Kubernetes cluster with access to GPUs. In this part we will add containerized applications to the cluster to be able to run data processing workloads in our cluster. Being more precise: we will prepare a notebook image that has CUDA installed which is required if we want to use GPU-based frameworks.\u00a0Furthermore, the image should contain Dask, Rapids and Dask-Rapids. As soon as the image is ready, we will deploy JupyterHub which spawns said notebook image as a container for each user.\u00a0<!--more--><\/p>\n<p>We will use JupyterLab notebooks as an interactive environment to start our data processing algorithms. In other words, JupyterLab will act as a Dask client. As we want to provide an environment not only for one data scientist but for a group of users, we decided to install JupyterHub on our Kubernetes cluster. 
JupyterHub makes it possible to serve a pre-configured data science environment to a group of users.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#Permissions-for-Dask-Clients\" >Permissions for Dask-Clients<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#Docker-Image-for-Jupyter\" >Docker Image for Jupyter<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#Deploying-Jupyterhub\" >Deploying Jupyterhub<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#Outlook-on-Part-3\" >Outlook on Part 3<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Permissions-for-Dask-Clients\"><\/span>Permissions for Dask-Clients<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>First, we have to take care of the permissions of our JupyterLab instances. When used as a Dask client, an instance needs sufficient permissions to start new pods that act as Dask workers. Since we decided to install JupyterHub, no extra configuration is required: JupyterHub uses a Service Account with sufficient permissions by default. 
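As a sketch of what "sufficient permissions" means in Kubernetes terms, the following RBAC objects grant a Service Account the right to manage pods in a namespace. The Role name is made up for illustration; the Service Account name and namespace match the ones used later in this post.

```yaml
# Illustrative RBAC sketch: allow a Dask client's Service Account
# to create, inspect and delete worker pods in its own namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dask-worker-manager   # illustrative name
  namespace: kubeyard
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dask-worker-manager   # illustrative name
  namespace: kubeyard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dask-worker-manager
subjects:
  - kind: ServiceAccount
    name: jupyter-service-account
    namespace: kubeyard
```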
If you want to use Dask from a different environment, you will have to grant your client permissions to create, delete and view your Dask worker pods via a Service Account.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Docker-Image-for-Jupyter\"><\/span>Docker Image for Jupyter<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>JupyterHub is a multi-tenant version of JupyterLab. The hub creates a pod in the cluster for each user and pulls the notebook image that runs on that pod. There are official, ready-to-use Jupyter images like the Minimal-Notebook or the Data-Science-Notebook. However, the RAPIDS library requires the CUDA Toolkit, so we cannot use these base images and simply add RAPIDS and Dask to them.<\/p>\n<p>A better idea is to create a base image which contains Jupyter and CUDA and then use it to build an image with RAPIDS and Dask on top. Since RAPIDS and Dask are still in active development and new versions are released frequently, keeping Jupyter and CUDA in a separate base image makes it easier to maintain our final image.<\/p>\n<p>Fortunately, there are not only official notebook images but also official NVIDIA images with CUDA, and we can combine both. We will use the base-notebook Dockerfile from <a href=\"https:\/\/github.com\/jupyter\/docker-stacks\/tree\/master\/base-notebook\">here<\/a> and the CUDA 10.2 base Dockerfile for Ubuntu 18.04 from <a href=\"https:\/\/gitlab.com\/nvidia\/container-images\/cuda\/blob\/master\/dist\/ubuntu18.04\/10.2\/base\/Dockerfile\">here<\/a>. We then combine both of them into a single image. 
Keep in mind that for the base-notebook you need the following files next to your Dockerfile:<\/p>\n<ol>\n<li>fix-permissions<\/li>\n<li>jupyter_notebook_config.py<\/li>\n<li>start.sh<\/li>\n<li>start-notebook.sh<\/li>\n<li>start-singleuser.sh<\/li>\n<\/ol>\n<p>All of these files can be found in the base-notebook repository linked above. The resulting Dockerfile is listed below:<\/p>\n<pre class=\"lang:sh decode:true \">ARG ROOT_CONTAINER=ubuntu:bionic-20200311@sha256:e5dd9dbb37df5b731a6688fa49f4003359f6f126958c9c928f937bec69836320\r\n\r\nARG BASE_CONTAINER=$ROOT_CONTAINER\r\n\r\nFROM $BASE_CONTAINER\r\n\r\nLABEL maintainer=\"Jupyter Project &lt;jupyter@googlegroups.com&gt;\"\r\n\r\nARG NB_USER=\"jovyan\"\r\nARG NB_UID=\"1000\"\r\nARG NB_GID=\"100\"\r\n\r\nUSER root\r\n\r\n# Install all OS dependencies for notebook server that starts but lacks all\r\n# features (e.g., download as all possible file formats)\r\nENV DEBIAN_FRONTEND noninteractive\r\n\r\nRUN apt-get update \\\r\n    &amp;&amp; apt-get install -yq --no-install-recommends \\\r\n    wget \\\r\n    bzip2 \\\r\n    ca-certificates \\\r\n    sudo \\\r\n    locales \\\r\n    fonts-liberation \\\r\n    run-one \\\r\n    &amp;&amp; apt-get clean &amp;&amp; rm -rf \/var\/lib\/apt\/lists\/*\r\n\r\nRUN echo \"en_US.UTF-8 UTF-8\" &gt; \/etc\/locale.gen &amp;&amp; \\\r\n    locale-gen\r\n\r\n# Configure environment\r\nENV CONDA_DIR=\/opt\/conda \\\r\n    SHELL=\/bin\/bash \\\r\n    NB_USER=$NB_USER \\\r\n    NB_UID=$NB_UID \\\r\n    NB_GID=$NB_GID \\\r\n    LC_ALL=en_US.UTF-8 \\\r\n    LANG=en_US.UTF-8 \\\r\n    LANGUAGE=en_US.UTF-8\r\n\r\nENV PATH=$CONDA_DIR\/bin:$PATH \\\r\n    HOME=\/home\/$NB_USER\r\n\r\n# Copy a script that we will use to correct permissions after running certain commands\r\nCOPY fix-permissions \/usr\/local\/bin\/fix-permissions\r\nRUN chmod a+rx \/usr\/local\/bin\/fix-permissions\r\n\r\n# Enable prompt color in the skeleton .bashrc before creating the default NB_USER\r\nRUN sed -i 's\/^#force_color_prompt=yes\/force_color_prompt=yes\/' \/etc\/skel\/.bashrc\r\n\r\n# Create NB_USER with name jovyan with UID=1000 in the 'users' group\r\n# and make sure these dirs are writable by the `users` group.\r\nRUN echo \"auth requisite pam_deny.so\" &gt;&gt; \/etc\/pam.d\/su &amp;&amp; \\\r\n    sed -i.bak -e 's\/^%admin\/#%admin\/' \/etc\/sudoers &amp;&amp; \\\r\n    sed -i.bak -e 's\/^%sudo\/#%sudo\/' \/etc\/sudoers &amp;&amp; \\\r\n    useradd -m -s \/bin\/bash -N -u $NB_UID $NB_USER &amp;&amp; \\\r\n    mkdir -p $CONDA_DIR &amp;&amp; \\\r\n    chown $NB_USER:$NB_GID $CONDA_DIR &amp;&amp; \\\r\n    chmod g+w \/etc\/passwd &amp;&amp; \\\r\n    fix-permissions $HOME &amp;&amp; \\\r\n    fix-permissions $CONDA_DIR\r\n\r\nUSER $NB_UID\r\nWORKDIR $HOME\r\nARG PYTHON_VERSION=default\r\n\r\n# Setup work directory for backward-compatibility\r\nRUN mkdir \/home\/$NB_USER\/work &amp;&amp; \\\r\n    fix-permissions \/home\/$NB_USER\r\n\r\nENV MINICONDA_VERSION=4.6.14 \\\r\n    CONDA_VERSION=4.7.10\r\n\r\nRUN cd \/tmp &amp;&amp; \\\r\n    wget --quiet https:\/\/repo.continuum.io\/miniconda\/Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh &amp;&amp; \\\r\n    echo \"718259965f234088d785cad1fbd7de03 *Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh\" | md5sum -c - &amp;&amp; \\\r\n    \/bin\/bash Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh -f -b -p $CONDA_DIR &amp;&amp; \\\r\n    rm Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh &amp;&amp; \\\r\n    echo \"conda ${CONDA_VERSION}\" &gt;&gt; $CONDA_DIR\/conda-meta\/pinned &amp;&amp; \\\r\n    $CONDA_DIR\/bin\/conda config --system --prepend channels conda-forge &amp;&amp; \\\r\n    $CONDA_DIR\/bin\/conda config --system --set auto_update_conda false &amp;&amp; \\\r\n    $CONDA_DIR\/bin\/conda config --system --set show_channel_urls true &amp;&amp; \\\r\n    $CONDA_DIR\/bin\/conda install --quiet --yes conda &amp;&amp; \\\r\n    $CONDA_DIR\/bin\/conda update --all --quiet --yes &amp;&amp; \\\r\n    conda list python | grep '^python ' | tr -s ' ' | cut -d '.' -f 1,2 | sed 's\/$\/.*\/' &gt;&gt; $CONDA_DIR\/conda-meta\/pinned &amp;&amp; \\\r\n    conda clean --all -f -y &amp;&amp; \\\r\n    rm -rf \/home\/$NB_USER\/.cache\/yarn &amp;&amp; \\\r\n    fix-permissions $CONDA_DIR &amp;&amp; \\\r\n    fix-permissions \/home\/$NB_USER\r\n\r\n# Install Tini\r\nRUN conda install --quiet --yes 'tini=0.18.0' &amp;&amp; \\\r\n    conda list tini | grep tini | tr -s ' ' | cut -d ' ' -f 1,2 &gt;&gt; $CONDA_DIR\/conda-meta\/pinned &amp;&amp; \\\r\n    conda clean --all -f -y &amp;&amp; \\\r\n    fix-permissions $CONDA_DIR &amp;&amp; \\\r\n    fix-permissions \/home\/$NB_USER\r\n\r\n# Install Jupyter Notebook, Lab, and Hub\r\n# Generate a notebook server config\r\n# Cleanup temporary files\r\n# Correct permissions\r\n# Do all this in a single RUN command to avoid duplicating all of the\r\n# files across image layers when the permissions change\r\nRUN conda install --quiet --yes \\\r\n    'notebook=6.0.3' \\\r\n    'jupyterhub=1.1.0' \\\r\n    'jupyterlab=2.0.1' &amp;&amp; \\\r\n    conda clean --all -f -y &amp;&amp; \\\r\n    npm cache clean --force &amp;&amp; \\\r\n    jupyter notebook --generate-config &amp;&amp; \\\r\n    rm -rf $CONDA_DIR\/share\/jupyter\/lab\/staging &amp;&amp; \\\r\n    rm -rf \/home\/$NB_USER\/.cache\/yarn &amp;&amp; \\\r\n    fix-permissions $CONDA_DIR &amp;&amp; \\\r\n    fix-permissions \/home\/$NB_USER\r\n\r\nEXPOSE 8888\r\n\r\n# Configure container startup\r\nENTRYPOINT [\"tini\", \"-g\", \"--\"]\r\nCMD [\"start-notebook.sh\"]\r\n\r\n# Copy local files as late as possible to avoid cache busting\r\nCOPY start.sh start-notebook.sh start-singleuser.sh \/usr\/local\/bin\/\r\nCOPY jupyter_notebook_config.py \/etc\/jupyter\/\r\n\r\n# Fix permissions on \/etc\/jupyter as root\r\nUSER root\r\nRUN fix-permissions \/etc\/jupyter\/\r\n\r\n# Switch back to jovyan to avoid accidental container runs as root\r\nUSER $NB_UID\r\n\r\n################## CUDA\r\n\r\nUSER root\r\n\r\n# FROM ubuntu:18.04\r\nLABEL maintainer \"NVIDIA CORPORATION &lt;cudatools@nvidia.com&gt;\"\r\n\r\nRUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends \\\r\n    gnupg2 curl ca-certificates &amp;&amp; \\\r\n    curl -fsSL https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu1804\/x86_64\/7fa2af80.pub | apt-key add - &amp;&amp; \\\r\n    echo \"deb https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu1804\/x86_64 \/\" &gt; \/etc\/apt\/sources.list.d\/cuda.list &amp;&amp; \\\r\n    echo \"deb https:\/\/developer.download.nvidia.com\/compute\/machine-learning\/repos\/ubuntu1804\/x86_64 \/\" &gt; \/etc\/apt\/sources.list.d\/nvidia-ml.list &amp;&amp; \\\r\n    apt-get purge --autoremove -y curl &amp;&amp; \\\r\n    rm -rf \/var\/lib\/apt\/lists\/*\r\n\r\nENV CUDA_VERSION 10.2.89\r\nENV CUDA_PKG_VERSION 10-2=$CUDA_VERSION-1\r\n\r\n# For libraries in the cuda-compat-* package: https:\/\/docs.nvidia.com\/cuda\/eula\/index.html#attachment-a\r\nRUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends \\\r\n    cuda-cudart-$CUDA_PKG_VERSION \\\r\n    cuda-compat-10-2 &amp;&amp; \\\r\n    ln -s cuda-10.2 \/usr\/local\/cuda &amp;&amp; \\\r\n    rm -rf \/var\/lib\/apt\/lists\/*\r\n\r\n# Required for nvidia-docker v1\r\nRUN echo \"\/usr\/local\/nvidia\/lib\" &gt;&gt; \/etc\/ld.so.conf.d\/nvidia.conf &amp;&amp; \\\r\n    echo \"\/usr\/local\/nvidia\/lib64\" &gt;&gt; \/etc\/ld.so.conf.d\/nvidia.conf\r\n\r\nENV PATH \/usr\/local\/nvidia\/bin:\/usr\/local\/cuda\/bin:${PATH}\r\nENV LD_LIBRARY_PATH \/usr\/local\/nvidia\/lib:\/usr\/local\/nvidia\/lib64\r\n\r\n# nvidia-container-runtime\r\nENV NVIDIA_VISIBLE_DEVICES all\r\nENV NVIDIA_DRIVER_CAPABILITIES compute,utility\r\nENV NVIDIA_REQUIRE_CUDA \"cuda&gt;=10.2 brand=tesla,driver&gt;=384,driver&lt;385 brand=tesla,driver&gt;=396,driver&lt;397 brand=tesla,driver&gt;=410,driver&lt;411 brand=tesla,driver&gt;=418,driver&lt;419\"\r\n\r\nUSER $NB_UID<\/pre>\n<p>It is important to enable the root user for the CUDA part and to switch back to the normal user afterwards.<\/p>\n<p>We have to build this image and push it to a repository of our choice. Then we have a base image with Jupyter and CUDA. To create the final image on top of it, we need to install the RAPIDS libraries (cuDF and cuML), Dask, Dask-cuDF and Dask-cuML. The non-Dask RAPIDS libraries are required by their Dask counterparts. 
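For reference, building and pushing such an image could look like the following sketch; the registry variable and the image name are placeholders, not fixed values from this setup.

```shell
# Sketch: build the image from the directory containing the Dockerfile
# and the helper files, then push it. REGISTRY and the image name are
# placeholders for your own registry path.
REGISTRY=<your_registry>
docker build -t "$REGISTRY/jupyter-cuda-base:latest" .
docker push "$REGISTRY/jupyter-cuda-base:latest"
```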
This can be done in just a few steps; the Dockerfile looks like this:<\/p>\n<pre class=\"lang:sh decode:true \">FROM &lt;your_registry&gt;\r\n\r\n###############################################cudf\r\n\r\nRUN conda install -c rapidsai -c nvidia -c conda-forge \\\r\n    -c defaults cudf=0.13 cuml=0.13 python=3.7\r\n\r\n##############################################DASK\r\n\r\nRUN conda install --yes \\\r\n    -c conda-forge -c rapidsai -c nvidia -c defaults \\\r\n    python-blosc \\\r\n    cytoolz \\\r\n    dask==2.15.0 \\\r\n    lz4 \\\r\n    nomkl \\\r\n    numpy==1.18.1 \\\r\n    pandas==0.25.3 \\\r\n    tini==0.18.0 \\\r\n    zstd==1.4.3 \\\r\n    &amp;&amp; conda clean -tipsy \\\r\n    &amp;&amp; find \/opt\/conda\/ -type f,l -name '*.a' -delete \\\r\n    &amp;&amp; find \/opt\/conda\/ -type f,l -name '*.pyc' -delete \\\r\n    &amp;&amp; find \/opt\/conda\/ -type f,l -name '*.js.map' -delete \\\r\n    &amp;&amp; find \/opt\/conda\/lib\/python*\/site-packages\/bokeh\/server\/static -type f,l -name '*.js' -not -name '*.min.js' -delete \\\r\n    &amp;&amp; rm -rf \/opt\/conda\/pkgs\r\n\r\nRUN python3 -m pip install pip --upgrade\r\n\r\nCOPY requirements.txt \/home\/files\/requirements.txt\r\n\r\nRUN pip install --default-timeout=300 -r \/home\/files\/requirements.txt\r\n\r\n#USER $NB_UID<\/pre>\n<p>The first RUN command installs cuDF and cuML. The second one installs Dask together with a few required libraries such as NumPy and Pandas; its package list was copied from the daskdev\/dask:latest Dockerfile. We will discuss later why copying it was a good idea.<\/p>\n<p>Finally, the libraries specified in the requirements.txt (which needs to be accessible while building the image) are installed via pip. 
These libraries are dask-kubernetes, dask_cuda, dask_cudf, dask_cuml and GCSFS (needed to read from Google Cloud Storage buckets).<\/p>\n<p>Again, we build the image and push it to a repository.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Deploying-Jupyterhub\"><\/span>Deploying Jupyterhub<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Now we are ready to deploy JupyterHub into our Kubernetes cluster. This <a href=\"https:\/\/zero-to-jupyterhub.readthedocs.io\/en\/latest\/#setup-a-kubernetes-cluster\">link<\/a> provides a lot of information about deploying it on Kubernetes, including many details on how to customize and personalize your deployment. We will come straight to the point. Create a file <i>config.yaml<\/i> according to your configuration preferences. My config looks like this:<\/p>\n<pre class=\"lang:yaml decode:true \">proxy:\r\n  secretToken: \"&lt;YOUR 32 BYTES SECURITY TOKEN&gt;\"\r\n  # Do not assign a public IP\r\n  service:\r\n    type: NodePort\r\n\r\nsingleuser:\r\n  defaultUrl: \"\/lab\"\r\n  # The service account we created for Jupyter\r\n  serviceAccountName: jupyter-service-account\r\n  # The final image we built\r\n  image:\r\n    name: &lt;REGISTRY PATH HERE&gt;\r\n    tag: &lt;TAG&gt;\r\n  storage:\r\n    # Customize storage for the jupyter client (default 10 Gi)\r\n    capacity: 20Gi\r\n    # Mounts for the NVIDIA drivers\r\n    extraVolumes:\r\n      - name: nvidia-debug-tools\r\n        hostPath:\r\n          path: \/home\/kubernetes\/bin\/nvidia\/bin\r\n      - name: nvidia-libraries\r\n        hostPath:\r\n          path: \/home\/kubernetes\/bin\/nvidia\/lib64\r\n      # The NFS PVC\r\n      - name: my-pvc-nfs\r\n        persistentVolumeClaim:\r\n          claimName: nfs\r\n    extraVolumeMounts:\r\n      # Mount the NVIDIA driver paths\r\n      - name: nvidia-debug-tools\r\n        mountPath: \/usr\/local\/bin\/nvidia\r\n      - name: nvidia-libraries\r\n        mountPath: \/usr\/local\/nvidia\/lib64\r\n      # Mount the NFS\r\n      - name: my-pvc-nfs\r\n        mountPath: \"\/home\/jovyan\/mnt\"\r\n  # Create 2 profiles: a notebook server with or without a GPU\r\n  profileList:\r\n    - display_name: \"GPU Server\"\r\n      description: \"Spawns a notebook server with access to a GPU\"\r\n      kubespawner_override:\r\n        extra_resource_limits:\r\n          nvidia.com\/gpu: \"1\"\r\n    - display_name: \"CPU Server\"\r\n      description: \"Spawns a notebook server without access to a GPU\"\r\n\r\nhub:\r\n  extraConfig:\r\n    # Use JupyterLab by default\r\n    1_jupyterlab: |\r\n      c.Spawner.cmd = ['jupyter-labhub']\r\n\r\n# Create a simple authentication\r\nauth:\r\n  type: dummy\r\n  dummy:\r\n    password: '&lt;YOUR PASSWORD&gt;'\r\n  whitelist:\r\n    users:\r\n      - &lt;USER&gt;<\/pre>\n<p>To create your 32-byte security token, simply run:<\/p>\n<pre class=\"lang:sh decode:true \">openssl rand -hex 32<\/pre>\n<p>\u2026 in the terminal and paste the result into line 2 of your config. Then specify your image, mount the NFS volume and the paths to the NVIDIA drivers (the driver mounts might or might not be necessary in your setup). You can create different profiles with different resource requests. In the above example, a profile with access to a GPU and one without are available. A simple password-based authentication is configured as well.<\/p>\n<p>Now we can add the JupyterHub Helm chart repository:<\/p>\n<pre class=\"lang:sh decode:true \">helm repo add jupyterhub https:\/\/jupyterhub.github.io\/helm-chart\/\r\nhelm repo update<\/pre>\n<p>After a while, an \u201cUpdate Complete. <em>Happy Helming<\/em>\u201d message should appear. We are ready to deploy the Hub. 
From the directory containing the config.yaml, run:<\/p>\n<pre class=\"lang:sh decode:true \">helm upgrade --install jupyterhub jupyterhub\/jupyterhub --namespace kubeyard --version=0.8.2 --values config.yaml<\/pre>\n<p>You might want to add a <strong>--timeout<\/strong> flag with a higher value, like 1000, since the image is quite big and pulling it sometimes results in timeout errors. The deployment should create a Hub and a Proxy pod. As soon as both are running, we can port-forward the proxy to port 8000:<\/p>\n<pre class=\"lang:sh decode:true \">kubectl port-forward &lt;PROXY-POD NAME&gt; 8000<\/pre>\n<h2><span class=\"ez-toc-section\" id=\"Outlook-on-Part-3\"><\/span>Outlook on Part 3<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Finally, Jupyter is up and running and port-forwarding is enabled. We can now access JupyterHub from the browser, log in (if authentication is on) and see the workspace of our JupyterLab. In the next part of our series we will finally use the prepared infrastructure for data science and compare the efficiency of four different approaches \u2013 including the usage of multiple GPUs!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This blog post tutorial shows how a scalable and high-performance environment for machine learning can be set up using the ingredients GPUs, Kubernetes clusters, Dask and Jupyter.\u00a0In the first article of our blog series we have set up a Kubernetes cluster with access to GPUs. 
In this part we will add containerized applications to the [&hellip;]<\/p>\n","protected":false},"author":179,"featured_media":23617,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"ep_exclude_from_search":false,"footnotes":""},"tags":[],"service":[420,431],"coauthors":[{"id":179,"display_name":"Rafal Lokuciejewski","user_nicename":"rafal-lokuciejewskiinovex-de"}],"class_list":["post-20842","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","service-apps","service-data-science"],"acf":[]}
GmbH","isPartOf":{"@id":"https:\/\/www.inovex.de\/de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#primaryimage"},"image":{"@id":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#primaryimage"},"thumbnailUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/gpgpu-2-1.png","datePublished":"2021-03-01T08:21:29+00:00","dateModified":"2022-12-01T11:22:37+00:00","description":"In this part we will add containerized applications to our Kubernetes cluster to be able to run data processing workloads in our cluster with Dask. Being more precise: we will prepare a notebook image that has CUDA installed which is required if we want to use GPU-based frameworks.","breadcrumb":{"@id":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#primaryimage","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/gpgpu-2-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/gpgpu-2-1.png","width":1920,"height":1080,"caption":"A graphics card with 2 active cores for machine learning with dask"},{"@type":"BreadcrumbList","@id":"https:\/\/www.inovex.de\/de\/blog\/data-processing-dask-rapids-installing-data-science-app-dask-client\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.inovex.de\/de\/"},{"@type":"ListItem","position":2,"name":"Data Processing Scaled Up and Out with Dask and RAPIDS: Installing a Data Science App as Dask Client 
(2\/3)"}]},{"@type":"WebSite","@id":"https:\/\/www.inovex.de\/de\/#website","url":"https:\/\/www.inovex.de\/de\/","name":"inovex GmbH","description":"","publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.inovex.de\/de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/www.inovex.de\/de\/#organization","name":"inovex GmbH","url":"https:\/\/www.inovex.de\/de\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","width":1921,"height":1081,"caption":"inovex GmbH"},"image":{"@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/inovexde","https:\/\/x.com\/inovexgmbh","https:\/\/www.instagram.com\/inovexlife\/","https:\/\/www.linkedin.com\/company\/inovex","https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ"]},{"@type":"Person","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/4852bd3d70d7e8d5453571bb27fc29c1","name":"Rafal Lokuciejewski","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/secure.gravatar.com\/avatar\/8bf762d23ce1a4aca8afafba67dce7d6b0dabbcb56999bbb2e41d56664f9bcb7?s=96&d=retro&r=ge3f981b6ae50c555514691c36d70131a","url":"https:\/\/secure.gravatar.com\/avatar\/8bf762d23ce1a4aca8afafba67dce7d6b0dabbcb56999bbb2e41d56664f9bcb7?s=96&d=retro&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/8bf762d23ce1a4aca8afafba67dce7d6b0dabbcb56999bbb2e41d56664f9bcb7?s=96&d=retro&r=g","caption":"Rafal 
Lokuciejewski"},"url":"https:\/\/www.inovex.de\/de\/blog\/author\/rafal-lokuciejewskiinovex-de\/"}]}},"_links":{"self":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/20842","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/users\/179"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/comments?post=20842"}],"version-history":[{"count":1,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/20842\/revisions"}],"predecessor-version":[{"id":23619,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/20842\/revisions\/23619"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media\/23617"}],"wp:attachment":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media?parent=20842"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/tags?post=20842"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/service?post=20842"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/coauthors?post=20842"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}