{"id":13695,"date":"2018-08-20T10:23:43","date_gmt":"2018-08-20T08:23:43","guid":{"rendered":"https:\/\/www.inovex.de\/blog\/?p=13695"},"modified":"2024-07-09T07:20:55","modified_gmt":"2024-07-09T05:20:55","slug":"multiplicative-lstm-recommenders","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/","title":{"rendered":"Multiplicative LSTM for sequence-based Recommenders"},"content":{"rendered":"<p>Recommender Systems support the decision making processes of customers with personalized suggestions. They are widely used and influence the daily life of almost everyone in different domains like e-commerce, social media, or entertainment. Quite often the dimension of time plays a dominant role in the generation of a relevant recommendation. Which user interaction occurred just before the point of time where we want to provide a recommendation? How many interactions ago did the user interact with an item like this one? Traditional user-item recommenders often neglect the dimension of time completely. This means that many traditional recommenders find for each user a latent representation based on the user\u2019s historical item interactions without any notion of recency and sequence of interactions. 
To also incorporate this kind of contextual information about interactions, sequence-based recommenders were developed.\u00a0With the advent of deep learning quite a few of them are nowadays based on <a href=\"https:\/\/en.wikipedia.org\/wiki\/Recurrent_neural_network\">Recurrent Neural Networks<\/a>\u00a0(RNNs).\u00a0<!--more--><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\"><\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Motivation\" >Motivation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Theory\" >Theory<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Implementation\" >Implementation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Evaluation\" >Evaluation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Conclusion\" >Conclusion<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/#Read-on\" >Read on<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Motivation\"><\/span>Motivation<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Whenever I want 
to dig deeper into a topic like sequence-based recommenders I follow a few simple steps: First of all, to learn something I directly need to apply it, otherwise learning things doesn\u2019t work for me. In order to apply something I need a challenge and a small goal that keeps me motivated on the journey. Following the <a href=\"https:\/\/en.wikipedia.org\/wiki\/SMART_criteria\"><span class=\"caps\">SMART<\/span> criteria<\/a> a goal needs to be measurable and thus a typical outcome for me is a blog post like the one you are just reading. Another good thing about a blog post is the fact that no one wants to publish something completely crappy, so there is an intrinsic quality assurance attached to the whole process. This blog post is actually the outcome of several things I wanted to familiarize myself with and try\u00a0out:<\/p>\n<ol>\n<li><a href=\"https:\/\/pytorch.org\/\">PyTorch<\/a>, since this framework is used in a large fraction of recent publications about deep\u00a0learning,<\/li>\n<li><a href=\"https:\/\/github.com\/maciejkula\/spotlight\">Spotlight<\/a>, since this library gives you a sophisticated structure to play around with new ideas for recommender systems and already has a lot of functionality\u00a0implemented,<\/li>\n<li>applying a paper about <a href=\"https:\/\/arxiv.org\/abs\/1609.07959\">Multiplicative <span class=\"caps\">LSTM<\/span> for sequence modelling<\/a> to recommender systems and seeing how that performs compared to traditional\u00a0LSTMs.<\/li>\n<\/ol>\n<p>Since Spotlight is based on PyTorch and multiplicative LSTMs (mLSTMs) are not yet implemented in PyTorch, the task of evaluating mLSTMs vs. LSTMs inherently addresses all those points outlined above. 
The goal is set, so let\u2019s get\u00a0going!<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Theory\"><\/span>Theory<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Long short-term memory architectures (LSTMs) are maybe the most common incarnation of RNNs since they don\u2019t suffer from the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Vanishing_gradient_problem\">vanishing gradient problem<\/a> and thus are able to capture long-term relationships in a sequence. You can find a great explanation of LSTMs in Colah\u2019s post <a href=\"http:\/\/colah.github.io\/posts\/2015-08-Understanding-LSTMs\/\">Understanding <span class=\"caps\">LSTM<\/span> Networks<\/a> and more generally about the power of RNNs in the article <a href=\"http:\/\/karpathy.github.io\/2015\/05\/21\/rnn-effectiveness\/\">The Unreasonable Effectiveness of Recurrent Neural Networks<\/a>.<\/p>\n<p>More recently, Gated Recurrent Units (GRUs), which have a simplified structure compared to LSTMs, are also used in sequential prediction tasks with occasionally superior results. <a href=\"https:\/\/github.com\/maciejkula\/spotlight\">Spotlight<\/a> provides a sequential recommender based on LSTMs and the quite renowned <a href=\"https:\/\/github.com\/hidasib\/GRU4Rec\">GRU4Rec<\/a> model uses GRUs, but in general it\u2019s not possible to state that one always outperforms the\u00a0other.<\/p>\n<p>So given these ingredients, how do we now construct a sequential recommender? Let\u2019s assume that on every timestep \\(t\\in\\{1,\\ldots,T\\}\\) a user has interacted with an item \\(i_t\\). The basic idea is now to feed these interactions into an <span class=\"caps\">LSTM<\/span> up to the time \\(t\\) in order to get a representation of the user\u2019s preferences \\(h_t\\) and use it to predict whether the user might like or dislike the next item \\(i_{t+1}\\). 
Just like in a non-sequential recommender we also do a <a href=\"https:\/\/en.wikipedia.org\/wiki\/One-hot\">one-hot encoding<\/a> of the items followed by an embedding into a dense vector representation \\(e_{i_t}\\), which is then fed into the <span class=\"caps\">LSTM<\/span>. We can then just use the output <span class=\"math\">\\(h_t\\)<\/span> of the <span class=\"caps\">LSTM<\/span> and calculate the inner product (\\(\\bigotimes\\)) with the embedding \\(e_{i_{t+1}}\\) plus an item bias for varying item popularity to retrieve an output \\(p_{t+1}\\). This output, along with others, is then used to calculate the actual loss, depending on our sampling strategy and loss function. We train our model by sampling positive interactions and corresponding negative interactions. In an <em>explicit feedback<\/em> context a positive and negative interaction might be a positive and negative rating of a user for an item, respectively. In an <em>implicit feedback<\/em> context, all item interactions of a user are considered positive whereas negative interactions arise from items the user did not interact with.<\/p>\n<p>During training we adapt the weights of our model so that for a given user the scalar output of a positive interaction is greater than the output of a negative interaction. This can be seen as an approximation to a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Softmax_function\">softmax<\/a> in very high-dimensional output\u00a0space. Figure 1 illustrates our sequential recommender model and this is what\u2019s actually happening inside Spotlight\u2019s sequential recommender with an <span class=\"caps\">LSTM<\/span> representation. If you raise your eyebrow due to the usage of an inner product then be aware that <a href=\"https:\/\/en.wikipedia.org\/wiki\/Low-rank_approximation\">low-rank approximations<\/a> have been and still are one of the most successful building blocks of recommender systems. 
An alternative would be to replace the inner product with a deep feed-forward network, but to quite some extent this would also just learn to approximate an inner product. A recent paper <a href=\"https:\/\/static.googleusercontent.com\/media\/research.google.com\/en\/\/pubs\/archive\/46488.pdf\">Latent Cross: Making Use of Context in Recurrent Recommender Systems<\/a> by Google also emphasizes the power of learning low-rank relations with the help of inner\u00a0products.<\/p>\n<p><a href=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-13702\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM.png\" alt=\"Timestep diagram\" width=\"479\" height=\"218\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM.png 479w, https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM-300x137.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM-400x182.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2018\/08\/mLSTM-360x164.png 360w\" sizes=\"auto, (max-width: 479px) 100vw, 479px\" \/><\/a><\/p>\n<p>What we want to do is basically replace the <span class=\"caps\">LSTM<\/span> part of Spotlight\u2019s sequential recommender with an mLSTM. But before we do that, the obvious question is: why? 
Let\u2019s recap the formulae of a typical <a href=\"http:\/\/pytorch.org\/docs\/0.3.1\/nn.html?highlight=lstm#torch.nn.LSTM\"><span class=\"caps\">LSTM<\/span> implementation<\/a> like the one in\u00a0PyTorch:<\/p>\n<p>\\[\\begin{split}\\begin{array}{ll}<br \/>\ni_t = \\mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\\\<br \/>\nf_t = \\mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\\\<br \/>\ng_t = \\tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\\\<br \/>\no_t = \\mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\\\<br \/>\nc_t = f_t * c_{t-1} + i_t * g_t \\\\<br \/>\nh_t = o_t * \\tanh(c_t)<br \/>\n\\end{array}\\end{split}\\]<\/p>\n<p>where <span class=\"math\">\\(i_t\\)<\/span> denotes the input gate, <span class=\"math\">\\(f_t\\)<\/span> the forget gate and <span class=\"math\">\\(o_t\\)<\/span> the output gate at timestep <span class=\"math\">\\(t\\)<\/span>. If we look at those lines again we can see a lot of terms in the form of <span class=\"math\">\\(W_{**} x_t + W_{**} h_{t-1}\\)<\/span>, neglecting the biases <span class=\"math\">\\(b_*\\)<\/span> for a moment. Thus a lot of an <span class=\"caps\">LSTM<\/span>\u2019s inner workings depend on the addition of the transformed input with the transformed hidden state. So what happens if a trained <span class=\"caps\">LSTM<\/span> with hence fixed <span class=\"math\">\\(W_{**}\\)<\/span> encounters some unexpected, completely surprising input <span class=\"math\">\\(x_t\\)<\/span>? This might disturb the cell state <span class=\"math\">\\(c_t\\)<\/span>, leading to perturbed future <span class=\"math\">\\(h_t\\)<\/span>, and it might take a long time for the <span class=\"caps\">LSTM<\/span> to recover from that singular surprising input. 
The authors of the paper <a href=\"https:\/\/arxiv.org\/abs\/1609.07959\">Multiplicative <span class=\"caps\">LSTM<\/span> for sequence modelling<\/a> now argue that \u201c<span class=\"caps\">RNN<\/span> architectures with hidden-to-hidden transition functions that are input-dependent are better suited to recover from surprising inputs\u201d. By allowing the hidden state to react flexibly to the new input by changing its magnitude it might be able to recover from mistakes faster. The quite vague formulation of <em>input-dependent transition functions<\/em> is then actually achieved in quite a simple way. In an mLSTM the hidden state <span class=\"math\">\\(h_{t-1}\\)<\/span> is transformed in a multiplicative way using the input <span class=\"math\">\\(x_t\\)<\/span> into an intermediate state <span class=\"math\">\\(m_t\\)<\/span> before it is used in a plain <span class=\"caps\">LSTM<\/span> as before. In the end, there is only a single equation to be prepended to the equations of an <span class=\"caps\">LSTM<\/span>:<\/p>\n<div class=\"math\">\n<p>\\[\\begin{split}\\begin{array}{ll}<br \/>\nm_t = (W_{im} x_t + b_{im}) \\odot{} (W_{hm} h_{t-1} + b_{hm}) \\\\<br \/>\ni_t = \\mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{mi} m_t + b_{mi}) \\\\<br \/>\nf_t = \\mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{mf} m_t + b_{mf}) \\\\<br \/>\ng_t = \\tanh(W_{ig} x_t + b_{ig} + W_{mg} m_t + b_{mg}) \\\\<br \/>\no_t = \\mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{mo} m_t + b_{mo}) \\\\<br \/>\nc_t = f_t * c_{t-1} + i_t * g_t \\\\<br \/>\nh_t = o_t * \\tanh(c_t)<br \/>\n\\end{array}\\end{split}\\]<\/p>\n<\/div>\n<p>The element-wise multiplication (<span class=\"math\">\\(\\odot\\)<\/span>) allows <span class=\"math\">\\(m_t\\)<\/span> to flexibly change its value with respect to <span class=\"math\">\\(h_{t-1}\\)<\/span> and <span class=\"math\">\\(x_t\\)<\/span>.<\/p>\n<p>On a more theoretical note, if you picture the hidden states of an <span class=\"caps\">LSTM<\/span> as a 
tree depending on the inputs at each timestep, then the number of all possible states at timestep <span class=\"math\">\\(t\\)<\/span> will be much larger for an mLSTM compared to an <span class=\"caps\">LSTM<\/span>. Therefore, the tree of an mLSTM will be much wider and consequently more flexible to represent different probability distributions according to the paper. The paper focuses only on <span class=\"caps\">NLP<\/span> tasks but since surprising inputs are also a concern in sequential recommender systems, the self-evident idea is to evaluate whether mLSTMs also excel in recommender\u00a0tasks.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Implementation\"><\/span>Implementation<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Everyone seems to love <a href=\"https:\/\/pytorch.org\/\">PyTorch<\/a> for its beautiful <span class=\"caps\">API<\/span> and I totally agree. For me its beauty lies in its simplicity. Every elementary building block of a neural network like a linear transformation is called a <em>Module<\/em> in PyTorch. A Module is just a class that inherits from <code>Module<\/code> and implements a <code>forward<\/code> method that does the transformation with the help of tensor operations. A more complex neural network is again just a <code>Module<\/code> and uses the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Composition_over_inheritance\">composition principle<\/a> to compose its functionality from simpler modules. Therefore, in my humble opinion, PyTorch found a much nicer concept of combining low-level tensor operations with the high-level composition of layers compared to core <a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<\/a> and <a href=\"https:\/\/keras.io\/\">Keras<\/a> where you are either stuck on the level of tensor operations or the composition of\u00a0layers.<\/p>\n<p>For our task, we\u2019re going to need an <code>mLSTM<\/code> module and luckily PyTorch provides <code>RNNBase<\/code>, a base class for custom RNNs. 
So all we have to do is write a module that inherits from <code>RNNBase<\/code>, defines additional parameters and implements the mLSTM equations inside of <code>forward<\/code>:<\/p>\n<pre class=\"lang:python decode:true\">import math\r\n\r\nimport torch\r\nfrom torch.nn import Parameter\r\nfrom torch.nn import functional as F\r\nfrom torch.nn.modules.rnn import RNNBase, LSTMCell\r\n\r\n\r\nclass mLSTM(RNNBase):\r\n    def __init__(self, input_size, hidden_size, bias=True):\r\n        super(mLSTM, self).__init__(\r\n            mode='LSTM', input_size=input_size, hidden_size=hidden_size,\r\n            num_layers=1, bias=bias, batch_first=True,\r\n            dropout=0, bidirectional=False)\r\n\r\n        # additional parameters for the multiplicative intermediate state m_t\r\n        w_im = torch.Tensor(hidden_size, input_size)\r\n        w_hm = torch.Tensor(hidden_size, hidden_size)\r\n        b_im = torch.Tensor(hidden_size)\r\n        b_hm = torch.Tensor(hidden_size)\r\n        self.w_im = Parameter(w_im)\r\n        self.b_im = Parameter(b_im)\r\n        self.w_hm = Parameter(w_hm)\r\n        self.b_hm = Parameter(b_hm)\r\n\r\n        self.lstm_cell = LSTMCell(input_size, hidden_size, bias)\r\n        self.reset_parameters()\r\n\r\n    def reset_parameters(self):\r\n        stdv = 1.0 \/ math.sqrt(self.hidden_size)\r\n        for weight in self.parameters():\r\n            weight.data.uniform_(-stdv, stdv)\r\n\r\n    def forward(self, input, hx):\r\n        n_batch, n_seq, n_feat = input.size()\r\n        hx, cx = hx\r\n        steps = [cx.unsqueeze(1)]\r\n        for seq in range(n_seq):\r\n            # m_t = (W_im x_t + b_im) * (W_hm h_{t-1} + b_hm), element-wise\r\n            mx = F.linear(input[:, seq, :], self.w_im, self.b_im) * F.linear(hx, self.w_hm, self.b_hm)\r\n            # feed m_t instead of h_{t-1} into a plain LSTM cell\r\n            hx = (mx, cx)\r\n            hx, cx = self.lstm_cell(input[:, seq, :], hx)\r\n            steps.append(cx.unsqueeze(1))\r\n        return torch.cat(steps, dim=1)<\/pre>\n<p>The code is pretty much self-explanatory. 
We inherit from <code>RNNBase<\/code> and initialize the additional parameters we need for the calculation of \\(m_t\\) in <code>__init__<\/code>. In <code>forward<\/code> we use those parameters to calculate \\(m_t = (W_{im} x_t + b_{im}) \\odot{} (W_{hm} h_{t-1} + b_{hm})\\) with the help of <code>F.linear<\/code> and pass it to an ordinary <code>LSTMCell<\/code>. We collect the results for each timestep in our sequence in <code>steps<\/code> and return them as a concatenated\u00a0tensor. The <a href=\"https:\/\/github.com\/maciejkula\/spotlight\">Spotlight<\/a> library, in the spirit of PyTorch, also follows a modular concept of components that can be easily plugged together and replaced. It has only five\u00a0components:<\/p>\n<ol>\n<li><strong>embedding layers<\/strong> which map item ids to dense\u00a0vectors,<\/li>\n<li><strong>user\/item representations<\/strong> which take embedding layers to calculate latent representations and the score for a user\/item\u00a0pair,<\/li>\n<li><strong>interactions<\/strong> which give easy access to the user\/item interactions and their explicit\/implicit\u00a0feedback,<\/li>\n<li><strong>losses<\/strong> which define the objective for the recommendation\u00a0task,<\/li>\n<li><strong>models<\/strong> which take user\/item representations, the user\/item interactions and a given loss to train the\u00a0network.<\/li>\n<\/ol>\n<p>Due to this modular layout, we only need to write a new user\/item representation module called <code>mLSTMNet<\/code>. Since this is straightforward, I leave it to you to take a look at the source code in my <a href=\"https:\/\/github.com\/FlorianWilhelm\/mlstm4reco\">mlstm4reco<\/a> repository. 
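To make the recurrence tangible outside of the `RNNBase` plumbing, here is a minimal, self-contained sketch of the same loop: an ordinary `LSTMCell` whose hidden-state input is replaced by the multiplicative intermediate state. All dimensions and the random input below are arbitrary, and the parameters are freshly initialized rather than trained:

```python
import torch
from torch import nn
from torch.nn import functional as F

torch.manual_seed(0)
batch, seq_len, n_in, n_hid = 4, 7, 8, 16

# parameters for m_t = (W_im x_t + b_im) * (W_hm h_{t-1} + b_hm), element-wise
w_im = nn.Parameter(torch.randn(n_hid, n_in) * 0.1)
b_im = nn.Parameter(torch.zeros(n_hid))
w_hm = nn.Parameter(torch.randn(n_hid, n_hid) * 0.1)
b_hm = nn.Parameter(torch.zeros(n_hid))
cell = nn.LSTMCell(n_in, n_hid)

x = torch.randn(batch, seq_len, n_in)   # a batch of input sequences
hx = torch.zeros(batch, n_hid)          # initial hidden state
cx = torch.zeros(batch, n_hid)          # initial cell state

states = []
for t in range(seq_len):
    mx = F.linear(x[:, t, :], w_im, b_im) * F.linear(hx, w_hm, b_hm)
    hx, cx = cell(x[:, t, :], (mx, cx))  # m_t replaces h_{t-1} inside the LSTM cell
    states.append(hx.unsqueeze(1))

out = torch.cat(states, dim=1)
print(out.shape)  # one hidden state per timestep: torch.Size([4, 7, 16])
```

Note that this sketch collects the hidden states \(h_t\), while the module above returns the cell states and additionally prepends the initial state before concatenating.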
At this point I should mention that the whole layout of the repository was strongly inspired by Maciej Kula\u2019s\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1711.08379\">Mixture-of-tastes Models for Representing Users with Diverse Interests<\/a> paper and the accompanying <a href=\"https:\/\/github.com\/maciejkula\/mixture\">source code<\/a>.<\/p>\n<p>My implementation also follows his advice of using automatic hyperparameter optimisation for my own model and the baseline model for comparison. This avoids quite a common bias in research where people put more effort into hand-tuning their own model than into the baseline, in order to later show a larger improvement and get the paper accepted. Using a tool like <a href=\"http:\/\/hyperopt.github.io\/hyperopt\/\">HyperOpt<\/a> for hyperparameter optimisation is quite easy and mitigates this bias to some extent at\u00a0least.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Evaluation\"><\/span>Evaluation<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>To compare Spotlight\u2019s <a href=\"https:\/\/maciejkula.github.io\/spotlight\/sequence\/implicit.html#module-spotlight.sequence.implicit\">ImplicitSequenceModel<\/a> with an <span class=\"caps\">LSTM<\/span> to an mLSTM user representation, the <a href=\"https:\/\/github.com\/FlorianWilhelm\/mlstm4reco\">mlstm4reco<\/a> repository provides a <code>run.py<\/code> script in the <code>experiments<\/code> folder which takes several command line options. Some might argue that this is a bit of over-engineering for a one-time evaluation. But for me it\u2019s just one aspect of proper and reproducible research since it avoids errors and you can also easily log which parameters were used to generate the results. I also used PyScaffold to set up a proper Python package scaffold within seconds. 
This allows me to properly install the <code>mlstm4reco<\/code> package and import its functionality from wherever I want without messing around with the <span class=\"caps\">PYTHONPATH<\/span> environment variable, which one should never do\u00a0anyway.<\/p>\n<p>For the evaluation results below I ran each experiment 200 times to give <a href=\"http:\/\/hyperopt.github.io\/hyperopt\/\">HyperOpt<\/a> enough chances to find good hyperparameters for the number of epochs (<code>n_iter<\/code>), the embedding dimension (<code>embedding_dim<\/code>), l2-regularisation (<code>l2<\/code>), batch size (<code>batch_size<\/code>) and learning rate (<code>learn_rate<\/code>). Each of our two models, i.e. the <code>lstm<\/code> and the <code>mlstm<\/code> user representation, was applied to three datasets, the <a href=\"https:\/\/grouplens.org\/datasets\/movielens\/\">MovieLens<\/a> 1m and 10m datasets as well as the <a href=\"https:\/\/snap.stanford.edu\/data\/amazon-meta.html\">Amazon<\/a> dataset. For instance, to run 200 experiments with the mlstm model on the Movielens 10m dataset the command would be <code>.\/run.py -m mlstm -n 200 10m<\/code>. In each experiment the data is split into a training, validation and test set, where the training set is used to fit the model, the validation set to find the right hyperparameters and the test set for the final evaluation after all parameters are determined. The performance of the models is measured with the help of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Mean_reciprocal_rank\">mean reciprocal rank<\/a> (<span class=\"caps\">MRR<\/span>) score. 
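For readers unfamiliar with the metric: the reciprocal rank of a single prediction is one over the rank the model assigns to the true next item, and the MRR averages this over all evaluated sequences. A tiny illustration with made-up scores (ties ignored for simplicity):

```python
import numpy as np

def reciprocal_rank(scores, true_item):
    # rank of the true item when all items are sorted by descending score
    rank = int((scores > scores[true_item]).sum()) + 1
    return 1.0 / rank

scores = np.array([0.1, 0.4, 0.9, 0.3, 0.2])  # model scores for 5 items
print(reciprocal_rank(scores, true_item=2))   # 1.0 (highest score -> rank 1)
print(reciprocal_rank(scores, true_item=1))   # 0.5 (second highest -> rank 2)
```

The MRR is then simply the mean of these reciprocal ranks over all test sequences.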
Here are the\u00a0results:<\/p>\n<table>\n<thead>\n<tr>\n<th align=\"right\">dataset<\/th>\n<th align=\"right\">type<\/th>\n<th align=\"right\">validation<\/th>\n<th align=\"right\">test<\/th>\n<th align=\"right\">learn_rate<\/th>\n<th align=\"right\">batch_size<\/th>\n<th align=\"right\">embedding_dim<\/th>\n<th align=\"right\">l2<\/th>\n<th align=\"right\">n_iter<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td align=\"right\">Movielens 1m<\/td>\n<td align=\"right\"><span class=\"caps\">LSTM<\/span><\/td>\n<td align=\"right\">0.1199<\/td>\n<td align=\"right\">0.1317<\/td>\n<td align=\"right\">1.93e-2<\/td>\n<td align=\"right\">208<\/td>\n<td align=\"right\">112<\/td>\n<td align=\"right\">6.01e-06<\/td>\n<td align=\"right\">50<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">Movielens 1m<\/td>\n<td align=\"right\">mLSTM<\/td>\n<td align=\"right\">0.1275<\/td>\n<td align=\"right\">0.1386<\/td>\n<td align=\"right\">1.25e-2<\/td>\n<td align=\"right\">240<\/td>\n<td align=\"right\">120<\/td>\n<td align=\"right\">5.90e-06<\/td>\n<td align=\"right\">40<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">Movielens 10m<\/td>\n<td align=\"right\"><span class=\"caps\">LSTM<\/span><\/td>\n<td align=\"right\">0.1090<\/td>\n<td align=\"right\">0.1033<\/td>\n<td align=\"right\">4.19e-3<\/td>\n<td align=\"right\">224<\/td>\n<td align=\"right\">120<\/td>\n<td align=\"right\">2.43e-07<\/td>\n<td align=\"right\">50<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">Movielens 10m<\/td>\n<td align=\"right\">mLSTM<\/td>\n<td align=\"right\">0.1142<\/td>\n<td align=\"right\">0.1115<\/td>\n<td align=\"right\">4.50e-3<\/td>\n<td align=\"right\">224<\/td>\n<td align=\"right\">128<\/td>\n<td align=\"right\">1.12e-06<\/td>\n<td align=\"right\">45<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">Amazon<\/td>\n<td align=\"right\"><span class=\"caps\">LSTM<\/span><\/td>\n<td align=\"right\">0.2629<\/td>\n<td align=\"right\">0.2642<\/td>\n<td align=\"right\">2.85e-3<\/td>\n<td align=\"right\">224<\/td>\n<td 
align=\"right\">128<\/td>\n<td align=\"right\">2.42e-11<\/td>\n<td align=\"right\">50<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">Amazon<\/td>\n<td align=\"right\">mLSTM<\/td>\n<td align=\"right\">0.3061<\/td>\n<td align=\"right\">0.3123<\/td>\n<td align=\"right\">2.48e-3<\/td>\n<td align=\"right\">144<\/td>\n<td align=\"right\">120<\/td>\n<td align=\"right\">4.53e-11<\/td>\n<td align=\"right\">50<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>If we compare the test results of the Movielens 1m dataset, it\u2019s an improvement of 5.30% when using mLSTM over <span class=\"caps\">LSTM\u00a0<\/span>representation, for Movielens 10m it\u2019s 7.96% more and for Amazon it\u2019s even 18.19%\u00a0more.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Conclusion\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The performance improvements of using an mLSTM over an <span class=\"caps\">LSTM<\/span> user representation are quite good but nothing spectacular. They give us at least some indication that mLSTMs achieve superior results for sequential recommendation tasks. In order to further underpin this first assessment one could test with more datasets and also check other evaluation metrics besides <span class=\"caps\">MRR<\/span>. I leave this to a dedicated reader, so if you are interested, please let me know and share your results. With regard to my initial motivation and tasks, I have achieved much deeper insights into the domain of sequential recommenders and with the help of PyTorch, Spotlight I am looking forward to my next side project! 
Let me know if you liked this post and comment\u00a0below.<\/p>\n<p>This article first appeared on <a href=\"https:\/\/florianwilhelm.info\/2018\/08\/multiplicative_LSTM_for_sequence_based_recos\/\" target=\"_blank\" rel=\"noopener\">florianwilhelm.info<\/a>.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Read-on\"><\/span>Read on<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Curious about data science and machine learning? Have a look at our <a href=\"https:\/\/www.inovex.de\/de\/leistungen\/data-science-deep-learning\/\" target=\"_blank\" rel=\"noopener noreferrer\">portfolio<\/a> \u2013 or even consider <a href=\"https:\/\/www.inovex.de\/de\/karriere\/stellenangebote\/\" target=\"_blank\" rel=\"noopener noreferrer\">joining us<\/a>!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recommender Systems support the decision making processes of customers with personalized suggestions. They are widely used and influence the daily life of almost everyone in different domains like e-commerce, social media, or entertainment. Quite often the dimension of time plays a dominant role in the generation of a relevant recommendation. 
Which user interaction occurred just [&hellip;]<\/p>\n","protected":false},"author":52,"featured_media":13709,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"ep_exclude_from_search":false,"footnotes":""},"tags":[206,151,243],"service":[],"coauthors":[{"id":52,"display_name":"Florian Wilhelm","user_nicename":"fwilhelm"}],"class_list":["post-13695","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-data-science","tag-deep-learning","tag-recommender-systems"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Multiplicative LSTM for sequence-based Recommenders - inovex GmbH<\/title>\n<meta name=\"description\" content=\"Traditional user-item recommenders often neglect the dimension of time, finding for each user a latent representation based on the user\u2019s historical item interactions without any notion of recency and sequence of interactions. Sequence-based recommenders such as Multiplicative LSTMs tackle this issue.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.inovex.de\/de\/blog\/multiplicative-lstm-recommenders\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Multiplicative LSTM for sequence-based Recommenders - inovex GmbH\" \/>\n<meta property=\"og:description\" content=\"Traditional user-item recommenders often neglect the dimension of time, finding for each user a latent representation based on the user\u2019s historical item interactions without any notion of recency and sequence of interactions. 
[Page metadata removed: "Multiplicative LSTM for sequence-based Recommenders", written by Florian Wilhelm, published 2018-08-20, estimated reading time 14 minutes.]