{"id":46,"date":"2018-05-18T09:56:02","date_gmt":"2018-05-18T09:56:02","guid":{"rendered":"http:\/\/egert.org\/blog\/?p=46"},"modified":"2018-12-27T07:34:42","modified_gmt":"2018-12-27T06:34:42","slug":"deep-learning-what-is-it-and-how-it-relates-to-supply-chains","status":"publish","type":"post","link":"https:\/\/egert.org\/blog\/2018\/05\/18\/deep-learning-what-is-it-and-how-it-relates-to-supply-chains\/","title":{"rendered":"Deep Learning: What it is and how it relates to supply chains"},"content":{"rendered":"<p><em>Disclaimer: In this post I&#8217;m going to write about how we use Deep Learning in my company, Lokad.<\/em><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">When you follow the news about deep learning, you might have come across exciting breakthroughs such as algorithms that can <a href=\"http:\/\/tinyclouds.org\/colorize\/\">colorize black and white photographs<\/a><\/span><span style=\"font-weight: 400;\">\u00a0or <a href=\"https:\/\/research.googleblog.com\/2015\/07\/how-google-translate-squeezes-deep.html\">translate, in real time, the text on pictures taken by a phone app<\/a><\/span><span style=\"font-weight: 400;\">. While these are all impressive applications, they do not immediately suggest direct use cases for most traditional businesses. At Lokad, our goal is to translate the stunning capabilities of deep learning into the real world, to optimize supply chains everywhere.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">So, before going into detail about how we do that, let me quickly and very roughly summarize what deep learning actually entails.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">First of all, deep learning is a flavor of machine learning. Regular, non-machine-learning algorithms require full prior knowledge of the task (and no training data whatsoever). 
An expert-knowledge approach to demand forecasting would require you to specify all rules and patterns in advance, such as <\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">\u201cAll articles that have category=Spring will peak in May and slowly die down until October.\u201d<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">This, however, may only be true for some products in this category. There might also be subcategories that behave a bit differently, and so on. Combining such rules with a moving average forecast already yields an overall picture of future demand that is not so far from reality.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, this approach has the following downsides:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">It does not embrace uncertainty &#8212; in our experience, risk and uncertainty are crucial for supply chains, since it\u2019s mostly the boundary cases that can be either very profitable or very costly if ignored.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">You have to maintain and manage the complexity of your rule set &#8211; an approach is only as powerful as the set of rules applied to it, and maintaining rules is very costly. For each rule in the algorithm, we estimate an initial cost of about one man-day for implementation, testing and proper documentation, plus about half a man-day of maintenance per month. Assuming you keep refining your rules and therefore have to readjust the old ones, this yields a cost of roughly 8k \u20ac per rule over a five-year period (at the 250 \u20ac man-day rate derived in the notes below). Note that this applies to a single rule and does not take into account the exponential increase in complexity that arises when dealing with more complex product portfolios. 
Even demand patterns for small businesses usually exhibit dozens of such influences, making rule maintenance incredibly costly.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Now imagine a technology that could, like a human child, learn on its own to deduce patterns from data and could thus independently predict how your portfolio of products develops throughout the year.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Just like a child in development, a deep learning algorithm tries to make sense of the world by deducing correlations from observations. It will test them and discard those that do not hold for the remaining data.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Continuing our analogy, like a child learning to make sense of the world, a deep learning algorithm consumes lots and lots of data, and the key lies in grasping the information that is actually relevant. While a child in a big city might at first be completely overwhelmed by all the different colors, noises and smells, it will later learn that, when crossing a street, the traffic lights and the noise of approaching cars are what matter most. The same mechanism is at work in deep learning: the algorithm processes a vast amount of data and needs to distill the essence of what drives demand.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Figuring out what is important and what is not happens by repeating similar situations many times, just as you would rehearse correct traffic behavior with a child. The human brain massively parallelizes its sensory processing and reactions, so that it can respond quickly to urgent new input, such as a car approaching while one is crossing the street. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With the rise of big data, parallelization also became a key factor driving the efficiency, and indeed the feasibility, of a \u201chuman-like\u201d autonomous learning process.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At Lokad, we use the parallel computing power of high-end gaming graphics cards in our cloud servers to run our optimization for our clients efficiently, processing a portfolio of 10,000 products with five years of sales data in less than half an hour, while largely outperforming conventional moving-average-based algorithms (and even Lokad\u2019s own earlier generation of machine learning forecasts) in accuracy.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lokad then uses the demand forecasts, which come in a <a href=\"https:\/\/www.lokad.com\/features-forecasts\">probabilistic format<\/a>, to optimize supply chain decisions, taking into account economic drivers such as one\u2019s stance on growth vs. profitability. With these analyses, Lokad directly delivers the best supply chain decisions, such as purchase orders or dispatching decisions. \u201cBest\u201d here refers to the economic drivers (i.e. growth vs. profitability, opportunity costs, etc.) that have been set up for these supply chain decisions. The approach scales with the business as one\u2019s portfolio and demand patterns become more complex, making any hard-coded demand forecasting rules that need to be maintained by a human completely obsolete.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Notes:<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Average developer salary in Germany: 58k \u20ac; 261 working days &#8211; 30 days of vacation in 2018 yields a man-day rate of about 250 \u20ac.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Disclaimer: In this post I&#8217;m going to write about how we use Deep Learning in my company, Lokad. 
&nbsp; When you follow the news about deep learning, you might have come across exciting breakthroughs such as algorithms which are able to colorize black and white photographs\u00a0or automatic real-life translations of texts on pictures taken by [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[8,18,16,10,19,17],"class_list":["post-46","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-ai","tag-bigdata","tag-deeplearning","tag-lokad","tag-scm","tag-supplychain"],"_links":{"self":[{"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/posts\/46"}],"collection":[{"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/comments?post=46"}],"version-history":[{"count":9,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/posts\/46\/revisions"}],"predecessor-version":[{"id":60,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/posts\/46\/revisions\/60"}],"wp:attachment":[{"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/media?parent=46"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/categories?post=46"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/egert.org\/blog\/wp-json\/wp\/v2\/tags?post=46"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}