Machine learning and discrete-event simulation: Exemplary applications

In this article I discuss machine learning as a supportive technology for making discrete-event simulation more resource-efficient and effective.

Discrete-event simulation is a technique used in manufacturing and logistics for problems that cannot be investigated with conventional analytical techniques. Conventional techniques include, for example, static calculations, mathematical programming and statistics. In complex production systems or complex logistics processes such techniques frequently do not meet the analysis requirements. For example, control logics or interdependencies between production equipment, logistics equipment and planning algorithms can often only be analyzed with a simulation model. The cost of building such simulation models can be substantial, and deriving findings from the simulation results can be costly too. In both cases costs can be reduced by applying machine learning. That is what I want to show in this article.

Integrating reinforcement learning, neural networks and discrete-event simulation

In this article I highlight exemplary applications of machine learning in the context of discrete-event simulation. These examples are applicable to simulation studies in factory planning, supply chain network design, production scheduling, process design and more. Topics addressed are reinforcement learning in simulation models, neural networks for extrapolating simulated sample spaces, and a comparison of reinforcement learning in simulation models vs. conventional model optimizers.

In the following sections of this article I will explain what discrete-event simulation is, and what reinforcement learning and neural networks are. I then provide an overview of possible approaches for integrating machine learning and simulation, and present a modeling approach that combines simulation and neural networks for extrapolating a simulated sample of the overall solution space.

Discrete-event simulation is appropriate for process modeling

In one of my earlier articles I provided a simulation technique classification system. Discrete-event simulation is my favorite simulation technique. It is a powerful technique for analyzing processes in manufacturing and logistics systems. I use discrete-event simulation to assess factory layouts, material handling concepts, and material flow optimization projects. I have also used discrete-event simulation for tactical production planning, i.e. production planning in the short and medium term. Examples include inventory simulation and production schedule simulation. Compared to agent-based simulation, system dynamics and Monte Carlo simulation, discrete-event simulation is an appropriate technique for modeling processes. Its focus is rather detailed and less conceptual: the technique implements decision making at the process level, a level of detail that the other techniques neglect.
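The core mechanism of a discrete-event simulator can be sketched in a few lines: events are kept in a time-ordered queue and processed one at a time, and each event may schedule follow-up events. A minimal illustrative sketch in Python follows; the event name and the 5-unit machine cycle are made up purely for illustration.

```python
import heapq

def run_simulation(events, horizon):
    """Minimal discrete-event loop: pop the earliest event, execute it,
    and let handlers schedule follow-up events on the same queue."""
    queue = list(events)          # (time, event_id, handler) tuples
    heapq.heapify(queue)
    log = []
    while queue:
        time, event_id, handler = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, event_id))
        for follow_up in handler(time):   # handler may schedule new events
            heapq.heappush(queue, follow_up)
    return log

# Toy example: a machine that finishes a job every 5 time units.
def machine_cycle(now):
    return [(now + 5, "job_done", machine_cycle)] if now + 5 <= 20 else []

log = run_simulation([(0, "job_done", machine_cycle)], horizon=20)
print(log)  # events at t = 0, 5, 10, 15, 20
```

Real simulation tools add entities, resources, queues and statistics collection on top of this loop, but the time-ordered event queue is the defining ingredient of the technique.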

Machine learning is comprised of supervised, unsupervised and reinforcement learning

In this article I highlight exemplary applications of machine learning, another powerful domain, in the context of discrete-event simulation projects. I have already published an article on this website covering the three machine learning paradigms: Supervised machine learning, unsupervised machine learning, and reinforcement learning. These paradigms have different scopes and are all powerful when applied appropriately. Reinforcement learning is also often described as artificial intelligence, since at its core a reinforcement learning algorithm improves a state-based decision-making policy over time as it learns from rewards and punishments received as feedback from its environment.

Neural networks facilitate a black box that lacks transparency but is capable of complex predictions

Neural networks are a specific subset of machine learning technology, often associated with deep learning. At its core, a neural network is comprised of neurons that together reproduce a signal-and-response network. Neural networks aim at reproducing the behaviour of the human brain and are used for pattern recognition and problem solving without any particular analytical framework.

In the figure below I illustrate the structure of a simple neural network. There are many different neural network architectures, but most share this basic structural framework: a defined number of input neurons (the input layer), one or more hidden layers, and an output layer.

Each neuron converts input values into an output value. You can think of each neuron as performing a regression on the inputs it receives, forwarding the result as an output to the next layer. The layers between the original input and the final output are referred to as hidden layers. The more hidden layers, the deeper the network (generally speaking).
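This layered signal-and-response structure translates almost directly into code. The following sketch shows a tiny 2-3-1 network in plain Python; the weights and biases are made up for illustration and a sigmoid activation is assumed.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    """Propagate inputs through a list of layers; each layer is a list of
    (weights, bias) tuples, one tuple per neuron."""
    values = inputs
    for layer in layers:
        values = [neuron(values, w, b) for w, b in layer]
    return values

# Illustrative 2-3-1 network: 2 inputs, 3 hidden neurons, 1 output neuron.
hidden = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.4), ([-0.6, 0.1], 0.0)]
output = [([1.0, -1.0, 0.5], 0.2)]
print(forward([0.7, 0.3], [hidden, output]))
```

In a trained network the weights and biases would be the result of a fitting procedure rather than hand-picked values; the forward pass itself stays exactly this simple.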

Advantages of using a neural network model

Artificial neural networks can solve very complex real world problems. Models using a neural network practically facilitate artificial intelligence in that they are capable of learning relationships between input and output values, even when these relationships are nonlinear or otherwise highly complex. This enables such models to reveal relationships that are otherwise difficult to predict.

Disadvantages of a neural network model

Artificial neural networks facilitate a black box model. This comes with risks, two major ones being 1) the risk of overfitting the model, and 2) the difficulty of drawing generalizable findings from the model. Artificial neural networks are not good at explaining how and why they made their decisions. They also tend not to work well in cases of extreme variance and in the presence of extreme outliers.

Commercial applications of neural networks in SCM and beyond

In supply chain management, neural network models can e.g. be used for inventory control and the release of purchasing orders. As I will demonstrate, neural networks can also be used to support discrete-event simulation studies, making them more efficient and effective. I have also seen applications of neural networks that were used for transport routing and transportation network optimization.

Beyond manufacturing and logistics, neural networks are e.g. used for:

  • Credit card fraud detection
  • Insurance fraud detection
  • Voice recognition
  • Natural language processing
  • Medical disease diagnosis
  • Financial stock price prediction
  • Process and quality control
  • Demand forecasting
  • Image recognition

Reinforcement learning can facilitate self-learning

Reinforcement learning is a machine learning paradigm that allows for decision making under uncertainty. In another article that I published on SCDA I explained how the three paradigms of machine learning are I) supervised machine learning, II) unsupervised machine learning, and III) reinforcement learning.

The class of supervised machine learning (I) algorithms describes algorithms that train models based on labelled data, i.e. data with known inputs and outputs. Linear regression is an example of supervised machine learning.
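As a concrete illustration of learning from labelled data, here is a minimal ordinary-least-squares fit in plain Python; the data points are made up and deliberately follow y = 2x + 1.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: returns slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Labelled training data: known inputs and known outputs.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # underlying relationship: y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)    # 2.0, 1.0
```

The model is trained on known input-output pairs and can then predict outputs for new inputs, which is the defining property of supervised learning.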

The class of unsupervised machine learning (II) algorithms describes algorithms that train models that can recognize patterns and structures in unlabelled data. k-means clustering is an example of unsupervised machine learning.

The class of reinforcement learning (III) algorithms differs from both supervised and unsupervised machine learning in that these algorithms can operate without any known data, i.e. they do not need to be trained on existing input data. At their core they maintain a decision-making policy that describes what action should be taken in a given state. By executing the decisions proposed by the policy, and by harvesting the resulting rewards, the policy is adjusted. This alters decision making iteratively. The reinforcement learning concept is illustrated in the figure below.
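The reward-driven policy update can be sketched with tabular Q-learning on a toy problem. The corridor environment, rewards and hyperparameters below are made up purely for illustration.

```python
import random

random.seed(42)

# Toy corridor: states 0..4, actions -1 (left) and +1 (right);
# reaching state 4 yields a reward of 1, every other step costs 0.01.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else -0.01
        # Q-learning update: move Q towards reward + discounted best future value
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should prefer moving right (towards the goal) in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

No training data is supplied anywhere: the policy improves purely from the rewards harvested while interacting with the environment, which is exactly the property that distinguishes this paradigm from supervised and unsupervised learning.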

Advantages of reinforcement learning

One advantage of reinforcement learning algorithms is that they are capable of operating without training data. This means they can be used in environments with a high level of uncertainty. Even when very little information is available, a reinforcement learning model might work well.

Disadvantages of reinforcement learning

In general, reinforcement learning models can be overkill. They may often represent a case of over-engineering. They can also require excessive amounts of computational power and might in some cases be better replaced by a supervised or unsupervised machine learning algorithm.

Commercial applications of reinforcement learning

One widely recognized application of reinforcement learning in supply chain management is inventory management. Another popular example can be found in robotics and robot control, which is relevant for material handling equipment.

But there have also been commercial applications of reinforcement learning in discrete-event simulation. The company Pathmind, now out of business, developed an AnyLogic tool that could be used to implement reinforcement learning directly in AnyLogic.

I can integrate machine learning into the simulation model, or it can run separately as external support

When integrating machine learning and discrete-event simulation I recommend one of two approaches: Machine learning can be integrated into the model as an alternative to conventional experiment planners or model optimizers, or it can run on top of the simulation model, supporting the model and deriving more information from its results and parameter configurations.

I refer to the first approach as “machine learning integration”, and to the latter as “machine learning support”. In the following sections of this article I proceed by introducing exemplary applications of machine learning integration and machine learning support.

Simulation study support with a neural network to extrapolate solutions from the sampled subset

In the figure below I illustrate how a simulation study can be supported by artificial neural network models.

The neural network is responsible for inferring from a small sample of simulation runs onto the broader solution space. In other words, a neural network is trained on the model configurations that were simulated, together with their associated simulation results. Using this small sample, the neural network is adjusted to minimize the deviation between the simulated results (KPI values) and the outputs predicted by the neural network model.

Once the neural network has been trained, it is tested against another small sample of simulated model configurations to assess its validity for predicting previously unknown system configurations.

The simulation engineer can now use the neural network model to predict model outputs (KPI values of the system of interest) for system configurations beyond the solution space that he has implemented and tested in his simulation environment. In other words, the simulation engineer can predict system behaviour based on known simulation results without having to implement additional model configurations.
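This train-validate-predict workflow can be sketched end to end in plain Python. In the sketch below a cheap stand-in function plays the role of the expensive simulation model, a tiny single-hidden-layer network is fitted to the sampled (configuration, KPI) pairs, and the fit is then checked against configurations that were never simulated. All names, weights and numbers are made up for illustration.

```python
import math
import random

random.seed(0)

def simulate(x):
    """Stand-in for an expensive simulation run: maps a normalized
    model configuration to a KPI value (made up for illustration)."""
    return 0.5 + 0.4 * math.sin(3 * x)

# Small simulated sample: configurations and their KPI results.
train_x = [i / 20 for i in range(20)]
train_y = [simulate(x) for x in train_x]

# Tiny 1-4-1 surrogate network (sigmoid hidden layer, linear output).
H = 4
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    hidden = [1 / (1 + math.exp(-(w1[j] * x + b1[j]))) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# Gradient descent: minimize deviation between simulated and predicted KPIs.
lr = 0.1
for epoch in range(5000):
    for x, y in zip(train_x, train_y):
        pred, hidden = predict(x)
        err = pred - y
        for j in range(H):
            grad_h = err * w2[j] * hidden[j] * (1 - hidden[j])
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

# Validate against configurations that were never part of training.
test_x = [0.05, 0.33, 0.71]
errors = [abs(predict(x)[0] - simulate(x)) for x in test_x]
print(errors)  # small deviations if the surrogate generalizes
```

Each simulation run that the surrogate replaces is a run the engineer does not have to implement and execute, which is where the efficiency gain of the "machine learning support" approach comes from.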

Integrating reinforcement learning provides an interesting alternative to conventional optimizers

The path taken by reinforcement learning represents a very different way of utilizing artificial intelligence for simulation studies. With reinforcement learning, much like with conventional simulation model optimizers, the aim is to find a good or even optimal system configuration as quickly as possible, i.e. with the least possible effort (cost).

Conventional simulation optimizers are meant to support the simulation analyst in identifying promising system configurations quickly. These optimizers take the form of experiment managers and e.g. implement strategies such as (I) gradient descent, (II) factorial designs, and (III) evolutionary improvement etc.
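An evolutionary improvement strategy (III) can be sketched as a black-box loop around the simulation model. In the sketch below a cheap stand-in function plays the role of the simulation run, and candidate configurations are repeatedly mutated and re-evaluated; the parameter names and the toy KPI are made up for illustration.

```python
import random

random.seed(1)

def simulate_kpi(config):
    """Stand-in for a simulation run returning a KPI to maximize (made up:
    the best configuration is buffer_size=7, n_machines=3)."""
    buffer_size, n_machines = config
    return -(buffer_size - 7) ** 2 - (n_machines - 3) ** 2

def evolutionary_search(generations=30, pop_size=10):
    """(1+lambda)-style evolutionary improvement: mutate the best-known
    configuration and keep the fittest candidate of each generation."""
    best = (random.randint(1, 20), random.randint(1, 6))
    best_kpi = simulate_kpi(best)
    for _ in range(generations):
        candidates = [(max(1, best[0] + random.randint(-2, 2)),
                       max(1, best[1] + random.randint(-1, 1)))
                      for _ in range(pop_size)]
        for c in candidates:
            kpi = simulate_kpi(c)
            if kpi > best_kpi:
                best, best_kpi = c, kpi
    return best, best_kpi

print(evolutionary_search())  # converges towards buffer_size=7, n_machines=3
```

In a real study every call to the KPI function would be a full simulation run, which is why the number of evaluations an optimizer needs is the decisive cost driver.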

In the figure below I illustrate how conventional simulation model optimizers aim at supporting efficient and effective searches for an optimum within the derived solution space.

Conventional simulation model optimizers are generally capable of handling even very large numbers of decision variables and variable types, but they usually require a lot of computational power. In many cases the optimizers are furthermore not capable of finding good solutions. To avoid excessive computational effort, industrial engineers and analysts often apply heuristics. The problem with these, however, is that they are inherently inflexible.

Another alternative to conventional optimizers is provided by reinforcement learning. Pathmind was one exemplary application that provided a simulation-related reinforcement learning service in AnyLogic. Pathmind offered the possibility of handling high variability, scalability to large state spaces, and multi-objective optimization with even contradictory objectives. In the video below I have embedded a YouTube reference covering reinforcement learning with Pathmind in AnyLogic.

Some other interesting sources that investigate or explain reinforcement learning for discrete-event simulation are listed below:

Concluding remarks on artificial intelligence and simulation

In this article I have presented two procedures for using artificial intelligence in discrete-event simulation engineering. Both procedures make use of artificial intelligence to enhance the effectiveness and efficiency of the simulation model; the simulation model itself does not represent artificial intelligence. I personally believe that neural networks can strongly support making simulation study execution more efficient. The domain of reinforcement learning is a very interesting one, but it remains to be seen whether reinforcement learning can really gain a commercial and practical foothold in simulation engineering when compared with, for example, conventional optimizers.

If you are interested in hearing my thoughts on this subject in greater detail you can also watch the YouTube video that I referenced below.
