The post Backlog simulation in FIFO production system appeared first on Supply Chain Data Analytics.

The FIFO backlog simulation implemented with simmer in this article is a very simple example. A commercial simulation study with realistic settings will most likely be more complex. Complexity can be added by, e.g., increasing the number of part families, product groups, and production controls. In addition, there could be alternative routings in the production flow, requiring at least some sort of heuristic decision making.

In this simple example I model a receival process that receives raw material. This material goes into a central storage area that acts as a buffer. From there, parts go to one of four processing stations. Processed parts then go into another buffer ahead of shipping. Shipping is the final process in this simple example.

The job prioritization logic applied is a simple “first in first out” (FIFO) logic. Furthermore, there is only one product and only one raw material type. The processing stations are all identical. Shift calendars etc. are not considered, and any efficiency losses at processing stations due to breakdowns or rework are only approximated by applying exponential distributions for processing times. More complex logic can be implemented with simmer as well, using the basic building blocks provided by the package. For more, see my simmer documentation. A link to that documentation is provided at the end of this article.

Parts received via the receival process follow a random distribution with regards to the time interval between deliveries. The processing time of the processing stations follows a random distribution as well. The shipping process has a fixed time interval.

For the time intervals between part receivals I used a random uniform distribution. For the processing time at the processing stations I used exponential distributions. The idea behind this is that the exponential distribution has a long tail, capturing e.g. unexpected breakdowns, retoolings or rework. This is a simplification: normally, one would have to model e.g. scrap, breakdowns, setups and machine warm-up in detail.

Note: The processing time has a minimum duration, defined by a constant lower limit.
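These distributional assumptions can be sanity-checked with a few lines of code. The sketch below (written in Python purely for illustration) draws from the same distributions that the simmer model further down uses: interarrival times uniform on [3, 7] and processing times of 1 plus an exponential tail with rate 0.05:

```python
import random
import statistics

random.seed(42)
N = 100_000

# Interarrival times: uniform between 3 and 7 time units (mean 5)
interarrival = [random.uniform(3, 7) for _ in range(N)]

# Processing times: constant minimum of 1 plus an exponential tail
# with rate 0.05 (mean 1 + 1/0.05 = 21 time units)
processing = [1 + random.expovariate(0.05) for _ in range(N)]

print(round(statistics.mean(interarrival), 2))  # close to 5
print(round(statistics.mean(processing), 2))    # close to 21
```

The long right tail of the exponential component is what loosely stands in for breakdowns, retoolings and rework in the model.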

I implement the model below, using the simmer package in R for discrete-event simulation.

```
library(magrittr)
library(simmer)
library(simmer.plot)
library(dplyr)

# set seed for random number generation
set.seed(42)

# declare / create simulation environment
env = simmer("Backlog simulation")

# set up a trajectory workflow
production = trajectory("production process") %>%
  seize("processing", 1) %>%
  timeout(function() 1 + rexp(1, 0.05)) %>%
  release("processing", 1) %>%
  seize("shipping", 1) %>%
  timeout(5) %>%
  release("shipping", 1)

# add resources to the simulation environment
env %>%
  add_resource("processing",
               capacity = 4,
               queue_size = 100000) %>%
  add_resource("shipping",
               capacity = 1,
               queue_size = 100000)

# attach the trajectory to a generator
env %>%
  add_generator(name_prefix = "parts",
                trajectory = production,
                distribution = function() runif(1, 3, 7))

# run simulation for 3000 time units
env %>% run(3000)
```

I proceed with an analysis of the simulation results below. To learn more about monitored simmer data, see my references at the end of the article.

First, I obtain (“get”) relevant monitored data in the form of R dataframes. I can plot the backlog curves with this data, e.g. using ggplot2 in R:

```
library(ggplot2)

ggplot(get_mon_resources(env)) +
  geom_line(mapping = aes(x = time, y = queue, color = resource), size = 2) +
  ggtitle("Simulated backlog curve") +
  xlab("Time") +
  ylab("Backlog") +
  scale_colour_manual(values = c("#FF0000", "#428EF4"))
```

With the current system configurations and processing times, the backlog ahead of processing is non-stationary. This indicates a bottleneck.

What happens if we add an additional processing station? I investigate this in the code below.

```
simmer("Backlog simulation (additional processing station)") %>%
  add_resource("processing",
               capacity = 5,
               queue_size = 100000) %>%
  add_resource("shipping",
               capacity = 1,
               queue_size = 100000) %>%
  add_generator(name_prefix = "parts",
                trajectory = production,
                distribution = function() runif(1, 3, 7)) %>%
  run(3000) %>%
  get_mon_resources() %>%
  ggplot() +
  geom_line(mapping = aes(x = time, y = queue, color = resource), size = 2) +
  ggtitle("Simulated backlog curve") +
  xlab("Time") +
  ylab("Backlog") +
  scale_colour_manual(values = c("#FF0000", "#428EF4"))
```

Adding additional resources to processing seems to stabilize the system. This indicates a resource bottleneck.
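This outcome is consistent with a back-of-the-envelope utilization check based on the model's own parameters (mean interarrival time 5, mean processing time 1 + 1/0.05 = 21):

```python
# Rough stability check: a queue can only be stable in the long run if
# utilization = arrival_rate * mean_service_time / servers < 1
arrival_rate = 1 / 5                 # mean interarrival time: uniform(3, 7) -> 5
mean_processing_time = 1 + 1 / 0.05  # 1 + Exp(0.05) -> 21 time units

rho_4 = arrival_rate * mean_processing_time / 4  # four stations
rho_5 = arrival_rate * mean_processing_time / 5  # five stations

print(round(rho_4, 3))  # 1.05 -> overloaded, backlog grows without bound
print(round(rho_5, 3))  # 0.84 -> stable in the long run
```

Note that by the same arithmetic the shipping resource operates at utilization 0.2 × 5 / 1 = 1.0, i.e. exactly at capacity, which makes it a borderline case worth watching.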

Even a simple backlog simulation model such as the one in this article offers a strong advantage over conventional Excel sheets: It captures the dynamics of processing times and resource limitations / capacities.

This simple backlog simulation model has many limitations. Raw material supply does not depend on the current backlog (the missing feedback loop results in ever-increasing inventory levels). In addition, the model merely implements a FIFO logic, which is e.g. not sufficient to model a more complex production system with many alternative production controls and job prioritizations.

One alternative approach that I can recommend for a simple backlog simulation of this kind is to build a simulation model in VenSim. VenSim is a very simple tool for system dynamics, i.e. for modeling systems described by stocks and flows. A system of this kind could very well have been analyzed using a system dynamics approach. I will cover VenSim and system dynamics in upcoming articles.

In this article I demonstrated a more appropriate technique for simulating production backlog than simple Excel calculations. The method applied was discrete-event simulation, making use of simmer, an R package for discrete-event simulation.

For your reference, here are some articles that could be of interest to you in this context:

- Link: *Simulation techniques*
- Link: *Discrete-event simulation in R with simmer*
- Link: *Simulating a simple receival process using the simmer R-package*


The post Add()-method in pyautocad appeared first on Supply Chain Data Analytics.

The pyautocad Add()-method creates member objects and adds them to our document. In this section, I discuss the basic syntax that can be used to create objects in this way. This applies to the following object types/groups:

- Dictionaries
- DimStyles
- Documents
- Groups
- Layers
- Layouts
- Linetypes
- Materials
- PopupMenus
- RegisteredApplications
- SelectionSets
- TextStyles
- Toolbars
- Views
- Viewports

The syntax to create these objects is very simple:

`object.Add(Name) # pyautocad Add()-method`

For instance, if I want to create a new layer, I can use the following syntax:

`acad.doc.Layers.Add(layer_name)`

The same concept works for all of the other object types in the list above.

Sometimes we need to work with multiple objects, treating them as a single unit. In such cases, we use blocks.

The pyautocad syntax for creating a block in AutoCAD is as follows.

`object.Add(Insertion_Point, Block_Name)`

After creating a block, I can store it in a variable and add different geometries to it using the methods discussed in my previous blog posts.

```
b1 = acad.doc.Blocks.Add(ip, "Test_block_1")
l1 = b1.AddLine(APoint(100, 100, 0), APoint(350, 350, 0))
c1 = b1.AddCircle(APoint(200, 250, 0), 150)
```

Now the AutoCAD block is created as a part of the document.

However, the block is not yet visible in the model space. To use it, I must insert it into the model space with the pyautocad InsertBlock()-method.

The pyautocad syntax for applying the InsertBlock()-method is as shown below:

```
object.InsertBlock(InsertionPoint, Name, Xscale, Yscale, ZScale, Rotation, Password)

# e.g.:
acad.model.InsertBlock(APoint(250, 500, 0), "Test_block_1", 1, 1, 1, 0)
```

I can see that the block has been successfully inserted into the model space.

For further coverage of AutoCAD automation, please check my other blog posts on pyautocad and pywin32. Please leave any questions you might have as a comment below, and feel free to contact me for technical assistance via our contact form.


The post Artificially intelligent algorithms for optimization in Python appeared first on Supply Chain Data Analytics.

Artificially intelligent algorithms (AIAs) may not reach globally optimal solutions, which means that the user should always be aware of the reason for choosing them for a given use case, and should try to tune their parameters or increase their adaptiveness to the landscape of the optimization problem. However, contrary to the (exact) solvers introduced previously, they can obtain solutions in a considerably shorter amount of time. Also, AIAs may not deliver feasible solutions, depending on the characteristics of the optimization model (and the programming expertise of the user). The significant advantage of using an AIA is that, no matter the features of the optimization problem (e.g., LP, MIP, MILP, or MINLP), an AIA can act as a general problem solver (GPS) and solve all of them.

**Notably, this is not a complete survey of what is available, but I will try to complete it over time.** Moreover, I implement either GA or PSO via the packages introduced, two of the most famous AIAs, introduced in 1975 and 1995 respectively, so the packages covered should support at least one of them. Finally, since the optimization model presented here (+) has constraints, I model them as penalties in the objective function.
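As a minimal, generic sketch of this penalty approach (the objective, constraint, and solution values below are illustrative only, though the penalty weight of 150 mirrors the coding examples further down): a constrained problem min f(x) s.t. g(x) ≤ 0 becomes unconstrained by adding a squared-violation term to the objective.

```python
def penalized_objective(f, constraints, penalty_weight):
    """Turn min f(x) s.t. g_i(x) <= 0 into an unconstrained objective."""
    def wrapped(x):
        # only positive constraint values (violations) are penalized
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + penalty_weight * violation
    return wrapped

# Hypothetical toy problem: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0)
obj = penalized_objective(lambda x: x * x, [lambda x: 1 - x], penalty_weight=150)

print(obj(2.0))  # 4.0   -> feasible point, no penalty added
print(obj(0.0))  # 150.0 -> infeasible point, heavily penalized
```

Any black-box optimizer, GA and PSO included, can then minimize `wrapped` directly, which is exactly how the constraints are handled in the examples below.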

- Supported algorithms: GA.
- Supporting multiple objectives: No.

Here, I provide a coding example using “geneticalgorithm” in Python:

```
# Installation (uncomment the line below)
# !pip install geneticalgorithm

import numpy as np
from geneticalgorithm import geneticalgorithm as ga

# Define characteristics of variables:
varbound = np.array([[0, 1], [0, 1]])
vartype = np.array([['real'], ['real']])

# Define settings of the algorithm:
algorithm_param = {'max_num_iteration': 100,
                   'population_size': 60,
                   'mutation_probability': 0.1,
                   'elit_ratio': 0.01,
                   'crossover_probability': 0.5,
                   'parents_portion': 0.3,
                   'crossover_type': 'uniform',
                   'max_iteration_without_improv': None}

# Define your optimization model:
def MyOptProb(X):
    y = 0 + X[1] * (1.29 - 0)
    x = np.round(0 + X[0] * (2 - 0))
    g1 = 5/10 * x + 3/10 * y - 1
    g2 = 2/9 * x + 7/9 * y - 1
    penalty = np.amax(np.array([0, g1, g2]))
    return -(2*x + 5*y) + 150 * penalty**2

# Define a solution retriever:
def Results(obj, variables):
    x = round(0 + variables[0] * (2 - 0))
    y = 0 + variables[1] * (1.29 - 0)  # y is decoded from the second variable
    g1 = 5 * x + 3 * y - 10
    g2 = 2 * x + 7 * y - 9
    print(g1)
    print(g2)
    if g1 > 10e-6 or g2 > 10e-6:
        print("infeasible")
    else:
        print("feasible")
    print("x:", x)
    print("y:", y)
    print("obj:", 2*x + 5*y)

# Model and solve the problem:
model = ga(function=MyOptProb, dimension=2, variable_type_mixed=vartype,
           variable_boundaries=varbound, algorithm_parameters=algorithm_param)
model.run()

# Display results:
Results(model.best_function, model.best_variable)
```

- Supported algorithms: PSO.
- Supporting multiple objectives: No

Here, I provide a coding example using “pyswarms” in Python:

```
# Installation (uncomment the line below)
# !pip install pyswarms

from pyswarms.single.global_best import GlobalBestPSO
import numpy as np

# Define characteristics of variables:
x_min = [0, 0]
x_max = [1, 1]
bounds = (x_min, x_max)
dim = len(x_min)

# Define settings of the algorithm:
pop = 100
iterations = 250
options = {'c1': 0.5, 'c2': 0.4, 'w': 0.9}

# Define your optimization model:
def MyOptProb(x):
    y = 0 + x[:, 1] * (1.29 - 0)
    x = np.round(0 + x[:, 0] * (2 - 0))
    g1 = 5/10 * x + 3/10 * y - 1
    g2 = 2/9 * x + 7/9 * y - 1
    # per-particle penalty: maximum constraint violation, floored at zero
    penalty = np.amax(np.array([np.zeros(pop), g1, g2]), axis=0)
    return -(2*x + 5*y) + 150 * penalty**2

# Define a solution retriever:
def Results(obj, variables):
    x = round(0 + variables[0] * (2 - 0))
    y = 0 + variables[1] * (1.29 - 0)  # y is decoded from the second variable
    g1 = 5 * x + 3 * y - 10
    g2 = 2 * x + 7 * y - 9
    if g1 > 10e-6 or g2 > 10e-6:
        print('infeasible')
    else:
        print('feasible')
    print('x:', x)
    print('y:', y)
    print('obj:', 2*x + 5*y)

# Model and solve the problem:
optimizer = GlobalBestPSO(n_particles=pop, dimensions=dim, options=options, bounds=bounds)
(obj, variables) = optimizer.optimize(MyOptProb, iterations)

# Display results:
Results(obj, variables)
```


If this article is going to be used in research or other publishing methods, you can cite it as Tafakkori (2022) (in text) and refer to it as follows: Tafakkori, K. (2022). Artificially intelligent algorithms for optimization in Python. *Supply Chain Data Analytics*. url: https://www.supplychaindataanalytics.com/artificially-intelligent-algorithms-for-optimization-in-python/


The post Price optimization for maximum profit and inventory control appeared first on Supply Chain Data Analytics.

Prices are a powerful lever for retail businesses to drive profits, revenues, customer loyalty, market penetration, and so on. They also influence a company’s supply chain operations and, thus, workforce scheduling and delivery times.

An important and fascinating aspect of pricing is understanding customers’ behavior with regard to price changes. Nowadays this involves analyzing large quantities of data via data science methods.

For example, causal analysis can estimate the effect of marketing campaigns on sales or profit, while forecasting methods can predict the actual demand the business will encounter in the near future.

However, even the best predictions extracted from the past, be it yesterday, the last week or the last year, are not enough to make reliable decisions for the future.

Indeed, once the best predictions and insights are in, a company can have thousands or millions of prices to set. For example, consider a retailer with a weekly pricing schedule, just 10 different pricing regions, and 1,000 articles in the assortment. She would then have 10,000 prices to set each week.

As each pricing decision can influence other prices, the problem requires considering a huge amount of possible solutions, which cannot be handled manually.

Thus, to have a modern and effective dynamic pricing solution, i.e. one able to quickly react to changes in the market or the organization’s plans, pricing managers need reliable insights from past data as well as a *pricing optimizer* that finds the best combination of prices to reach their goals.

In the following I will present this use case for pricing optimization, implemented with mathematical programming:

**A pricing manager for a fashion retailer needs to set prices for the next week to maximize profit. She also worries about having too much left-over stock over the next weeks.**

As explained above, this is an example of dynamic pricing, where prices are updated frequently to react to changes in demand and other conditions like inventory. I will then build a pricing optimizer based on mathematical programming to generate good prices for the next week, according to the given goals.

The tool will provide quick and reliable price plans for each week, improving the efficiency and effectiveness of pricing decisions, as well as simulations of optimal pricing policies that can help clarify the relationship of pricing with other functions of the organization, like supply chain and inventory management, further improving communication among decision makers.

The example provided here is going to be very simple and easy to present, but the same approach and technology is routinely applied to far more complicated and realistic scenarios.

In order to account for future profitability and the left-over stock, the optimization model will cover a four-week horizon. The first week in the horizon provides the prices for the next week, while the others provide an “educated guess” of what to expect in the near future. I then assume the pricing manager wants to minimize the left-over stock at the end of the 4th week.

After week 1, the model will be run again in week 2, covering up to week 5, in a “sliding-window” fashion, so that on week 2 the model can include new data such as the updated inventory values, newer demand forecasts and, if present, newer business-driven targets.

**Note:** In a real-world setting I would use a more structured (a.k.a. “hierarchical”) approach, where a longer-term model, e.g. a monthly model covering the next quarter, would provide the targets (e.g. on inventory, profit, margin) for the weekly model to reach over the 4-week period.

The model will control these variables or decisions for each week and article:

- prices
- stock
- sales

and have the constraints:

- “stock flow”: the stock available at each week is either sold or becomes available for the next week
- “stock on sale”: on each week all the stock is available for sale, no stock withholding is allowed

And two objectives:

- maximize total profit
- minimize the 4th week’s left-over stock

These elements can be readily modeled in a mathematical program and optimized with general purpose algorithms.

A couple of further details on the example problem:

- for simplicity, I do not include inventory replenishment options within the 4-week horizon, so the initial stock is all the stock available.
- Since the articles have different prices, I will use a uniform “discount notation” where prices are represented as a discount from the list price, and can range between 0% and 50%, with a 5% step. Smaller adjustments like rounding to ‘0.99’ can be handled outside the model by some rule-based post-processing.

I will solve the problem with a bi-objective approach, optimizing for profit and stock in two steps:

- “Max-Profit”: maximize profit,
- “Max-Profit/Min-Stock”: minimize the left-over stock while constraining the total profit to be *near* the optimal value computed at step 1.

In practice, in the second step I will slightly relax the constraint on total profit, allowing up to a 2.5% loss w.r.t. the optimal value computed in the first step.

Indeed, forcing the total profit to stay too close to the optimum can significantly increase the complexity of the model and leaves little room for the left-over stock to be reduced. On the other hand, the “profit loss” is mostly theoretical as it would require perfect accuracy for the forecasts and the pricing model to actually achieve the optimum.
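To make the two-step scheme concrete, here is a deliberately tiny sketch in Python: one article, two weeks, three discount levels, and a made-up linear demand response. All numbers are illustrative, and the plans are brute-forced instead of being solved with PuLP/CBC, which only works because the toy search space is so small.

```python
from itertools import product

# Toy data (made up for illustration): one article, two weeks
LIST_PRICE, COST, STOCK0 = 10.0, 5.0, 30
DISCOUNTS = [0.0, 0.25, 0.5]

def demand(d):
    # Hypothetical linear demand response: deeper discount -> more demand
    return round(10 + 40 * d)

def evaluate(plan):
    stock, profit = STOCK0, 0.0
    for d in plan:
        sales = min(demand(d), stock)  # "stock on sale": everything is offered
        profit += sales * (LIST_PRICE * (1 - d) - COST)
        stock -= sales                 # "stock flow": unsold stock carries over
    return profit, stock               # second value = left-over stock

plans = list(product(DISCOUNTS, repeat=2))

# Step 1 ("Max-Profit"): maximize total profit
best_profit = max(evaluate(p)[0] for p in plans)

# Step 2 ("Max-Profit/Min-Stock"): minimize left-over stock among plans
# whose profit stays within 2.5% of the step-1 optimum
feasible = [p for p in plans if evaluate(p)[0] >= 0.975 * best_profit]
best_plan = min(feasible, key=lambda p: evaluate(p)[1])

print(best_profit, best_plan, evaluate(best_plan))
```

In this toy instance, several plans reach the maximum profit, and the second step simply picks the one among them with the least left-over stock, mirroring the behavior of the full model.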

Consider three items: jeans, shirts and socks, with the data:

| Article | List price [€] | Cost [€] | Initial stock |
|---|---|---|---|
| socks | 10 | 5 | 290 |
| jeans | 25 | 15 | 250 |
| shirts | 40 | 25 | 280 |

Here are the (randomly generated) demand curves and demand time series:

Demand time series (for selected discount levels):

I then implemented the bi-objective scheme in Python with the open-source general-purpose solver PuLP/CBC. Here are the results:

| Scenario | Step 1 “Max-Profit” | Step 2 “Max-Profit / Min-Stock” | Change [%] |
|---|---|---|---|
| Profit [€] | 5517 | 5381 | -2.5 |
| Revenue [€] | 15305 | 16543 | +8.0 |
| Left-over stock [units] | 199 | 88 | -56.0 |

| Article | Step 1 “Max-Profit” | Step 2 “Max-Profit / Min-Stock” | Change [%] |
|---|---|---|---|
| socks | 104 | 34 | -67.6 |
| jeans | 47 | 47 | 0.0 |
| shirts | 47 | 7 | -86.3 |

The final solution achieves high profitability (97.5% of the theoretical maximum) with the minimum amount of left-over stock possible for that level of profit.

Compared to the purely profit-maximizing solution of the first step, the solution of the second step reduces the left-over stock by more than half by increasing discounts for socks and shirts to push their sales.

By using appropriate technology like mathematical programming, decision makers can find effective solutions for complex decision problems with competing targets. Indeed, as the number of possible solutions for these problems explodes, the probability of having surprisingly good solutions increases; the issue is then in using the right tool to quickly find them.

In this article I also touched on the topic of multi-objective optimization. If you are interested in multi-objective optimization you can find several examples implementing multi-objective optimization in Python here on this blog. Here is an overview of some references that might be of interest to you:

- Link: *Large-scale price optimization for an online retail fashion retailer*
- Link: *Multi-objective linear optimization with PuLP in Python*
- Link: *Multi-objective linear optimization with weighted sub-problems using PuLP in Python*
- Link: *Scalarizing multi-objective optimization problems*
- Link: *Pricing with linear demand function using Gekko in Python*


The post Machine learning and discrete-event simulation: Exemplary applications appeared first on Supply Chain Data Analytics.

**Discrete-event simulation** is a technique used in manufacturing and logistics for problems that cannot be investigated with conventional analytical techniques. Conventional techniques include, e.g., **static calculations**, **mathematical programming** or **statistics**. It is frequently the case that such conventional techniques do not satisfy the analysis requirements of complex production systems or complex logistics processes. For example, control logics or interdependencies between production equipment, logistics equipment and planning algorithms can often only be analyzed with a simulation model. The cost of building such simulation models can be huge, and deriving findings from the simulation results can be costly too. In both cases costs can be reduced by applying machine learning. That is what I want to show in this article.

In this article I highlight exemplary applications of machine learning in the context of discrete-event simulation. These examples are applicable to simulation studies in factory planning, supply chain network design, production scheduling, process design and more. Topics addressed are reinforcement learning in simulation models, neural networks for extrapolating simulated sample spaces, and a comparison of reinforcement learning in simulation models vs. conventional model optimizers.

In the following sections of this article I will explain what discrete-event simulation is. I will also explain what reinforcement learning and neural networks are. I proceed by providing an overview of possible approaches for integrating machine learning and simulation. I will present a modeling approach that combines simulation and neural networks for extrapolating a simulated sample of the overall solution space.

In one of my earlier articles I provided a simulation technique classification system. Discrete-event simulation is my favorite simulation technique. It is a powerful technique for analyzing processes in manufacturing and logistics systems. I use discrete-event simulation to assess factory layouts, material handling concepts, and material flow optimization projects. I have also used discrete-event simulation for tactical production planning, i.e. production planning in the short and medium term. Examples include inventory simulation and production schedule simulation. Compared to agent-based simulation, system dynamics and Monte Carlo simulation, discrete-event simulation is an appropriate technique for modeling processes. Its focus is rather detailed and less conceptual, as this modeling technique implements decision making at the process level, whereas other techniques neglect this level of detail.

In this article I highlight exemplary applications of machine learning, another powerful domain, in the context of discrete-event simulation projects. I have already published an article on this website covering the three machine learning paradigms: supervised machine learning, unsupervised machine learning, and reinforcement learning. All of these paradigms have different scopes and are powerful when applied appropriately. Reinforcement learning is also often described as artificial intelligence, since at its core an algorithm improves a state-based decision-making policy over time as it learns from rewards and punishments received as feedback from its environment.

Neural networks are a specific subset of machine learning technology, often associated with deep learning. At its core, a neural network is composed of neurons that together reproduce a signal-and-response network. Neural networks aim at reproducing the behaviour of a human brain and are used for pattern recognition and problem solving without any particular analytical framework.

In the figure below I illustrate the structure of a simple neural network. There are many different neural network architectures, but all share this basic structural framework: a defined number of input neurons (the input layer), one or more hidden layers, and an output layer.

Each neuron converts input values into an output value. You can think of every neuron performing a regression analysis on the inputs provided, forwarding the result as an output to the next layer. The layers between original input and final output are referred to as hidden layers. The more hidden layers, the deeper the network (generally speaking).
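A single neuron is thus essentially a weighted sum passed through an activation function. A minimal sketch of a forward pass through a 2–3–1 network (the weights below are arbitrary illustration values, not a trained network):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed by an activation function
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(inputs):
    # Two input values -> three hidden neurons -> one output neuron
    hidden = [
        neuron(inputs, [0.5, -0.3], 0.1),
        neuron(inputs, [-0.8, 0.2], 0.0),
        neuron(inputs, [0.1, 0.9], -0.2),
    ]
    return neuron(hidden, [0.7, -0.5, 0.3], 0.05)

print(forward([1.0, 2.0]))
```

Training would consist of adjusting the weights and biases to minimize the deviation between the network's outputs and known target values.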

Artificial neural networks can solve very complex real-world problems and, in that sense, practically facilitate artificial intelligence: they are capable of learning relationships between input and output values, even when these are nonlinear or more complex still. This enables such models to reveal relationships that are otherwise difficult to predict.

Artificial neural networks constitute a black-box model. This comes with risks, two major ones being 1) the risk of overfitting the model, and 2) the difficulty of drawing generalizable findings from the model. Artificial neural networks are not good at explaining how and why they made their decisions. On the other hand, artificial neural networks tend to work well even in cases of extreme variance and in the presence of extreme outliers.

In supply chain management, neural network models can e.g. be used for inventory control and the release of purchasing orders. As I will demonstrate, neural networks can also be used to support discrete-event simulation studies, making them more efficient and effective. I have also seen applications of neural networks for transport routing and transportation network optimization.

Beyond manufacturing and logistics, neural networks are e.g. used for:

- Credit card fraud detection
- Insurance fraud detection
- Voice recognition
- Natural language processing
- Medical disease diagnosis
- Financial stock price prediction
- Process and quality control
- Demand forecasting
- Image recognition

Reinforcement learning is a machine learning paradigm that allows for decision making under uncertainty. In another article that I published on SCDA I explained how the three paradigms of machine learning are **I) supervised machine learning**, **II) unsupervised machine learning**, and **III) reinforcement learning**.

The class of **supervised machine learning (I) **algorithms describes algorithms that train models based on labelled data, i.e. data with known inputs and outputs. Linear regression is an example of supervised machine learning.

The class of **unsupervised machine learning (II)** algorithms describes algorithms that train models that can recognize patterns and structures in unlabelled data. k-means clustering is an example of unsupervised machine learning.

The class of **reinforcement learning (III)** algorithms differs from both supervised and unsupervised machine learning in that its algorithms can operate without any known data, i.e. they do not need to be trained on existing input data. At their core they consist of a decision-making policy that describes which actions should be taken depending on the current state. By executing the decisions proposed by the policy and harvesting the resulting rewards, the policy is adjusted. This alters decision making iteratively. The reinforcement learning concept is illustrated in the figure below.
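The act-reward-adjust loop just described can be sketched in a few lines. Below a toy Q-learning example with a single state and two possible actions; the reward values are made up purely for illustration:

```python
import random

random.seed(0)

# Toy environment (made up): action 0 yields an average reward of 1.0,
# action 1 an average reward of 0.2, both with some noise.
def reward(action):
    return random.gauss(1.0 if action == 0 else 0.2, 0.1)

Q = [0.0, 0.0]          # the policy's current value estimate per action
alpha, epsilon = 0.1, 0.2

for _ in range(2000):
    # Epsilon-greedy policy: mostly exploit the best-known action,
    # but sometimes explore a random one
    action = random.randrange(2) if random.random() < epsilon else Q.index(max(Q))
    # Adjust the policy using the reward fed back by the environment
    Q[action] += alpha * (reward(action) - Q[action])

print([round(q, 2) for q in Q])  # Q[0] ends up clearly above Q[1]
```

In a simulation context, the "environment" would be the running simulation model itself, and the reward would be derived from the simulated KPIs.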

Some advantages of reinforcement learning algorithms include that they are capable of operating without training data. This means that they can be used in environments with a high level of uncertainty. Even when there is very little information available a reinforcement learning model might work well.

In general, however, reinforcement learning models can be overkill and may represent a case of over-engineering. They can also require excessive amounts of computational power and might in some cases better be replaced by a supervised or unsupervised machine learning algorithm.

One widely recognized application of reinforcement learning in supply chain management is inventory management. Another popular example can be found in robots and robot control. This is relevant for material equipment handling.

But there have also been commercial applications of reinforcement learning as part of discrete-event simulation. The company Pathmind (no longer in business) developed a tool that could be used to implement reinforcement learning directly in AnyLogic.

When integrating machine learning and discrete-event simulation I recommend one of two approaches: machine learning can be integrated into the model as an alternative to conventional experiment planners or model optimizers, or machine learning can run on top of the simulation model, supporting it and deriving more information from its results and parameter configurations.

I refer to the first approach as** “machine learning integration”**, and to the latter as **“machine learning support”**. In the following sections of this article I proceed by introducing exemplary applications of machine learning integration and machine learning support.

In below figure I illustrate how a simulation study can be supported by artificial neural network models.

The neural network is responsible for inferring from a small sample of simulation runs onto a broader solution space. In other words, a neural network is trained on the model configurations that were simulated, together with the associated simulation results. Using this small sample, the neural network is adjusted to minimize the deviation between simulated results (KPI values) and predicted neural network outputs.

Once the neural network has been trained, it is tested against another small sample of simulated model configurations to assess its validity for predicting previously unknown system configurations.

The simulation engineer can now use the neural network model to predict model outputs (KPI values of the system of interest) for system configurations beyond the solution space that he has implemented and tested in his simulation environment. In other words, the simulation engineer can predict system behaviour based on known simulation results without having to implement additional model configurations.
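A minimal sketch of this surrogate idea follows. For the sake of a self-contained example, a cheap stand-in function replaces the expensive simulation runs, and a least-squares straight line replaces the neural network; all numbers are made up:

```python
import random

random.seed(7)

def simulate_kpi(capacity):
    # Stand-in for an expensive simulation run: the backlog KPI shrinks
    # roughly linearly with added capacity, plus simulation noise
    return 100 - 8 * capacity + random.gauss(0, 1)

# A small sample of simulated configurations (the "training" runs)
xs = [2, 3, 4, 5, 6]
ys = [simulate_kpi(c) for c in xs]

# Least-squares line fitted to the sample (the "surrogate" model)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Predict the KPI for a configuration that was never simulated
print(round(a + b * 8, 1))
```

A real study would of course use a neural network (or another flexible regressor) instead of a straight line, and would validate the predictions against a held-out set of simulated configurations as described above.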

The path taken by reinforcement learning represents a very different way of utilizing artificial intelligence for simulation studies. With reinforcement learning, much like with conventional simulation model optimizers, the aim is to find a good or even optimal system configuration as quickly as possible, i.e. with least possible effort (cost).

Conventional simulation optimizers are meant to support the simulation analyst in identifying promising system configurations quickly. These optimizers take the form of experiment managers and implement strategies such as **(I) gradient descent**, **(II) factorial designs**, and **(III) evolutionary improvement**.

In the figure below I illustrate how conventional simulation model optimizers aim at supporting efficient and effective searches for an optimum within the derived solution space.

Conventional simulation model optimizers are generally capable of handling even very large numbers of decision variables and variable types, but they usually require a lot of computational power. In many cases the optimizers are furthermore not capable of finding good solutions. To avoid excessive computational effort, industrial engineers and analysts often apply heuristics. The problem with these, however, is that they are inherently inflexible.
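As a toy illustration of such an optimizer strategy — the `simulate` function below is an invented stand-in for an expensive simulation run, not a real model — a gradient-descent-like greedy neighborhood search over a small configuration grid might look like this:

```python
def simulate(config):
    """Placeholder for an expensive simulation run: returns a cost to
    minimize (an arbitrary, made-up response surface)."""
    machines, buffer = config
    return abs(machines - 3) * 10 + abs(buffer - 2) * 5

def hill_climb(start, max_runs=50):
    """Greedy neighborhood search as an experiment manager might drive it;
    runs counts the neighbor configurations that were evaluated."""
    current, runs = start, 0
    while runs < max_runs:
        m, b = current
        neighbors = [(m + dm, b + db)
                     for dm, db in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 1 <= m + dm <= 5 and 0 <= b + db <= 4]
        best_neighbor = min(neighbors, key=simulate)
        runs += len(neighbors)
        if simulate(best_neighbor) >= simulate(current):
            return current, runs  # local optimum reached
        current = best_neighbor
    return current, runs

best, runs_used = hill_climb((1, 0))
print(best, runs_used)  # → (3, 2) 16
```

Starting from the configuration (1, 0), the search reaches the grid optimum (3, 2) after evaluating 16 candidate configurations — far fewer than the 25 runs a full enumeration would need.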

Another alternative to conventional optimizers is provided by reinforcement learning. Pathmind was one exemplary application that provided a simulation-related reinforcement learning service in AnyLogic. Pathmind offered the possibility of handling high variability, scalability to large state spaces, and multi-objective optimization with even contradictory objectives. In the video below I have embedded a YouTube reference covering reinforcement learning with Pathmind in AnyLogic.

Some other interesting sources that investigate or explain reinforcement learning for discrete-event simulation are listed below:

- Reinforcement learning in AnyLogic simulation model – a guiding example using Pathmind
- Pathmind examples
- AI reinforcement learning digital twins can solve shortages and save Christmas
- Solving complex business problems with simulation and Pathmind AI

In this article I have presented two procedures for using artificial intelligence in discrete-event simulation engineering. Both procedures use artificial intelligence to enhance the effectiveness and efficiency of the simulation model; the simulation model itself does not represent artificial intelligence. I personally believe that neural networks can strongly support making simulation study execution more efficient. The domain of reinforcement learning is a very interesting one, but it remains to be seen whether reinforcement learning can really gain a commercial and practical foothold in simulation engineering compared to, e.g., conventional optimizers.

If you are interested in hearing my thoughts on this subject in greater detail, you can also watch the YouTube video referenced below.

The post Machine learning and discrete-event simulation: Exemplary applications appeared first on Supply Chain Data Analytics.

]]>The post Creating a support vector machine using Gekko in Python appeared first on Supply Chain Data Analytics.

]]>In this article, I model the intelligence of such a machine (i.e. a support vector machine) using mathematical relations and optimize its intelligence via Gekko, an optimization interface in Python with large-scale non-linear solvers.

Herein, I code the decision problem for creating a support vector machine (SVM):

```
import gekko as op
import itertools as it

# Developer: @KeivanTafakkori, 22 April 2022

def model(U, T, a, b, solve="y"):
    m = op.GEKKO(remote=False, name='SupportVectorMachine')
    alpha = {t: m.Var(lb=0, ub=None) for t in T}
    n_a = {(t, i): a[t][i] for t, i in it.product(T, U)}
    n_b = {t: b[t] for t in T}
    objs = {0: sum(alpha[t] for t in T)
               - sum(alpha[t]*alpha[tt] * n_b[t]*n_b[tt]
                     * sum(n_a[(t, i)]*n_a[(tt, i)] for i in U)
                     for t, tt in it.product(T, T))}
    cons = {0: {0: (sum(alpha[t]*n_b[t] for t in T) == 0) for t in T}}
    m.Maximize(objs[0])
    for keys1 in cons:
        for keys2 in cons[keys1]:
            m.Equation(cons[keys1][keys2])
    if solve == "y":
        m.options.SOLVER = 1
        m.solve(disp=True)
        for keys in alpha:
            alpha[keys] = alpha[keys].value[0]
            print(f"alpha[{keys}]", alpha[keys])
    x = [None for i in U]
    for i in U:
        x[i] = sum(alpha[t]*b[t]*n_a[(t, i)] for t in T)
    for t in T:
        if alpha[t] > 0:
            z = b[t] - sum(x[i]*n_a[(t, i)] for i in U)
            break
    return m, x, z, alpha
```

The decision here is to find values of alpha (support vectors). Note that the optimization model coded here is the dual form of the main optimization problem for SVMs. The dual form of the model does not need constraints over the observed data. Moreover, it can create non-linear boundaries between classes using kernel tricks. Such characteristics make it more suitable for deriving better boundaries and for datasets with higher dimensions (without a need for dimensionality reduction!), e.g., images, while making it more computationally efficient. Accordingly, an SVM can in some settings perform better than complex deep neural networks.
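To make the kernel-trick remark concrete: the linear inner product `sum(n_a[(t,i)]*n_a[(tt,i)] for i in U)` in the dual objective can be swapped for any positive-definite kernel. The sketch below is generic and not part of the Gekko model above; `gamma` is an assumed hyperparameter:

```python
import numpy as np

def rbf_kernel(u, v, gamma=0.5):
    """Gaussian (RBF) kernel: a drop-in replacement for the linear
    inner product between two training inputs."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.exp(-gamma * np.sum((u - v) ** 2)))

# Gram matrix over some example training inputs
a = [[1, 2, 2], [2, 3, 3], [3, 4, 5], [4, 5, 6], [5, 7, 8]]
K = [[rbf_kernel(a[t], a[tt]) for tt in range(len(a))] for t in range(len(a))]
```

The resulting Gram matrix `K[t][tt]` would then replace the pairwise products in the dual objective, yielding a non-linear decision boundary in the original feature space.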

Next, once training the machine is completed, the stored values for *alpha* and *z* can be used to make predictions using the following prediction function:

```
def classify(dataset, x, z, alpha, a):
    # Note: relies on the globals T (training points) and U (features)
    if sum(sum(alpha[t]*dataset[1][t]*dataset[0][t][i] for t in T)*a[i] for i in U) + z > 0:
        return 1
    else:
        return -1
```

For instance, consider the following dataset:

```
#      EXP1     EXP2     EXP3     EXP4     EXP5
a = [[1,2,2], [2,3,3], [3,4,5], [4,5,6], [5,7,8]]  # Training dataset (inputs)
b = [  -1   ,   1    ,   -1   ,   1    ,   -1   ]  # Training dataset (outputs)
U = range(len(a[0]))  # Set of input features
T = range(len(b))     # Set of the training points
```

To make predictions, at first we model and solve the mentioned optimization problem:

```
m, x, z, alpha = model(U,T,a,b) #Model and solve the problem
```

The results are as follows:

```
alpha[0] 7.8112756344
alpha[1] 9.8112756622
alpha[2] 2.1887244438
alpha[3] 3.1887243395
alpha[4] 2.9999999235
```

Then, we implement the prediction function:

```
print(classify([a,b],x,z,alpha,[1,2,2]))  # Predict the output (100% accurate :)!
```

Fortunately, the result is 100% accurate and the output is as follows:

```
-1
```

That’s all! We have successfully built our support vector machine without using dedicated machine learning packages, but by considering the **optimization logic** behind it!

In this article, I showed how a classifier, called a support vector machine, can be modeled using an optimization model, then solved to optimality to characterize a prediction function, which can accurately label new observations. In the following articles, I will try to discuss kernels and how to use them effectively.

If this article is going to be used in research or other publishing methods, you can cite it as Tafakkori (2022) (in text) and refer to it as follows: Tafakkori, K. (2022). Creating a support vector machine using Gekko in Python. *Supply Chain Data Analytics*. url: https://www.supplychaindataanalytics.com/creating-a-support-vector-machine-using-gekko-in-python/

The post Creating a support vector machine using Gekko in Python appeared first on Supply Chain Data Analytics.

]]>The post Simmer R-package applied to simulate simple receival inspection process appeared first on Supply Chain Data Analytics.

]]>I provide a very simple conceptual model of a receival inspection process in the figure below.

The process is as follows: Pallets received are unloaded box by box and placed onto a first conveyor line. This conveyor line leads to the inspection station. In this simple model the assumption is that every box is inspected. After inspection, the box is forwarded to the central storage area via another conveyor line.

Having drafted a conceptual model I can now implement it in code. This is where the simmer R-package comes into play. The R code is displayed below. As you can see I also make use of the R-packages magrittr and dplyr.

```
# importing simmer and other packages
library(magrittr)
library(simmer)
library(simmer.plot)
library(dplyr)
# set seed for random number generation
set.seed(42)
# declare / create simulation environment
env = simmer("Inspection model")
# set up a receival inspection trajectory workflow
inspection = trajectory("receival inspection") %>%
  seize("conveyor1", 1) %>%
  timeout(3) %>%
  release("conveyor1", 1) %>%
  seize("inspection", 1) %>%
  timeout(15) %>%
  release("inspection", 1) %>%
  seize("conveyor2", 1) %>%
  timeout(3) %>%
  release("conveyor2", 1)
# adding resources to the simulation environment
env %>%
  # there is one conveyor belt going towards inspection
  add_resource("conveyor1",
               capacity = 1,
               queue_size = 10) %>%
  # there is one inspection station
  add_resource("inspection",
               capacity = 1,
               queue_size = 0) %>%
  # there is another conveyor line going from inspection towards the central storage area
  add_resource("conveyor2",
               capacity = 1,
               queue_size = 10)
# adding a generator to the simulation environment and assigning the inspection trajectory to it
env %>%
  add_generator(name_prefix = "boxes",
                trajectory = inspection,
                distribution = function() rnorm(n = 1,
                                                mean = 6,
                                                sd = 1))
```

Having implemented the code I can now execute a simulation run.

In the line of R code displayed below I execute the simulation. I let the simulation run until simulation time 1000, which in this case represents 1000 sec.

`env %>% run(1000)`

```
## simmer environment: Inspection model | now: 1000 | next: 1001.26780219819
## { Monitor: in memory }
## { Resource: conveyor1 | monitored: TRUE | server status: 0(1) | queue status: 0(10) }
## { Resource: inspection | monitored: TRUE | server status: 1(1) | queue status: 0(0) }
## { Resource: conveyor2 | monitored: TRUE | server status: 0(1) | queue status: 0(10) }
## { Source: boxes | monitored: 1 | n_generated: 168 }
```

The information displayed above does not tell us much. We can see that **the source generated 168 arrivals**, which matches the **mean box unloading time of 6 sec** (1000 sec / 168 boxes = 5.95 sec). But this does NOT mean that 168 boxes were inspected. Looking at the conceptual model, this would clearly be impossible, since inspection takes **15 sec per box**. Thus, **within 1000 sec at most 66 boxes can be inspected**. In other words, just because 168 arrivals were generated by the source does not mean that all of them were inspected and processed; many of these arrivals were rejected. When an arrival cannot be served by a resource, and the queue is not big enough to hold it, the arrival is rejected and dropped from the trajectory.
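This capacity argument is plain arithmetic on the model parameters above (shown here in Python for illustration, independently of the R model):

```python
sim_time = 1000        # simulated seconds
mean_interarrival = 6  # mean time between box arrivals (sec)
inspection_time = 15   # inspection time per box (sec)

expected_arrivals = sim_time / mean_interarrival  # expected boxes generated
max_throughput = sim_time // inspection_time      # upper bound on boxes inspected
print(round(expected_arrivals, 1), max_throughput)  # → 166.7 66
```

The 168 arrivals actually observed differ slightly from the expected 166.7 because the interarrival times are random draws.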

As I will demonstrate in another blog post simmer allows for sub-trajectories. Rejected arrivals can be assigned to these sub-trajectories. However, for now: If I want to analyze the simulation results I have to use monitoring functions as provided by the simmer R-package.

As introduced by me in my simmer documentation (see my earlier blog post) the simmer R-package provides monitoring functions for simulation analysis.

In the code below I make use of the **get_mon_resources()** getter-function:

```
dfResults_resources = env %>% get_mon_resources()
dfResults_resources %>% head()
```

| | resource | time | server | queue | capacity | queue_size | system | limit | replication |
|---|---|---|---|---|---|---|---|---|---|
| 1 | conveyor1 | 7.370958 | 1 | 0 | 1 | 10 | 1 | 11 | 1 |
| 2 | conveyor1 | 10.370958 | 0 | 0 | 1 | 10 | 0 | 11 | 1 |
| 3 | inspection | 10.370958 | 1 | 0 | 1 | 0 | 1 | 1 | 1 |
| 4 | conveyor1 | 12.806260 | 1 | 0 | 1 | 10 | 1 | 11 | 1 |
| 5 | conveyor1 | 15.806260 | 0 | 0 | 1 | 10 | 0 | 11 | 1 |
| 6 | conveyor1 | 19.169389 | 1 | 0 | 1 | 10 | 1 | 11 | 1 |

**get_mon_resources()** is a getter-function for obtaining data that is being monitored by simmer. The simmer R-package provides such getter-functions for arrivals, attributes and resources. As becomes clear from the dataframe excerpt displayed above, state changes related to resources are documented in the data returned by **get_mon_resources()**. For example, **conveyor 1 is serving a box at t = 7.37 sec and is done serving that box 3 sec later, at t = 10.37 sec.**

Three getter-functions come along with the simmer R-package. All of them return dataframes.

The getter-functions are the following:

- **get_mon_arrivals():** returns data for each arrival, including name, start time, end time, activity time, and a finished boolean flag.
- **get_mon_resources():** returns data documenting state changes in resources, including resource name, event (trigger) time, server and queue counts, capacity, queue size, system count (server + queue), and system limit (capacity + queue size).
- **get_mon_attributes():** returns data on state changes in attributes, including name, time of the attribute change event, and attribute key.

Using data obtained by getter-functions I can develop custom statistics to assess the performance of a simulated system. In this case I will use the **get_mon_arrivals()** getter-function to implement a throughput statistic. I do so in the code below:

```
throughput = function(env){
throughput_count = env %>% get_mon_arrivals() %>% filter(finished==TRUE) %>% nrow()
return(throughput_count)
}
throughput(env)
```

`## [1] 54`

**After 1000 sec simulation time, 54 boxes have been inspected** and put away in the central storage area.

Using the resource monitoring data retrieved as a dataframe via the **get_mon_resources()** getter-function I can produce helpful plots. In the example below I implement a resource utilization plot.

```
plot(get_mon_resources(env),
metric="utilization",
c("conveyor1","inspection","conveyor2"))
```

The visualization above indicates that **the inspection station is busy more than 80% of its time**. From the model parameters we know that the inspection process itself takes 15 sec. If this represents roughly 83.33% of the inspection station's total available time, then the available time per box is around 18 sec (a rough calculation). **That also means the inspection station is spending about 3 sec per box on something else. This “something else” is the convey-in time from conveyor1 onto the inspection station.**

I will **increase the queue size of the inspection station to 1, up from 0**. This should increase utilization of the inspection station.

```
env2 = simmer("Inspection model 2") %>%
add_resource("conveyor1",
capacity=1,
queue_size=10) %>%
add_resource("inspection",
capacity = 1,
queue_size=1) %>%
add_resource("conveyor2",
capacity=1,
queue_size=10) %>%
add_generator(name_prefix = "boxes",
trajectory = inspection,
distribution = function() rnorm(n=1,
mean=6,
sd=1)) %>%
run(1000)
env2 %>%
get_mon_resources() %>%
plot(metric="utilization",
c("conveyor1","inspection","conveyor2"))
```

There you go. Since convey-in can now be executed onto a buffer position in parallel to the inspection station the utilization of the inspection process itself increases. No time is lost on conveying in.

I complete this analysis by evaluating the throughput of this adjusted system configuration.

`throughput(env2)`

`## [1] 65`

The **throughput** is now 65 boxes after 1000 sec of simulation time, which is close to the theoretical maximum of 66 boxes (1000 sec / 15 sec per box) and thus supports the plausibility of the simulation model.

In this article I introduced the simmer R-package and provided a very simple practical example. In future blog posts I will provide additional examples, aiming to build a better understanding of simmer and to convince simulation engineers that **simmer is a valid alternative to commercial simulation software** such as AnyLogic, Plant Simulation, or FlexSim.

The post Simmer R-package applied to simulate simple receival inspection process appeared first on Supply Chain Data Analytics.

]]>The post Solved: ‘Call was rejected by callee’ in pyautocad appeared first on Supply Chain Data Analytics.

]]>The main reason for getting this error while automating AutoCAD through pythoncom is the speed at which calls are made.

The subsequent calls that we issue to perform certain tasks in AutoCAD arrive faster than the application can handle them.

This eventually throws a ‘Call was rejected by callee’ error.

To solve this issue, we can use the sleep() function from Python’s time module.

`time.sleep(5)`

This call should be placed after heavy functions that consume a considerable amount of computer resources. It takes a number of seconds as its parameter and suspends the current thread for that time.

In the code above I suspend execution for 5 seconds.

The code below is an excerpt from a larger script. I need to turn a viewport on to perform certain tasks on it, but while working with multiple viewports I end up getting the “Call was rejected by callee” error.

To mitigate this, I force the process to wait for 0.5 seconds before continuing.

```
def turn_on_viewport(viewport):
viewport.ViewportOn = True
print("Viewport On: " + str(viewport.ViewportOn))
time.sleep(0.5)
```
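A fixed sleep works, but an alternative pattern is to catch the rejection and retry with a pause instead of pausing after every call unconditionally. The helper below is a generic sketch, not from the original post; in practice you would pass pywin32’s COM error class as the exception type:

```python
import time

def call_with_retry(action, attempts=5, delay=0.5, retry_exceptions=(Exception,)):
    """Call `action`; if it raises (e.g. a 'Call was rejected by callee'
    COM error), wait `delay` seconds and try again, up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return action()
        except retry_exceptions:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```

For example, `call_with_retry(lambda: turn_on_viewport(viewport))` would retry the viewport call a few times before giving up.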

For more blog posts covering AutoCAD automation, please check our other posts on pyautocad and pywin32.

Please leave any questions that you might have as a comment below. Feel free to contact us for any technical assistance. You can do so by using our contact form.

The post Solved: ‘Call was rejected by callee’ in pyautocad appeared first on Supply Chain Data Analytics.

]]>The post Quadratic assignment problem with Pyomo in Python appeared first on Supply Chain Data Analytics.

]]>Herein, I code the decision problem according to the following assumptions:

*The facilities* (assignees):

- Are static.
- Need to communicate (interact) with each other.
- Can be assigned to only a single room, manufacturing cell, geographical region, etc. (in general, a position).
- Have a specific interaction volume between each other.

*The positions* (assignments):

- Are occupied by exactly one facility.
- Have a specific distance (similarity) from each other.

The assignee-assignment relations:

- Are one-to-one.

In summary, the objective is to position facilities with higher interaction volumes closer to each other.
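In compact form, these assumptions yield the classical quadratic assignment formulation that the code below implements, with flow matrix $w$, distance matrix $d$, and binary variables $x_{ij}$ equal to 1 if facility $i$ occupies position $j$:

```latex
\min \sum_{i}\sum_{j}\sum_{k}\sum_{l} w_{ik}\, d_{jl}\, x_{ij}\, x_{kl}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} = 1 \;\; \forall j, \qquad
\sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
x_{ij} \in \{0, 1\}.
```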

```
import pyomo.environ as op
import itertools as it
import os

# Developer: @KeivanTafakkori, 4 April 2022

def model(I, J, a, dispmodel="y", solve="y", dispresult="y"):
    m = op.ConcreteModel("QuadraticAssignmentProblem")
    m.I = op.Set(initialize=I)
    m.J = op.Set(initialize=J)
    m.K = op.SetOf(m.I)
    m.L = op.SetOf(m.J)
    m.x = op.Var(m.I, m.J, domain=op.Binary)
    objs = {0: sum(a[(i,j,k,l)]*m.x[i,j]*m.x[k,l]
                   for i,j,k,l in it.product(m.I, m.J, m.K, m.L))}
    cons = {0: {j: (sum(m.x[i,j] for i in m.I) == 1) for j in m.J},
            1: {i: (sum(m.x[i,j] for j in m.J) == 1) for i in m.I}}
    m.OBJ = op.Objective(expr=objs[0], sense=op.minimize)
    m.constraint = op.ConstraintList()
    for keys1 in cons:
        for keys2 in cons[keys1]:
            m.constraint.add(expr=cons[keys1][keys2])
    if dispmodel == "y":
        print("Model --- \n", m)
    if solve == "y":
        os.environ['NEOS_EMAIL'] = 'myemail@email.com'
        solver_manager = op.SolverManagerFactory('neos')
        results = solver_manager.solve(m, solver="bonmin")
        if dispresult == "y":
            print(results)
            op.display(m)
    return m
```

Notably, I use the *bonmin* solver provided online by the NEOS server, a free cloud platform for solving challenging optimization problems. Therefore, to reproduce the results, make sure you have a reliable internet connection. The server also requires you to enter your email address. To implement the coded model we first need data, which is given as follows:

```
w = [[0, 3, 0, 2],
     [3, 0, 0, 1],
     [0, 0, 0, 4],      # Flow matrix (between assignees)
     [2, 1, 4, 0]]

d = [[ 0, 22, 53, 53],
     [22,  0, 40, 62],
     [53, 40,  0, 55],  # Distance matrix (between assignments)
     [53, 62, 55,  0]]

I = range(len(w))     # Set of assignees
K = I
J = range(len(w[0]))  # Set of assignments
L = J

a = {(i, j, k, l): w[i][k]*d[j][l]
     for i, j, k, l in it.product(I, J, K, L)}  # Relative cost matrix
```

Then, I model and solve QAP as follows:

```
m = model(I,J,a) #Model and solve the problem
```

Fortunately, the status is *optimal* and I obtain the following results:

```
Model QuadraticAssignmentProblem

  Variables:
    x : Size=16, Index=x_index
        Key    : Lower : Value : Upper : Fixed : Stale : Domain
        (0, 0) :     0 :   0.0 :     1 : False : False : Binary
        (0, 1) :     0 :   0.0 :     1 : False : False : Binary
        (0, 2) :     0 :   1.0 :     1 : False : False : Binary
        (0, 3) :     0 :   0.0 :     1 : False : False : Binary
        (1, 0) :     0 :   0.0 :     1 : False : False : Binary
        (1, 1) :     0 :   0.0 :     1 : False : False : Binary
        (1, 2) :     0 :   0.0 :     1 : False : False : Binary
        (1, 3) :     0 :   1.0 :     1 : False : False : Binary
        (2, 0) :     0 :   1.0 :     1 : False : False : Binary
        (2, 1) :     0 :   0.0 :     1 : False : False : Binary
        (2, 2) :     0 :   0.0 :     1 : False : False : Binary
        (2, 3) :     0 :   0.0 :     1 : False : False : Binary
        (3, 0) :     0 :   0.0 :     1 : False : False : Binary
        (3, 1) :     0 :   1.0 :     1 : False : False : Binary
        (3, 2) :     0 :   0.0 :     1 : False : False : Binary
        (3, 3) :     0 :   0.0 :     1 : False : False : Binary

  Objectives:
    OBJ : Size=1, Index=None, Active=True
        Key  : Active : Value
        None :   True : 790.0

  Constraints:
    constraint : Size=8
        Key : Lower : Body : Upper
          1 :   1.0 :  1.0 :   1.0
          2 :   1.0 :  1.0 :   1.0
          3 :   1.0 :  1.0 :   1.0
          4 :   1.0 :  1.0 :   1.0
          5 :   1.0 :  1.0 :   1.0
          6 :   1.0 :  1.0 :   1.0
          7 :   1.0 :  1.0 :   1.0
          8 :   1.0 :  1.0 :   1.0
```

The results show that one should assign facilities 1,2,3 and 4 to positions 3,4,1, and 2, respectively.

In this article, I solved a simple quadratic assignment problem (QAP) via Pyomo, an interface for optimization in Python, using a solver called BONMIN through the NEOS server. Solving such optimization problems requires robust and fast optimization algorithms. Interested readers can refer to previous articles to learn more about solvers and interfaces. Previous articles also show that other optimization problems, such as single machine scheduling, flow shop scheduling, and pricing, can be solved in Python similarly to the quadratic assignment problem. Finally, an implementation of the proposed QAP with the used dataset is provided by NEOS, which can be accessed using this link.

If this article is going to be used in research or other publishing methods, you can cite it as Tafakkori (2022) (in text) and refer to it as follows: Tafakkori, K. (2022). Quadratic assignment problem with Pyomo in Python. *Supply Chain Data Analytics*. url: https://www.supplychaindataanalytics.com/quadratic-assignment-problem-with-pyomo-in-python/

The post Quadratic assignment problem with Pyomo in Python appeared first on Supply Chain Data Analytics.

]]>The post Flow shop scheduling with PuLP in Python appeared first on Supply Chain Data Analytics.

]]>Herein, I code the decision problem according to the following assumptions regarding the elements of the decision-making environment:

*The machine*s:

- Can not conduct more than one task at a time. (No multi-tasking)
- Have a setup time before starting to conduct any task.
- May not process one job after another immediately (except the first one).
- Operate in a predetermined sequence.

*The tasks*:

- Have a specific processing time on each machine.
- Have a specific priority (weight) for their completion.
- Are equivalent to a single job (i.e., all tasks consist of one job).
- Follow a predetermined process route.
- Follow the same sequence (permutation) on all machines.

*The time*:

- Starts from zero (for the first machine).

*The criterion*:

- Is to minimize the total weighted completion time (TWCT) (for jobs that leave the last machine in the sequence).
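Written out, the TWCT criterion corresponds to the objective below, where $w_j$ is the priority weight and $C_{j,M}$ the completion time of the job in position $j$ on the last machine $M$ (matching `sum(w[j]*c[(j,1)] for j in J)` in the code):

```latex
\min \; \mathrm{TWCT} = \sum_{j} w_j \, C_{j,M}
```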

Herein, I provide a code that models the decision problem, satisfying the mentioned assumptions:

```
import pulp as op
import itertools as it

# Developer: @KeivanTafakkori, 14 March 2022

def model(I, J, K, p, s, dispmodel="n", solve="y", dispresult="y"):
    m = op.LpProblem("FlowShopSchedulingProblem", op.LpMinimize)
    x = {(i,j): op.LpVariable(f"x{i}{j}", 0, 1, op.LpBinary)
         for i,j in it.product(I, J)}
    c = {(j,k): op.LpVariable(f"c{j}{k}", 0, None, op.LpContinuous)
         for j,k in it.product(J, K)}
    maxi = {(j,k): op.LpVariable(f"maxi{j}{k}", 0, None, op.LpContinuous)
            for j,k in it.product(J, K)}
    objs = {0: sum(w[j]*c[(j,1)] for j in J)}
    cons = {0: {i: (sum(x[(i,j)] for j in J) == 1, f"eq0_{i}") for i in I},
            1: {j: (sum(x[(i,j)] for i in I) == 1, f"eq1_{j}") for j in K},
            2: {j: (c[(j,1)] >= sum(x[(i,j)]*p[1][i] for i in I) + maxi[(j,1)], f"eq2_{j}") for j in J if j != 0},
            3: {0: (c[(0,1)] == s[1] + sum(x[(i,0)]*p[1][i] for i in I) + c[(0,0)], "eq3_")},
            4: {j: (c[(j,0)] >= c[(j-1,0)] + sum(x[(i,j)]*p[0][i] for i in I), f"eq4_{j}") for j in J if j != 0},
            5: {0: (c[(0,0)] == s[0] + sum(x[(i,0)]*p[0][i] for i in I), "eq5_")},
            6: {j: (maxi[(j,1)] >= c[(j-1,1)], f"eq6_{j}") for j in J if j != 0},
            7: {j: (maxi[(j,1)] >= c[(j,0)], f"eq7_{j}") for j in J}}
    m += objs[0]
    for keys1 in cons:
        for keys2 in cons[keys1]:
            m += cons[keys1][keys2]
    if dispmodel == "y":
        print("Model --- \n", m)
    if solve == "y":
        result = m.solve(op.PULP_CBC_CMD(timeLimit=None))
        print("Status --- \n", op.LpStatus[result])
        if dispresult == "y" and op.LpStatus[result] == 'Optimal':
            print("Objective --- \n", op.value(m.objective))
            print("Decision --- \n",
                  [(variables.name, variables.varValue)
                   for variables in m.variables() if variables.varValue != 0])
    return m, c, x

w = [0.1, 0.4, 0.15, 0.35]  # Priority weight of each job
p = [[7, 3, 9, 4],
     [7, 3, 9, 4]]          # Processing time of each job on each machine
s = [5, 2]                  # Setup times of the machines
I = range(len(p[0]))        # Set of jobs
J = range(len(I))           # Set of positions
K = range(len(p))           # Set of machines
```

To solve the model, I use the following command:

```
m, c, x = model(I,J,K,p,s) #Model and solve the problem
```

Fortunately, there are no errors, the status is reported feasible and optimal, and the following results are obtained:

```
Status ---
 Optimal
Objective ---
 24.950000000000003
Decision ---
 [('c00', 8.0), ('c01', 13.0), ('c10', 12.0), ('c11', 17.0),
  ('c20', 19.0), ('c21', 26.0), ('c30', 28.0), ('c31', 37.0),
  ('maxi01', 8.0), ('maxi11', 13.0), ('maxi21', 19.0), ('maxi31', 28.0),
  ('x02', 1.0), ('x10', 1.0), ('x23', 1.0), ('x31', 1.0)]
```

Therefore, the obtained sequence is: 2->4->1->3, which is similar for all machines. Finally, a visualization of the results can be as follows:
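For convenience, the permutation can also be read off the nonzero `x` variables programmatically. A small sketch using the solution values printed above (`'x02'` means job 0 is placed at position 2, following the variable naming in the code):

```python
# Recover the job sequence from the nonzero assignment variables
solution = {'x02': 1.0, 'x10': 1.0, 'x23': 1.0, 'x31': 1.0}

sequence = [None] * len(solution)
for name, value in solution.items():
    if value == 1.0:
        job, position = int(name[1]), int(name[2])
        sequence[position] = job + 1  # report jobs 1-based

print(sequence)  # → [2, 4, 1, 3]
```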

In this article, I created a simple and basic model for flow shop scheduling in Python then solved it using PuLP, an **interface** in Python, which by default uses the open-source CBC **solver **for integer programming models. If you found the description interesting, let us know by commenting below this article!

If this article is going to be used in research or other publishing methods, you can cite it as Tafakkori (2022) (in text) and refer to it as follows: Tafakkori, K. (2022). Flow shop scheduling with PuLP in Python. *Supply Chain Data Analytics*. url: https://www.supplychaindataanalytics.com/flow-shop-scheduling-with-pulp-in-python

The post Flow shop scheduling with PuLP in Python appeared first on Supply Chain Data Analytics.

]]>