Simulation and Token Generation
I have modelled a process for a department that deals with defaults on payments. The department currently has a large backlog of people they need to call, and we want to simulate how long it would take, with the current process and resources, to clear that backlog.
The model seems to run OK. However, when the simulation is run with tokens that have no arrival interval:
- All tokens push through at once, and an activity with a set time of, say, 3 minutes reports, once the process has run and all tokens have been processed, an average completion time of many thousands of minutes. Why is this number so high?
- Each activity waits for all tokens to be processed in the previous activity BEFORE moving on to the next one. In real life, once a caller has carried out his/her activities and passed the case on to the next role in the process, he/she picks up a new token. So the arrival interval really depends on how long the caller takes to complete the call.
Is there a way to simulate this, so that tokens are dispatched as callers finish their activities? What might I be doing wrong?
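To make the behaviour I'm after concrete, here is a minimal sketch of pull-based dispatch using Python and SimPy (not the tool I'm using; `NUM_CALLERS`, `CALL_TIME`, and `BACKLOG_SIZE` are illustrative values I've assumed). Each caller pulls the next token from the backlog only after finishing the previous one, so the dispatch interval is driven by handling time rather than a fixed arrival schedule:

```python
import simpy

NUM_CALLERS = 3       # assumed number of callers working the backlog
CALL_TIME = 3.0       # minutes per call (the 3-minute activity above)
BACKLOG_SIZE = 100    # assumed size of the backlog at time zero

def caller(env, backlog, completed):
    # Pull-based dispatch: take the next token only after finishing
    # the previous one, so the effective arrival interval equals the
    # handling time rather than a fixed schedule.
    while True:
        token = yield backlog.get()   # blocks when the backlog is empty
        yield env.timeout(CALL_TIME)  # handle the call
        completed.append((token, env.now))

env = simpy.Environment()
backlog = simpy.Store(env)
completed = []

for i in range(BACKLOG_SIZE):
    backlog.put(f"case-{i}")          # the whole backlog exists at t = 0

for _ in range(NUM_CALLERS):
    env.process(caller(env, backlog, completed))

env.run()  # ends when the backlog is empty and all callers are idle

# Roughly BACKLOG_SIZE * CALL_TIME / NUM_CALLERS minutes
print(f"Backlog cleared at t = {env.now:.0f} minutes")
```

Is there an equivalent way to set this up in a BPM simulation tool, rather than having every token arrive at once and queue in front of the first activity?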