Monday, October 27, 2008

Why reduce variation?

Variance reduction is one of the core issues in simulation. For a reliable estimate of the mean, it is crucial to narrow the confidence interval as much as possible, and the way to do that is to reduce the variance of the estimator, either by increasing run length or by increasing the number of replications. That much I understand for academic purposes. But why is it so crucial for industry? My concern is not with system-wide variation reduction but with the simulation itself. Isn't a simulation supposed to be a reflection of the real-life system? And aren't real-life systems subject to so many sources of variability?
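To make the academic side of this concrete, here is a minimal Python sketch showing how the confidence-interval half-width shrinks as the number of replications grows. The run_replication function is a hypothetical stand-in for a real simulation model, not any particular tool's API:

```python
import random
import statistics

def run_replication(seed):
    # Hypothetical stand-in for one simulation replication:
    # returns an average cycle time with noise around a true mean of 10.0.
    rng = random.Random(seed)
    return 10.0 + rng.gauss(0, 2.0)

def mean_with_halfwidth(n_reps):
    """Estimate the mean and a ~95% confidence half-width from n_reps replications."""
    samples = [run_replication(seed) for seed in range(n_reps)]
    mean = statistics.mean(samples)
    std = statistics.stdev(samples)
    halfwidth = 1.96 * std / n_reps ** 0.5  # normal approximation
    return mean, halfwidth

for n in (10, 100, 1000):
    m, hw = mean_with_halfwidth(n)
    print(f"n={n:4d}  mean={m:6.3f}  +/- {hw:.3f}")
```

Since the half-width scales with the standard deviation divided by the square root of the number of replications, halving the interval takes roughly four times the replications, which is exactly why variance reduction gets so much academic attention.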

Take a semiconductor fab as an example. In such a complex environment with so many sources of variability, a smooth WIP profile worries me more, as far as the validity of the simulation model is concerned, than wildly fluctuating WIP behavior. Who can argue that one ever witnesses such a smooth WIP profile in a real fab?
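As a rough illustration, here is a toy single-workstation model (all rates and probabilities are invented for illustration, not calibrated to any real fab) in which random lot arrivals and random tool failures alone are enough to produce a fluctuating WIP trace:

```python
import random

random.seed(42)

# Hypothetical single-workstation model: lots arrive randomly,
# the tool suffers occasional failures, and WIP is sampled each hour.
wip = 20
tool_up = True
trace = []
for hour in range(500):
    arrivals = random.randint(0, 3)           # variable lot arrivals
    if tool_up:
        departures = min(wip, random.randint(0, 4))
        if random.random() < 0.02:            # random tool failure
            tool_up = False
    else:
        departures = 0                        # tool down: nothing processed
        if random.random() < 0.2:             # random repair completion
            tool_up = True
    wip += arrivals - departures
    trace.append(wip)

print("min WIP:", min(trace), "max WIP:", max(trace))
print("first 20 hours:", trace[:20])
```

Even this crude sketch never settles into a flat line; a real fab, with hundreds of tools and reentrant flows, has far more reasons to fluctuate.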

Producing a smooth WIP profile by manipulating the simulation (unless you are modeling a future, idealized fab) hides problems that are real and present in the actual system. In any case, I have never understood why industry cares so much about the mean results of simulation performance. Why not ask for variation results as well? Yes, a high WIP or cycle time tells you something about high variation in the system, but what is the reference point for a simulation run that must be updated every day for daily schedule assessment? The industry needs to revisit how it evaluates simulation, and the focus should shift toward variation analysis rather than an exclusive interest in mean performance results.
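A minimal sketch of what such variation reporting could look like: replicate_cycle_times below is a hypothetical stand-in for per-lot cycle times collected across replications, and the report deliberately puts the spread next to the mean instead of hiding it:

```python
import random
import statistics

def replicate_cycle_times(n_reps, lots_per_rep=200):
    """Hypothetical stand-in: each replication yields per-lot cycle times (hours)."""
    rng = random.Random(0)
    return [[max(1.0, rng.gauss(48.0, 12.0)) for _ in range(lots_per_rep)]
            for _ in range(n_reps)]

def report(reps):
    # Pool all lots across replications, then report spread alongside the mean.
    cts = sorted(ct for rep in reps for ct in rep)
    mean = statistics.mean(cts)
    std = statistics.stdev(cts)
    p95 = cts[int(0.95 * len(cts))]
    print(f"mean={mean:.1f}h  std={std:.1f}h  95th percentile={p95:.1f}h")

report(replicate_cycle_times(10))
```

Two schedules with the same mean cycle time can have very different 95th percentiles, and for daily schedule assessment it is usually the tail, not the mean, that breaks commitments.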
