Ferra Posted February 3, 2014 (edited) Hi all, I am new to DOE. I wonder whether, for a very costly experiment where physical sampling is impractical, we can apply DOE methods and find the optimum parameters using only a mathematical model that simulates the process (i.e., generating data from the model rather than from actual experiments). Also, which designs are usually used for costly experiments? Thanks. Edited February 3, 2014 by Ferra
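As a minimal sketch of the idea in the question, the snippet below evaluates a small full factorial design against a hypothetical simulator instead of a physical process. The `process_model` function, its two factors (`temp`, `pressure`), and the chosen levels are all illustrative assumptions, not anything from the thread; a real study would substitute the actual simulation code and an appropriate design.

```python
# Sketch: running a DOE against a simulator rather than a costly experiment.
# The "process model" here is a made-up stand-in for the real simulation.
import itertools

def process_model(temp, pressure):
    """Hypothetical simulator standing in for the costly experiment."""
    return -(temp - 60.0) ** 2 - 2.0 * (pressure - 3.0) ** 2 + 100.0

# 3-level full factorial design over the two illustrative factors
temps = [40.0, 60.0, 80.0]
pressures = [1.0, 3.0, 5.0]
design = list(itertools.product(temps, pressures))  # 9 simulated "runs"

# "Run" each design point through the model and keep the best response
results = [(t, p, process_model(t, p)) for t, p in design]
best = max(results, key=lambda row: row[2])
print(best)  # factor setting with the highest simulated response
```

For expensive simulators, space-filling designs (e.g. Latin hypercube sampling) are commonly preferred over full factorials because they cover the factor space with far fewer runs.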
imatfaal Posted February 4, 2014 Ferra, this isn't my subject at all, but one of your comments reminded me of something. I heard John Barrow quickly outline the Anthropic principle a few days ago (now I am just showing off), and something he said tallied. Whenever we perform experiments with probabilistic outcomes, we must be very wary of imposing our own probability space on the potential results. Because of our necessary and natural preconceptions and "correct" understanding, we can erroneously limit the scope of our empirical data gathering and thus only obtain the data we expect rather than the answers we need. The narrower the experiment, the nicer the data and the stronger the implication - but also the greater the possibility that our data fits our model only coincidentally, and a wider view is necessary for any real understanding.
Ferra Posted February 4, 2014 (Author) [quoting imatfaal's post above] Thank you for your comment. That was interesting.
EdEarl Posted February 4, 2014 We make models of things we know, and they are bounded by things we know that we don't know and, sometimes, by things we don't know that we don't know. We can estimate errors for the things we know that we don't know, but the things we don't know at all can make our models useless. In some cases the things we don't know at all apply only to special cases. Models are useful, but always limited. Using them for predictions is often better than a SWAG. We do not have perfect foresight, which is the only way we could make perfect predictions. Sometimes "improved" engineering efforts end in an unfortunate result, such as the Tacoma Narrows "Galloping Gertie" bridge or Fukushima.