Re: Media Testing for maclife.de
- Subject: Re: Media Testing for maclife.de
- From: Richard Wagner <email@hidden>
- Date: Sat, 20 Sep 2008 08:12:15 -0700
On Sep 19, 2008, at 12:28 PM, Chris Cox wrote:
Just a bachelor's in physics, several publications in physics, lots of experimental physics jobs during college, lots of electronics design and construction since high school, continued dabbling in physics after taking a job on Photoshop (mostly helping researchers at Stanford), plus the optics work I do for Photoshop (spectroscopy, building instruments, calibrated imaging, image sensor design, etc.).
Ok, just a PhD in Biophysics (7.5 years), a year of post-doctoral
fellowship, two stints as a visiting scientist in Italy, and about 10
years publishing papers on the biophysics of ion channels and
picoamp / femtoamp electrical currents in heart, muscle and other
cells. Most of my work was devoted to looking at the electrical
currents through single ion channels (single protein molecules),
using stochastic math and Markov modeling to analyze the results and
make sense of the measurements. I spent 2+ years writing my own
software for data acquisition and analysis. So I know something
about making measurements.
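For the curious, here's a minimal sketch (in Python, with invented rate constants and an assumed 1 pA open-channel current -- nothing like my actual acquisition code) of the kind of two-state Markov model I mean: a single channel flipping between closed and open, with exponentially distributed dwell times in each state.

import numpy as np

rng = np.random.default_rng(0)
k_open, k_close = 50.0, 200.0  # assumed closed->open / open->closed rates, 1/s
i_open = 1e-12                 # assumed open-channel current: 1 pA

def simulate_trace(duration_s):
    # Gillespie-style simulation: draw an exponential dwell time for the
    # current state, record (time, current), then flip the state.
    t, state, events = 0.0, 0, []  # state 0 = closed, 1 = open
    while t < duration_s:
        events.append((t, state * i_open))
        rate = k_open if state == 0 else k_close
        t += rng.exponential(1.0 / rate)
        state = 1 - state
    return events

trace = simulate_trace(0.1)
print("transitions in 100 ms:", len(trace))
print("expected open probability:", k_open / (k_open + k_close))

The real work, of course, is going the other direction: fitting dwell-time distributions from noisy records back to rates like these.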
What I tried to explain is very fundamental.
That "the measurement itself is just a number (or set of numbers) -
by itself, it cannot be wrong"? I measured X, so that measurement
must be correct? That's naive. You're given an assignment - measure
the voltage between points A and B. You make a measurement and get a
number. Whether or not that number represents what you think it does
is another story. You report back, "The voltage across A and B is 26
millivolts." Now, you want to insist that your number must be
right? If you make the measurement 10 times, you'll get the same
result? Today, tomorrow, next week? You have the utmost confidence
in that number because... a measurement is a number, and therefore
measurements can't be wrong?
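To make that concrete, here's a toy example (the readings are made up): take the "same" measurement ten times and look at the spread before deciding what the one number you'd otherwise report actually means.

import statistics

# Hypothetical repeated readings of V(A,B), in millivolts --
# same points, same meter, ten tries.
readings_mV = [26.1, 25.8, 26.4, 27.0, 25.5, 26.2, 26.9, 25.7, 26.3, 26.0]
mean = statistics.mean(readings_mV)
spread = statistics.stdev(readings_mV)
print(f"V(A,B) = {mean:.2f} +/- {spread:.2f} mV (n = {len(readings_mV)})")

Each reading is "just a number," and each is a perfectly valid number. The question is which of them, if any, represents the quantity you were asked to measure.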
My view isn't convoluted, it's just looking at the fundamentals. I'm certainly not looking at it from a clinical standpoint (where you always trust your calibration and don't try to look for explanations).
We disagree on the fundamentals...
"Just trust your calibration?" I don't think any of us have said that.
Scientific discovery is all about "hmm, why don't these numbers match, I thought they would". If you throw out everything that doesn't match your expectations, you just confirm your expectations and don't learn anything new.
Someone who does that won't finish their graduate program, won't get published, and won't get very far as a scientist. If you take the approach that "the measurement itself is just a number (or set of numbers) - by itself, it cannot be wrong," you'll spend all kinds of time coming up with far-out explanations for things rather than looking for sources of error in the data. If you assume instead that the measurements CAN be wrong and may not represent what you expect, and you thoroughly look for and eliminate sources of error, and you STILL have measurements/results that don't match what you expected, then you MAY be onto something. Or not.
Without understanding this concept, I know for a fact that you would have failed the physics program at CMU (where one of the experimental physics classes was designed to make you question the measurements and look beyond the obvious).
You think so? Somehow, I don't.
I am still trying to figure out how people can study sciences and not understand this concept -- am I thinking too abstract and fundamental for some people to understand?
Too abstract, maybe. Too fundamental, no. Lots of measurements are
wrong - plain and simple. The job of the scientist is to figure out
which measurements are reliable, and which are not. If the data are
good, they can be used for hypothesis testing. Unfortunately, it
often takes a lot of work to figure out whether the measurements are
good or not. And as David pointed out, ALL measurements have error
associated with them. While a measurement may be "good enough" for
some purposes, it may be completely inadequate for others. A setup
that works fine one day may give bad data the next, for reasons that
may be as far out as solar flares.
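A trivial sketch of the "good enough" point, with invented numbers: one measurement and its uncertainty can pass one requirement and completely fail another.

# Hypothetical: one measurement, two purposes with different tolerances.
measured_mV, uncertainty_mV = 26.2, 0.5
requirements_mV = {"rough screening": 2.0, "precision calibration": 0.05}
for purpose, tol in requirements_mV.items():
    verdict = "good enough" if uncertainty_mV <= tol else "inadequate"
    print(f"{purpose}: need +/- {tol} mV -> {verdict}")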
BTW - your example, while wordy, reinforces my statement that sometimes the measurement doesn't match what is expected because you did not completely understand the experimental setup. "Unaccounted for source of error" means a bad experimental setup, or a measurement device that you don't have a calibration for (i.e., scaling, offset, etc.).
What can I say. Cutting-edge research often means pushing the
technology as far as it will go. Error is not just the result of a
"bad setup" or a lack of calibration. Your concept of how science is
performed seems very naive. No doubt, you are a far more accomplished
programmer than scientist. And software is far more predictable than
most experimental measurements. ;-)
And I still think we're getting far off topic. I'd be happy to
continue this thread over a beer someday.
--Rich