Re: The DDC situation - corrected post
- Subject: Re: The DDC situation - corrected post
- From: Robert Krawitz <email@hidden>
- Date: Tue, 23 Dec 2008 10:07:37 -0500
On Tue, 23 Dec 2008 06:30:14 -0800 (PST), MARK SEGAL <email@hidden> wrote:
> Do you really think it is as simple as that? How many pieces of
> hardware and software would need to be tested, by how many actors, in
> how many configurations (including OS versions), to achieve the
> result you quite reasonably would like to see? And who would ensure
> that this actually gets done? At what cost, and who pays?
>
> I'd also like to see this problem resolved, but if we're discussing
> solutions - and I don't pretend to have the answers - some kind of
> operational grounding is needed here.
I think the point here (since Edmund invoked my name) is that each
party that changes something should test that its change doesn't alter
the behavior.
The particular scenario Edmund was talking about was that when I
rewrote the Epson family driver in Gutenprint, I enhanced one of the
tests to checksum each test case (we're talking anywhere between 50 and
100 test cases per printer, with about 85 distinct printers in the
Epson family). But that was a fairly straightforward situation -- I was
changing the data representation from hard-coded C to external XML
files, and I wanted to make sure that the changed representation had
absolutely no effect on the output. The particular technique I
employed was useful in that situation (where the intent was that there
be no change whatsoever in functionality), but it doesn't apply
everywhere.
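
To make that concrete, here's a minimal sketch of the idea -- this is
not the actual Gutenprint test harness, and the program name, the
FNV-1a hash, and the command-line interface are all just illustrative
choices. You run the driver over each test case, then compare the
checksum of its output against a stored "golden" value:

    /* Minimal sketch of a checksum-based golden regression test.
     * Usage: checkdrv <output-file> <expected-hash-in-hex>
     * Exit status 0 means the output matches the golden hash. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* 64-bit FNV-1a hash over the whole file. */
    static uint64_t fnv1a(FILE *fp)
    {
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
        int c;
        while ((c = getc(fp)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;              /* FNV prime */
        }
        return h;
    }

    int main(int argc, char **argv)
    {
        FILE *fp;
        uint64_t actual, expected;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <output-file> <expected-hex>\n",
                    argv[0]);
            return 2;
        }
        fp = fopen(argv[1], "rb");
        if (!fp) {
            perror(argv[1]);
            return 2;
        }
        actual = fnv1a(fp);
        fclose(fp);
        expected = strtoull(argv[2], NULL, 16);
        if (actual != expected) {
            fprintf(stderr, "%s: checksum %016llx != golden %016llx\n",
                    argv[1],
                    (unsigned long long)actual,
                    (unsigned long long)expected);
            return 1;
        }
        return 0;
    }

Run something like that over every test output before and after the
rewrite; any nonzero exit flags a case whose output changed, down to
the byte.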
If the intent is that an implementation change *does* change the
functionality, then this technique doesn't work. It's not too helpful
if I'm making deliberate changes, other than to ensure that I'm only
changing what I think I'm changing (which isn't unimportant, either).
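
In that case, about the most a harness like the sketch above can offer
is to make updating the golden values an explicit, reviewable step. A
hypothetical companion mode, again using the same illustrative hash
(fnv1a() is the same function as in the previous sketch):

    /* Hypothetical "regenerate" companion to the checker above:
     * print the checksum of each test output so the new values can
     * be reviewed and committed as the next set of golden hashes. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t fnv1a(FILE *fp)
    {
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
        int c;
        while ((c = getc(fp)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;              /* FNV prime */
        }
        return h;
    }

    int main(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++) {
            FILE *fp = fopen(argv[i], "rb");
            if (!fp) {
                perror(argv[i]);
                return 2;
            }
            printf("%016llx  %s\n",
                   (unsigned long long)fnv1a(fp), argv[i]);
            fclose(fp);
        }
        return 0;
    }

The point of keeping regeneration separate is that a deliberate change
shows up as a diff in the committed golden values, which someone then
has to look at and approve.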
Testing standards compliance is a tricky job. It's very hard to write
a standard where there's no wiggle room, and if there is any room for
interpretation, it's a sure bet that different vendors (or even the
same vendor over time) will interpret it differently.
All of that said, Edmund is absolutely correct that however difficult
the testing may be, it's essential that it be done. Perhaps a better
example of this is the "TCP/IP bakeoff" testing that was done in the
early years of the IP protocol (IP was introduced some time in the
1970's, as I recall). This was a multi-vendor effort to test the
various IP implementations against each other; see
http://www.faqs.org/rfcs/rfc1025.html and
http://stuff.mit.edu/afs/sipb/user/rlk/hack (apparently I saved that
message away, though I don't recall doing so) for more information. IP
implementation incompatibilities are very rare these days, although
even now there are occasionally problems (as I recall, a recent Linux
kernel had problems with a certain brand of router; the issue was hard
to reproduce but was fortunately fixed before it became too much of a
problem).
This kind of testing is certainly much more difficult and
resource-intensive than simple module regression testing, but the cost
of not doing it is very high.
--
Robert Krawitz <email@hidden>
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail email@hidden
Project lead for Gutenprint -- http://gimp-print.sourceforge.net
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton