Discussion:
why device independent color?
Dale
2014-01-23 14:26:14 UTC
if you want to purpose an image to more than one output device color,
and have the output look the same

or

if you want different input device color purposed to different output
device color(s) and want the output to look the same

then

you need to convert the device colors through a device independent color
space like XYZ, CIELAB, or CIELUV
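The conversion Dale describes can be sketched numerically. A minimal sketch, assuming the device space happens to be sRGB with a D65 white point; a real workflow would go through the device's measured ICC profile rather than hard-coded constants:

```python
# Sketch: device colour -> device independent colour.
# Assumes the device is sRGB (IEC 61966-2-1); a real workflow would
# use the device's ICC profile instead of these fixed constants.

def srgb_to_linear(c):
    """Undo the sRGB transfer function (per channel, 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """sRGB (0..1) -> CIE XYZ, D65 white point."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # Standard linear-sRGB -> XYZ matrix (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_lab(x, y, z, white=(0.9505, 1.0, 1.089)):
    """CIE XYZ -> CIELAB relative to the given white point."""
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Once two devices can each be mapped into XYZ or CIELAB like this, their colors can be compared and matched, which is the whole point of a profile connection space.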

I remember the introduction of the sRGB standard color space

I remember speaking on Kodak's internal ICC ( http://www.color.org )
mailing list, espousing that sRGB would be an excuse NOT to make device
profiles with regard to the device independent color space(s)

I think the use of SWOP CMYK standards had a similar result

it's been almost 20 years, and it seems like most cameras use sRGB
or ProPhotoRGB as default profiles instead of getting a REAL profile
from the hardware vendor or making such a profile themselves

people don't consider how far an image can accurately travel when it is
multi-purposed through device independent color

there are few vertical imaging workflows left, perhaps there you can
translate the color by matching filtration, etc.

the only place I see for sRGB and SWOP is consumer related imaging

not to say that RGB/CMY (with/without maintenance of a black channel)
isn't the best working space, I just don't see it as a profile
connection space: there are MANY RGBs, they are device dependent,
and have device dependent color, whereas XYZ, CIELAB, and CIELUV are
independent of device

let me take this time to also say that the print reference medium has
been a start by the ICC at tackling appearance matching instead of color
matching; there ought to be more reference media, and implementations of
such use-cases, to make better workflows
--
Dale
nospam
2014-01-23 21:52:12 UTC
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.

what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
Eric Stevens
2014-01-23 22:55:57 UTC
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
--
Regards,

Eric Stevens
nospam
2014-01-24 03:06:42 UTC
Post by Eric Stevens
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.

what they need to do is use a colour managed workflow and the computer
takes care of the details.

if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.

once again, let the computer do the work.
Dale
2014-01-24 05:06:37 UTC
Post by nospam
once again, let the computer do the work.
no, let lab dudes do gamut compression math, etc., by hand, for each image
--
Dale
nospam
2014-01-24 17:13:45 UTC
Post by Dale
Post by nospam
once again, let the computer do the work.
no, let lab dudes do gamut compression math, etc., by hand, for each image
what lab dudes? what labs?

people process their own images on their own computers, and all they
need to do is adopt a colour managed workflow and let the computer do
the work.

there is no need to do the math by hand for each image.
Eric Stevens
2014-01-24 09:25:10 UTC
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor "I want a 197 chest, a 132 waist and a leg of
106". At which point your tailor will say "Huh! Waddaya mean?".
--
Regards,

Eric Stevens
Dale
2014-01-24 10:20:22 UTC
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
profiles are calculated to go from device space to device independent
space, or vice versa

there are other considerations ...

but sRGB, SWOP, and ProPhotoRGB are NOT device independent color spaces;
they are device standard spaces, to which equipment/media are matched by
design

like a TV and a TV camera, or like consumer imaging nowadays

even those might want to repurpose the image outside such a chain, in
which case you need to go through a device independent space with a profile

with all the different things happening in television besides P22 and
EBU phosphor CRT displays (there are LCD, LED, plasma, OLED, maybe more),
I think sRGB is going to die, same with ProPhotoRGB, and SWOP might have
already
--
Dale
Jeroen
2014-01-24 18:56:38 UTC
Hi,
Post by Dale
but sRGB or SWOP or ProPhotoRGB are NOT device independent color spaces,
they are device standard spaces with which to match by design of
equipment/media to such device standard space
like a TV and a TV Camera, or like consumer imaging nowadays
sRGB just describes the behaviour that (CRT) computer monitors
already had, just like Rec.709 describes the behaviour of (CRT)
TVs. Rec.1886 goes a step further and describes the EOTF (gamma
function) of the TV studio monitor, i.e. a gamma of 2.4.
sRGB and Rec.709 describe the same color space, i.e. the (x,y)
of the RGB color primaries and the white point, and thus they
also fix the standard color gamut of the last 50 years or so.
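Jeroen's point, that sRGB and Rec.709 share primaries and white point and differ only in transfer function, can be written down directly. A sketch, with chromaticity values taken from the published standards; the function names are illustrative:

```python
# Sketch: sRGB (IEC 61966-2-1) and Rec.709 define the same primary
# chromaticities and white point, hence the same gamut; only the
# transfer functions differ.

SRGB_PRIMARIES = {
    "red":   (0.640, 0.330),
    "green": (0.300, 0.600),
    "blue":  (0.150, 0.060),
    "white": (0.3127, 0.3290),   # D65
}
REC709_PRIMARIES = dict(SRGB_PRIMARIES)   # identical by definition

def srgb_encode(v):
    """sRGB transfer function: linear toe plus a 1/2.4 power segment."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def rec709_oetf(v):
    """Rec.709 camera OETF: linear toe plus a 0.45 power segment."""
    return 4.5 * v if v < 0.018 else 1.099 * v ** 0.45 - 0.099
```

Rec.1886 then fixes the display EOTF at a pure 2.4 gamma, as noted above; none of that changes the shared primaries.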
Post by Dale
with all the different things happening in television besides P22 and
EBU phosphor CRT display, there are LCD, LED, Plasma, OLED, maybe more,
I think sRGB is going to die, same with ProphotoRGB and like SWOP
already might have
You might as well say then that Rec.709 will die for TV.
That won't happen until Rec.2020 takes over, which is just
another variant of RGB color space, but with wider primaries.
(You would need lasers to build a display for it.)

Digital Cinema uses X'Y'Z' signals (and a gamma of 2.6), but
they know that they should stay within the P3 gamut or else
the outer colors may become unreproducible on some displays.

Now to make my point: sRGB and Rec.709 are not just a standard
for the color space, they also define a (rather narrow) color
gamut. This gamut can be reproduced by any reasonable display.

If you try to work with a wider color gamut then the risk
increases that different displays will apply different amounts
and types of gamut mapping, and the result becomes unpredictable.

When in 2007 I tried to convince my friends in Hollywood of
the benefits of (Sony's) xvYCC (x.v.Color) standard for wide
color gamut, I got voted down for merely suggesting that they
could leave the gamut mapping issue to the TV setmakers, that
we would render suitably approximate colors when necessary.
They would rather stay with the standard Rec.709 (= sRGB) color
gamut, and know exactly what will be rendered in our homes.

The UHD standardisation process is proposing to go to an XYZ
color space, just like Digital Cinema, but with color difference
signals (e.g. Y'DzDx and 4:2:0). Unless they voluntarily agree
to limit the actual color gamut to something that most displays
can accurately render (not larger than the P3 gamut) then they
will run into the same issues of being unacceptable to Hollywood.
You need to define a guaranteed color gamut, small enough that
all reasonably modern displays can fully reproduce it. Gamut
mapping by the receiver is not an acceptable option for them.

Technicolor would say that you need to send metadata that
describes the boundaries of the input color gamut, and give
hints for the direction of the gamut mapping. Complicated !

The common denominator solution isn't so stupid, even if the
gamut is as small as sRGB. At least it is guaranteed.

Best,
-- Jeroen (who is an obvious follower of Charles P)
Dale
2014-01-24 19:53:41 UTC
Post by Jeroen
The common denominator solution isn't so stupid, even if the
gamut is as small as sRGB. At least it is guaranteed.
yes

but standard RGB spaces are device dependent, not device independent
like XYZ, CIELAB, CIELUV

device spaces don't make for as good profile connection spaces
or storage spaces

maybe better for working spaces
--
Dale
Jeroen
2014-01-25 21:30:56 UTC
Hi,
Post by Dale
but standard RGB spaces are device dependent, not device independent
like XYZ, CIELAB, CIELUV
device spaces don't make for as good profile connection spaces
and storage spaces
I agree with you that device independent spaces are fine for
cameras and work in progress, but not for delivery to an audience
with displays that have a known color space. There it is customary
to deliver directly in their color space, because you know that
it is not going to change any further due to gamut mapping.

And if the content goes directly to the user's display, as with
an amateur's digital camera, then you might as well directly
convert it to sRGB. Anything else would just cause problems.

It's a pity, really, because any well designed camera is capable
of an extremely wide color gamut (similar to XYZ). It is only
through conversion to sRGB / Rec.709 and then clipping of the
negative values to zero that the color gamut becomes limited.

This is where sYCC and e-sRGB (and xvYCC) come in: to preserve
those negative values and be able to convert (back) to a larger
color space later, while being (more or less) compatible with sRGB.
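The clipping described above can be illustrated. A sketch using the standard XYZ (D65) to linear sRGB matrix; the sample XYZ value is a hypothetical saturated camera reading, chosen only to drive one channel negative:

```python
# Sketch: a wide-gamut camera colour falls outside sRGB, giving a
# negative channel after the XYZ -> linear sRGB matrix; clipping to
# zero is exactly where the gamut information is lost.

def xyz_to_linear_srgb(x, y, z):
    """CIE XYZ (D65) -> linear sRGB via the standard inverse matrix."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

# Hypothetical highly saturated cyan-ish stimulus
xyz = (0.15, 0.30, 0.40)
rgb = xyz_to_linear_srgb(*xyz)                       # red channel < 0
clipped = tuple(max(0.0, min(1.0, c)) for c in rgb)  # negative value discarded
```

Keeping the signed values (as sYCC/xvYCC do) is what makes recovery into a larger space possible later.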

Best,
-- J
Eric Stevens
2014-01-24 23:39:25 UTC
On Fri, 24 Jan 2014 05:20:22 -0500, Dale
Post by Dale
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
profiles are calculated to go from device space to device independent
space, or vice versa
there are other considerations ...
but sRGB or SWOP or ProPhotoRGB are NOT device independent color spaces,
they are device standard spaces with which to match by design of
equipment/media to such device standard space
I wasn't discussing "sRGB or SWOP or ProPhotoRGB" colour spaces. I was
referring to "XYZ,CIELAB,CIELUV" as originally mentioned by Dale at
the beginning of this thread.
Post by Dale
like a TV and a TV Camera, or like consumer imaging nowadays
even those might want to repurpose the image outside such a chain, in
which case you need to go through a device independent space with a profile
with all the different things happening in television besides P22 and
EBU phosphor CRT display, there are LCD, LED, Plasma, OLED, maybe more,
I think sRGB is going to die, same with ProphotoRGB and like SWOP
already might have
--
Regards,

Eric Stevens
nospam
2014-01-24 17:13:48 UTC
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.

the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
Eric Stevens
2014-01-24 23:44:38 UTC
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
And how do you do that with a reference colour space, such as "XYZ,
CIELAB, CIELUV"?
users do not need to convert the image.
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user. It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
--
Regards,

Eric Stevens
nospam
2014-01-25 03:11:50 UTC
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user.
of course it is.
Post by Eric Stevens
It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
no it isn't.

the user wants as close a match as possible, given the limits of a
device. that requires a colour managed workflow.

they don't need to know the math as to how it works.
Post by Eric Stevens
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
not at all.
Eric Stevens
2014-01-25 04:13:06 UTC
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user.
of course it is.
Post by Eric Stevens
It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
no it isn't.
Yes it is. Please read the subject of the thread and Dale's article
which started it. If you want to talk about something different please
go away and start another thread.
Post by nospam
the user wants as close a match as possible, given the limits of a
device. that requires a colour managed workflow.
they don't need to know the math as to how it works.
Whoever said they did?
Post by nospam
Post by Eric Stevens
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
not at all.
Read Dale's article.
--
Regards,

Eric Stevens
nospam
2014-01-25 04:16:37 UTC
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user.
of course it is.
Post by Eric Stevens
It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
no it isn't.
Yes it is. Please read the subject of the thread and Dale's article
which started it. If you want to talk about something different please
go away and start another thread.
Post by nospam
the user wants as close a match as possible, given the limits of a
device. that requires a colour managed workflow.
they don't need to know the math as to how it works.
Whoever said they did?
Post by nospam
Post by Eric Stevens
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
not at all.
Read Dale's article.
i did, and it's wrong.
Dale
2014-01-25 07:18:50 UTC
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise it's a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user.
of course it is.
Post by Eric Stevens
It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
no it isn't.
Yes it is. Please read the subject of the thread and Dale's article
which started it. If you want to talk about something different please
go away and start another thread.
Post by nospam
the user wants as close a match as possible, given the limits of a
device. that requires a colour managed workflow.
they don't need to know the math as to how it works.
Whoever said they did?
Post by nospam
Post by Eric Stevens
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
not at all.
Read Dale's article.
i did, and it's wrong.
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles; once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, and
it shouldn't matter to the user, though sometimes a user might want to
make/edit his own profiles

this leads to measurement instrumentation

when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000

the software I see now is for instruments at the X-Rite and MacBeth
level; it works okay for software like Kodak's ColorFlow, where you are
actually creating an edited profile, but I think the ICC needs to get
more influence with device/driver manufacturers

someone needs to have the high priced instruments

then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
--
Dale
nospam
2014-01-25 19:01:02 UTC
Post by Dale
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
making a profile is easy. just run the software.

what photographers and techs don't need to know is the math behind the
conversions and everything else about colour management.
Post by Dale
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
no they don't.

the low priced colour pucks work exceptionally well, and since they are
affordable by just about anyone, they actually get used.
Post by Dale
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
today's low price products are *better* than the overpriced stuff you
may have had long ago.
Eric Stevens
2014-01-26 02:08:13 UTC
Post by nospam
Post by Dale
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
making a profile is easy. just run the software.
what photographers and techs don't need to know is the math behind the
conversions and everything else about colour management.
Something has to know and ultimately it boils down to people having to
know.
Post by nospam
Post by Dale
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
no they don't.
Well where does the calibration standard come from?
Post by nospam
the low priced colour pucks work exceptionally well, and since they are
affordable by just about anyone, they actually get used.
How could you know they work exceptionally well if you didn't have
standards against which you can test them?
Post by nospam
Post by Dale
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
today's low price products are *better* than the overpriced stuff you
may have had long ago.
How can you know that, without using even better and higher priced
stuff to test and calibrate them?
--
Regards,

Eric Stevens
Martin Brown
2014-01-27 08:47:31 UTC
Post by Eric Stevens
Post by nospam
Post by Dale
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
making a profile is easy. just run the software.
what photographers and techs don't need to know is the math behind the
conversions and everything else about colour management.
Something has to know and ultimately it boils down to people having to
know.
Yes. But only a handful of people who work on the design of imaging
systems actually need to understand the details of the mathematics that
underpins moving between colour spaces reliably. The end user merely
needs to be able to see clearly what parts of his image cannot be
rendered accurately on the final destination medium and preview what it
will look like after the compromises are made for gamut capability.
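That kind of out-of-gamut preview can be sketched as a simple check. This assumes a CIELAB source and an sRGB destination with D65 white; real soft-proofing would go through the actual destination profile and a chosen rendering intent:

```python
# Sketch: flag CIELAB values that an sRGB destination cannot render.
# Lab -> XYZ -> linear sRGB; any channel outside 0..1 is out of gamut.
# D65 white assumed; constants match the usual CIELAB definitions.

def lab_to_xyz(L, a, b, white=(0.9505, 1.0, 1.089)):
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16 / 116) / 7.787
    return tuple(w * finv(f) for w, f in zip(white, (fx, fy, fz)))

def out_of_srgb_gamut(L, a, b):
    """True if the Lab colour cannot be reproduced in sRGB."""
    x, y, z = lab_to_xyz(L, a, b)
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    bl = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return any(c < 0.0 or c > 1.0 for c in (r, g, bl))
```

An editor's gamut warning overlay is essentially this test run per pixel against the destination profile.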
Post by Eric Stevens
Post by nospam
Post by Dale
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
no they don't.
Well where does the calibration standard come from?
You don't need that many of the high end instruments - modern simple
color measurement devices are now surprisingly good. The dye
manufacturers and printer/display makers labs will need such kit to
characterise the properties of new inks and papers or OLED/plasma/LCD
but end users can get by with very modest colorimetry.
Post by Eric Stevens
Post by nospam
the low priced colour pucks work exceptionally well, and since they are
affordable by just about anyone, they actually get used.
How could you know they work exceptionally well if you didn't have
standards against which you can test them?
Photograph a few colour paint sample charts, run them through a
calibrated workflow, then compare the resulting print against the
original, as a concrete example. The human eye is very good at spotting
small differences in hue, especially on near flesh tones.

Heck this is already so well established that there is paint
manufacturer software to allow you to photograph a small test chart with
a hole in it for the unknown target colour on your mobile phone. Email
it to the paint maker and they will send you back a mix formula to match
it that can be taken to your nearest DIY store and works.

It isn't that long ago that individual batches of paint with nominally
the same colour formulation could have radically different properties.

American NTSC TV used to amuse Europeans because the newscaster would
drift between having ghoulish green and surreal purple flesh tones or
else be clamped to an unearthly pale orange by the flesh bodger. I
always assumed it was an inherent limitation of NTSC until I saw the
Japanese domestic implementation of it which works flawlessly.
Post by Eric Stevens
Post by nospam
Post by Dale
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
today's low price products are *better* than the overpriced stuff you
may have had long ago.
How can you know that, without using even better and higher priced
stuff to test and calibrate them?
I was involved in some of the very early dyesub printing in Japan. They
kept separate colour profiles for printing souvenir images of visiting
VIPs - Westerners and Japanese. These were largely subjective and
neither group liked seeing a neutral balanced version of their portrait!

When a westerner was due one of us would be photographed and printed to
check the calibration. A Westerner printed on the Japanese setting would
look pink like they were drunk and a Japanese person printed on the
Westerner setting would look jaundiced. Neither setting represented true
calibrated neutral reality but the "customers" didn't like reality!
--
Regards,
Martin Brown
isw
2014-01-27 18:29:50 UTC
Permalink
Post by Martin Brown
American NTSC TV used to amuse Europeans because the newscaster would
drift between having ghoulish green and surreal purple flesh tones or
else be clamped to an unearthly pale orange by the flesh bodger. I
always assumed it was an inherent limitation of NTSC until I saw the
Japanese domestic implementation of it which works flawlessly.
It wasn't their implementation so much as it was their fanatic attention
to monitoring and constantly adjusting the performance of their
transmission links (that is, their technicians were just more attentive
than ours were).

For the most part, NTSC's early color drift problems were due to an
inadequate understanding of what it took to provide long-haul
high-quality transmission links for what were then the highest-bandwidth
signals ever sent for those distances.

i.e. it was not "NTSC" that was the problem; it was the unstable
performance of the channels the signals passed through.

In Europe, the "problem" was dealt with by designing a more complex
system (PAL) which was considerably more immune to drift in transmission
link gear.

Quite soon, in the US, engineers figured out what the problems were,
designed better gear, and the problems went away.

Meanwhile in Europe, equipment performance also improved, so the extra
mechanisms included in PAL to deal with drift became unnecessary. But
PAL got stuck for its entire lifetime with a requirement for more
complex (and so more costly) gear, and some unpleasant secondary
artifacts that came with their "superior" system -- high-brightness
flicker and a much lower color interlace rate (6.25 Hz. vs. ~15 Hz. for
NTSC).

And just to keep things sort-of on topic, despite its limitations,
NTSC's color space was larger than that of any other commercial color
reproduction technique that existed at the time. (And that includes
color photographic film).

Isaac
Eric Stevens
2014-01-27 22:48:36 UTC
Permalink
Post by isw
Post by Martin Brown
American NTSC TV used to amuse Europeans because the newscaster would
drift between having ghoulish green and surreal purple flesh tones or
else be clamped to an unearthly pale orange by the flesh bodger. I
always assumed it was an inherent limitation of NTSC until I saw the
Japanese domestic implementation of it which works flawlessly.
It wasn't their implementation so much as it was their fanatic attention
to monitoring and constantly adjusting the performance of their
transmission links (that is, their technicians were just more attentive
than ours were).
For the most part, NTSC's early color drift problems were due to an
inadequate understanding of what it took to provide long-haul
high-quality transmission links for what were then the highest-bandwidth
signals ever sent for those distances.
i.e. it was not "NTSC" that was the problem; it was the unstable
performance of the channels the signals passed through.
In Europe, the "problem" was dealt with by designing a more complex
system (PAL) which was considerably more immune to drift in transmission
link gear.
Quite soon, in the US, engineers figured out what the problems were,
designed better gear, and the problems went away.
Meanwhile in Europe, equipment performance also improved, so the extra
mechanisms included in PAL to deal with drift became unnecessary. But
PAL got stuck for its entire lifetime with a requirement for more
complex (and so more costly) gear, and some unpleasant secondary
artifacts that came with their "superior" system -- high-brightness
flicker and a much lower color interlace rate (6.25 Hz. vs. ~15 Hz. for
NTSC).
And just to keep things sort-of on topic, despite its limitations,
NTSC's color space was larger than that of any other commercial color
reproduction technique that existed at the time. (And that includes
color photographic film).
Wow!

That brings us back to Dale's original topic. You couldn't say "NTSC's
color space was larger than that of any other commercial color
reproduction technique that existed at the time" unless you had a
device independent space (such as XYZ, CIELAB, CIELUV) through which
you can connect them.

Many thanks. :-)
--
Regards,

Eric Stevens
Martin Brown
2014-01-28 11:59:41 UTC
Permalink
Post by Eric Stevens
Post by isw
And just to keep things sort-of on topic, despite its limitations,
NTSC's color space was larger than that of any other commercial color
reproduction technique that existed at the time. (And that includes
color photographic film).
Wow!
That brings us back to Dale's original topic. You couldn't say "NTSC's
color space was larger than that of any other commercial color
reproduction technique that existed at the time" unless you had a
device independent space (such as XYZ, CIELAB, CIELUV) through which
you can connect them.
Many thanks. :-)
Yes you could by showing that the other colour spaces gamut could be
represented as a subset of the NTSC colour space. I am not convinced the
claim is true about NTSC though it was for a while a de facto colour
space standard in practice.

The fact is though that CIE 1931 predates all of the modern device
independent colour spaces in trying to encode human colour perception.

http://en.wikipedia.org/wiki/CIE_1931_color_space

And it was obviously available when NTSC 1953 was defined. The RGB
phosphors were in fact specified by their position on CIE 1931.
--
Regards,
Martin Brown
isw
2014-01-28 18:16:05 UTC
Permalink
Post by Martin Brown
Post by Eric Stevens
Post by isw
And just to keep things sort-of on topic, despite its limitations,
NTSC's color space was larger than that of any other commercial color
reproduction technique that existed at the time. (And that includes
color photographic film).
Wow!
That brings us back to Dale's original topic. You couldn't say "NTSC's
color space was larger than that of any other commercial color
reproduction technique that existed at the time" unless you had a
device independent space (such as XYZ, CIELAB, CIELUV) through which
you can connect them.
Many thanks. :-)
Yes you could by showing that the other colour spaces gamut could be
represented as a subset of the NTSC colour space. I am not convinced the
claim is true about NTSC though it was for a while a de facto colour
space standard in practice.
You could just compare the area enclosed by the CIE coordinates of its
primaries to the others (which IIRC, was the origin of the claim ...)
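That comparison can be sketched directly. The chromaticities below are the published 1953 NTSC phosphor primaries and the Rec.709/sRGB primaries; note that triangle area on the xy diagram is only a crude proxy for gamut size, since it ignores luminance:

```python
def xy_triangle_area(primaries):
    """Shoelace area of the triangle spanned by three (x, y)
    chromaticity coordinates on the CIE 1931 xy diagram."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

ntsc_1953 = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]  # R, G, B phosphors
srgb      = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # Rec.709/sRGB primaries

print(xy_triangle_area(ntsc_1953))  # ~0.158
print(xy_triangle_area(srgb))       # ~0.112 - the 1953 NTSC triangle is larger
```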

I believe that claim came out at the time of the introduction of the
NTSC system. It would have been made with reference to the original red
phosphor, which was rather poor in light output and had a short lifetime
from being driven hard. The replacement red phosphor was much more
robust, but was located at different CIE coordinates (naturally). I do
not know whether the gamut claim was still true with it or not -- but I
think it was.

Isaac
Eric Stevens
2014-01-27 22:39:39 UTC
Permalink
On Mon, 27 Jan 2014 08:47:31 +0000, Martin Brown
Post by Martin Brown
Post by Eric Stevens
Post by nospam
Post by Dale
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
making a profile is easy. just run the software.
what photographers and techs don't need to know is the math behind the
conversions and everything else about colour management.
Something has to know and ultimately it boils down to people having to
know.
Yes. But only a handful of people who work on the design of imaging
systems actually need to understand the details of the mathematics that
underpins moving between colour spaces reliably. The end user merely
needs to be able to see clearly what parts of his image cannot be
rendered accurately on the final destination medium and preview what it
will look like after the compromises are made for gamut capability.
All of that's true but is not Dale's original point:

"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".

But you do seem to be contradicting nospam's response:

"completely wrong.

what is needed is a colour managed workflow, with the image and
each device along the way having a profile."

nospam's comment is in fact both a contradiction and a non sequitur. You
can't have a colour managed work flow without a device independent
colour space, but nospam seems to be denying that.
Post by Martin Brown
Post by Eric Stevens
Post by nospam
Post by Dale
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
no they don't.
Well where does the calibration standard come from?
You don't need that many of the high end instruments - modern simple
color measurement devices are now surprisingly good.
And they are standardised against .... what?
Post by Martin Brown
The dye
manufacturers and printer/display makers labs will need such kit to
characterise the properties of new inks and papers or OLED/plasma/LCD
but end users can get by with very modest colorimetry.
Post by Eric Stevens
Post by nospam
the low priced colour pucks work exceptionally well, and since they are
affordable by just about anyone, they actually get used.
How could you know they work exceptionally well if you didn't have
standards against which you can test them?
Photograph a few colour paint sample charts and then do a calibrated
workflow then compare the resulting print against the original - as a
concrete example. The human eye is very good at spotting small
differences in hue - especially on near flesh tones.
To be useful, the procedure has to be able to measure, not just enable
a viewer to reach an opinion.
Post by Martin Brown
Heck this is already so well established that there is paint
manufacturer software to allow you to photograph a small test chart with
a hole in it for the unknown target colour on your mobile phone. Email
it to the paint maker and they will send you back a mix formula to match
it that can be taken to your nearest DIY store and works.
It isn't that long ago that individual batches of paint with nominally
the same colour formulation could have radically different properties.
American NTSC TV used to amuse Europeans because the newscaster would
drift between having ghoulish green and surreal purple flesh tones or
else be clamped to an unearthly pale orange by the flesh bodger. I
always assumed it was an inherent limitation of NTSC until I saw the
Japanese domestic implementation of it which works flawlessly.
Post by Eric Stevens
Post by nospam
Post by Dale
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
today's low price products are *better* than the overpriced stuff you
may have had long ago.
How can you know that, without using even better and higher priced
stuff to test and calibrate them?
I was involved in some of the very early dyesub printing in Japan. They
kept separate colour profiles for printing souvenir images of visiting
VIPs - Westerners and Japanese. These were largely subjective and
neither group liked seeing a neutral balanced version of their portrait!
When a westerner was due one of us would be photographed and printed to
check the calibration. A Westerner printed on the Japanese setting would
look pink like they were drunk and a Japanese person printed on the
Westerner setting would look jaundiced. Neither setting represented true
calibrated neutral reality but the "customers" didn't like reality!
That's another matter again.
--
Regards,

Eric Stevens
a***@invalid.invalid
2014-01-27 23:57:56 UTC
Permalink
Post by nospam
Post by Dale
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
making a profile is easy. just run the software.
what photographers and techs don't need to know is the math behind the
conversions and everything else about colour management.
Post by Dale
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
no they don't.
the low priced colour pucks work exceptionally well, and since they are
affordable by just about anyone, they actually get used.
Post by Dale
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
today's low price products are *better* than the overpriced stuff you
may have had long ago.
--- news://freenews.netfront.net/ - complaints: ***@netfront.net ---
Eric Stevens
2014-01-25 23:19:56 UTC
Permalink
On Sat, 25 Jan 2014 02:18:50 -0500, Dale
Post by Dale
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
what they need to do is use a colour managed workflow and the computer
takes care of the details.
if you choose a different printer, pick the relevant profile and
whatever conversions are necessary are done automatically.
once again, let the computer do the work.
But the computer has to have some standards against which it can
determine the meaning of the colour profile. Otherwise its a bit like
saying to your tailor I want a 197 chest, a 132 waist and a leg of
106. At which point your tailor will say "Huh! Waddaya mean?".
the computer knows how to convert it. the authors of the profiling
software need to understand the math to write the software to do the
conversions. that's about the extent of it.
Here we go again. It's not about what the computer knows or the
computer can do for the user.
of course it is.
Post by Eric Stevens
It's about the definition of colour
spaces such as sRGB, and whatever else it is you have snipped, for
which you need an underlying reference system such as "XYZ, CIELAB,
CIELUV".
no it isn't.
Yes it is. Please read the subject of the thread and Dale's article
which started it. If you want to talk about something different please
go away and start another thread.
Post by nospam
the user wants as close a match as possible, given the limits of a
device. that requires a colour managed workflow.
they don't need to know the math as to how it works.
Whoever said they did?
Post by nospam
Post by Eric Stevens
Post by nospam
the end users do not need to understand any of it, other than how to
use profiles in a colour managed workflow.
True, but you are changing the subject.
not at all.
Read Dale's article.
i did, and it's wrong.
the way things are NOW, photographers and lab techs/engineers have to
know about the making of profiles, once device/driver manufacturers make
profiles for their devices, it will be more like you are getting at, it
shouldn't matter to the user, but sometimes a user might want to
make/edit his own profiles
this leads to measurement instrumentation
when I worked at Kodak we had spectroradiometers, colorimeters etc.,
that cost over $100,000
the software I see now is for instruments like X-Rite and MacBeth
levels, works okay for software like Kodak's ColorFlow where you are
actually creating an edited profile, but I think the ICC needs to get
more influences to device/driver manufacturers
someone needs to have the high priced instruments
then again there is such a thing as "good enough", especially when
applied to consumer imaging, television is trying to get into the high
quality professional markets though
It still comes back to your original point that you can't talk
meaningfully about device colour profiles unless you have a recognised
colour space to measure them against. Hence the need for device
independent color spaces like XYZ,CIELAB,CIELUV. I suppose you could
of course do it all in terms of Angstrom units.
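As a concrete instance of such a device independent space: XYZ values convert to CIELAB with the standard CIE formulae, relative to a reference white (D65 here). A minimal sketch:

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """XYZ -> CIELAB relative to a white point (D65 by default),
    using the standard CIE piecewise cube-root function."""
    def f(t):
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0
    fx, fy, fz = (f(v / n) for v, n in zip((X, Y, Z), white))
    return (116.0 * fy - 16.0,          # L*
            500.0 * (fx - fy),          # a*
            200.0 * (fy - fz))          # b*

print(xyz_to_lab(95.047, 100.0, 108.883))  # the white point maps to (100, 0, 0)
```

Every device profile ultimately anchors its measurements in a space like this, which is what makes cross-device comparisons meaningful at all.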
--
Regards,

Eric Stevens
nospam
2014-01-26 00:42:30 UTC
Permalink
Post by Eric Stevens
It still comes back to your original point that you can't talk
meaningfully about device colour profiles unless you have a recognised
colour space to measure them against. Hence the need for device
independent color spaces like XYZ,CIELAB,CIELUV. I suppose you could
of course do it all in terms of Angstrom units.
sure, but users need not concern themselves with any of that.

all they need to do is adopt a colour managed workflow.
Eric Stevens
2014-01-26 02:10:31 UTC
Permalink
Post by nospam
Post by Eric Stevens
It still comes back to your original point that you can't talk
meaningfully about device colour profiles unless you have a recognised
colour space to measure them against. Hence the need for device
independent color spaces like XYZ,CIELAB,CIELUV. I suppose you could
of course do it all in terms of Angstrom units.
sure, but users need not concern themselves with any of that.
Who is talking about that, apart from you?
Post by nospam
all they need to do is adopt a colour managed workflow.
How do you set up a colour managed work flow without standards against
which you can calibrate it?
--
Regards,

Eric Stevens
PeterN
2014-01-26 02:23:31 UTC
Permalink
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
It still comes back to your original point that you can't talk
meaningfully about device colour profiles unless you have a recognised
colour space to measure them against. Hence the need for device
independent color spaces like XYZ,CIELAB,CIELUV. I suppose you could
of course do it all in terms of Angstrom units.
sure, but users need not concern themselves with any of that.
Who is talking about that, apart from you?
Post by nospam
all they need to do is adopt a colour managed workflow.
How do you set up a colour managed work flow without standards against
which you can calibrate it?
Interested people want to know.
--
PeterN
Dale
2014-01-24 05:07:38 UTC
Permalink
Post by nospam
Post by Dale
if you want to purpose an image to more than one output device color,
and have the output look the same
or
if you want different input device color purposed to different output
device color(s) and want the output to look the same
then
you need to convert the device colors through device independent color
space like XYZ,CIELAB,CIELUV
completely wrong.
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
--
Dale
nospam
2014-01-24 17:13:47 UTC
Permalink
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.

what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.

what matters is does the user get what they expect, and the answer is
yes.
Eric Stevens
2014-01-24 23:47:07 UTC
Permalink
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
You are missing the point of Dale's original comment:

"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
--
Regards,

Eric Stevens
nospam
2014-01-25 03:11:52 UTC
Permalink
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
given that users do not have to do that, what exactly am i missing?

the *computer* might do it internally (or it might not), depending on
what needs to be done to produce the result the user wants.

the user does not need to worry about that nor do they need to know what
any of that means.

what matters is if they get the expected results, and with a colour
managed workflow, they do.
Eric Stevens
2014-01-25 04:18:07 UTC
Permalink
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
given that users do not have to do that, what exactly am i missing?
You are missing the point that this discussion is confined to camera
users.
Post by nospam
the *computer* might do it internally (or it might not), depending on
what needs to be done to produce the result the user wants.
the user does not need to worry about that nor do they need to know what
any of that means.
what matters is if they get the expected results, and with a colour
managed workflow, they do.
Good. We agree on that. Please now go away and come back when you
accept that colour managed work spaces (with profiles etc) requires
"device independent color space like XYZ,CIELAB,CIELUV".

Otherwise it's like you giving a tailor the dimensions for a suit in
units of measurement for which he has no definition.
--
Regards,

Eric Stevens
Dale
2014-01-25 07:10:23 UTC
Permalink
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
given that users do not have to do that, what exactly am i missing?
You are missing the point that this discussion is confined to camera
users.
Post by nospam
the *computer* might do it internally (or it might not), depending on
what needs to be done to produce the result the user wants.
the user does not need to worry about that nor do they need to know what
any of that means.
what matters is if they get the expected results, and with a colour
managed workflow, they do.
Good. We agree on that. Please now go away and come back when you
accept that colour managed work spaces (with profiles etc) requires
"device independent color space like XYZ,CIELAB,CIELUV".
Otherwise it's like you giving a tailor the dimensions for a suit in
units of measurement for which he has no definition.
I think the value of device independent color is underestimated

sure, light sources and filtration can make it easier, Eikonix/Kodak
has/had a patent on filtration for scanners and maybe cameras that
matched XYZ which makes it a lot easier, you could probably use a matrix
and 1D LUT instead of a 3D LUT, ICC can accommodate both math constructs
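As a sketch of that "matrix and 1D LUT" construct: an ICC matrix/TRC profile applies a per-channel tone curve (the 1D LUT) and then a 3x3 matrix into XYZ. The gamma value and matrix below are illustrative stand-ins (an idealised 2.2 curve and the published linear-sRGB-to-XYZ D65 matrix), not a real device characterisation:

```python
GAMMA = 2.2  # assumed device tone response (stand-in for a measured curve)

M_RGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def device_rgb_to_xyz(rgb):
    """Shaper curves first (the 1D LUT stage), then the 3x3 matrix -
    the two stages an ICC matrix/TRC profile can hold."""
    linear = [c ** GAMMA for c in rgb]
    return [sum(m, 0.0) for m in
            ([row[i] * linear[i] for i in range(3)] for row in M_RGB_TO_XYZ)]

print(device_rgb_to_xyz([1.0, 1.0, 1.0]))  # device white -> D65, ~(0.9505, 1.0, 1.089)
```

A 3D LUT profile replaces both stages with one sampled table, which can capture crosstalk a matrix cannot, at the cost of far more measurement data.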

but I see a problem with digital cameras

chrome film/scanners were easy for device independent workflows, you
only needed to match the chrome

with a digital camera you have to match the scene, appearance as opposed
to color considerations come into play, like white balance

I have heard people are using the RAW camera files without the white balance

I have heard Kodak has a patent on how to characterize cameras without a
target

targets are a hassle for photographers

I don't think this is going to be resolved until camera manufacturers
make profiles for their cameras instead of using sRGB, ProPhotoRGB, etc.

they could do this with information about sensor sensitivity, sensor
filtration, etc., and stuff it into an ICC profile
--
Dale
Eric Stevens
2014-01-25 23:23:01 UTC
Permalink
On Sat, 25 Jan 2014 02:10:23 -0500, Dale
Post by Dale
Post by Eric Stevens
Post by nospam
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
given that users do not have to do that, what exactly am i missing?
You are missing the point that this dicussin is confined to camera
users.
Post by nospam
the *computer* might do it internally (or it might not), depending on
what needs to be done to produce the result the user wants.
the user does not need to worry about that nor do they need to know what
any of that means.
what matters is if they get the expected results, and with a colour
managed workflow, they do.
Good. We agree on that. Please now go away and come back when you
accept that colour managed work spaces (with profiles etc) requires
"device independent color space like XYZ,CIELAB,CIELUV".
Otherwise it's like you giving a tailor the dimensions for a suit in
units of measurement for which he has no definition.
I think the value of device independent color is underestimated
sure, light sources and filtration can make it easier, Eikonix/Kodak
has/had a patent on filtration for scanners and maybe cameras that
matched XYZ which makes it a lot easier, you could probably use a matrix
and 1D LUT instead of a 3D LUT, ICC can accommodate both math constructs
but I see a problem with digital cameras
chrome film/scanners were easy for device independent workflows, you
only needed to match the chrome
with a digital camera you have to match the scene, appearance as opposed
to color considerations come into play, like white balance
I have heard people are using the RAW camera files without the white balance
I have heard Kodak has a patent on how to characterize cameras without a
target
targets are a hassle for photographers
I don't think this is going to be resolved until camera manufacturers
make profiles for their cameras instead of using sRGB, ProPhotoRGB, etc.
they could do this with information about sensor sensitivity, sensor
filtration, etc., and stuff it into an ICC profile
Presumably this is more or less what DxO is doing.
--
Regards,

Eric Stevens
Martin Brown
2014-01-27 08:58:27 UTC
Permalink
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
But that is clearly not true!

It is a lot more convenient to convert to a device independent colour
space and from there to whatever output medium you want to use because
the number of profiles needed for N different image sources and M
destinations is limited to N+M colour profiles.

But you could with a *lot* more work compute direct colour profiles for
every possible combination of source and destination N*M. In the early
days when N was about 3 and M was about 4 that was what happened.

It may still make a lot more sense to store the original image in the
colour space where it was measured and only ever compute the device
independent form as a hidden step on the way to the output device.

You lose a bit to rounding errors in every colourspace conversion.
(with a handful of exceptions that are exactly invertible)
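A toy of both points above: every source converts into the device independent space (XYZ here) and every destination converts out of it, so adding a device adds one profile rather than one per existing device. The matrices are the published linear-sRGB/XYZ pair truncated to four decimals, which is exactly where the small round-trip rounding loss comes from:

```python
RGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]]
XYZ_TO_RGB = [[3.2406, -1.5372, -0.4986],
              [-0.9689, 1.8758, 0.0415],
              [0.0557, -0.2040, 1.0570]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

rgb = [0.2, 0.5, 0.8]  # a linear-light RGB colour
round_trip = apply(XYZ_TO_RGB, apply(RGB_TO_XYZ, rgb))
err = max(abs(a - b) for a, b in zip(rgb, round_trip))
print(err)  # small but nonzero - the rounding loss from truncated matrices
```

With N sources and M outputs this hub-and-spoke routing needs N+M such transforms instead of N*M pairwise ones, at the price of one extra conversion per image.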
--
Regards,
Martin Brown
isw
2014-01-27 17:53:27 UTC
Permalink
Post by Martin Brown
Post by Eric Stevens
Post by nospam
Post by Dale
Post by nospam
what is needed is a colour managed workflow, with the image and each
device along the way having a profile.
that's how you get the profiles
no, you get the profiles by running the appropriate profiling software.
what the software does internally doesn't matter. users do not need to
understand all the math behind it to be able to use it.
what matters is does the user get what they expect, and the answer is
yes.
"you need to convert the device colors through device independent
color space like XYZ,CIELAB,CIELUV".
But that is clearly not true!
It is a lot more convenient to convert to a device independent colour
space and from there to whatever output medium you want to use because
the number of profiles needed for N different image sources and M
destinations is limited to N+M colour profiles.
But you could with a *lot* more work compute direct colour profiles for
every possible combination of source and destination N*M. In the early
days when N was about 3 and M was about 4 that was what happened.
It may still make a lot more sense to store the original image in the
colour space where it was measured and only ever compute the device
independent form as a hidden step on the way to the output device.
What do you do years later, when all information about the creating
device's characteristics are long gone, and all you have is an image
file?

Isaac
Martin Brown
2014-01-27 20:29:23 UTC
Permalink
Post by isw
[...]
What do you do years later, when all information about the creating
device's characteristics are long gone, and all you have is an image
file?
In the digital age you struggle badly unless you can recognise the data
format. There are plenty of data "archives" that are now virtually inaccessible.
An example of a UK one that required heroic efforts to resurrect was the
BBC Domesday Project with multimedia from only 1986.

http://www.atsf.co.uk/dottext/domesday.html

Finding a still surviving videodisc player was only half the problem.
--
Regards,
Martin Brown
bugbear
2014-01-31 09:45:57 UTC
Permalink
Post by isw
[...]
What do you do years later, when all information about the creating
device's characteristics are long gone, and all you have is an image
file?
You know about embedded profiles, right?
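An embedded profile travels as raw bytes inside the image file itself (JPEG APP2 segment, PNG iCCP chunk, TIFF tag 34675). Two fields of the fixed 128-byte ICC header are enough for a sanity check, sketched here with a synthetic header:

```python
# Minimal sanity check on a blob that claims to be an ICC profile:
# the ICC header stores the total profile size in the first four
# bytes (big-endian) and the mandatory signature b"acsp" at offset 36.
import struct

def looks_like_icc(blob: bytes) -> bool:
    """True if the blob plausibly starts with an ICC profile header."""
    return len(blob) >= 128 and blob[36:40] == b"acsp"

def icc_size(blob: bytes) -> int:
    """Declared total profile size from the header's first four bytes."""
    (size,) = struct.unpack(">I", blob[:4])
    return size

# Synthetic 128-byte header, just for demonstration.
blob = struct.pack(">I", 128) + bytes(32) + b"acsp" + bytes(88)
print(looks_like_icc(blob), icc_size(blob))  # -> True 128
```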

BugBear

Eric Stevens
2014-01-27 23:06:51 UTC
Permalink
On Mon, 27 Jan 2014 08:58:27 +0000, Martin Brown
Post by Martin Brown
[...]
It is a lot more convenient to convert to a device independent colour
space and from there to whatever output medium you want to use because
the number of profiles needed for N different image sources and M
destinations is limited to N+M colour profiles.
But you could with a *lot* more work compute direct colour profiles for
every possible combination of source and destination N*M. In the early
days when N was about 3 and M was about 4 that was what happened.
Strictly speaking, I suppose you are correct, but no one would really
want to do it that way. This is equivalent to commerce in the days
before standard currency. "I'll swap this cart load of beans for one
and a half of those bundles of leather". {Thinks: I wonder how many
hides I will need to get those two big stone jars?}.

Alternatively, it's the equivalent of making your original colour
profile the reference standard against which others are measured - but
hang on! I've got two pictures taken with two different cameras and
hence two different colour spaces here. What do I do?

Having got this far you decide that what you need is an external
device independent colour space against which your originals can be
compared and transformed as required.
Post by Martin Brown
It may still make a lot more sense to store the original image in the
colour space where it was measured and only ever compute the device
independent form as a hidden step on the way to the output device.
Hidden or not, it's still there.
Post by Martin Brown
You lose a bit to rounding errors in every colourspace conversion.
(with a handful of exceptions that are exactly invertible)
You will always have rounding errors as there are always limits to the
accuracy of your measuring equipment.
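One of the exactly invertible exceptions alluded to above can be sketched: the lifting-based YCoCg-R transform used in lossless video coding, which round-trips integer RGB with no loss at all because every step is reversible integer arithmetic.

```python
# YCoCg-R: a lifting-based colour transform that is exactly
# invertible on integers (no rounding loss on the round trip).

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

print(rgb_to_ycocg_r(10, 200, 30))   # -> (110, -20, 180)
print(ycocg_r_to_rgb(110, -20, 180)) # -> (10, 200, 30), exactly
```

Transforms like this are the exception, though: the matrix-based conversions into XYZ or CIELAB are not integer-reversible, which is where the losses above come from.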
--
Regards,

Eric Stevens