Image dithering: how would I calculate the quantization error and nearest colour to implement a Floyd-Steinberg algorithm?
I intend to display images (4, 8 or 16 bits per channel, no alpha) on a 1-bit display in an embedded system. Images are stored as RGB tuples. My intention is to use Floyd-Steinberg, as it looks reasonably good, is more than quick enough, and is concise in code.
In reference to the Wikipedia article, I have two questions.
What would be the best practice for expressing the nearest colour? Would the following work? (Ignore that I'm returning a structure in C.)
typedef struct rgb16_tag { unsigned short r, g, b; } rgb16;

rgb16 nearest_1bit_colour(rgb16 p) {
    double c; rgb16 r;
    c = ((double)(p.r + p.g + p.b + 3 * (1 << 15))) / (3.0 * (1 << 16));
    if (c >= 1.0) {
        r.r = r.g = r.b = 1;
    } else {
        r.r = r.g = r.b = 0;
    }
    return r;
}
And: is the quantization error expressed on a per-channel basis? I.e., does this make sense?

rgb16 q, new, old, image[X][Y];
int x, y;
... /* (somewhere in the nested loops) */
old = image[x][y];
new = nearest_1bit_colour(old);
/* Repeat the following for each colour channel separately. */
q.{r,g,b} = old.{r,g,b} - new.{r,g,b};
image[x+1][y].{r,g,b}   = image[x+1][y].{r,g,b}   + 7/16 * q.{r,g,b};
image[x-1][y+1].{r,g,b} = image[x-1][y+1].{r,g,b} + 3/16 * q.{r,g,b};
image[x][y+1].{r,g,b}   = image[x][y+1].{r,g,b}   + 5/16 * q.{r,g,b};
image[x+1][y+1].{r,g,b} = image[x+1][y+1].{r,g,b} + 1/16 * q.{r,g,b};
I've seen two typical approaches to measuring the difference between two colors. The most common is probably to find the Euclidean distance between them through the color cube:
float r = i.r - j.r;
float g = i.g - j.g;
float b = i.b - j.b;
float diff = sqrtf( r * r + g * g + b * b );
The other is just to average the absolute differences, possibly weighting for luminance:
float diff = 0.30f * fabs( i.r - j.r ) +
0.59f * fabs( i.g - j.g ) +
0.11f * fabs( i.b - j.b );
As to your second question, yes. Accumulate the error separately in each channel.
Edit: Misread at first and missed that this was for a bi-level display. In that case, I'd suggest just using luminance:
float luminance = 0.30f * p.r + 0.59f * p.g + 0.11f * p.b;
if ( luminance > 0.5f * channelMax ) {
    // white
} else {
    // black
}
As you return an rgb16 value from nearest_1bit_colour and use it to compare with other colors, and the returned colors have to be white and black, use 0 and 0xFFFF instead of 0 and 1 (which are black and a very dark gray). Additionally, I think you should compare c with 0.5 instead of 1.0:
if (c >= 0.5) {
    r.r = r.g = r.b = 0xFFFF;
} else {
    r.r = r.g = r.b = 0;
}
Also, there might be pitfalls with (un)signedness:

q.{r,g,b} = old.{r,g,b} - new.{r,g,b};

This difference can be negative, so q shouldn't be of type rgb16, whose fields are unsigned short, but of a signed type instead. Note that for 16-bit channels the difference can reach ±65535, which doesn't fit in a 16-bit short, so a wider signed type such as long is safer.
Of course, this code is for 16-bit input data; for 4- or 8-bit input you have to change it (or simply convert the 4- and 8-bit data to 16 bits so you can use the same code).
As a purely integer-based solution (my processor doesn't have an FPU), I think this might work:
#include <limits.h>
#include <assert.h>

typedef struct rgb16_tag { unsigned short r, g, b; } rgb16;
typedef struct rgb32_tag { unsigned long r, g, b; } rgb32;

#define LUMINESCENCE_CONSTANT (ULONG_MAX >> (CHAR_BIT * sizeof (unsigned short)))

static const rgb32 luminescence_multiplier = {
    LUMINESCENCE_CONSTANT * 0.30f,
    LUMINESCENCE_CONSTANT * 0.59f,
    LUMINESCENCE_CONSTANT * 0.11f
};

int black_or_white( rgb16 p ) {
    unsigned long luminescence;

    assert(( luminescence_multiplier.r
           + luminescence_multiplier.g
           + luminescence_multiplier.b ) < LUMINESCENCE_CONSTANT);

    luminescence = p.r * luminescence_multiplier.r
                 + p.g * luminescence_multiplier.g
                 + p.b * luminescence_multiplier.b;

    return (luminescence > ULONG_MAX/2); /* 1 == white */
}
There are some excellent half-toning techniques HERE.