I usually go with something like the implementation given in Josh Bloch's fabulous Effective Java. It's fast and creates a pretty good hash which is unlikely to cause collisions. Pick two different prime numbers, e.g. 17 and 23, and do:
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = 17;
        // Suitable nullity checks etc, of course :)
        hash = hash * 23 + field1.GetHashCode();
        hash = hash * 23 + field2.GetHashCode();
        hash = hash * 23 + field3.GetHashCode();
        return hash;
    }
}
As noted in comments, you may find it's better to pick a large prime to multiply by instead. Apparently 486187739 is good... and although most examples I've seen with small numbers tend to use primes, there are at least similar algorithms where non-prime numbers are often used. In the not-quite-FNV example later, for example, I've used numbers which apparently work well - but the initial value isn't a prime. (The multiplication constant is prime though. I don't know quite how important that is.)
This is better than the common practice of XORing hash codes together, for two main reasons (both demonstrated in the sketch below). Suppose we have a type with two int fields:
- XorHash(x, x) == XorHash(y, y) == 0 for all x, y
- XorHash(x, y) == XorHash(y, x) for all x, y
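To make those two problems concrete, here's a hypothetical two-field type whose GetHashCode just XORs its fields (the IntPair name and fields are made up for the example):

public sealed class IntPair
{
    public int A { get; }
    public int B { get; }

    public IntPair(int a, int b)
    {
        A = a;
        B = b;
    }

    // The "XorHash" approach (Equals omitted for brevity)
    public override int GetHashCode() => A ^ B;
}

// new IntPair(5, 5).GetHashCode() == 0   // any value XORed with itself is 0...
// new IntPair(7, 7).GetHashCode() == 0   // ...so every "x, x" pair collides
// new IntPair(3, 9).GetHashCode() == new IntPair(9, 3).GetHashCode()   // order is lost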
By the way, the earlier algorithm is the one currently used by the C# compiler for anonymous types.
This page gives quite a few options. I think for most cases the above is "good enough", and it's incredibly easy to remember and get right. The FNV alternative is similarly simple, but uses different constants and XOR instead of ADD as the combining operation. It looks something like the code below, but the normal FNV algorithm operates on individual bytes, so this would require modifying to perform one iteration per byte instead of per 32-bit hash value. FNV is also designed for variable lengths of data, whereas the way we're using it here is always for the same number of field values. Comments on this answer suggest that the code here doesn't actually work as well (in the sample case tested) as the addition approach above.
// Note: Not quite FNV!
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = (int) 2166136261;
        // Suitable nullity checks etc, of course :)
        hash = (hash * 16777619) ^ field1.GetHashCode();
        hash = (hash * 16777619) ^ field2.GetHashCode();
        hash = (hash * 16777619) ^ field3.GetHashCode();
        return hash;
    }
}
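For reference, a byte-at-a-time FNV-1a along the lines described above might look something like this sketch (genuine FNV-1a XORs each byte in before multiplying, the opposite order to the code above; how you turn your fields into bytes is up to you):

private static int Fnv1aHash(byte[] data)
{
    unchecked
    {
        const int FnvPrime = 16777619;
        int hash = (int) 2166136261; // FNV offset basis
        foreach (byte b in data)
        {
            hash = (hash ^ b) * FnvPrime; // XOR the byte in, then multiply
        }
        return hash;
    }
}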
One thing to be aware of: ideally, you should prevent your equality-sensitive (and thus hashcode-sensitive) state from changing after adding the object to a collection that depends on its hash code.
As per the documentation:
You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
- You can compute the hash code from fields that are not mutable; or
- You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.
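To make the risk concrete, here's a small made-up example of what happens when hash-relevant state mutates while the object is sitting in a HashSet (the Person type is purely illustrative):

using System;
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }

    public override int GetHashCode() => Name?.GetHashCode() ?? 0;

    public override bool Equals(object obj) => obj is Person p && p.Name == Name;
}

class Demo
{
    static void Main()
    {
        var person = new Person { Name = "Alice" };
        var set = new HashSet<Person> { person };

        person.Name = "Bob"; // hash-relevant state changes after insertion

        // Prints False: the set looks in the bucket for the new hash code,
        // but the entry was stored under the old one.
        Console.WriteLine(set.Contains(person));
    }
}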
The link to the FNV article is broken but here is a copy in the Internet Archive: Eternally Confuzzled - The Art of Hashing
It's about endianness.
RGB is a byte order. But a deliberate implementation choice of most vanilla graphics libraries is to treat colours as unsigned 32-bit integers internally, with the three (or four, since alpha is typically included) components packed into that integer.
On a little-endian machine (such as x86) the integer 0x01020304 will actually be stored in memory as 0x04030201. And thus 0x00BBGGRR will be stored as 0xRRGGBB00!
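You can see this for yourself with a quick sketch (here in C#, but any language that exposes the raw bytes shows the same thing; BitConverter simply reads the in-memory byte order):

using System;

class Demo
{
    static void Main()
    {
        byte[] bytes = BitConverter.GetBytes(0x01020304);

        // On a little-endian machine (x86/x64) this prints True
        // followed by 04-03-02-01: the least significant byte
        // lives at the lowest address.
        Console.WriteLine(BitConverter.IsLittleEndian);
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}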
So the term BGR (and BGRA etc.) is a leaky abstraction: the graphics library is explaining how the integer is logically ordered, so that code which accesses the colour components individually is more readable.
Remember that bitmaps are usually accessed by more parts of the hardware than just your CPU, and the byte order expected by, say, a conventional display adapter is not necessarily the same as the endianness of your CPU. At the level of manipulating the channels within a pixel, it's no problem for a CPU to extract the fields whatever their order; the labelling is purely there to help the programmer understand the layout.
Your question actually consists of two parts: how to generate a smooth gradient between two given colors, and how to render that gradient on an angle.
The intensity of the gradient must be constant in a perceptual color space or it will look unnaturally dark or light at points in the gradient. You can see this easily in a gradient based on simple interpolation of the sRGB values; in particular, the red-green gradient is too dark in the middle. Using interpolation on linear values rather than gamma-corrected values makes the red-green gradient better, but at the expense of the black-white gradient. By separating the light intensities from the color you can get the best of both worlds.
Often when a perceptual color space is required, the Lab color space will be proposed. I think sometimes it goes too far, because it tries to accommodate the perception that blue is darker than an equivalent intensity of other colors such as yellow. This is true, but we are used to seeing this effect in our natural environment and in a gradient you end up with an overcompensation.
A power-law exponent of 0.43 was experimentally determined by researchers to be the best fit for relating gray light intensity to perceived brightness.
I have taken the wonderful samples prepared by Ian Boyd and added my own proposed method at the end. I hope you'll agree that this new method is superior in all cases.
Here's the code in Python:
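A sketch along those lines, assuming numpy, the standard piecewise sRGB transfer functions, and illustrative helper names:

import numpy as np

GAMMA_EXP = 0.43   # perceived brightness ~ (linear light intensity) ** 0.43

def srgb_to_linear(c):
    """sRGB values in 0..1 -> linear light (standard piecewise transfer)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Linear light values in 0..1 -> sRGB."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def perceptual_steps(color1, color2, steps):
    """Gradient between two 8-bit sRGB colours: the colour is interpolated
    in linear light, the brightness is interpolated separately using the
    0.43 power law, and the two are recombined. Assumes steps >= 2."""
    lin1 = srgb_to_linear(np.array(color1) / 255.0)
    lin2 = srgb_to_linear(np.array(color2) / 255.0)
    bright1 = lin1.sum() ** GAMMA_EXP
    bright2 = lin2.sum() ** GAMMA_EXP

    gradient = []
    for step in range(steps):
        t = step / (steps - 1)
        color = lin1 + (lin2 - lin1) * t                  # colour: plain linear lerp
        brightness = bright1 + (bright2 - bright1) * t    # brightness: perceptual lerp
        intensity = brightness ** (1 / GAMMA_EXP)         # back to linear intensity
        if color.sum() != 0:
            color = color * intensity / color.sum()       # rescale colour to match
        color = np.clip(color, 0.0, 1.0)
        gradient.append(tuple(int(v) for v in (linear_to_srgb(color) * 255).round()))
    return gradient

# e.g. perceptual_steps((255, 0, 0), (0, 255, 0), 10) gives a red-to-green ramp
# that stays roughly even in perceived brightness.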
Now for part 2 of your question. You need an equation to define the line that represents the midpoint of the gradient, and a distance from the line that corresponds to the endpoint colors of the gradient. It would be natural to put the endpoints at the farthest corners of the rectangle, but judging by your example in the question that is not what you did. I picked a distance of 71 pixels to approximate the example.
The code to generate the gradient needs to change slightly from what's shown above, to be a little more flexible. Instead of breaking the gradient into a fixed number of steps, it is calculated on a continuum based on a parameter t which ranges between 0.0 and 1.0.
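A sketch of that part (it reuses the helpers from the block above; the point-and-angle parameterisation of the line, the 71-pixel half-width, and the use of Pillow are my own assumptions):

import numpy as np
from PIL import Image

# Reuses srgb_to_linear, linear_to_srgb and GAMMA_EXP from the sketch above.

def gradient_color(color1, color2, t):
    """One gradient sample at position t in [0, 1] (the continuum version
    of the stepped function above)."""
    lin1 = srgb_to_linear(np.array(color1) / 255.0)
    lin2 = srgb_to_linear(np.array(color2) / 255.0)
    bright1 = lin1.sum() ** GAMMA_EXP
    bright2 = lin2.sum() ** GAMMA_EXP

    color = lin1 + (lin2 - lin1) * t
    intensity = (bright1 + (bright2 - bright1) * t) ** (1 / GAMMA_EXP)
    if color.sum() != 0:
        color = color * intensity / color.sum()
    color = np.clip(color, 0.0, 1.0)
    return tuple(int(v) for v in (linear_to_srgb(color) * 255).round())

def angled_gradient(size, color1, color2, angle_deg, half_width=71):
    """Render the gradient across a line through the image centre.
    t runs from 0 to 1 over a band 2 * half_width pixels wide
    (71 px matches the distance estimated above)."""
    w, h = size
    theta = np.radians(angle_deg)
    nx, ny = np.cos(theta), np.sin(theta)                 # unit normal to the gradient line
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

    img = Image.new("RGB", size)
    px = img.load()
    for y in range(h):
        for x in range(w):
            dist = (x - cx) * nx + (y - cy) * ny          # signed distance to the line
            t = min(max((dist / half_width + 1) / 2, 0.0), 1.0)
            px[x, y] = gradient_color(color1, color2, t)
    return img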