I want to compare only the lower 8 bits of a variable named x (data type is long) to a range of bit patterns (0000 0000 to 1111 1111).
How can I go about converting the long datatype to binary format and then comparing it with the above-mentioned range of patterns?

Values are already stored in binary format; the trick is in understanding that format. I'm assuming you're using a 32-bit long. 32-bit longs are commonly stored in one of two byte orders, called big-endian and little-endian, which differ in how the bytes' positions in memory relate to their bit significance. So the first thing you need to decide is whether by "lower 8 bits" you mean the least significant 8 bits, or the 8 bits at the lowest memory address in the long.
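If you're not sure which byte order your machine uses, here is a minimal sketch that probes it at runtime, using the same pointer cast described further down:

#include <stdio.h>

int main(void)
{
    long x = 1L;    /* the least significant byte of x is 0x01 */

    /* read whichever byte of x sits at the lowest memory address */
    unsigned char first = *(unsigned char *)&x;

    /* little-endian machines keep the least significant byte at the
       lowest address (first == 1); big-endian machines keep the most
       significant byte there (first == 0) */
    printf("this machine looks %s-endian\n", first == 1 ? "little" : "big");
    return 0;
}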
In C/C++, the "patterns" ranging from 0000 0000 through 1111 1111 can be viewed simply as unsigned char constants, ranging from 0 to 255.
If you take the long variable, x, mask and cast it to an unsigned char, "(unsigned char)(x & 0xffL)", you'll have an expression that represents the least significant 8 bits.
if (((unsigned char)(x & 0xffL)) == 7)
{
    // x == <3 bytes of anything> 0000 0111
}
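Since the question asks about a whole range of patterns rather than a single value, the same masked byte can be compared against inclusive bounds. A minimal sketch, where lo and hi are hypothetical bounds chosen for illustration:

#include <stdio.h>

int main(void)
{
    long x = 0x12345607L;    /* example value; its low byte is 0x07 */

    /* the mask-and-cast from above: the least significant 8 bits */
    unsigned char low = (unsigned char)(x & 0xffL);

    /* hypothetical inclusive bounds on the range of patterns */
    unsigned char lo = 0x00;    /* 0000 0000 */
    unsigned char hi = 0x7f;    /* 0111 1111 */

    if (low >= lo && low <= hi)
        printf("low byte 0x%02x is within [0x%02x, 0x%02x]\n", low, lo, hi);
    return 0;
}

Note that if the range really is 0000 0000 through 1111 1111, every possible byte qualifies and no comparison is needed at all.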
If instead you do this as a pointer cast with a dereference, "*(unsigned char *)&x", you'll have an expression that yields the bit pattern of the 8 bits of x at the lowest memory address. (On some machines, e.g. those with Intel and AMD x86 CPUs, this also yields the least significant 8 bits. On other hardware, e.g. the Xbox 360, you'll get the most significant 8 bits instead.)
if ((*(unsigned char *)&x) == 7)
{
    /*
    different on different hardware! but sometimes necessary,
    e.g. when encoding/decoding information sent over the network
    */
}
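If you need a byte layout that comes out the same on every machine (e.g. for the network messages the comment above mentions), shifting and masking gives you that portably. A minimal sketch, with put_u32_be as a hypothetical helper name:

#include <stdio.h>

/* write a 32-bit value into buf in network (big-endian) order using
   shifts, so the bytes are the same regardless of host endianness */
static void put_u32_be(unsigned char *buf, unsigned long v)
{
    buf[0] = (unsigned char)((v >> 24) & 0xff);
    buf[1] = (unsigned char)((v >> 16) & 0xff);
    buf[2] = (unsigned char)((v >> 8) & 0xff);
    buf[3] = (unsigned char)(v & 0xff);
}

int main(void)
{
    unsigned char buf[4];
    put_u32_be(buf, 0x12345678UL);
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);   /* 12 34 56 78 */
    return 0;
}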
What do you mean, compare them to a range of bit patterns?

I would suggest subtracting to bring the value below 256, then XORing, or something along those lines.
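One way to read that suggestion: for a non-negative x, taking x modulo 256 reduces it below 256 (the same result as masking with 0xff), and XORing two equal bit patterns yields zero, so XOR can serve as the comparison. A minimal sketch, with pattern as a hypothetical value to match:

#include <stdio.h>

int main(void)
{
    long x = 0x1234ABCDL;

    /* for non-negative x, x % 256 leaves the low byte, the same
       result as masking with 0xff */
    unsigned char low = (unsigned char)(x % 256);

    unsigned char pattern = 0xCD;    /* hypothetical pattern to match */

    /* XOR of two identical bit patterns is 0, so this is just
       another way of writing low == pattern */
    if ((low ^ pattern) == 0)
        printf("low byte matches 0x%02x\n", pattern);
    return 0;
}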