13-bit 2's Complement on .NET MF

.net-micro-framework

I am using an ADXL345 triple-axis accelerometer via a FEZmini1.3 .NET Micro Framework board in a project I am working on.

The ADXL345 provides data in 13-bit 2's complement.

How do you decode this into decimal?

I have implemented a BitConverter (as provided by Ravenheart (Toshko)) in the project, but surely this assumes a full 16 bits (two bytes) of data, where the most significant bit is always the sign bit?

In a 13-bit number stored in 16 bits, the top three bits will always be 0, won't they?

       ------------Byte 1------------   ------------Byte 2-----------
bit#:  15  14  13  12  11  10   9   8   7   6   5   4   3   2   1   0
16 bit: 1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 = -32767
13 bit: 0   0   0   1   1   1   1   1   1   1   1   1   1   1   1   1 = -4095

but if the 13-bit number is converted by an algorithm that assumes a full 16 bits, the result will be 8191.

Is my understanding correct? If so, how do I go about converting a 13-bit 2's complement number into decimal?
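
To make the problem concrete, here is a minimal C# sketch (the byte values and names are mine, purely for illustration; on .NET MF you would print with Debug.Print rather than Console.WriteLine):

    using System;

    class RawReadDemo
    {
        static void Main()
        {
            byte lsb = 0xFF, msb = 0x1F;            // raw 13-bit pattern: 1 1111 1111 1111
            short raw = (short)((msb << 8) | lsb);  // treat it as a plain 16-bit value
            Console.WriteLine(raw);                 // prints 8191, not a negative reading
        }
    }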

Edit (after getting some help from my friends below):
Thanks to the responses, I now know that my assumptions about 2's complement were wrong, so for the clarity of anyone reading this thread I want to correct my initial statement:

       ------------Byte 1------------   ------------Byte 2-----------
bit#:  15  14  13  12  11  10   9   8   7   6   5   4   3   2   1   0
16 bit: 1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1 = -1
13 bit: 0   0   0   1   1   1   1   1   1   1   1   1   1   1   1   1 = -1

In 2's complement the numbers count up to the halfway point, then wrap around to the most negative value and count back up towards -1, so:

00000000 = 0
00000001 = 1
00000010 = 2
00000011 = 3
.
.
.
01111110 = 126
01111111 = 127
10000000 = -128
10000001 = -127
10000010 = -126
.
.
.
11111100 = -4
11111101 = -3
11111110 = -2
11111111 = -1
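
As a quick sanity check of the table above (a plain C# sketch, nothing ADXL345-specific), casting the same 8-bit patterns to C#'s signed sbyte type shows the wrap-around:

    using System;

    class WrapDemo
    {
        static void Main()
        {
            Console.WriteLine(unchecked((sbyte)0x7F)); // 01111111 ->  127
            Console.WriteLine(unchecked((sbyte)0x80)); // 10000000 -> -128
            Console.WriteLine(unchecked((sbyte)0xFF)); // 11111111 ->   -1
        }
    }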

Best Answer

To decode the number into decimal, first convert it to something your processor understands natively, then use the existing binary-to-decimal conversion capabilities. To convert to the native integer representation, all you have to do is sign extend:

II is a native signed integer
II <-- 13 bit signed A/D value  (get the A/D result)
II <-- II & 1FFFh               (make really sure it is limited to 13 bits)
if II >= 1000h                  (negative value?)
  then II <-- II - 2000h        (convert to native negative)
Note that this is independent of how wide the native signed integer is, as long as it is more than 13 bits wide. This means the technique works for both 16-bit and 32-bit signed integers.
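
In C# that sign extension could look like the sketch below. The method and parameter names are my own, and the byte order assumes the low byte is read first (as with the ADXL345's DATAX0/DATAX1 register pair); adjust to match your driver:

    using System;

    class SignExtendDemo
    {
        // Sign-extend a 13-bit 2's complement reading into a native int.
        static int Decode13Bit(byte lsb, byte msb)
        {
            int value = ((msb << 8) | lsb) & 0x1FFF; // make really sure it is limited to 13 bits
            if (value >= 0x1000)                     // bit 12 set, so the value is negative
                value -= 0x2000;                     // convert to a native negative number
            return value;
        }

        static void Main()
        {
            Console.WriteLine(Decode13Bit(0xFF, 0x1F)); // 1 1111 1111 1111 -> -1
            Console.WriteLine(Decode13Bit(0xFF, 0x0F)); // 0 1111 1111 1111 -> 4095
        }
    }

Masking with 0x1FFF before the sign test means any stray bits above bit 12 cannot corrupt the result, and because the value is widened into a full int first, the routine works unchanged on 16-bit or 32-bit targets.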

Edit: fixed details of sign extending above. I wrote it right on the scribble paper next to my keyboard, then copied it into the post wrong. Also added note about it being independent of the native integer size.