What is the gain of the INA125 set to?
It sounds like you need a lot more gain in the instrumentation amplifier. If you don't have a resistor \$R_{G}\$ between pins 8 and 9 of the instrumentation amplifier, the overall gain applied to the load-cell output will be only 4.
The load cell you have specced has a 1 mV/V output. Since it sounds like you're feeding the load-cell with your 2.5V reference, this means the full-scale output of the load-cell will be 2.5 mV. With the 4X gain in the INA125, that is 10 mV into the ADC input.
With a 10-bit ADC and a 2.5 V reference, your bit size is ~0.0024 V/bit (\$\frac{2.5V}{2^{10}}\$), so you should expect a change of \$\frac{0.010V}{0.0024V}\$, or roughly 4 LSBs, and that's at the maximum load the load-cell is rated for.
So... Unless you have something else going on that you have not described, it sounds like you're getting significantly more output than would be expected. I would guess you have some gain in the INA125 that you have not described.
The solution here, of course, is to put a voltmeter on the interconnect between the INA125 and the MCU's ADC. That way, you can measure the real voltage going into the MCU, which will tell you where your error is coming from (MCU's ADC, or the INA).
You said you only care about 1% accuracy, which needs fewer than 7 bits of resolution over the full range. You can therefore use the 1.000-2.024 V range directly. Even if you have a 10 bit A/D with a 0-3.3 V full range, you still get about 320 counts, which is more than 3 times your requirement. There is no need to shift or scale anything.
If you use a divider to create Vref+ instead of using the 3.3 V supply internally, then you get even more resolution. If you can bring it down to 2.1 V, for example, to leave a little margin, then you get 500 counts over your range. That's lots more resolution than accuracy unless you use a separate precision reference. Consider that a divider made from 1% resistors will cause significantly more error than a 10 bit A/D using the reference. To get 1% accuracy, using a fixed external reference is probably the simplest way. A 2.048 V reference is almost perfect here.
Some PICs do have an optional Vref- input, but tying it to anything other than ground is going to decrease accuracy. Basically you'd be trading off accuracy to get more resolution, which makes no sense when you already have lots of resolution and accuracy is on the edge.
Your desire to get the raw A/D counts to represent some arbitrary "round" value is silly. Don't burden your measurement system with having to meet this arbitrary spec. Do the best job of taking the measurement, then the rest is simple conversion in firmware. You have a digital processor that can easily apply a scale and offset instantaneously in human time. The conversion to decimal will probably take more cycles, although that will be instantaneous in human time too.
Basically, think about what you really want to get out, prioritize your requirements accordingly, and don't specify implementation details (like what one A/D count should represent). Your top priority should be accuracy, given your specs, since everything else pretty much falls out with a 10 bit A/D.
From a bit more information in the comments, you are using an ATMega168. This microcontroller has three options for the ADC reference voltage:

- the \$V_\mathrm{CC}\$ supply (AVcc)
- the internal 1.1 V bandgap reference
- an external reference applied to the AREF pin
With the 10-bit ADC, you then split this into \$2^{10}\$ bins (LSBs). To avoid the need to prescale your signal in order to use the maximum dynamic range of the ADC, you can also select a more appropriate reference.
In your case, the internal 1.1 V reference gives LSBs of ~1.1 mV and is very easy to use. Simply set ADMUX[7:6] = b11 and put a capacitor on the AREF pin. If you want to use an external 1.024 V reference, this gives LSBs of 1.0 mV; set ADMUX[7:6] = b00.
Another advantage of using the bandgap or an external reference is that you are not directly using the noisy \$V_\mathrm{CC}\$ rail as your reference voltage. Typical reference ICs will include lots of filtering, and will give you the lowest noise of the options.