You can buy a few 0.1% resistors to calibrate resistance ranges cheaply.
Voltage is trickier - if you have access to several meters you can 'calibrate by consensus', since it is improbable that they will all have drifted in the same direction.
Another option is to buy a precision voltage reference IC - e.g. the AD581 is a 10V reference with 0.1% accuracy.
Current can also be measured using the voltage across a known, accurate resistance.
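As a rough worked example of that approach (the shunt value is purely illustrative): put a known 0.1% sense resistor R_shunt in series with the load and measure the voltage across it, then

I = V_sensed / R_shunt

so the current reading is good to roughly the sum of the resistor tolerance and your voltage measurement error.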
So you've got:
         R_x            R_fixed
Vcc -----^v^v^----+----^v^v^------- Gnd
                  |
                  |
                  +--- V_sensed --- ADC input
R_x is some unknown resistance (probably a sensor of some kind). And you're using R_fixed at 0.1% right now to calculate R_x, but you want to use a cheaper fixed resistor with a lower tolerance of perhaps 1%. In doing so you want to perform some kind of calibration during production to correct for the increased error, is that right?
The way you end up doing this is putting a byte in EEPROM (or some other non-volatile memory) that acts as an "offset" in your calculation, and it's a perfectly viable thing to do. The thing is, it's going to cost you some time during production to do the calibration activity. In order to do the calibration, you'll need one of those 0.1% resistors (call it R_cal) of nominally comparable value to your 1% resistor, to substitute into the circuit for R_x. Measuring V_sensed, you can then infer the value of R_fixed more precisely (i.e. to something like 0.2%).
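To make that inference step concrete (using the divider above, with R_cal substituted for R_x):

V_sensed = Vcc * R_fixed / (R_cal + R_fixed)
R_fixed = R_cal * V_sensed / (Vcc - V_sensed)

so a single measurement with the known R_cal in place pins R_fixed down to roughly the tolerance of R_cal plus the error of your voltage measurement.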
If R_cal and R_fixed are nominally the same value, you would expect V_sensed to be equal to Vcc / 2. You would store the measured deviation from Vcc / 2 as a calibration offset byte, and always add it to V_sensed as perceived by your ADC.
The pitfall, as I see it, is that there is a bunch of work involved in doing the measurement and subsequently in storing the value. Another pitfall to consider is that temperature can cause a resistance to deviate from its nominal value, so you'll want a reasonably well temperature-controlled calibration environment. Also, don't forget to use calibrated measurement equipment, as that's another potential source of additive error. One last pitfall I can think of is that the calibration byte should be stored in units of the LSB of your ADC (so if you have a 12-bit ADC, the units of the calibration offset byte should be "Vcc/2^12 volts").
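To make the mechanics concrete, here is a minimal firmware sketch of that offset scheme, assuming a 12-bit ADC and hypothetical adc_read() / eeprom_read_byte() / eeprom_write_byte() helpers (the names and the EEPROM address are illustrative, not from any particular library):

#include <stdint.h>

#define CAL_OFFSET_ADDR  0x00u            /* illustrative EEPROM address             */
#define ADC_MIDSCALE     2048             /* Vcc/2 expressed in LSBs of a 12-bit ADC */

extern uint16_t adc_read(void);                              /* assumed platform ADC driver  */
extern uint8_t  eeprom_read_byte(uint16_t addr);             /* assumed non-volatile helpers */
extern void     eeprom_write_byte(uint16_t addr, uint8_t v);

/* Production step: run once with R_cal fitted in place of R_x and store the
   deviation of the reading from the ideal mid-scale code, in ADC LSBs
   (assumes the error fits in +/-127 LSBs). */
void store_cal_offset(void)
{
    int16_t deviation = (int16_t)(ADC_MIDSCALE - (int16_t)adc_read());
    eeprom_write_byte(CAL_OFFSET_ADDR, (uint8_t)(int8_t)deviation);
}

/* Normal operation: add the stored offset back onto every raw reading. */
uint16_t read_corrected(void)
{
    int8_t offset = (int8_t)eeprom_read_byte(CAL_OFFSET_ADDR);
    return (uint16_t)((int16_t)adc_read() + offset);
}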
Edit
If you are using two fixed resistors to divide a large voltage down to a lower scale as follows:
          R1_fixed      R2_fixed
V_in -----^v^v^----+----^v^v^------- Gnd
                   |
                   |
                   +--- V_sensed --- ADC input
So now you want to use a precision voltage reference (call it V_cal) to stimulate V_in during a calibration step in production. What you've got there is in theory:
V_sensed = V_predicted = V_cal * R2_fixed / (R1_fixed + R2_fixed) = V_cal * slope_fixed
But what you've got in reality is:
V_sensed = V_measured = V_cal * R2_actual / (R1_actual + R2_actual) = V_cal * slope_actual
In effect you have a different transfer function slope in reality than what you would predict from the resistor values. The deviation from the predicted divider transfer function will be linear with respect to the input voltage, and you can safely assume that 0V in will give you 0V out, so making one precision voltage reference measurement should give you enough information to characterize this linear scale factor. Namely:
V_measured / V_predicted = slope_actual / slope_fixed
slope_actual = slope_fixed * V_measured / V_predicted
And you would use slope_actual as your calibrated value to determine the voltage in as a function of the voltage measured.
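In firmware that could look something like this rough sketch (floating point used for clarity; the resistor values, ADC reference voltage and NV-memory helpers below are assumptions, not part of the question):

#include <stdint.h>

/* Nominal divider: R1 = 90k, R2 = 10k (illustrative), so slope_fixed = R2/(R1+R2) = 0.1 */
#define SLOPE_FIXED  0.1f
#define V_CAL        10.0f                 /* e.g. an AD581-style 10 V reference */

extern uint16_t adc_read(void);            /* assumed platform ADC driver        */
extern void     nv_store_float(float v);   /* hypothetical non-volatile helpers  */
extern float    nv_load_float(void);

static float adc_to_volts(uint16_t code)
{
    return (float)code * (3.3f / 4096.0f); /* 12-bit ADC, 3.3 V reference assumed */
}

/* Production step: apply V_cal to V_in and derive the real divider slope. */
void calibrate_slope(void)
{
    float v_measured   = adc_to_volts(adc_read());
    float v_predicted  = V_CAL * SLOPE_FIXED;
    float slope_actual = SLOPE_FIXED * v_measured / v_predicted;
    nv_store_float(slope_actual);
}

/* Normal operation: recover the input voltage from the divided-down reading. */
float read_v_in(void)
{
    return adc_to_volts(adc_read()) / nv_load_float();
}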
below courtesy of @markrages
Getting the actual slope sensitivity to the resistor values requires partial differentiation:
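One way to write that out (my own sketch, starting from the nominal transfer function):

slope = R2 / (R1 + R2)

d(slope)/dR1 = -R2 / (R1 + R2)^2
d(slope)/dR2 =  R1 / (R1 + R2)^2

or, in relative terms,

d(slope)/slope = (R1 / (R1 + R2)) * (dR2/R2 - dR1/R1)

so with two 1% resistors the worst-case slope error is about 2% * R1/(R1+R2), i.e. a bit under 2%, and that is exactly the error the single-point gain calibration above removes.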

Best Answer
You are not really asking "How to calibrate my magnetometer?" You think you are, but you aren't.
What you are really asking is: "How do you calibrate a sensor in an environment with noise and a DC offset?"
The answer to that is actually pretty simple, and the comment by andrea answers it in large part: you introduce a calibrated, known AC signal and make sure the software or firmware involved knows its magnitude.
If you equate it to voltage, the thinking becomes much easier:
If you have 1V maximum DC offset and you want to calibrate to an accuracy of 10mV, you introduce an accurate signal, usually a square wave, so that you have time to let the peak excitation stabilise. In this case you'd probably want at least 2.2V peak-peak, so that you also force the signal below 0V. You then make sure you know that signal is accurate to half your requirement or better, so 5mV accuracy will do. And that's of course considering all the noise contributions as well.
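As a sketch of how the firmware side of that could look (a generic two-point offset/gain solve, not something prescribed here; the amplitude, the names and the averaging helper are assumptions):

#include <stdint.h>

/* Known excitation amplitude applied by the calibration rig, in the sensor's
   units; assumed accurate to half the target error budget or better. */
#define EXCITATION_AMPLITUDE  1.1f

extern float read_sensor_settled(void);   /* assumed: waits for the square wave to
                                              settle and averages out the noise   */

/* Two-point calibration: sample the output with the rig holding +A, then -A,
   and solve for the channel's DC offset and gain. */
void calibrate_channel(float *offset, float *gain)
{
    float at_plus  = read_sensor_settled();   /* rig at +EXCITATION_AMPLITUDE */
    float at_minus = read_sensor_settled();   /* rig at -EXCITATION_AMPLITUDE */

    *offset = 0.5f * (at_plus + at_minus);
    *gain   = (at_plus - at_minus) / (2.0f * EXCITATION_AMPLITUDE);
}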
In your case you may have several axes you want to calibrate, but their orientation is known once you insert the board, so you need either several fixed "exciters" - for example the Helmholtz coils andrea mentions - or one that can rotate and hold its position accurately enough.
Making a set-up with a bunch of Helmholtz coils fixed with their point of highest field uniformity around your device shouldn't be too hard. Since their field is very uniform and, if your driving electronics are well designed, also very repeatable, you should be able to suppress surrounding noise and "afflict" your PCB with a strong enough known field to cancel out anything.
Really, sensor calibration is always the same problem; the only thing that changes is the way you create the known signal.