There are two entirely separate issues here: precision and accuracy.
If you adjust one meter to read the same as the other, you are adjusting the second to agree with the first, i.e. to have the same precision. That is, if A reads 9.781 volts and B reads 9.781 volts, each is providing the same precision.
However, if both are actually measuring a 5.221 volt source, then both are equally inaccurate.
So: are you trying to calibrate your meter to "the fluke ones" (matching precision), or are you trying for accuracy? If the first, you can use almost any standards you like, as long as you are sure they don't drift with time, temperature, or anything else. You measure the standards with "the fluke ones", record the readings, then adjust your other meters to match.
If you're trying for accuracy, you'll need to do one of two things: either buy or build a good reference, or rent or borrow a very good, high-resolution meter to find the errors in "the fluke ones". You should be able to find either solution at a test equipment house. There are many online.
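The transfer-calibration arithmetic above can be sketched with a simple two-point gain/offset correction. This is an illustration only; the readings below are hypothetical, and a real meter's adjustment procedure is whatever its service manual says it is:

```python
def two_point_correction(ref_lo, ref_hi, dut_lo, dut_hi):
    """Derive gain and offset so that (gain * raw + offset) maps the
    device-under-test readings onto the reference meter's readings."""
    gain = (ref_hi - ref_lo) / (dut_hi - dut_lo)
    offset = ref_lo - gain * dut_lo
    return gain, offset

# Hypothetical readings of two stable standards: reference meter vs.
# the meter being matched to it.
gain, offset = two_point_correction(ref_lo=1.0002, ref_hi=9.9987,
                                    dut_lo=0.9981, dut_hi=9.9844)

# Apply the correction to a raw 5 V reading from the second meter.
corrected = gain * 5.0 + offset
```

The correction is exact at the two calibration points and interpolates linearly between them, which is usually adequate for a meter whose error is dominated by gain and offset rather than non-linearity.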
Quite simply, you need to build a prototype (or complete) system around the MLX90616, including the appropriate interface and processing (likely a microcontroller), and then take measurements against objects of known temperature and emissivity. With that data, you can develop the coefficients/constants used in the function that translates the MLX90616 digital output value into a temperature.
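Developing those coefficients from measured data might look like the following sketch, assuming (purely for illustration) that a linear fit of raw output to temperature is adequate over your range; the sample pairs are hypothetical, and a real radiometric sensor may need a higher-order or physics-based model:

```python
# Hypothetical calibration pairs: (raw sensor output word, known reference
# temperature in degrees C, measured against objects of known emissivity).
samples = [(11520, 20.0), (11905, 25.0), (12300, 30.1), (12710, 35.2)]

# Ordinary least-squares fit of T = a * raw + b over the calibration points.
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def raw_to_celsius(raw):
    """Translate a raw sensor word into degrees C using the fitted coefficients."""
    return a * raw + b
```

The coefficients `a` and `b` would then be baked into the microcontroller firmware; re-running the fit with fresh measurements is how you'd recalibrate.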
Very generally, and at the most basic level, you need a precision voltage reference and precision resistors. Obtaining 0.1% is fairly easy: for resistors it's straightforward off the shelf, and for voltage references, look up "precision voltage reference" (something like this) and bias the part with 0.1 to 1 mA.
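To see what 0.1% parts buy you, here is a quick worst-case calculation for a two-resistor divider fed from a precision reference. The 5.000 V reference and the resistor values are hypothetical placeholders:

```python
# Worst-case output of a two-resistor divider built from 0.1% parts,
# fed from a hypothetical ideal 5.000 V precision reference.
vref = 5.000
r_top_nom, r_bot_nom = 30_000.0, 10_000.0   # nominal values, ohms
tol = 0.001                                  # 0.1% resistor tolerance

def divider_out(r_top, r_bot, v=vref):
    return v * r_bot / (r_top + r_bot)

nominal = divider_out(r_top_nom, r_bot_nom)  # 1.25 V nominal tap
# Push both tolerances in the directions that move the output most.
worst_lo = divider_out(r_top_nom * (1 + tol), r_bot_nom * (1 - tol))
worst_hi = divider_out(r_top_nom * (1 - tol), r_bot_nom * (1 + tol))
```

Note that the divider output depends only on the resistor ratio, so the two 0.1% tolerances partially cancel and the worst-case ratio error stays well under 0.2%; the reference's own tolerance and drift add directly on top of that.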
The actual calibration procedure will vary from meter to meter, so you will need to dig up the appropriate documents.