Actually, the chip you mention has something close to a current source: because they're measuring the voltage across the reference resistor, they don't care if the current changes a bit. Since the reference resistor (connected to the 2V bias) is 4x the RTD's base (0°C) resistance, and the RTD only changes by 30-40% over a +/-100°C range, the current is constant to within about 10%, at a (very high) 4mA for a Pt100 and 0.4mA for a Pt1000. That is a much higher current than typically used in precision applications, so self-heating is a definite source of error.
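A quick numeric sketch (assuming the 2V bias, the datasheet-suggested 400 ohm reference resistor, and a linearised DIN Pt100) shows how little the excitation current actually moves:

```python
# Sketch of the MAX31865-style ratiometric scheme: a 2 V bias drives the
# reference resistor in series with the RTD. Values are assumptions taken
# from the discussion above, with a linearised Pt100 curve.
V_BIAS = 2.0      # volts
R_REF = 400.0     # ohms, datasheet-suggested value for a Pt100
ALPHA = 0.00385   # DIN Pt100 temperature coefficient, per degC
R0 = 100.0        # Pt100 resistance at 0 degC, ohms

def r_rtd(t_c):
    """Linearised Pt100 resistance at temperature t_c (degC)."""
    return R0 * (1 + ALPHA * t_c)

def i_excite(t_c):
    """Excitation current with the bias across R_REF + RTD in series."""
    return V_BIAS / (R_REF + r_rtd(t_c))

for t in (-100, 0, 100):
    print(f"{t:+4d} degC: {i_excite(t)*1e3:.2f} mA")
```

Over the +/-100°C range the current stays within roughly +/-8% of 4mA, which is why the ratiometric measurement tolerates the "imperfect" source.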
Let's take an example: you want to measure temperature from 0 to 100°C and you have a 0.8mA current source. Let's assume it's a Pt100 with the DIN curve (\$\alpha = 0.00385\$).
The voltage across the RTD will be 80mV at 0°C and 110.8mV at 100°C, for a span of 30.8mV. So a 0.1°C error (say that's our allowable error budget for the electronics) corresponds to 30.8uV, which is about 0.028% of full scale (110.8mV).
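The arithmetic of this example can be checked with a few lines of Python (same linearised Pt100 assumption):

```python
# Sketch of the worked example above: an ideal 0.8 mA source driving a
# linearised DIN Pt100 (alpha = 0.00385), measured from 0 to 100 degC.
I = 0.8e-3        # excitation current, A
ALPHA = 0.00385   # per degC
R0 = 100.0        # ohms at 0 degC

v0 = I * R0                           # volts at 0 degC
v100 = I * R0 * (1 + ALPHA * 100)     # volts at 100 degC
span = v100 - v0
err_budget = span * 0.1 / 100         # voltage equivalent of 0.1 degC
print(f"{v0*1e3:.1f} mV .. {v100*1e3:.1f} mV, span {span*1e3:.1f} mV")
print(f"0.1 degC = {err_budget*1e6:.1f} uV "
      f"= {err_budget/v100*100:.3f}% of full scale")
```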
If you offset the 100 ohm base resistance of the sensor with a stable resistor (it's easy to get a resistor that is much more stable than a voltage or current source), and if we assume that error is relatively negligible, then we still have the 30.8uV error budget, but now it only demands 0.1% accuracy of the 30.8mV span (a requirement almost 4 times more relaxed). A good ADC can be comparable to a precision resistor divider in ratiometric measurements, and that's what the MAX chip depends on; also, they're not shooting for the best possible accuracy, just something viable.
If you were thinking about using a circuit with, say, a 160mV voltage source and a series 100 ohm resistor to set the current, the current through the sensor would change substantially with temperature (so you'd get less resolution in degrees at high temperatures for a given resolution or noise floor), and the self-heating would increase greatly at low temperatures rather than appearing as a (relatively) fixed offset temperature. In general, a voltage V behind a series resistance R behaves the same as an imperfect current source of I = V/R with an output impedance equal to that R (its Thevenin equivalent).
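A short sketch (same linearised Pt100 assumption) makes the difference concrete: the Thevenin source's current droops as the RTD warms, and at low temperatures the self-heating power rises well above that of the ideal 0.8mA source.

```python
# Sketch: compare the ideal 0.8 mA source with its "imperfect" Thevenin
# equivalent, 160 mV behind 100 ohm, driving a linearised DIN Pt100.
ALPHA = 0.00385
R0 = 100.0
V_TH, R_TH = 0.160, 100.0   # Thevenin voltage and series resistance
I_IDEAL = 0.8e-3            # ideal current source, A

def r_rtd(t_c):
    """Linearised Pt100 resistance at temperature t_c (degC)."""
    return R0 * (1 + ALPHA * t_c)

for t in (-100, 0, 100):
    r = r_rtd(t)
    i = V_TH / (R_TH + r)   # current droops as the RTD warms up
    print(f"{t:+4d} degC: I = {i*1e3:.2f} mA, "
          f"P_self = {i*i*r*1e6:.0f} uW (ideal: {I_IDEAL**2*r*1e6:.0f} uW)")
```

Over 0 to 100°C the current drops about 16%, and at -100°C the sensor dissipates roughly 1.5x what the ideal source would.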
It's difficult to fully understand your RTD cabling, since you've mentioned "Two of them have a multi-core and the other three have a single core", yet you would need at least two wires to connect to each RTD, so using a single core (wire) seems impossible!
[Update - Above original wiring now clarified in later comment to have been: Two sensors using cables with stranded wires, and the other three sensors using cables with solid core wires.]
However, some of the other information suggests the answer to your original question could be "yes". Noise induced in the sensor 5 cable (from external sources) seems to fit with information presented so far, and being the longest cable, it's not a surprise that this sensor would be more affected than the other sensors, which have shorter cables.
> the cables are unshielded
Use of long unshielded cables for RTDs is a concern, especially if external sources of electrical noise are present. You've also explained that the affected sensor is the one with the longest cable (150m). That's an interesting correlation. Shielded, twisted conductor (2, 3 or 4 wires) is a common cable type for long RTD cables, with the shield grounded at one end (typically the "measuring end") only.
The use of only 2-wire connections to the RTDs, especially with long cables, will affect the accuracy of the measurements, although that might not be important to you.
> The cable of the probe 5 runs in parallel with different cables in a cable channel and at one spot it crosses a wireless router.
Again, it's an interesting correlation between the affected sensor and that long cable run, close to other cables (potential "radiators") and the wireless router (a definite radiator). That is especially interesting if the cables to the other sensors are further away from potential and actual EM radiators.
> On Sensor 5 the voltage on both pins are higher than on the other Sensors. (RTD- 0.1V instead of 0.05V and RTD+ 0.48V instead of 0.44V) If VBias is not applied the voltage on those pins are changing in the 100mV range. The other Sensors show 0V.
That is interesting. The difference between the sensor 5 measurements and those from the other sensors is telling you something. If you can do more work to understand the specific differences, and what changes them (i.e. what makes those differences increase or decrease), you can extract more value from that difference.
This is an example of the type of difference which can form part of the tests, comparing the measurements between "good" and "bad" configurations which I mentioned in earlier comments (if you are experienced with that type of troubleshooting approach - but you might prefer to follow a different approach of your own).
I assume those voltage measurements you listed were made with a DMM. I would use a 'scope and look at the voltages on those sensor 5 signals, and use that to try to find the cause e.g.:
- What is the waveform shape? Does the magnitude match what you measured with the DMM or is it different (perhaps larger)?
- Is there more AC ripple at 50Hz/60Hz (whatever the mains frequency is at the affected site) on the sensor 5 cables, compared to the cables from other sensors?
- Are the cables which are parallel to the sensor 5 cable, carrying mains power, or low voltage signals, or something else? Can you match that answer to whatever induced voltage waveform you see on the sensor 5 cable?
- Is the waveform shape and magnitude of the externally induced voltage on the sensor 5 cable, different from that on the cables to other sensors? Anything unusual, considering cable length and routing for each cable?
- Can you temporarily switch off the wireless router close to the cable for sensor 5, and see whether the faults reported by its MAX31865 stop or are reduced?
Also ensure that you have selected the correct "notch frequency" (50Hz or 60Hz - whatever is the local mains frequency) in the configuration of the MAX31865, so that it has the best chance of ignoring induced voltage at that frequency.
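As an illustration, the configuration byte might be assembled like this. The bit positions below are from the MAX31865 datasheet (configuration register 0x00, SPI write address 0x80), but verify them against your datasheet revision; I believe the datasheet also notes the filter setting should only be changed while automatic conversions are off. The `spi` call at the end is a hypothetical spidev-style helper:

```python
# Hedged sketch: assemble the MAX31865 configuration byte with the 50 Hz
# notch filter selected. Bit positions are from the MAX31865 datasheet;
# double-check them against your revision before relying on this.
VBIAS_ON   = 0x80  # D7: bias voltage on
AUTO_CONV  = 0x40  # D6: automatic (continuous) conversion mode
THREE_WIRE = 0x10  # D4: set for a 3-wire RTD connection
FILT_50HZ  = 0x01  # D0: 1 = 50 Hz notch, 0 = 60 Hz notch

config = VBIAS_ON | AUTO_CONV | FILT_50HZ   # 2/4-wire RTD, 50 Hz mains
print(hex(config))
# spi.xfer2([0x80, config])  # hypothetical spidev-style register write
```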
> How can it be that this over/undervoltage fault only occurs constant when sensor 5 is connected? (without probe 5 the error sometimes occurs with another probe but not that much)
> [...]
> The "funny" thing is that I can read the probe without a problem if it is connected alone. But as soon as I connect the others (not even measuring just connecting the other probes) it generates a error.
I suggest you look for differences in any measurement, between the different cases. If nothing changes (between different test cases) to cause different voltages to be externally induced in the sensor wiring to other sensors, then the "better" behaviour when only sensor 5 is attached may suggest an additional problem.
One hypothesis which might start to explain that: externally induced voltages from all sensors are fed back to the "controller" (i.e. the MAX31865 devices) and have some sort of cumulative effect there. This could explain why connecting sensor 5 on its own is not enough to cause errors to be reported ("only" 150m of cable attached); whereas when all sensors are attached (e.g. 250-300m [my guess] of cable in total, spread over 5 channels) the effect on the MAX31865 devices is worse, enough for errors to be reported. This is why I suggested (in a comment) looking at the Vcc supplies for the MAX31865 devices; that's just an initial place to start. Look for any measurable differences at the "controller board" when different numbers of sensors are connected.
That data point, that you don't get errors when only sensor 5 is attached, is telling you something, but I'm not sure exactly what it means with the data given so far. Either gathering additional data and finding anomalies, or performing substitution tests (e.g. changing the cable type) and getting different results, will help.
I wasn't joking when I suggested making a diagram of the physical layout of the sensor wiring and potential interference sources. That may help you to understand the timing of when the MAX31865 reports errors, e.g. if that correlates with the operation times of specific interference sources.
It seems possible that the only way to resolve these errors, may be to rewire with suitable shielded, twisted conductor cabling. Depending on your budget, time pressures, availability of suitable shielded cabling etc., one option is that you could choose to take a risk, postpone additional investigation at this stage, and perform some testing with equivalent lengths of that different cable, laid temporarily in the same position as the existing sensor 5 cable. Get some measurements with the 'scope and see if there is an improvement in the magnitude of the induced voltage and, of course, see if you still get errors from the MAX31865.
Inevitably that approach has risks e.g. it might be a waste of time/money, but it would allow you to gather useful measurements, to see if the shielded, twisted conductor cable helped at all.
One final thought: the hypothesis is that your system is suffering from externally induced voltages, especially on sensor 5. When that voltage is large enough, it triggers the fault detection in the MAX31865, which sets bit D2 in the Fault Status Register. However, perhaps that induced voltage is not always large enough to trigger the fault detection, and instead sometimes only causes erroneous temperature readings. It would be interesting to know whether you are also seeing unexpected intermittent high or low readings (especially on sensor 5, but on the other sensors too), as well as the actual faults reported by the MAX31865.
I hope that review of the information, the suggestions for additional data gathering, and the hypotheses to consider are helpful. However, getting a conclusive answer to your question will require more work.
Best Answer
The MAX31865 seems like a very good choice. The datasheet does give more detailed accuracy specifications—the first page of a datasheet is generally just marketing material.
On page 3, we have some specifications of the ADC: full-scale error typically ±1 LSB, integral nonlinearity typically ±1 LSB, and offset error at most ±3 LSB. Therefore, the output of the ADC will typically be within 4 least-significant bits of the correct value. Since it's a 15-bit ADC, that's an error of \$\frac{4}{2^{15}}\$, or about 0.012%. Since the resistance of an RTD is roughly proportional to absolute temperature (about 2.73 K/Ω for a PT100 RTD, i.e. 273 K per 100 Ω), an error of 0.012% at 273 K corresponds to a temperature error of 0.012% × 273 K ≈ 0.033 K. At a higher temperature, the absolute error would be proportionately larger.
We also have some graphs of accuracy in the datasheet, at the bottom of page 6. These give a more detailed picture, and we can see that our previous estimate of a typical error of about 0.033 K is not too far off, though the absolute error does not seem to grow linearly with resistance.
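Redoing the arithmetic with a little less rounding (a sketch, using the same rough 2.73 K/Ω slope):

```python
# Sketch of the error estimate above: a 4 LSB typical ADC error on a
# 15-bit converter, converted to kelvin via the rough 2.73 K/ohm slope
# (resistance taken as roughly proportional to absolute temperature).
adc_bits = 15
lsb_error = 4                          # full-scale + INL + offset, typical
frac_error = lsb_error / 2**adc_bits   # fraction of full scale
temp_error = frac_error * 273          # kelvin, evaluated at 0 degC (273 K)
print(f"{frac_error*100:.3f}% -> {temp_error:.3f} K")
```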
Overall, the 0.5 K worst-case error from page 1 of the datasheet seems very conservative. The actual error is likely to be no more than the error inherent to the RTD, if you do indeed go with a 1/10 DIN RTD, which has an error of at most ±0.07 K from -60 to 50 degrees Celsius.
If you follow the datasheet recommendations, though, you'll introduce another source of error: self-heating. The datasheet suggests a 400 Ω reference resistor for a PT100, which results in about 4 mA through the RTD (the internally generated 2 V bias across the roughly 500 Ω series total). This is about an order of magnitude above the recommended current for a PT100, so depending on what RTD probe you pick, you may want to use a reference resistor of about 5 kΩ instead.
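A quick comparison of the two reference-resistor choices (2 V bias, Pt100 at 0 °C, values as discussed above):

```python
# Sketch: excitation current and self-heating power for the two reference
# resistor choices discussed, with a 2 V bias and a Pt100 at 0 degC.
V_BIAS = 2.0      # volts
R_RTD = 100.0     # Pt100 resistance at 0 degC, ohms

for r_ref in (400.0, 5000.0):
    i = V_BIAS / (r_ref + R_RTD)      # series current through the RTD
    p = i * i * R_RTD                 # power dissipated in the sensor
    print(f"Rref = {r_ref:5.0f} ohm: I = {i*1e3:.2f} mA, "
          f"self-heating = {p*1e6:.1f} uW")
```

Going from 400 Ω to 5 kΩ cuts the dissipation in the sensor by roughly a factor of 100, at the cost of a tenfold smaller excitation current and hence more susceptibility to noise.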
The advantage of a smaller reference resistor (and therefore a higher current) is better noise immunity, so there is a tradeoff. For use as a reference thermometer, you can afford to average the noise out over a very long time, so a large resistor makes sense. In an industrial application, you might have a lot of noise, but also a very large measurement sample that can absorb the generated heat effectively, so there the smaller resistor would make sense.
Regardless of the choice of reference resistor, its tolerance is very important, as it contributes directly to the final error. Using a 0.1% resistor means that you can never do better than 0.1% error (0.273 K at 0 degrees Celsius), so you may want to splurge on a 0.01% resistor.
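For completeness, a sketch of how the reference-resistor tolerance maps to a temperature error at 0 °C, using the same rough 2.73 K/Ω slope as above:

```python
# Sketch: reference-resistor tolerance mapped to temperature error at
# 0 degC, using the rough 2.73 K/ohm slope from the answer above.
R0 = 100.0          # Pt100 resistance at 0 degC, ohms
K_PER_OHM = 2.73    # rough sensitivity used in the text

for tol in (0.001, 0.0001):          # 0.1% and 0.01% resistors
    dr = tol * R0                    # worst-case resistance error, ohms
    print(f"{tol*100:.2f}% tolerance -> {dr*K_PER_OHM:.4f} K at 0 degC")
```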