Suppose I'm receiving IPv4 UDP packets whose payload is less than 18 octets, so when they are transmitted over Ethernet they have some trailing padding. The equipment generating said packets includes the length of the padding in the IP header's total length field. So, for example, a packet with a 7-octet-long payload will not have an IP total length of 35, but rather 46.
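For concreteness, the length arithmetic I'm describing can be sketched like this (a minimal Python sketch; the constants are the standard minimal IPv4 and UDP header sizes and the Ethernet minimum payload):

```python
# Standard sizes (assuming no IP options and no VLAN tag).
IP_HEADER = 20        # minimal IPv4 header
UDP_HEADER = 8        # fixed UDP header
ETH_MIN_PAYLOAD = 46  # Ethernet pads shorter payloads up to this

payload = 7  # octets of application data

correct_total_length = IP_HEADER + UDP_HEADER + payload  # 35
on_wire = max(correct_total_length, ETH_MIN_PAYLOAD)     # 46, after padding

print(correct_total_length, on_wire)  # 35 46
```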
Where did you get that idea?
Ethernet is a layer-2 protocol, and it doesn't know about the layer-3 protocol. Ethernet can carry any number of layer-3 protocols (IPv4, IPX, IPv6, AppleTalk, etc.), but it knows nothing about their headers, so it has no way to change a field in a layer-3 header.
Conversely, the layer-3 protocol has no idea which layer-2 protocol (Ethernet, Wi-Fi, Token Ring, Frame Relay, ATM, PPP, etc.) carries its packets.
The Ethernet padding is applied to Ethernet frames at layer 2, not to the layer-3 packets.
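A receiver (or capture tool) that wants the layer-3 packet without the layer-2 padding simply truncates the Ethernet payload to the IP total length. A minimal sketch, assuming an IPv4 packet with no VLAN tag (`strip_ethernet_padding` is a hypothetical helper name, not a standard API):

```python
import struct

def strip_ethernet_padding(eth_payload: bytes) -> bytes:
    # IPv4 total length is the 16-bit field at offset 2, network byte order.
    (total_length,) = struct.unpack_from("!H", eth_payload, 2)
    return eth_payload[:total_length]

# Build a dummy 35-octet packet whose header claims total length 35,
# then pad it to the 46-octet Ethernet minimum as the NIC would.
header = struct.pack("!BBH", 0x45, 0, 35) + bytes(16)  # ver/IHL, TOS, length, rest zeroed
packet = header + bytes(15)                            # 20-octet header + UDP hdr + data
frame_payload = packet + bytes(46 - len(packet))       # zero padding up to 46

assert len(frame_payload) == 46
assert strip_ethernet_padding(frame_payload) == packet
```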
Edit:
You completely changed the meaning of the question, which is very bad form, especially when you already got an answer to the original question. You should start a new question for a different question, not change the original question.
The device in the middle that changes the IPv4 header's total length field must also change the IPv4 header checksum. The UDP checksum (if it is used; it is optional for IPv4 and often omitted) is not computed over the IPv4 total length field or header checksum, so it would not change.
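What that middlebox rewrite entails can be sketched as follows (a hypothetical illustration; `set_total_length` and `ones_complement_sum` are made-up names, but the checksum is the standard RFC 1071 one's-complement sum over the header):

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Fold 16-bit big-endian words with end-around carry (RFC 1071)."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return total

def set_total_length(header: bytes, new_length: int) -> bytes:
    """Patch the total length field, then recompute the header checksum."""
    h = bytearray(header)
    struct.pack_into("!H", h, 2, new_length)   # total length at offset 2
    struct.pack_into("!H", h, 10, 0)           # zero the checksum field first
    struct.pack_into("!H", h, 10, ~ones_complement_sum(bytes(h)) & 0xFFFF)
    return bytes(h)

# A minimal 20-octet IPv4 header: version/IHL, TOS, length, id, flags/frag,
# TTL, protocol 17 (UDP), checksum 0, then source and destination addresses.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 35, 0, 0, 64, 17, 0,
                     bytes(4), bytes(4))
patched = set_total_length(header, 46)

# A valid IPv4 header sums to 0xFFFF when the checksum field is included.
assert ones_complement_sum(patched) == 0xFFFF
```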
If the IPv4 total length field is changed, then IPv4 will send its original payload (the UDP datagram) and the padding to UDP.
The UDP header has its own length field. If the device does not modify this field, the UDP length will be correct and UDP will deliver the correct number of octets (not including the padding) to the application. But if the device also changes the UDP length field (recomputing the UDP checksum), UDP will deliver the original UDP payload plus the padding to the application, possibly causing a problem for the application.
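The receiver-side behavior described above can be sketched like this (a hypothetical illustration; `udp_payload` is a made-up helper name):

```python
import struct

def udp_payload(ip_payload: bytes) -> bytes:
    """Extract the UDP payload using the UDP header's own length field.

    Sketch: ip_payload is what IP hands to UDP, which may include trailing
    Ethernet padding if a broken middlebox inflated the IP total length.
    """
    # UDP length (header + data) is the 16-bit field at offset 4.
    (udp_length,) = struct.unpack_from("!H", ip_payload, 4)
    return ip_payload[8:udp_length]  # skip 8-octet header, drop the padding

# An 8-octet UDP header claiming length 15 (8 header + 7 data), 7 octets of
# data, and 11 octets of padding that the inflated IP length let through:
header = struct.pack("!HHHH", 1234, 5678, 15, 0)  # src port, dst port, length, checksum
datagram = header + b"payload" + bytes(11)
assert udp_payload(datagram) == b"payload"
```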
Unless you have implemented transport-layer encryption, application-level encryption, or data signing, there is no way to tell whether anyone has manipulated your UDP datagram - in content or in size. Manipulation has to include recalculating the checksum, of course.
The checksum's only purpose is to provide transport integrity against accidental changes, i.e. the receiving IP stack can see whether the datagram has been damaged in transport and will then discard it - silently, as it's UDP.
If a lower layer uses a checksum - as Ethernet's link layer does - a damaged packet/frame has already been discarded along the way, long before reaching the destination.
Best Answer
If you mean the optional (for IPv4, but required for IPv6) UDP checksum, that is a 16-bit checksum, so many different datagrams larger than 16 bits necessarily map to the same checksum value. There is no guarantee that a UDP datagram matching the checksum is error-free, but the odds of an undetected error are very small, and many errors that would still match the checksum would also prevent the datagram from reaching its destination.
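The many-to-one nature of a 16-bit checksum is easy to demonstrate: the one's-complement sum is commutative over 16-bit words, so swapping two words leaves the checksum unchanged (a sketch; `rfc1071_checksum` is a hypothetical helper implementing the RFC 1071 sum, without the UDP pseudo-header):

```python
def rfc1071_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071 (no pseudo-header)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero octet
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

a = b"\x12\x34\x56\x78"
b = b"\x56\x78\x12\x34"  # same 16-bit words, swapped

assert a != b
assert rfc1071_checksum(a) == rfc1071_checksum(b)  # collision: checksum can't tell
```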
If the checksum indicates an error, then something is wrong somewhere, and it is almost always corruption in the datagram. Other possibilities include an incorrect or buggy checksum algorithm on the part of the sender or receiver.
If you mean a checksum in the application data, that further protects the data, but that is off-topic here.
There is also the possibility that bits get flipped in RAM or on a disk drive. It does happen, but not very often.
See RFC 768, User Datagram Protocol: