Each OS can do this differently; exactly how it happens is up to the OS designers.
RFC 826, "An Ethernet Address Resolution Protocol -- or -- Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware", gives you a general outline of what is supposed to happen, but an OS could skip the table creation entirely and send an ARP request for every packet.
Packet Generation:

As a packet is sent down through the network layers, routing determines the protocol address of the next hop for the packet and on which piece of hardware it expects to find the station with the immediate target protocol address. In the case of the 10Mbit Ethernet, address resolution is needed and some lower layer (probably the hardware driver) must consult the Address Resolution module (perhaps implemented in the Ethernet support module) to convert the <protocol type, target protocol address> pair to a 48.bit Ethernet address. The Address Resolution module tries to find this pair in a table. If it finds the pair, it gives the corresponding 48.bit Ethernet address back to the caller (hardware driver) which then transmits the packet. If it does not, it probably informs the caller that it is throwing the packet away (on the assumption the packet will be retransmitted by a higher network layer), and generates an Ethernet packet with a type field of ether_type$ADDRESS_RESOLUTION.

The Address Resolution module then sets the ar$hrd field to ares_hrd$Ethernet, ar$pro to the protocol type that is being resolved, ar$hln to 6 (the number of bytes in a 48.bit Ethernet address), ar$pln to the length of an address in that protocol, ar$op to ares_op$REQUEST, ar$sha with the 48.bit ethernet address of itself, ar$spa with the protocol address of itself, and ar$tpa with the protocol address of the machine that is trying to be accessed. It does not set ar$tha to anything in particular, because it is this value that it is trying to determine. It could set ar$tha to the broadcast address for the hardware (all ones in the case of the 10Mbit Ethernet) if that makes it convenient for some aspect of the implementation. It then causes this packet to be broadcast to all stations on the Ethernet cable originally determined by the routing mechanism.
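To make the quoted field-setting concrete, here is a minimal Python sketch (not any particular OS's implementation) that packs an ARP request for IPv4 over Ethernet; the addresses are hypothetical examples, and the comments map each value back to the RFC's ar$ names:

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Pack an ARP request per RFC 826 for IPv4-over-Ethernet."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # ar$hrd: ares_hrd$Ethernet
        0x0800,       # ar$pro: protocol type being resolved (IPv4)
        6,            # ar$hln: 48.bit Ethernet address = 6 octets
        4,            # ar$pln: IPv4 address = 4 octets
        1,            # ar$op:  ares_op$REQUEST
        sender_mac,   # ar$sha: our own Ethernet address
        sender_ip,    # ar$spa: our own protocol address
        b"\x00" * 6,  # ar$tha: unknown; could also be all-ones broadcast
        target_ip,    # ar$tpa: the address we are trying to resolve
    )

# Hypothetical example: who-has 192.0.2.2, tell 192.0.2.1
packet = build_arp_request(
    bytes.fromhex("02005e000001"), bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2])
)
# The driver would broadcast this in an Ethernet frame whose type field
# is ether_type$ADDRESS_RESOLUTION (0x0806).
```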
Suppose I'm receiving IPv4 UDP packets whose payload is less than 18 octets, so when they are transmitted over Ethernet they have some trailing padding. The equipment generating said packets includes the length of the padding in the IP header's total length field. So, for example, a packet with a 7-octet-long payload will not have an IP total length of 35, but rather 46.
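To spell out the arithmetic (assuming a 20-octet IPv4 header, an 8-octet UDP header, and Ethernet's 46-octet minimum payload), this is the behavior I'm observing, sketched in Python:

```python
ETH_MIN_PAYLOAD = 46      # minimum Ethernet payload; shorter frames get padded
IP_HDR, UDP_HDR = 20, 8   # assuming no IP options

payload = 7
correct_total_length = IP_HDR + UDP_HDR + payload                    # 35
observed_total_length = max(correct_total_length, ETH_MIN_PAYLOAD)   # 46
```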
Where did you get that idea?
Ethernet is a layer-2 protocol; it can carry any number of layer-3 protocols (IPv4, IPX, IPv6, AppleTalk, etc.), and it knows nothing about their headers, so it has no way to change a field in a layer-3 header.
Conversely, a layer-3 protocol has no idea which layer-2 protocol (Ethernet, Wi-Fi, Token Ring, Frame Relay, ATM, PPP, etc.) carries its packets.
The Ethernet padding belongs to the Ethernet frame at layer 2, not to the layer-3 packet.
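For illustration, here is a minimal sketch (with a hypothetical helper name) of what a correct sender does: the padding is appended to the frame payload at layer 2, and the IPv4 header above it is never touched:

```python
ETH_MIN_PAYLOAD = 46

def pad_frame_payload(ip_packet: bytes) -> bytes:
    """Pad a short layer-3 packet up to Ethernet's minimum payload.

    Layer 2 neither parses nor edits the IPv4 header; the receiving IP
    layer uses the header's total length field to discard the padding.
    """
    pad = max(0, ETH_MIN_PAYLOAD - len(ip_packet))
    return ip_packet + b"\x00" * pad

# A 35-octet IPv4 packet: version/IHL 0x45, total length field = 35 (0x0023)
ip_packet = bytes([0x45, 0x00, 0x00, 0x23]) + b"\x00" * 31
framed = pad_frame_payload(ip_packet)
assert len(framed) == 46   # 11 octets of layer-2 padding were appended
# The total length field inside ip_packet still reads 35.
```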
Edit:
You completely changed the meaning of the question, which is very bad form, especially after you already got an answer to the original question. You should post a new question instead of changing the original.
The device in the middle that changes the IPv4 total length field must also change the IPv4 header checksum. The UDP checksum (if it is used; it is optional with IPv4 and often omitted) is not computed over the IPv4 total length field or header checksum, so it would not change.
If the IPv4 total length field is changed, then IPv4 will deliver its original payload (the UDP datagram) plus the padding to UDP.
The UDP header has its own length field. If the device does not modify it, the UDP length will be correct and UDP will deliver the correct number of octets (not including the padding) to the application. But if the device also changes the UDP length field (which would require recomputing the UDP checksum), UDP will deliver the original payload plus the padding to the application, possibly causing a problem for the application.
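As a sketch of why the rewrite cascades: the IPv4 header checksum is the standard Internet checksum (one's-complement sum of 16-bit words, per RFC 1071) over the header, and the header includes the total length field, so a device that edits that field has to recompute the checksum, roughly like this:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total > 0xFFFF:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_total_length(ip_header: bytes, new_length: int) -> bytes:
    """Change the total length field, then recompute the header checksum."""
    hdr = bytearray(ip_header)
    struct.pack_into("!H", hdr, 2, new_length)  # total length at offset 2
    struct.pack_into("!H", hdr, 10, 0)          # zero the checksum field first
    struct.pack_into("!H", hdr, 10, internet_checksum(bytes(hdr)))
    return bytes(hdr)
```

The UDP checksum covers neither of these IPv4 header fields, which is why it is unaffected unless the device also edits the UDP length field.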
Best Answer
Ethernet has its own checksum, and it has nothing to do with IP, TCP, or UDP. Neither TCP nor IPv6 has anything to do with the UDP checksum. UDP on the source will create the checksum, and UDP on the destination will verify it.
I think you don't really understand the network stack layers.
Layer-2 protocols, e.g. Ethernet, Wi-Fi, etc., may use a checksum. In general, layer-2 protocols will drop any layer-2 frame with a bad checksum anywhere along the layer-2 path. For instance, a switch will discard an Ethernet frame with a bad checksum. Layer-2 protocols don't care which layer-3 or layer-4 protocols are carried in their frames, nor are they aware of any layer-3 or layer-4 checksums.
In layer-3, IPv4 has a header checksum that layer-3 devices, e.g. routers or hosts, will inspect to verify the integrity of the IPv4 header, discarding any layer-3 packets with a bad header checksum. IPv6 has done away with the IPv4 header checksum. Layer-3 protocols do not care which layer-2 protocol carries their layer-3 packets, nor which layer-4 protocols they carry. Neither are they aware of any layer-2 or layer-4 checksums.
Layer-4 protocols, e.g. TCP, UDP, etc., may have a checksum. With IPv4 the UDP checksum is optional, but it is mandatory with IPv6. A layer-4 protocol will inspect its own checksum, and it will discard any datagrams with bad layer-4 checksums. Layer-4 protocols are unaware of any layer-2 or layer-3 checksums.
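To illustrate the layer-4 case, here is a sketch of how the UDP checksum is computed for IPv4: a pseudo-header (source and destination IP addresses, protocol number, UDP length) is prepended to the UDP header and payload, and the same one's-complement sum as in the earlier sketch is taken; no layer-2 or layer-3 checksum ever enters the calculation.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement sum, same as in the earlier sketch."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """UDP checksum over the IPv4 pseudo-header + UDP header + payload.

    The checksum field inside udp_segment must be zero when this is called.
    """
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 17, len(udp_segment))
    csum = internet_checksum(pseudo + udp_segment)
    return csum or 0xFFFF  # 0 means "no checksum" in IPv4, so 0 is sent as 0xFFFF

# Hypothetical example: a 4-octet payload, checksum field zeroed in the header
seg = struct.pack("!HHHH", 5353, 5353, 8 + 4, 0) + b"ping"
print(hex(udp_checksum(bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]), seg)))
```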