Your assumption that IPv4 is always encapsulated in Ethernet is flawed. Don't confuse the network layers. Ethernet, a layer-2 protocol, can carry any number of layer-3 protocols, not only IPv4. Conversely, IPv4, a layer-3 protocol, can be carried by any number of layer-2 protocols, and it doesn't care which. Some layer-2 protocols that carry IPv4 have larger MTUs than Ethernet does.
Ethernet and IPv4 were developed and released at about the same time, but by very different groups, and it was not obvious at the time that either would become the dominant protocol for its layer. Ethernet was a LAN protocol mostly used to carry IPX, while IPv4 was mostly used on WANs to connect large university computers.
Routers in the path can fragment IPv4 packets; IPv6 routers cannot fragment, but IPv6 specifies a minimum link MTU of 1280 bytes. There is also Path MTU Discovery (PMTUD), which discovers the smallest MTU along a path so that packet sizes can be adjusted to fit it before the packets are sent.
You are missing a network layer:
- Layer-1/2: ethernet
- Layer-3: IPv4 or IPv6
- Layer-4: UDP
The payload of the Ethernet frames will probably be either IPv4 or IPv6 packets. You need to check the EtherType field in the Ethernet frame header to determine what, specifically, the payload is (0x0800 for IPv4, 0x86DD for IPv6).
UDP datagrams will be the payload of either IPv4 or IPv6 packets. Check the Protocol field in the IPv4 header, or the Next Header field in the IPv6 header, to determine the payload of the IP packets; a value of 17 indicates UDP.
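As a sketch of that demultiplexing, the following Python snippet walks a raw Ethernet II frame (supplied as a bytes object) down to a UDP payload. It assumes no VLAN tags and no IPv6 extension headers; the field offsets follow the standard Ethernet II, IPv4, and IPv6 header layouts, with multi-byte fields read big-endian (network byte order):

```python
import struct

def identify_udp(frame: bytes):
    """Return a (description, payload) tuple for a raw Ethernet II frame."""
    # Ethernet II header: dst MAC (6 bytes), src MAC (6 bytes), EtherType (2 bytes)
    ethertype = struct.unpack(">H", frame[12:14])[0]
    if ethertype == 0x0800:                      # IPv4
        ip = frame[14:]
        ihl = (ip[0] & 0x0F) * 4                 # header length in bytes
        protocol = ip[9]                         # IPv4 Protocol field
        if protocol == 17:                       # UDP
            return "IPv4/UDP", ip[ihl:]
        return f"IPv4 protocol {protocol}", None
    if ethertype == 0x86DD:                      # IPv6
        ip = frame[14:]
        next_header = ip[6]                      # IPv6 Next Header field
        if next_header == 17:                    # UDP (no extension headers assumed)
            return "IPv6/UDP", ip[40:]           # fixed 40-byte IPv6 header
        return f"IPv6 next header {next_header}", None
    return f"EtherType 0x{ethertype:04X}", None
```

A real parser would also have to handle 802.1Q VLAN tags (EtherType 0x8100) and the IPv6 extension-header chain, but the two lookups above are the essential demultiplexing steps.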
As far as byte order goes, the IETF has defined a Network Byte Order:
1.1. Background and Motivation
The document "ON HOLY WARS AND A PLEA FOR PEACE" [IEN-137]
written in 1980 argues that the industry should settle on a single
byte order. Since then, the IETF has largely settled on a single byte
order known as "Network Byte Order" and this memo is intended to
record that rough consensus. Unfortunately, the "holy war" continues
among CPU manufacturers.
2. Definition of Network Byte Order
When a number is too large to fit in a single byte, multiple bytes are
used to encode that number. When such numbers are sent over a
byte-oriented protocol (e.g., TCP is 8-bit-byte oriented) an order for
the bytes must be selected so both ends interpret the numbers in the
same way independent of CPU architecture. When the bytes which make
up such multi-byte numbers are ordered from most significant byte to
least significant byte, that is called "network byte order" or "big
endian."
For example, take the unsigned hexadecimal number 0xFEEDFACE (decimal
4,277,009,102). If this is sent as a sequence of 8-bit bytes using
network byte order (big endian), the sequence would be: 0xFE, 0xED,
0xFA, 0xCE. In little endian (least significant byte to most
significant byte), this would be: 0xCE, 0xFA, 0xED, 0xFE.
For Ethernet and other IEEE LANs, the destination address comes first; for IPv4, IPv6, TCP, UDP, and other IETF protocols, the source address comes first.
On most hardware and platforms, the Ethernet checksum (the frame check sequence, FCS) is verified and stripped by the NIC before the frame is passed up to Wireshark. There is no way (and no real reason) to pass it up to higher layers, since the NIC handles it in hardware, unless the hardware or driver has been configured to behave otherwise. Refer to the Ethernet page on wiki.wireshark.org for more information.