Yes. Yes it is possible. The network has to be able to handle all the packets without exception (packet loss means death), the machines have to be able to send packets fast enough, the other end must not have "flooding protection", and the mapped address must not change every time you open a new connection. This method managed to connect my laptop behind my university's WiFi to my friend's laptop behind his WiFi, and it sent a packet from my home WiFi through my university's symmetric NAT. If you're using a phone, you should probably send packets as fast as possible because phones are slow; with laptops you can pace the barrage of UDP packets with Thread.sleep().
/**
 * Shoots empty UDP packets at every port between "start" (inclusive) and
 * "end" (exclusive). Relies on three fields of the enclosing class: a
 * java.util.Random "generator", a DatagramChannel "myChannel", and a
 * boolean "made_connection" set by the receiving thread.
 */
public static void sendBarrage(int start, int end, InetAddress target) {
    final ByteBuffer to_send = ByteBuffer.allocate(0);
    // Start at a random value to avoid re-traversing failed paths on repeated attempts.
    final int starting_point = start + generator.nextInt(end - start);
    // Go from starting_point down to start.
    for (int port = starting_point; port >= start; --port) {
        try {
            if (made_connection) {
                break;
            }
            myChannel.send(to_send, new InetSocketAddress(target, port));
        } catch (java.nio.channels.ClosedChannelException cce) {
            Application.printerr("Channel closed while on port: " + port);
            break;
        } catch (IOException e) {
            Application.printerr("Error sending on port: " + port);
        }
    }
    // Go from end - 1 down to, but not including, starting_point.
    for (int port = end - 1; port > starting_point; --port) {
        try {
            if (made_connection) {
                break;
            }
            myChannel.send(to_send, new InetSocketAddress(target, port));
        } catch (java.nio.channels.ClosedChannelException cce) {
            Application.printerr("Channel closed while on port: " + port);
            break;
        } catch (IOException e) {
            Application.printerr("Error sending on port: " + port);
        }
    }
}
^ Make sure you call this method in a loop; it might take a few attempts. Keep in mind that this method does NOT work with cell phone tower networks (unless you get insanely lucky and the public IP address does not change). It might work better if one side holds its port open while the other guesses, rather than having both sides brute-force each other.
Also, if only one side holds the ports open, this method might be good: https://www.goto.info.waseda.ac.jp/~wei/file/wei-apan-v10.pdf
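To make the traversal concrete, here is a hypothetical helper (the class and method names are mine, not from the snippet above) that computes the port order sendBarrage walks: down from a random starting point to start, then from end - 1 down to just above the starting point, hitting every port exactly once:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the traversal order used by sendBarrage above.
public class BarrageOrder {
    public static List<Integer> portOrder(int start, int end, int startingPoint) {
        List<Integer> order = new ArrayList<>();
        // First leg: startingPoint down to start, inclusive.
        for (int port = startingPoint; port >= start; --port) order.add(port);
        // Second leg: end - 1 down to, but not including, startingPoint.
        for (int port = end - 1; port > startingPoint; --port) order.add(port);
        return order;
    }
}
```

Because every port in [start, end) appears exactly once, repeated calls with fresh random starting points re-cover the whole range while starting somewhere new each attempt.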
Remember, bits arrive on a NIC as a series of 1's and 0's. Something has to exist to dictate how the next series of 1's and 0's should be interpreted.
Ethernet2 is the de facto standard for L2; as such, a node interprets the first 56 bits as the Preamble, the next 8 bits as the Start Frame Delimiter, the next 48 bits as the Destination MAC, the next 48 bits as the Source MAC, and so on and so forth.
The only variation might be the somewhat antiquated 802.3 L2 header, which predates the current Ethernet2 standard but included a SNAP header that served the same purpose. But I digress.
The standard Ethernet2 L2 header has a Type field, which tells the receiving node how to interpret the 1's and 0's that follow.
Without this, how would the receiving entity know whether the L3 header is IPv4 or IPv6? (or AppleTalk, or IPX, etc...)
The L3 header (of the same frame as above) has the Protocol field, which tells the receiving node how to interpret the next set of 1's and 0's that follow the IP header.
Again, without this, how would the receiving entity know to interpret those bits as an ICMP packet? It could also be TCP, or UDP, or GRE, or another IP header, or a plethora of others.
This creates a sort of protocol chain that indicates to the receiving entity how to interpret each next set of bits. Without it, the receiving end would have to use heuristics (or some similar strategy) to first identify the type of header, and only then interpret and process the bits, which would add significant overhead at each layer and noticeable delay in packet processing.
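The chain can be walked directly off the raw bytes. This is a simplified sketch (my own illustration, not a full parser) that assumes an untagged Ethernet2 frame and a 20-byte IPv4 header: the EtherType sits at frame bytes 12-13, and the IPv4 Protocol field at byte 9 of the IP header, i.e. frame byte 23:

```java
// Walking the "protocol chain" of a raw Ethernet2 frame.
// Assumes no 802.1Q tag and IHL = 5 (20-byte IPv4 header).
public class ProtocolChain {
    // EtherType: the two bytes after the 6-byte destination and source MACs.
    public static int etherType(byte[] frame) {
        return ((frame[12] & 0xFF) << 8) | (frame[13] & 0xFF);
    }

    // IPv4 Protocol field: byte 9 of the IP header = frame byte 23.
    public static int ipProtocol(byte[] frame) {
        return frame[23] & 0xFF;
    }

    public static String describe(byte[] frame) {
        int type = etherType(frame);
        if (type == 0x0800) {            // EtherType 0x0800 = IPv4
            switch (ipProtocol(frame)) {
                case 1:  return "IPv4/ICMP"; // protocol 1
                case 6:  return "IPv4/TCP";  // protocol 6
                case 17: return "IPv4/UDP";  // protocol 17
                default: return "IPv4/other";
            }
        } else if (type == 0x86DD) {     // EtherType 0x86DD = IPv6
            return "IPv6";
        }
        return "non-IP";
    }
}
```

Each lookup is a fixed-offset read precisely because the previous header declared what comes next; no guessing is required.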
At this point, it's tempting to look at the TCP header or UDP header and point out that those headers don't have a Type or Protocol field... but recall, once TCP/UDP have interpreted the bits, they pass the payload to the application, which almost certainly has some sort of marker to at least identify the version of the L5+ protocol. For example, HTTP has a version number built into the HTTP requests: (1.0 vs 1.1).
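As a tiny illustration of that in-band marker (my own example, not part of any real HTTP library): an HTTP/1.x request line carries its version as the last token, so the application can identify it without any help from TCP:

```java
// The application layer identifies its own version in-band:
// an HTTP/1.x request line ends with the protocol version token.
public class HttpVersion {
    public static String versionOf(String requestLine) {
        String[] parts = requestLine.trim().split(" ");
        return parts[parts.length - 1]; // e.g. "HTTP/1.0" or "HTTP/1.1"
    }
}
```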
Edit to speak to the original poster's edit:
What's wrong with a model where instead of every header identifying the next header's type, every header identifies its own type in a predetermined location (eg. in the first header byte)? Why is such a model any less desirable than the current one?
Before getting into my attempt at an answer, I think it's worth noting that there is probably no definitive, million-dollar answer as to why one way is better than the other. In both cases, a protocol identifying itself vs a protocol identifying what it encapsulates, the receiving entity would be able to interpret the bits correctly.
That said, I think there are a few reasons why the protocol identifying the next header makes more sense:
#1
If the standard were for the first byte of every header to identify itself, this would be setting a standard across every protocol at every layer, which means if only one byte were dedicated we could only ever have 256 protocols. Even if you dedicated two bytes, that caps you at 65536. Either way, it puts an arbitrary cap on the number of protocols that could be developed.
Whereas if each protocol is only responsible for identifying the next, then even with only one byte dedicated to each protocol identification field, you at least 'scale' that 256 maximum to each layer.
#2
Ordering a protocol's fields so that a receiving entity can inspect only the bare minimum needed to make a decision is only possible if the next-protocol field lives in the previous header. Ethernet2 and "cut-through" switching come to mind. This would be impossible if the first (few) bytes of every header were forced to be a protocol identification block.
#3
Lastly, I don't want to take credit, but I think @reirab's answer in the comments of the original question is extremely viable:
Because then it's effectively just the last byte of the IPv4 (or whatever lower-level protocol) header in all but name. It's a "chicken or egg" problem. You can't parse a header if you don't know what protocol it is.
Quoted with Reirab's permission
Best Answer
That is correct. IP fragments will contain only IP fields. The TCP/UDP header will be in the first fragment only. So, you'd have to collect the entire datagram (from all the fragments) to apply any L4 rules to it, or track the entire session to apply the same rule to all the fragments. Cisco calls this virtual fragment reassembly. Some/many firewalls simply don't bother, and instead block all fragments. ("it's the only way to be sure" :-))
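The arithmetic makes the point: fragment payloads are sized from the MTU (rounded down to 8-byte units), so only the fragment at offset 0 carries the TCP/UDP header. A sketch under those assumptions (a 20-byte IP header, no options; class and method names are mine):

```java
// Why only the first fragment has the L4 header: fragments split the
// original payload at MTU-sized boundaries, and the TCP/UDP header is
// just the first bytes of that payload, landing in the offset-0 fragment.
public class FragmentMath {
    // Byte offsets of each fragment's payload within the original datagram,
    // assuming a 20-byte IP header (no options). Every fragment except the
    // last must carry a multiple of 8 payload bytes.
    public static int[] fragmentOffsets(int payloadBytes, int mtu) {
        int perFragment = ((mtu - 20) / 8) * 8; // round down to 8-byte units
        int count = (payloadBytes + perFragment - 1) / perFragment;
        int[] offsets = new int[count];
        for (int i = 0; i < count; i++) offsets[i] = i * perFragment;
        return offsets;
    }
}
```

For a 4000-byte UDP payload over a 1500-byte MTU, the fragments start at offsets 0, 1480, and 2960; a firewall looking at the second or third fragment sees IP fields only and has no port numbers to match against.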
(FWIW, all my routers/firewalls are explicitly configured to drop fragments. If you cannot do PMTUd correctly, I have no desire to talk to you.)