What you're looking for is commonly called a "transmit hash policy" or "transmit hash algorithm". It controls the selection of a port from a group of aggregate ports with which to transmit a frame.
Getting my hands on the 802.3ad standard has proven difficult because I'm not willing to spend money on it. Having said that, I've been able to glean some information from a semi-official source that sheds light on what you're looking for. Per this presentation from the 2007 Ottawa, ON, CA IEEE High Speed Study Group meeting, the 802.3ad standard does not mandate particular algorithms for the "frame distributor":
This standard does not mandate any particular distribution algorithm(s); however, any distribution algorithm shall ensure that, when frames are received by a Frame Collector as specified in 43.2.3, the algorithm shall not cause a) Mis-ordering of frames that are part of any given conversation, or b) Duplication of frames. The above requirement to maintain frame ordering is met by ensuring that all frames that compose a given conversation are transmitted on a single link in the order that they are generated by the MAC Client; hence, this requirement does not involve the addition (or modification) of any information to the MAC frame, nor any buffering or processing on the part of the corresponding Frame Collector in order to re-order frames.
So, whatever algorithm a switch / NIC driver uses to distribute transmitted frames must adhere to the requirements as stated in that presentation (which, presumably, was quoting from the standard). There is no particular algorithm specified, only a compliant behavior defined.
Even though there's no algorithm specified, we can look at a particular implementation to get a feel for how such an algorithm might work. The Linux kernel "bonding" driver, for example, has an 802.3ad-compliant transmit hash policy that applies the following function (see bonding.txt in the Documentation/networking directory of the kernel source):
Destination Port = (((<source IP> XOR <dest IP>) AND 0xFFFF)
    XOR (<source MAC> XOR <destination MAC>)) MOD <ports in aggregate group>
This causes both the source and destination IP addresses, as well as the source and destination MAC addresses, to influence the port selection.
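To make the formula above concrete, here's a rough Python sketch of that style of layer 2+3 hash. The function name and helpers are my own for illustration; this is not the kernel's actual code, just the same arithmetic spelled out:

```python
# Illustrative sketch of a layer2+3-style transmit hash, modeled on the
# formula quoted from the Linux bonding documentation. Not kernel code.
import ipaddress

def hash_l23(src_ip: str, dst_ip: str, src_mac: str, dst_mac: str,
             num_ports: int) -> int:
    """Pick a port index for a frame; same inputs always yield the same port."""
    # XOR the two IP addresses and keep the low 16 bits
    ip_part = (int(ipaddress.ip_address(src_ip)) ^
               int(ipaddress.ip_address(dst_ip))) & 0xFFFF
    # XOR the two MAC addresses (parsed from colon-separated hex)
    mac_part = (int(src_mac.replace(":", ""), 16) ^
                int(dst_mac.replace(":", ""), 16))
    return (ip_part ^ mac_part) % num_ports
```

Because the hash is a pure function of the addresses, every frame of a given conversation lands on the same link, which is exactly how the ordering requirement from the standard is satisfied without any re-ordering at the collector.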
The destination IP address used in this type of hashing is the address present in the frame itself. Take a second to think about that. The router's IP address, in an Ethernet frame headed away from your server toward the Internet, isn't encapsulated anywhere in that frame. The router's MAC address is present in the frame's header, but the router's IP address isn't. The destination IP address encapsulated in the frame's payload will be the address of the Internet client making the request to your server.
A transmit hash policy that takes into account both source and destination IP addresses, assuming you have a widely varied pool of clients, should do pretty well for you. In general, more widely varied source and/or destination IP addresses in the traffic flowing across such an aggregated infrastructure will result in more efficient aggregation when a layer 3-based transmit hash policy is used.
Your diagrams show requests coming directly to the servers from the Internet, but it's worth pointing out what a proxy would do to the situation. If you're proxying client requests to your servers then, as chris mentions in his answer, you may create bottlenecks. If that proxy makes its requests from its own source IP address, instead of from the Internet client's IP address, you'll have far fewer possible "flows" in a strictly layer 3-based transmit hash policy.
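You can see the proxy effect with a quick simulation. This is a rough illustration (my own toy layer 3 hash, hypothetical example addresses, not kernel code) of why varied client IPs spread across the links while a single proxy source IP collapses everything onto one:

```python
# Toy demonstration: varied client source IPs spread over 4 ports,
# while a single proxy source IP pins every flow to one port.
import ipaddress
from collections import Counter

def l3_hash(src_ip: str, dst_ip: str, num_ports: int) -> int:
    # Strictly layer 3: hash only on the IP pair
    return ((int(ipaddress.ip_address(src_ip)) ^
             int(ipaddress.ip_address(dst_ip))) & 0xFFFF) % num_ports

server = "198.51.100.10"
clients = [f"203.0.113.{i}" for i in range(1, 101)]   # varied Internet clients
spread = Counter(l3_hash(c, server, 4) for c in clients)

proxy = "192.0.2.5"                                   # one proxy source IP
collapsed = Counter(l3_hash(proxy, server, 4) for _ in clients)

print(spread)     # flows land on all four ports
print(collapsed)  # every flow hashes to the same single port
```

With 100 distinct client addresses the counter shows traffic on all four ports; with the proxy, all 100 "flows" share one port, and the other three links sit idle.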
A transmit hash policy could take layer 4 information (TCP / UDP port numbers) into account, too, so long as it kept to the requirements in the 802.3ad standard. Such an algorithm is in the Linux kernel, as you reference in your question. Beware that the documentation for that algorithm warns that, due to fragmentation, traffic may not necessarily flow along the same path and, as such, the algorithm isn't strictly 802.3ad-compliant.
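A layer 3+4-style policy might look something like the following sketch (again my own illustrative code, not the kernel's exact implementation), where the TCP/UDP port pair is folded into the hash so that separate connections between the same two hosts can land on different links:

```python
# Hypothetical sketch of a layer3+4-style policy: fold source and
# destination TCP/UDP ports into the hash. Illustrative only.
import ipaddress

def hash_l34(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
             num_ports: int) -> int:
    ip_part = (int(ipaddress.ip_address(src_ip)) ^
               int(ipaddress.ip_address(dst_ip))) & 0xFFFF
    l4_part = src_port ^ dst_port          # mix in the layer 4 port pair
    return (ip_part ^ l4_part) % num_ports
```

Note the compliance caveat in action: IP fragments after the first carry no TCP/UDP header, so a real implementation must fall back to layer 3 fields for them, and those fragments can end up on a different link than the rest of the conversation.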
LACP is the Link Aggregation Control Protocol. It sets up link aggregation automatically and dynamically whenever more than one link is available and the other side speaks LACP as well. It is typically used for redundant server-to-switch interconnection, since a static link aggregation setup would break server connectivity for as long as the NIC drivers (where link aggregation is implemented) have not been loaded, effectively breaking pre-boot server management or network-boot capabilities.
For switch interconnects, usually a static setup is preferred - although I would consider it purely a matter of taste.
"Link aggregation" and "trunking" are usually used as synonyms. There is a defined IEEE standard for LA (802.3ad) and many proprietary vendor extensions have arisen before standardization, most of which have implementations even in newer switch models for backward compatibility reasons.
If you set up a link aggregation or trunk group (LAG/TG), you should define the same VLANs as members of the group for switches on both sides. You only should define more than one path (i.e. more than one LAG interconnection) between two switches if you a) know exactly what you are doing and b) have enabled STP on both connected switches.
If you just suspect a bandwidth bottleneck, use the port statistics counters on your switches to verify it - quite possibly the bandwidth usage will turn out fine and your problem is an entirely different one. Most switches have rather slow CPUs paired with fast ASICs that handle most of the processing without burdening the CPU. Some operations still eat CPU cycles, though; one quite "popular" culprit is the reception of broadcast or multicast packets. If your network generates a lot of broadcast/multicast traffic, merely processing and discarding those packets might saturate a switch's CPU beyond reason. Again, check the counters to see whether an excessive number of broadcasts is seen on the net.
Best Answer
Distributed Trunking was a means for the ProVision family of HP switches (35xx/54xx/62xx/66xx/82xx - not 38xx) to be seen as one switch. It still required two configurations, and some features on the ProVision switches are disabled when it is enabled. When Distributed Trunking first came out, it was server-to-switch only. It has since been extended to switch-to-switch.
HP now does proper stacking in the ProVision range with the 38xx series and the 2920 series. It requires a stacking module and stacking cables, but the switches are then seen as one device. One IP address. One virtual MAC address. A single configuration file. It supports Distributed Link Aggregation.
(BTW, I work for Hewlett Packard's networking division in a technical role.)