The numbers like "0a:00.0" are the PCI bus addresses associated with the PCI slots. This mapping is consistent: a card in a given slot will always have the same PCI bus address.
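You can see these addresses with lspci; the first field of each line is the bus address (bus:device.function). The output below is illustrative, not from any particular machine:

    $ lspci
    0a:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
    0b:00.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller
    0c:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection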
The devices will be enumerated in the order they are seen, so if you remove a device the list will reshuffle, as you suggest. You may be able to change this behaviour with udev, but it's probably easier to create symlinks instead.
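For example, a udev rule can pin a stable symlink to a device by its PCI address, so the name follows the slot rather than the enumeration order. A minimal sketch, assuming a serial card at 0a:00.0 (the rule file and symlink names here are hypothetical):

    # /etc/udev/rules.d/70-slot-names.rules
    # Create /dev/serial-slot1 for the tty device on the card at PCI address 0000:0a:00.0
    SUBSYSTEM=="tty", KERNELS=="0000:0a:00.0", SYMLINK+="serial-slot1"

After reloading the rules (udevadm control --reload && udevadm trigger), /dev/serial-slot1 will point at whatever tty the kernel assigned to that slot.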
You can either empirically determine which PCI address maps to which slot (e.g., put a card in slot 1, record the bus address, repeat for each slot), or, if you are very lucky, the bus address to slot mapping in the output of "biosdecode" will actually be useful. On most of the motherboards I've seen it isn't: the slot names are duplicated, or they don't correspond to any logical ordering on the back of the case. However, once you have worked out the mapping yourself, it won't change.
At any rate, have a look through the output of biosdecode and perhaps dmidecode -t slot; you may find something useful there. Otherwise, make your mapping manually.
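When it does work, the slot table looks something like this (output is illustrative; field values vary by vendor):

    $ sudo dmidecode -t slot
    Handle 0x0017, DMI type 9, 17 bytes
    System Slot Information
            Designation: PCIe Slot 1
            Type: x8 PCI Express
            Current Usage: In Use
            ID: 1
            Bus Address: 0000:0a:00.0

The "Bus Address" field, when the vendor fills it in correctly, is exactly the slot-to-address mapping you're after.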
(Also, the PCI mapping can change: if you change your BIOS or BIOS options, devices may be enumerated differently. E.g., if an onboard USB controller shows up at 0b:00.0, and you have PCI devices showing up at 0a:00.0 and 0c:00.0, and you disable the USB controller, the 0c:00.0 device may be shifted down to 0b:00.0. Or it may not. Your mileage may vary.)
You're correct that you typically want to size your guest so it fits within a single NUMA node (narrow). If you do go wide in your current scenario (by memory or CPU), I agree that you're getting into a one-VM-per-host configuration.
It's hard to know whether narrow or wide is a better fit for the VM in question without knowing an awful lot about the SQL Server and its bottlenecks. But memory is generally very effective at alleviating IO pressure on SQL Server, so I think your long-term plan of adding enough memory to keep the guest narrow while still giving it more RAM is sound.
With your version of vSphere you do have the option to make your virtual machine NUMA aware, but it's a very specific configuration setting. It isn't done by simply setting the socket/processor ratio on the VM.
The advanced setting you are looking for is "numa.vcpu.maxPerVirtualNode". Your server has two physical sockets, each with 4 hyper-threaded cores, so set this value to 4. That will cause VMware to place 4 vCPUs on each socket.
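As a sketch, you can set this either through the VM's advanced configuration parameters in the vSphere client or directly in the .vmx file while the VM is powered off; the value below assumes the 4-cores-per-socket layout described above:

    numa.vcpu.maxPerVirtualNode = "4"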
It's enabled by default for guests with more than 8 vCPUs, so it wouldn't be on by default for yours. You will want to keep all the hosts configured the same way, as migrating to hosts with different NUMA configurations could be bad news.
It's hardwired and can't be changed via software.
What problem are you looking to solve?