Jumbo frames may make some difference but your throughput problems indicate a much more severe issue. You can enable Jumbo frames in ESXi but it requires the use of the vCLI command line tools - you can find specific instructions in this VMware ESXi config document.
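For reference, a rough sketch of the commands involved, using the vicfg-* tools that ship with the vCLI (exact syntax varies a little between ESXi versions, and the host name, vSwitch name, IP details and port group name below are just placeholders):

    # Set a 9000-byte MTU on the vSwitch carrying the VMkernel traffic
    vicfg-vswitch --server esxi-host --username root -m 9000 vSwitch1

    # Add/recreate the VMkernel NIC on its port group with a 9000-byte MTU
    vicfg-vmknic --server esxi-host --username root -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "VMkernel"

    # Note: the physical switch ports also need jumbo frames enabled end to end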
There are some possible causes.
Your data might be going out of and back into the ESXi host - in that case Converter would be copying the data from the VM inside the ESXi host back to the Management Interface via your physical network. Even so, given that it's a 100 Megabit uplink I'd still expect you to get a couple of Megabytes/sec rather than the 4 Megabit/sec you report.
Your ESXi host's NICs might not actually be negotiating 100Mbps/full duplex correctly with the switch - make sure both the switch port and the pNIC settings on the ESXi host are set correctly.
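As a quick check, something along these lines from the vCLI will show what the NICs have actually negotiated (host name and vmnic0 are placeholders; remember the switch port has to be configured to match whatever you force):

    # List the physical NICs with their negotiated speed and duplex
    vicfg-nics --server esxi-host --username root -l

    # If needed, force a NIC to 100/Full rather than relying on auto-negotiation
    vicfg-nics --server esxi-host --username root -s 100 -d full vmnic0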
Converter isn't terribly efficient in terms of throughput, but if you are using block-based disk copying (rather than file-level) it is acceptable - transfer rates better than 50% of the link bandwidth's maximum, say 4 MB/sec on a 100Mbps network or 40 MB/sec on GigE. If your copy is using file-level copying then things will be a lot slower.
All of this activity puts a fair amount of additional load on the disk subsystem your VMs are stored on. If you are running all of this off fairly slow storage (say a handful of SATA drives in RAID 5) then it's possible that the disks are thrashing, but a healthy storage setup shouldn't be stressed by this sort of thing.
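If you want to confirm whether storage is the bottleneck, resxtop (the vCLI equivalent of esxtop) will show disk latencies while the conversion is running. A quick pointer only - the thresholds are rough rules of thumb, and the host name is a placeholder:

    # Connect resxtop to the host, then press 'd' for the disk adapter view
    resxtop --server esxi-host --username root
    # Watch DAVG/cmd (device latency) and KAVG/cmd (kernel latency);
    # sustained DAVG in the tens of milliseconds suggests the storage is struggling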
I think the problem is with your virtual networking though - assuming that it is, you should consider the following:
If your ESXi Management Port is on the same virtual switch as your VMs' production network port group then the traffic should loop back internally within the virtual switch. If that's not happening, start by checking whether VLANs are configured on the ports/port groups, and whether your IP addressing is forcing the traffic to exit the switch before coming back in (e.g. the Management port is on a different subnet from the VM Network and you are relying on an external router to let them communicate). If you suspect your network isn't doing this correctly, put the source and target VMs on the same subnet as the Management port and connect them to a VM port group on the same vSwitch as the Management port; that way traffic between the various systems (the source, the Converter VM and the ESXi host) stays within the confines of the vSwitch. Move the VM port groups rather than messing with the Management port - if you make a mistake with that you'll have to go back to the ESXi host's physical console to fix it, and it's best not to take that risk.
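To see how your vSwitches, port groups, VLAN IDs and VMkernel interfaces are actually laid out, a quick sketch (host name is a placeholder):

    # List vSwitches with their port groups, VLAN IDs and uplinks
    vicfg-vswitch --server esxi-host --username root -l

    # List the VMkernel (Management) interfaces with their IP addresses and subnets
    vicfg-vmknic --server esxi-host --username root -l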
Also shut down as much as you can before you start, just in case something like a backup process is hogging the Management port's network bandwidth.
Thick provisioned disks are the default and give the best performance, but they are obviously less space efficient.
Thin disks have to grow as the VM writes to new areas of the disk, but reads and writes to already-allocated blocks work at the same speed as on thick disks; i.e. a thin disk that isn't growing performs the same as a thick disk.
What does slow the host and guests down is the growth operation itself, since the new blocks have to be allocated (and zeroed) before the write that triggered the growth can complete. So if you have a VM that's frequently growing its thin disk you will see quite a lot of write performance drops, and read drops too if you're interleaving reads with those writes.
In that situation thick disks make more sense, but for the majority of guests that only grow periodically it often makes a lot of sense to use thin disks and take the relatively minor performance hit on growth. If consistent performance matters more, stay with thick.
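If you do decide a particular guest is better off thick, you don't have to rebuild it - something along these lines works with vmkfstools from the host (a sketch only; the sizes and datastore paths are placeholders, and the VM should be powered off while you inflate its disk):

    # Create a new thin-provisioned 20GB disk
    vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/myvm/myvm_1.vmdk

    # Convert an existing thin disk to (eager-zeroed) thick in place
    vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk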