A vCPU can only be mapped to a single physical CPU. You can't take 4 physical CPUs and make a single vCPU that's 4x faster; it's just not how it works.
Hyper-V is limited to assigning 4 vCPUs to a VM (last I checked). If you need significant CPU power, go physical; there's no sense in adding virtualization overhead to something that CPU-intensive and that parallel in the first place.
Also, as Holocryptic notes, if you assign 4 vCPUs to a VM, that VM can't run until Hyper-V has acquired 4 free physical CPU cores to run them on. Depending on your configuration this could be a major stumbling block (e.g., on a 6-core machine with a bunch of 4-vCPU VMs, only one VM would ever run at a time, and the other two cores would essentially always go unused). However, according to Jake Oshins, this has never been true for any version of Hyper-V: unlike almost every other hypervisor, Hyper-V does not use gang scheduling for the CPU. Accordingly, if even one physical core is available, Hyper-V may use it to run a vCPU of a multi-CPU VM. (He also mentions that Hyper-V may not use all the physical cores available at a given time because of NUMA partitioning.)
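The difference between the two scheduling models can be made concrete with some toy arithmetic (this is an illustration of the concept, not Hyper-V internals; the core counts match the 6-core example above):

```python
# Toy comparison: how many cores stay busy under gang scheduling
# vs. per-vCPU scheduling (numbers from the 6-core example above).
PHYS_CORES = 6
NUM_VMS = 3        # several 4-vCPU VMs, all with runnable work
VCPUS_PER_VM = 4

# Gang scheduling: a VM runs only when ALL of its vCPUs get a core at once.
gang_vms_at_once = PHYS_CORES // VCPUS_PER_VM        # 6 // 4 = 1 VM
gang_busy_cores = gang_vms_at_once * VCPUS_PER_VM    # 4 cores busy, 2 idle

# Per-vCPU scheduling (what Hyper-V reportedly does): any runnable vCPU
# can take any free core, so every core can be put to work.
free_busy_cores = min(PHYS_CORES, NUM_VMS * VCPUS_PER_VM)  # all 6 busy

print(gang_busy_cores, free_busy_cores)  # 4 6
```

Under gang scheduling two cores would sit permanently idle in this configuration; without it, the hypervisor can keep the whole box busy.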
Side note: SQL doesn't necessarily use all the cores you can throw at it in the first place. It really depends on what you're using it for and how parallelizable the load is.
It is a trade-off. Say you have four physical cores and two VMs.
If you map two virtual cores to each VM, then each guest will pretty much see what it expects: two cores it has full control over, and it will more or less get that (assuming host load isn't high). You won't often confuse the guest's scheduler, which assigns a task to a core expecting it to run immediately, with a situation where the task doesn't run because no physical core is available. However, if one guest is CPU-starved while the other is idle, two physical cores will be twiddling their thumbs.
On the other hand, if you give each guest four virtual cores, each guest can use all the available CPU power when the other guest and the host don't need it. However, when there is load from another source, the guest's scheduler won't get the behavior it expects: some tasks will start immediately and some won't, in a way the guest's scheduler can't easily predict or deal with.
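The trade-off can be sketched with toy numbers (this is a simplification, assuming demand is measured in "cores' worth of work" and that contention is shared evenly):

```python
PHYS = 4  # four physical cores, as in the example above

def usable(demands, vcpus):
    """Cores' worth of CPU each guest actually gets (toy model)."""
    # Each guest is capped at its own vCPU count...
    wants = [min(d, v) for d, v in zip(demands, vcpus)]
    total = sum(wants)
    if total <= PHYS:
        return wants
    # ...and when demand exceeds the box, guests share proportionally.
    return [w * PHYS / total for w in wants]

# 2+2 split: a busy guest is capped at 2 cores even while the other idles.
print(usable([4, 0], [2, 2]))   # [2, 0] -> two physical cores sit idle

# 4+4 split: a lone busy guest can use the whole box...
print(usable([4, 0], [4, 4]))   # [4, 0]
# ...but when both guests are busy, they contend and each gets half.
print(usable([4, 4], [4, 4]))   # [2.0, 2.0]
```

The real cost of the 4+4 case isn't captured by the averages: it's the unpredictability the guest's scheduler sees, which the toy model above doesn't simulate.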
My recommendation is generally to put as many physical cores as you have in each VM that might ever be expected to need a lot of CPU. The exception would be if you have VMs that are critically dependent on latency. I would also reduce the core count on any "smaller" or "less important" VMs that share a physical box with more critical VMs.
The hypervisor assigns virtual cores to physical cores as part of its scheduling policy. There is no fixed mapping unless you specifically create one. (Which I don't recommend, except in the one specific case where you want to reserve a core for a latency-critical VM.)
Best Answer
There is no hard ratio between virtual and physical cores. Of course, the whole idea behind virtualization is that you overcommit resources (especially CPUs) so that expensive, power-hungry hardware doesn't sit idle, but how many vCPUs you can run on your hardware depends on your load.
Start with an overcommit factor of 4-8, monitor the load, and migrate virtual machines away as you see average usage climb above 70% of your total CPU capacity for prolonged periods of time (15-30 minutes), as this would indicate a CPU bottleneck.
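The rule of thumb above boils down to two small calculations; here is a sketch (the 15-minute window and the "every sample above threshold" rule are my reading of "prolonged periods", not an exact prescription):

```python
def vcpu_budget(physical_cores, overcommit=4):
    """Starting vCPU headroom at a given overcommit factor."""
    # e.g. 16 physical cores at 4x overcommit -> room for ~64 vCPUs
    return physical_cores * overcommit

def cpu_bottleneck(samples_pct, threshold=70.0, window=15):
    """samples_pct: per-minute average host CPU % over the last N minutes.

    Flag a bottleneck only if usage stayed above the threshold for the
    whole window; brief spikes shouldn't trigger a migration.
    """
    return len(samples_pct) >= window and all(s > threshold for s in samples_pct)

print(vcpu_budget(16))                       # 64
print(cpu_bottleneck([75] * 15))             # True  -> consider migrating VMs
print(cpu_bottleneck([75] * 10 + [40] * 5))  # False -> just a spike
```

The budget is a starting point only; the monitoring check is what actually tells you whether your hardware can sustain the load.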