This feature (reporting the IP addresses of a guest's network interfaces) was requested a long time ago. libvirt now supports it through two new virsh commands: domifaddr and net-dhcp-leases.
Usage: domifaddr <domain> [interface] [--full] [--source lease|agent]
Example outputs:
virsh # domifaddr f20 --source agent
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 lo         00:00:00:00:00:00    ipv4         127.0.0.1/8
 -          -                    ipv6         ::1/128
 eth0       52:54:00:2e:45:ce    ipv4         10.1.33.188/24
 -          -                    ipv6         2001:db8:0:f101::2/64
 -          -                    ipv6         fe80::5054:ff:fe2e:45ce/64
 eth1       52:54:00:b1:70:19    ipv4         192.168.105.201/16
 -          -                    ipv4         192.168.201.195/16
 -          -                    ipv6         2001:db8:ca2:2:1::bd/128
 eth2       52:54:00:36:2a:e5    N/A          N/A
 eth3       52:54:00:20:70:3d    ipv4         192.168.105.240/16
 -          -                    ipv6         fe80::5054:ff:fe20:703d/64
virsh # domifaddr f20 --full
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:2e:45:ce    ipv6         2001:db8:0:f101::2/64
 vnet1      52:54:00:b1:70:19    ipv4         192.168.105.201/16
 vnet1      52:54:00:b1:70:19    ipv6         2001:db8:ca2:2:1::bd/128
 vnet3      52:54:00:20:70:3d    ipv4         192.168.105.240/16
virsh # domifaddr f20 eth0 --source agent --full
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 eth0       52:54:00:2e:45:ce    ipv4         10.1.33.188/24
 eth0       52:54:00:2e:45:ce    ipv6         2001:db8:0:f101::2/128
 eth0       52:54:00:2e:45:ce    ipv6         fe80::5054:ff:fe2e:45ce/64
For eth0, the IPv6 address is managed by libvirt, but the IPv4 address is not.
For eth1, the second IPv4 address was created using IP aliasing.
For eth2, no IP address has been configured yet.
For eth3, only IPv4 has been configured.
Addresses in fd00::/8 are private (unique local) IPv6 ranges, hence they are not visible through --source lease.
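If you want the same information programmatically, it is exposed through the virDomainInterfaceAddresses API. Below is a minimal sketch using the libvirt Python bindings; the connection URI qemu:///system and the domain name f20 are simply taken from the example above, and the qemu guest agent is assumed to be running inside the guest.

# Minimal sketch: fetch guest interface addresses via the libvirt Python
# bindings, mirroring `virsh domifaddr f20 --source agent`.
# Assumptions: domain is named 'f20' and the guest agent is running.
import libvirt

conn = libvirt.open('qemu:///system')           # connect to the local hypervisor
dom = conn.lookupByName('f20')                  # domain name taken from the example

# Query via the guest agent; use ..._SRC_LEASE to read the DHCP leases instead.
ifaces = dom.interfaceAddresses(
    libvirt.VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT)

for name, info in ifaces.items():
    for addr in info['addrs']:
        proto = 'ipv6' if addr['type'] == libvirt.VIR_IP_ADDR_TYPE_IPV6 else 'ipv4'
        print(f"{name:8} {info['hwaddr'] or '-':20} {proto:6} "
              f"{addr['addr']}/{addr['prefix']}")

conn.close()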
In a different scenario, the DHCP leases handed out by a libvirt network can be listed directly.
Usage: net-dhcp-leases <network> [mac]
virsh # net-dhcp-leases --network default6
 Expiry Time           MAC address         Protocol   IP address                 Hostname        Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
 2014-06-16 03:40:14   52:54:00:85:90:e2   ipv4       192.168.150.231/24         fedora20-test   01:52:54:00:85:90:e2
 2014-06-16 03:40:17   52:54:00:85:90:e2   ipv6       2001:db8:ca2:2:1::c0/64    fedora20-test   00:04:b1:d8:86:42:e1:6a:aa:cf:d5:86:94:23:6f:94:04:cd
 2014-06-16 03:34:42   52:54:00:e8:73:eb   ipv4       192.168.150.181/24         ubuntu14-vm     -
 2014-06-16 03:34:46   52:54:00:e8:73:eb   ipv6       2001:db8:ca2:2:1::5b/64    -               00:01:00:01:1b:30:c6:aa:52:54:00:e8:73:eb
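The same lease data can also be fetched through the virNetworkGetDHCPLeases API. Here is a rough sketch with the Python bindings; the network name default6 comes from the example above, and the loop just prints the fields the call returns.

# Rough sketch: list DHCP leases of a libvirt network via the Python bindings,
# roughly equivalent to `virsh net-dhcp-leases default6`.
import time
import libvirt

conn = libvirt.open('qemu:///system')
net = conn.networkLookupByName('default6')      # network name from the example

for lease in net.DHCPLeases():                  # pass a MAC address to filter
    proto = 'ipv6' if lease['type'] == libvirt.VIR_IP_ADDR_TYPE_IPV6 else 'ipv4'
    expiry = time.strftime('%Y-%m-%d %H:%M:%S',
                           time.localtime(lease['expirytime']))
    print(expiry, lease['mac'], proto,
          f"{lease['ipaddr']}/{lease['prefix']}",
          lease['hostname'] or '-', lease['clientid'] or '-')

conn.close()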
Qemu:
Qemu is a complete and standalone piece of software. You use it to emulate machines; it is very flexible and portable. It works mainly through a special 'recompiler' that transforms binary code written for one processor into code for another (say, to run MIPS code on a PPC Mac, or ARM code on an x86 PC).
To emulate more than just the processor, Qemu includes a long list of peripheral emulators: disk, network, VGA, PCI, USB, serial/parallel ports, etc.
KQemu:
In the specific case where both source and target are the same architecture (like the common case of x86 on x86), Qemu still has to parse the code to remove any 'privileged instructions' and replace them with context switches. To make this as efficient as possible on x86 Linux, there is a kernel module called KQemu that handles it.
Being a kernel module, KQemu is able to execute most code unchanged, replacing only the lowest-level ring0-only instructions. In that case, userspace Qemu still allocates all the RAM for the emulated machine, and loads the code. The difference is that instead of recompiling the code, it calls KQemu to scan/patch/execute it. All the peripheral hardware emulation is done in Qemu.
This is a lot faster than plain Qemu because most of the code runs unchanged, but ring0 code (most of the code in the VM's kernel) still has to be transformed, so performance still suffers.
KVM:
KVM is a couple of things: first it is a Linux kernel module—now included in mainline—that switches the processor into a new 'guest' state. The guest state has its own set of ring states, but privileged ring0 instructions fall back to the hypervisor code. Since it is a new processor mode of execution, the code doesn't have to be modified in any way.
Apart from the processor state switching, the kernel module also handles a few low-level parts of the emulation, like the MMU registers (used to handle virtual memory) and some parts of the PCI emulated hardware.
Second, KVM is a fork of the Qemu executable. Both teams work actively to keep the differences at a minimum, and there is steady progress in reducing them. Eventually, the goal is that Qemu should work anywhere, and if a KVM kernel module is available, it could be used automatically. But for the foreseeable future, the Qemu team focuses on hardware emulation and portability, while the KVM folks focus on the kernel module (sometimes moving small parts of the emulation there, if it improves performance) and on interfacing with the rest of the userspace code.
The kvm-qemu executable works like normal Qemu: allocates RAM, loads the code, and instead of recompiling it, or calling KQemu, it spawns a thread (this is important). The thread calls the KVM kernel module to switch to guest mode and proceeds to execute the VM code. On a privileged instruction, it switches back to the KVM kernel module, which, if necessary, signals the Qemu thread to handle most of the hardware emulation.
One of the nice things about this architecture is that the guest code is executed in a POSIX thread which you can manage with normal Linux tools. If you want a VM with 2 or 4 cores, kvm-qemu creates 2 or 4 threads, and each of them calls the KVM kernel module to start executing. The concurrency (if you have enough real cores) or the scheduling (if not) is handled by the normal Linux scheduler, keeping the code small and surprises limited.
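To see the thread-per-vCPU model for yourself, ordinary Linux tools are enough. The small sketch below just lists the threads of a running qemu process given its PID; thread names like 'CPU 0/KVM' are how recent QEMU versions label vCPU threads, so the exact names may differ on your build.

# Sketch: show that each guest vCPU is just a thread of the qemu process.
# Assumption: the PID of a running qemu-kvm process is passed on the command
# line; vCPU thread naming (e.g. 'CPU 0/KVM') varies between QEMU versions.
import os
import sys

pid = sys.argv[1]
task_dir = f'/proc/{pid}/task'

for tid in sorted(os.listdir(task_dir), key=int):
    with open(f'{task_dir}/{tid}/comm') as f:
        name = f.read().strip()
    print(tid, name)        # vCPU threads show up alongside I/O worker threads

Running it against a 4-vCPU guest should show four such threads among the main and I/O threads, each placed on a host core by the normal Linux scheduler like any other thread.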
Best Answer
Remove the autostart from the network definition; the bridge is already being started by the network manager, and the two configurations may conflict. virsh may report the interface as inactive even if it is up. Whatever you define your bridge as, it needs to be declared to libvirt, and you will need to ensure the network configuration is complete.
Alternatively, you can delete the external bridge definition, but that may cause issues for applications running on the host.
I prefer to define the bridge using the host tools. This ensures things aren't changed there and allows me to manage the networking with one set of tools.
I generally test the networking in stages.
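As a concrete illustration of declaring a host-managed bridge to libvirt, here is a sketch using the Python bindings. The bridge name br0 and the network name host-bridge are assumptions; substitute whatever your host tools actually created.

# Sketch: declare an existing host-managed bridge to libvirt so guests can use it.
# Assumptions: the host tools already created a bridge named 'br0'; the network
# name 'host-bridge' is arbitrary.
import libvirt

NET_XML = """
<network>
  <name>host-bridge</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkDefineXML(NET_XML)   # persistent definition
net.setAutostart(0)                    # per the advice above: no autostart
net.create()                           # start it now; the host still owns the bridge
conn.close()

Guests can then reference this network from their domain XML with an interface of type 'network', while the bridge itself stays under the control of the host's tools, in line with the approach above.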