What isn't clear (to me) is the technology to implement "internal" routing similar to that provided by VMware, especially in a way that the frame (physical box) OS can participate in.
I am moving from VMware Server at home to CentOS/QEMU ... (I use VMware ESX at work). What I have outlined below worked fine under VMware (CentOS 5.2, VMware Server 2.0, VMs running CentOS 5.2 and WinXP).
I am provisioning my frame (physical box is x86_64, an Intel i5) with 3 NICs (internet, dmz, intranet). The frame is running CentOS 6.2 64-bit with qemu-kvm as the
frame OS (CentOS 6.2), and various VMs (all running CentOS 6.2) providing services: 2 DNS servers, 1 HylaFAX gateway, 1 Postfix mail server, 1 Apache/Tomcat web server.
No NAT, each needs its static IP for the intranet & internet, several also need static dmz IP addresses
I also have read your docs, as well as many others. There does not appear to be a description that provides:
(1) host and multiple-VM access through a given NIC, where both the host and the VMs have "their own" static, routable IPs in the respective network's CIDR range (no NAT)
(2) an example showing how the physical box can still carry traffic on a NIC once that NIC is promiscuous and part of a bridge.
Let's start with a simple example that I think will cover all the bases:
one nic on the CentOS 6 frame (qemu-kvm), eth0 (intranet)
two VMs, both CentOS 6, both needing static IP addresses on the intranet
Intranet is 184.108.40.206/255.255.255.0
br0: 220.127.116.11 (intranet)
tun created (not quite sure what to do with it in the above context)
The VMs have br0 as their NIC
Network manager off, network on
I created br0, set eth0 to promiscuous mode with no IP, configured br0 with the IP of the "old" eth0, added eth0 to br0, pointed the VMs at br0 as their network interface, assigned static IPs to the VMs, and generated unique hardware addresses. Boot any system and the network will not "come up".
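For concreteness, here is roughly what I ended up with in /etc/sysconfig/network-scripts (reconstructed from memory, so details may be off):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
# physical NIC: no IP of its own, just a bridge port
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br0
# the bridge carries the host's "old" eth0 address
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=220.127.116.11
NETMASK=255.255.255.0
NM_CONTROLLED=no
# no GATEWAY line here -- possibly my missing default route?
```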
The CentOS frame cannot access the net and does not route through br0 ("no route to host") when pinging an IP in the br0 CIDR. I would have thought the frame's CentOS would use br0 as its interface, but netstat shows no br0 routes (I may have missed configuring the default route, but I found nothing pointing at eth0 in /etc/sysconfig or its subdirectories).
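For what it's worth, this is how I have been checking state on the frame (commands only, since the box is currently mid-rebuild):

```sh
# show which ports are enslaved to the bridge (eth0 should be listed)
brctl show br0
# confirm that br0, not eth0, holds the IP address
ip addr show br0
# look for a default route and a connected route out of br0
ip route
```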
Do I need a 2nd NIC because the physical frame's CentOS cannot participate in a bridge?
What is the relationship between br0 and tun0? Do I need a tun1 if I have a br1?
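From what I've read (treat this as an assumption on my part, not a statement of fact), each running guest gets its own tap device that is enslaved to the bridge, so the tap belongs to a guest rather than pairing one-to-one with a brN. Done by hand it would look something like:

```sh
# create a tap device for one guest (name vnet0 is arbitrary;
# tunctl is from the tunctl/uml-utilities package, or newer
# iproute2 can do the same with: ip tuntap add dev vnet0 mode tap)
tunctl -t vnet0
ip link set vnet0 up
# enslave the tap to the existing bridge; the guest's NIC
# backend then attaches to vnet0
brctl addif br0 vnet0
```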
Is it possible that this functionality just does not yet work in QEMU/KVM?
I really don't want to load the system up with a pair of 4 port cards if I don't have to, but ...
While I am wishing ... I would *like* to have one set of firewall rules on the frame's CentOS that handles routing for all the guests. Is there a way to identify which VM is the source or destination, or do I have to manage the rules on br0 and have each VM manage its own interface?
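If the per-guest tap devices are visible to netfilter, I imagine (untested, and assuming the physdev match module is available and bridged traffic is passed to iptables via net.bridge.bridge-nf-call-iptables=1) that per-VM rules on the frame could look like:

```sh
# allow one guest, attached via a hypothetical tap named vnet0,
# to send traffic across the bridge
iptables -A FORWARD -m physdev --physdev-in vnet0 -j ACCEPT
# and to receive the replies
iptables -A FORWARD -m physdev --physdev-out vnet0 \
    -m state --state ESTABLISHED,RELATED -j ACCEPT
```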