Setting guest network
Guest (VM) networking in kvm is the same as in qemu, so you can also refer to other documentation about networking for qemu. This page explains how to configure the most common types of networking.
User Networking
Use case:
- You want a simple way for your virtual machine to access the host, the internet, or resources available on your local network.
- You don't need to access your guest from the network or from another guest.
- You are ready to take a huge performance hit.
- Warning: User networking does not support a number of networking features like ICMP. Certain applications (like ping) may not function properly.
Prerequisites:
- You need kvm up and running
- If you don't want to run as root, the user you want to use needs to have rw access to /dev/kvm
- If you want to be able to access the internet or a local network, your host system must be able to access the internet or the local network
Solution:
- Simply run your guest without specifying network parameters; by default this creates user-level (a.k.a. slirp) networking:
qemu-system-x86_64 -hda /path/to/hda.img
Notes:
- The IP address can be automatically assigned to the guest thanks to the DHCP service integrated in QEMU
- If you run multiple guests on the host, you don't need to specify a different MAC address for each guest
- The default is equivalent to this explicit setup:
qemu-system-x86_64 -hda /path/to/hda.img -netdev user,id=user.0 -device e1000,netdev=user.0
- The user.0 identifier above just ties the two halves (the -netdev backend and the -device NIC) together; you may use any identifier you wish, such as "n" or "net0".
- Use rtl8139 instead of e1000 to get an RTL8139-series NIC.
- You can still access a specific port on the guest using the "hostfwd" option. For example, if you want to copy a file from host to guest with scp, start the guest with "-device e1000,netdev=user.0 -netdev user,id=user.0,hostfwd=tcp::5555-:22". This forwards host port 5555 to guest port 22. After the guest has started, you can copy a file from host to guest with e.g. "scp -P 5555 file.txt root@localhost:/tmp". You can also connect using any other address of the host instead of localhost.
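For example, combining the pieces from the note above into one complete invocation (the disk image path and port 5555 are just placeholders):
# on the host, start the guest with the port forward in place
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=user.0 -netdev user,id=user.0,hostfwd=tcp::5555-:22
# once the guest is up, copy a file from host to guest over the forwarded port
scp -P 5555 file.txt root@localhost:/tmp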
Private Virtual Bridge
Use case:
- You want to set up a private network between 2 or more virtual machines. This network will not be visible to other virtual machines or to the real network.
Prerequisites:
- You need kvm up and running
- If you don't want to run as root, then the user needs to have rw access to /dev/kvm
- You need the following commands installed on your system, and if you don't want to run as root, the user you want to use needs to be able to sudo the following command:
/sbin/ip /usr/sbin/brctl /usr/sbin/tunctl
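One possible way to grant that sudo access is a sudoers entry along these lines (an illustrative sketch, not part of the original setup; "youruser" is a placeholder and the entry should be created with visudo):
# hypothetical /etc/sudoers.d/qemu-bridge entry, edited with visudo
youruser ALL=(root) NOPASSWD: /sbin/ip, /usr/sbin/brctl, /usr/sbin/tunctl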
Solution:
- You need to create a bridge, e.g.:
sudo /usr/sbin/brctl addbr br0
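If brctl is not available on your system, the same bridge can also be created with the ip tool from iproute2 (an alternative to the command above, assuming your iproute2 version supports bridge devices):
sudo /sbin/ip link add br0 type bridge
sudo /sbin/ip link set br0 up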
- You need a qemu-ifup script containing the following:
#!/bin/sh
set -x

switch=br0

if [ -n "$1" ]; then
        /usr/bin/sudo /usr/sbin/tunctl -u `whoami` -t $1
        /usr/bin/sudo /sbin/ip link set $1 up
        sleep 0.5s
        /usr/bin/sudo /usr/sbin/brctl addif $switch $1
        exit 0
else
        echo "Error: no interface specified"
        exit 1
fi
- Generate a MAC address, either manually or using:
#!/bin/bash
# generate a random mac address for the qemu nic
printf 'DE:AD:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
- Run each guest with the following, replacing $macaddress with the value from the previous step
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=net0,mac=$macaddress -netdev tap,id=net0
Notes:
- If you don't want to run as root, the qemu-ifup script must be executable by the user you want to use
- You can either create a system-wide qemu-ifup in /etc/qemu-ifup or use another one. In the latter case, run
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=net0,mac=$macaddress -netdev tap,id=net0,script=/path/to/qemu-ifup
- Each guest on the private virtual network must have a different MAC address
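As a concrete illustration of the points above (disk image paths and MAC addresses are made-up examples), two guests sharing the private bridge could be started, e.g. in separate terminals, like this:
qemu-system-x86_64 -hda /path/to/guest1.img -device e1000,netdev=net0,mac=DE:AD:BE:EF:01:01 -netdev tap,id=net0
qemu-system-x86_64 -hda /path/to/guest2.img -device e1000,netdev=net0,mac=DE:AD:BE:EF:02:02 -netdev tap,id=net0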
public bridge
WARNING: The method shown here will not work with most (if not all) wireless drivers as these do not support bridging.
Use case:
- You want to assign an IP address to your virtual machines and make them accessible from your local network
- You also want performance out of your virtual machine.
Prerequisites:
- You need kvm up and running
- If you don't want to run kvm as root, then the user must have rw access to /dev/kvm
- The following commands must be installed on the host system and executed as root:
/sbin/ip /usr/sbin/brctl /usr/sbin/tunctl
- Your host system must be able to access the internet or the local network
Solution 1: Using Distribution-Specific Scripts
RedHat's way - /etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Bridge

Debian's way - /etc/network/interfaces:
# Replace old eth0 config with br0
auto br0
# Use old eth0 config for br0, plus bridge stuff
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
- /etc/init.d/networking restart
- The bridge br0 should get the IP address (either static or via DHCP) while the physical eth0 is left without an IP address.
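You can verify that the bridge came up and that eth0 is attached to it with (exact output format depends on your bridge-utils version):
/usr/sbin/brctl show
/sbin/ip addr show br0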
VLANs
Please note that the rtl8139 virtual network interface driver does not support VLANs. If you want to use VLANs with your virtual machine, you must use another virtual network interface like virtio.
When using VLANs on a setup like this and no traffic is getting through to your guest(s), you might want to do:
# cd /proc/sys/net/bridge
# ls
bridge-nf-call-arptables  bridge-nf-call-iptables
bridge-nf-call-ip6tables  bridge-nf-filter-vlan-tagged
# for f in bridge-nf-*; do echo 0 > $f; done
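Equivalently, on kernels where these files are also exposed as sysctls (typically when the bridge/br_netfilter module is loaded), the same settings can be applied with sysctl; a sketch:
# sysctl -w net.bridge.bridge-nf-call-iptables=0
# sysctl -w net.bridge.bridge-nf-call-ip6tables=0
# sysctl -w net.bridge.bridge-nf-call-arptables=0
# sysctl -w net.bridge.bridge-nf-filter-vlan-tagged=0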
Solution 2: Manual Configuration
- You need to create a bridge, e.g.:
# /usr/sbin/brctl addbr br0
- Add one of your physical interfaces to the bridge, e.g. for eth0:
# /usr/sbin/brctl addif br0 eth0
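Note that once eth0 is part of the bridge, the host's IP configuration usually has to move from eth0 to br0, otherwise the host loses network connectivity. A minimal manual sketch, assuming the host uses DHCP and dhclient is installed:
# /sbin/ip addr flush dev eth0
# /sbin/ip link set br0 up
# dhclient br0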
- You need a qemu-ifup script containing the following (run as root):
#!/bin/sh
set -x

switch=br0

if [ -n "$1" ]; then
        /usr/sbin/tunctl -u `whoami` -t $1
        /sbin/ip link set $1 up
        sleep 0.5s
        /usr/sbin/brctl addif $switch $1
        exit 0
else
        echo "Error: no interface specified"
        exit 1
fi
- Generate a MAC address, either manually or using:
#!/bin/bash
# generate a random mac address for the qemu nic
printf 'DE:AD:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
- Run each guest with the following, replacing $macaddress with the value from the previous step
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=net0,mac=$macaddress -netdev tap,id=net0
Notes:
- If you don't want to run as root, the qemu-ifup script must be executable by the user you want to use
- You can either create a system-wide qemu-ifup in /etc/qemu-ifup or use another one. In the latter case, run
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=net0,mac=$macaddress -netdev tap,id=net0,script=/path/to/qemu-ifup
- Each guest on the network must have a different MAC address
iptables/routing
You can also connect your guest VM to a tap device on your host and then set up iptables rules on the host so that it acts as a router and firewall for your VM.
Routing is done simply by creating a default route on the client pointing to the IP of the host (and enabling IP forwarding on the host), plus a route on the host to the client's tap device.
Test the setup beforehand:
- Hostside: Allow IPv4 forwarding and add route to client (could be put in a script - route has to be added after the client has started):
sysctl -w net.ipv4.ip_forward=1                    # allow forwarding of IPv4
route add -host <ip-of-client> dev <tap-device>    # add route to the client
- Clientside: Default GW of the client is of course then the host (<ip-of-host> has to be in same subnet as <ip-of-client> ...):
route add default gw <ip-of-host>
- Clientside v2: If your host IP is not on the same subnet as <ip-of-client>, then you must manually add the route to the host before you create the default route:
route add -host <ip-of-host> dev <network-interface>
route add default gw <ip-of-host>
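Putting the pieces together with made-up example addresses (host side of tap0 at 192.168.10.1, client at 192.168.10.2, both placeholders):
# on the host, after the client has started
sysctl -w net.ipv4.ip_forward=1
route add -host 192.168.10.2 dev tap0
# on the client
route add default gw 192.168.10.1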
vde
Another option is using vde (virtual distributed ethernet).
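A minimal sketch of how this can look, assuming the vde2 tools are installed (exact option spellings may differ between vde versions, and the socket path is a placeholder):
vde_switch -s /tmp/vde.ctl -daemon
qemu-system-x86_64 -hda /path/to/hda.img -device e1000,netdev=vde0 -netdev vde,id=vde0,sock=/tmp/vde.ctl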
performance
Data on benchmarking results should go in here. There's now a page dedicated to ideas for improving Networking Performance.
Some 10G NIC performance comparisons between VFIO passthrough and virtio: VFIO vs virtio.
Compatibility
There is another, old and obsolete syntax for specifying networking for virtual machines. The examples above use the -netdev..-device model; the old way used -net..-net pairs. For example,
-netdev tap,id=net0 -device e1000,netdev=net0,mac=52:54:00:12:34:56
is about the same as old
-net tap,vlan=0 -net nic,vlan=0,model=e1000,macaddr=52:54:00:12:34:56
(note mac => macaddr parameter change as well; vlan=0 is the default).
The old way used the notion of "VLANs" - these are QEMU VLANs, which have nothing to do with 802.1q VLANs. QEMU VLANs are numbered starting with 0, and it is possible to connect one or more devices (either host side, like -net tap, or guest side, like -net nic) to each VLAN; in particular, it is possible to connect more than 2 devices to a VLAN. Each device on a VLAN receives all traffic sent by every other device on it. This model was very confusing for the user (especially when a guest has more than one NIC).
In the new model, each host-side device corresponds to just one guest-side device, forming a pair based on the -netdev id= and -device netdev= parameters. It is less confusing, it is faster (because it is always a 1:1 pair), and it supports more parameters than the old -net..-net way.
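For instance, a guest with two NICs, each paired with its own tap backend in the new syntax, would look like this (MAC addresses are placeholders in the same style as above):
-netdev tap,id=net0 -device e1000,netdev=net0,mac=52:54:00:12:34:56 \
-netdev tap,id=net1 -device e1000,netdev=net1,mac=52:54:00:12:34:57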
However, -net..-net is still supported, widely used, and mentioned in lots of HOWTOs and guides around the world. It is also a bit shorter and thus faster to type.