Setting DPDK+OVS+QEMU on CentOS

Date: 2023-03-09 06:07:06

Environment build steps:

These packages are needed for building DPDK + OVS:

yum install -y make gcc glibc.i686 libgcc.i686 libstdc++.i686 glibc-devel.i686 glibc-devel.i686 libc6-dev-i386 glibc-devel.x86_64 libc6-dev clang autoconf automake libtool cmake python m4 openssl git libpcap-devel pciutils numactl-devel kernel-devel

First, download the latest DPDK and OVS: http://dpdk.org/download

git clone https://github.com/openvswitch/ovs

Extract the downloaded archives.

Then configure and compile DPDK. In config/common_linux, add the following lines:

CONFIG_RTE_LIBRTE_PMD_PCAP=y
CONFIG_RTE_LIBRTE_PMD_RING=y

DPDK:

First, note: when a port is added with ovs-vsctl add-port, it is automatically bound to a NIC, by order. So the name dpdk0 should not be changed; the n in dpdk[n] is the port's index among the DPDK-bound NICs.

The same naming rule applies to dpdkvhostuser[n], dpdkr[n], and so on.

  1. Before startup, the machine must support VT-d and hardware virtualization (svm on AMD, vmx on Intel). Add the following line to /etc/fstab to mount the hugepage filesystem (if wondering why 1 GB rather than 2 MB pages, this page explains it all):
    nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
  2. Add the kernel boot parameters default_hugepagesz=1GB hugepagesz=1GB hugepages=5 (reserves 5 GB of RAM), update /boot/grub2/grub.cfg, and reboot.
  3. Make sure these prerequisites are met: make, gcc, gcc-multilib (glibc.i686, libgcc.i686, libstdc++.i686, and glibc-devel.i686 / libc6-dev-i386; glibc-devel.x86_64 / libc6-dev).
  4. (It's quite possible this problem is still unresolved; even v16.04 still needs this patch. The patch no. is 945.) Patch usage: patch -p1 < *.patch (for more, see man patch).
  5. You can use ./tools/setup.sh to compile a suitable DPDK target (not recommended; the manual steps are simple enough). The manual build is as follows:
    1. make install T=x86_64-native-linuxapp-gcc
    2. cd x86_64-native-linuxapp-gcc
    3. vi .config (at this step you can switch the library output to static libs; otherwise the .so paths need to be set, it seems)
    4. make (if .config needs to be reset, make clean && make will do)
    5. done.
  6. Then set the environment variables:
    export RTE_SDK=/root/dpdk-16.04
    export RTE_TARGET=x86_64-....
  7. Use ./tools/setup.sh again to insmod the igb_uio kernel module, then create hugepages, then bind the additional NIC to igb_uio.
  8. *Test with the bundled test programs.
  9. Ready to use.
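The build-and-bind sequence above can be sketched as one shell session (a sketch assuming DPDK 16.04 unpacked in /root/dpdk-16.04, as in the steps above; the PCI address 0000:02:01.0 is a placeholder for your extra NIC):

```shell
# Build DPDK for the x86_64 gcc target (step 5)
export RTE_SDK=/root/dpdk-16.04
export RTE_TARGET=x86_64-native-linuxapp-gcc
cd "$RTE_SDK"
make install T=$RTE_TARGET

# Load the igb_uio kernel module (step 7); it depends on the stock uio module
modprobe uio
insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"

# Mount the hugepage filesystem (applying the /etc/fstab line from step 1 by hand)
mkdir -p /mnt/huge_1GB
mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge_1GB

# Bind the additional NIC to igb_uio; 0000:02:01.0 is a placeholder address
"$RTE_SDK/tools/dpdk_nic_bind.py" --status
"$RTE_SDK/tools/dpdk_nic_bind.py" --bind=igb_uio 0000:02:01.0
```

All of these commands need root and a DPDK-capable NIC, so run them on the target machine, not blindly.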

OVS installation:

  1. Set the environment variables for OVS:
    export DPDK_DIR=$HOME/dpdk-16.04
    export DPDK_TARGET=x86_64-native-linuxapp-gcc
    export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
    export OVS_DIR=$HOME/ovs
    export VM_NAME=Centos-vm
    export GUEST_MEM=1024M
    export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
    export VHOST_SOCK_DIR=/usr/local/var/run/openvswitch
    export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
  2. yum install clang autoconf automake libtool
  3. ./boot.sh
  4. ./configure --with-dpdk=$DPDK_BUILD
  5. make install
  6. installation finished
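A quick way to confirm the installed build is actually DPDK-enabled (a sketch; the DPDK line in the version banner and the dpdk_initialized column exist only on newer OVS releases, so treat both checks as assumptions for older trees):

```shell
# On DPDK-enabled builds of recent OVS releases, the version banner
# also names the DPDK version the binary was linked against
ovs-vswitchd --version

# Once ovsdb-server and ovs-vswitchd are running (next section),
# newer OVS releases expose an explicit flag for DPDK initialization
ovs-vsctl get Open_vSwitch . dpdk_initialized
```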

OVS startup:

  1. *If OVS was started before, stop the daemons and remove the old database:
    ovs-appctl -t ovs-vswitchd exit
    ovs-appctl -t ovsdb-server exit
    rm -f /usr/local/etc/openvswitch/conf.db /usr/local/var/run/openvswitch/db.sock
  2. source ./setup.sh, whose content is as follows (start the DB server, start OVS, and add ports for QEMU to use). Note that some machines have 2 NUMA nodes; if the DPDK ports sit on the 2nd node, that node's socket mem must be set larger than 0, usually 1024.
    ovsdb-tool create /usr/local/etc/openvswitch/conf.db  \
    /usr/local/share/openvswitch/vswitch.ovsschema
    ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach
    ovs-vsctl --no-wait init
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir=/mnt/huge_1GB
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
    ovs-vswitchd unix:$DB_SOCK --pidfile --detach
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 -- set Interface dpdk0 type=dpdk -- set Interface dpdk1 type=dpdk
    ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser
    ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser
  3. done
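The pmd-cpu-mask left empty above is a hex bitmask of the cores the PMD threads may run on: bit n corresponds to core n. A minimal sketch of how such a mask is computed (the core list "2 3" is just an example, not from the original setup):

```shell
# Build a pmd-cpu-mask value: set one bit per PMD core ID.
# Cores 2 and 3 -> binary 1100 -> 0xc.
cores="2 3"
mask=0
for c in $cores; do
    mask=$(( mask | (1 << c) ))
done
printf '0x%x\n' "$mask"   # prints 0xc
```

The result would then be applied with ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xc.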

QEMU:

  1. Before starting QEMU, create a virtual tap/tun device and bridge it with the NIC. Reference steps:
    yum install bridge-utils        # provides the brctl bridge utility
    ip link set eno16777736 down    # the NIC connected to the outside network
    brctl addbr br1                 # create a bridge joining the outer NIC and the inner tap, so the guest can reach the network
    brctl addif br1 eno16777736
    ip link set dev br1 promisc on  # promiscuous mode on; in this mode both tap and NIC keep working
    ip link set dev eno16777736 promisc on
    dhclient br1                    # obtain an IP address for the bridge; if something goes wrong, make sure dhclient is not already running in the background
    ip link set dev br1 up
    ip link set dev eno16777736 up
    ip tuntap add mode tap tap0
    ip link set dev tap0 promisc on
  2. Start the QEMU VM with:
    qemu-system-x86_64 -name $VM_NAME -cpu host -enable-kvm -m $GUEST_MEM \
      -object memory-backend-file,id=mem,size=$GUEST_MEM,mem-path=/mnt/huge_1GB,share=on \
      -numa node,memdev=mem -mem-prealloc -smp sockets=,cores= \
      -drive file=$QCOW2_IMAGE \
      -chardev socket,id=char0,path=$VHOST_SOCK_DIR/dpdkvhostuser0 \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=:::::,netdev=mynet1,mrg_rxbuf=off \
      -chardev socket,id=char1,path=$VHOST_SOCK_DIR/dpdkvhostuser1 \
      -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
      -device virtio-net-pci,mac=:::::,netdev=mynet2,mrg_rxbuf=off \
      -net nic,macaddr=00:00:00:00:00:21 \
      -net tap,ifname=tap0,script=no,downscript=no \
      -nographic -snapshot
  3. To enable connections between the dpdkvhostuser ports, first bring bridge br0 up with: (this may not be necessary)
    ip link set dev br0 up
  4. Set the IP inside the VMs:
    ip addr add dev eth0 192.168..xx[n]
    ip link set eth0 up
    ip route add default via 192.168.6.1  # this gateway can be found by running traceroute www.baidu.com on the host; the first-hop router is the gateway
  5. Done. Host and guest can now ping each other, but ssh (SecureCRT) into the VM still failed; fixed by modifying the sshd configuration settings and restarting the sshd service.
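The post doesn't record which ssh settings were changed. On a fresh image a common cause is that password logins are disabled, so one plausible fix (an assumption, not the author's confirmed change) is:

```shell
# Assumption: allow password logins in /etc/ssh/sshd_config, then restart sshd
sed -i -E 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i -E 's/^#?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd
```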

Note: to view logs created by any OVS program, use journalctl:

journalctl -t ovs-vswitchd

done