Sorting out the steps for giving the virtual guest OSes fixed IPs

Preface

My shell skills are still lacking.

Working directory

I consolidated everything into _main.sh.

[oracle@centos vx]$ pwd
/home/oracle/vx
[oracle@centos vx]$ ll
合計 16
-rwx------. 1 oracle docker 1721  6月  2 11:07 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwxr-xr-x. 1 oracle docker 3583  6月  6 23:28 _main.sh
-rwxr-xr-x. 1 oracle docker  155  6月  6 23:33 a.sh

Vagrantfile

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end

a.sh

Here is the content of a.sh.

[oracle@centos vx]$ cat a.sh
#!/bin/bash
yum install -y net-tools
yum install -y lsof
yum install -y psmisc
yum install -y traceroute
yum install -y bridge-utils
yum install -y expect
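
a.sh runs a separate yum transaction per package; a single invocation installs the same set (a minor sketch, same packages as above) and shaves a little provisioning time:

yum install -y net-tools lsof psmisc traceroute bridge-utils expect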

State before fixing the IPs

[oracle@centos vx]$ time vagrant up
real	0m55.164s
user	0m8.515s
sys	0m0.832s
[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.199
Host node2
  HostName 192.168.121.240
Host node3
  HostName 192.168.121.140
Host node4
  HostName 192.168.121.129
Host node5
  HostName 192.168.121.72
Host node6
  HostName 192.168.121.208
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		veth1486633
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400ecc4a6	yes		virbr100-nic
virbr101		8000.525400922d8b	yes		virbr101-nic
virbr102		8000.5254003f5854	yes		virbr102-nic

Kick off _main.sh

[oracle@centos vx]$ su root
パスワード:
[root@centos vx]# ./_main.sh
ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 の定義が削除されました

ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 の定義が削除されました

ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 の定義が削除されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

ドメイン vx_node1 が vx_node1.xml から定義されました

ドメイン vx_node2 が vx_node2.xml から定義されました

ドメイン vx_node3 が vx_node3.xml から定義されました

ドメイン vx_node4 が vx_node4.xml から定義されました

ドメイン vx_node5 が vx_node5.xml から定義されました

ドメイン vx_node6 が vx_node6.xml から定義されました


vagrant reload

[oracle@centos vx]$ time vagrant reload
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.
==> node4: Halting domain...
==> node4: Starting domain.
==> node4: Waiting for domain to get an IP address...
==> node4: Waiting for SSH to become available...
==> node4: Creating shared folders metadata...
==> node4: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node4: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node4: flag to force provisioning. Provisioners marked to run always will still run.
==> node5: Halting domain...
==> node5: Starting domain.
==> node5: Waiting for domain to get an IP address...
==> node5: Waiting for SSH to become available...
==> node5: Creating shared folders metadata...
==> node5: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node5: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node5: flag to force provisioning. Provisioners marked to run always will still run.
==> node6: Halting domain...
==> node6: Starting domain.
==> node6: Waiting for domain to get an IP address...
==> node6: Waiting for SSH to become available...
==> node6: Creating shared folders metadata...
==> node6: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node6: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node6: flag to force provisioning. Provisioners marked to run always will still run.

real	1m42.110s
user	0m5.937s
sys	0m0.601s

State after fixing the IPs

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.100.2
Host node2
  HostName 192.168.100.3
Host node3
  HostName 192.168.101.2
Host node4
  HostName 192.168.101.3
Host node5
  HostName 192.168.102.2
Host node6
  HostName 192.168.102.3
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		veth1486633
virbr0		8000.5254006df710	yes		virbr0-nic
virbr100		8000.525400106400	yes		virbr100-nic
							vnet3
							vnet4
virbr101		8000.5254009fdb10	yes		virbr101-nic
							vnet0
							vnet5
virbr102		8000.5254009e9318	yes		virbr102-nic
							vnet1
							vnet2

Connectivity check

External traffic only

[oracle@centos vx]$ while read line;do echo ${line};sleep 10; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "echo a |awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"traceroute 8.8.8.8\"\"\x5c\x27\"}'")
vagrant ssh node1 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.141 ms  0.117 ms  0.098 ms
 2  192.168.1.1 (192.168.1.1)  1.009 ms  1.000 ms  0.988 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.888 ms  4.874 ms  4.866 ms
 4  210.139.125.169 (210.139.125.169)  4.972 ms  4.941 ms  4.929 ms
 5  210.165.249.177 (210.165.249.177)  6.271 ms  5.383 ms  6.357 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  9.074 ms  7.535 ms  7.492 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.545 ms  6.645 ms  6.815 ms
 8  72.14.202.229 (72.14.202.229)  6.865 ms 72.14.205.32 (72.14.205.32)  6.279 ms 72.14.202.229 (72.14.202.229)  6.369 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.498 ms  11.164 ms  11.162 ms
vagrant ssh node2 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.177 ms  0.145 ms  0.130 ms
 2  192.168.1.1 (192.168.1.1)  2.078 ms  2.140 ms  2.126 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.553 ms  4.540 ms  5.028 ms
 4  210.139.125.169 (210.139.125.169)  5.133 ms  5.099 ms  5.600 ms
 5  210.165.249.177 (210.165.249.177)  6.771 ms  6.915 ms  7.005 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.796 ms  6.482 ms  6.469 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  6.912 ms  9.759 ms  9.742 ms
 8  72.14.202.229 (72.14.202.229)  9.722 ms 72.14.205.32 (72.14.205.32)  9.735 ms 72.14.202.229 (72.14.202.229)  9.851 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  9.791 ms  10.098 ms  10.236 ms
vagrant ssh node3 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.105 ms  0.081 ms  0.071 ms
 2  192.168.1.1 (192.168.1.1)  1.157 ms  1.221 ms  1.209 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.555 ms  4.547 ms  5.230 ms
 4  210.139.125.169 (210.139.125.169)  5.218 ms  4.581 ms  5.193 ms
 5  210.165.249.177 (210.165.249.177)  6.360 ms  5.170 ms  5.684 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  10.565 ms  10.326 ms  10.306 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.237 ms  6.757 ms  6.739 ms
 8  72.14.205.32 (72.14.205.32)  6.692 ms  6.684 ms  6.672 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.246 ms  7.269 ms  7.310 ms
vagrant ssh node4 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.117 ms  0.098 ms  0.086 ms
 2  192.168.1.1 (192.168.1.1)  1.337 ms  1.328 ms  1.315 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  5.040 ms  5.031 ms  5.015 ms
 4  210.139.125.169 (210.139.125.169)  5.064 ms  5.045 ms  5.037 ms
 5  210.165.249.177 (210.165.249.177)  6.263 ms  5.925 ms  6.818 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.944 ms  7.222 ms  7.423 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.024 ms  7.253 ms  7.240 ms
 8  72.14.205.32 (72.14.205.32)  6.980 ms 72.14.202.229 (72.14.202.229)  6.966 ms 72.14.205.32 (72.14.205.32)  7.013 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.952 ms  7.687 ms  6.928 ms
vagrant ssh node5 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.134 ms  0.112 ms  0.100 ms
 2  192.168.1.1 (192.168.1.1)  1.683 ms  1.674 ms  1.663 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.572 ms  4.561 ms  4.544 ms
 4  210.139.125.169 (210.139.125.169)  4.636 ms  5.272 ms  5.260 ms
 5  210.165.249.177 (210.165.249.177)  6.053 ms  6.396 ms  6.731 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.816 ms  6.286 ms  6.272 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  10.755 ms  6.881 ms  6.861 ms
 8  72.14.205.32 (72.14.205.32)  5.745 ms  6.561 ms 72.14.202.229 (72.14.202.229)  7.336 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.604 ms  6.460 ms  7.875 ms
vagrant ssh node6 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.099 ms  0.082 ms  0.067 ms
 2  192.168.1.1 (192.168.1.1)  1.620 ms  1.609 ms  1.599 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.549 ms  5.257 ms  4.521 ms
 4  210.139.125.169 (210.139.125.169)  5.314 ms  5.304 ms  5.290 ms
 5  210.165.249.177 (210.165.249.177)  6.331 ms  6.404 ms  6.728 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  9.462 ms  6.419 ms  6.397 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.266 ms  7.219 ms  7.347 ms
 8  72.14.205.32 (72.14.205.32)  7.070 ms  6.498 ms  7.053 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.993 ms  6.983 ms  9.076 ms
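
The one-liner above goes through awk mostly to cope with quoting while building each vagrant ssh command. A plain loop (a sketch assumed to be equivalent) reads more easily:

for n in $(seq 6); do
  echo "vagrant ssh node${n} -c 'traceroute 8.8.8.8'"
  sleep 10
  vagrant ssh "node${n}" -c 'traceroute 8.8.8.8'
done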

Contents of _main.sh

Cleaning it up will have to wait for another day...

[oracle@centos vx]$ cat _main.sh
#!/bin/bash
OUTPUT=$(pwd)/output
CMD_DIR=$(pwd)/cmd
TMP_DIR=$(pwd)/tmp
AWK_DIR=$(pwd)/awk
LVT_DIR=/etc/libvirt/qemu
LVT_NET_DIR=${LVT_DIR}/networks

_offnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-destroy mynet${RN} && virsh net-undefine mynet${RN};
    done )
}

_mknet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      cp $(pwd)/tmpl/mynet@.xml $(pwd)/mynet${RN}.xml && sed -i s/@/${RN}/g $(pwd)/mynet${RN}.xml;
    done )
}

_onnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-define mynet${RN}.xml && virsh net-start mynet${RN};
    done )
}

_rebnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-destroy mynet${RN} && virsh net-define mynet${RN}.xml && virsh net-start mynet${RN}
    done )
}

_buildnet(){
  START_RN=$1
  END_RN=$2
  _offnet ${START_RN} ${END_RN}
  _mknet ${START_RN} ${END_RN}
  _onnet ${START_RN} ${END_RN}
}

_rmdir(){
  rm -rf {${OUTPUT},${CMD_DIR},${TMP_DIR},${AWK_DIR}};  
}

_mkdir(){
  mkdir -p {${OUTPUT},${CMD_DIR},${TMP_DIR},${AWK_DIR}};
}

_initdir(){
  _rmdir
  _mkdir
}

_mkvxnm(){
  START_RN=$1
  END_RN=$2
  seq ${START_RN} ${END_RN} | while read RN;do
    echo ${LVT_DIR}/vx_node${RN}.xml >>${OUTPUT}/vx_node;
  done
}

_grp(){
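  # Emits node-number % GRP for each of RN nodes, sorted, into ${OUTPUT}/grp;
  # pasted against the node list later, this assigns each node to a network segment group.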
  RN=$1
  GRP=$2
  while read line; do
     echo ${line} | sed -e s/GRP/${GRP}/ | bash;
  done < <(seq ${RN} | xargs -I@ bash -c 'echo echo $\(\(@%GRP\)\)') | sort >${OUTPUT}/grp
}

_join(){
  LFT=$1
  RGT=$2
  OPT_FNM=$3
  paste -d ' ' ${LFT} ${RGT} >${OPT_FNM};
}

_callcmd(){
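  # Runs every command file under ${CMD_DIR} once per node number (@ substituted),
  # collecting the output in ${OUTPUT}/<middle word of the file name>,
  # e.g. get_macaddr_cmd -> ${OUTPUT}/macaddr.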
  RPT=$1
  while read line; do
    OPT_FNM=$(basename ${line} | sed -e s/\_/\\t/g | awk '{print $2}')
    [ -e ${OUTPUT}/${OPT_FNM} ] && rm -f ${OUTPUT}/${OPT_FNM};
    seq ${RPT} | while read rpt; do
      cat ${line} | sed -e s/@/${rpt}/ | bash >>${OUTPUT}/${OPT_FNM};
    done
  done < <(find ${CMD_DIR}/* -name "*")
}

_mkcmd(){
  CMD_FNM=$1
  CMD=$2
  echo ${CMD} > ${CMD_DIR}/${CMD_FNM};
}

_split(){
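  # Pastes the two input files side by side and splits the result into
  # ${OUTPUT}/split_<first column> files, one per group.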
  LFT=$1
  RGT=$2
  paste -d ' ' ${LFT} ${RGT} | awk '
    OUTPUT="'"${OUTPUT}"'"
    {print>OUTPUT"\x2f""split_"$1}
  ' 1>/dev/null 
}

_mk_def_ip_script_with_awk(){
  cat <<EOF >${AWK_DIR}/def_ip.awk
{
  gsub(/[^ ]+/,"\x27&\x27");
  print "<host mac="\$3" name="\$2" ip=\x27""192.168."third_octet"."NR+1"\x27""/>"
}
EOF
}

_call_def_ip_script_with_awk(){
  START_RN=$1
  END_RN=$2
  seq ${START_RN} ${END_RN} | while read RN;do
    gawk -v "third_octet=$((${RN}+100))" -f ${AWK_DIR}/def_ip.awk ${OUTPUT}/split_${RN} > ${OUTPUT}/def_host_tag_$((${RN}+100));
  done
}

_kvm_guest_modify_network(){
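  # Adds an @ placeholder after the <range> line of each mynet*.xml, then
  # replaces it with the generated <host .../> tags for that segment.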
  START_RN=$1
  END_RN=$2
  
  while read line; do
    OPT_FNM=$(basename ${line})
    sed -e "/range/a @" < <(cat ${line}) > ${TMP_DIR}/${OPT_FNM}
  done < <(find ${LVT_NET_DIR} -maxdepth 1 -name "mynet*")
  
  seq ${START_RN} ${END_RN} | while read RN; do
    SRC_FILE=${TMP_DIR}/mynet${RN}.xml;
    EMBED_STR=$(cat ${OUTPUT}/def_host_tag_${RN} | tr "\n" " ");
    TAR_FILE=${LVT_NET_DIR}/mynet${RN}.xml
    awk '{
      SRC_FILE="'"${SRC_FILE}"'"
      EMBED_STR="'"${EMBED_STR}"'"
      gsub("@",EMBED_STR);
      print;
    }' ${SRC_FILE} > ${LVT_NET_DIR}/mynet${RN}.xml
  done
}

_kvm_guest_modify_machine(){
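  # Rewrites each vx_node*.xml so <source network='vagrant-libvirt'/> points at
  # the mynet<group+100> network that node was assigned to.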
  while read line; do
    echo ${line} | awk '
      BEGIN{
      }
      {
        net_name="mynet"$2+100;
        tar_file=$3;
        system("sed -i -e s/vagrant-libvirt/"net_name"/g "tar_file" ");
      }
      END{
      }
    ' 
  done < <(cat ${OUTPUT}/vx_node_grp|nl) 1>/dev/null
}

_redefvm(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh define vx_node${RN}.xml;
    done )
}

_buildnet 100 102
_initdir
_mkcmd "get_macaddr_cmd" "virsh dumpxml vx_node@ | grep \"mac address\" | awk 'match(\$0, /[a-f0-9]{2}(:[a-f0-9]{2}){5}/) {print substr(\$0, RSTART, RLENGTH)}'"
_mkcmd "get_nodename_cmd" "echo node@"
_callcmd 6
_join ${OUTPUT}/nodename ${OUTPUT}/macaddr ${OUTPUT}/vminfo
_grp 6 3
_split ${OUTPUT}/grp ${OUTPUT}/vminfo
_mk_def_ip_script_with_awk
_call_def_ip_script_with_awk 0 2
_kvm_guest_modify_network 100 102
_rebnet 100 102
_mkvxnm 1 6
_join ${OUTPUT}/grp ${OUTPUT}/vx_node ${OUTPUT}/vx_node_grp
_kvm_guest_modify_machine
_redefvm 1 6
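
_mknet copies tmpl/mynet@.xml from the networks directory and substitutes every "@" with the segment number. The template itself is not shown in this post; judging from the generated mynet100.xml further down the page, it presumably looks something like this (a reconstruction, not the actual file; libvirt fills in the uuid and mac when the network is defined):

cat <<'EOF' > tmpl/mynet@.xml
<network ipv6='yes'>
  <name>mynet@</name>
  <forward mode='nat'/>
  <bridge name='virbr@' stp='on' delay='0'/>
  <ip address='192.168.@.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.@.2' end='192.168.@.254'/>
    </dhcp>
  </ip>
</network>
EOF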

Afterword

Got it done... I'll review it later... time for bed...

Thinking about giving the VMs on KVM fixed IPs

Preface

I figured I might as well set up fixed IPs.

References

KVMにDHCPで固定IPを設定する  
Libvirt/KVM で VM に静的な IP アドレスを配布する  
フィルタ言語 AWK (2)  
はじめてのAWK  
awkでOSのコマンドを実行させる  
KVM仮想マシンの名前変更と移動  

Vagrantfile

Each of these blocks differs only in hostname, so it feels like it could be tidied up nicely, but for now I'll leave it as is and save that for later. There must be a way.

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end

a.sh

Here is the content of a.sh.

[oracle@centos vx]$ cat a.sh
#!/bin/bash
yum install -y net-tools
yum install -y lsof
yum install -y psmisc
yum install -y traceroute
yum install -y bridge-utils
yum install -y expect

Bring up the libvirt-managed KVM virtual machines with Vagrant!

[oracle@centos vx]$ time vagrant up
real	0m54.930s
user	0m8.535s
sys	0m0.917s
[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)
node4                     running (libvirt)
node5                     running (libvirt)
node6                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Check the network settings created by the vagrant-libvirt plugin

[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.121.233
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.121.193
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.121.18
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node4
  HostName 192.168.121.17
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node4/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node5
  HostName 192.168.121.227
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node5/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node6
  HostName 192.168.121.98
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node6/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

ref_net.sh

[root@centos vx]# cat ref_net.sh
#!/bin/bash
SRC_FILE="$1"
TAR_FILE="$2"

while read line; do
  sed -e "/range/a @" <<<${line};
done < <(cat ${SRC_FILE}) > tmp

cat ${TAR_FILE} | tr "\n" " " | xargs -I{} bash -c 'awk "{gsub(\"@\",\"{}\");print}" tmp' > ${SRC_FILE}
[root@centos vx]# ll ref_net.sh
-rwxr-xr-x. 1 oracle docker 232  6月  2 16:00 ref_net.sh
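
ref_net.sh takes a network XML as its first argument and a file of <host .../> lines as its second, then splices those lines in just below the <range> line, overwriting the network XML in place. The actual invocation is not shown; presumably something along these lines (paths are my assumption):

./ref_net.sh /etc/libvirt/qemu/networks/mynet100.xml /home/oracle/vx/def_host_tag_100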

Check the libvirt-managed networks

[root@centos vx]# cd /etc/libvirt/qemu/networks
[root@centos networks]# ll
合計 24
drwx------. 2 root root 4096  5月 26 16:59 autostart
-rw-------. 1 root root  591  6月  2 16:04 mynet100.xml
-rw-------. 1 root root  591  6月  1 16:22 mynet101.xml
-rw-------. 1 root root  591  6月  1 16:22 mynet102.xml
drwxr-xr-x. 2 root root 4096  6月  1 16:21 tmpl
-rw-------. 1 root root  603  6月  1 14:57 vagrant-libvirt.xml
[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'cat mynet@.xml'
bash -c cat mynet100.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet100
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet100</name>
  <uuid>a4541103-3100-44ef-91c2-7c624e2db293</uuid>
  <forward mode='nat'/>
  <bridge name='virbr100' stp='on' delay='0'/>
  <mac address='52:54:00:e3:ab:b1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
bash -c cat mynet101.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet101
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet101</name>
  <uuid>5681498d-dc77-4180-b24a-f8de3dacc458</uuid>
  <forward mode='nat'/>
  <bridge name='virbr101' stp='on' delay='0'/>
  <mac address='52:54:00:28:6e:9c'/>
  <ip address='192.168.101.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.101.2' end='192.168.101.254'/>
    </dhcp>
  </ip>
</network>
bash -c cat mynet102.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet102
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet102</name>
  <uuid>c8acdbb8-8802-4fa0-a443-ccf283de7aa7</uuid>
  <forward mode='nat'/>
  <bridge name='virbr102' stp='on' delay='0'/>
  <mac address='52:54:00:85:24:5e'/>
  <ip address='192.168.102.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.102.2' end='192.168.102.254'/>
    </dhcp>
  </ip>
</network>
[root@centos networks]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400e3abb1	yes		virbr100-nic
virbr101		8000.525400286e9c	yes		virbr101-nic
virbr102		8000.52540085245e	yes		virbr102-nic

Run ref_net.sh to apply the per-machine entries

[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'cat mynet@.xml'
bash -c cat mynet100.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet100
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet100</name>
<uuid>a4541103-3100-44ef-91c2-7c624e2db293</uuid>
<forward mode='nat'/>
<bridge name='virbr100' stp='on' delay='0'/>
<mac address='52:54:00:e3:ab:b1'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.2' end='192.168.100.254'/>
<host mac='52:54:00:5d:4a:3f' name='node1' ip='192.168.100.2'/> <host mac='52:54:00:dc:5b:1b' name='node2' ip='192.168.100.3'/> 
</dhcp>
</ip>
</network>
bash -c cat mynet101.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet101
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet101</name>
<uuid>5681498d-dc77-4180-b24a-f8de3dacc458</uuid>
<forward mode='nat'/>
<bridge name='virbr101' stp='on' delay='0'/>
<mac address='52:54:00:28:6e:9c'/>
<ip address='192.168.101.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.101.2' end='192.168.101.254'/>
<host mac='52:54:00:77:33:99' name='node3' ip='192.168.101.2'/> <host mac='52:54:00:61:72:67' name='node4' ip='192.168.101.3'/> 
</dhcp>
</ip>
</network>
bash -c cat mynet102.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet102
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet102</name>
<uuid>c8acdbb8-8802-4fa0-a443-ccf283de7aa7</uuid>
<forward mode='nat'/>
<bridge name='virbr102' stp='on' delay='0'/>
<mac address='52:54:00:85:24:5e'/>
<ip address='192.168.102.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.102.2' end='192.168.102.254'/>
<host mac='52:54:00:88:c2:54' name='node5' ip='192.168.102.2'/> <host mac='52:54:00:a8:bc:df' name='node6' ip='192.168.102.3'/> 
</dhcp>
</ip>
</network>

Force-stop, redefine, and restart the custom networks

[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'virsh net-destroy mynet@ && virsh net-define mynet@.xml && virsh net-start mynet@'
bash -c virsh net-destroy mynet100 && virsh net-define mynet100.xml && virsh net-start mynet100 
ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

bash -c virsh net-destroy mynet101 && virsh net-define mynet101.xml && virsh net-start mynet101 
ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

bash -c virsh net-destroy mynet102 && virsh net-define mynet102.xml && virsh net-start mynet102 
ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

Change which network each VM belongs to by editing the VM definition files

Let's change it.

[root@centos networks]# cd /etc/libvirt/qemu/
[root@centos qemu]# ll
合計 28
drwx------. 4 root root 4096  6月  2 16:09 networks
-rw-------. 1 root root 2367  6月  2 11:12 vx_node1.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node2.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node3.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node4.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node5.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node6.xml
[root@centos qemu]# seq 6 | xargs -t -I@ bash -c "cat vx_node@.xml | awk '/<interface/,/interface>/'"
bash -c cat vx_node1.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:5d:4a:3f'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node2.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:dc:5b:1b'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node3.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:77:33:99'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node4.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:61:72:67'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node5.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:88:c2:54'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node6.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:a8:bc:df'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# seq 6 | xargs -t -I@ bash -c "cat vx_node@.xml | grep \"source network\""
bash -c cat vx_node1.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node2.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node3.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node4.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node5.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node6.xml | grep "source network" 
      <source network='vagrant-libvirt'/>

ref_nett.sh, the script used before making the change

awk is great.

[root@centos qemu]# cat ref_nett.sh
#!/bin/bash
RPT="$1"
seq ${RPT} | xargs -I@ bash -c 'echo /etc/libvirt/qemu/vx_node@.xml'>vx_node
paste -d ' ' /home/oracle/vx/grp vx_node > vx_node_grp

while read line; do
  echo ${line} | awk '
    BEGIN{
    }
    {
      net_name="mynet"$1+100;
      system("sed -i -e s/vagrant-libvirt/"net_name"/g "$2"");
    }
    END{
    }
  '; 
done < <(cat vx_node_grp)
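
The awk here only splits each "group path" pair and shells out to sed; an awk-free sketch (assuming the same vx_node_grp format of "<group> <xml path>") would be:

while read grp xml; do
  sed -i -e "s/vagrant-libvirt/mynet$((grp + 100))/g" "${xml}"
done < vx_node_grp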

Now make the change

[root@centos qemu]# pwd
/etc/libvirt/qemu
[root@centos qemu]# ll
合計 40
drwx------. 4 root root 4096  6月  2 16:09 networks
-rwxr-xr-x. 1 root root  366  6月  2 18:38 ref_nett.sh
-rw-r--r--. 1 root root  186  6月  2 18:35 vx_node
-rw-------. 1 root root 2367  6月  2 11:12 vx_node1.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node2.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node3.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node4.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node5.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node6.xml
-rw-r--r--. 1 root root  198  6月  2 18:35 vx_node_grp
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'cat vx_node@.xml | grep -E "<name>|<source network|mynet"'
  <name>vx_node1</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node2</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node3</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node4</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node5</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node6</name>
      <source network='vagrant-libvirt'/>
[root@centos qemu]# ./ref_nett.sh 6 | grep -E "<name>|<source network|mynet"
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'cat vx_node@.xml | grep -E "<name>|<source network|mynet"'
  <name>vx_node1</name>
      <source network='mynet100'/>
  <name>vx_node2</name>
      <source network='mynet100'/>
  <name>vx_node3</name>
      <source network='mynet101'/>
  <name>vx_node4</name>
      <source network='mynet101'/>
  <name>vx_node5</name>
      <source network='mynet102'/>
  <name>vx_node6</name>
      <source network='mynet102'/>
[root@centos qemu]# ll
合計 40
drwx------. 4 root root 4096  6月  2 16:09 networks
-rwxr-xr-x. 1 root root  366  6月  2 18:40 ref_nett.sh
-rw-r--r--. 1 root root  186  6月  2 18:40 vx_node
-rw-------. 1 root root 2360  6月  2 18:40 vx_node1.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node2.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node3.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node4.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node5.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node6.xml
-rw-r--r--. 1 root root  198  6月  2 18:40 vx_node_grp
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'virsh define vx_node@.xml'
ドメイン vx_node1 が vx_node1.xml から定義されました

ドメイン vx_node2 が vx_node2.xml から定義されました

ドメイン vx_node3 が vx_node3.xml から定義されました

ドメイン vx_node4 が vx_node4.xml から定義されました

ドメイン vx_node5 が vx_node5.xml から定義されました

ドメイン vx_node6 が vx_node6.xml から定義されました

State before restarting the running VMs

[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400e3abb1	yes		virbr100-nic
virbr101		8000.525400286e9c	yes		virbr101-nic
virbr102		8000.52540085245e	yes		virbr102-nic
[oracle@centos vx]$ sudo virsh net-list --all
[sudo] oracle のパスワード:
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.233
Host node2
  HostName 192.168.121.193
Host node3
  HostName 192.168.121.18
Host node4
  HostName 192.168.121.17
Host node5
  HostName 192.168.121.227
Host node6
  HostName 192.168.121.98

Restart the running VMs

[oracle@centos vx]$ time vagrant reload
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.
==> node4: Halting domain...
==> node4: Starting domain.
==> node4: Waiting for domain to get an IP address...
==> node4: Waiting for SSH to become available...
==> node4: Creating shared folders metadata...
==> node4: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node4: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node4: flag to force provisioning. Provisioners marked to run always will still run.
==> node5: Halting domain...
==> node5: Starting domain.
==> node5: Waiting for domain to get an IP address...
==> node5: Waiting for SSH to become available...
==> node5: Creating shared folders metadata...
==> node5: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node5: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node5: flag to force provisioning. Provisioners marked to run always will still run.
==> node6: Halting domain...
==> node6: Starting domain.
==> node6: Waiting for domain to get an IP address...
==> node6: Waiting for SSH to become available...
==> node6: Creating shared folders metadata...
==> node6: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node6: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node6: flag to force provisioning. Provisioners marked to run always will still run.

real	4m1.601s
user	0m5.994s
sys	0m0.554s

State after restarting the VMs

[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
virbr100		8000.525400e3abb1	yes		virbr100-nic
							vnet4
							vnet5
virbr101		8000.525400286e9c	yes		virbr101-nic
							vnet0
							vnet1
virbr102		8000.52540085245e	yes		virbr102-nic
							vnet2
							vnet3
[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.100.2
Host node2
  HostName 192.168.100.3
Host node3
  HostName 192.168.101.2
Host node4
  HostName 192.168.101.3
Host node5
  HostName 192.168.102.2
Host node6
  HostName 192.168.102.3
[oracle@centos vx]$ while read line;do echo ${line};sleep 5; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"ip a show eth0\"\"\x5c\x27\"}' dummy_oneline")
vagrant ssh node1 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:5d:4a:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3309sec preferred_lft 3309sec
    inet6 fe80::5054:ff:fe5d:4a3f/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node2 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:dc:5b:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.3/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3341sec preferred_lft 3341sec
    inet6 fe80::5054:ff:fedc:5b1b/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node3 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:77:33:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.2/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3375sec preferred_lft 3375sec
    inet6 fe80::5054:ff:fe77:3399/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node4 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:61:72:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.3/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3410sec preferred_lft 3410sec
    inet6 fe80::5054:ff:fe61:7267/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node5 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:88:c2:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.2/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3445sec preferred_lft 3445sec
    inet6 fe80::5054:ff:fe88:c254/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node6 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:a8:bc:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.3/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3479sec preferred_lft 3479sec
    inet6 fe80::5054:ff:fea8:bcdf/64 scope link 
       valid_lft forever preferred_lft forever

Let's do a connectivity check

Unlike last time, traffic can no longer cross segments!!! Yes!!! Whether the VM definition files and the custom network definitions really end up in good shape once they are properly redefined and restarted with fixed IPs still needs scrutiny. That's the important part.

libvirt管理の仮想ゲストOSネットワークセグメントを切り分ける話

[oracle@centos vx]$ vagrant ssh node1
Last login: Sun Jun  2 10:31:45 2019 from 192.168.100.1
[vagrant@node1 ~]$ seq 6 | xargs -t -I% bash -c 'traceroute node% && ping -c 1 node%'
bash -c traceroute node1 && ping -c 1 node1 
traceroute to node1 (127.0.0.1), 30 hops max, 60 byte packets
 1  node1 (127.0.0.1)  0.009 ms  0.003 ms  0.003 ms
PING node1 (127.0.0.1) 56(84) bytes of data.
64 bytes from node1 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
bash -c traceroute node2 && ping -c 1 node2 
traceroute to node2 (192.168.100.3), 30 hops max, 60 byte packets
 1  node2 (192.168.100.3)  0.209 ms  0.191 ms  0.180 ms
PING node2 (192.168.100.3) 56(84) bytes of data.
64 bytes from node2 (192.168.100.3): icmp_seq=1 ttl=64 time=0.101 ms

--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
bash -c traceroute node3 && ping -c 1 node3 
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
bash -c traceroute node4 && ping -c 1 node4 
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
bash -c traceroute node5 && ping -c 1 node5 
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
bash -c traceroute node6 && ping -c 1 node6 
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node1 ~]$ logout
Connection to 192.168.100.2 closed.
[oracle@centos vx]$ vagrant ssh node2
Last login: Sun Jun  2 10:33:18 2019 from 192.168.100.1
[vagrant@node2 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
traceroute to node1 (192.168.100.2), 30 hops max, 60 byte packets
 1  node1 (192.168.100.2)  0.247 ms  0.215 ms  0.205 ms
PING node1 (192.168.100.2) 56(84) bytes of data.
64 bytes from node1 (192.168.100.2): icmp_seq=1 ttl=64 time=0.152 ms

--- node1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
traceroute to node2 (127.0.0.1), 30 hops max, 60 byte packets
 1  node2 (127.0.0.1)  0.006 ms  0.002 ms  0.002 ms
PING node2 (127.0.0.1) 56(84) bytes of data.
64 bytes from node2 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node2 ~]$ logout
Connection to 192.168.100.3 closed.
[oracle@centos vx]$ vagrant ssh node3
[vagrant@node3 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
traceroute to node3 (127.0.0.1), 30 hops max, 60 byte packets
 1  node3 (127.0.0.1)  0.010 ms  0.003 ms  0.002 ms
PING node3 (127.0.0.1) 56(84) bytes of data.
64 bytes from node3 (127.0.0.1): icmp_seq=1 ttl=64 time=0.008 ms

--- node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.008/0.008/0.008/0.000 ms
traceroute to node4 (192.168.101.3), 30 hops max, 60 byte packets
 1  node4 (192.168.101.3)  0.323 ms  0.301 ms  0.269 ms
PING node4 (192.168.101.3) 56(84) bytes of data.
64 bytes from node4 (192.168.101.3): icmp_seq=1 ttl=64 time=0.096 ms

--- node4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node3 ~]$ logout
Connection to 192.168.101.2 closed.
[oracle@centos vx]$ vagrant ssh node4
Last login: Sun Jun  2 03:00:49 2019 from 192.168.121.1
[vagrant@node4 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
traceroute to node3 (192.168.101.2), 30 hops max, 60 byte packets
 1  node3 (192.168.101.2)  0.090 ms  0.077 ms  0.068 ms
PING node3 (192.168.101.2) 56(84) bytes of data.
64 bytes from node3 (192.168.101.2): icmp_seq=1 ttl=64 time=0.108 ms

--- node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
traceroute to node4 (127.0.0.1), 30 hops max, 60 byte packets
 1  node4 (127.0.0.1)  0.010 ms  0.005 ms  0.002 ms
PING node4 (127.0.0.1) 56(84) bytes of data.
64 bytes from node4 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node4 ~]$ logout
Connection to 192.168.101.3 closed.
[oracle@centos vx]$ vagrant ssh node5
[vagrant@node5 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
traceroute to node5 (127.0.0.1), 30 hops max, 60 byte packets
 1  node5 (127.0.0.1)  0.009 ms  0.003 ms  0.002 ms
PING node5 (127.0.0.1) 56(84) bytes of data.
64 bytes from node5 (127.0.0.1): icmp_seq=1 ttl=64 time=0.006 ms

--- node5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.006/0.006/0.006/0.000 ms
traceroute to node6 (192.168.102.3), 30 hops max, 60 byte packets
 1  node6 (192.168.102.3)  0.388 ms  0.368 ms  0.358 ms
PING node6 (192.168.102.3) 56(84) bytes of data.
64 bytes from node6 (192.168.102.3): icmp_seq=1 ttl=64 time=0.186 ms

--- node6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
[vagrant@node5 ~]$ logout
Connection to 192.168.102.2 closed.
[oracle@centos vx]$ vagrant ssh node6
[vagrant@node6 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
traceroute to node5 (192.168.102.2), 30 hops max, 60 byte packets
 1  node5 (192.168.102.2)  0.084 ms  0.075 ms  0.067 ms
PING node5 (192.168.102.2) 56(84) bytes of data.
64 bytes from node5 (192.168.102.2): icmp_seq=1 ttl=64 time=0.084 ms

--- node5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
traceroute to node6 (127.0.0.1), 30 hops max, 60 byte packets
 1  node6 (127.0.0.1)  0.007 ms  0.002 ms  0.002 ms
PING node6 (127.0.0.1) 56(84) bytes of data.
64 bytes from node6 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
[vagrant@node6 ~]$ logout
Connection to 192.168.102.3 closed.
[oracle@centos vx]$ 

Afterword

Networking is fun!!! awk is fun!!! This is getting interesting!!! I want to pull all of this together into a script later. I also want to build my own iptables router in this virtual environment!!! That's all, thank you.

Generating the host tags for fixed IPs on libvirt-managed networks, with the third octet as a parameter

Preface

awk really is handy.

def_ip.awk

[root@centos vx]# cat def_ip.awk
{
  gsub(/[^ ]+/,"\x5c\x27&\x5c\x27");
  print "<host mac="$2" name="$1" ip=\x5c\x27""192.168."third_octet"."NR+1"\x5c\x27""/>"
}

The split_@ files

I'll run it against the files produced by the grouping step.

[root@centos vx]# ll split*
-rw-r--r--. 1 root root 48  6月  2 15:32 split_0
-rw-r--r--. 1 root root 48  6月  2 15:32 split_1
-rw-r--r--. 1 root root 48  6月  2 15:32 split_2
[root@centos vx]# seq 0 2 | xargs -t -I@ bash -c 'cat split_@'
bash -c cat split_0 
node1 52:54:00:5d:4a:3f
node2 52:54:00:dc:5b:1b
bash -c cat split_1 
node3 52:54:00:77:33:99
node4 52:54:00:61:72:67
bash -c cat split_2 
node5 52:54:00:88:c2:54
node6 52:54:00:a8:bc:df
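
For reference, these split files were presumably produced the way the _grp and _split functions in _main.sh (shown at the top of this page) do it: a sorted modulo-3 column pasted against the node/MAC list pairs the six nodes off two per segment. A sketch, assuming a vminfo file of "nodename mac" lines:

seq 6 | awk '{ print $1 % 3 }' | sort > grp
paste -d ' ' grp vminfo | awk '{ print > ("split_" $1) }'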

Run it

I'll run it with the third octet at 100 and then at 200. The single quotes are escaped deliberately, for the sake of the downstream processing.

[root@centos vx]# seq 0 2 | xargs -I@ bash -c 'gawk -v "third_octet=$((@+100))" -f def_ip.awk split_@'
<host mac=\'52:54:00:5d:4a:3f\' name=\'node1\' ip=\'192.168.100.2\'/>
<host mac=\'52:54:00:dc:5b:1b\' name=\'node2\' ip=\'192.168.100.3\'/>
<host mac=\'52:54:00:77:33:99\' name=\'node3\' ip=\'192.168.101.2\'/>
<host mac=\'52:54:00:61:72:67\' name=\'node4\' ip=\'192.168.101.3\'/>
<host mac=\'52:54:00:88:c2:54\' name=\'node5\' ip=\'192.168.102.2\'/>
<host mac=\'52:54:00:a8:bc:df\' name=\'node6\' ip=\'192.168.102.3\'/>
[root@centos vx]# seq 0 2 | xargs -I@ bash -c 'gawk -v "third_octet=$((@+200))" -f def_ip.awk split_@'
<host mac=\'52:54:00:5d:4a:3f\' name=\'node1\' ip=\'192.168.200.2\'/>
<host mac=\'52:54:00:dc:5b:1b\' name=\'node2\' ip=\'192.168.200.3\'/>
<host mac=\'52:54:00:77:33:99\' name=\'node3\' ip=\'192.168.201.2\'/>
<host mac=\'52:54:00:61:72:67\' name=\'node4\' ip=\'192.168.201.3\'/>
<host mac=\'52:54:00:88:c2:54\' name=\'node5\' ip=\'192.168.202.2\'/>
<host mac=\'52:54:00:a8:bc:df\' name=\'node6\' ip=\'192.168.202.3\'/>
[root@centos vx]# seq 0 2 | xargs -I@ bash -c 'gawk -v "third_octet=$((@+100))" -f def_ip.awk split_@ >def_host_tag_$((@+100))'
[root@centos vx]# ll def_host*
-rw-r--r--. 1 root root 140  6月  2 15:54 def_host_tag_100
-rw-r--r--. 1 root root 140  6月  2 15:54 def_host_tag_101
-rw-r--r--. 1 root root 140  6月  2 15:54 def_host_tag_102
[root@centos vx]# seq 0 2 | xargs -t -I@ bash -c 'cat def_host_tag_$((@+100))'
bash -c cat def_host_tag_$((0+100)) 
<host mac=\'52:54:00:5d:4a:3f\' name=\'node1\' ip=\'192.168.100.2\'/>
<host mac=\'52:54:00:dc:5b:1b\' name=\'node2\' ip=\'192.168.100.3\'/>
bash -c cat def_host_tag_$((1+100)) 
<host mac=\'52:54:00:77:33:99\' name=\'node3\' ip=\'192.168.101.2\'/>
<host mac=\'52:54:00:61:72:67\' name=\'node4\' ip=\'192.168.101.3\'/>
bash -c cat def_host_tag_$((2+100)) 
<host mac=\'52:54:00:88:c2:54\' name=\'node5\' ip=\'192.168.102.2\'/>
<host mac=\'52:54:00:a8:bc:df\' name=\'node6\' ip=\'192.168.102.3\'/>

Afterword

Looking good.

So you can run commands with vagrant ssh -c

Vagrantfile

Each of these blocks differs only in hostname, so it feels like it could be tidied up nicely, but for now I'll leave it as is and save that for later. There must be a way.

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end

Bring up the libvirt-managed KVM virtual machines with Vagrant!

[oracle@centos vx]$ time vagrant up
real	0m54.930s
user	0m8.535s
sys	0m0.917s
[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)
node4                     running (libvirt)
node5                     running (libvirt)
node6                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Check the IPs with vagrant ssh-config

Since the IPs are handed out dynamically by the DHCP service.

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.233
Host node2
  HostName 192.168.121.193
Host node3
  HostName 192.168.121.18
Host node4
  HostName 192.168.121.17
Host node5
  HostName 192.168.121.227
Host node6
  HostName 192.168.121.98

Check the IPs with vagrant ssh -c

Piping the command string into bash seems to be the way to go. I echo the command first so it's clear which node's IP is being shown, and since each connection takes a second or more, I wait 5 seconds before moving on to the next node. The dummy_oneline file is a trick to give awk a line to process so it runs once per node (a file-free variant is sketched after the output below).

[oracle@centos vx]$ cat dummy_oneline
dummy_oneline
[oracle@centos vx]$ while read line;do echo ${line};sleep 5; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"ip a show eth0\"\"\x5c\x27\"}' dummy_oneline")
vagrant ssh node1 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:5d:4a:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.233/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3435sec preferred_lft 3435sec
    inet6 fe80::5054:ff:fe5d:4a3f/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node2 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:dc:5b:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.193/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3232sec preferred_lft 3232sec
    inet6 fe80::5054:ff:fedc:5b1b/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node3 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:77:33:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.18/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3281sec preferred_lft 3281sec
    inet6 fe80::5054:ff:fe77:3399/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node4 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:61:72:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.17/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3570sec preferred_lft 3570sec
    inet6 fe80::5054:ff:fe61:7267/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node5 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:88:c2:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.227/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3540sec preferred_lft 3540sec
    inet6 fe80::5054:ff:fe88:c254/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node6 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:a8:bc:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.98/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 2150sec preferred_lft 2150sec
    inet6 fe80::5054:ff:fea8:bcdf/64 scope link 
       valid_lft forever preferred_lft forever
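
For the record, the same check works without the dummy_oneline/awk trick; a minimal sketch:

Code:

for i in $(seq 6); do
  # show which node is being queried, then fetch its eth0 address
  echo "vagrant ssh node${i} -c 'ip a show eth0'"
  sleep 5
  vagrant ssh "node${i}" -c 'ip a show eth0'
done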

Managing the MAC addresses of KVM virtual machines in a key-value file

Preface

It seemed useful, so I built it. Note that everything below is done as the root user, because passing a password through sudo is awkward.

Creating the command file

I want to avoid escaping and not think about it too much, so the command lives in its own file, with @ as the node-number placeholder.

Code:

[root@centos vx]# cat get_macaddr_cmd
virsh dumpxml vx_node@ | grep "mac address" | awk 'match($0, /[a-f0-9]{2}(:[a-f0-9]{2}){5}/) {print substr($0, RSTART, RLENGTH)}'

Creating the node file

The virtual machines are tracked in a node file.

Code:

[root@centos vx]# seq 3 | xargs -I{} bash -c 'echo node{}' > nodename
[root@centos vx]# cat nodename
node1
node2
node3

Bringing up three virtual machines

Code:

[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Execution example

Code:

[root@centos vx]# seq 3 | xargs -I{} bash -c 'sed -e s/@/{}/ get_macaddr_cmd | bash ' > macaddr
[root@centos vx]# cat macaddr
52:54:00:91:02:58
52:54:00:8d:fd:14
52:54:00:cd:fe:3a
[root@centos vx]# seq 3 | xargs -I{} bash -c 'echo node{}' > nodename
[root@centos vx]# cat nodename
node1
node2
node3
[root@centos vx]# paste -d ' ' nodename macaddr
node1 52:54:00:91:02:58
node2 52:54:00:8d:fd:14
node3 52:54:00:cd:fe:3a
[root@centos vx]# virsh dumpxml vx_node1 | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:91:02:58'/>
      <source network='vagrant-libvirt' bridge='virbr0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos vx]# virsh dumpxml vx_node2 | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:8d:fd:14'/>
      <source network='vagrant-libvirt' bridge='virbr0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos vx]# virsh dumpxml vx_node3 | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:cd:fe:3a'/>
      <source network='vagrant-libvirt' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>

I want to bundle all of this together

What's needed

Code:

[root@centos vx]# cat get_nodename_cmd
echo node@
[root@centos vx]# cat get_macaddr_cmd
virsh dumpxml vx_node@ | grep "mac address" | awk 'match($0, /[a-f0-9]{2}(:[a-f0-9]{2}){5}/) {print substr($0, RSTART, RLENGTH)}'
[root@centos vx]# seq 3 | xargs -I{} bash -c 'sed -e s/@/{}/ get_nodename_cmd | bash ' > nodename
[root@centos vx]# cat nodename
node1
node2
node3
[root@centos vx]# seq 3 | xargs -I{} bash -c 'sed -e s/@/{}/ get_macaddr_cmd | bash ' > macaddr
[root@centos vx]# cat macaddr
52:54:00:91:02:58
52:54:00:8d:fd:14
52:54:00:cd:fe:3a
[root@centos vx]# paste -d ' ' nodename macaddr
node1 52:54:00:91:02:58
node2 52:54:00:8d:fd:14
node3 52:54:00:cd:fe:3a
[root@centos vx]# cat cmd_list
get_nodename_cmd
get_macaddr_cmd
[root@centos vx]# cat script_run.sh
#!/bin/bash
RPT="$1"

while read line; do
  # derive the output file name from the command file name (get_<name>_cmd -> <name>)
  OUT=$(echo ${line} | sed -e s/\_/\\t/g | awk '{print $2}')
  rm -rf ${OUT}
  seq ${RPT} | while read rpt; do
    # substitute the node number for @, run the command, and append the result
    cat ${line} | sed -e s/@/${rpt}/ | bash >> ${OUT}
  done
done < <(cat cmd_list)

Execution example

Looks good to me.

Code:

[root@centos vx]# ./script_run.sh 3 && paste -d ' ' nodename macaddr
node1 52:54:00:91:02:58
node2 52:54:00:8d:fd:14
node3 52:54:00:cd:fe:3a
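
Since the point is key-value management, the paste output can also be loaded straight into a bash associative array for lookups; a small sketch (assumes bash 4+):

Code:

# build a node -> MAC map from the two files and look one entry up
declare -A MAC
while read node mac; do
  MAC["${node}"]="${mac}"
done < <(paste -d ' ' nodename macaddr)
echo "${MAC[node2]}"    # -> 52:54:00:8d:fd:14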

Getting the MAC address

References

grepで正規表現を用いてIPアドレス・MACアドレスを抽出する (extracting IP/MAC addresses with grep and regular expressions)

Execution example

MAC addresses are hexadecimal, hence A-F0-9, and awk needs hardly any escaping here, which is nice. Note that virsh prints the hex letters in lowercase, so the [a-f0-9] pattern used in get_macaddr_cmd above is actually the safer choice; the run below only matches because this particular address happens to contain no letters.

Code:

[oracle@centos vx]$ sudo virsh dumpxml vx_node1 | grep "mac address" | awk 'match($0, /[A-F0-9]{2}(:[A-F0-9]{2}){5}/) {print substr($0, RSTART, RLENGTH)}'
[sudo] oracle のパスワード:
52:54:00:91:63:57
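
A case-insensitive pattern sidesteps the case issue entirely; a sketch using GNU grep's -o/-i options:

Code:

sudo virsh dumpxml vx_node1 | grep -ioE '([0-9a-f]{2}:){5}[0-9a-f]{2}'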

Splitting libvirt-managed guest OSes into separate network segments

Preface

Vagrantfile

Code:

[oracle@centos vx]$ cat V*e
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
end

Checking the defaults

Code:

[oracle@centos vx]$ sudo virsh net-list --all
[sudo] oracle のパスワード:
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024236fc45c6	no		
virbr0		8000.525400fbdf5d	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
virbr100		8000.5254007a263c	yes		virbr100-nic
virbr101		8000.525400262f1a	yes		virbr101-nic
virbr102		8000.5254008d1c94	yes		virbr102-nic
[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.121.107
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.121.223
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.121.215
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Running vagrant up

Code:

[oracle@centos vx]$ time vagrant up
real	0m37.919s
user	0m5.003s
sys	0m0.529s

Checking the virtual machine config files

The source network will be rewritten from vagrant-libvirt to the custom networks.

Code:

[root@centos qemu]# cat /etc/libvirt/qemu/vx_node1.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:45:15:d8'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node2.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f3:f5:86'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node3.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f5:a0:20'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>

Modifying the virtual machine config files

Code:

[root@centos qemu]# virsh edit vx_node1
ドメイン vx_node1 XML の設定は編集されました 

[root@centos qemu]# virsh edit vx_node2
ドメイン vx_node2 XML の設定は編集されました 

[root@centos qemu]# virsh edit vx_node3
ドメイン vx_node3 XML の設定は編集されました 
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node1.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:45:15:d8'/>
      <source network='mynet100'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node2.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f3:f5:86'/>
      <source network='mynet101'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node3.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f5:a0:20'/>
      <source network='mynet102'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
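
virsh edit is interactive, which makes the change hard to repeat. A non-interactive sketch of the same rewrite (assuming vx_node1..3 should move to mynet100..102, as above):

Code:

for i in 1 2 3; do
  # dump the persistent config, swap the source network, and re-define the domain
  virsh dumpxml --inactive vx_node${i} > /tmp/vx_node${i}.xml
  sed -i "s/vagrant-libvirt/mynet$((99 + i))/" /tmp/vx_node${i}.xml
  virsh define /tmp/vx_node${i}.xml
done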

Restarting the virtual machines

Code:

[root@centos qemu]# exit
[oracle@centos vx]$ seq 3 | xargs -I@ bash -c 'vagrant reload node@'
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.

Checking the network after the modification

Code:

[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024236fc45c6	no		
virbr0		8000.525400df588a	yes		virbr0-nic
virbr100		8000.5254007a263c	yes		virbr100-nic
							vnet1
virbr101		8000.525400262f1a	yes		virbr101-nic
							vnet2
virbr102		8000.5254008d1c94	yes		virbr102-nic
							vnet0
[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.100.227
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.101.213
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.102.60
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Checking ssh connectivity

Ping to the KVM host 192.168.1.109 goes through, and external communication works.

Code:

[oracle@centos vx]$ vagrant ssh node1
[vagrant@node1 ~]$ ip a show     
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:45:15:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.227/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3443sec preferred_lft 3443sec
    inet6 fe80::5054:ff:fe45:15d8/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.419 ms  0.331 ms  0.271 ms
 2  192.168.1.1 (192.168.1.1)  1.700 ms  1.627 ms  1.575 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.594 ms  4.539 ms  5.109 ms
 4  210.139.125.169 (210.139.125.169)  5.012 ms  4.941 ms  4.878 ms
 5  210.165.249.177 (210.165.249.177)  5.741 ms  6.134 ms  5.941 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.639 ms  6.870 ms  6.854 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.023 ms  8.626 ms  8.590 ms
 8  72.14.205.32 (72.14.205.32)  8.417 ms  8.417 ms 72.14.202.229 (72.14.202.229)  8.540 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  8.865 ms  8.516 ms  8.829 ms
[vagrant@node1 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.081 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
[vagrant@node1 ~]$ logout
Connection to 192.168.100.227 closed.
[oracle@centos vx]$ vagrant ssh node2
[vagrant@node2 ~]$ ip a show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:f3:f5:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.213/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3414sec preferred_lft 3414sec
    inet6 fe80::5054:ff:fef3:f586/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node2 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.280 ms  0.187 ms  0.126 ms
 2  192.168.1.1 (192.168.1.1)  0.706 ms  1.579 ms  1.507 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.730 ms  4.657 ms  4.599 ms
 4  210.139.125.169 (210.139.125.169)  5.292 ms  5.237 ms  5.158 ms
 5  210.165.249.177 (210.165.249.177)  6.043 ms  6.128 ms  6.098 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.172 ms  8.597 ms  8.483 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.446 ms  8.360 ms  6.768 ms
 8  72.14.202.229 (72.14.202.229)  6.738 ms  6.662 ms  6.588 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  8.291 ms  8.819 ms  8.203 ms
[vagrant@node2 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.289 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
[vagrant@node2 ~]$ logout
Connection to 192.168.101.213 closed.
[oracle@centos vx]$ vagrant ssh node3
[vagrant@node3 ~]$ ip a show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:f5:a0:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.60/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3391sec preferred_lft 3391sec
    inet6 fe80::5054:ff:fef5:a020/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node3 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.252 ms  0.144 ms  0.087 ms
 2  192.168.1.1 (192.168.1.1)  1.172 ms  1.084 ms  1.037 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  5.054 ms  4.986 ms  4.916 ms
 4  210.139.125.169 (210.139.125.169)  5.000 ms  4.943 ms  4.897 ms
 5  210.165.249.177 (210.165.249.177)  5.982 ms  5.384 ms  5.971 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.510 ms  6.312 ms  6.193 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.042 ms  6.580 ms  6.543 ms
 8  72.14.202.229 (72.14.202.229)  6.268 ms 72.14.205.32 (72.14.205.32)  6.904 ms  7.049 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.454 ms  7.437 ms  7.424 ms
[vagrant@node3 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.100 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
[vagrant@node3 ~]$ logout
Connection to 192.168.102.60 closed.
[oracle@centos vx]$ 

Checking reachability between segments

I assumed this would not work at all, so why is it so uneven? node1 can reach both other segments and node3 can reach node2, but the remaining combinations get rejected by the respective gateways. Maybe it settles down with static IPs, or maybe VLAN tagging is the answer; the per-network FORWARD rules examined in the next section look like the likely cause of the rejects.

Code:

[oracle@centos vx]$ vagrant ssh node1
Last login: Thu May 30 21:45:13 2019 from 192.168.100.1
[vagrant@node1 ~]$ ping -c 1 192.168.101.213
PING 192.168.101.213 (192.168.101.213) 56(84) bytes of data.
64 bytes from 192.168.101.213: icmp_seq=1 ttl=63 time=0.245 ms

--- 192.168.101.213 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
[vagrant@node1 ~]$ ping -c 1 192.168.102.60
PING 192.168.102.60 (192.168.102.60) 56(84) bytes of data.
64 bytes from 192.168.102.60: icmp_seq=1 ttl=63 time=0.240 ms

--- 192.168.102.60 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
[vagrant@node1 ~]$ logout
Connection to 192.168.100.227 closed.
[oracle@centos vx]$ vagrant ssh node2
Last login: Thu May 30 21:46:59 2019 from 192.168.101.1
[vagrant@node2 ~]$ ping -c 1 192.168.100.227
PING 192.168.100.227 (192.168.100.227) 56(84) bytes of data.
From 192.168.101.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.100.227 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node2 ~]$ ping -c 1 192.168.102.60
PING 192.168.102.60 (192.168.102.60) 56(84) bytes of data.
From 192.168.101.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.102.60 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node2 ~]$ logout
Connection to 192.168.101.213 closed.
[oracle@centos vx]$ vagrant ssh node3
Last login: Thu May 30 21:48:05 2019 from 192.168.102.1
[vagrant@node3 ~]$ ping -c 1 192.168.100.227
PING 192.168.100.227 (192.168.100.227) 56(84) bytes of data.
From 192.168.102.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.100.227 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node3 ~]$ ping -c 1 192.168.101.213
PING 192.168.101.213 (192.168.101.213) 56(84) bytes of data.
64 bytes from 192.168.101.213: icmp_seq=1 ttl=63 time=0.561 ms

--- 192.168.101.213 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms

Afterword

Next, I want to make the IPs static.

Comparing the hand-made libvirt networks with the ones libvirt creates by default

Preface

I want to check the iptables rules that the virtualization host applies to traffic from the guest OSes.

Rules in the INPUT chain

Code:

[root@centos vx]# iptables -L INPUT
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
INPUT_direct  all  --  anywhere             anywhere            
INPUT_ZONES_SOURCE  all  --  anywhere             anywhere            
INPUT_ZONES  all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Rules in the OUTPUT chain

Code:

[root@centos vx]# iptables -L OUTPUT
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
OUTPUT_direct  all  --  anywhere             anywhere            

Rules in the FORWARD chain

Code:

[root@centos vx]# iptables -L FORWARD
Chain FORWARD (policy DROP)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             192.168.121.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.121.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere             192.168.102.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.102.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere             192.168.101.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.101.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere             192.168.100.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.100.0/24     anywhere            
ACCEPT     all  --  anywhere             anywhere            
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
FORWARD_direct  all  --  anywhere             anywhere            
FORWARD_IN_ZONES_SOURCE  all  --  anywhere             anywhere            
FORWARD_IN_ZONES  all  --  anywhere             anywhere            
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere             anywhere            
FORWARD_OUT_ZONES  all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Rules in the PREROUTING chain

Code:

[root@centos vx]# iptables -t nat -L PREROUTING
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
PREROUTING_direct  all  --  anywhere             anywhere            
PREROUTING_ZONES_SOURCE  all  --  anywhere             anywhere            
PREROUTING_ZONES  all  --  anywhere             anywhere            
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Rules in the POSTROUTING chain

Code:

[root@centos vx]# iptables -t nat -L POSTROUTING
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
RETURN     all  --  192.168.121.0/24     base-address.mcast.net/24 
RETURN     all  --  192.168.121.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.121.0/24    !192.168.121.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.121.0/24    !192.168.121.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.121.0/24    !192.168.121.0/24    
RETURN     all  --  192.168.102.0/24     base-address.mcast.net/24 
RETURN     all  --  192.168.102.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.102.0/24    !192.168.102.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.102.0/24    !192.168.102.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.102.0/24    !192.168.102.0/24    
RETURN     all  --  192.168.101.0/24     base-address.mcast.net/24 
RETURN     all  --  192.168.101.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.101.0/24    !192.168.101.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.101.0/24    !192.168.101.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.101.0/24    !192.168.101.0/24    
RETURN     all  --  192.168.100.0/24     base-address.mcast.net/24 
RETURN     all  --  192.168.100.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.100.0/24    !192.168.100.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.100.0/24    !192.168.100.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.100.0/24    !192.168.100.0/24    
MASQUERADE  all  --  172.17.0.0/16        anywhere            
POSTROUTING_direct  all  --  anywhere             anywhere            
POSTROUTING_ZONES_SOURCE  all  --  anywhere             anywhere            
POSTROUTING_ZONES  all  --  anywhere             anywhere            

Rules in the RAW chain

Code:

[root@centos vx]# iptables -t nat -nvL RAW
iptables: No chain/target/match by that name.
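
The error is because raw is a table, not a chain inside the nat table; listing it directly would look like this (a sketch, not run here):

Code:

iptables -t raw -nvL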

If I build my own networks, it looks like the POSTROUTING chain rules and the FORWARD chain rules are the ones that need attention.
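
iptables -L hides which bridge each rule is bound to; the verbose listing (or iptables-save) shows the in/out interfaces and the exact rules libvirt added per network. A sketch for pulling out just the custom-network rules:

Code:

# verbose FORWARD listing keeps the in/out interface columns
iptables -nvL FORWARD --line-numbers | grep -E 'virbr(0|10[0-2])'
# nat-table rules in iptables-save form, filtered to the custom subnets
iptables-save -t nat | grep -E '192\.168\.10[0-2]\.'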

Where the config files of libvirt-managed guest OSes live

The location

Code:

[root@centos vx]# find /etc/libvirt/qemu/ -name "*node*" 2>/dev/null
/etc/libvirt/qemu/vx_node1.xml
[root@centos vx]# virsh list
 Id    名前                         状態
----------------------------------------------------
 24    vx_node1                       実行中
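
The same persistent XML can also be printed without reading the file under /etc/libvirt/qemu directly; a sketch, using the same awk idiom as below:

Code:

virsh dumpxml --inactive vx_node1 | awk '/<interface/,/interface>/'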

When you want to modify it, use virsh edit

Code:

[root@centos vx]# virsh edit vx_node1

Since only the interface tag matters here, this is enough

Fiddling with the source tag should make it possible to add more virtual NICs to a guest OS.

Code:

[root@centos vx]# cat /etc/libvirt/qemu/vx_node1.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:ed:9e:1b'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
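
Rather than hand-editing another interface block, virsh can also add a NIC directly; a sketch (not run against this box) that would attach a second NIC on mynet101 to vx_node1:

Code:

virsh attach-interface vx_node1 network mynet101 --model virtio --config --live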

net-list

Code:

[root@centos vx]# virsh net-list
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

Creating libvirt-managed networks in one go

Execution example

Make a template, copy it, substitute, and that's it.
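
One note before the run: the networks below get defined with autostart off, so they stay defined but will not come back up on their own after a host reboot. If that matters, something like this (a sketch) marks them for autostart:

Code:

seq 100 102 | xargs -I{} virsh net-autostart mynet{}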

Code:

[root@centos ~]# cd /etc/libvirt/qemu/networks
[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------

[root@centos networks]# tree
.
├── autostart
└── tmpl
    └── mynet@.xml

2 directories, 1 file
[root@centos networks]# cat tmpl/my*
<network ipv6='yes'>
  <name>mynet@</name>
  <forward mode='nat'/>
  <bridge name='virbr@' stp='on' delay='0'/>
  <ip address='192.168.@.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.@.1' end='192.168.@.254'/>
    </dhcp>
  </ip>
</network>
[root@centos networks]# seq 100 102 | xargs -I{} bash -c 'cp $(pwd)/tmpl/mynet@.xml $(pwd)/mynet{}.xml && sed -i s/@/{}/g $(pwd)/mynet{}.xml'
[root@centos networks]# tree
.
├── autostart
├── mynet100.xml
├── mynet101.xml
├── mynet102.xml
└── tmpl
    └── mynet@.xml

2 directories, 4 files
[root@centos networks]# cat my*
<network ipv6='yes'>
  <name>mynet100</name>
  <forward mode='nat'/>
  <bridge name='virbr100' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.1' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
<network ipv6='yes'>
  <name>mynet101</name>
  <forward mode='nat'/>
  <bridge name='virbr101' stp='on' delay='0'/>
  <ip address='192.168.101.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.101.1' end='192.168.101.254'/>
    </dhcp>
  </ip>
</network>
<network ipv6='yes'>
  <name>mynet102</name>
  <forward mode='nat'/>
  <bridge name='virbr102' stp='on' delay='0'/>
  <ip address='192.168.102.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.102.1' end='192.168.102.254'/>
    </dhcp>
  </ip>
</network>
[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------

[root@centos networks]# seq 100 102 | xargs -I{} bash -c 'virsh net-define mynet{}.xml && virsh net-start mynet{}'
ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)

[root@centos networks]# brctl show
bridge name	bridge id		STP enabled	interfaces
brd0		8000.00d8612cf15b	no		eno1
docker0		8000.02420140344c	no		
virbr100		8000.52540049f529	yes		virbr100-nic
virbr101		8000.5254000d3917	yes		virbr101-nic
virbr102		8000.5254005384cf	yes		virbr102-nic
[root@centos networks]# seq 100 102 | xargs -I{} bash -c 'ip a show virbr{}'
95: virbr100: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:49:f5:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr100
       valid_lft forever preferred_lft forever
97: virbr101: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:0d:39:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.1/24 brd 192.168.101.255 scope global virbr101
       valid_lft forever preferred_lft forever
99: virbr102: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:53:84:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.1/24 brd 192.168.102.255 scope global virbr102
       valid_lft forever preferred_lft forever
[root@centos networks]# tree
.
├── autostart
├── mynet100.xml
├── mynet101.xml
├── mynet102.xml
└── tmpl
    └── mynet@.xml

2 directories, 4 files
[root@centos networks]# seq 100 102 | xargs -I{} bash -c 'virsh net-destroy mynet{} && virsh net-undefine mynet{}'
ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 の定義が削除されました

ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 の定義が削除されました

ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 の定義が削除されました

[root@centos networks]# tree
.
├── autostart
└── tmpl
    └── mynet@.xml

2 directories, 1 file
[root@centos networks]# brctl show
bridge name	bridge id		STP enabled	interfaces
brd0		8000.00d8612cf15b	no		eno1
docker0		8000.02420140344c	no		
[root@centos networks]# seq 100 102 | xargs -I{} bash -c 'ip a show virbr{}'
Device "virbr100" does not exist.
Device "virbr101" does not exist.
Device "virbr102" does not exist.
[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------