Sorting out the procedure for making virtual guest OS IPs static

Preface

My shell skills are still lacking.

Working directory

I tried consolidating everything into _main.sh.

[oracle@centos vx]$ pwd
/home/oracle/vx
[oracle@centos vx]$ ll
合計 16
-rwx------. 1 oracle docker 1721  6月  2 11:07 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwxr-xr-x. 1 oracle docker 3583  6月  6 23:28 _main.sh
-rwxr-xr-x. 1 oracle docker  155  6月  6 23:33 a.sh

Vagrantfile

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end

a.sh

Here's what's inside a.sh.

[oracle@centos vx]$ cat a.sh
#!/bin/bash
yum install -y net-tools
yum install -y lsof
yum install -y psmisc
yum install -y traceroute
yum install -y bridge-utils
yum install -y expect

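The six yum calls could equally be a single transaction; a one-line equivalent (a sketch, not what a.sh actually runs):

# Same package set as a.sh, installed in one yum transaction.
yum install -y net-tools lsof psmisc traceroute bridge-utils expect
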
State before making the IPs static

[oracle@centos vx]$ time vagrant up
real	0m55.164s
user	0m8.515s
sys	0m0.832s
[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.199
Host node2
  HostName 192.168.121.240
Host node3
  HostName 192.168.121.140
Host node4
  HostName 192.168.121.129
Host node5
  HostName 192.168.121.72
Host node6
  HostName 192.168.121.208
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		veth1486633
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400ecc4a6	yes		virbr100-nic
virbr101		8000.525400922d8b	yes		virbr101-nic
virbr102		8000.5254003f5854	yes		virbr102-nic

Kicking off _main.sh

[oracle@centos vx]$ su root
パスワード:
[root@centos vx]# ./_main.sh
ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 の定義が削除されました

ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 の定義が削除されました

ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 の定義が削除されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

ドメイン vx_node1 が vx_node1.xml から定義されました

ドメイン vx_node2 が vx_node2.xml から定義されました

ドメイン vx_node3 が vx_node3.xml から定義されました

ドメイン vx_node4 が vx_node4.xml から定義されました

ドメイン vx_node5 が vx_node5.xml から定義されました

ドメイン vx_node6 が vx_node6.xml から定義されました


vagrant reload

[oracle@centos vx]$ time vagrant reload
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.
==> node4: Halting domain...
==> node4: Starting domain.
==> node4: Waiting for domain to get an IP address...
==> node4: Waiting for SSH to become available...
==> node4: Creating shared folders metadata...
==> node4: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node4: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node4: flag to force provisioning. Provisioners marked to run always will still run.
==> node5: Halting domain...
==> node5: Starting domain.
==> node5: Waiting for domain to get an IP address...
==> node5: Waiting for SSH to become available...
==> node5: Creating shared folders metadata...
==> node5: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node5: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node5: flag to force provisioning. Provisioners marked to run always will still run.
==> node6: Halting domain...
==> node6: Starting domain.
==> node6: Waiting for domain to get an IP address...
==> node6: Waiting for SSH to become available...
==> node6: Creating shared folders metadata...
==> node6: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node6: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node6: flag to force provisioning. Provisioners marked to run always will still run.

real	1m42.110s
user	0m5.937s
sys	0m0.601s

State after making the IPs static

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.100.2
Host node2
  HostName 192.168.100.3
Host node3
  HostName 192.168.101.2
Host node4
  HostName 192.168.101.3
Host node5
  HostName 192.168.102.2
Host node6
  HostName 192.168.102.3
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		veth1486633
virbr0		8000.5254006df710	yes		virbr0-nic
virbr100		8000.525400106400	yes		virbr100-nic
							vnet3
							vnet4
virbr101		8000.5254009fdb10	yes		virbr101-nic
							vnet0
							vnet5
virbr102		8000.5254009e9318	yes		virbr102-nic
							vnet1
							vnet2

Connectivity check

External connectivity only (traceroute to 8.8.8.8 from every node)

[oracle@centos vx]$ while read line;do echo ${line};sleep 10; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "echo a |awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"traceroute 8.8.8.8\"\"\x5c\x27\"}'")
vagrant ssh node1 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.141 ms  0.117 ms  0.098 ms
 2  192.168.1.1 (192.168.1.1)  1.009 ms  1.000 ms  0.988 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.888 ms  4.874 ms  4.866 ms
 4  210.139.125.169 (210.139.125.169)  4.972 ms  4.941 ms  4.929 ms
 5  210.165.249.177 (210.165.249.177)  6.271 ms  5.383 ms  6.357 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  9.074 ms  7.535 ms  7.492 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.545 ms  6.645 ms  6.815 ms
 8  72.14.202.229 (72.14.202.229)  6.865 ms 72.14.205.32 (72.14.205.32)  6.279 ms 72.14.202.229 (72.14.202.229)  6.369 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.498 ms  11.164 ms  11.162 ms
vagrant ssh node2 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.177 ms  0.145 ms  0.130 ms
 2  192.168.1.1 (192.168.1.1)  2.078 ms  2.140 ms  2.126 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.553 ms  4.540 ms  5.028 ms
 4  210.139.125.169 (210.139.125.169)  5.133 ms  5.099 ms  5.600 ms
 5  210.165.249.177 (210.165.249.177)  6.771 ms  6.915 ms  7.005 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.796 ms  6.482 ms  6.469 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  6.912 ms  9.759 ms  9.742 ms
 8  72.14.202.229 (72.14.202.229)  9.722 ms 72.14.205.32 (72.14.205.32)  9.735 ms 72.14.202.229 (72.14.202.229)  9.851 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  9.791 ms  10.098 ms  10.236 ms
vagrant ssh node3 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.105 ms  0.081 ms  0.071 ms
 2  192.168.1.1 (192.168.1.1)  1.157 ms  1.221 ms  1.209 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.555 ms  4.547 ms  5.230 ms
 4  210.139.125.169 (210.139.125.169)  5.218 ms  4.581 ms  5.193 ms
 5  210.165.249.177 (210.165.249.177)  6.360 ms  5.170 ms  5.684 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  10.565 ms  10.326 ms  10.306 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.237 ms  6.757 ms  6.739 ms
 8  72.14.205.32 (72.14.205.32)  6.692 ms  6.684 ms  6.672 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.246 ms  7.269 ms  7.310 ms
vagrant ssh node4 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.117 ms  0.098 ms  0.086 ms
 2  192.168.1.1 (192.168.1.1)  1.337 ms  1.328 ms  1.315 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  5.040 ms  5.031 ms  5.015 ms
 4  210.139.125.169 (210.139.125.169)  5.064 ms  5.045 ms  5.037 ms
 5  210.165.249.177 (210.165.249.177)  6.263 ms  5.925 ms  6.818 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.944 ms  7.222 ms  7.423 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.024 ms  7.253 ms  7.240 ms
 8  72.14.205.32 (72.14.205.32)  6.980 ms 72.14.202.229 (72.14.202.229)  6.966 ms 72.14.205.32 (72.14.205.32)  7.013 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.952 ms  7.687 ms  6.928 ms
vagrant ssh node5 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.134 ms  0.112 ms  0.100 ms
 2  192.168.1.1 (192.168.1.1)  1.683 ms  1.674 ms  1.663 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.572 ms  4.561 ms  4.544 ms
 4  210.139.125.169 (210.139.125.169)  4.636 ms  5.272 ms  5.260 ms
 5  210.165.249.177 (210.165.249.177)  6.053 ms  6.396 ms  6.731 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.816 ms  6.286 ms  6.272 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  10.755 ms  6.881 ms  6.861 ms
 8  72.14.205.32 (72.14.205.32)  5.745 ms  6.561 ms 72.14.202.229 (72.14.202.229)  7.336 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.604 ms  6.460 ms  7.875 ms
vagrant ssh node6 -c 'traceroute 8.8.8.8'
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.099 ms  0.082 ms  0.067 ms
 2  192.168.1.1 (192.168.1.1)  1.620 ms  1.609 ms  1.599 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.549 ms  5.257 ms  4.521 ms
 4  210.139.125.169 (210.139.125.169)  5.314 ms  5.304 ms  5.290 ms
 5  210.165.249.177 (210.165.249.177)  6.331 ms  6.404 ms  6.728 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  9.462 ms  6.419 ms  6.397 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.266 ms  7.219 ms  7.347 ms
 8  72.14.205.32 (72.14.205.32)  7.070 ms  6.498 ms  7.053 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  6.993 ms  6.983 ms  9.076 ms

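In the one-liner above, awk assembles each vagrant ssh command as a string, which then gets piped to bash. A plain loop doing the same external check (an equivalent sketch, not the command actually run) would look like this:

# Sketch: traceroute to 8.8.8.8 from node1..node6, pausing between nodes.
for n in $(seq 6); do
  echo "vagrant ssh node${n} -c 'traceroute 8.8.8.8'"
  sleep 10
  vagrant ssh "node${n}" -c 'traceroute 8.8.8.8'
done
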
Contents of _main.sh

Tidying it up is a job for another day...

[oracle@centos vx]$ cat _main.sh
#!/bin/bash
OUTPUT=$(pwd)/output
CMD_DIR=$(pwd)/cmd
TMP_DIR=$(pwd)/tmp
AWK_DIR=$(pwd)/awk
LVT_DIR=/etc/libvirt/qemu
LVT_NET_DIR=${LVT_DIR}/networks

_offnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-destroy mynet${RN} && virsh net-undefine mynet${RN};
    done )
}

_mknet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      cp $(pwd)/tmpl/mynet@.xml $(pwd)/mynet${RN}.xml && sed -i s/@/${RN}/g $(pwd)/mynet${RN}.xml;
    done )
}

_onnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-define mynet${RN}.xml && virsh net-start mynet${RN};
    done )
}

_rebnet(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_NET_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh net-destroy mynet${RN} && virsh net-define mynet${RN}.xml && virsh net-start mynet${RN}
    done )
}

_buildnet(){
  START_RN=$1
  END_RN=$2
  _offnet ${START_RN} ${END_RN}
  _mknet ${START_RN} ${END_RN}
  _onnet ${START_RN} ${END_RN}
}

_rmdir(){
  rm -rf {${OUTPUT},${CMD_DIR},${TMP_DIR},${AWK_DIR}};  
}

_mkdir(){
  mkdir -p {${OUTPUT},${CMD_DIR},${TMP_DIR},${AWK_DIR}};
}

_initdir(){
  _rmdir
  _mkdir
}

_mkvxnm(){
  START_RN=$1
  END_RN=$2
  seq ${START_RN} ${END_RN} | while read RN;do
    echo ${LVT_DIR}/vx_node${RN}.xml >>${OUTPUT}/vx_node;
  done
}

_grp(){
  # Map each node to a group: node i (i=1..RN) gets group i%GRP, so
  # _grp 6 3 writes 0,0,1,1,2,2 (sorted) to output/grp.
  RN=$1
  GRP=$2
  while read line; do
     echo ${line} | sed -e s/GRP/${GRP}/ | bash;
  done < <(seq ${RN} | xargs -I@ bash -c 'echo echo $\(\(@%GRP\)\)') | sort >${OUTPUT}/grp
}

_join(){
  LFT=$1
  RGT=$2
  OPT_FNM=$3
  paste -d ' ' ${LFT} ${RGT} >${OPT_FNM};
}

_callcmd(){
  # Run every command template under cmd/ once per node 1..RPT,
  # substituting @ with the node number; output lands in output/<name>,
  # where <name> is the second "_"-separated token of the template file
  # name (get_macaddr_cmd -> output/macaddr).
  RPT=$1
  while read line; do
    OPT_FNM=$(basename ${line} | sed -e s/\_/\\t/g | awk '{print $2}')
    [ -e ${OUTPUT}/${OPT_FNM} ] && rm -f ${OUTPUT}/${OPT_FNM};
    seq ${RPT} | while read rpt; do
      cat ${line} | sed -e s/@/${rpt}/ | bash >>${OUTPUT}/${OPT_FNM};
    done
  done < <(find ${CMD_DIR}/* -name "*")
}

_mkcmd(){
  CMD_FNM=$1
  CMD=$2
  echo ${CMD} > ${CMD_DIR}/${CMD_FNM};
}

_split(){
  # Pair group numbers with vminfo lines and scatter them into
  # per-group files output/split_0, split_1, ...
  LFT=$1
  RGT=$2
  paste -d ' ' ${LFT} ${RGT} | awk '
    OUTPUT="'"${OUTPUT}"'"
    {print>OUTPUT"\x2f""split_"$1}
  ' 1>/dev/null 
}

_mk_def_ip_script_with_awk(){
  # Generate the awk script that turns a "group node mac" line into a
  # libvirt static-DHCP tag: line N becomes
  # <host mac='<mac>' name='<node>' ip='192.168.<third_octet>.<N+1>'/>.
  cat <<EOF >${AWK_DIR}/def_ip.awk
{
  gsub(/[^ ]+/,"\x27&\x27");
  print "<host mac="\$3" name="\$2" ip=\x27""192.168."third_octet"."NR+1"\x27""/>"
}
EOF
}

_call_def_ip_script_with_awk(){
  START_RN=$1
  END_RN=$2
  seq ${START_RN} ${END_RN} | while read RN;do
    gawk -v "third_octet=$((${RN}+100))" -f ${AWK_DIR}/def_ip.awk ${OUTPUT}/split_${RN} > ${OUTPUT}/def_host_tag_$((${RN}+100));
  done
}

_kvm_guest_modify_network(){
  START_RN=$1
  END_RN=$2
  
  # Stage each mynet*.xml in tmp/ with an "@" placeholder inserted
  # right after its <range .../> line.
  while read line; do
    OPT_FNM=$(basename ${line})
    sed -e "/range/a @" < <(cat ${line}) > ${TMP_DIR}/${OPT_FNM}
  done < <(find ${LVT_NET_DIR} -maxdepth 1 -name "mynet*")
  
  # Swap the placeholder for that network's <host> tag list and write
  # the result back under /etc/libvirt/qemu/networks.
  seq ${START_RN} ${END_RN} | while read RN; do
    SRC_FILE=${TMP_DIR}/mynet${RN}.xml;
    EMBED_STR=$(cat ${OUTPUT}/def_host_tag_${RN} | tr "\n" " ");
    TAR_FILE=${LVT_NET_DIR}/mynet${RN}.xml
    awk '{
      SRC_FILE="'"${SRC_FILE}"'"
      EMBED_STR="'"${EMBED_STR}"'"
      gsub("@",EMBED_STR);
      print;
    }' ${SRC_FILE} > ${LVT_NET_DIR}/mynet${RN}.xml
  done
}

_kvm_guest_modify_machine(){
  # vx_node_grp lines are "group path"; nl prepends a line number, so
  # in awk $2 is the group and $3 the domain XML path. Point each
  # domain's <source network> at mynet(group+100).
  while read line; do
    echo ${line} | awk '
      {
        net_name="mynet"$2+100;
        tar_file=$3;
        system("sed -i -e s/vagrant-libvirt/"net_name"/g "tar_file" ");
      }
    ' 
  done < <(cat ${OUTPUT}/vx_node_grp|nl) 1>/dev/null
}

_redefvm(){
  START_RN=$1
  END_RN=$2
  ( cd ${LVT_DIR} && \
    seq ${START_RN} ${END_RN} | while read RN;do
      virsh define vx_node${RN}.xml;
    done )
}

# Driver: rebuild the mynet networks, pair each node with its MAC and
# a group, generate static-DHCP <host> tags, embed them into the
# network XML, restart the networks, then point each domain at its
# group's network and redefine it.
_buildnet 100 102
_initdir
_mkcmd "get_macaddr_cmd" "virsh dumpxml vx_node@ | grep \"mac address\" | awk 'match(\$0, /[a-f0-9]{2}(:[a-f0-9]{2}){5}/) {print substr(\$0, RSTART, RLENGTH)}'"
_mkcmd "get_nodename_cmd" "echo node@"
_callcmd 6                            # -> output/macaddr, output/nodename
_join ${OUTPUT}/nodename ${OUTPUT}/macaddr ${OUTPUT}/vminfo
_grp 6 3                              # node i -> group i%3
_split ${OUTPUT}/grp ${OUTPUT}/vminfo # -> output/split_0..split_2
_mk_def_ip_script_with_awk
_call_def_ip_script_with_awk 0 2      # -> output/def_host_tag_100..102
_kvm_guest_modify_network 100 102
_rebnet 100 102
_mkvxnm 1 6                           # -> output/vx_node (domain XML paths)
_join ${OUTPUT}/grp ${OUTPUT}/vx_node ${OUTPUT}/vx_node_grp
_kvm_guest_modify_machine
_redefvm 1 6

Afterword

Saw it through... I'll review it later... off to bed...

The time I blew away the .vagrant folder while using Vagrant

Preface

Sometimes I'm just tired. It's supposed to rain tomorrow, so I got my run in, but I'm worn out. Deleted the folder on sheer momentum.

References

dhilst commented on 13 Apr 2018  

Checking the running virtual machines

[oracle@centos vx]$ vagrant up node1
Bringing machine 'node1' up with 'libvirt' provider...
Name `vx_node1` of domain about to create is already taken. Please try to run
`vagrant up` command again.
[oracle@centos vx]$ sudo virsh list --all
[sudo] oracle のパスワード:
 Id    名前                         状態
----------------------------------------------------
 57    vx_node3                       実行中
 58    vx_node1                       実行中
 59    vx_node5                       実行中
 60    vx_node6                       実行中
 61    vx_node4                       実行中
 62    vx_node2                       実行中

Force-stopping and undefining them will do it

[oracle@centos vx]$ seq 6 | xargs -I@ bash -c 'sudo virsh destroy vx_node@ && sudo virsh undefine vx_node@'

Checking the state

[oracle@centos vx]$ sudo virsh list --all
 Id    名前                         状態
----------------------------------------------------

Actually, that alone won't cut it

The box image gets imported and used as a template volume, and each virtual machine that is launched receives its own copy made from that template. Those per-VM copies all have to be deleted too.

[oracle@centos vx]$ sudo virsh vol-list default
 名前               パス                                  
------------------------------------------------------------------------------
 centos-VAGRANTSLASH-7_vagrant_box_image_0.img /var/lib/libvirt/images/centos-VAGRANTSLASH-7_vagrant_box_image_0.img
 vx_node1.img         /var/lib/libvirt/images/vx_node1.img    
 vx_node2.img         /var/lib/libvirt/images/vx_node2.img    
 vx_node3.img         /var/lib/libvirt/images/vx_node3.img    
 vx_node4.img         /var/lib/libvirt/images/vx_node4.img    
 vx_node5.img         /var/lib/libvirt/images/vx_node5.img    
 vx_node6.img         /var/lib/libvirt/images/vx_node6.img    
[oracle@centos vx]$ sudo virsh vol-list default | awk '/vx/ {print $1}' | xargs -I@ bash -c 'sudo virsh vol-delete --pool default @'
ボリューム vx_node1.img は削除されました

ボリューム vx_node2.img は削除されました

ボリューム vx_node3.img は削除されました

ボリューム vx_node4.img は削除されました

ボリューム vx_node5.img は削除されました

ボリューム vx_node6.img は削除されました

[oracle@centos vx]$ sudo virsh vol-list default
 名前               パス                                  
------------------------------------------------------------------------------
 centos-VAGRANTSLASH-7_vagrant_box_image_0.img /var/lib/libvirt/images/centos-VAGRANTSLASH-7_vagrant_box_image_0.img

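Putting the recovery together, the whole cleanup boils down to the two commands above (a recap sketch):

# Recap: force-stop and undefine every domain, then drop their volumes.
seq 6 | xargs -I@ bash -c 'sudo virsh destroy vx_node@ && sudo virsh undefine vx_node@'
sudo virsh vol-list default | awk '/vx/ {print $1}' | xargs -I@ bash -c 'sudo virsh vol-delete --pool default @'
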
Checking that a virtual machine comes up

[oracle@centos vx]$ vagrant up node1
Bringing machine 'node1' up with 'libvirt' provider...
==> node1: Creating image (snapshot of base box volume).
==> node1: Creating domain with the following settings...
==> node1:  -- Name:              vx_node1
==> node1:  -- Domain type:       kvm
==> node1:  -- Cpus:              1
==> node1:  -- Feature:           acpi
==> node1:  -- Feature:           apic
==> node1:  -- Feature:           pae
==> node1:  -- Memory:            2048M
==> node1:  -- Management MAC:    
==> node1:  -- Loader:            
==> node1:  -- Nvram:             
==> node1:  -- Base box:          centos/7
==> node1:  -- Storage pool:      default
==> node1:  -- Image:             /var/lib/libvirt/images/vx_node1.img (41G)
==> node1:  -- Volume Cache:      default
==> node1:  -- Kernel:            
==> node1:  -- Initrd:            
==> node1:  -- Graphics Type:     vnc
==> node1:  -- Graphics Port:     -1
==> node1:  -- Graphics IP:       127.0.0.1
==> node1:  -- Graphics Password: Not defined
==> node1:  -- Video Type:        cirrus
==> node1:  -- Video VRAM:        9216
==> node1:  -- Sound Type:	
==> node1:  -- Keymap:            en-us
==> node1:  -- TPM Path:          
==> node1:  -- INPUT:             type=mouse, bus=ps2
==> node1: Creating shared folders metadata...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
    node1: 
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
    node1: 
    node1: Inserting generated public key within guest...
    node1: Removing insecure key from the guest if it's present...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Setting hostname...
==> node1: Configuring and enabling network interfaces...
    node1: SSH address: 192.168.121.83:22
    node1: SSH username: vagrant
    node1: SSH auth method: private key
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Running provisioner: shell...
    node1: Running: /tmp/vagrant-shell20190607-1175-1kel8h.sh
    node1: Loaded plugins: fastestmirror
    node1: Determining fastest mirrors
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package net-tools.x86_64 0:2.0-0.24.20131004git.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package         Arch         Version                          Repository  Size
    node1: ================================================================================
    node1: Installing:
    node1:  net-tools       x86_64       2.0-0.24.20131004git.el7         base       306 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package
    node1: 
    node1: Total download size: 306 k
    node1: Installed size: 918 k
    node1: Downloading packages:
    node1: Public key for net-tools-2.0-0.24.20131004git.el7.x86_64.rpm is not installed
    node1: warning: /var/cache/yum/x86_64/7/base/packages/net-tools-2.0-0.24.20131004git.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
    node1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    node1: Importing GPG key 0xF4A80EB5:
    node1:  Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
    node1:  Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
    node1:  Package    : centos-release-7-6.1810.2.el7.centos.x86_64 (@anaconda)
    node1:  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : net-tools-2.0-0.24.20131004git.el7.x86_64                    1/1
    node1:  
    node1:   Verifying  : net-tools-2.0-0.24.20131004git.el7.x86_64                    1/1
    node1:  
    node1: 
    node1: Installed:
    node1:   net-tools.x86_64 0:2.0-0.24.20131004git.el7                                   
    node1: 
    node1: Complete!
    node1: Loaded plugins: fastestmirror
    node1: Loading mirror speeds from cached hostfile
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package lsof.x86_64 0:4.87-6.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package         Arch              Version                Repository       Size
    node1: ================================================================================
    node1: Installing:
    node1:  lsof            x86_64            4.87-6.el7             base            331 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package
    node1: 
    node1: Total download size: 331 k
    node1: Installed size: 927 k
    node1: Downloading packages:
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : lsof-4.87-6.el7.x86_64                                       1/1
    node1:  
    node1:   Verifying  : lsof-4.87-6.el7.x86_64                                       1/1
    node1:  
    node1: 
    node1: Installed:
    node1:   lsof.x86_64 0:4.87-6.el7                                                      
    node1: 
    node1: Complete!
    node1: Loaded plugins: fastestmirror
    node1: Loading mirror speeds from cached hostfile
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package psmisc.x86_64 0:22.20-15.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package          Arch             Version                 Repository      Size
    node1: ================================================================================
    node1: Installing:
    node1:  psmisc           x86_64           22.20-15.el7            base           141 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package
    node1: 
    node1: Total download size: 141 k
    node1: Installed size: 475 k
    node1: Downloading packages:
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : psmisc-22.20-15.el7.x86_64                                   1/1
    node1:  
    node1:   Verifying  : psmisc-22.20-15.el7.x86_64                                   1/1
    node1:  
    node1: 
    node1: Installed:
    node1:   psmisc.x86_64 0:22.20-15.el7                                                  
    node1: 
    node1: Complete!
    node1: Loaded plugins: fastestmirror
    node1: Loading mirror speeds from cached hostfile
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package traceroute.x86_64 3:2.0.22-2.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package            Arch           Version                   Repository    Size
    node1: ================================================================================
    node1: Installing:
    node1:  traceroute         x86_64         3:2.0.22-2.el7            base          59 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package
    node1: 
    node1: Total download size: 59 k
    node1: Installed size: 92 k
    node1: Downloading packages:
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : 3:traceroute-2.0.22-2.el7.x86_64                             1/1
    node1:  
    node1:   Verifying  : 3:traceroute-2.0.22-2.el7.x86_64                             1/1
    node1:  
    node1: 
    node1: Installed:
    node1:   traceroute.x86_64 3:2.0.22-2.el7                                              
    node1: 
    node1: Complete!
    node1: Loaded plugins: fastestmirror
    node1: Loading mirror speeds from cached hostfile
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package bridge-utils.x86_64 0:1.5-9.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package               Arch            Version              Repository     Size
    node1: ================================================================================
    node1: Installing:
    node1:  bridge-utils          x86_64          1.5-9.el7            base           32 k
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package
    node1: 
    node1: Total download size: 32 k
    node1: Installed size: 56 k
    node1: Downloading packages:
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : bridge-utils-1.5-9.el7.x86_64                                1/1
    node1:  
    node1:   Verifying  : bridge-utils-1.5-9.el7.x86_64                                1/1
    node1:  
    node1: 
    node1: Installed:
    node1:   bridge-utils.x86_64 0:1.5-9.el7                                               
    node1: 
    node1: Complete!
    node1: Loaded plugins: fastestmirror
    node1: Loading mirror speeds from cached hostfile
    node1:  * base: ftp-srv2.kddilabs.jp
    node1:  * extras: ftp-srv2.kddilabs.jp
    node1:  * updates: ftp-srv2.kddilabs.jp
    node1: Resolving Dependencies
    node1: --> Running transaction check
    node1: ---> Package expect.x86_64 0:5.45-14.el7_1 will be installed
    node1: --> Processing Dependency: libtcl8.5.so()(64bit) for package: expect-5.45-14.el7_1.x86_64
    node1: --> Running transaction check
    node1: ---> Package tcl.x86_64 1:8.5.13-8.el7 will be installed
    node1: --> Finished Dependency Resolution
    node1: 
    node1: Dependencies Resolved
    node1: 
    node1: ================================================================================
    node1:  Package         Arch            Version                    Repository     Size
    node1: ================================================================================
    node1: Installing:
    node1:  expect          x86_64          5.45-14.el7_1              base          262 k
    node1: Installing for dependencies:
    node1:  tcl             x86_64          1:8.5.13-8.el7             base          1.9 M
    node1: 
    node1: Transaction Summary
    node1: ================================================================================
    node1: Install  1 Package (+1 Dependent package)
    node1: 
    node1: Total download size: 2.1 M
    node1: Installed size: 4.9 M
    node1: Downloading packages:
    node1: --------------------------------------------------------------------------------
    node1: Total                                              1.1 MB/s | 2.1 MB  00:01     
    node1: Running transaction check
    node1: Running transaction test
    node1: Transaction test succeeded
    node1: Running transaction
    node1:   Installing : 1:tcl-8.5.13-8.el7.x86_64                                    1/2
    node1:  
    node1:   Installing : expect-5.45-14.el7_1.x86_64                                  2/2
    node1:  
    node1:   Verifying  : 1:tcl-8.5.13-8.el7.x86_64                                    1/2
    node1:  
    node1:   Verifying  : expect-5.45-14.el7_1.x86_64                                  2/2
    node1:  
    node1: 
    node1: Installed:
    node1:   expect.x86_64 0:5.45-14.el7_1                                                 
    node1: 
    node1: Dependency Installed:
    node1:   tcl.x86_64 1:8.5.13-8.el7                                                     
    node1: 
    node1: Complete!

Afterword

Tired.

The time I figured I'd make the VM IPs on KVM static

Preface

I thought I might as well make the IPs static.

References

KVMにDHCPで固定IPを設定する  
Libvirt/KVM で VM に静的な IP アドレスを配布する  
フィルタ言語 AWK (2)  
はじめてのAWK  
awkでOSのコマンドを実行させる  
KVM仮想マシンの名前変更と移動  

Vagrantfile

These blocks differ only in hostname, so it seems like they could be tidied up nicely, but I'll call this good for now and save that for later. I'm sure there's a way (one possible sketch follows the listing below).

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end

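Since the six config.vm.define blocks differ only in the node name, the whole file could be generated in a loop. A sketch of that (untested, but plain Ruby, so Vagrant should accept it):

# Same Vagrantfile, with node1..node6 produced by a loop instead of
# six copy-pasted blocks.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  (1..6).each do |i|
    config.vm.define "node#{i}" do |centos_on_kvm|
      centos_on_kvm.vm.provision :shell, :path => "a.sh"
      centos_on_kvm.vm.hostname = "node#{i}"
      centos_on_kvm.vm.provider "libvirt" do |spec|
        spec.memory = 2048
        spec.cpus = 1
      end
    end
  end
end
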
a.sh

Here's what's inside a.sh.

[oracle@centos vx]$ cat a.sh
#!/bin/bash
yum install -y net-tools
yum install -y lsof
yum install -y psmisc
yum install -y traceroute
yum install -y bridge-utils
yum install -y expect

Bringing up the libvirt-managed KVM virtual machines with Vagrant!

[oracle@centos vx]$ time vagrant up
real	0m54.930s
user	0m8.535s
sys	0m0.917s
[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)
node4                     running (libvirt)
node5                     running (libvirt)
node6                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Checking the network settings produced by the vagrant-libvirt plugin

[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.121.233
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.121.193
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.121.18
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node4
  HostName 192.168.121.17
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node4/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node5
  HostName 192.168.121.227
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node5/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node6
  HostName 192.168.121.98
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node6/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

ref_net.sh

[root@centos vx]# cat ref_net.sh
#!/bin/bash
# Embed the <host> tag list in TAR_FILE into the libvirt network
# definition SRC_FILE, right after its <range .../> line.
SRC_FILE="$1"
TAR_FILE="$2"

# Copy SRC_FILE to ./tmp, adding an "@" placeholder after the line
# that matches "range".
while read line; do
  sed -e "/range/a @" <<<${line};
done < <(cat ${SRC_FILE}) > tmp

# Flatten TAR_FILE to one line and substitute it for the placeholder,
# writing the result back over SRC_FILE.
cat ${TAR_FILE} | tr "\n" " " | xargs -I{} bash -c 'awk "{gsub(\"@\",\"{}\");print}" tmp' > ${SRC_FILE}
[root@centos vx]# ll ref_net.sh
-rwxr-xr-x. 1 oracle docker 232  6月  2 16:00 ref_net.sh

Checking the libvirt-managed networks

[root@centos vx]# cd /etc/libvirt/qemu/networks
[root@centos networks]# ll
合計 24
drwx------. 2 root root 4096  5月 26 16:59 autostart
-rw-------. 1 root root  591  6月  2 16:04 mynet100.xml
-rw-------. 1 root root  591  6月  1 16:22 mynet101.xml
-rw-------. 1 root root  591  6月  1 16:22 mynet102.xml
drwxr-xr-x. 2 root root 4096  6月  1 16:21 tmpl
-rw-------. 1 root root  603  6月  1 14:57 vagrant-libvirt.xml
[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'cat mynet@.xml'
bash -c cat mynet100.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet100
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet100</name>
  <uuid>a4541103-3100-44ef-91c2-7c624e2db293</uuid>
  <forward mode='nat'/>
  <bridge name='virbr100' stp='on' delay='0'/>
  <mac address='52:54:00:e3:ab:b1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
bash -c cat mynet101.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet101
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet101</name>
  <uuid>5681498d-dc77-4180-b24a-f8de3dacc458</uuid>
  <forward mode='nat'/>
  <bridge name='virbr101' stp='on' delay='0'/>
  <mac address='52:54:00:28:6e:9c'/>
  <ip address='192.168.101.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.101.2' end='192.168.101.254'/>
    </dhcp>
  </ip>
</network>
bash -c cat mynet102.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit mynet102
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>mynet102</name>
  <uuid>c8acdbb8-8802-4fa0-a443-ccf283de7aa7</uuid>
  <forward mode='nat'/>
  <bridge name='virbr102' stp='on' delay='0'/>
  <mac address='52:54:00:85:24:5e'/>
  <ip address='192.168.102.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.102.2' end='192.168.102.254'/>
    </dhcp>
  </ip>
</network>
[root@centos networks]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400e3abb1	yes		virbr100-nic
virbr101		8000.525400286e9c	yes		virbr101-nic
virbr102		8000.52540085245e	yes		virbr102-nic

Running ref_net.sh to apply the per-machine entries to each network

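The invocations themselves are not captured below; given ref_net.sh's arguments (the network XML, then a file holding the <host> tags for that network), they presumably looked something like this. The host-tag file names are assumptions:

# Hypothetical invocations (host-tag file names assumed), run from
# /etc/libvirt/qemu/networks where the mynet*.xml files live.
/home/oracle/vx/ref_net.sh mynet100.xml /path/to/host_tags_100
/home/oracle/vx/ref_net.sh mynet101.xml /path/to/host_tags_101
/home/oracle/vx/ref_net.sh mynet102.xml /path/to/host_tags_102
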
[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'cat mynet@.xml'
bash -c cat mynet100.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet100
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet100</name>
<uuid>a4541103-3100-44ef-91c2-7c624e2db293</uuid>
<forward mode='nat'/>
<bridge name='virbr100' stp='on' delay='0'/>
<mac address='52:54:00:e3:ab:b1'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.2' end='192.168.100.254'/>
<host mac='52:54:00:5d:4a:3f' name='node1' ip='192.168.100.2'/> <host mac='52:54:00:dc:5b:1b' name='node2' ip='192.168.100.3'/> 
</dhcp>
</ip>
</network>
bash -c cat mynet101.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet101
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet101</name>
<uuid>5681498d-dc77-4180-b24a-f8de3dacc458</uuid>
<forward mode='nat'/>
<bridge name='virbr101' stp='on' delay='0'/>
<mac address='52:54:00:28:6e:9c'/>
<ip address='192.168.101.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.101.2' end='192.168.101.254'/>
<host mac='52:54:00:77:33:99' name='node3' ip='192.168.101.2'/> <host mac='52:54:00:61:72:67' name='node4' ip='192.168.101.3'/> 
</dhcp>
</ip>
</network>
bash -c cat mynet102.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit mynet102
or other application using the libvirt API.
-->

<network ipv6='yes'>
<name>mynet102</name>
<uuid>c8acdbb8-8802-4fa0-a443-ccf283de7aa7</uuid>
<forward mode='nat'/>
<bridge name='virbr102' stp='on' delay='0'/>
<mac address='52:54:00:85:24:5e'/>
<ip address='192.168.102.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.102.2' end='192.168.102.254'/>
<host mac='52:54:00:88:c2:54' name='node5' ip='192.168.102.2'/> <host mac='52:54:00:a8:bc:df' name='node6' ip='192.168.102.3'/> 
</dhcp>
</ip>
</network>

Force-stop, redefine, and restart the custom networks

[root@centos networks]# seq 100 102 | xargs -t -I@ bash -c 'virsh net-destroy mynet@ && virsh net-define mynet@.xml && virsh net-start mynet@'
bash -c virsh net-destroy mynet100 && virsh net-define mynet100.xml && virsh net-start mynet100 
ネットワーク mynet100 は強制停止されました

ネットワーク mynet100 が mynet100.xml から定義されました

ネットワーク mynet100 が起動されました

bash -c virsh net-destroy mynet101 && virsh net-define mynet101.xml && virsh net-start mynet101 
ネットワーク mynet101 は強制停止されました

ネットワーク mynet101 が mynet101.xml から定義されました

ネットワーク mynet101 が起動されました

bash -c virsh net-destroy mynet102 && virsh net-define mynet102.xml && virsh net-start mynet102 
ネットワーク mynet102 は強制停止されました

ネットワーク mynet102 が mynet102.xml から定義されました

ネットワーク mynet102 が起動されました

[root@centos networks]# virsh net-list --all
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

Changing which network a virtual machine belongs to by editing the machine's definition file

Let's change it.

[root@centos networks]# cd /etc/libvirt/qemu/
[root@centos qemu]# ll
合計 28
drwx------. 4 root root 4096  6月  2 16:09 networks
-rw-------. 1 root root 2367  6月  2 11:12 vx_node1.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node2.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node3.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node4.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node5.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node6.xml
[root@centos qemu]# seq 6 | xargs -t -I@ bash -c "cat vx_node@.xml | awk '/<interface/,/interface>/'"
bash -c cat vx_node1.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:5d:4a:3f'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node2.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:dc:5b:1b'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node3.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:77:33:99'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node4.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:61:72:67'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node5.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:88:c2:54'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
bash -c cat vx_node6.xml | awk '/<interface/,/interface>/' 
    <interface type='network'>
      <mac address='52:54:00:a8:bc:df'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# seq 6 | xargs -t -I@ bash -c "cat vx_node@.xml | grep \"source network\""
bash -c cat vx_node1.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node2.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node3.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node4.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node5.xml | grep "source network" 
      <source network='vagrant-libvirt'/>
bash -c cat vx_node6.xml | grep "source network" 
      <source network='vagrant-libvirt'/>

Before the big change: ref_nett.sh, the script that will do it

awk is nice.

[root@centos qemu]# cat ref_nett.sh
#!/bin/bash
# Rewrite each vx_node*.xml so its <source network=.../> points at the
# mynet network for the node's group.
RPT="$1"
# List the RPT domain XML paths, one per line.
seq ${RPT} | xargs -I@ bash -c 'echo /etc/libvirt/qemu/vx_node@.xml'>vx_node
# Pair each path with the group number computed earlier (grp file).
paste -d ' ' /home/oracle/vx/grp vx_node > vx_node_grp

while read line; do
  echo ${line} | awk '
    {
      # Group N maps to network mynet(N+100).
      net_name="mynet"$1+100;
      # Swap vagrant-libvirt for the group network inside the XML.
      system("sed -i -e s/vagrant-libvirt/"net_name"/g "$2"");
    }
  '; 
done < <(cat vx_node_grp)

Making the change

[root@centos qemu]# pwd
/etc/libvirt/qemu
[root@centos qemu]# ll
合計 40
drwx------. 4 root root 4096  6月  2 16:09 networks
-rwxr-xr-x. 1 root root  366  6月  2 18:38 ref_nett.sh
-rw-r--r--. 1 root root  186  6月  2 18:35 vx_node
-rw-------. 1 root root 2367  6月  2 11:12 vx_node1.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node2.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node3.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node4.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node5.xml
-rw-------. 1 root root 2367  6月  2 11:12 vx_node6.xml
-rw-r--r--. 1 root root  198  6月  2 18:35 vx_node_grp
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'cat vx_node@.xml | grep -E "<name>|<source network|mynet"'
  <name>vx_node1</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node2</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node3</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node4</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node5</name>
      <source network='vagrant-libvirt'/>
  <name>vx_node6</name>
      <source network='vagrant-libvirt'/>
[root@centos qemu]# ./ref_nett.sh 6 | grep -E "<name>|<source network|mynet"
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'cat vx_node@.xml | grep -E "<name>|<source network|mynet"'
  <name>vx_node1</name>
      <source network='mynet100'/>
  <name>vx_node2</name>
      <source network='mynet100'/>
  <name>vx_node3</name>
      <source network='mynet101'/>
  <name>vx_node4</name>
      <source network='mynet101'/>
  <name>vx_node5</name>
      <source network='mynet102'/>
  <name>vx_node6</name>
      <source network='mynet102'/>
[root@centos qemu]# ll
合計 40
drwx------. 4 root root 4096  6月  2 16:09 networks
-rwxr-xr-x. 1 root root  366  6月  2 18:40 ref_nett.sh
-rw-r--r--. 1 root root  186  6月  2 18:40 vx_node
-rw-------. 1 root root 2360  6月  2 18:40 vx_node1.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node2.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node3.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node4.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node5.xml
-rw-------. 1 root root 2360  6月  2 18:40 vx_node6.xml
-rw-r--r--. 1 root root  198  6月  2 18:40 vx_node_grp
[root@centos qemu]# seq 6 | xargs -I@ bash -c 'virsh define vx_node@.xml'
ドメイン vx_node1 が vx_node1.xml から定義されました

ドメイン vx_node2 が vx_node2.xml から定義されました

ドメイン vx_node3 が vx_node3.xml から定義されました

ドメイン vx_node4 が vx_node4.xml から定義されました

ドメイン vx_node5 が vx_node5.xml から定義されました

ドメイン vx_node6 が vx_node6.xml から定義されました

State before restarting the running virtual machines

[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
							vnet3
							vnet4
							vnet5
virbr100		8000.525400e3abb1	yes		virbr100-nic
virbr101		8000.525400286e9c	yes		virbr101-nic
virbr102		8000.52540085245e	yes		virbr102-nic
[oracle@centos vx]$ sudo virsh net-list --all
[sudo] oracle のパスワード:
 名前               状態     自動起動  永続
----------------------------------------------------------
 mynet100             動作中  いいえ (no) はい (yes)
 mynet101             動作中  いいえ (no) はい (yes)
 mynet102             動作中  いいえ (no) はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.233
Host node2
  HostName 192.168.121.193
Host node3
  HostName 192.168.121.18
Host node4
  HostName 192.168.121.17
Host node5
  HostName 192.168.121.227
Host node6
  HostName 192.168.121.98

Restarting the running virtual machines

[oracle@centos vx]$ time vagrant reload
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.
==> node4: Halting domain...
==> node4: Starting domain.
==> node4: Waiting for domain to get an IP address...
==> node4: Waiting for SSH to become available...
==> node4: Creating shared folders metadata...
==> node4: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node4: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node4: flag to force provisioning. Provisioners marked to run always will still run.
==> node5: Halting domain...
==> node5: Starting domain.
==> node5: Waiting for domain to get an IP address...
==> node5: Waiting for SSH to become available...
==> node5: Creating shared folders metadata...
==> node5: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node5: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node5: flag to force provisioning. Provisioners marked to run always will still run.
==> node6: Halting domain...
==> node6: Starting domain.
==> node6: Waiting for domain to get an IP address...
==> node6: Waiting for SSH to become available...
==> node6: Creating shared folders metadata...
==> node6: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node6: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node6: flag to force provisioning. Provisioners marked to run always will still run.

real	4m1.601s
user	0m5.994s
sys	0m0.554s

State after restarting the running virtual machines

[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fb2b351e	no		
virbr0		8000.5254006df710	yes		virbr0-nic
virbr100		8000.525400e3abb1	yes		virbr100-nic
							vnet4
							vnet5
virbr101		8000.525400286e9c	yes		virbr101-nic
							vnet0
							vnet1
virbr102		8000.52540085245e	yes		virbr102-nic
							vnet2
							vnet3
[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.100.2
Host node2
  HostName 192.168.100.3
Host node3
  HostName 192.168.101.2
Host node4
  HostName 192.168.101.3
Host node5
  HostName 192.168.102.2
Host node6
  HostName 192.168.102.3
[oracle@centos vx]$ while read line;do echo ${line};sleep 5; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"ip a show eth0\"\"\x5c\x27\"}' dummy_oneline")
vagrant ssh node1 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:5d:4a:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3309sec preferred_lft 3309sec
    inet6 fe80::5054:ff:fe5d:4a3f/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node2 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:dc:5b:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.3/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3341sec preferred_lft 3341sec
    inet6 fe80::5054:ff:fedc:5b1b/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node3 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:77:33:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.2/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3375sec preferred_lft 3375sec
    inet6 fe80::5054:ff:fe77:3399/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node4 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:61:72:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.3/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3410sec preferred_lft 3410sec
    inet6 fe80::5054:ff:fe61:7267/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node5 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:88:c2:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.2/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3445sec preferred_lft 3445sec
    inet6 fe80::5054:ff:fe88:c254/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node6 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:a8:bc:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.3/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3479sec preferred_lft 3479sec
    inet6 fe80::5054:ff:fea8:bcdf/64 scope link 
       valid_lft forever preferred_lft forever

Time for a connectivity check, I guess.

Unlike last time, traffic can't cross segments anymore!!! Yes!!! So it seems that once the VM-side config files and the custom network definitions are properly redefined and restarted with fixed IPs, things land in a good state. This needs careful scrutiny. It's the important part.
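
To make that claim checkable, here is a rough sketch (my own, not part of the log above and untested here) that pings one same-segment peer and two cross-segment peers from node1, using the fixed IPs shown above. Only the first should answer; vagrant ssh -c passes the remote exit status back to the host.

Code:

# hypothetical isolation check using the fixed IPs shown above
for ip in 192.168.100.3 192.168.101.2 192.168.102.2; do
  if vagrant ssh node1 -c "ping -c 1 -W 2 ${ip}" >/dev/null 2>&1; then
    echo "${ip}: reachable"
  else
    echo "${ip}: unreachable"
  fi
done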

Separating the network segments of libvirt-managed guest OSes
Code:

[oracle@centos vx]$ vagrant ssh node1
Last login: Sun Jun  2 10:31:45 2019 from 192.168.100.1
[vagrant@node1 ~]$ seq 6 | xargs -t -I% bash -c 'traceroute node% && ping -c 1 node%'
bash -c traceroute node1 && ping -c 1 node1 
traceroute to node1 (127.0.0.1), 30 hops max, 60 byte packets
 1  node1 (127.0.0.1)  0.009 ms  0.003 ms  0.003 ms
PING node1 (127.0.0.1) 56(84) bytes of data.
64 bytes from node1 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
bash -c traceroute node2 && ping -c 1 node2 
traceroute to node2 (192.168.100.3), 30 hops max, 60 byte packets
 1  node2 (192.168.100.3)  0.209 ms  0.191 ms  0.180 ms
PING node2 (192.168.100.3) 56(84) bytes of data.
64 bytes from node2 (192.168.100.3): icmp_seq=1 ttl=64 time=0.101 ms

--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
bash -c traceroute node3 && ping -c 1 node3 
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
bash -c traceroute node4 && ping -c 1 node4 
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
bash -c traceroute node5 && ping -c 1 node5 
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
bash -c traceroute node6 && ping -c 1 node6 
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node1 ~]$ logout
Connection to 192.168.100.2 closed.
[oracle@centos vx]$ vagrant ssh node2
Last login: Sun Jun  2 10:33:18 2019 from 192.168.100.1
[vagrant@node2 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
traceroute to node1 (192.168.100.2), 30 hops max, 60 byte packets
 1  node1 (192.168.100.2)  0.247 ms  0.215 ms  0.205 ms
PING node1 (192.168.100.2) 56(84) bytes of data.
64 bytes from node1 (192.168.100.2): icmp_seq=1 ttl=64 time=0.152 ms

--- node1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
traceroute to node2 (127.0.0.1), 30 hops max, 60 byte packets
 1  node2 (127.0.0.1)  0.006 ms  0.002 ms  0.002 ms
PING node2 (127.0.0.1) 56(84) bytes of data.
64 bytes from node2 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node2 ~]$ logout
Connection to 192.168.100.3 closed.
[oracle@centos vx]$ vagrant ssh node3
[vagrant@node3 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
traceroute to node3 (127.0.0.1), 30 hops max, 60 byte packets
 1  node3 (127.0.0.1)  0.010 ms  0.003 ms  0.002 ms
PING node3 (127.0.0.1) 56(84) bytes of data.
64 bytes from node3 (127.0.0.1): icmp_seq=1 ttl=64 time=0.008 ms

--- node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.008/0.008/0.008/0.000 ms
traceroute to node4 (192.168.101.3), 30 hops max, 60 byte packets
 1  node4 (192.168.101.3)  0.323 ms  0.301 ms  0.269 ms
PING node4 (192.168.101.3) 56(84) bytes of data.
64 bytes from node4 (192.168.101.3): icmp_seq=1 ttl=64 time=0.096 ms

--- node4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node3 ~]$ logout
Connection to 192.168.101.2 closed.
[oracle@centos vx]$ vagrant ssh node4
Last login: Sun Jun  2 03:00:49 2019 from 192.168.121.1
[vagrant@node4 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
traceroute to node3 (192.168.101.2), 30 hops max, 60 byte packets
 1  node3 (192.168.101.2)  0.090 ms  0.077 ms  0.068 ms
PING node3 (192.168.101.2) 56(84) bytes of data.
64 bytes from node3 (192.168.101.2): icmp_seq=1 ttl=64 time=0.108 ms

--- node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
traceroute to node4 (127.0.0.1), 30 hops max, 60 byte packets
 1  node4 (127.0.0.1)  0.010 ms  0.005 ms  0.002 ms
PING node4 (127.0.0.1) 56(84) bytes of data.
64 bytes from node4 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
node5: Name or service not known
Cannot handle "host" cmdline arg `node5' on position 1 (argc 1)
node6: Name or service not known
Cannot handle "host" cmdline arg `node6' on position 1 (argc 1)
[vagrant@node4 ~]$ logout
Connection to 192.168.101.3 closed.
[oracle@centos vx]$ vagrant ssh node5
[vagrant@node5 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
traceroute to node5 (127.0.0.1), 30 hops max, 60 byte packets
 1  node5 (127.0.0.1)  0.009 ms  0.003 ms  0.002 ms
PING node5 (127.0.0.1) 56(84) bytes of data.
64 bytes from node5 (127.0.0.1): icmp_seq=1 ttl=64 time=0.006 ms

--- node5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.006/0.006/0.006/0.000 ms
traceroute to node6 (192.168.102.3), 30 hops max, 60 byte packets
 1  node6 (192.168.102.3)  0.388 ms  0.368 ms  0.358 ms
PING node6 (192.168.102.3) 56(84) bytes of data.
64 bytes from node6 (192.168.102.3): icmp_seq=1 ttl=64 time=0.186 ms

--- node6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
[vagrant@node5 ~]$ logout
Connection to 192.168.102.2 closed.
[oracle@centos vx]$ vagrant ssh node6
[vagrant@node6 ~]$ seq 6 | xargs -I% bash -c 'traceroute node% && ping -c 1 node%'
node1: Name or service not known
Cannot handle "host" cmdline arg `node1' on position 1 (argc 1)
node2: Name or service not known
Cannot handle "host" cmdline arg `node2' on position 1 (argc 1)
node3: Name or service not known
Cannot handle "host" cmdline arg `node3' on position 1 (argc 1)
node4: Name or service not known
Cannot handle "host" cmdline arg `node4' on position 1 (argc 1)
traceroute to node5 (192.168.102.2), 30 hops max, 60 byte packets
 1  node5 (192.168.102.2)  0.084 ms  0.075 ms  0.067 ms
PING node5 (192.168.102.2) 56(84) bytes of data.
64 bytes from node5 (192.168.102.2): icmp_seq=1 ttl=64 time=0.084 ms

--- node5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
traceroute to node6 (127.0.0.1), 30 hops max, 60 byte packets
 1  node6 (127.0.0.1)  0.007 ms  0.002 ms  0.002 ms
PING node6 (127.0.0.1) 56(84) bytes of data.
64 bytes from node6 (127.0.0.1): icmp_seq=1 ttl=64 time=0.005 ms

--- node6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.005/0.005/0.005/0.000 ms
[vagrant@node6 ~]$ logout
Connection to 192.168.102.3 closed.
[oracle@centos vx]$ 

Afterword

Networking is fun!!! awk is fun!!! This is getting interesting!!! I'd like to bundle this up into a script later. Next I want to build a homemade iptables router in this virtual environment!!! That's all. Thank you very much.

On realizing you can run commands with vagrant ssh -c

Vagrantfile

These blocks differ only in the hostname, so it looks like they could be tidied up neatly, but I'll call this good for now and save that for later. There must be a way (see the loop sketch after the listing below).

Code:

[oracle@centos vx]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node4" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node4"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node5" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node5"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
  config.vm.define "node6" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node6"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 1
    end
  end
end
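
As hinted above, the six define blocks differ only in the node name, so a loop should collapse them. A sketch (untested) of the same Vagrantfile:

Code:

# -*- mode: ruby -*-
# vi: set ft=ruby :
# sketch: the same six nodes, defined in a loop instead of six copies

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  (1..6).each do |i|
    config.vm.define "node#{i}" do |centos_on_kvm|
      centos_on_kvm.vm.provision :shell, :path => "a.sh"
      centos_on_kvm.vm.hostname = "node#{i}"
      centos_on_kvm.vm.provider "libvirt" do |spec|
        spec.memory = 2048
        spec.cpus = 1
      end
    end
  end
end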

Bringing up VMs on libvirt-managed KVM with the vagrant tool!

Code:

[oracle@centos vx]$ time vagrant up
real	0m54.930s
user	0m8.535s
sys	0m0.917s
[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)
node4                     running (libvirt)
node5                     running (libvirt)
node6                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Checking IPs with vagrant ssh-config

The DHCP service hands out IPs dynamically, so they change on every boot.

Code:

[oracle@centos vx]$ vagrant ssh-config | grep -E "^Host|\s{1,}Host"
Host node1
  HostName 192.168.121.233
Host node2
  HostName 192.168.121.193
Host node3
  HostName 192.168.121.18
Host node4
  HostName 192.168.121.17
Host node5
  HostName 192.168.121.227
Host node6
  HostName 192.168.121.98
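
As an aside, the Host/HostName pairs can be flattened to one line per node with a little awk (a sketch in the same spirit as the grep above):

Code:

# prints e.g. "node1 192.168.121.233"
vagrant ssh-config | awk '/^Host /{host=$2} /HostName/{print host, $2}'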

Checking IPs with vagrant ssh -c

Piping into the bash command seems to be the way to go. Echo each command first so you can tell which node's IP you're looking at; since each connection takes over a second, wait 5 seconds before moving on to the next node. The dummy_oneline file is a little trick so awk has a line to process (a plain-loop version follows the output below).

Code:

[oracle@centos vx]$ cat dummy_oneline
dummy_oneline
[oracle@centos vx]$ while read line;do echo ${line};sleep 5; echo ${line}|bash;done < <(seq 6 | xargs -I@ bash -c "awk '{print \"vagrant ssh node@ -c \"\"\x5c\x27\"\"ip a show eth0\"\"\x5c\x27\"}' dummy_oneline")
vagrant ssh node1 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:5d:4a:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.233/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3435sec preferred_lft 3435sec
    inet6 fe80::5054:ff:fe5d:4a3f/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node2 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:dc:5b:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.193/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3232sec preferred_lft 3232sec
    inet6 fe80::5054:ff:fedc:5b1b/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node3 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:77:33:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.18/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3281sec preferred_lft 3281sec
    inet6 fe80::5054:ff:fe77:3399/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node4 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:61:72:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.17/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3570sec preferred_lft 3570sec
    inet6 fe80::5054:ff:fe61:7267/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node5 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:88:c2:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.227/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3540sec preferred_lft 3540sec
    inet6 fe80::5054:ff:fe88:c254/64 scope link 
       valid_lft forever preferred_lft forever
vagrant ssh node6 -c 'ip a show eth0'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:a8:bc:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.98/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 2150sec preferred_lft 2150sec
    inet6 fe80::5054:ff:fea8:bcdf/64 scope link 
       valid_lft forever preferred_lft forever
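
For comparison, a plain for loop (a sketch; same effect without the awk/dummy_oneline detour):

Code:

for i in $(seq 6); do
  echo "vagrant ssh node${i} -c 'ip a show eth0'"
  sleep 5
  vagrant ssh "node${i}" -c 'ip a show eth0'
done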

On syncing Vagrant-managed guest OS machines with the host OS machine

Example run

So you stop it with Ctrl+C. Work done on the guest OS can't be pushed back to the host OS this way. And file sync over NFS and the like is so slow it gets depressing. I need to think about a few things.

Code:

[oracle@centos vx]$ ll
合計 24
-rw-r--r--. 1 oracle docker  767  5月 27 00:50 Vagrantfile
-rw-r--r--. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwxr-xr-x. 1 oracle docker  133  5月 26 23:48 a.sh
-rwxr-xr-x. 1 oracle docker  221  5月 27 00:55 s.sh
drwxr-xr-x. 3 oracle docker 4096  5月 27 01:02 share
drwxr-xr-x. 2 oracle docker 4096  5月 27 00:59 tmpl
[oracle@centos vx]$ vagrant ssh node1 -c "ls -l /vagrant"
total 16
-rw-r--r--. 1 vagrant vagrant  767 May 26 15:50 Vagrantfile
-rw-r--r--. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rw-r--r--. 1 vagrant vagrant    0 May 26 22:31 a
-rwxr-xr-x. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwxr-xr-x. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwxr-xr-x. 3 vagrant vagrant   18 May 26 16:02 share
drwxr-xr-x. 2 vagrant vagrant   20 May 26 15:59 tmpl
Connection to 192.168.121.179 closed.
[oracle@centos vx]$ vagrant rsync-auto
==> node1: Doing an initial rsync...
==> node1: Rsyncing folder: /home/oracle/vx/ => /vagrant
==> node1: Watching: /home/oracle/vx
^C[oracle@centos vx]$ 
[oracle@centos vx]$ vagrant ssh node1 -c "ls -l /vagrant"
total 16
-rw-r--r--. 1 vagrant vagrant  767 May 26 15:50 Vagrantfile
-rw-r--r--. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rwxr-xr-x. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwxr-xr-x. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwxr-xr-x. 3 vagrant vagrant   18 May 26 16:02 share
drwxr-xr-x. 2 vagrant vagrant   20 May 26 15:59 tmpl
Connection to 192.168.121.179 closed.
[oracle@centos vx]$ vagrant ssh node1 -c "touch /vagrant/a"
Connection to 192.168.121.179 closed.
[oracle@centos vx]$ vagrant ssh node1 -c "ls -l /vagrant"
total 16
-rw-r--r--. 1 vagrant vagrant  767 May 26 15:50 Vagrantfile
-rw-r--r--. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rw-rw-r--. 1 vagrant vagrant    0 May 26 22:38 a
-rwxr-xr-x. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwxr-xr-x. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwxr-xr-x. 3 vagrant vagrant   18 May 26 16:02 share
drwxr-xr-x. 2 vagrant vagrant   20 May 26 15:59 tmpl
Connection to 192.168.121.179 closed.
[oracle@centos vx]$ ll
合計 24
-rw-r--r--. 1 oracle docker  767  5月 27 00:50 Vagrantfile
-rw-r--r--. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwxr-xr-x. 1 oracle docker  133  5月 26 23:48 a.sh
-rwxr-xr-x. 1 oracle docker  221  5月 27 00:55 s.sh
drwxr-xr-x. 3 oracle docker 4096  5月 27 01:02 share
drwxr-xr-x. 2 oracle docker 4096  5月 27 00:59 tmpl
[oracle@centos vx]$ vagrant rsync-auto
==> node1: Doing an initial rsync...
==> node1: Rsyncing folder: /home/oracle/vx/ => /vagrant
==> node1: Watching: /home/oracle/vx
^C[oracle@centos vx]$ ll
合計 24
-rw-r--r--. 1 oracle docker  767  5月 27 00:50 Vagrantfile
-rw-r--r--. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwxr-xr-x. 1 oracle docker  133  5月 26 23:48 a.sh
-rwxr-xr-x. 1 oracle docker  221  5月 27 00:55 s.sh
drwxr-xr-x. 3 oracle docker 4096  5月 27 01:02 share
drwxr-xr-x. 2 oracle docker 4096  5月 27 00:59 tmpl
[oracle@centos vx]$ vagrant ssh node1 -c "ls -l /vagrant"
total 16
-rw-r--r--. 1 vagrant vagrant  767 May 26 15:50 Vagrantfile
-rw-r--r--. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rwxr-xr-x. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwxr-xr-x. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwxr-xr-x. 3 vagrant vagrant   18 May 26 16:02 share
drwxr-xr-x. 2 vagrant vagrant   20 May 26 15:59 tmpl
Connection to 192.168.121.179 closed.

rsync to the rescue

The Vagrantfile. The shared folder can be the same across every guest OS machine, I think. The default /vagrant share gets disabled.

Code:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
end
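
With the rsync-type folder above, host-side changes get pushed out on demand rather than continuously, e.g.:

Code:

vagrant rsync          # host -> /mnt, all machines
vagrant rsync node1    # or just one machine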

Installing vagrant-rsync-back

Code:

[oracle@centos vx]$ vagrant plugin install vagrant-rsync-back
Installing the 'vagrant-rsync-back' plugin. This can take a few minutes...
Fetching: vagrant-rsync-back-0.0.1.gem (100%)
Installed the plugin 'vagrant-rsync-back (0.0.1)'!

Merging the guest OS's synced files back to the host OS

Code:

[oracle@centos vx]$ vagrant ssh node1
Last login: Mon May 27 12:10:04 2019 from 192.168.121.1
[vagrant@node1 ~]$ cd /mnt
[vagrant@node1 mnt]$ ll
total 16
-rwx------. 1 vagrant vagrant  878 May 27 12:09 Vagrantfile
-rwx------. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rwx------. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwx------. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwx------. 3 vagrant vagrant   18 May 26 16:02 share
drwx------. 2 vagrant vagrant   20 May 26 15:59 tmpl
[vagrant@node1 mnt]$ touch a
[vagrant@node1 mnt]$ ll
total 16
-rwx------. 1 vagrant vagrant  878 May 27 12:09 Vagrantfile
-rwx------. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rw-rw-r--. 1 vagrant vagrant    0 May 27 12:14 a
-rwx------. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwx------. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwx------. 3 vagrant vagrant   18 May 26 16:02 share
drwx------. 2 vagrant vagrant   20 May 26 15:59 tmpl
[vagrant@node1 mnt]$ logout
Connection to 192.168.121.147 closed.
[oracle@centos vx]$ ll
合計 24
-rwx------. 1 oracle docker  878  5月 27 21:09 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwx------. 1 oracle docker  133  5月 26 23:48 a.sh
-rwx------. 1 oracle docker  221  5月 27 00:55 s.sh
drwx------. 3 oracle docker 4096  5月 27 01:02 share
drwx------. 2 oracle docker 4096  5月 27 00:59 tmpl
[oracle@centos vx]$ vagrant rsync-back
==> node1: Rsyncing folder: /mnt/ => /home/oracle/vx
[oracle@centos vx]$ ll
合計 24
-rwx------. 1 oracle docker  878  5月 27 21:09 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rw-rw-r--. 1 oracle docker    0  5月 27 21:14 a
-rwx------. 1 oracle docker  133  5月 26 23:48 a.sh
-rwx------. 1 oracle docker  221  5月 27 00:55 s.sh
drwx------. 3 oracle docker 4096  5月 27 01:02 share
drwx------. 2 oracle docker 4096  5月 27 00:59 tmpl

Merging the host OS's synced files into the guest OS

Code:

[oracle@centos vx]$ vagrant ssh node1
[vagrant@node1 ~]$ cd /mnt
[vagrant@node1 mnt]$ ll
total 16
-rwx------. 1 vagrant vagrant  878 May 27 12:09 Vagrantfile
-rwx------. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rwx------. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rwx------. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwx------. 3 vagrant vagrant   18 May 26 16:02 share
drwx------. 2 vagrant vagrant   20 May 26 15:59 tmpl
[vagrant@node1 mnt]$ touch aine
[vagrant@node1 mnt]$ ll
total 16
-rwx------. 1 vagrant vagrant  878 May 27 12:09 Vagrantfile
-rwx------. 1 vagrant vagrant 3015 May 23 20:16 Vagrantfile_org
-rwx------. 1 vagrant vagrant  133 May 26 14:48 a.sh
-rw-rw-r--. 1 vagrant vagrant    0 May 27 12:10 aine
-rwx------. 1 vagrant vagrant  221 May 26 15:55 s.sh
drwx------. 3 vagrant vagrant   18 May 26 16:02 share
drwx------. 2 vagrant vagrant   20 May 26 15:59 tmpl
[vagrant@node1 mnt]$ logout
Connection to 192.168.121.147 closed.
[oracle@centos vx]$ ll
合計 24
-rwx------. 1 oracle docker  878  5月 27 21:09 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwx------. 1 oracle docker  133  5月 26 23:48 a.sh
-rwx------. 1 oracle docker  221  5月 27 00:55 s.sh
drwx------. 3 oracle docker 4096  5月 27 01:02 share
drwx------. 2 oracle docker 4096  5月 27 00:59 tmpl
[oracle@centos vx]$ vagrant rsync
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
[oracle@centos vx]$ ll
合計 24
-rwx------. 1 oracle docker  878  5月 27 21:09 Vagrantfile
-rwx------. 1 oracle docker 3015  5月 24 05:16 Vagrantfile_org
-rwx------. 1 oracle docker  133  5月 26 23:48 a.sh
-rwx------. 1 oracle docker  221  5月 27 00:55 s.sh
drwx------. 3 oracle docker 4096  5月 27 01:02 share
drwx------. 2 oracle docker 4096  5月 27 00:59 tmpl
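
The two directions could be wrapped into a tiny helper (a sketch; assumes the vagrant-rsync-back plugin above is installed, and that merging without --delete semantics is acceptable):

Code:

#!/bin/bash
# pull guest-side /mnt changes to the host, then push the host state back out
set -eu
vagrant rsync-back
vagrant rsync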

On bringing up multiple guest OS machines with vagrant

Foreword

For starters, I just brought up several machines to see what would happen.

References

Launching multiple virtual machines with Vagrant: Multi-Machine configuration

Vagrantfile

Code:

[oracle@centos vx]$ cat V*e
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
end

Execution log

Code:

[oracle@centos vx]$ vagrant up
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
==> node1: Creating image (snapshot of base box volume).
==> node1: Creating domain with the following settings...
==> node1:  -- Name:              vx_node1
==> node1:  -- Domain type:       kvm
==> node3: Creating image (snapshot of base box volume).
==> node2: Creating image (snapshot of base box volume).
==> node1:  -- Cpus:              2
==> node2: Creating domain with the following settings...
==> node1:  -- Feature:           acpi
==> node3: Creating domain with the following settings...
==> node2:  -- Name:              vx_node2
==> node3:  -- Name:              vx_node3
==> node1:  -- Feature:           apic
==> node3:  -- Domain type:       kvm
==> node2:  -- Domain type:       kvm
==> node3:  -- Cpus:              2
==> node1:  -- Feature:           pae
==> node3:  -- Feature:           acpi
==> node2:  -- Cpus:              2
==> node3:  -- Feature:           apic
==> node1:  -- Memory:            2048M
==> node2:  -- Feature:           acpi
==> node3:  -- Feature:           pae
==> node1:  -- Management MAC:    
==> node2:  -- Feature:           apic
==> node3:  -- Memory:            2048M
==> node1:  -- Loader:            
==> node2:  -- Feature:           pae
==> node3:  -- Management MAC:    
==> node2:  -- Memory:            2048M
==> node1:  -- Nvram:             
==> node3:  -- Loader:            
==> node2:  -- Management MAC:    
==> node1:  -- Base box:          centos/7
==> node3:  -- Nvram:             
==> node1:  -- Storage pool:      default
==> node2:  -- Loader:            
==> node3:  -- Base box:          centos/7
==> node1:  -- Image:             /var/lib/libvirt/images/vx_node1.img (41G)
==> node3:  -- Storage pool:      default
==> node2:  -- Nvram:             
==> node1:  -- Volume Cache:      default
==> node2:  -- Base box:          centos/7
==> node3:  -- Image:             /var/lib/libvirt/images/vx_node3.img (41G)
==> node1:  -- Kernel:            
==> node2:  -- Storage pool:      default
==> node1:  -- Initrd:            
==> node3:  -- Volume Cache:      default
==> node2:  -- Image:             /var/lib/libvirt/images/vx_node2.img (41G)
==> node1:  -- Graphics Type:     vnc
==> node3:  -- Kernel:            
==> node2:  -- Volume Cache:      default
==> node3:  -- Initrd:            
==> node1:  -- Graphics Port:     -1
==> node3:  -- Graphics Type:     vnc
==> node2:  -- Kernel:            
==> node1:  -- Graphics IP:       127.0.0.1
==> node3:  -- Graphics Port:     -1
==> node1:  -- Graphics Password: Not defined
==> node2:  -- Initrd:            
==> node3:  -- Graphics IP:       127.0.0.1
==> node1:  -- Video Type:        cirrus
==> node3:  -- Graphics Password: Not defined
==> node2:  -- Graphics Type:     vnc
==> node1:  -- Video VRAM:        9216
==> node3:  -- Video Type:        cirrus
==> node1:  -- Sound Type:	
==> node2:  -- Graphics Port:     -1
==> node3:  -- Video VRAM:        9216
==> node1:  -- Keymap:            en-us
==> node2:  -- Graphics IP:       127.0.0.1
==> node3:  -- Sound Type:	
==> node1:  -- TPM Path:          
==> node2:  -- Graphics Password: Not defined
==> node3:  -- Keymap:            en-us
==> node1:  -- INPUT:             type=mouse, bus=ps2
==> node2:  -- Video Type:        cirrus
==> node3:  -- TPM Path:          
==> node3:  -- INPUT:             type=mouse, bus=ps2
==> node2:  -- Video VRAM:        9216
==> node2:  -- Sound Type:	
==> node2:  -- Keymap:            en-us
==> node2:  -- TPM Path:          
==> node2:  -- INPUT:             type=mouse, bus=ps2
==> node1: Creating shared folders metadata...
==> node3: Creating shared folders metadata...
==> node3: Starting domain.
==> node1: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node1: Waiting for domain to get an IP address...
==> node2: Creating shared folders metadata...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node1: Waiting for SSH to become available...
==> node2: Waiting for SSH to become available...
    node1: 
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
    node3: 
    node3: Vagrant insecure key detected. Vagrant will automatically replace
    node3: this with a newly generated keypair for better security.
    node2: 
    node2: Vagrant insecure key detected. Vagrant will automatically replace
    node2: this with a newly generated keypair for better security.
    node3: 
    node3: Inserting generated public key within guest...
    node1: 
    node1: Inserting generated public key within guest...
    node2: 
    node2: Inserting generated public key within guest...
    node3: Removing insecure key from the guest if it's present...
    node1: Removing insecure key from the guest if it's present...
    node2: Removing insecure key from the guest if it's present...
    node3: Key inserted! Disconnecting and reconnecting using new SSH key...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
    node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node3: Setting hostname...
==> node1: Setting hostname...
==> node2: Setting hostname...
==> node3: Configuring and enabling network interfaces...
==> node1: Configuring and enabling network interfaces...
==> node2: Configuring and enabling network interfaces...
    node3: SSH address: 192.168.121.209:22
    node3: SSH username: vagrant
    node3: SSH auth method: private key
    node1: SSH address: 192.168.121.32:22
    node1: SSH username: vagrant
    node1: SSH auth method: private key
    node2: SSH address: 192.168.121.99:22
    node2: SSH username: vagrant
    node2: SSH auth method: private key
==> node3: Rsyncing folder: /home/oracle/vx/ => /vagrant
==> node1: Rsyncing folder: /home/oracle/vx/ => /vagrant
==> node2: Rsyncing folder: /home/oracle/vx/ => /vagrant

Checking that they're up

Code:

[oracle@centos vx]$ vagrant status
Current machine states:

node1                     running (libvirt)
node2                     running (libvirt)
node3                     running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Checking the networks

Code:

[oracle@centos vx]$ sudo virsh net-list
[sudo] oracle のパスワード:
 名前               状態     自動起動  永続
----------------------------------------------------------
 default              動作中  はい (yes)  はい (yes)
 vagrant-libvirt      動作中  いいえ (no) はい (yes)

Checking the bridge configuration

Code:

[root@centos networks]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242b22c2a85	no		
virbr0		8000.525400bad5c4	yes		virbr0-nic
virbr1		8000.52540042696d	yes		virbr1-nic
							vnet0
							vnet1
							vnet2

Checking the NIC configuration on the host OS side

Code:

[oracle@centos vx]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:d8:61:2c:f1:5b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.109/24 brd 192.168.1.255 scope global noprefixroute eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::865a:b7c8:6a76:1722/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:b2:2c:2a:85 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b2ff:fe2c:2a85/64 scope link 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ba:d5:c4 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ba:d5:c4 brd ff:ff:ff:ff:ff:ff
90: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:42:69:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.1/24 brd 192.168.121.255 scope global virbr1
       valid_lft forever preferred_lft forever
91: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr1 state DOWN group default qlen 1000
    link/ether 52:54:00:42:69:6d brd ff:ff:ff:ff:ff:ff
96: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:14:54:0f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe14:540f/64 scope link 
       valid_lft forever preferred_lft forever
97: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:4f:71:b5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe4f:71b5/64 scope link 
       valid_lft forever preferred_lft forever
98: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:82:1d:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe82:1d03/64 scope link 
       valid_lft forever preferred_lft forever

Checking the masquerade settings between the host OS and the guest OSes

Code:

[root@centos networks]# sudo iptables -t nat -L -n | grep -A 16 "Chain POSTROUTING (policy ACCEPT)"
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
RETURN     all  --  192.168.121.0/24     224.0.0.0/24        
RETURN     all  --  192.168.121.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.121.0/24    !192.168.121.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.121.0/24    !192.168.121.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.121.0/24    !192.168.121.0/24    
RETURN     all  --  192.168.122.0/24     224.0.0.0/24        
RETURN     all  --  192.168.122.0/24     255.255.255.255     
MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24    
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
POSTROUTING_direct  all  --  0.0.0.0/0            0.0.0.0/0           
POSTROUTING_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0           
POSTROUTING_ZONES  all  --  0.0.0.0/0            0.0.0.0/0           
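
To pull out just the rules for the vagrant-libvirt segment, -S prints them in rule-spec form (a quick sketch):

Code:

sudo iptables -t nat -S POSTROUTING | grep 192.168.121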

Checking the libvirt-managed networks

Code:

[root@centos vx]# cd /etc/libvirt/qemu/networks
[root@centos networks]# ll
合計 12
drwx------. 2 root root 4096  5月 15 06:08 autostart
-rw-------. 1 root root  576  5月 12 16:11 default.xml
-rw-------. 1 root root  603  5月 24 06:45 vagrant-libvirt.xml
[root@centos networks]# cat default.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit default
or other application using the libvirt API.
-->

<network>
  <name>default</name>
  <uuid>431ebd86-8c41-4a77-91be-9dc7e8cb097e</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:ba:d5:c4'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
[root@centos networks]# cat vagrant-libvirt.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit vagrant-libvirt
or other application using the libvirt API.
-->

<network ipv6='yes'>
  <name>vagrant-libvirt</name>
  <uuid>a86854f5-a240-42c6-b7da-ecd457aea19e</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:42:69:6d'/>
  <ip address='192.168.121.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.121.1' end='192.168.121.254'/>
    </dhcp>
  </ip>
</network>
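
The WARNING comments say not to touch these files directly; the same definitions can also be read through the libvirt API:

Code:

sudo virsh net-dumpxml default
sudo virsh net-dumpxml vagrant-libvirt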

SSH into guest OS node 1

Code:

[oracle@centos vx]$ vagrant ssh node1
[vagrant@node1 ~]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4f:71:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.32/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3384sec preferred_lft 3384sec
    inet6 fe80::5054:ff:fe4f:71b5/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$ su root
Password: vagrant
[root@node1 vagrant]# yum install -y net-tools
[root@node1 vagrant]# netstat -anp | grep -E "Active|Proto|ssh"
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2633/sshd           
tcp        0      0 192.168.121.32:22       192.168.121.1:38226     ESTABLISHED 5422/sshd: vagrant  
tcp6       0      0 :::22                   :::*                    LISTEN      2633/sshd           
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ]         DGRAM                    31193    5422/sshd: vagrant   
unix  3      [ ]         STREAM     CONNECTED     31196    5425/sshd: vagrant@  
unix  3      [ ]         STREAM     CONNECTED     31197    5422/sshd: vagrant   
unix  3      [ ]         STREAM     CONNECTED     21321    2633/sshd            
[root@node1 vagrant]# yum install -y lsof
[root@node1 vagrant]# lsof -i -nP | grep -E "COMMAND|ssh"
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd     2633    root    3u  IPv4  21394      0t0  TCP *:22 (LISTEN)
sshd     2633    root    4u  IPv6  21403      0t0  TCP *:22 (LISTEN)
sshd     5422    root    3u  IPv4  31111      0t0  TCP 192.168.121.32:22->192.168.121.1:38226 (ESTABLISHED)
sshd     5425 vagrant    3u  IPv4  31111      0t0  TCP 192.168.121.32:22->192.168.121.1:38226 (ESTABLISHED)
[root@node1 vagrant]# yum install -y psmisc
[root@node1 vagrant]# pstree -p
systemd(1)-+-NetworkManager(4841)-+-dhclient(4861)
           |                      |-{NetworkManager}(4842)
           |                      `-{NetworkManager}(4844)
           |-agetty(1762)
           |-agetty(1932)
           |-auditd(1203)---{auditd}(1205)
           |-chronyd(1511)
           |-crond(1761)
           |-dbus-daemon(1471)---{dbus-daemon}(1587)
           |-gssproxy(1464)-+-{gssproxy}(1490)
           |                |-{gssproxy}(1491)
           |                |-{gssproxy}(1492)
           |                |-{gssproxy}(1493)
           |                `-{gssproxy}(1494)
           |-irqbalance(1437)
           |-master(2877)-+-pickup(2878)
           |              `-qmgr(2879)
           |-polkitd(1439)-+-{polkitd}(1538)
           |               |-{polkitd}(1619)
           |               |-{polkitd}(1632)
           |               |-{polkitd}(1643)
           |               |-{polkitd}(1645)
           |               `-{polkitd}(1648)
           |-rpcbind(1674)
           |-rsyslogd(2634)-+-{rsyslogd}(2639)
           |                `-{rsyslogd}(2641)
           |-sshd(2633)---sshd(5422)---sshd(5425)---bash(5426)---su(5448)---bash(5452)---pstree(5554)
           |-systemd-journal(1146)
           |-systemd-logind(1449)
           |-systemd-udevd(1179)
           `-tuned(2632)-+-{tuned}(2822)
                         |-{tuned}(2823)
                         |-{tuned}(2824)
                         `-{tuned}(2876)
[root@node1 vagrant]# yum install -y traceroute
[root@node1 vagrant]# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.121.1)  0.095 ms  0.074 ms  0.053 ms
 2  192.168.1.1 (192.168.1.1)  1.838 ms  1.820 ms  1.814 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.495 ms  5.015 ms  5.003 ms
 4  210.139.125.169 (210.139.125.169)  4.990 ms  5.083 ms  5.542 ms
 5  210.165.249.177 (210.165.249.177)  6.561 ms  5.832 ms  6.640 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.476 ms  8.465 ms  8.428 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  9.317 ms  7.725 ms  7.701 ms
 8  72.14.202.229 (72.14.202.229)  7.327 ms  7.319 ms  7.663 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.740 ms  7.879 ms  7.931 ms
[root@node1 vagrant]# exit
[vagrant@node1 ~]$ logout
Connection to 192.168.121.32 closed.
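
Node 2 and node 3 below get the same treatment by hand; the same check could also be driven from the host in one shot (a sketch; needs net-tools installed on each node first, and relies on the vagrant user's passwordless sudo):

Code:

seq 3 | xargs -I@ vagrant ssh node@ -c 'sudo netstat -anp | grep -E "Active|Proto|ssh"'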

SSH into guest OS node 2

Code:

[oracle@centos vx]$ vagrant ssh node2
[vagrant@node2 ~]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:82:1d:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.99/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 2468sec preferred_lft 2468sec
    inet6 fe80::5054:ff:fe82:1d03/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node2 ~]$ su root
Password: vagrant
[root@node2 vagrant]# yum install -y net-tools
[root@node2 vagrant]# netstat -anp | grep -E "Active|Proto|ssh"
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2630/sshd           
tcp        0      0 192.168.121.99:22       192.168.121.1:37726     ESTABLISHED 5421/sshd: vagrant  
tcp6       0      0 :::22                   :::*                    LISTEN      2630/sshd           
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  3      [ ]         STREAM     CONNECTED     20384    2630/sshd            
unix  3      [ ]         STREAM     CONNECTED     31362    5424/sshd: vagrant@  
unix  2      [ ]         DGRAM                    31359    5421/sshd: vagrant   
unix  3      [ ]         STREAM     CONNECTED     31363    5421/sshd: vagrant   


[root@node2 vagrant]# yum install -y lsof
[root@node2 vagrant]# lsof -i -nP | grep -E "COMMAND|ssh"
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd     2630    root    3u  IPv4  20470      0t0  TCP *:22 (LISTEN)
sshd     2630    root    4u  IPv6  20479      0t0  TCP *:22 (LISTEN)
sshd     5421    root    3u  IPv4  30566      0t0  TCP 192.168.121.99:22->192.168.121.1:37726 (ESTABLISHED)
sshd     5424 vagrant    3u  IPv4  30566      0t0  TCP 192.168.121.99:22->192.168.121.1:37726 (ESTABLISHED)
[root@node2 vagrant]# yum install -y psmisc
[root@node2 vagrant]# pstree -p
systemd(1)-+-NetworkManager(4838)-+-dhclient(4863)
           |                      |-{NetworkManager}(4839)
           |                      `-{NetworkManager}(4841)
           |-agetty(1844)
           |-agetty(1938)
           |-auditd(1203)---{auditd}(1205)
           |-chronyd(1556)
           |-crond(1862)
           |-dbus-daemon(1428)---{dbus-daemon}(1491)
           |-gssproxy(1550)-+-{gssproxy}(1577)
           |                |-{gssproxy}(1578)
           |                |-{gssproxy}(1579)
           |                |-{gssproxy}(1580)
           |                `-{gssproxy}(1581)
           |-irqbalance(1422)
           |-master(2873)-+-pickup(2875)
           |              `-qmgr(2876)
           |-polkitd(1499)-+-{polkitd}(1560)
           |               |-{polkitd}(1573)
           |               |-{polkitd}(1585)
           |               |-{polkitd}(1593)
           |               |-{polkitd}(1596)
           |               `-{polkitd}(1607)
           |-rpcbind(1430)
           |-rsyslogd(2633)-+-{rsyslogd}(2637)
           |                `-{rsyslogd}(2638)
           |-sshd(2630)---sshd(5421)---sshd(5424)---bash(5425)---su(5447)---bash(5451)---pstree(5520)
           |-systemd-journal(1145)
           |-systemd-logind(1528)
           |-systemd-udevd(1176)
           `-tuned(2629)-+-{tuned}(2826)
                         |-{tuned}(2827)
                         |-{tuned}(2831)
                         `-{tuned}(2877)
[root@node2 vagrant]# yum install -y traceroute
[root@node2 vagrant]# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.121.1)  0.095 ms  0.074 ms  0.053 ms
 2  192.168.1.1 (192.168.1.1)  1.838 ms  1.820 ms  1.814 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.495 ms  5.015 ms  5.003 ms
 4  210.139.125.169 (210.139.125.169)  4.990 ms  5.083 ms  5.542 ms
 5  210.165.249.177 (210.165.249.177)  6.561 ms  5.832 ms  6.640 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.476 ms  8.465 ms  8.428 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  9.317 ms  7.725 ms  7.701 ms
 8  72.14.202.229 (72.14.202.229)  7.327 ms  7.319 ms  7.663 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.740 ms  7.879 ms  7.931 ms
[root@node2 vagrant]# exit
[vagrant@node2 ~]$ logout
Connection to 192.168.121.99 closed.

SSH into guest OS node 3

Code:

[oracle@centos vx]$ vagrant ssh node3
[vagrant@node3 ~]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:14:54:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.209/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 2261sec preferred_lft 2261sec
    inet6 fe80::5054:ff:fe14:540f/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node3 ~]$ su root
Password: 
[root@node3 vagrant]# yum install -y net-tools


[root@node3 vagrant]# netstat -anp | grep -E "Active|Proto|ssh"
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2614/sshd           
tcp        0      0 192.168.121.209:22      192.168.121.1:58210     ESTABLISHED 5404/sshd: vagrant  
tcp6       0      0 :::22                   :::*                    LISTEN      2614/sshd           
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  3      [ ]         STREAM     CONNECTED     22553    2614/sshd            
unix  3      [ ]         STREAM     CONNECTED     31015    5407/sshd: vagrant@  
unix  3      [ ]         STREAM     CONNECTED     31016    5404/sshd: vagrant   
unix  2      [ ]         DGRAM                    31012    5404/sshd: vagrant   
[root@node3 vagrant]# yum install -y lsof
[root@node3 vagrant]# lsof -i -nP | grep -E "COMMAND|ssh"
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd     2614    root    3u  IPv4  21823      0t0  TCP *:22 (LISTEN)
sshd     2614    root    4u  IPv6  21835      0t0  TCP *:22 (LISTEN)
sshd     5404    root    3u  IPv4  31849      0t0  TCP 192.168.121.209:22->192.168.121.1:58210 (ESTABLISHED)
sshd     5407 vagrant    3u  IPv4  31849      0t0  TCP 192.168.121.209:22->192.168.121.1:58210 (ESTABLISHED)
[root@node3 vagrant]# yum install -y psmisc
[root@node3 vagrant]# pstree -p
systemd(1)-+-NetworkManager(4821)-+-dhclient(4844)
           |                      |-{NetworkManager}(4822)
           |                      `-{NetworkManager}(4824)
           |-agetty(1799)
           |-agetty(1987)
           |-auditd(1201)---{auditd}(1202)
           |-chronyd(1459)
           |-crond(1793)
           |-dbus-daemon(1413)---{dbus-daemon}(1472)
           |-gssproxy(1422)-+-{gssproxy}(1442)
           |                |-{gssproxy}(1443)
           |                |-{gssproxy}(1444)
           |                |-{gssproxy}(1445)
           |                `-{gssproxy}(1446)
           |-irqbalance(1375)
           |-master(2857)-+-pickup(2858)
           |              `-qmgr(2859)
           |-polkitd(1391)-+-{polkitd}(1470)
           |               |-{polkitd}(1491)
           |               |-{polkitd}(1501)
           |               |-{polkitd}(1514)
           |               |-{polkitd}(1515)
           |               `-{polkitd}(1532)
           |-rpcbind(1381)
           |-rsyslogd(2617)-+-{rsyslogd}(2621)
           |                `-{rsyslogd}(2622)
           |-sshd(2614)---sshd(5404)---sshd(5407)---bash(5408)---su(5431)---bash(5435)---pstree(5503)
           |-systemd-journal(1144)
           |-systemd-logind(1488)
           |-systemd-udevd(1175)
           `-tuned(2612)-+-{tuned}(2802)
                         |-{tuned}(2803)
                         |-{tuned}(2804)
                         `-{tuned}(2817)
[root@node3 vagrant]# yum install -y traceroute
[root@node3 vagrant]# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.121.1)  0.095 ms  0.074 ms  0.053 ms
 2  192.168.1.1 (192.168.1.1)  1.838 ms  1.820 ms  1.814 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.495 ms  5.015 ms  5.003 ms
 4  210.139.125.169 (210.139.125.169)  4.990 ms  5.083 ms  5.542 ms
 5  210.165.249.177 (210.165.249.177)  6.561 ms  5.832 ms  6.640 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.476 ms  8.465 ms  8.428 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  9.317 ms  7.725 ms  7.701 ms
 8  72.14.202.229 (72.14.202.229)  7.327 ms  7.319 ms  7.663 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.740 ms  7.879 ms  7.931 ms
[root@node3 vagrant]# exit
[vagrant@node3 ~]$ logout
Connection to 192.168.121.209 closed.

Afterword

This is getting fun!!