
Splitting libvirt-managed guest VMs into separate network segments

Preface

Vagrantfile


[oracle@centos vx]$ cat V*e
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder '.', '/mnt', type: 'rsync'
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.define "node1" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node1"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node2" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node2"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
  config.vm.define "node3" do |centos_on_kvm|
    centos_on_kvm.vm.provision :shell, :path => "a.sh"
    centos_on_kvm.vm.hostname = "node3"
    centos_on_kvm.vm.provider "libvirt" do |spec|
      spec.memory = 2048
      spec.cpus = 2
    end
  end
end

Checking the defaults


[oracle@centos vx]$ sudo virsh net-list --all
[sudo] password for oracle:
 Name                 State     Autostart   Persistent
----------------------------------------------------------
 mynet100             active    no          yes
 mynet101             active    no          yes
 mynet102             active    no          yes
 vagrant-libvirt      active    no          yes
[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024236fc45c6	no		
virbr0		8000.525400fbdf5d	yes		virbr0-nic
							vnet0
							vnet1
							vnet2
virbr100		8000.5254007a263c	yes		virbr100-nic
virbr101		8000.525400262f1a	yes		virbr101-nic
virbr102		8000.5254008d1c94	yes		virbr102-nic
[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.121.107
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.121.223
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.121.215
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL
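The post does not show how the mynet100–mynet102 networks were created. A plausible definition, inferred from the bridge names above and the 192.168.100–102.0/24 DHCP addresses seen later, is a NAT network like this (a sketch; the file name, subnet, and DHCP range are assumptions):

```shell
# Hypothetical libvirt network definition for mynet100 (mynet101/102 analogous);
# bridge name and subnet are inferred from the brctl/ssh-config output in this post.
cat > mynet100.xml <<'EOF'
<network>
  <name>mynet100</name>
  <forward mode='nat'/>
  <bridge name='virbr100' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
EOF
# Register it persistently without autostart (matching `virsh net-list` above):
#   sudo virsh net-define mynet100.xml
#   sudo virsh net-start mynet100
```

`net-define` makes the network persistent but not autostarted, which matches the "no / yes" columns in the `virsh net-list --all` output.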

Running vagrant up


[oracle@centos vx]$ time vagrant up
real	0m37.919s
user	0m5.003s
sys	0m0.529s

Checking the VM configuration files

The source network of each domain will be rewritten from the default to the custom networks.


[root@centos qemu]# cat /etc/libvirt/qemu/vx_node1.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:45:15:d8'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node2.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f3:f5:86'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node3.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f5:a0:20'/>
      <source network='vagrant-libvirt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>

Editing the VM configuration files


[root@centos qemu]# virsh edit vx_node1
Domain vx_node1 XML configuration edited.

[root@centos qemu]# virsh edit vx_node2
Domain vx_node2 XML configuration edited.

[root@centos qemu]# virsh edit vx_node3
Domain vx_node3 XML configuration edited.
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node1.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:45:15:d8'/>
      <source network='mynet100'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node2.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f3:f5:86'/>
      <source network='mynet101'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
[root@centos qemu]# cat /etc/libvirt/qemu/vx_node3.xml | awk '/<interface/,/interface>/'
    <interface type='network'>
      <mac address='52:54:00:f5:a0:20'/>
      <source network='mynet102'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
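Instead of running `virsh edit` three times interactively, the same rewrite can be scripted. A sketch (assumes the domains still point at the default `vagrant-libvirt` network; run as root on the KVM host):

```shell
# Dump each live domain definition, swap the source network with sed,
# and feed the result back to `virsh define`. Sketch only; requires root
# and the vx_node1..3 domains from this post.
rewire() {
  for i in 1 2 3; do
    virsh dumpxml "vx_node$i" \
      | sed "s/network='vagrant-libvirt'/network='mynet10$((i-1))'/" \
      | virsh define /dev/stdin
  done
}
# rewire   # uncomment to run on the KVM host
```

`virsh define` on a dumped-and-edited XML is equivalent to what `virsh edit` does behind the scenes, so the result should match the files shown above.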

Restarting the VMs


[root@centos qemu]# exit
[oracle@centos vx]$ seq 3 | xargs -I@ bash -c 'vagrant reload node@'
==> node1: Halting domain...
==> node1: Starting domain.
==> node1: Waiting for domain to get an IP address...
==> node1: Waiting for SSH to become available...
==> node1: Creating shared folders metadata...
==> node1: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node1: flag to force provisioning. Provisioners marked to run always will still run.
==> node2: Halting domain...
==> node2: Starting domain.
==> node2: Waiting for domain to get an IP address...
==> node2: Waiting for SSH to become available...
==> node2: Creating shared folders metadata...
==> node2: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node2: flag to force provisioning. Provisioners marked to run always will still run.
==> node3: Halting domain...
==> node3: Starting domain.
==> node3: Waiting for domain to get an IP address...
==> node3: Waiting for SSH to become available...
==> node3: Creating shared folders metadata...
==> node3: Rsyncing folder: /home/oracle/vx/ => /mnt
==> node3: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> node3: flag to force provisioning. Provisioners marked to run always will still run.

Checking the network after the change


[oracle@centos vx]$ brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024236fc45c6	no		
virbr0		8000.525400df588a	yes		virbr0-nic
virbr100		8000.5254007a263c	yes		virbr100-nic
							vnet1
virbr101		8000.525400262f1a	yes		virbr101-nic
							vnet2
virbr102		8000.5254008d1c94	yes		virbr102-nic
							vnet0
[oracle@centos vx]$ vagrant ssh-config
Host node1
  HostName 192.168.100.227
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node1/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node2
  HostName 192.168.101.213
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node2/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host node3
  HostName 192.168.102.60
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/oracle/vx/.vagrant/machines/node3/libvirt/private_key
  IdentitiesOnly yes
  LogLevel FATAL

Verifying SSH connectivity

Ping to the KVM host (192.168.1.109) succeeds, and the guests can reach external networks.


[oracle@centos vx]$ vagrant ssh node1
[vagrant@node1 ~]$ ip a show     
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:45:15:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.227/24 brd 192.168.100.255 scope global noprefixroute dynamic eth0
       valid_lft 3443sec preferred_lft 3443sec
    inet6 fe80::5054:ff:fe45:15d8/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.100.1)  0.419 ms  0.331 ms  0.271 ms
 2  192.168.1.1 (192.168.1.1)  1.700 ms  1.627 ms  1.575 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.594 ms  4.539 ms  5.109 ms
 4  210.139.125.169 (210.139.125.169)  5.012 ms  4.941 ms  4.878 ms
 5  210.165.249.177 (210.165.249.177)  5.741 ms  6.134 ms  5.941 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.639 ms  6.870 ms  6.854 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.023 ms  8.626 ms  8.590 ms
 8  72.14.205.32 (72.14.205.32)  8.417 ms  8.417 ms 72.14.202.229 (72.14.202.229)  8.540 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  8.865 ms  8.516 ms  8.829 ms
[vagrant@node1 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.081 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
[vagrant@node1 ~]$ logout
Connection to 192.168.100.227 closed.
[oracle@centos vx]$ vagrant ssh node2
[vagrant@node2 ~]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:f3:f5:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.213/24 brd 192.168.101.255 scope global noprefixroute dynamic eth0
       valid_lft 3414sec preferred_lft 3414sec
    inet6 fe80::5054:ff:fef3:f586/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node2 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.101.1)  0.280 ms  0.187 ms  0.126 ms
 2  192.168.1.1 (192.168.1.1)  0.706 ms  1.579 ms  1.507 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  4.730 ms  4.657 ms  4.599 ms
 4  210.139.125.169 (210.139.125.169)  5.292 ms  5.237 ms  5.158 ms
 5  210.165.249.177 (210.165.249.177)  6.043 ms  6.128 ms  6.098 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  7.172 ms  8.597 ms  8.483 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  8.446 ms  8.360 ms  6.768 ms
 8  72.14.202.229 (72.14.202.229)  6.738 ms  6.662 ms  6.588 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  8.291 ms  8.819 ms  8.203 ms
[vagrant@node2 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.289 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
[vagrant@node2 ~]$ logout
Connection to 192.168.101.213 closed.
[oracle@centos vx]$ vagrant ssh node3
[vagrant@node3 ~]$ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:f5:a0:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.60/24 brd 192.168.102.255 scope global noprefixroute dynamic eth0
       valid_lft 3391sec preferred_lft 3391sec
    inet6 fe80::5054:ff:fef5:a020/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@node3 ~]$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  gateway (192.168.102.1)  0.252 ms  0.144 ms  0.087 ms
 2  192.168.1.1 (192.168.1.1)  1.172 ms  1.084 ms  1.037 ms
 3  nas827.p-kanagawa.nttpc.ne.jp (210.153.251.235)  5.054 ms  4.986 ms  4.916 ms
 4  210.139.125.169 (210.139.125.169)  5.000 ms  4.943 ms  4.897 ms
 5  210.165.249.177 (210.165.249.177)  5.982 ms  5.384 ms  5.971 ms
 6  0-0-0-18.tky-no-acr01.sphere.ad.jp (210.153.241.89)  8.510 ms  6.312 ms  6.193 ms
 7  0-0-1-0--2025.tky-t4-bdr01.sphere.ad.jp (202.239.117.14)  7.042 ms  6.580 ms  6.543 ms
 8  72.14.202.229 (72.14.202.229)  6.268 ms 72.14.205.32 (72.14.205.32)  6.904 ms  7.049 ms
 9  * * *
10  google-public-dns-a.google.com (8.8.8.8)  7.454 ms  7.437 ms  7.424 ms
[vagrant@node3 ~]$ ping -c 1 192.168.1.109
PING 192.168.1.109 (192.168.1.109) 56(84) bytes of data.
64 bytes from 192.168.1.109: icmp_seq=1 ttl=64 time=0.100 ms

--- 192.168.1.109 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
[vagrant@node3 ~]$ logout
Connection to 192.168.102.60 closed.
[oracle@centos vx]$ 

Checking connectivity between segments

I assumed this would not work at all. So why are the results so inconsistent? Would static IPs make them consistent, or is VLAN tagging needed here?


[oracle@centos vx]$ vagrant ssh node1
Last login: Thu May 30 21:45:13 2019 from 192.168.100.1
[vagrant@node1 ~]$ ping -c 1 192.168.101.213
PING 192.168.101.213 (192.168.101.213) 56(84) bytes of data.
64 bytes from 192.168.101.213: icmp_seq=1 ttl=63 time=0.245 ms

--- 192.168.101.213 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
[vagrant@node1 ~]$ ping -c 1 192.168.102.60
PING 192.168.102.60 (192.168.102.60) 56(84) bytes of data.
64 bytes from 192.168.102.60: icmp_seq=1 ttl=63 time=0.240 ms

--- 192.168.102.60 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
[vagrant@node1 ~]$ logout
Connection to 192.168.100.227 closed.
[oracle@centos vx]$ vagrant ssh node2
Last login: Thu May 30 21:46:59 2019 from 192.168.101.1
[vagrant@node2 ~]$ ping -c 1 192.168.100.227
PING 192.168.100.227 (192.168.100.227) 56(84) bytes of data.
From 192.168.101.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.100.227 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node2 ~]$ ping -c 1 192.168.102.60
PING 192.168.102.60 (192.168.102.60) 56(84) bytes of data.
From 192.168.101.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.102.60 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node2 ~]$ logout
Connection to 192.168.101.213 closed.
[oracle@centos vx]$ vagrant ssh node3
Last login: Thu May 30 21:48:05 2019 from 192.168.102.1
[vagrant@node3 ~]$ ping -c 1 192.168.100.227
PING 192.168.100.227 (192.168.100.227) 56(84) bytes of data.
From 192.168.102.1 icmp_seq=1 Destination Port Unreachable

--- 192.168.100.227 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[vagrant@node3 ~]$ ping -c 1 192.168.101.213
PING 192.168.101.213 (192.168.101.213) 56(84) bytes of data.
64 bytes from 192.168.101.213: icmp_seq=1 ttl=63 time=0.561 ms

--- 192.168.101.213 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms
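The asymmetry above is most likely not random. For every NAT network, libvirt inserts a group of ACCEPT/REJECT rules into the host's FORWARD chain, and a packet crossing from one virbr bridge to another is accepted or rejected depending on which network's rule group it hits first; the REJECT rule is also what produces the "Destination Port Unreachable" replies from the gateway addresses. The rule order can be inspected on the KVM host (sketch; requires root):

```shell
# List the FORWARD rules libvirt added for the virbr* bridges.
# The bridge whose rule group appears first "wins" for cross-segment traffic:
# its source-subnet ACCEPT matches before the other bridge's REJECT.
sudo iptables -L FORWARD -v -n --line-numbers | grep -E 'virbr[0-9]+' || true
```

Under that reading, node1 could reach everyone because virbr100's rules came first, while node2's packets hit another network's REJECT before any ACCEPT applied to them.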

Afterword

Next, I want to try static IPs.
