rhinohost1:
# ifconfig -a
...
bge0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
        groupname savanna
        ether 0:3:ba:3c:9c:d3
bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
bge1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 4
        inet 192.168.1.102 netmask ffffff00 broadcast 192.168.1.255
        groupname savanna
        ether 0:3:ba:3c:9c:d4
Note the virtual interface bge0:1: this has the public, highly-available address. If there is a link failure on bge0 (for example, when the network cable is pulled out), the failure will be detected by the in.mpathd daemon, which will create the virtual interface bge1:1 and move the IP address across.
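For illustration, after such a failover the output of ifconfig -a would look roughly like the following. The flags values and exact output are indicative only; the important details are the FAILED flag on bge0 and the new bge1:1 interface now carrying 192.168.1.1:
rhinohost1:
# ifconfig -a
...
bge0: flags=...<BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
        inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
        groupname savanna
bge1:1: flags=...<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255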
We now have a basic active/standby group. Next we need to tune some settings to make it suitable for use in a Rhino cluster.
B.1.3 Tune failure detection time
The default time for IPMP to detect a link failure and perform a failover is 10 seconds. This is longer than Rhino SLEE's default 8-second timeout, so unnecessary node failures could occur while IPMP is still busy failing over.
To reduce the IPMP failure detection time, edit /etc/default/mpathd and change the line:
FAILURE_DETECTION_TIME=10000
to:
FAILURE_DETECTION_TIME=1000
A failure detection time of 1000ms usually works well.
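Before reloading the daemon, it can be worth confirming that the edit was saved; the output shown is simply the expected line:
rhinohost1:
# grep FAILURE_DETECTION_TIME /etc/default/mpathd
FAILURE_DETECTION_TIME=1000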
After editing the file, make the in.mpathd daemon reload its configuration:
# pkill -HUP in.mpathd
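The in.mpathd daemon re-reads /etc/default/mpathd when it receives SIGHUP, so no restart is needed. To check that the daemon is still running afterwards (the process ID shown is illustrative):
rhinohost1:
# pgrep -l in.mpathd
  208 in.mpathd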
B.1.4 Configure probe addresses
IPMP will dynamically select some remote IP addresses to use as probe addresses. These will be other hosts on the same
network, and IPMP frequently pings these addresses to help determine whether any link failures have occurred. If you snoop the
interfaces you will see many pings emanating from the test addresses on the two interfaces.
In this configuration with just two hosts on the network, it is likely that IPMP will pick the other host’s public address as a
probe address. This seems to work except in the case where both active interfaces fail at the same time. Because each host will
temporarily be unable to ping the other’s public address, IPMP will decide that all the interfaces have gone down and not permit
any traffic on either interface.
To get around this problem, IPMP needs to be forced to use specific IP addresses as probe addresses. There should be more
than one probe address, and these should be on separate hosts. For example, in a Rhino cluster, each node could use the test
addresses of one or more other nodes (or a quorum node or management server etc) as probe addresses.
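The route commands below refer to the other host's per-interface test addresses by name. These names must be resolvable on each host, for example with /etc/hosts entries such as the following (the addresses are illustrative and must match the test addresses actually configured on rhinohost2):
192.168.1.103   rhinohost2-bge0
192.168.1.104   rhinohost2-bge1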
To specify probe addresses, add static host routes to the routing table:
rhinohost1:
# route add rhinohost2-bge0 rhinohost2-bge0 -static
# route add rhinohost2-bge1 rhinohost2-bge1 -static
IPMP will automatically start using these addresses as probe addresses. Now, if both active interfaces fail, the standby interfaces
will still be able to ping some of their probe addresses, so IPMP can still fail over to the standby interface.
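To confirm that in.mpathd has switched to the configured probe targets, the probe traffic can be observed directly on one of the interfaces (interface name as in the earlier example); the ICMP echo requests should now be addressed to the configured probe addresses:
rhinohost1:
# snoop -d bge0 icmp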