SC3.0 Training

- After installing SC 3.0 + patches:



-- Create the quorum device.

-- NTP time server.

-- Booting without the cluster.

-- Create the Apache disk group.

-- Create the NAFO group.

-- Install Apache (agent).

 



Create the quorum device

root@defx0wf0 # scsetup
>>> Initial Cluster Setup <<<

    This program has detected that the cluster "installmode" attribute is
    still enabled. As such, certain initial cluster setup steps will be
    performed at this time. This includes adding any necessary quorum
    devices, then resetting both the quorum vote counts and the
    "installmode" property.

    Please do not proceed if any additional nodes have yet to join the
    cluster.

    Is it okay to continue (yes/no) [yes]?   

    Do you want to add any quorum disks (yes/no) [yes]?  

    Dual-ported SCSI-2 disks may be used as quorum devices in two-node
    clusters. However, clusters with more than two nodes require that
    SCSI-3 PGR disks be used for all disks with more than two
    node-to-disk paths. You can use a disk containing user data or one
    that is a member of a device group as a quorum device.

    Each quorum disk must be connected to at least two nodes. Please
    refer to the Sun Cluster documentation for more information on
    supported quorum device topologies.

    Which global device do you want to use (d<N>)?  d6

    Is it okay to proceed with the update (yes/no) [yes]?  

scconf -a -q globaldev=d6

    Command completed successfully.

    
Hit ENTER to continue:  

    Do you want to add another quorum disk (yes/no)?  no
    Once the "installmode" property has been reset, this program will
    skip "Initial Cluster Setup" each time it is run again in the future.
    However, quorum devices can always be added to the cluster using the
    regular menu options. Resetting this property fully activates quorum
    settings and is necessary for the normal and safe operation of the
    cluster.

    Is it okay to reset "installmode" (yes/no) [yes]?  yes

scconf -c -q reset
scconf -a -T node=.

    Cluster initialization is complete.

 

 

- Verify that three quorum votes are possible and present (yes; before this it was 1/1/1)

root@defx0wf0 # scstat -q| more

-- Quorum Summary --

  Quorum votes possible:      3
  Quorum votes needed:        2
  Quorum votes present:       3

-- Quorum Votes by Node --

                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       defx0wf0            1        1       Online
  Node votes:       defx0wf1            1        1       Online

-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d6s2  1        1       Online
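
The arithmetic behind this summary: each node contributes one vote, and a quorum device contributes one vote less than the number of nodes attached to it (here 2 - 1 = 1), so 2 + 1 = 3 votes are possible. The majority needed is floor(3/2) + 1 = 2, so a single node can keep the cluster running as long as it can also reserve d6.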

 


 

- NTP time server
During installation of the cluster software, /etc/inet/ntp.conf.cluster is created by default.
Grepping for xntpd shows that it takes its time settings from
/etc/inet/ntp.conf.cluster:

root@defx0wf1 # ps -ef|grep xntp
    root   209     1  0 12:35:12 ?        0:00 /usr/lib/inet/xntpd -c /etc/inet/ntp.conf.cluster
    root   507   487  0 11:14:37 pts/1    0:00 grep xntp

 

1.) Rename /etc/inet/ntp.conf.cluster to /etc/inet/ntp.conf.
2.) I commented out the local kernel clock 127.127.1.0 and entered our time servers instead:

#server 127.127.1.0
server 129.4.4.44
server 129.5.5.55

 

3.) Removed all clusternode entries from node 3 onward (recommended on p. 5-33 of the cluster book), so what remains is:

peer clusternode1-priv prefer
peer clusternode2-priv
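
Taken together, the relevant part of the resulting /etc/inet/ntp.conf looks like this (a sketch covering only the lines discussed above; the rest of the file stays as delivered):

#server 127.127.1.0
server 129.4.4.44
server 129.5.5.55
peer clusternode1-priv prefer
peer clusternode2-priv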

 

4.) Stopped the running xntpd process on both nodes:
root@defx0wf1 # ps -ef|grep xntp
    root   209     1  0 12:35:12 ?        0:00 /usr/lib/inet/xntpd -c /etc/inet/ntp.conf.cluster
    root   507   487  0 11:14:37 pts/1    0:00 grep xntp

root@defx0wf1 # /usr/bin/pkill -x -u 0 '(ntpdate|xntpd)'
root@defx0wf1 # ps -ef|grep xn
    root   511   487  0 11:23:07 pts/1    0:00 grep xn

 

5.) Restarted xntpd on both nodes:
root@defx0wf1 # /etc/rc2.d/S74xntpd start

You can see that the settings are no longer taken from /etc/inet/ntp.conf.cluster:
root@defx0wf1 # ps -ef|grep xntp
    root   521   487  0 10:24:59 pts/1    0:00 grep xntp
    root   519     1  0 10:24:55 ?        0:00 /usr/lib/inet/xntpd

 

6.) Running date on both nodes in quick succession returns the same time, one second apart
    -> the time is fetched from the time server.

root@defx0wf0 # date  
Tue Sep 17 10:47:22 WEST 2002
root@defx0wf1 # date
Tue Sep 17 10:47:23 WEST 2002
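
As an additional check (not part of the original session), xntpd can be queried directly, assuming the standard Solaris 8 NTP utilities are installed:

# Show the daemon's peer table; the server marked with '*' is the one
# currently selected as synchronization source.
/usr/sbin/ntpq -p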

 


 

- Booting a node without the cluster works.

{0} ok boot -x
Resetting ...

Can't open input device.
screen not found.
Can't open input device.
Keyboard not present.  Using ttya for input and output.
Keyboard not present.  Using ttya for input and output.
Sun (TM) Enterprise 250 (2 X UltraSPARC-II 400MHz), No Keyboard
OpenBoot 3.22, 1024 MB memory installed, Serial #50521704.
Ethernet address 0:3:ca:2:e6:88, Host ID: 8362e688.

Rebooting with command: boot -x                                       
Boot device: /pci@1f,4000/scsi@3/disk@0,0  File and args: -x
SunOS Release 5.8 Version Generic_108528-15 64-bit
Copyright 1983-2001 Sun Microsystems, Inc.  All rights reserved.
/pci@1f,4000/pci@4/SUNW,isptwo@4 (isp0):
        initiator SCSI ID now 6
/pci@1f,4000/pci@5/SUNW,isptwo@4 (isp1):
        initiator SCSI ID now 6
configuring IPv4 interfaces: hme0.
configuring IPv6 interfaces: hme0.
Hostname: defx0wf1
Not booting as part of a cluster
The system is coming up.  Please wait.
checking ufs filesystems
/dev/rdsk/c0t0d0s6: is clean.
Starting IPv6 neighbor discovery.
Setting default IPv6 interface for multicast: add net ff00::/8: gateway fe77::233:b4ff:fr02:e688
starting rpc services: rpcbind done.
Setting netmask of hme0 to 255.255.255.0
Setting default IPv4 interface for multicast: add net 224.0/4: gateway defx0wf1
syslog service starting.
Print services started.
Sep 17 12:52:28 defx0wf1 sendmail[235]: My unqualified host name (defx0wf1) unknown; sleeping for retry
volume management starting.
Sep 17 12:52:31 defx0wf1 xntpd[259]: couldn't resolve `clusternode1-priv', giving up on it
Sep 17 12:52:31 defx0wf1 xntpd[259]: couldn't resolve `clusternode2-priv', giving up on it
The system is ready.

 

 

- The global devices are missing (boot -x):

root@defx0wf1 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    15121031  892572 14077249     6%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/dsk/c0t0d0s5    4032504   81414 3910765     3%    /var
swap                 2785104      16 2785088     1%    /var/run
swap                 2785104      16 2785088     1%    /tmp
/dev/dsk/c0t0d0s6    4032504  170426 3821753     5%    /opt

 

- scstat on the node that is not in the cluster shows the following:
root@defx0wf1 # scstat
scstat:  not a cluster member.

 

- scstat on the still-active cluster node shows who is online/offline:

root@defx0wf0 # scstat        

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     defx0wf0            Online
  Cluster node:     defx0wf1            Offline

 

- To rejoin the node to the cluster: boot at the ok prompt, or init 6 on the console.
root@defx0wf1 # scstat

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     defx0wf0            Online
  Cluster node:     defx0wf1            Online

 

- Stop the cluster completely on both sides with scshutdown -y -g 30:

root@defx0wf0 # scshutdown -y -g 30
Broadcast Message from root (???) on defx0wf0 Tue Sep 17 13:18:41...
 The cluster testcluster will be shutdown in  30 seconds

 

Node 1 and node 2 are then at the ok prompt.

- Add the second node to the cluster with 'boot':
{1} ok boot

 

- After the node has booted:
root@defx0wf1 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    15121031  892572 14077249     6%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/dsk/c0t0d0s5    4032504   81488 3910691     3%    /var
swap                 2684976     128 2684848     1%    /var/run
swap                 2684872      24 2684848     1%    /tmp
/dev/dsk/c0t0d0s6    4032504  170426 3821753     5%    /opt
/dev/did/dsk/d12s7    192790    3653  169858     3%    /global/.devices/node@2

and

root@defx0wf1 # scstat

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     defx0wf0            Offline
  Cluster node:     defx0wf1            Online

 

- Now bring the first node back up with 'boot'.
Result:

root@defx0wf0 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    15121031  892610 14077211     6%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/dsk/c0t0d0s5    4032504   81661 3910518     3%    /var
swap                 2643888     136 2643752     1%    /var/run
swap                 2643776      24 2643752     1%    /tmp
/dev/dsk/c0t0d0s6    4032504  170426 3821753     5%    /opt
/dev/did/dsk/d12s7    192790    3653  169858     3%    /global/.devices/node@2
/dev/did/dsk/d1s7     192790    3678  169833     3%    /global/.devices/node@1

and

root@defx0wf0 # scstat

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     defx0wf0            Online
  Cluster node:     defx0wf1            Online

 


 

1.) Apache disk group
2.) Initialize an apache disk group and add the desired disks

- Create apachedg with the first disk:
root@defx0wf0 # vxdg init apachedg apache_disk1=c1t9d0

- Add the remaining disks to apachedg:
root@defx0wf0 # vxdg -g apachedg adddisk apache_disk2=c1t10d0
root@defx0wf0 # vxdg -g apachedg adddisk apache_disk3=c1t11d0
root@defx0wf0 # vxdg -g apachedg adddisk apache_disk4=c1t12d0

 

Verify:
root@defx0wf0 # vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk_1   rootdg       online
c0t8d0s2     sliced    disk01       rootdg       online nohotuse
c1t9d0s2     sliced    apache_disk1  apachedg     online
c1t10d0s2    sliced    apache_disk2  apachedg     online
c1t11d0s2    sliced    apache_disk3  apachedg     online
c1t12d0s2    sliced    apache_disk4  apachedg     online
c2t9d0s2     sliced    -            -            error
c2t10d0s2    sliced    -            -            error
c2t11d0s2    sliced    -            -            error
c2t12d0s2    sliced    -            -            error

 

3.) Register the VxVM disk group as a device group in the cluster:

root@defx0wf0 # scsetup

==>

*** Main Menu ***

    Please select from one of the following options:

        1) Quorum
        2) Resource groups
        3) Cluster interconnect
        4) Device groups and volumes
        5) Private hostnames
        6) New nodes
        7) Other cluster properties

        ?) Help with menu options
        q) Quit

    Option:  4

==>

*** Device Groups Menu ***

    Please select from one of the following options:

        1) Register a VxVM disk group as a device group
        2) Synchronize volume information for a VxVM device group
        3) Unregister a VxVM device group
        4) Add a node to a VxVM device group
        5) Remove a node from a VxVM device group
        6) Change key properties of a device group

        ?) Help
        q) Return to the Main Menu

    Option:  1

==>

 >>> Register a VxVM Disk Group as a Device Group <<<

    VERITAS Volume Manager disk groups are always managed by the cluster
    as cluster device groups. This option is used to register a VxVM disk
    group with the cluster as a cluster device group.

    Is it okay to continue (yes/no) [yes]?  yes

    Name of the VxVM disk group you want to register?  apachedg

    Primary ownership of a device group is determined by either
    specifying or not specifying a preferred ordering of the nodes that
    can own the device group. If an order is specified, this will be the
    order in which nodes will attempt to establish ownership. If an order
    is not specified, the first node that attempts to access a disk in
    the device group becomes the owner.

    Do you want to configure a preferred ordering (yes/no) [yes]?  no

Are both nodes attached to all disks in this group (yes/no) [yes]?  yes

    Is it okay to proceed with the update (yes/no) [yes]?  yes

scconf -a -D type=vxvm,name=apachedg,nodelist=defx0wf0:defx0wf1

    Command completed successfully.

    
Hit ENTER to continue:

  *** Device Groups Menu ***

    Please select from one of the following options:

        1) Register a VxVM disk group as a device group
        2) Synchronize volume information for a VxVM device group
        3) Unregister a VxVM device group
        4) Add a node to a VxVM device group
        5) Remove a node from a VxVM device group
        6) Change key properties of a device group

        ?) Help
        q) Return to the Main Menu

    Option:  q

 

4.) Create volumes in apachedg

- Create apachevol1 (500m) on apache_disk1:
root@defx0wf0 # /usr/sbin/vxassist -g apachedg -U fsgen make apachevol1 500m layout=nostripe,nolog apache_disk1 

- Create apachevol2 (1500m) on apache_disk1:
root@defx0wf0 # /usr/sbin/vxassist -g apachedg -U fsgen make apachevol2 1500m layout=nostripe,nolog apache_disk1

- Create apachevol3 (8000m) striped across apache_disk2 and apache_disk3:
root@defx0wf0 # /usr/sbin/vxassist -g apachedg -U fsgen make apachevol3 8000m layout=striped,nolog apache_disk2 apache_disk3
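
To verify the new volumes (a check that is not part of the original session), vxprint can list all records of the disk group:

# Show the record hierarchy (dg, dm, v, pl, sd) of apachedg,
# including the three volumes just created.
vxprint -g apachedg -ht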

 

5.) Add another disk to the disk group apachedg

Status:
root@defx0wf0 # vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk_1   rootdg       online
c0t8d0s2     sliced    disk01       rootdg       online nohotuse
c1t9d0s2     sliced    apache_disk1  apachedg     online
c1t10d0s2    sliced    apache_disk2  apachedg     online
c1t11d0s2    sliced    apache_disk3  apachedg     online
c1t12d0s2    sliced    apache_disk4  apachedg     online
c2t9d0s2     sliced    -            -            error
c2t10d0s2    sliced    -            -            error
c2t11d0s2    sliced    -            -            error
c2t12d0s2    sliced    -            -            error

 

5a) Initialize the disk:
root@defx0wf0 # /usr/lib/vxvm/bin/vxdisksetup -i c2t9d0

5b) Add the disk to apachedg:
root@defx0wf0 # vxdg -g apachedg adddisk apache_mirror=c2t9d0

 

Verify:
root@defx0wf0 # vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk_1   rootdg       online
c0t8d0s2     sliced    disk01       rootdg       online nohotuse
c1t9d0s2     sliced    apache_disk1  apachedg     online
c1t10d0s2    sliced    apache_disk2  apachedg     online
c1t11d0s2    sliced    apache_disk3  apachedg     online
c1t12d0s2    sliced    apache_disk4  apachedg     online
c2t9d0s2     sliced    apache_mirror  apachedg     online
c2t10d0s2    sliced    -            -            error
c2t11d0s2    sliced    -            -            error
c2t12d0s2    sliced    -            -            error

 

6.) Synchronize the nodes

root@defx0wf0 # scsetup

==>
*** Main Menu ***

    Please select from one of the following options:

        1) Quorum
        2) Resource groups
        3) Cluster interconnect
        4) Device groups and volumes
        5) Private hostnames
        6) New nodes
        7) Other cluster properties

        ?) Help with menu options
        q) Quit

    Option:  4

==>
*** Device Groups Menu ***

    Please select from one of the following options:

        1) Register a VxVM disk group as a device group
        2) Synchronize volume information for a VxVM device group
        3) Unregister a VxVM device group
        4) Add a node to a VxVM device group
        5) Remove a node from a VxVM device group
        6) Change key properties of a device group

        ?) Help
        q) Return to the Main Menu

    Option:  2

==>

>>> Synchronize Volume Information for a VxVM Device Group <<<

    VERITAS Volume Manager disk groups are always managed by the cluster
    as cluster device groups. This option is used to synchronize volume
    information for a VxVM device group between the VxVM software and the
    clustering software. It should be selected anytime a volume is either
    added to or removed from a VxVM disk group. Otherwise, the cluster
    will be unaware of the changes.

    Is it okay to continue (yes/no) [yes]?  yes

    Name of the VxVM device group you want to synchronize?  apachedg

    Is it okay to proceed with the update (yes/no) [yes]?  yes

scconf -c -D name=apachedg,sync

    Command completed successfully.

    
Hit ENTER to continue:

==>
 *** Device Groups Menu ***

    Please select from one of the following options:

        1) Register a VxVM disk group as a device group
        2) Synchronize volume information for a VxVM device group
        3) Unregister a VxVM device group
        4) Add a node to a VxVM device group
        5) Remove a node from a VxVM device group
        6) Change key properties of a device group

        ?) Help
        q) Return to the Main Menu

    Option:  q

 

7.) Create file systems on the volumes

For reference:
root@defx0wf0 # vxdg list
NAME         STATE           ID
rootdg       enabled  1032344665.1025.defx0wf0
apachedg     enabled  1032427498.1194.defx0wf0
root@defx0wf0 # vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk_1   rootdg       online
c0t8d0s2     sliced    disk01       rootdg       online nohotuse
c1t9d0s2     sliced    apache_disk1  apachedg     online
c1t10d0s2    sliced    apache_disk2  apachedg     online
c1t11d0s2    sliced    apache_disk3  apachedg     online
c1t12d0s2    sliced    apache_disk4  apachedg     online
c2t9d0s2     sliced    apache_mirror  apachedg     online
c2t10d0s2    sliced    -            -            error
c2t11d0s2    sliced    -            -            error
c2t12d0s2    sliced    -            -            error

 

7a) Create the file systems:

root@defx0wf0 # newfs /dev/vx/rdsk/apachedg/apachevol1

newfs: /dev/vx/rdsk/apachedg/apachevol1 last mounted as /global/db_bin
newfs: construct a new file system /dev/vx/rdsk/apachedg/apachevol1: (y/n)? y
/dev/vx/rdsk/apachedg/apachevol1:       1024000 sectors in 500 cylinders of 32 tracks, 64 sectors
        500.0MB in 32 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)

root@defx0wf0 # newfs /dev/vx/rdsk/apachedg/apachevol2
newfs: /dev/vx/rdsk/apachedg/apachevol2 last mounted as /global/db_data1
newfs: construct a new file system /dev/vx/rdsk/apachedg/apachevol2: (y/n)? y
/dev/vx/rdsk/apachedg/apachevol2:       3072000 sectors in 1500 cylinders of 32 tracks, 64 sectors
        1500.0MB in 47 cyl groups (32 c/g, 32.00MB/g, 7936 i/g)

root@defx0wf0 # newfs /dev/vx/rdsk/apachedg/apachevol3
newfs: /dev/vx/rdsk/apachedg/apachevol3 last mounted as /global/db_data2
newfs: construct a new file system /dev/vx/rdsk/apachedg/apachevol3: (y/n)? y
/dev/vx/rdsk/apachedg/apachevol3:       16384000 sectors in 8000 cylinders of 32 tracks, 64 sectors
        8000.0MB in 164 cyl groups (49 c/g, 49.00MB/g, 6144 i/g)

 

8.) Create the mount points under /global:
root@defx0wf0 # cd /global
root@defx0wf0 # ls -l
drwxr-xr-x  66 root     sys         1536 Sep 12 14:41 .devices

 

- These directories must be created on both sides, otherwise they will not be mounted:
root@defx0wf0 # mkdir apache_bin
root@defx0wf0 # mkdir apache_data1                  
root@defx0wf0 # mkdir apache_data2

 

Added to /etc/vfstab on both sides:
/dev/vx/dsk/apachedg/apachevol1 /dev/vx/rdsk/apachedg/apachevol1        /global/apache_bin      ufs     2       yes     global,logging
/dev/vx/dsk/apachedg/apachevol2 /dev/vx/rdsk/apachedg/apachevol2        /global/apache_data1    ufs     2       yes     global,logging
/dev/vx/dsk/apachedg/apachevol3 /dev/vx/rdsk/apachedg/apachevol3        /global/apache_data2    ufs     2       yes     global,logging

Then it worked. (The problem mounting /global/apache_data2 was presumably due to the striping...)
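
To mount the new file systems without rebooting, the vfstab entries can be mounted by mount point (a sketch; since these are global mounts, a mount issued on one node shows up on both):

mount /global/apache_bin
mount /global/apache_data1
mount /global/apache_data2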

 

 


 

10.) NAFO groups
(Network Adapter Failover: two network adapters (e.g. hme1 and hme0); if one fails, the IP address moves to the other adapter.)
At least two network cards must be configured into a NAFO group.
It is important that the secondary cards have no IP address. The best approach is to deconfigure those cards with unplumb, as sketched below.
The pnmset command configures the secondary adapters automatically.
If the primary adapter fails, a secondary adapter takes over the primary adapter's IP address.

See also page 9-7
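
A sketch of how a secondary adapter can be cleared before running pnmset (assuming qfe0 is the secondary card, as in the transcript below):

# Remove any configured IP address from the secondary adapter ...
ifconfig qfe0 unplumb
# ... and make sure it does not get one again at boot (only if such a file exists).
rm /etc/hostname.qfe0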

root@defx0wf0 # pnmset
In the following, you will be prompted to do
configuration for network adapter failover

Do you want to continue ... [y/n]: y

How many NAFO groups to configure [1]: 1

Enter NAFO group number [0]: 0
Enter space-separted list of adapters in nafo0: hme0,qfe0

Checking configuration of nafo0:
Error: no /etc/hostname.<adp> found for any adapter in nafo0
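
(The first attempt fails because the adapter list was entered comma-separated; pnmset apparently treats "hme0,qfe0" as a single adapter name and finds no matching /etc/hostname.<adp> file for it. The second attempt below uses a space-separated list.)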

root@defx0wf0 # pnmset
In the following, you will be prompted to do
configuration for network adapter failover

Do you want to continue ... [y/n]: y

How many NAFO groups to configure [1]: 1

Enter NAFO group number [0]: 0
Enter space-separted list of adapters in nafo0: hme0 qfe0

Checking configuration of nafo0:
Testing active adapter hme0...
Testing adapter qfe0...

NAFO configuration completed

 

root@defx0wf0 # pnmset -p
current configuration is:

nafo0 hme0 qfe0

root@defx0wf0 # pnmstat -l
group   adapters        status  fo_time act_adp
nafo0   hme0:qfe0       OK      NEVER   hme0

 

- Prints the primary adapter of a NAFO group:
root@defx0wf0 # pnmptor nafo0
hme0

 

- Prints the NAFO group an adapter belongs to:
root@defx0wf0 # pnmrtop hme0     
nafo0

 

- The same on the other node:

root@defx0wf1 # pnmset
In the following, you will be prompted to do
configuration for network adapter failover

Do you want to continue ... [y/n]: y

How many NAFO groups to configure [1]: 1

Enter NAFO group number [0]: 0
Enter space-separted list of adapters in nafo0: hme0 qfe0

Checking configuration of nafo0:
Testing active adapter hme0...
Testing adapter qfe0...

NAFO configuration completed

 

root@defx0wf1 # pnmstat -l
group   adapters        status  fo_time act_adp
nafo0   hme0:qfe0       OK      NEVER   hme0

 

 


 

Install Apache (agent)

1.) Check whether the required packages are present:

root@defx0wf0 # pkginfo | grep Apache
system      SUNWapchd      Apache Web Server Documentation
system      SUNWapchr      Apache Web Server (root)
system      SUNWapchu      Apache Web Server (usr)
application SUNWscva       Apache SSL Components

 

The required packages are present (see also SC3: pp. 11-18 ff.).

If these packages were missing, they would have to be installed from the Solaris 8 CD accordingly, i.e.:

pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWapchr SUNWapchu SUNWapchd

 

2.) Check whether /etc/nsswitch.conf contains the following entry (hosts: cluster files). Yes, it did:

root@defx0wf0 # more /etc/nsswitch.conf
#
# /etc/nsswitch.files:
#
# An example file that could be copied over to /etc/nsswitch.conf; it
# does not use any naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.

passwd:     files
group:      files
#hosts:      files
hosts:      cluster files
ipnodes:    files
networks:   files
protocols:  files
rpc:        files
ethers:     files
etc.

 

3.) Rename the Apache start scripts so that they can no longer be executed:

root@defx0wf0 # ls -la /etc/rc?.d/*apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc0.d/K16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc1.d/K16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc2.d/K16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc3.d/S50apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rcS.d/K16apache
root@defx0wf0 # mv /etc/rc0.d/K16apache /etc/rc0.d/k16apache

etc.
Rename the other scripts the same way, so that it looks like this:

root@defx0wf0 # ls -la /etc/rc?.d/*apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc0.d/k16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc1.d/k16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc2.d/k16apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rc3.d/s50apache
-rwxr--r--   6 root     sys          572 Jan  6  2000 /etc/rcS.d/k16apache

- The same on defx0wf1.
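
The renames can also be done in one Bourne shell loop per node (a sketch, equivalent to the mv commands above):

for f in /etc/rc0.d/K16apache /etc/rc1.d/K16apache /etc/rc2.d/K16apache \
         /etc/rc3.d/S50apache /etc/rcS.d/K16apache
do
        # lower-case the K/S prefix so init no longer runs the script
        mv $f `dirname $f`/`basename $f | tr 'KS' 'ks'`
done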

root@defx0wf0 # pwd
/etc/apache
root@defx0wf0 # ls -la
total 186
drwxr-xr-x   2 root     bin          512 Sep 12 12:02 .
drwxr-xr-x  45 root     sys         3584 Sep 18 16:36 ..
-rw-r--r--   1 root     bin          285 Sep  5 15:12 access.conf
-rw-r--r--   1 root     bin        37164 Sep  9 23:44 httpd.conf-example
-rw-r--r--   1 root     bin         6376 Sep  5 15:12 jserv.conf
-rw-r--r--   1 root     bin        13137 Sep  5 15:12 jserv.properties
-rw-r--r--   1 root     bin        12441 Sep  5 15:12 magic
-rw-r--r--   1 root     bin         9957 Sep  5 15:12 mime.types
-rw-r--r--   1 root     bin          297 Sep  5 15:12 srm.conf
-rw-r--r--   1 root     bin         5934 Sep  5 15:12 zone.properties

 

- Copy /etc/apache/httpd.conf-example to /etc/apache/httpd.conf and enter the logical cluster name (defx0yf0):
root@defx0wf0 # cp /etc/apache/httpd.conf-example /etc/apache/httpd.conf

#ServerName new.host.name
ServerName defx0yf0

 

Change the line DocumentRoot "/var/apache/htdocs" to DocumentRoot "/global/apache_data1/htdocs",
<Directory "/var/apache/htdocs"> to <Directory "/global/apache_data1/htdocs">,
ScriptAlias /cgi-bin/ "/var/apache/cgi-bin/" to ScriptAlias /cgi-bin/ "/global/apache_data1/cgi-bin/", and
<Directory "/var/apache/cgi-bin"> to <Directory "/global/apache_data1/cgi-bin">.
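
The same edits as a sed one-liner (a sketch; it assumes the stock httpd.conf-example and the paths chosen above):

cd /etc/apache
sed -e 's|^#ServerName new.host.name|ServerName defx0yf0|' \
    -e 's|/var/apache/htdocs|/global/apache_data1/htdocs|g' \
    -e 's|/var/apache/cgi-bin|/global/apache_data1/cgi-bin|g' \
    httpd.conf-example > httpd.conf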

and

added 130.73.1.178 defx0yf0 (the logical cluster name) to /etc/hosts on both nodes.

 

5.) Copy the htdocs and cgi-bin directories to /global/apache_data1
!!! Perform this step on one node only.

root@defx0wf0 # cp -rp /var/apache/htdocs   /global/apache_data1
root@defx0wf0 # cp -rp /var/apache/cgi-bin  /global/apache_data1

Start:
root@defx0wf0 # pwd
/usr/apache/bin

root@defx0wf0 # ./apachectl start
[Fri Sep 20 12:27:04 2002] [warn] module jserv_module is already loaded, skipping
./apachectl start: httpd started

root@defx0wf0 # ps -ef|grep httpd
    root   722     1  1 12:27:05 ?        0:00 /usr/apache/bin/httpd
  nobody   730   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd
    root   744   431  0 12:27:14 console  0:00 grep httpd
  nobody   723   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd
  nobody   728   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd
  nobody   724   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd
  nobody   727   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd
  nobody   732   722  0 12:27:06 ?        0:00 /usr/apache/bin/httpd

 

Test: start netscape on the machine defx0wf0 (i.e. netscape &) and open
http://130.73.1.183/ or http://defx0wf0

 

10.) Registering and Configuring the Data Service

10a) Is the cluster active?

root@defx0wf0 # scstat -p|pg

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     defx0wf0            Online
  Cluster node:     defx0wf1            Online

-- Cluster Transport Paths --

                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   defx0wf0:qfe2       defx0wf1:qfe2       Path online
  Transport path:   defx0wf0:qfe1       defx0wf1:qfe1       Path online

 

Are the global file systems mounted?

root@defx0wf0 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol  15121031  949103 14020718     7%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
/dev/vx/dsk/var      4032504   81993 3910186     3%    /var
swap                 2632880     144 2632736     1%    /var/run
swap                 2632824      88 2632736     1%    /tmp
/dev/vx/dsk/opt      4032504  290620 3701559     8%    /opt
/dev/vx/dsk/rootdisk_17vol
                      192790    3685  169826     3%    /global/.devices/node@1
/dev/vx/dsk/rootdisk_27vol
                      192790    3658  169853     3%    /global/.devices/node@2
/dev/vx/dsk/apachedg/apachevol1
                      480751    1042  431634     1%    /global/apache_bin
/dev/vx/dsk/apachedg/apachevol2
                     1488607    1600 1427463     1%    /global/apache_data1

 

Install the SUNWscapc package (from the Sun Cluster Agents CD-ROM).

How:

Installing Sun Cluster HA for Apache Packages
You can use the scinstall(1M) utility to install SUNWscapc, the Sun Cluster HA for Apache package, on a cluster. Do not use the -s option to noninteractive scinstall to install all of the data service packages.
If you installed the data service packages during your initial Sun Cluster installation, proceed to "Registering and Configuring Sun Cluster HA for Apache" <http://docs.sun.com/db/doc/816-2024/6m8dd6iho?a=view>. Otherwise, use the following procedure to install the SUNWscapc package now.

How to Install Sun Cluster HA for Apache Packages

You need the Sun Cluster 3.0 Agents 12/01 CD-ROM to complete this procedure. Perform this procedure on all of the cluster members that can master Sun Cluster HA for Apache.

Load the Sun Cluster 3.0 Agents 12/01 CD-ROM into the CD-ROM drive.

Run the scinstall utility with no options.
This step starts the scinstall utility in interactive mode.

Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.

Provide the path to the Sun Cluster 3.0 Agents 12/01 CD-ROM.
The utility refers to the CD as the "data services cd."

Specify the data service to install.
The scinstall utility lists the data service that you selected and asks you to confirm your choice.

Exit the scinstall utility.

Unload the CD from the drive.

 

130.73.33.96 was Otto's workstation; I inserted the CD there and then mounted it on defx0wf0. dfshares lists the resources on the remote machine that were shared (e.g. with shareall).

root@defx0wf0 # dfshares 130.73.33.96
RESOURCE                                  SERVER        ACCESS    TRANSPORT
130.73.33.96:/export/home             130.73.33.96         -         -
130.73.33.96:/cdrom/scdataservices_3_0_u3 130.73.33.96     -         -

mount 130.73.33.96:/cdrom/scdataservices_3_0_u3 /mnt

 *** Main Menu ***

    Please select from one of the following (*) options:

        1) Establish a new cluster using this machine as the first node
        2) Add this machine as a node in an established cluster
        3) Configure a cluster to be JumpStarted from this install server
      * 4) Add support for new data services to this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  4

 *** Adding Data Service Software ***

    This option is used to add support for data services to a node
    already configured as a Sun Cluster cluster node.

    You will be asked to supply both the location of the media and a list
    of identifiers for the data services you want to install.

    Where is the data services CD?  Oct  8 14:16:40 defx0wf0 login: ROOT LOGIN /dev/pts/4 FROM 129.52.13.66

    Where is the data services CD?  /mnt

    This is the list of available data services qualified with the latest
    SunCluster release:

        Identifier    Description

        apache        Sun Cluster - Highly Available Apache
        bv            Sun Cluster - Highly Available Broadvision
        dns           Sun Cluster - Highly Available Domain Name Server
        iws           Sun Cluster - Highly Available iPlanet Web Server
        netbackup     Sun Cluster - Highly Available NetBackup Master Server
        nsldap        Sun Cluster - Highly Available Netscape Directory Server
        nfs           Sun Cluster - Highly Available NFS Server
        oracle        Sun Cluster - Highly Available Oracle DBMS
        sap           Sun Cluster - Highly Available SAP R/3
        sybase        Sun Cluster - Highly Available Sybase DBMS

        Other         Additional dataservices (non-qualified)

    Please list all of the data services you want to add. List one data
    service identifier per line. When finished, type Control-D:

    Data service identifier (Ctrl-D to finish):  apache
    Data service identifier (Ctrl-D to finish):  ^D

    This is the complete list of data services:

        apache

    Is it correct (yes/no) [yes]?  yes

    Is it okay to add the software for this data service [yes]  yes

scinstall -ik -s apache -d /mnt

** Installing Sun Cluster - Highly Available Apache **
        SUNWscapc...done

ERROR: information for "SUNW.apache" was not found

 

root@defx0wf0 # pkginfo -l SUNWscapc
   PKGINST:  SUNWscapc
      NAME:  Sun Cluster Apache Web Server Component
  CATEGORY:  application
      ARCH:  sparc
   VERSION:  3.0.0,REV=2000.10.01.01.00
   BASEDIR:  /opt
    VENDOR:  Sun Microsystems, Inc.
      DESC:  Sun Cluster Apache web server data service
    PSTAMP:  sc30-patch20020214114005
  INSTDATE:  Oct 08 2002 14:20
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:       13 installed pathnames
                   3 directories
                  10 executables
                1496 blocks used (approx)

 

 

pp. 11-43 and 11-25
Step 2: Register the resource type for the Apache data service:

root@defx0wf0 # scrgadm -a -t SUNW.apache

 

Step 3: Create a failover resource group for the shared address resource (sa-rg):
root@defx0wf0 # scrgadm -a -g sa-rg -h defx0wf0,defx0wf1

 

Step 4: Add the shared address (the logical hostname defx0yf0) to the resource group:
root@defx0wf0 # scrgadm -a -S -g sa-rg -l defx0yf0
defx0yf0: resource exists; cannot create
(the resource already existed..., so this step was not necessary)

 

Step 5: Create the scalable resource group apache-rg, which runs on all nodes of the cluster:
root@defx0wf0 # scrgadm -a -g apache-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg

root@defx0wf0 # scrgadm -a -j apache-res -g apache-rg -t SUNW.apache -x Confdir_list=/etc/apache -x Bin_dir=/usr/apache/bin -y Scalable=TRUE -y Network_Resources_Used=defx0yf0
VALIDATE method failed -- check syslog for error messages

==> this error probably occurs because the nfs-rg had already been created on defx0yf0

 

root@defx0wf0 # scstat -p|pg
-- Resource Groups and Resources --

            Group Name          Resources
            ----------          ---------
 Resources: nfs-rg              defx0yf0 nfs-rs
 Resources: sa-rg               -
 Resources: apache-rg           -

-- Resource Groups --

            Group Name          Node Name           State
            ----------          ---------           -----
     Group: nfs-rg              defx0wf0            Online
     Group: nfs-rg              defx0wf1            Offline

     Group: sa-rg               defx0wf0            Unmanaged
     Group: sa-rg               defx0wf1            Unmanaged

     Group: apache-rg           defx0wf0            Unmanaged
     Group: apache-rg           defx0wf1            Unmanaged

-- Resources --

            Resource Name       Node Name           State     Status Message
            -------------       ---------           -----     --------------
  Resource: defx0yf0            defx0wf0            Online    Online - LogicalHostname online.
  Resource: defx0yf0            defx0wf1            Offline   Offline

  Resource: nfs-rs              defx0wf0            Online    Online
  Resource: nfs-rs              defx0wf1            Offline   Offline

 

Step 6: Start the resource groups:

root@defx0wf0 # scswitch -Z -g sa-rg

root@defx0wf0 # scswitch -Z -g apache-rg

root@defx0wf0 # scstat -p|pg

-- Resource Groups and Resources --

            Group Name          Resources
            ----------          ---------
 Resources: nfs-rg              defx0yf0 nfs-rs
 Resources: sa-rg               -
 Resources: apache-rg           -

-- Resource Groups --

            Group Name          Node Name           State
            ----------          ---------           -----
     Group: nfs-rg              defx0wf0            Online
     Group: nfs-rg              defx0wf1            Offline

     Group: sa-rg               defx0wf0            Online
     Group: sa-rg               defx0wf1            Offline

     Group: apache-rg           defx0wf0            Online
     Group: apache-rg           defx0wf1            Online

-- Resources --

            Resource Name       Node Name           State     Status Message
            -------------       ---------           -----     --------------
  Resource: defx0yf0            defx0wf0            Online    Online - LogicalHostname online.
  Resource: defx0yf0            defx0wf1            Offline   Offline

  Resource: nfs-rs              defx0wf0            Online    Online
  Resource: nfs-rs              defx0wf1            Offline   Offline
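
A quick functional test (not part of the original session): with Scalable=TRUE and two primaries, HTTP requests to the shared address defx0yf0 are load-balanced across both nodes, so the service should answer as soon as apache-rg is online:

# Connect to the shared address and request the start page by hand:
telnet defx0yf0 80
# then type "GET / HTTP/1.0" followed by an empty line.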