SUSE Manager/HAE

SUSE Manager Main Page

HOW-TO SUSE Manager 1.7 on HAE

Some customers have asked whether it is possible to run SUSE Manager on an HAE cluster. This is officially not supported, so this procedure should be used with care and not (directly) on a production system. The HAE cluster built here is active/passive, which means that all SUSE Manager services always run on one node.

Requirements

 2 servers, each with 8 GB RAM, 2 processors and a 20 GB disk for the OS; 1 shared 200 GB disk for the Spacewalk data; 1 shared 50 MB disk for SBD

This setup has been tested on VMware. The cluster nodes are named dmsm1 and dmsm2. SUSE Manager will be accessible via the hostname dmsm.

Installation server 1

The server will be installed with an IP address that will not be used for SUSE Manager later. The devices that will be shared via HAE later should be attached to this server. Start the server from the SUSE Manager CD.

After phase 1 (that is, before running yast susemanager_setup), perform the following:

 zypper -n up -l

Reboot the server, and then continue with:

 pvcreate /dev/sdb
 vgcreate vg_sm /dev/sdb
 lvcreate -n vol_db -L 15G vg_sm
 lvcreate -n vol_pack -L 150G vg_sm
 lvcreate -n vol_www -L 30G vg_sm
 mkfs -t ext3 /dev/vg_sm/vol_db
 mkfs -t ext3 /dev/vg_sm/vol_pack 
 mkfs -t ext3 /dev/vg_sm/vol_www
 echo "/dev/vg_sm/vol_pack /var/spacewalk ext3 defaults 0 0" >> /etc/fstab
 echo "/dev/vg_sm/vol_db/ /var/lib/pgsql ext3 defaults 0 0" >> /etc/fstab
 echo "/dev/vg_sm/vol_www /srv/www/htdocs/pub ext3 defaults 0 0" >> /etc/fstab
 mkdir -p /var/spacewalk
 mkdir -p /var/lib/pgsql
 mkdir -p /srv/www/htdocs/pub
 chown postgres.postgres /var/lib/pgsql
 chmod 755 /var/lib/pgsql
 mount -a

Start the second phase of the installation:

 yast susemanager_setup

After the setup has finished, add the IP address and host name for SUSE Manager to /etc/hosts or configure them in DNS. I would prefer to do both.
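
For example, using the cluster IP address and host name used elsewhere in this document (adjust to your environment):

 echo "147.2.93.134 dmsm.mb.lab.dus.novell.com dmsm" >> /etc/hosts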

Perform the next steps to finish the installation:

 zypper in spacewalk-utils
 mgr-ncc-sync -c sles11-sp3-pool-x86_64
 mgr-ncc-sync -c sles11-sp2-suse-manager-tools-x86_64-sp3
 mgr-ncc-sync -c sles11-sp3-updates-x86_64

When the synchronization with NCC has finished:

 rpm -e spacewalk-client-repository-sle-10-3-0.1-0.7.2 -e spacewalk-client-repository-0.1-0.7.1 \
 -e spacewalk-client-repository-sle-11-1-0.1-0.7.1 -e spacewalk-client-repository-sle-10-4-0.1-0.7.2
 mgr-create-bootstrap-repo -c SLE-11-SP3-x86_64

To prevent the services from starting after a reboot, execute the following:

 for w in jabberd tomcat6 osa-dispatcher MonitoringScout Monitoring rhn-search taskomatic \
 cobblerd apache2 postgresql susemanager; do chkconfig $w off; done

This finishes the installation of server 1.


Installation server 2

The server will be installed with an IP address that will not be used for SUSE Manager later. Do NOT attach the devices that will be shared via HAE later to this server. Start the server from the SUSE Manager CD.

After phase 1 (that is, before running yast susemanager_setup), perform the following:

 zypper -n up -l

Continue with phase 2. You will receive an error that there is not enough disk space; this can be ignored. After the installation, stop PostgreSQL and Spacewalk:

 rcpostgresql stop
 spacewalk-service stop

and continue with:

 rm -R /var/spacewalk
 rm -R /var/lib/pgsql
 rm -R /srv/www/htdocs/pub
 mkdir -p /var/spacewalk
 mkdir -p /var/lib/pgsql
 mkdir -p /srv/www/htdocs/pub
 chown postgres.postgres /var/lib/pgsql
 chmod 755 /var/lib/pgsql

After the setup has finished, add the IP address and host name for SUSE Manager to /etc/hosts or configure them in DNS, just as on server 1. I would prefer to do both.

To prevent the services from starting after a reboot, execute the following:

 for w in jabberd tomcat6 osa-dispatcher MonitoringScout Monitoring rhn-search taskomatic \
 cobblerd apache2 postgresql susemanager; do chkconfig $w off; done

This finishes the installation of server 2. Please turn off this server.


Preparing SUSE Manager to listen to a different host name on server 1

Check that the correct host name for SUSE Manager is present in /etc/hosts. In my test this is dmsm.mb.lab.dus.novell.com.

First we need to change the certificate for SUSE Manager. This can be done with the tool mgr-ssl-tool:

 mgr-ssl-tool --gen-ca --set-common-name=dmsm.mb.lab.dus.novell.com -p suse1234 -vv --force
 mgr-ssl-tool --gen-server -p suse1234 --set-hostname=dmsm.mb.lab.dus.novell.com -vv 

Now the certificates need to be copied to the proper locations:

 cp /root/ssl-build/RHN-ORG-TRUSTED-SSL-CERT /srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT 
 cp /root/ssl-build/rhn-org-trusted-ssl-cert-1.0-2.noarch.rpm /srv/www/htdocs/pub/
 rm /srv/www/htdocs/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm
 cp /root/ssl-build/dmsm.mb.lab.dus/server.crt /etc/ssl/servercerts/spacewalk.crt
 cp /root/ssl-build/dmsm.mb.lab.dus/server.key /etc/apache2/ssl.key/spacewalk.key 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.crt /etc/apache2/ssl.crt/spacewalk.crt 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.csr /etc/apache2/ssl.csr/spacewalk.csr 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.key /etc/ssl/private/spacewalk.key
 cp /root/ssl-build/dmsm.mb.lab.dus/server.pem /etc/pki/spacewalk/jabberd

Stop postgres and spacewalk:

 rcpostgresql stop
 spacewalk-service stop

Now the configuration files need to be modified:

 vim /etc/apache2/vhosts.d/ssl.conf 

- add the following under <VirtualHost _default_:443>

       ServerName dmsm.mb.lab.dus.novell.com:443

Change the host name in the following files (a search-and-replace sketch follows the list):

 /etc/cobbler/settings
 /etc/rhn/rhn.conf
 /etc/jabberd/c2s.xml
 /etc/jabberd/sm.xml
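
The option names differ per file, so a simple approach is to search for the old host name and replace it, for example with sed. This is only a sketch; it assumes dmsm1.mb.lab.dus.novell.com was the original host name of server 1, so adjust both names to your environment:

 # assumption: dmsm1.mb.lab.dus.novell.com was the original host name of server 1
 for f in /etc/cobbler/settings /etc/rhn/rhn.conf /etc/jabberd/c2s.xml /etc/jabberd/sm.xml; do
        grep -n dmsm1.mb.lab.dus.novell.com $f
        sed -i 's/dmsm1\.mb\.lab\.dus\.novell\.com/dmsm.mb.lab.dus.novell.com/g' $f
 done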

Remove the old jabberd password databases:

 rm -Rf /var/lib/jabberd/db/*

To transfer the data to the other server, copy the following:

 mkdir -p /var/spacewalk/copy/etc/apache2/vhosts.d
 mkdir -p /var/spacewalk/copy/etc/cobbler
 mkdir -p /var/spacewalk/copy/etc/rhn
 mkdir -p /var/spacewalk/copy/etc/jabberd
 mkdir -p /var/spacewalk/copy/ssl-build
 cp /etc/apache2/vhosts.d/ssl.conf /var/spacewalk/copy/etc/apache2/vhosts.d
 cp /etc/cobbler/settings /var/spacewalk/copy/etc/cobbler
 cp /etc/rhn/rhn.conf /var/spacewalk/copy/etc/rhn
 cp /etc/jabberd/c2s.xml /var/spacewalk/copy/etc/jabberd
 cp /etc/jabberd/sm.xml /var/spacewalk/copy/etc/jabberd
 cp -R /root/ssl-build/* /var/spacewalk/copy/ssl-build

Remove the following lines from /etc/fstab:

 /dev/vg_sm/vol_pack /var/spacewalk ext3 defaults 0 0
 /dev/vg_sm/vol_db /var/lib/pgsql ext3 defaults 0 0
 /dev/vg_sm/vol_www /srv/www/htdocs/pub ext3 defaults 0 0

Shutdown server 1.

Preparing SUSE Manager to listen to a different host name on server 2

Before starting the server, attach the shared devices to it. Start the server, mount the Spacewalk volume and copy the prepared files:

 mount /dev/vg_sm/vol_pack /mnt
 cp -R /mnt/copy/ssl-build/* /root/ssl-build
 cp /mnt/copy/etc/apache2/vhosts.d/ssl.conf /etc/apache2/vhosts.d
 cp /mnt/copy/etc/cobbler/settings /etc/cobbler
 cp /mnt/copy/etc/rhn/rhn.conf /etc/rhn
 cp /mnt/copy/etc/jabberd/c2s.xml /etc/jabberd
 cp /mnt/copy/etc/jabberd/sm.xml /etc/jabberd
 cp /root/ssl-build/dmsm.mb.lab.dus/server.crt /etc/ssl/servercerts/spacewalk.crt
 cp /root/ssl-build/dmsm.mb.lab.dus/server.key /etc/apache2/ssl.key/spacewalk.key 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.crt /etc/apache2/ssl.crt/spacewalk.crt 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.csr /etc/apache2/ssl.csr/spacewalk.csr 
 cp /root/ssl-build/dmsm.mb.lab.dus/server.key /etc/ssl/private/spacewalk.key
 cp /root/ssl-build/dmsm.mb.lab.dus/server.pem /etc/pki/spacewalk/jabberd

This completes the SUSE Manager preparation. Do not forget to unmount the volume again (umount /mnt) so that the cluster can take over the shared storage later.

Configure HAE

- Install HAE on both nodes. No configuration is needed. After the installation, activate the repositories and apply patches:

 zypper mr -e SLE11-HAE-SP2-Pool
 zypper mr -e SLE11-HAE-SP2-Updates
 zypper -n up -l

- add softdog to the initrd modules in /etc/sysconfig/kernel, e.g. INITRD_MODULES="ext3 vmxnet3 vmw_pvscsi softdog"

- create a new initrd with mkinitrd and reboot the server; a sketch of both steps follows.
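
A minimal sketch of these two steps; the sed call simply appends softdog to whatever INITRD_MODULES already contains:

 # append softdog to the existing INITRD_MODULES line (skip if it is already listed)
 sed -i 's/^INITRD_MODULES="\(.*\)"/INITRD_MODULES="\1 softdog"/' /etc/sysconfig/kernel
 mkinitrd
 reboot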

Connect to server 1 and perform:

- create the SBD partition on one server:

 sbd -d /dev/disk/by-id/scsi-36000c298b5ddb11a74c9738165f4bf11 create

- create /etc/sysconfig/sbd on both servers with the following content:

 SBD_DEVICE="/dev/disk/by-id/scsi-36000c298b5ddb11a74c9738165f4bf11"
 SBD_OPTS="-W -P"
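
To verify that the SBD device has been initialized correctly, its header and slot list can be dumped:

 sbd -d /dev/disk/by-id/scsi-36000c298b5ddb11a74c9738165f4bf11 dump
 sbd -d /dev/disk/by-id/scsi-36000c298b5ddb11a74c9738165f4bf11 list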

- Create /etc/corosync/corosync.conf on both nodes:

 compatibility: whitetank
 aisexec {
       group:          root
       user:           root
 }
 service {
       use_mgmtd:      yes
       ver:            0
       name:           pacemaker
       use_logd:       yes
 }
 totem {
       rrp_mode:       passive
       token_retransmits_before_loss_const:    28
       join:           60
       max_messages:   20
       vsftype:        none
       token:          5000
       consensus:      6000
       secauth:        on
       version:        2
       threads:        4
       clear_node_high_bit:    yes
       transport:      udpu
       interface {
               member {
                       memberaddr:     147.2.93.135
               }
               member {
                       memberaddr:     147.2.93.136
               }
               mcastport:      5405
               bindnetaddr:    147.2.93.0
               ringnumber:     0
       }
       interface {
               member {
                       memberaddr:     192.168.100.70
               }
               member {
                       memberaddr:     192.168.100.71
               }
               mcastport:      5406
               bindnetaddr:    192.168.100.0
               ringnumber:     1
       }
 }
 logging {
       fileline:       off
       to_stderr:      no
       to_logfile:     no
       logfile:        /var/log/corosync.log
       to_syslog:      yes     
       syslog_facility: daemon
       debug:          off
       timestamp:      off
 }
 amf {
       mode:   disable
 }

- Generate the /etc/corosync/authkey (e.g. with YaST) and copy it to the other node:
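
If you prefer the command line over YaST, a minimal sketch would be the following (corosync-keygen needs entropy and may take a moment; it assumes node dmsm2 is reachable by name):

 corosync-keygen
 scp /etc/corosync/authkey dmsm2:/etc/corosync/authkey
 ssh dmsm2 chmod 400 /etc/corosync/authkey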

- in /etc/lvm/lvm.conf, in the activation section, add the following on both servers (so that vg_sm is not activated automatically at boot and only the cluster LVM resource activates it):

    volume_list = [ ]
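
In context, the relevant part of /etc/lvm/lvm.conf then looks roughly like this (only the volume_list line is added; existing settings in the activation section stay untouched):

 activation {
        volume_list = [ ]
 }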

- start openais on both nodes.
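
For example, assuming the standard SLES 11 init script name:

 rcopenais start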

- copy the following into the CIB via crm configure edit:

 node dmsm1
 node dmsm2
 primitive pri_Monitoring lsb:Monitoring \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_MonitoringScout lsb:MonitoringScout \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_apache2 lsb:apache2 \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_cobblerd lsb:cobblerd \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_del_jabber_db ocf:heartbeat:anything \
       params binfile="/bin/rm" cmdline_options="-Rf /var/lib/jabberd/db/*" \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_fs_db ocf:heartbeat:Filesystem \
       params device="/dev/vg_sm/vol_db" directory="/var/lib/pgsql" fstype="ext3" options="noatime,defaults" \
       op start interval="0" timeout="60" \
       op stop interval="0" timeout="300s"
 primitive pri_fs_spacewalk ocf:heartbeat:Filesystem \
       params device="/dev/vg_sm/vol_pack" directory="/var/spacewalk" fstype="ext3" options="noatime,defaults" \
       op start interval="0" timeout="60" \
       op stop interval="0" timeout="300s"
 primitive pri_fs_www ocf:heartbeat:Filesystem \
       params device="/dev/vg_sm/vol_www" directory="/srv/www/htdocs/pub" fstype="ext3" options="noatime,defaults" \
       op start interval="0" timeout="60" \
       op stop interval="0" timeout="300s"
 primitive pri_ip_suman ocf:heartbeat:IPaddr2 \
       params ip="147.2.93.134" \
       op monitor interval="10s" timeout="20s" on_fail="restart"
 primitive pri_jabberd lsb:jabberd \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_lvm_suman ocf:heartbeat:LVM \
       params volgrpname="vg_sm" exclusive="yes" \
       op monitor interval="120s" timeout="60s" \
       op stop interval="0" timeout="30s"
 primitive pri_osa-dispatcher lsb:osa-dispatcher \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_postgres ocf:heartbeat:pgsql \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_rhn-search lsb:rhn-search \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_taskomatic lsb:taskomatic \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive pri_tomcat6 lsb:tomcat6 \
       op start interval="0" timeout="120" \
       op stop interval="0" timeout="300s"
 primitive prm_stonith_SBD stonith:external/sbd \
       op start interval="0" timeout="15s" start-delay="15s"
 group grp_suman pri_ip_suman pri_lvm_suman pri_fs_spacewalk pri_fs_www pri_fs_db pri_del_jabber_db \
 pri_postgres pri_jabberd pri_tomcat6 pri_apache2 pri_osa-dispatcher pri_Monitoring \
 pri_MonitoringScout pri_rhn-search pri_cobblerd pri_taskomatic
 property $id="cib-bootstrap-options" \
       dc-version="1.1.7-77eeb099a504ceda05d648ed161ef8b1582c7daf" \
       cluster-infrastructure="openais" \
       expected-quorum-votes="2" \
       stonith-enabled="true" \
       stonith-action="reboot" \
       stonith-timeout="150s" \
       default-action-timeout="120s" \
       no-quorum-policy="ignore" \
       default-resource-stickiness="1000" \
       last-lrm-refresh="1392295320"
 rsc_defaults $id="rsc_defaults-options" \
       migration-threshold="5" \
       failure-timeout="86400s"
 op_defaults $id="rsc_op-defaults" \
       timeout="120s"

- After saving and exiting, all resources will be started and SUSE Manager will be available via the hostname dmsm.
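
To watch the resources come up and to test a manual failover, something like the following can be used; the migrate call is only an example, and the location constraint it creates should be removed again with unmigrate:

 crm_mon -1                             # one-shot overview of nodes and resources
 crm resource migrate grp_suman dmsm2   # move the whole group to the other node
 crm resource unmigrate grp_suman       # remove the temporary location constraint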

Written by: Michael Brookhuis (mbrookhuis@suse.com)