
Migration from SUSE Manager 3.2 to SUSE Manager 4

This wiki page contains the draft for the official documentation, which can be found here. The old content on this page is provided for reference only. Please refer to the official documentation whenever possible!

The migration from SUSE Manager 3.2 to SUSE Manager 4 works exactly the same way as a migration from SUSE Manager 2.1 to SUSE Manager 3. There is NO in-place migration! The migration happens from the original machine to a new one. While this has the drawback that you temporarily need two machines, it also has the advantage that the original machine will remain fully functional in case something goes wrong.

The whole process can be tricky, so it is strongly advised that the migration be done by an experienced consultant. Given the complexity of the product, the migration is "all or nothing": if something goes wrong, you will need to start over from the beginning, and error handling is very limited. Nevertheless, it should work more or less out of the box if all steps are carefully executed as documented.

Please note that the migration involves dumping the whole database on the source machine and restoring it on the target machine. This is a very time-consuming process! All channels and packages also need to be copied to the new machine, so expect the whole migration to take several hours!

Prerequisites

The source machine needs to run SUSE Manager 3.2 with all the latest updates applied! Before starting the migration, make sure the machine is fully up to date and all available updates are installed!

During migration, the database of the source machine is dumped, and the dump is temporarily stored on the target system. The dump is compressed with gzip using the default compression options (maximum compression only yields about 10% additional space savings but costs a lot of runtime). Check the disk usage of the database with du -sch /var/lib/pgsql/data and ensure that at least 30% of this value is available in /var/spacewalk/tmp.
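A quick way to compare the two values, assuming the default paths mentioned above (run the first command on the source machine, the second on the target):

# Size of the live database on the source machine:
du -sch /var/lib/pgsql/data
# Free space on the filesystem holding /var/spacewalk/tmp on the target:
df -h /var/spacewalk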

These values from a test migration might help illustrate the space requirements:

f204:/var/lib/pgsql # du -sch data
1,8G    data
1,8G    total
f204:/var/spacewalk/packages # l -h /var/spacewalk/tmp/susemanager.dmp.gz
-rw-r--r-- 1 root root 506M Jan 12 14:58 /var/spacewalk/tmp/susemanager.dmp.gz

This is a small test installation; for bigger installations the ratio might be better (space required for the database dump might be less than 30%).

The dump is written to the directory /var/spacewalk/tmp; the directory will be created if it does not exist yet. If you want the dump to be stored somewhere else, change the definition of the variable $TMPDIR at the beginning of the migration script (/usr/lib/susemanager/bin/mgr-setup, see below) to suit your needs.
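For example (the path below is purely illustrative; pick any location with enough free space):

# At the beginning of /usr/lib/susemanager/bin/mgr-setup:
TMPDIR="/mnt/storage/migration-tmp"   # store the database dump here instead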

Setting up the target machine

After the target system has been installed with SLES 15 SP1 using the Unified Installer and choosing SUSE Manager Server as the product, just run yast2 susemanager_setup as you would for a normal installation of SUSE Manager.
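For example, from a root shell on the freshly installed target system:

yast2 susemanager_setup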

On the first screen, make sure to check Migrate a SUSE Manager compatible server instead of Set up SUSE Manager from scratch.

On the second screen, enter the name of the source system as Hostname of source SUSE Manager Server, together with its domain name. Also enter the database credentials of the source system.

On the next screen, you need to specify the IP address of the target system. Normally this value is pre-set correctly, and you should only need to press ENTER. Only if the system has multiple IP addresses might you need to specify the one to be used during migration. Please note that during the migration process, the target system will fake its hostname to that of the source system: the hostname of a SUSE Manager installation is vital and needs to be used from the very beginning. So do not get confused when logging in to the systems during migration; both will present you with the same hostname!

The next screens are the same as for a regular installation: just specify the database parameters (using the same values as on the source system is recommended).

After all the data has been gathered, YaST2 will terminate. The actual migration will NOT start automatically but needs to be triggered manually!

Performing the migration

The actual migration is performed by running the command /usr/lib/susemanager/bin/mgr-setup -m. This reads the data gathered in the previous step, sets up SUSE Manager on the new machine, and transfers all of the data from the source machine. Since several operations need to be performed on the source machine via ssh, you will be prompted once for the root password of the source machine. A temporary ssh key named "migration-key" is created and installed on the source machine, so you need to enter the root password only once. The temporary ssh key is of course deleted after a successful migration.
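For example, from a root shell on the target system:

/usr/lib/susemanager/bin/mgr-setup -m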

Ideally, this is all you need to do. Depending on the size of the installation, the actual migration will take up to several hours. Once it has finished, you will be prompted to shut down the source machine, re-configure the network of the target machine to use the same IP address and hostname as the original machine, and restart it. It should now be a fully functional replacement for your previous 3.2 installation.

The following numbers illustrate the runtime for dumping and importing the database:

14:53:37   Dumping remote database to /var/spacewalk/tmp/susemanager.dmp.gz on target system. Please wait...
14:58:14   Database successfully dumped. Size is: 506M
14:58:29   Importing database dump. Please wait...
15:05:11   Database dump successfully imported.

So dumping the database took about five minutes and importing the dump on the target system took about seven minutes. For big installations, this can take up to several hours. Additionally, account for the time it takes to copy all the package data to the new machine; depending on your network infrastructure and hardware, this can also take a significant amount of time.

Speeding up the migration

Since a lot of data needs to be copied, the whole migration can take a long time. It can be sped up by performing one of the most time-consuming tasks before the actual migration: copying all the channels, packages, autoinstall images, and so on. The following approach is therefore recommended: once you have gathered all the data via YaST2, run the following command:

mgr-setup -r

This copies the lion's share of the data from the old server to the new one. The command can be run at any time; the current server remains fully functional while it runs. Once the actual migration is started, only the data changed since this command was run needs to be transferred, which significantly reduces the downtime. A typical sequence is sketched below.
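A minimal sketch of the recommended sequence (both commands are described in this document; mgr-setup lives in /usr/lib/susemanager/bin):

# Pre-copy the bulk data; the old server stays fully operational:
/usr/lib/susemanager/bin/mgr-setup -r

# Later, in the maintenance window, run the actual migration;
# only data changed since the pre-copy still needs to be transferred:
/usr/lib/susemanager/bin/mgr-setup -m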

On huge installations, the transfer of the database (which involves dumping the database on the source machine and importing the dump on the target system) can still take quite some time. This is unavoidable, as only the whole database can be dumped. No write operations may happen during this time, which is why the migration script needs to shut down the SUSE Manager services on the source machine.

Packages on external storage

Some installations might store the package data on external storage (e.g. an NFS mount on /var/spacewalk/packages). In such a case it is of course pointless to copy all this data to the new machine. Just edit the script /usr/lib/susemanager/bin/mgr-setup and remove the respective rsync command (around line 442). Make sure the storage is mounted on the new machine before the system is started for the first time!

Handle /srv/www/htdocs/pub analogously, if appropriate. An example mount entry is sketched below.
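A hedged example of such a mount on the new machine (server name and export path are purely illustrative; adjust them to your setup):

# Example /etc/fstab entry for externally stored package data:
nfs.example.com:/export/packages  /var/spacewalk/packages  nfs  defaults  0 0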

Broken UI after migration

Chances are you will notice a broken user interface after migration. This is not a bug but a browser caching issue: the new machine has the same hostname and the same IP address as the old machine, which can confuse some web browsers. If you experience this issue, just force a reload of the page (in Firefox, press Ctrl-F5) and everything will be fine.

Example session

This is the output of a typical migration:


/var/spacewalk already exists. Leaving it untouched.
Filesystem type for /var/cache is ext4 - ok.
Open needed firewall ports...
running
success
success
/var/spacewalk/tmp does not exist; creating it...
Ensuring postgresql has read permissions on /var/spacewalk/tmp for database dump...


Migration needs to execute several commands on the remote machine.
Please enter the root password of the remote machine.
Testing for silent remote command execution... Ok
Migrating from remote system SUSE Manager 3.2
Checking for /etc/pki/tls/certs/spacewalk.crt... found
Found /etc/pki/tls/certs/spacewalk.crt and /etc/pki/tls/private/spacewalk.key on source system.
Shutting down remote spacewalk services...
Shutting down spacewalk services...
Done.
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql.service → /usr/lib/systemd/system/postgresql.service.
CREATE ROLE
* Loading answer file: /root/spacewalk-answers.
** Database: Setting up database connection for PostgreSQL backend.
** Database: Populating database.
** Database: Skipping database population.
* Configuring tomcat.
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
* Performing initial configuration.
* Configuring apache SSL virtual host.
** /etc/apache2/vhosts.d/vhost-ssl.conf has been backed up to vhost-ssl.conf-swsave
* Configuring jabberd.
* Creating SSL certificates.
** Skipping SSL certificate generation.
* Deploying configuration files.
* Setting up Cobbler..
Created symlink /etc/systemd/system/sockets.target.wants/tftp.socket → /usr/lib/systemd/system/tftp.socket.
Installation complete.
Visit https://e193.suse.de to create the Spacewalk administrator account.
13:23:24   Dumping remote database to /var/spacewalk/tmp/susemanager.dmp.gz on target system. Please wait...
13:31:51   Database successfully dumped. Size is: 1,5G
Reconfigure postgresql for high performance...
13:31:51   Importing database dump. Please wait...
13:39:20   Database dump successfully imported.
Reconfigure postgresql for normal safe operation...
Upgrade database schema...
Schema upgrade: [susemanager-schema-3.2.18-3.22.3] -> [susemanager-schema-4.0.11-1.3]
Searching for upgrade path to: [susemanager-schema-4.0.11-1]
Searching for upgrade path to: [susemanager-schema-4.0.11]
Searching for start path:  [susemanager-schema-3.2.18-3]
Searching for start path:  [susemanager-schema-3.2.18]
The path: [susemanager-schema-3.2.18] -> [susemanager-schema-4.0.0] -> [susemanager-schema-4.0.1] -> [susemanager-schema-4.0.2] -> [susemanager-schema-4.0.3] -> [susemanager-schema-4.0.4] -> [susemanager-schema-4.0.5] -> [susemanager-schema-4.0.6] -> [susemanager-schema-4.0.7] -> [susemanager-schema-4.0.8] -> [susemanager-schema-4.0.9] -> [susemanager-schema-4.0.10] -> [susemanager-schema-4.0.11]
Planning to run schema upgrade with dir '/var/log/spacewalk/schema-upgrade/schema-from-20190523-133924'
Executing spacewalk-sql, the log is in [/var/log/spacewalk/schema-upgrade/schema-from-20190523-133924-to-susemanager-schema-4.0.11.log].
[...]
The database schema was upgraded to version [susemanager-schema-4.0.11-1.3].
13:39:46   Schema upgrade successful.
Copy files from old SUSE Manager...
13:39:47   Copy /etc/salt ...
13:39:47   Copy /root/ssl-build ...
13:39:48   Copy /srv/formula_metadata ...
13:39:48   Copy /srv/pillar ...
13:39:48   Copy /srv/salt ...
13:39:49   Copy /srv/susemanager ...
13:39:49   Copy /srv/tftpboot ...
13:39:50   Copy /srv/www/htdocs/pub ...
13:39:50   Copy /srv/www/os-images ...
13:39:51   Copy /var/cache/rhn ...
13:39:51   Copy /var/cache/salt ...
13:39:52   Copy /var/lib/cobbler/config ...
13:39:52   Skipping non-existing /var/lib/Kiwi ...
13:39:52   Copy /var/lib/rhn ...
13:39:53   Copy /var/lib/salt ...
13:39:53   Copy /var/log/rhn ...
13:39:54   Copy /var/spacewalk ...
13:40:48   Copy /root/.ssh ...
13:40:48   Copy certificates ...
13:40:53   Converting /var/lib/cobbler/config/distros.d --> /var/lib/cobbler/collections/distros
13:40:53   Converting /var/lib/cobbler/config/profiles.d --> /var/lib/cobbler/collections/profiles
INFO: Database configuration has been changed.
INFO: Wrote new general configuration. Backup as /var/lib/pgsql/data/postgresql.2019-05-23-13-40-54.conf
INFO: Wrote new client auth configuration. Backup as /var/lib/pgsql/data/pg_hba.2019-05-23-13-40-54.conf
Database is online
System check finished


============================================================================
Migration complete.
Please shut down the old SUSE Manager server now.
Reboot the new server and make sure it uses the same IP address and hostname
as the old SUSE Manager server!

IMPORTANT: Make sure, if applicable, that your external storage is mounted
in the new server as well as the ISO images needed for distributions before
rebooting the new server!
============================================================================

Be aware of the faked hostname

As already documented, the target system needs to fake its hostname to look like the source system. So check very carefully that you are on the right host when logged in via ssh, and make sure you reboot the right machine. Especially when logging in a second time while the migration is running, the prompt can be very confusing:

g36:~ # hostname
e193
g36:~ # exit
Logged out
Connection to g36 closed.
mantel@Mandelbrot:~ [0] > ssh g36
Warning: Permanently added 'g36,10.160.67.36' (ECDSA) to the list of known hosts.
Last login: Wed Jun  5 10:23:42 2019 from 10.160.4.229
e193:~ # 

As you can see, the first session's prompt (g36) was still set to the original hostname of the target machine because that login happened before the migration started. A login made while the migration is running already shows the changed hostname (e193).

Custom data and scripts

The migration only copies data from the old server that is part of the SUSE Manager product. If you have additional data and/or scripts (e.g. special scripts in /opt/bin or elsewhere), you need to copy them to the new machine yourself, for example as sketched below.
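A hedged example using rsync (the target hostname is a placeholder; /opt/bin is just the example location from above):

# Copy custom scripts from the old server to the new one:
rsync -av /opt/bin/ root@<new-server>:/opt/bin/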

Also be aware that SUSE Manager 4.0 uses Python 3. If you have been using custom Python scripts, make sure they also run with Python 3 and adapt them where necessary.
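Two quick checks, using a hypothetical custom script /opt/bin/myscript.py:

# Does the script still parse under Python 3?
python3 -m py_compile /opt/bin/myscript.py
# The 2to3 tool points out needed conversions (add -w to apply them):
2to3 /opt/bin/myscript.py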

In case of trouble

  • check disk space
  • check the output of the following command:

ssh root@<SUMa_3.2_machine> "su -s /bin/bash - postgres -c exit"

This command shouldn't produce any output. If it does, the transfer of the archive containing the database dump can get corrupted. Please revisit your bash environment on the 3.2 machine (e.g. the .bashrc file) and make sure no extra text is printed on shell startup; a common guard is sketched below.
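One common fix, assuming the output comes from a shell startup file such as .bashrc (a sketch; adapt it to whatever actually prints the text):

# In the offending startup file, only print in interactive shells:
if [[ $- == *i* ]]; then
    echo "welcome banner"
fi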

  • check best practices:
    • /var/spacewalk and /var/lib/pgsql on separate (XFS) filesystems
    • when using a separate filesystem, don't forget to remove the subvolume entry for /var/lib/pgsql in /etc/fstab and reboot the server before continuing

To try again, do the following on the new server (a consolidated sketch follows after the list):

  • remove /root/.MANAGER_SETUP_COMPLETE
  • stop postgresql and remove /var/lib/pgsql/data
  • set the hostname correctly (the machine still carries the hostname of the old SUSE Manager server)
  • correct /etc/hosts
  • check /etc/setup_env.sh on the new server and verify that the correct database name is set: MANAGER_DB_NAME='susemanager'
  • reboot the server before running mgr-setup again
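A minimal sketch of these steps (verify every path and the intended hostname before running anything; <original-target-hostname> is a placeholder):

# Allow mgr-setup to run again:
rm /root/.MANAGER_SETUP_COMPLETE
# Remove the imported database:
systemctl stop postgresql
rm -rf /var/lib/pgsql/data
# Revert the faked hostname:
hostnamectl set-hostname <original-target-hostname>
# Also fix /etc/hosts and check MANAGER_DB_NAME in /etc/setup_env.sh, then:
reboot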