SUSE Manager/PowerVM virtualization info gathering

From MicroFocusInternationalWiki

PPC virtualization gathering

There are multiple virtualization options for the Power architecture:

1. PowerKVM

Single KVM hypervisor system installed on a PPC machine. In this case, the poller approach could be used to gather the virtualization data.

It seems that PowerKVM is not used as often as PowerVM. Moreover, for PowerKVM the same tooling as for KVM on x86 can be used (libvirt, virsh, ...), so the virtualization gathering should be solved in the same or a similar way as with our established "poller" solution.
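For illustration, the standard libvirt tooling covers the gathering on a PowerKVM host. A guarded sketch (virsh is of course absent on non-KVM machines, hence the check):

```shell
# Sketch: on a PowerKVM host the usual libvirt tooling applies.
# The guard lets the snippet no-op on machines without virsh installed.
if command -v virsh >/dev/null 2>&1; then
    # one UUID per line, for running and shut-off domains alike
    virsh list --all --uuid
else
    echo "virsh not installed - not a PowerKVM/libvirt host?" >&2
fi
```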

2. PowerVM

  • 1 PPC machine is able to host logical partitions - "LPARs" (basically virtual machines)
  • 1 special LPAR - the VIO Server:
    • responsible for "mapping" the virtual resources of the other LPARs to physical resources
    • runs a specialized OS provided by IBM
    • the user doesn't interact with it directly (e.g. by logging in via SSH), but through other channels:
      • HMC (Hardware Management Console) - a dedicated machine running software (web page, REST API) that manages the hypervisor and the resources on it (LPARs, virtualized resources). This seems to be the most common way to manage PowerVM machines.
      • NovaLink (Power8 and higher only!) - a separate LPAR on the hypervisor that manages the hypervisor and the resources on it (LPARs, virtualized resources). The NovaLink LPAR usually runs a specialized Ubuntu Linux made by IBM. It's also possible to run SUSE Linux with the NovaLink tooling, but this is more or less experimental and we don't support it. The NovaLink CLI uses a Python client library, pypowervm, that talks to the NovaLink REST API.

This page focuses on PowerVM only!!!

The problem

We need to collect these pieces of information:

  • Unique identifiers from the guests (see the "Guest part" below)
  • Unique identifiers of the guests running on the hypervisor (see the "Hypervisor part" below)
  • Number of CPUs/sockets on the hypervisor - essential for subscription counting

With these we can construct a mapping of which guests run on which hypervisors.

We'll need to use the gatherer for this, as running the poller on non-SLE systems (the HMC and NovaLink machines) is not desired.

Guest part

The LPAR UUID is in the /sys/firmware/devicetree/base/ibm,partition-uuid file. The biggest problem with this is that the file is not always present, see below. I asked our (2) PPC experts under what conditions this file is present; answers are pending.
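Reading the file needs one small tweak: device-tree properties end with a NUL byte, which should be stripped. A minimal sketch - the fallback branch is only an assumption on our side, not an established solution:

```shell
# Read the LPAR UUID of the guest we are running on.
# Device-tree properties carry a trailing NUL byte - strip it.
UUID_FILE=/sys/firmware/devicetree/base/ibm,partition-uuid
if [ -r "$UUID_FILE" ]; then
    tr -d '\0' < "$UUID_FILE"
    echo
else
    # the file is not always present (see the shiraz-3.arch case below)
    echo "no ibm,partition-uuid in the device tree" >&2
fi
```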


  • Installed on our PPC hardware (NovaLink machine):
    • LPAR name: franky-abid-awesome-lpar, IP, root/linux
    • Installed from an ISO:
      • Create a virtual optical media repository [2] (via the viosvrcmd command)
      • Upload the ISO: pvmctl vom upload
    • See the installation instructions here [1]

/sys/firmware/devicetree/base/ibm,partition-uuid contains a UUID that matches the output of pvmctl lpar list --display-fields LogicalPartition.uuid


  • shiraz-3.arch machine:
    • No ibm,partition-uuid under the device tree!!!
    • Only system-id - is it usable?


  • Running on our PPC hardware (NovaLink machine):
    • Not installed ("installation error" with no other hints), but the linuxrc "FS" has the same file as SLE12-SP2 - the ibm,partition-uuid is present.

Another SLE15 guest - mania-1

  • In QA lab [4]
  • IP, root/susetesting
  • SLE for SAP 15 RC1
  • LPAR UUID under the device tree (/sys)

Hypervisor part

This section contains ways to retrieve the virtualization information from the NovaLink and HMC machines, both via CLI and API.



CLI

Run these commands on the NovaLink machine.

LPARs and their UUIDs
pvmctl lpar list --display-fields LogicalPartition.uuid
CPU information of the hypervisor
pvmctl sys list \
   --display-fields ManagedSystem.system_name ManagedSystem.uuid \
   ManagedSystem.proc_units ManagedSystem.proc_units_avail


API

Successful with local access only; the password authentication didn't work for me (it seems to be disabled).


Logon

Create a file logon_request.xml with this content:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LogonRequest xmlns="" schemaVersion="V1_0">

  • Auth using curl:

Port 12443 (HTTPS) or 12080 (HTTP).
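Before fighting with the authentication it may be worth checking that the ports are reachable at all. A quick sketch - the hostname novalink.example.com is a placeholder:

```shell
# Probe the NovaLink REST ports; novalink.example.com is a placeholder host.
# Uses bash's /dev/tcp pseudo-device, hence the explicit bash -c.
for port in 12443 12080; do
    if timeout 3 bash -c "exec 3<>/dev/tcp/novalink.example.com/$port" 2>/dev/null; then
        echo "port $port open"
    else
        echo "port $port closed or filtered"
    fi
done
```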

curl -k -X PUT \
   -H "Content-Type: application/; type=LogonRequest" \
   -H "Accept: application/; type=LogonResponse" \
   -d @logon_request.xml \
Get info about LPARs
  • The previous logon via curl gives you a path to a file with the token
  • Cat the file - this is the token
  • Use the token in the request for LPARs
curl -X GET \
   -H "X-API-Session: <THE TOKEN>" \

You should get info about all logical partitions including their MACs and UUIDs.
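The response can be post-processed with standard tools. A rough sketch - the element name PartitionUUID is taken from the PowerVM REST schema and should be double-checked; the inline sample here stands in for the real response body:

```shell
# Pull the UUID of every logical partition out of the response body.
# Fed with an inline sample; in practice, pipe the curl output in instead.
grep -o '<PartitionUUID[^>]*>[^<]*</PartitionUUID>' <<'EOF' | sed 's/<[^>]*>//g'
<LogicalPartition><PartitionUUID kb="ROR">1A2B3C4D-0000-0000-0000-000000000001</PartitionUUID></LogicalPartition>
EOF
```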

Get info about the hypervisors (CPU Sockets)
  • Use the login token from the "Logon" step
curl -X GET \
   -H "X-API-Session: $TOKEN" \


Logoff

You also need the token from the logon curl:

curl -k -X DELETE \
   -H "Content-Type: application/; type=LogonRequest" \
   -H "X-API-Session: $TOKEN" \


HMC

Interacting with the HMC API is the same as with the NovaLink API; there should be only subtle differences [3].

Successfully tested, including the remote authentication.

For authentication, use this form of login_request.xml and continue as in the "Logon" section for NovaLink:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LogonRequest xmlns="" schemaVersion="V1_0">

Alternative approach

Do not use the UUID but a MAC address, as the UUID seems to be unreliable.

Problems:

  • multiple MAC addresses per client
  • duplicate MAC addresses (different LANs?)

Advantages:

  • no change on the client tools - we already gather information about NICs!!

We'd need to adjust the gathering and the UUID matching algorithm:

  • either we store the MAC address of a single NIC as the UUID in the rhnVirtualInstance table, and the matching algorithm checks whether this MAC address is present among the addresses of the guest
  • or we store the MAC addresses of ALL interfaces as the UUID (currently it is a varchar(128))
  • or we store a hash of the sorted MAC addresses as the UUID - this would be hard to debug, though
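The third option can be sketched in a few lines of shell. Purely illustrative - truncating SHA-256 to 128 bits (32 hex characters) is an arbitrary choice here, made only to fit the varchar(128) column:

```shell
# Build a stable pseudo-UUID from all MAC addresses of this machine:
# sort for order-independence, hash, truncate to 128 bits (32 hex chars).
# The loopback address 00:00:00:00:00:00 is filtered out.
cat /sys/class/net/*/address 2>/dev/null \
    | grep -v '^00:00:00:00:00:00' \
    | sort -u \
    | sha256sum \
    | awk '{print substr($1, 1, 32)}'
```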

Guest part

Done - we already store MAC addresses in the DB.

Hypervisor part


  • CLI
pvmctl eth list
  • API - Same snippets as for listing LPARs.


Same as for NovaLink; only the output differs.

Further reading