SUSE Manager/SaltSSHServerPush

Push via Salt SSH

Note: This page is not related to Push via SSH for traditional clients. For that feature, see SUSE_Manager/SSHServerPush.
Note: This feature is still in feature preview. See the Known limitations below.

Salt provides the Salt SSH feature [1] for managing clients from a server without installing any Salt-related software on the clients; there is no need to have minions connected to the Salt master. In other words, the goal of this feature is to provide functionality similar to the traditional Push via SSH feature mentioned above.
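
For context, plain Salt SSH (outside of SUSE Manager) addresses targets listed in a roster file directly over SSH. A minimal example, with a made-up host entry:

    # /etc/salt/roster -- example target definition
    client1:
      host: 192.168.42.10
      user: root

    # Run a module function on the target; no minion needs to be installed
    salt-ssh 'client1' test.ping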

This feature allows:

  • managing Salt-entitled systems with the Push via SSH contact method using Salt SSH.
  • bootstrapping such systems.

To bootstrap a Salt SSH system, go to the "Bootstrapping" page in the Web UI (Salt -> Bootstrapping). Fill out the required fields, check the "Manage system completely via SSH" option, and confirm by clicking the "Bootstrap" button. The system is then bootstrapped, registered in SUSE Manager, and appears in the "Systems" list.

[Screenshot: Ss ssh push.png]

Configuration

There are 2 kinds of parameters for Salt SSH:

  • Bootstrap-time parameters - these are configured in the Bootstrapping page
    • Host
    • Activation keys
    • Password (used only for bootstrapping and not saved anywhere; all future ssh sessions are authorized via a key/certificate pair)
  • Persistent parameters - these are configured SUSE Manager-wide

Requirements

  • ssh daemon must be running on the remote system and reachable by the *salt-api* daemon (typically running on the SUSE Manager server)
  • python must be installed on the remote system, in a version supported by the installed Salt (currently Python 2.6)
Note: Old RHEL/CentOS versions (<= 5) are not supported because they do not ship Python 2.6 by default.
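
Both requirements can be verified from the SUSE Manager server before bootstrapping, for example (the hostname is a placeholder):

    # The ssh daemon must answer, and the remote Python must be >= 2.6
    ssh root@client.example.com 'python -V'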

Action execution

The feature uses a taskomatic job to execute scheduled actions using salt-ssh. This job periodically checks for scheduled actions and executes them. The difference from SSH push on traditional clients: the taskomatic job for traditional clients simply executed `rhn_check` on the clients via ssh, whereas the salt-ssh push job executes a full salt-ssh call based on the scheduled action.
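
Conceptually, the per-client difference looks roughly like this (both commands are simplified illustrations, not the exact calls SUSE Manager issues):

    # Traditional Push via SSH: trigger the client-side check, the client pulls its actions
    ssh root@client.example.com rhn_check

    # Salt SSH push: the server drives the scheduled action itself through salt-ssh,
    # e.g. by applying the state that corresponds to the action
    salt-ssh 'client1' state.apply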

Action execution progress

The machinery for running actions is partially shared with the traditional ssh push. The only difference is how a single minion is handled. The process is as follows (a code sketch follows the list):

  • Queued actions for a minion are retrieved from the database (sorted by date). For each action we check:
    • If an action is already completed or failed -> skip it
    • If a prerequisite action for this action is still queued for the minion -> skip it
    • If a prerequisite action for this action has failed for the minion -> fail this action too
    • If remaining tries for this action is < 1 -> fail this action
    • Otherwise
      • Decrease the number of remaining tries for this action
      • Execute the action using salt-ssh, check:
        • If we get an empty result from salt, or salt throws an exception, we log a warning (action will be retried in the next job run)
        • If we get a valid result, we update the action according to it
  • If some of the executed actions required a package list refresh (we determine that from the result), a package list refresh action is executed
  • If NO action was executed, a check-in is performed for the system instead
  • A salt call is executed to determine the uptime of the system; this value is updated in the database and old reboot actions are cleaned up
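
A minimal Python-style sketch of this per-minion logic (the helper functions and action fields are hypothetical, for illustration only; the real taskomatic job differs in detail):

    import logging

    log = logging.getLogger(__name__)

    def process_minion(minion, queued_actions):
        """Handle all queued actions for one Salt SSH minion, oldest first.

        Illustrative sketch; execute_via_salt_ssh(), update_action(),
        schedule_package_refresh(), check_in() and update_uptime() are
        hypothetical helpers standing in for the real implementation.
        """
        executed_any = False
        needs_pkg_refresh = False

        for action in sorted(queued_actions, key=lambda a: a.scheduled_date):
            if action.status in ('completed', 'failed'):
                continue                                  # already finished -> skip
            prereq = action.prerequisite
            if prereq is not None and prereq.status == 'queued':
                continue                                  # prerequisite still queued -> skip
            if prereq is not None and prereq.status == 'failed':
                action.status = 'failed'                  # prerequisite failed -> fail too
                continue
            if action.remaining_tries < 1:
                action.status = 'failed'                  # no tries left -> fail
                continue

            action.remaining_tries -= 1
            result = execute_via_salt_ssh(minion, action)
            if not result:
                # empty result or salt exception: retry on the next job run
                log.warning("No result for action %s on %s", action.id, minion)
                continue
            update_action(action, result)                 # completed/failed per result
            executed_any = True
            needs_pkg_refresh = needs_pkg_refresh or result.get('pkg_refresh_needed', False)

        if needs_pkg_refresh:
            schedule_package_refresh(minion)
        if not executed_any:
            check_in(minion)                              # plain check-in instead
        update_uptime(minion)                             # also cleans up old reboot actions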

Reboot action handling

Reboot actions require special treatment - we do not want to flag them as 'completed' as soon as the reboot command succeeds, but only after the system has actually rebooted. This is done in the following way (see the sketch after the list):

  • When we encounter a queued reboot action, we execute it and set its status to 'picked up' if it succeeds
  • When the ssh push worker runs again in the future, it updates the uptime of the system and cleans up past reboot actions with 'queued' or 'picked up' status (it will set their status to 'completed')
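
A hedged sketch of how the cleanup step can decide that a reboot really happened, assuming the worker compares the freshly read uptime against the time the reboot action was picked up (field names are illustrative):

    import time

    def cleanup_reboot_actions(reboot_actions, uptime_seconds):
        """Mark pending reboot actions completed once the system has rebooted.

        Illustrative sketch; 'picked_up_at' and 'status' are hypothetical fields.
        """
        boot_time = time.time() - uptime_seconds          # timestamp of the last boot
        for action in reboot_actions:
            if action.status in ('queued', 'picked up') and action.picked_up_at < boot_time:
                action.status = 'completed'               # system rebooted after pickup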

Note: The ssh push worker routine runs:

    • on every action execution
    • on every check-in
    • when a system has reboot actions in a 'picked up' state

Known limitations

  • SSH push via tunnel is not yet implemented
  • SSH push with SUSE Manager proxies is not yet implemented
  • Beacons don't work with salt-ssh, which means:
    • Installing a package on a system using zypper will NOT trigger a package list refresh
    • Virtual Host functions (e.g. mapping a host to its guests) will NOT work when the virtual host system is managed via Salt SSH