SUSE Manager/SaltSSHServerPush

From MicroFocusInternationalWiki
Revision as of 17:26, 8 December 2016 by Fkobzik (Talk | contribs)



Push via Salt SSH

Note: This page is not about Push via SSH for traditional clients. For that feature, see SUSE_Manager/SSHServerPush.
Note: This feature is still work in progress, and this page reflects its current state.

Salt provides the Salt SSH feature [1] for managing clients from a server. It works without installing any Salt-related software on the clients, so there is no need to have minions connected to the Salt master. In other words, the goal of the feature is to provide functionality similar to the traditional Push via SSH feature mentioned above.

This feature allows:

  • managing Salt entitled systems with the Push via SSH contact method using Salt SSH. This is only partially supported at the moment (only registering a basic system profile is supported; almost no actions can be performed on such a system).
  • bootstrapping such systems.

To bootstrap a Salt SSH system, go to the "Bootstrapping" page in the Web UI (Salt -> Bootstrapping). Fill out the required fields, check the "Manage system completely via SSH" option, and confirm by clicking the "Bootstrap" button. After this the system is bootstrapped, registered in SUSE Manager, and appears in the "Systems" list.

[Screenshot: the Bootstrapping page with the "Manage system completely via SSH" checkbox]

Note: This checkbox is hidden from the Web UI in the current code.


There are two kinds of parameters for Salt SSH:

  • Bootstrap-time parameters - configured on the Bootstrapping page:
    • Host
    • Activation keys
    • Password (used only for bootstrapping and not saved anywhere; all future ssh sessions are authorized via a key/cert pair)
  • Persistent parameters - these are configured SUMA-wide:

Requirements on the remote system:

  • an ssh daemon must be running and reachable by the *salt-api* daemon (typically running on the SUSE Manager server)
  • python must be installed (in a version supported by the installed Salt; currently python 2.6)
Note: Old RHEL/CentOS versions (<= 5) are not supported because they do not ship python 2.6 by default.
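Under the hood, salt-ssh resolves its targets through a roster. As a sketch of the mechanism only (SUSE Manager manages its own roster internally; the host name and key path below are illustrative assumptions), a minimal roster entry looks like:

```yaml
# Illustrative salt-ssh roster entry -- NOT SUSE Manager's actual roster file.
client1.example.com:
  host: client1.example.com
  user: root
  priv: /path/to/ssh/private_key   # key pair used for all sessions after bootstrap
```

The password entered on the Bootstrapping page is used once to deploy this key pair; afterwards only the key is needed, which is why the password is never stored.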

Action execution

The feature uses a taskomatic job to execute scheduled actions via salt-ssh. This job periodically checks for scheduled actions and executes them. The difference from SSH push on traditional clients: there, the taskomatic job simply executed `rhn_check` on the client via ssh, whereas the salt-ssh push job executes a full-blown salt-ssh call based on the scheduled action.

Action execution progress

The machinery for running actions is partially shared with the traditional ssh push. The only difference is how a single minion is handled. The process is as follows:

  • Pending actions for the minion are retrieved from the DB (sorted by date); for each action we check:
    • If the action is already completed or failed -> skip it
    • If a prerequisite action is still queued for the minion -> skip it
    • If a prerequisite action has failed for the minion -> fail this action too
    • If the number of remaining tries is < 1 -> fail this action
    • Otherwise:
      • Decrease the number of remaining tries for the action
      • Execute the action using salt-ssh and check the result:
        • If salt returns an empty result or throws an exception, log a warning (the action will be retried in the next job run)
        • If salt returns a valid result, update the action according to it
  • If any of the executed actions required a package list refresh (determined from the result), execute a package list refresh action
  • If no action was executed, perform a check-in for the system
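The per-minion loop above can be sketched in plain Python. This is a minimal model of the described logic, not the actual SUSE Manager code; the class and function names (`Action`, `process_pending_actions`, the `execute`/`refresh_package_list`/`check_in` callbacks) are assumptions for illustration, and the real worker shells out to salt-ssh where `execute` is called here:

```python
# Sketch of the per-minion action loop described above (names are illustrative,
# not the real SUSE Manager code; `execute` stands in for the salt-ssh call).
from dataclasses import dataclass
from typing import Optional

QUEUED, COMPLETED, FAILED = "queued", "completed", "failed"

@dataclass
class Action:
    id: int
    status: str = QUEUED
    remaining_tries: int = 3
    prerequisite: Optional["Action"] = None
    needs_pkg_refresh: bool = False

def process_pending_actions(actions, execute, refresh_package_list, check_in):
    """Walk the minion's pending actions (assumed sorted by date)."""
    executed_any = False
    refresh_needed = False
    for action in actions:
        if action.status in (COMPLETED, FAILED):
            continue                        # already finished -> skip
        pre = action.prerequisite
        if pre is not None and pre.status == QUEUED:
            continue                        # prerequisite still queued -> skip for now
        if pre is not None and pre.status == FAILED:
            action.status = FAILED          # failed prerequisite -> fail this one too
            continue
        if action.remaining_tries < 1:
            action.status = FAILED          # out of retries -> fail
            continue
        action.remaining_tries -= 1
        result = execute(action)            # the real code runs salt-ssh here
        if result is None:
            continue                        # empty result/exception: retry next job run
        action.status = result              # valid result -> update the action from it
        executed_any = True
        refresh_needed = refresh_needed or action.needs_pkg_refresh
    if refresh_needed:
        refresh_package_list()              # some action changed installed packages
    if not executed_any:
        check_in()                          # nothing ran -> just check the system in
```

Note that a prerequisite completed earlier in the same run unblocks its dependent action immediately, since actions are processed in date order.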

Reboot action handling

Reboot actions require special treatment: we don't want to flag them as 'completed' when the reboot command succeeds, but only after the system has actually rebooted. This is done in the following way:

  • When a queued reboot action is encountered, it is executed and its status is set to 'picked up'
  • The ssh push worker then checks the system uptime; this check happens:
    • on every action execution
    • on every check-in
    • when the system has reboot actions in the 'picked up' state

After the check, the recorded system uptime is updated and old reboot actions are cleaned up (a current uptime lower than the previously recorded one indicates that the reboot actually happened).
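The uptime check can be sketched as follows. This is a hypothetical model of the mechanism, not the actual worker code; the dictionary layout and the function name `handle_uptime` are assumptions for illustration:

```python
# Sketch of the reboot-completion check described above (data layout and
# names are illustrative assumptions, not the real ssh push worker code).
def handle_uptime(system, current_uptime):
    """Mark picked-up reboot actions completed once the system rebooted.

    A current uptime lower than the last recorded one means the machine
    went down in between, so any reboot action that was already
    'picked up' is considered done.
    """
    rebooted = current_uptime < system["last_uptime"]
    if rebooted:
        for action in system["reboot_actions"]:
            if action["status"] == "picked_up":
                action["status"] = "completed"   # the reboot actually happened
    system["last_uptime"] = current_uptime       # record for the next check
    return rebooted
```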

Known limitations

  • ssh push via tunnel is not yet implemented