SUSE Linux Enterprise Server

From MicroFocusInternationalWiki
Revision as of 20:45, 21 December 2007 by Ubrueck (Talk | contribs)


Welcome to the SUSE Linux Enterprise Server Wiki!

As already mentioned on the wiki main page, please feel free to join in. You can read anything in here without logging in, but if you feel like commenting on something, or starting a new topic, you'll need to use a Novell Login account (which you'll be prompted to create if you don't already have one). If you are unfamiliar with wikis in general, please visit the Novell Wiki or the granddaddy of wiki info sites, www.wiki.org, for some background.

Several areas of the Novell webspace provide information regarding the SUSE LINUX Enterprise Server (commonly abbreviated as SLES).

General SLES Information

Some general information on SLES is available at the Novell and Linux page.

SLES 10

SLES 10 has been released and can be downloaded here:

New with SLES 10 is the Novell Customer Center. See Getting to know the Novell Customer Center.

SLES 10 Service Packs

Service Pack 1 (SP1) was released in June 2007. See Novell Outlines Enhancements in SUSE Linux Enterprise 10 Service Pack 1 and the SUSE Linux Enterprise Desktop 10 SP1 Release Notes.

Linux Data Management and High Availability Features

High Availability Storage Infrastructure

The High Availability Storage Infrastructure is a combination of several Open Source projects, plus Novell-donated source code, integration-tested and delivered in SLES 10. It allows for the creation of a server and storage infrastructure that enables service-level availability by running individual services in cluster-managed, relocatable virtual machines.

Documentation:

  • LinuxWorld demo: a four-node cluster hosting two virtual machines.

  • Exploring the High Availability Storage Infrastructure: English Edition - Japanese Edition (many thanks to Takekazu Okamoto for the translation!)

    • Known issues regarding this document
      • Initial node2 NTP time grab might fail (leo@kangaroot.net, jdebaer@novell.com)
        • In the document we ask you to bring node1 up after node2, because node2 is running the iSCSI target. (Re)booting the nodes in this order makes sure that node1 finds the iSCSI target on node2. As a result, however, node2 (the NTP client) cannot grab its initial time from node1 (the NTP server). Normally this should not be a problem, since a mere reboot of both systems should not cause a large time drift. However, if the hardware clock of node2 is inaccurate, node2 can come up with a time drift. When this drift is too big, the STONITH operation will fail when you perform the final test of killing Heartbeat on node1. The solution is to run "/etc/init.d/ntp restart" on node2 right before killing Heartbeat on node1. In "Conquering the High Availability Storage Infrastructure" we will probably swap client and server to resolve this potential issue.
      • Feeding two order constraints into the CIB with one XML file only succeeds for one order constraint
        • In the document, in the section "Ordering resource startup in the cluster", we ask you to load an XML file called "vm1orderconstraints.xml" into the CIB. This file contains two order constraints: one from the virtual machine to the imagestore, and one from the virtual machine to the configstore. When you run the command as indicated in the document, it is possible that only the first constraint is loaded into the CIB. To check this, run "cibadmin -Ql" after loading the file, inspect the output, and verify that both constraints were added to the CIB. If they were not, copy vm1orderconstraints.xml to a second file, e.g. vm1orderconstraints_workaround.xml, remove the first line of that file, and load this second file into the CIB with the command "cibadmin -C -o constraints -x ./vm1orderconstraints_workaround.xml". The goal is to get the second constraint loaded into the CIB. Only then should you continue the procedure as described in the document.
        • You can also create one constraints file that adds both as follows:
<constraints>
  <rsc_order id="vm1orderconstraints-01" from="vm1" to="imagestorecloneset"/>
  <rsc_order id="vm1orderconstraints-02" from="vm1" to="configstorecloneset"/>
</constraints>
      • When running version 2.0.7 of Heartbeat 2
        • If you have updated your Heartbeat 2 software with the patch that Novell released on August 21, then you might encounter some changes in behaviour. See http://support.novell.com/techcenter/psdb/898d4f4e7a472004cbc68e65f5a941a0.html for general information on the patch.
          • Configuring the demo cluster resource requires netmask and broadcast address
            • When you configure the demo cluster resource, i.e. the IP address 192.168.0.3, either do not specify the nic parameter (contrary to what is described in the document), or specify the nic parameter together with the netmask parameter (255.255.255.0) and the broadcast address parameter (192.168.0.255) for the demo cluster resource. Deviating from this path might cause the resource to fail to start.
  • Conquering the High Availability Storage Infrastructure: to be released.
    • First preview release (November 15, 2006): download here
      • This preview release shows how to integrate an EVMS2 shared cluster container into the High Availability Storage Infrastructure.
      • Disclaimer: as explained at the end of the preview chapter, performing the steps described will not lead to a _persistent_ EVMS integration across cluster reboots. The final version of the document will show how to do that.
    • Updated BrainShare TUT323 presentation: download here
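The constraint-loading check described in the known issues above can be sketched as a small shell helper. On a live cluster you would count the entries in the CIB itself via "cibadmin -Ql"; the grep-based counter and the /tmp file below are illustrative assumptions used so the sketch is self-contained:

```shell
#!/bin/sh
# count_rsc_order FILE - count the <rsc_order> constraints in a CIB XML
# fragment. On a running cluster you would check the live CIB instead:
#   cibadmin -Ql | grep -c '<rsc_order'
count_rsc_order() {
    grep -c '<rsc_order' "$1"
}

# Recreate the two-constraint file from the document for demonstration.
cat > /tmp/vm1orderconstraints.xml <<'EOF'
<constraints>
  <rsc_order id="vm1orderconstraints-01" from="vm1" to="imagestorecloneset"/>
  <rsc_order id="vm1orderconstraints-02" from="vm1" to="configstorecloneset"/>
</constraints>
EOF

if [ "$(count_rsc_order /tmp/vm1orderconstraints.xml)" -eq 2 ]; then
    echo "both order constraints present"
else
    echo "a constraint is missing - apply the workaround from the document" >&2
fi
```

If the count against the live CIB comes back as 1, apply the workaround from the document: copy the file, remove its first line, and load the copy with "cibadmin -C -o constraints -x".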

Add-ons

Virtualization using SLES10

Advanced Patch Management with ZLM 7.2

If about 250 lines of bash code don't scare you away, take a look at the document below. It describes how you can leverage ZENworks Linux Management 7.2 for advanced management of SLES 10 ZYPP patches by loading the patches with a script. The document contains the code for the script in Appendix A. Bug fixes for the script can be sent to jdebaer at novell dot com.

Updates to the script and the document will be posted here.

  • Advanced Patching of SUSE Linux Enterprise Server 10 with ZLM 7.2 (version 0.2): download here
  • Example usage: download here
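As a rough idea of what such a loader script does, the sketch below enumerates patch RPMs and hands each one to a loader function. The directory layout and the zlm_load_patch helper are purely illustrative assumptions; the real ~250-line script in Appendix A of the document is the authoritative version:

```shell
#!/bin/bash
# Illustrative sketch only. zlm_load_patch is a hypothetical placeholder
# for whatever ZENworks Linux Management invocation the real script wraps.

zlm_load_patch() {
    # Placeholder: log (to stderr) what would be handed to ZLM.
    echo "would load: $1" >&2
}

# load_patches DIR - feed every patch RPM under DIR to the loader and
# print how many were processed.
load_patches() {
    local dir=$1 loaded=0 patch
    for patch in "$dir"/*.rpm; do
        [ -e "$patch" ] || continue   # glob matched nothing
        zlm_load_patch "$patch" && loaded=$((loaded + 1))
    done
    echo "$loaded"
}

# Demo with two dummy patch files.
mkdir -p /tmp/zlm-demo
touch /tmp/zlm-demo/patch-1.rpm /tmp/zlm-demo/patch-2.rpm
echo "$(load_patches /tmp/zlm-demo 2>/dev/null) patch(es) processed"
```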

SLES 9

The current version of SLES 9 is SLES 9 SP4.

SLES 9 is available for a number of architectures and contains numerous packages (see the package list).

For a current installation, both the general release CDs and the current service pack CDs are needed. There are no overlay images available for SLES9.

Download SLES 9

Links to the unpatched ISOs can be found on the SLES 9 Download page.

SLES 9 Support Packs

Links to the support pack ISOs can also be found on the SLES 9 Download page.

Add-ons

Articles

  • Securing SLES9 - Details some very basic actions you can take to make SLES9 even more secure.
  • Deploying SUSE Linux using autoyast - Whitepaper about deploying blades, covering creating installation sources, an example autoyast file, partitioning, LVM, YOU, Nagios, LDAP, and SAN.
  • DNS Migration - Simplified guide for migrating DNS from NetWare to Linux

Roberts Quick References

  • A place to recapture your previously gained Linux knowledge
  • or learn about a plan for starting from scratch and ending up as a Linux Professional or Engineer

See: Roberts Quick References

How to Build a Linux Business Case using Linux Consulting

  • See what fast Linux enablement tracks Novell Consulting has to offer: Linux Consulting

SLES 8

SLES 8 is an older version of SLES that is still supported.

Support for SLES

There are a number of support options for SLES.

Certifications and hardware compatibility

Related products