Client caching

From MicroFocusInternationalWiki
Revision as of 13:13, 30 January 2009 by Riddler (Talk | contribs)


Client caching explained

This article is still a work in progress. Currently, it is just a copy and paste from the support forums in the following thread:

A little more explanation may be needed to fully understand the concept of client caching and oplocks.

It is important to understand that client caching and oplocks are not the same thing: oplocks are a mechanism to improve client caching.

Client caching

Similar to disk caching, client caching keeps (parts of) files in the local cache memory of the workstation in order to improve access speed by reducing the number of times the workstation has to access the server. This can consist of both read and write caching. In the case of read caching, data that is read from the server is kept in the cache, and if the same data is read again, it is taken from the cache instead of from the server. In the case of write caching, applications that write to the server actually just write to the cache and get an OK; the application continues, and the client writes the data to the server in the background.

The main problem with client caching is that, unlike disk caching, where the disk generally belongs to just one machine, files on the server can potentially be accessed and modified by different clients at the same time. This can of course create havoc in combination with caching. Just imagine a workstation has an old copy of a file in its cache, but another workstation has modified the file on the server in the meantime without the first workstation knowing. To avoid this kind of situation, the client has to make sure it only caches files, or parts of files, where it knows that it has exclusive access. So by default, client caching would only work with files to which you have exclusive access.

For most cases, however, this restriction does not match reality. Most applications open files in a mode that still allows other clients to access the same file, so all these applications would never benefit from client caching. This is where oplocks come in.
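The read-caching idea and the need to purge the cache when exclusivity is lost can be sketched as follows. This is an illustration only, not actual client code; all names (`ClientReadCache`, `fetch_from_server`, the file name) are invented for the example.

```python
# Minimal sketch of a client-side read cache. The key point: a cached copy
# is only safe while the client knows no one else has changed the file;
# otherwise the cache must be purged (this is what an oplock break triggers).

class ClientReadCache:
    def __init__(self, fetch_from_server):
        self.fetch = fetch_from_server   # callable simulating a server read
        self.cache = {}                  # path -> cached file data
        self.server_reads = 0            # counts real server round trips

    def read(self, path):
        if path not in self.cache:       # cache miss: go to the server
            self.cache[path] = self.fetch(path)
            self.server_reads += 1
        return self.cache[path]          # cache hit: no server access

    def purge(self, path):
        # Called when exclusivity is lost (e.g. an oplock break):
        # drop the possibly stale copy so the next read hits the server.
        self.cache.pop(path, None)

server = {"report.txt": b"v1"}           # stands in for the file server
c = ClientReadCache(lambda p: server[p])

c.read("report.txt")
c.read("report.txt")
assert c.server_reads == 1               # second read came from the cache

server["report.txt"] = b"v2"             # another workstation modifies the file
c.purge("report.txt")                    # without a purge, c would return b"v1"
assert c.read("report.txt") == b"v2"     # fresh data fetched from the server
```

Without the `purge` step, the first workstation would keep serving the stale `b"v1"` from its cache, which is exactly the inconsistency described above.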

oplocks (opportunistic locks / opportunistic locking)

To regulate concurrent access to files, Windows and NetWare support the concept of locks. There are file locks, which regulate access to the file as a whole, and record locks, which regulate access to certain parts of a file. When opening a file, an application tells the server whether it wants exclusive access to the file or whether it still allows read access, write access, or both to other applications (or to the same application running on a different workstation). If a certain type of access is denied and another application tries to open the file requesting that access, the file open will simply fail. The situation is similar for record locks: an application can at any time request certain parts of a file to be locked and deny certain types of access to other applications. Once again, when other applications then attempt conflicting access to the locked parts of the file, they get an error.

These are the "normal" locks requested by applications. If a certain part of a file is exclusively locked, then caching of that part of the file is allowed. Now the trick is that for files not locked by the application, the client will nevertheless lock the file for exclusive access. However, when requesting the lock, the client tells the server that it is a lock with callback: if another application wants to access the file, the server will not immediately deny the access, but will instead send a message to the client that holds the lock and ask it to release the lock. That client can then purge its cache (in case it had cached the file) and free the lock, so that the other application can access the file with no conflict and no risk of inconsistency due to the file being cached somewhere else. This is what Novell calls a "level I oplock": an oplock on the file as a whole.

If you apply the same idea of the client locking just parts of a file in order to cache them, you have what Novell calls a "level II oplock".
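Oplocks themselves are a NetWare/Windows protocol feature, but the "normal" record (byte-range) locks described above also exist on POSIX systems via `fcntl`, which makes the conflict behavior easy to demonstrate. The following is an illustrative sketch, not NetWare client code: a parent process locks the first five bytes of a file, and a second process attempting a conflicting lock on the same region is refused.

```python
# Demonstrates a "normal" record lock: a conflicting byte-range lock
# request from another process fails instead of succeeding.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

# Parent takes an exclusive lock on bytes 0-4 (offset 0, length 5,
# measured from the start of the file).
fcntl.lockf(fd, fcntl.LOCK_EX, 5, 0)

pid = os.fork()
if pid == 0:                              # child: plays the "other application"
    try:
        # Non-blocking attempt to lock the same region. Traditional POSIX
        # record locks are per-process, so this conflicts with the parent.
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 5, 0)
        os._exit(1)                       # unexpectedly succeeded
    except OSError:
        os._exit(0)                       # expected: lock request refused

status = os.waitpid(pid, 0)[1]
child_denied = (os.WEXITSTATUS(status) == 0)
assert child_denied                       # the conflicting lock was refused

os.close(fd)
os.unlink(path)
```

The difference with an oplock is precisely the callback: instead of the second request failing outright as here, the server would ask the first client to purge its cache and release the lock, and then grant the access.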

Other features to enhance client performance

Close behind

UNC path filter

Bad address cache

Bad name cache

Persistent server address cache

Bad server list

How things are configured

What, then, is the influence of the various parameters on client caching and oplocks?

1) When you disable client caching in the client settings, you disable client caching completely, and because there is no client caching, the client no longer needs to request oplocks from the server. It seems, however, that this parameter does not influence the caching of files opened with exclusive access.

2) The parameter "set client caching enabled = on/off" enables or disables level I oplocks.

3) The parameter "set level 2 oplocks enabled = on/off" enables or disables level II oplocks.

4) A registry setting controls the caching of files opened with exclusive access. See the following TID for a discussion of that registry setting:

You can selectively enable or disable just one type of oplock. In fact, in cases of database corruption in combination with oplocks, it is often just one type of oplock that is problematic. Also note that if you disable both level I and level II oplocks on the server, you have not completely disabled client caching: caching of files opened with exclusive access may still occur.
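As an example of such selective disabling, the two server parameters quoted above could be combined to turn off only level II oplocks while leaving level I oplocks active (parameter syntax as given in points 2 and 3; verify the exact names on your server version):

```
set client caching enabled = on
set level 2 oplocks enabled = off
```

With this combination, whole-file (level I) oplocks still allow caching, while the part-of-file (level II) oplocks suspected in a corruption problem are disabled.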