
Work In Progress

  • This is a work in progress; please feel free to volunteer to work on it or to help with it

Information

This is an effort to provide an efficient cluster-wide cache for Sakai. The idea is to make it easy to put data into a central cache so that it can easily be tuned for the entire server and the entire cluster.

Cluster-wide caching

  • Goals
    • Single point for all long lived caches in Sakai
    • Cache which works across multiple servers in the cluster
    • Handles expiration across the cluster automatically (but can be manually controlled)
    • Handles replication across the cluster automatically (but can be manually controlled)
    • Support Manual and Automatic discovery of other cluster nodes (configurable)
    • Don't use the database to store cache data (for efficiency and scalability)
    • Add tests to the memory service and profile the performance
    • Improve the cache access interfaces so there is a single point of access which is easy for developers to use and understand
  • Why do we need a cluster wide cache?
    • The advantages of a cluster wide cache are primarily a reduction in database access and the ability to scale up the app servers. For common actions, you can see the database activity on a table increase in direct correlation with the number of app servers. This is because each app server has to load the data into the local cache (if one is even being used) or simply load it each time it is requested. With a cluster wide cache, the data is loaded from the database only once and then sent out to the other nodes. Likewise, it is removed from all caches at once when it changes.
    • More info here: http://ehcache.sourceforge.net/documentation/distributed_design.html
    • It can also mean greater developer efficiency, since developers can depend on the cache to handle object retrieval instead of trying to optimize code and making it overly complex and harder to maintain. In general, it is preferable to take advantage of caching rather than attempting to optimize code in other ways, since any such optimization adds complexity and reduces flexibility (optimal code paths are hard to change).
  • Nice to have
    • Ability to easily wrap a caching interceptor around a service class (see the sketch at the end of this list)
    • Ability to easily control whether a cache is replicated, distributed, or just local
    • Tree cache/multiple reference expiration (see the section about this)
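    • As a rough illustration of the interceptor idea, here is a minimal sketch using Spring AOP (aopalliance interfaces) and ehcache. The class name and the naive key scheme are hypothetical; a real version would need a collision-safe key strategy:
         import java.io.Serializable;
         import java.util.Arrays;

         import net.sf.ehcache.Cache;
         import net.sf.ehcache.Element;

         import org.aopalliance.intercept.MethodInterceptor;
         import org.aopalliance.intercept.MethodInvocation;

         // Hypothetical sketch: caches the results of service method calls.
         public class CachingInterceptor implements MethodInterceptor {
            private final Cache cache; // a cache configured in ehcache.xml

            public CachingInterceptor(Cache cache) {
               this.cache = cache;
            }

            public Object invoke(MethodInvocation invocation) throws Throwable {
               // naive key: method name plus arguments (an assumption, not collision-safe)
               String key = invocation.getMethod().getName()
                     + Arrays.deepToString(invocation.getArguments());
               Element element = cache.get(key);
               if (element != null) {
                  return element.getObjectValue(); // cache hit
               }
               Object result = invocation.proceed(); // cache miss: call the service
               if (result instanceof Serializable) {
                  // only Serializable results can be replicated or spooled to disk
                  cache.put(new Element(key, (Serializable) result));
               }
               return result;
            }
         }
      Such an interceptor could be wired around a service bean with Spring's ProxyFactoryBean; whether that fits Sakai's component model still needs investigation.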

Notes about EhCache

  • Will have to move the ehcache jar into common/lib
    • The Tomcat and RMI classloaders do not get along that well. Move ehcache.jar to $TOMCAT_HOME/common/lib. This fixes the problem. This issue happens with anything that uses RMI, not just ehcache.
    • There are lots of causes of memory leaks on redeploy. Moving ehcache and backport-util-concurrent out of the WAR and into $TOMCAT/common/lib fixes this leak.
    • This also means commons-logging and commons-collections (if Terracotta is used) have to go into common/lib
  • Multicast and automatic discovery
    • Multicast Blocking
      The automatic peer discovery process relies on multicast. Multicast can be blocked by routers. Virtualisation technologies like Xen and VMWare may be blocking multicast; if so, enable it. You may also need to turn it on in the configuration for your network interface card.
      An easy way to tell if your multicast is getting through is to use the ehcache remote debugger and watch for the heartbeat packets to arrive.
    • Multicast Not Propagating Far Enough or Propagating Too Far
      You can control how far the multicast packets propagate by setting the badly misnamed time to live. Using the multicast IP protocol, the timeToLive value indicates the scope or range in which a packet may be forwarded. By convention:
      0 is restricted to the same host
      1 is restricted to the same subnet
      32 is restricted to the same site
      64 is restricted to the same region
      128 is restricted to the same continent
      255 is unrestricted
      The default value in Java is 1, which propagates to the same subnet. Change the timeToLive property to restrict or expand propagation.
    • RMICachePeer may fail to start if there are spaces in the tomcat path (remove all spaces from the tomcat path)
  • The default delivery mechanism for ehcache is RMI; this would seem to perform worse than some of the lighter-weight options like JXTA or JGroups, but this needs to be tested
    • Some users have reported that enabling distributed caching causes a full GC each minute. This is an issue with RMI generally, which can be worked around by increasing the interval for garbage collection. The effect RMI has is similar to a user application calling System.gc() each minute. The -XX:+DisableExplicitGC setting recommended below disables application calls to System.gc(), but it does not disable the full GC initiated by RMI.
      The default interval in JDK6 was increased to 1 hour. The following system properties control the interval (values are in milliseconds):
      -Dsun.rmi.dgc.client.gcInterval=60000
      -Dsun.rmi.dgc.server.gcInterval=60000
      See http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4403367 for the bug report and detailed instructions on workarounds.
  • Some recommended GC settings
    • -XX:+DisableExplicitGC - some libs call System.gc(). This is usually a bad idea and could explain some of what we saw.
    • -XX:+UseConcMarkSweepGC - use the low pause collector
    • -XX:NewSize set to 1/4 of the total heap size, combined with -XX:SurvivorRatio=16
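    • Taken together, a hypothetical combined example (the 2 GB heap, and therefore the 512m NewSize, are illustrative assumptions; the one hour RMI DGC interval matches the JDK6 default mentioned above):
      JAVA_OPTS="-Xms2g -Xmx2g -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:NewSize=512m -XX:SurvivorRatio=16 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000"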
  • Replication requires an object to be Serializable to replicate correctly
    • Non-serializable objects can use all parts of ehcache except the DiskStore and replication. If an element that is not Serializable is replicated or overflowed to disk, it is removed and a WARNING level log message is emitted.
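    • For example, a minimal sketch of a payload class that is safe to replicate (the class itself is hypothetical):
         import java.io.Serializable;

         // Implements Serializable so the element can be replicated and overflowed
         // to disk; any non-serializable fields would have to be marked transient.
         public class CachedUser implements Serializable {
            private static final long serialVersionUID = 1L;

            private final String userId;
            private final String displayName;

            public CachedUser(String userId, String displayName) {
               this.userId = userId;
               this.displayName = displayName;
            }

            public String getUserId() { return userId; }

            public String getDisplayName() { return displayName; }
         }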
  • Use TCP for reliability (instead of UDP)
  • Shutting down the cache?
    • If the JVM keeps running after you stop using ehcache, you should call CacheManager.getInstance().shutdown() so that the threads are stopped and cache memory is released back to the JVM. Calling shutdown also ensures that your persistent disk stores get written to disk in a consistent state and will be usable the next time they are needed.
    • If the CacheManager does not get shut down it should not be a problem: there is a shutdown hook which calls shutdown on JVM exit. It is still best practice to shut it down explicitly in your code.
      Shut down the singleton CacheManager with: CacheManager.getInstance().shutdown();
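    • In a webapp, one way to do this (a minimal sketch; the listener class is hypothetical and would need to be registered in web.xml) is a ServletContextListener:
         import javax.servlet.ServletContextEvent;
         import javax.servlet.ServletContextListener;

         import net.sf.ehcache.CacheManager;

         // Hypothetical sketch: shuts down the singleton CacheManager when the webapp stops.
         public class EhcacheShutdownListener implements ServletContextListener {
            public void contextInitialized(ServletContextEvent event) {
               // nothing to do on startup
            }

            public void contextDestroyed(ServletContextEvent event) {
               // stops cache threads and writes persistent disk stores in a consistent state
               CacheManager.getInstance().shutdown();
            }
         }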
  • Testing and checking the cache
    • Section 9 of the manual explains methods for measuring cache efficiency
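    • As a starting point, hit and miss counts can also be read programmatically. A minimal sketch, assuming the getStatistics() API available in ehcache 1.3 and later:
         import net.sf.ehcache.Cache;
         import net.sf.ehcache.Statistics;

         // Sketch: compute the hit ratio of a cache from its ehcache statistics.
         public class CacheEfficiency {
            public static double hitRatio(Cache cache) {
               Statistics stats = cache.getStatistics();
               long hits = stats.getCacheHits();
               long misses = stats.getCacheMisses();
               long total = hits + misses;
               return total == 0 ? 0.0 : (double) hits / total;
            }
         }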
  • Upgrade to version 1.4.0 of ehcache (many advantages to using the newest version)
         <dependency>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
            <version>1.4.0</version>
         </dependency>
    
  • Testing cache replication
    ehcache-1.x-remote-debugger.jar can be used to debug replicated cache operations. It is included in the distribution tarball for ehcache-1.2.3 and higher. It is invoked using:
    java -jar ehcache-1.x-remote-debugger.jar path_to_ehcache.xml cacheToMonitor
    
    It will print the configuration of the cache, including replication settings, and monitor the number of elements in the cache. If you are not seeing replication in your application, run this tool to see what is going on.
    It is a command line application, so it can easily be run from a terminal session.

Sample ehcache settings

  • Sample settings
       <!-- Sample Distributed cache settings -->
       <!-- automatic discovery using tcp multicast -->
       <cacheManagerPeerProviderFactory
          class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
          properties="peerDiscovery=automatic, 
             multicastGroupAddress=230.0.0.1,
             multicastGroupPort=4446, 
             timeToLive=32" />
    
       <!-- manual discovery (you must customize this to include every node in the cluster) -->
    <!--   <cacheManagerPeerProviderFactory
          class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
          properties="peerDiscovery=manual,
          rmiUrls=//server2:40001/sampleCache11|//server2:40001/sampleCache12" />
    -->
    
       <!-- this will listen for the messages from peers  -->
       <cacheManagerPeerListenerFactory
          class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
          properties="port=40001" />
    
  • Sample cache settings
       <!-- Sample cache settings -->
       <cache name="sampleCache2"
             maxElementsInMemory="10"
             eternal="false"
             timeToIdleSeconds="100"
             timeToLiveSeconds="100"
             overflowToDisk="false">
          <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
             properties="replicateAsynchronously=true, 
                replicatePuts=true, 
                replicateUpdates=true, 
                replicateUpdatesViaCopy=false, 
                replicateRemovals=true "/>
       </cache>
    
    • Definition of configuration
      replicatePuts=true | false - whether new elements placed in a cache are replicated to others. Defaults to true.
      replicateUpdates=true | false - whether new elements which override an element already existing with the same key are replicated. Defaults to true.
      replicateRemovals=true | false - whether element removals are replicated. Defaults to true.
      replicateAsynchronously=true | false - whether replications are asynchronous (true) or synchronous (false). Defaults to true.
      replicateUpdatesViaCopy=true | false - whether the new elements are copied to other caches (true), or whether a remove message is sent. Defaults to true.
    • Leaving a property out causes it to default to true
  • Hibernate cache settings (recommended)
       <!-- this cache tracks the timestamps of the most recent updates to particular tables. 
         It is important that the cache timeout of the underlying cache implementation be set to a 
         higher value than the timeouts of any of the query caches. In fact, it is recommended that 
          the underlying cache not be configured for expiry at all. -->
       <cache name="org.hibernate.cache.UpdateTimestampsCache"
          maxElementsInMemory="6000"
          eternal="true"
          overflowToDisk="false" />
    
        <!-- this cache stores the results of queries run by hibernate -->
        <cache name="org.hibernate.cache.StandardQueryCache"
          maxElementsInMemory="12000"
          eternal="false"
          timeToIdleSeconds="300"
          timeToLiveSeconds="600"
          overflowToDisk="false" />
    
       <!-- this cache holds retrieved data about users -->
       <cache name="org.sakaiproject.user.api.UserDirectoryService"
             maxElementsInMemory="25000"
             eternal="false"
             timeToIdleSeconds="7200"
             timeToLiveSeconds="7200"
             memoryStoreEvictionPolicy="LFU"
             overflowToDisk="true"
             diskSpoolBufferSizeMB="30"
             maxElementsOnDisk="250000"
             diskPersistent="false"
             diskExpiryThreadIntervalSeconds="120">
       </cache>
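    • Note that these caches are only used if Hibernate's second-level and query caches are switched on. The property names below are from Hibernate 3; the provider class shown (the one bundled with Hibernate) is an assumption about how the session factory is wired in Sakai:
         hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider
         hibernate.cache.use_second_level_cache=true
         hibernate.cache.use_query_cache=true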
    

Sakai Memory Service (memory) changes and notes

  • Move ehcache jar into common/lib
    • Requires creating a new deployer in memory and adding in the module to the project base POM
    • Also requires removing the old deployer from db/shared-deployer
    • Also requires putting backport-util-concurrent into common/lib
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Changed memory service from abstract to normal class
  • Move the ehcache.xml file into api/src/java/ehcache.xml
    • Would be good to have one in sakai_home override this default one (might be a pipe dream though)
  • Remove dependency on EventTrackingService
    • This is because we want to get rid of event based cache invalidation
  • Remove use of ComponentManager static (makes testing impossible since we cannot simulate the entire CM)
    • Replaced this with use of application context to attempt to load cache beans by name
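      Roughly, the idea looks like this (a sketch only; the class and method names are illustrative, not the actual branch code):
         import net.sf.ehcache.Cache;

         import org.springframework.beans.BeansException;
         import org.springframework.context.ApplicationContext;
         import org.springframework.context.ApplicationContextAware;

         // Illustrative sketch: the memory service receives the application context
         // from Spring instead of using the ComponentManager static cover, then
         // tries to find a preconfigured cache bean by name.
         public class MemoryServiceSketch implements ApplicationContextAware {
            private ApplicationContext applicationContext;

            public void setApplicationContext(ApplicationContext applicationContext)
                  throws BeansException {
               this.applicationContext = applicationContext;
            }

            protected Cache findCacheBean(String cacheName) {
               if (applicationContext.containsBean(cacheName)) {
                  return (Cache) applicationContext.getBean(cacheName);
               }
               return null; // caller falls back to creating a default cache
            }
         }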
  • Switch from using lookup-method to setter injection
    • This revealed an issue with a circular dependency which was handled using spring lazy init
    • This also allows better ability to run tests
  • Remove the explicit garbage collection
    • Removing explicit GC calls is recommended for ehcache; here is the code being removed:
      	// run the garbage collector now
      	System.runFinalization();
      	System.gc();
      
  • Remove the use of multirefcache (only used in security service currently)
    • Deprecate the method for making a multirefcache: newMultiRefCache(String cacheName)
    • Make the methods that build the MRC log a deprecation warning
    • Fix the security service to simply invalidate its own entries (invalidation will propagate)
    • Then destroy the MRC and all related methods from memory service
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Fix the CacheRefresher so it works outside of MRC
    • The refresh method of the refresher is no longer called unless the multi-ref cache is being used (through getPayload() in the MemCache inner class). If we want to continue supporting the refresher outside MRC, it needs to be called in the various methods of MemCache as well
    • The code has now been updated to support this, roughly as sketched below
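      A minimal sketch of the approach (illustrative only, not the actual branch code; the refresh() signature is assumed from the legacy Sakai API):
         import java.io.Serializable;

         import net.sf.ehcache.Cache;
         import net.sf.ehcache.Element;

         import org.sakaiproject.memory.api.CacheRefresher;

         // Illustrative sketch: a MemCache-style get() that consults the
         // CacheRefresher on a miss, so the refresher works without the MRC.
         public class RefreshingCache {
            private final Cache cache;
            private final CacheRefresher refresher;

            public RefreshingCache(Cache cache, CacheRefresher refresher) {
               this.cache = cache;
               this.refresher = refresher;
            }

            public Object get(String key) {
               Element element = cache.get(key);
               if (element != null) {
                  return element.getObjectValue(); // cache hit
               }
               if (refresher != null) {
                  // assumed legacy signature: refresh(key, oldValue, event)
                  Object value = refresher.refresh(key, null, null);
                  if (value instanceof Serializable) {
                     cache.put(new Element(key, (Serializable) value));
                  }
                  return value;
               }
               return null;
            }
         }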
  • Switch all keys over from Object to String (Ian Boston suggestion)
    • This requires changes (replace Object with String) to the memory interfaces and also changes to the NotificationCache (event-impl) and SiteCacheImpl (site-impl)
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Deprecate the use of the "pattern" argument
    • This was only used to filter out event messages but we are getting rid of event based cache cleanup so this is not needed
  • Fix up API documentation so it all makes sense and is more accurate and understandable
    • Will be running this by a junior developer to ensure it is clear
  • Questions
    • What is org.sakaiproject.memory.MemoryService.mref_map for?
      • This is the secondary cache for keeping track of all the references related to the multiref cache
    • What is org.hibernate.cache.UpdateTimestampsCache doing?
      • This is one of the hibernate caches, recommend smaller default settings (from the ehcache and hibernate docs)
    • What is org.hibernate.cache.StandardQueryCache
      • This is the primary hibernate cache, recommend larger default settings (from the ehcache and hibernate docs)
    • Why is there a separate cache for the UDS? (org.sakaiproject.user.api.UserDirectoryService)

TODO

  • Test out the cache using cluster wide invalidation and also cluster wide replication using RMI
  • Setup tests to profile and unit test where appropriate
    • (done) Write unit tests to check the operation of the current APIs and validate the contract
    • Write runtime tests to get a baseline testing of the memory service working
    • Switch on distributed caching
    • Execute tests using a single node
    • Execute tests using two nodes
    • Get someone else to run the tests
  • Look at the possibility of replacing the RMI version with JGroups or some other centralized method which lets us control the server discovery or server definitions
    • This might wait until later because it may be really really hard
  • Run this by a junior developer to ensure it is clearly documented
  • Merge the branch into the trunk
  • NOTE: Record of test runs is available

Multiple reference and tree caching addition

  • Ian B. has added some APIs and implementations to the branch to support multiple reference caching
    • Here are his notes about it which explain pretty clearly what it does
      It is a simplified multiple reference cache.
      Objects in the cache can have either forward or reverse dependencies.
      When an item is added, its dependencies are recorded; when an item is removed, the dependencies are removed.
      When an item is re-put, the dependencies are removed, the object is updated, and the dependencies are updated.
      Any cluster implementation of this interface will automatically perform cache operations over the whole cluster; the consuming service should not have to (and should not) perform any cluster-wide invalidation, concerning itself only with its own invalidations.
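    • To make the contract concrete, here is a purely hypothetical illustration of what such an interface might look like (this is not the actual API in the branch; method names and signatures are invented for clarity):
         // Hypothetical illustration of the multiple reference cache contract
         // described in the notes above.
         public interface MultipleReferenceCacheSketch {

            /** Put a value in the cache, recording the keys this entry depends on. */
            void put(String key, Object payload, String[] dependsOn);

            /** Get a cached value, or null if it is missing or was invalidated. */
            Object get(String key);

            /** Remove an entry; entries that depend on it are removed as well. */
            void remove(String key);
         }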
  • The files that were added are as follows
    • APIS
      • org.sakaiproject.memory.api.MultipleReferenceCache.java
      • Some exception classes
    • IMPLS
      • org.sakaiproject.memory.impl.ObjectDependsOnOthersCache.java
      • org.sakaiproject.memory.impl.ObjectIsDependedOnByOthersCache.java
      • net.sf.ehcache.distribution.* (fork of some parts of ehcache)
      • org.sakaiproject.memory.impl.util.DependentPayload.java (util class)
  • This has not added any dependencies or spring beans
  • The forked ehcache code is there to allow the replication to be turned off temporarily and then turned back on