Sakai Terracotta distributed caching setup

This page explains how to configure Sakai to use a Terracotta server for distributed caching. It requires Sakai 10.0 or later.


Terracotta server setup

  1. Download Terracotta 3.7.6 (it works with the older version of Ehcache, currently 2.6.6, used in Sakai).
  2. Get a license for the free version.
  3. Untar the Terracotta archive; the resulting directory is referred to below as {TERRACOTTA_DIR}.
  4. Place the license key file in the {TERRACOTTA_DIR} directory.
  5. To start Terracotta (see the multi-server sketch after this list):
    • {TERRACOTTA_DIR}/bin/start-tc-server.sh -n <NAME_OF_SERVER>
  6. (OPTIONAL) To start the dev console monitor:
    • {TERRACOTTA_DIR}/bin/dev-console.sh
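In a cluster you will typically run more than one Terracotta server. The sketch below shows one way to start a two-server setup; the shared tc-config.xml path and the server names (server1, server2) are illustrative assumptions, not values from this page:

    # Start each Terracotta server with a shared config that declares all
    # servers in the cluster; -n selects which <server> entry this instance is.
    # On the first cache host:
    {TERRACOTTA_DIR}/bin/start-tc-server.sh -f /path/to/tc-config.xml -n server1
    # On the second cache host:
    {TERRACOTTA_DIR}/bin/start-tc-server.sh -f /path/to/tc-config.xml -n server2

The -f flag points start-tc-server.sh at the tc-config.xml describing the cluster; without it the server falls back to a default standalone configuration.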

Terracotta-specific Sakai setup

sakai.properties
# CLUSTER CACHING (KNL-1184)
# WARNING: this requires an external distributed caching server
# Enable distributed caching
# DEFAULT: false
#memory.cluster.enabled=true

# The URLs of the distributed cache servers
#memory.cluster.server.urls.count=2
#memory.cluster.server.urls.1={CACHE_SERVER_URL_1}:9510
#memory.cluster.server.urls.2={CACHE_SERVER_URL_2}:9511

## The caches that will use the distributed cache.
## The only ones known to be safe are listed:
## EVENTS clustering: org.sakaiproject.event.impl.ClusterEventTracking.eventsCache, org.sakaiproject.event.impl.ClusterEventTracking.eventLastCache
## (events caching shown below as example)
## Events caching uses the distributed cache to propagate events rather than reading them from the database; events are still stored in the database
## SESSION failover: org.sakaiproject.tool.impl.RebuildBreakdownService.cache
## PERFORMANCE: org.sakaiproject.authz.impl.DbAuthzGroupService.realmRoleGroupCache
#memory.cluster.names.count=2
#memory.cluster.names.1=org.sakaiproject.event.impl.ClusterEventTracking.eventsCache
#memory.cluster.names.2=org.sakaiproject.event.impl.ClusterEventTracking.eventLastCache
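## Example (sketch): to also distribute the session failover and performance caches
## named above, extend this list, e.g.:
#memory.cluster.names.count=4
#memory.cluster.names.3=org.sakaiproject.tool.impl.RebuildBreakdownService.cache
#memory.cluster.names.4=org.sakaiproject.authz.impl.DbAuthzGroupService.realmRoleGroupCache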

## Any cache properties below that are not set will use the default values
# Valid properties include: maxEntries(int>0), timeToIdle(int>0, seconds), timeToLive(int>0, seconds), eternal(true|false)
# Defaults: maxEntries=10000, timeToIdle=600, timeToLive=600, eternal=false
# Configure cluster caches using: memory.cluster.{cacheName}.{property}={value}

# Events caching properties
#memory.cluster.org.sakaiproject.event.impl.ClusterEventTracking.eventsCache.maxEntries=80000
#memory.cluster.org.sakaiproject.event.impl.ClusterEventTracking.eventsCache.timeToIdle=60
#memory.cluster.org.sakaiproject.event.impl.ClusterEventTracking.eventsCache.timeToLive=60

# eventLastCache caching properties (only ever needs to store 1 entry, but it must be kept forever)
#memory.cluster.org.sakaiproject.event.impl.ClusterEventTracking.eventLastCache.maxEntries=1
#memory.cluster.org.sakaiproject.event.impl.ClusterEventTracking.eventLastCache.eternal=true

# Session replication caching properties
#memory.cluster.org.sakaiproject.tool.impl.RebuildBreakdownService.cache.maxEntries=50000
#memory.cluster.org.sakaiproject.tool.impl.RebuildBreakdownService.cache.timeToIdle=3600
#memory.cluster.org.sakaiproject.tool.impl.RebuildBreakdownService.cache.timeToLive=10800

# Role and Group caching properties
#memory.cluster.org.sakaiproject.authz.impl.DbAuthzGroupService.realmRoleGroupCache.maxEntries=100000
#memory.cluster.org.sakaiproject.authz.impl.DbAuthzGroupService.realmRoleGroupCache.timeToIdle=2000
#memory.cluster.org.sakaiproject.authz.impl.DbAuthzGroupService.realmRoleGroupCache.timeToLive=2400
 
## Session Replication settings
## WARNING: This requires a distribution mechanism of some kind (currently requires a distributed cache)
## NOTES:
## cache: org.sakaiproject.tool.impl.RebuildBreakdownService.cache must be set to a distributed cache (see memory.cluster)
## cache: org.sakaiproject.tool.impl.RebuildBreakdownService.stash should be configured to last as long
##        as your user might need to navigate to JSF or other on-demand session tools after landing on a new server
# Enable session cluster replication (see notes above)
# Default: false
#session.cluster.replication=true
## Performance tuning settings below. Be careful with these numbers, as adjusting them downward can create heavy load
# Tuning setting, minimum seconds old the session must be before it will be replicated
# Default: 20
#session.cluster.minSecsOldToStore=20
# Tuning setting, minimum seconds that must pass before session data is updated (since last store)
# NOTE: certain events will cause the session data to be updated in the store regardless of this setting
# Default: 10
#session.cluster.minSecsBetweenStores=10
# Tuning setting, minimum seconds after a session has been rebuilt from the store before it can be updated in the storage again
# Default: 30
#session.cluster.minSecsAfterRebuild=30
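
Putting it together, a minimal sakai.properties sketch that enables clustered events caching plus session replication; the {CACHE_SERVER_URL_1}/{CACHE_SERVER_URL_2} hosts are placeholders as above, and all property names and ports are the ones documented on this page:

memory.cluster.enabled=true
memory.cluster.server.urls.count=2
memory.cluster.server.urls.1={CACHE_SERVER_URL_1}:9510
memory.cluster.server.urls.2={CACHE_SERVER_URL_2}:9511
memory.cluster.names.count=3
memory.cluster.names.1=org.sakaiproject.event.impl.ClusterEventTracking.eventsCache
memory.cluster.names.2=org.sakaiproject.event.impl.ClusterEventTracking.eventLastCache
memory.cluster.names.3=org.sakaiproject.tool.impl.RebuildBreakdownService.cache
session.cluster.replication=true

Note that session.cluster.replication only works here because org.sakaiproject.tool.impl.RebuildBreakdownService.cache is included in memory.cluster.names, per the warning in the session replication notes above.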