Clustering and Cache Coordination
An application cluster is a set of middle-tier server machines or VMs servicing requests for a single application or set of applications. Multiple servers are used to increase the scalability of the application and/or to provide fault tolerance and high availability. Typically the same application is deployed to all of the servers in the cluster, and application requests are load balanced across the set of servers. The application cluster accesses a single database or a database cluster. An application cluster may allow new servers to be added to increase scalability, and servers to be removed, such as for updates and servicing.
Application clusters can consist of Java EE servers, web containers, or Java server applications.
EclipseLink can function in any clustered environment. The main issue in a clustered environment is utilizing a shared persistence unit (L2) cache. If you are using a shared cache (enabled by default in EclipseLink), then each server will maintain its own cache, and each cache's data can get out of sync with the other servers and the database.
EclipseLink provides cache coordination in a clustered environment to ensure the servers' caches are in sync.
There are also many other solutions to caching in a clustered environment, including:
- Disable the shared cache (through setting @Cacheable(false), or @Cache(isolation=ISOLATED)); see the sketch after this list.
- Only cache read-only objects.
- Set a cache invalidation timeout to reduce stale data.
- Use refreshing on objects/queries when fresh data is required.
- Use optimistic locking to ensure write consistency (writes on stale data will fail, and will automatically invalidate the cache).
- Use a distributed cache (such as Oracle TopLink Grid's integration of EclipseLink with Oracle Coherence).
- Use database events to invalidate changed data in the cache (such as EclipseLink's support for Oracle DCN/QCN).
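For example, the shared cache can be disabled for an individual entity class using the standard JPA @Cacheable annotation, or kept but expired periodically using EclipseLink's @Cache annotation. The following is a minimal sketch; the AuditEntry entity is an illustrative assumption, not part of any particular application.

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Exclude this entity from the shared (L2) cache, so every server reads the
// current state from the database. The EclipseLink equivalent is
// @Cache(isolation = CacheIsolationType.ISOLATED); alternatively the shared
// cache can be kept but given an invalidation timeout with
// @Cache(expiry = 600000) (milliseconds).
@Entity
@Cacheable(false)
public class AuditEntry {

    @Id
    private long id;

    public long getId() {
        return id;
    }
}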
Cache coordination enables a set of persistence units deployed to different servers in the cluster (or on the same server) to synchronize their changes. Cache coordination works by each persistence unit on each server in the cluster being able to broadcast notification of transactional object changes to the other persistence units in the cluster. EclipseLink supports cache coordination over RMI, JMS and JGroups. The cache coordination framework is also extensible so other options could be developed.
Cache coordination works by broadcasting changes for each transaction to the other servers in the cluster. Each of the other servers receives the change notification and either invalidates the changed objects in its cache or updates the cached objects' state with the changes. Cache coordination occurs after the database commit, so only committed changes are broadcast.
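How an entity's cached state reacts to an incoming change notification can be configured per class with the coordinationType attribute of EclipseLink's @Cache annotation. The following is a minimal sketch; the Employee entity is an illustrative assumption, and SEND_OBJECT_CHANGES (broadcasting the changed state) is believed to be the default setting.

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheCoordinationType;

// Broadcast an invalidation instead of the changed state: the other servers
// drop this object from their cache and re-read it on the next access.
@Entity
@Cache(coordinationType = CacheCoordinationType.INVALIDATE_CHANGED_OBJECTS)
public class Employee {

    @Id
    private long id;
}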
Cache coordination greatly reduces the chance of an application getting stale data, but does not eliminate the possibility. Optimistic locking should still be used to ensure data integrity. Even in a single server application stale data is still possible within a persistence context unless pessimistic locking is used. Optimistic (or pessimistic) locking is always required to ensure data integrity in any multi-user system.
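Optimistic locking is enabled in JPA by adding a @Version attribute to the entity. The following is a minimal sketch; the Account entity and its field names are illustrative.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// The version column is checked on every update: a write based on stale data
// fails with an OptimisticLockException rather than silently overwriting
// newer data, and the stale cached object is then invalidated.
@Entity
public class Account {

    @Id
    private long id;

    @Version
    private long version;
}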
Configuring Cache Coordination
Cache coordination is configured using persistence unit properties. The following cache coordination properties are supported:
Property | Description | Default | Required? |
---|---|---|---|
eclipselink.cache.coordination.protocol | Enable cache coordination using the given communication protocol: rmi, jms, jms-publishing (JMS with an MDB, see below), or jgroups. | no coordination | Required |
eclipselink.cache.coordination.channel | Sets the channel for cache coordination. All persistence units using the same channel will be coordinated. | EclipseLinkCommandChannel | Optional |
eclipselink.cache.coordination.propagate-asynchronously | Configures whether changes are broadcast using a separate thread. If set to false, the transaction will wait for all servers to be notified before returning. Note that JMS is always asynchronous. | true | Optional |
eclipselink.cache.coordination.thread.pool.size | Configures the thread pool size for cache coordination threads. RMI cache coordination spawns one thread per node to send change notifications, plus a thread to listen for new node notifications. JMS cache coordination spawns one thread to receive JMS change notification messages and another to process them (neither is used when an MDB is used). A size of 0 indicates no thread pool should be used and threads will be spawned when required. | 32 | Optional |
eclipselink.cache.coordination.remove-connection-on-error | Sets whether a connection should be removed if a communication error occurs when coordinating with it. This is normally used for RMI coordination in case a server goes down (it will reconnect when it comes back up). | true | Optional |
eclipselink.cache.coordination.naming-service | Sets the naming service used to look up and register the RMI cache coordination service: either jndi or rmi. | jndi | Optional |
eclipselink.cache.coordination.serializer | Sets how cache coordination serializes the messages sent between nodes. A serializer other than the default can be used for improved performance or for integration with other systems. The full class name of the serializer class should be provided. | Java serialization | Optional |
eclipselink.cache.coordination.jndi.user | Set the application server user name to connect to JNDI with. This is only required if JNDI requires authentication. | no authentication | Optional |
eclipselink.cache.coordination.jndi.password | Set the application server user password to connect to JNDI with. This is only required if JNDI requires authentication, and is normally not required when connecting to a local service. | no authentication | Optional |
eclipselink.cache.coordination.rmi.url | Only required by RMI cache coordination. Sets the URL of the host server. This is the URL that other cluster members should use to connect to this host. This may not be required in a clustered environment where JNDI is replicated. This can also be set as a System property or using a SessionCustomizer to avoid a separate persistence.xml per server. | local | Optional |
eclipselink.cache.coordination.rmi.multicast-group | Only used for RMI coordination. Sets the multicast socket group address. The multicast group is used to find other members of the cluster. | 239.192.0.0 | Optional |
eclipselink.cache.coordination.rmi.multicast-group.port | Only used for RMI coordination. Sets the multicast socket group port. The multicast group is used to find other members of the cluster. | 3121 | Optional |
eclipselink.cache.coordination.rmi.announcement-delay | Only used for RMI coordination. Sets the number of milliseconds to wait for announcements from other cluster members on start-up. | 1000 | Optional |
eclipselink.cache.coordination.rmi.packet-time-to-live | Only used for RMI coordination. Sets the multicast socket packet time-to-live, that is, the number of network hops the session announcement packets can take before expiring. The multicast group is used to find other members of the cluster. The default of 2 (a hub and an interface card) prevents the packets from leaving the local network. Note that if sessions are hosted on different LANs that are part of a WAN, the announcement sent by one session may not reach the others. In this case, consult your network administrator for the right time-to-live value, or test your network by increasing the value until sessions receive the announcements sent by others. | 2 | Optional |
eclipselink.cache.coordination.jms.topic | Only used for JMS coordination. Sets the JMS topic name. All persistence units sharing the same JMS topic from the same JMS service will be coordinated. | jms/EclipseLinkTopic | Optional |
eclipselink.cache.coordination.jms.factory | Only used for JMS coordination. Sets the JMS topic connection factory JNDI look-up name. If the server's JNDI is replicated in an application server cluster, then the jms.host option is not required. | jms/EclipseLinkTopicConnectionFactory | Optional |
eclipselink.cache.coordination.jms.host | Only used for JMS coordination. Sets the URL of the JMS server hosting the topic. This may not be required in a clustered environment where JNDI is replicated. | local | Optional |
eclipselink.cache.coordination.jms.reuse-topic-publisher | Only used for JMS coordination. Sets the JMS transport manager to cache a TopicPublisher and reuse it for all cache coordination publishing. Caching the publisher is supported by some JMS implementations and may improve efficiency. | false | Optional |
eclipselink.cache.coordination.jgroups.config | Only used for JGroups coordination. Sets the JGroups config XML file location. | If not set the default JGroups config will be used. | Optional |
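These properties can also be supplied programmatically when the EntityManagerFactory is created, which avoids maintaining a different persistence.xml per server. The following is a minimal sketch; the acme persistence unit and the ACMEChannel channel name are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CoordinationBootstrap {

    public static EntityManagerFactory createFactory() {
        Map<String, String> properties = new HashMap<>();
        // Enable RMI cache coordination for this persistence unit.
        properties.put("eclipselink.cache.coordination.protocol", "rmi");
        // Only persistence units on the same channel coordinate with each other.
        properties.put("eclipselink.cache.coordination.channel", "ACMEChannel");
        return Persistence.createEntityManagerFactory("acme", properties);
    }
}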
EclipseLink persistence.xml cache coordination RMI example
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="eclipselink.cache.coordination.protocol" value="rmi"/>
        </properties>
    </persistence-unit>
</persistence>
EclipseLink persistence.xml cache coordination JMS example
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="eclipselink.cache.coordination.protocol" value="jms"/>
            <property name="eclipselink.cache.coordination.jms.topic" value="jms/ACMETopic"/>
            <property name="eclipselink.cache.coordination.jms.factory" value="jms/ACMETopicConnectionFactory"/>
        </properties>
    </persistence-unit>
</persistence>
EclipseLink persistence.xml cache coordination JGroups example
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="eclipselink.cache.coordination.protocol" value="jgroups"/>
        </properties>
    </persistence-unit>
</persistence>
Cache Coordination and Oracle WebLogic
Both RMI and JMS cache coordination work with Oracle WebLogic. When a WebLogic cluster is used, JNDI is replicated among the cluster servers, so an rmi.url or jms.host is not required. For JMS cache coordination the JMS topic should be deployed to only one of the servers (as of Oracle WebLogic 10.3.6). It may be desirable to have a dedicated JMS server if the JMS messaging traffic is heavy.
Usage of other JMS services in WebLogic may have other requirements.
Cache Coordination and Glassfish
JMS cache coordination works with Glassfish. When a Glassfish cluster is used, JNDI is replicated among the cluster servers, so a jms.host is not required.
Usage of other JMS services in Glassfish may have other requirements.
RMI cache coordination does not work when the JNDI naming service option is used in a Glassfish cluster (see bug#359395). RMI will work if the "eclipselink.cache.coordination.naming-service" option is set to rmi. Each server must provide its own "eclipselink.cache.coordination.rmi.url" option, either by having a different persistence.xml per server, by setting the URL as a System property in the server, or through a SessionCustomizer.
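For example, each server can pass its own URL to the factory at startup, read from a JVM system property set in that server's configuration. The following is a minimal sketch; the acme persistence unit, the coordination.rmi.url system property name, and resolving the URL this way are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class GlassfishRmiCoordinationBootstrap {

    public static EntityManagerFactory createFactory() {
        Map<String, String> properties = new HashMap<>();
        properties.put("eclipselink.cache.coordination.protocol", "rmi");
        // Work around bug#359395 by using the rmi naming service instead of JNDI.
        properties.put("eclipselink.cache.coordination.naming-service", "rmi");
        // Each server supplies its own URL, e.g. via
        // -Dcoordination.rmi.url=... in its JVM options (illustrative name).
        properties.put("eclipselink.cache.coordination.rmi.url",
                System.getProperty("coordination.rmi.url"));
        return Persistence.createEntityManagerFactory("acme", properties);
    }
}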
Cache Coordination and IBM WebSphere
JMS cache coordination may have issues on IBM WebSphere. Usage of a Message Driven Bean may be required to allow access to JMS. To use an MDB with cache coordination, set the "eclipselink.cache.coordination.protocol" option to jms-publishing. The application will also have to deploy an MDB that processes cache coordination messages in its EAR.
Cache coordination MDB example
@MessageDriven
public class JMSCacheCoordinationMDB implements MessageListener {

    private JMSTopicRemoteConnection connection;

    @PersistenceUnit(unitName="acme")
    private EntityManagerFactory emf;

    public void ejbCreate() {
        // Connect the MDB to the persistence unit's command manager so that
        // incoming coordination messages are applied to its cache.
        this.connection = new JMSTopicRemoteConnection(
                this.emf.unwrap(ServerSession.class).getCommandManager());
    }

    public void onMessage(Message message) {
        // Forward each JMS cache coordination message to EclipseLink.
        this.connection.onMessage(message);
    }
}