A.1.1.2. The shadowing store

The shadowing store is the original version of the object store, which was provided in prior releases. It is implemented by the class ShadowingStore. It is simple but slow: it uses pairs of files to represent objects, one file holding the shadow version and the other the committed version. Files are opened, locked, operated upon, unlocked, and closed on every interaction with the object store, which causes significant I/O overhead. If you are overriding the object store implementation, the type of this object store is ShadowingStore.

A.1.1.3. No file-level locking

Since transactional objects are concurrency-controlled through LockManager, there is no need to impose additional locking at the file level. The basic ShadowingStore implementation handles file-level locking itself. The default object store implementation for JBoss Transaction Service, ShadowNoFileLockStore, instead relies upon user-level locking, which enables it to provide better performance than the ShadowingStore implementation. If you are overriding the object store implementation, the type of this object store is ShadowNoFileLockStore.

A.1.1.4. The hashed store

The HashedStore uses the same structure for object states as the ShadowingStore, but has an alternative directory structure that is better suited to storing large numbers of objects of the same type. Using this store, objects are scattered among a set of directories by applying a hashing function to the object's Uid. By default, 255 sub-directories are used, but you can override this by setting the ObjectStoreEnvironmentBean.hashedDirectories environment variable accordingly. If you are overriding the object store implementation, the type of this object store is HashedStore.

A.1.1.5. The JDBC store

The JDBCStore uses a JDBC database to save persistent object states. When used in conjunction with the Transactional Objects for Java API, nested transaction support is available.
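The directory-hashing scheme of the HashedStore (Section A.1.1.4) can be sketched as follows. This is not the JBoss Transaction Service implementation: the hash function, the directory naming, and the class and method names here are assumptions made for illustration only; only the default of 255 sub-directories comes from the text above.

```java
// Illustrative sketch of a HashedStore-style layout: object states are
// scattered across a fixed number of sub-directories chosen by hashing the
// object's Uid (represented here by its string form). The hash function and
// directory naming below are invented for the example.
public class HashedLayout {
    // Default of 255, overridable in the real store via the
    // ObjectStoreEnvironmentBean.hashedDirectories environment variable.
    static final int HASHED_DIRECTORIES = 255;

    // Map a Uid string to the name of the sub-directory holding its state.
    static String directoryFor(String uid) {
        int bucket = Math.floorMod(uid.hashCode(), HASHED_DIRECTORIES);
        return "#" + bucket; // hypothetical directory name
    }
}
```

The point of the scheme is that the same Uid always maps to the same sub-directory, so lookups stay cheap while no single directory accumulates all of the object states.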
In the current implementation, all object states are stored as Binary Large Objects (BLOBs) within the same table. Using BLOBs imposes a limit of 64k on the size of an object state. If you try to store an object state which exceeds this limit, an error is generated and the state is not stored; the transaction is subsequently forced to roll back.

When using the JDBC object store, the application must provide an implementation of the JDBCAccess interface, located in the com.arjuna.ats.arjuna.objectstore package:

Example A.2. Interface JDBCAccess

    public interface JDBCAccess
    {
        public Connection getConnection () throws SQLException;
        public void putConnection (Connection conn) throws SQLException;
        public void initialise (Object[] objName);
    }

The implementation of this class is responsible for providing the Connection which the JDBC object store uses to save and restore object states:

getConnection
    Returns the Connection to use. This method is called whenever a connection is required, and the implementation should use whatever policy is necessary for determining which connection to return. This method need not return the same Connection instance more than once.

putConnection
    Returns one of the Connections acquired from getConnection. Connections are returned if any errors occur when using them.

initialise
    Used to pass additional arbitrary information to the implementation.

The JDBC object store initially requests the number of Connections defined in the ObjectStoreEnvironmentBean.jdbcPoolSizeInitial property and will use no more than the number defined in the ObjectStoreEnvironmentBean.jdbcPoolSizeMaximum property.

The implementation of the JDBCAccess interface to use should be set in the ObjectStoreEnvironmentBean.jdbcUserDbAccess property variable. If you are overriding the object store implementation, the type of this object store is JDBCStore.

A JDBC object store can be used for managing the transaction log.
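An implementation of the JDBCAccess interface described above might look like the following sketch. The class name SimpleJDBCAccess, the connection-caching policy, and the convention that the first initialise argument is a JDBC URL are all assumptions made for this example; the interface itself is redeclared here only so that the sketch compiles on its own, while a real deployment would implement com.arjuna.ats.arjuna.objectstore.JDBCAccess from the product jar.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

// Redeclaration of the interface from Example A.2, purely so this sketch is
// self-contained; use com.arjuna.ats.arjuna.objectstore.JDBCAccess in practice.
interface JDBCAccess {
    Connection getConnection() throws SQLException;
    void putConnection(Connection conn) throws SQLException;
    void initialise(Object[] objName);
}

// A minimal, illustrative JDBCAccess implementation (not part of the product).
public class SimpleJDBCAccess implements JDBCAccess {
    private String url;
    private final Deque<Connection> pool = new ArrayDeque<>();

    public void initialise(Object[] objName) {
        // The object store passes arbitrary configuration through initialise;
        // this sketch assumes the first element is the JDBC URL.
        this.url = (String) objName[0];
    }

    public Connection getConnection() throws SQLException {
        // Reuse a returned connection if one is available, otherwise open a
        // new one: getConnection need not return the same instance twice.
        Connection cached = pool.poll();
        return cached != null ? cached : DriverManager.getConnection(url);
    }

    public void putConnection(Connection conn) throws SQLException {
        // Connections are handed back when an error occurred while using
        // them; this sketch keeps them for reuse if they are still open.
        if (conn != null && !conn.isClosed()) {
            pool.push(conn);
        }
    }
}
```

In a real implementation, getConnection would typically draw from a properly sized pool and putConnection might discard a connection that has seen an error rather than reuse it; the right policy is entirely up to the application.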
In this case, the transaction log implementation should be set to JDBCActionStore, and the JDBCAccess implementation must be provided via the ObjectStoreEnvironmentBean.jdbcTxDbAccess property variable; the default table name is then JBossTSTxTable. You can use the same JDBCAccess implementation for both the user object store and the transaction log.

A.1.1.6. The cached store

This object store uses the hashed object store, but does not read or write states to the persistent backing store immediately. Instead, it maintains the states in a volatile memory cache and flushes the cache either periodically or when it is full. The failure semantics associated with this object store differ from those of the normal persistent object stores, because a failure could result in states in the cache being lost. If you are overriding the object store implementation, the type of this object store is CacheStore.

Configuration Properties

ObjectStoreEnvironmentBean.cacheStoreHash
    Sets the number of internal stores to hash the states over. The default value is 128.

ObjectStoreEnvironmentBean.cacheStoreSize
    The maximum size the cache can reach before a flush is triggered. The default is 10240 bytes.

ObjectStoreEnvironmentBean.cacheStoreRemovedItems
    The maximum number of removed items that the cache can contain before a flush is triggered. By default, calls to remove a state that is in the cache simply remove the state from the cache but leave a blank entry, rather than removing the entry immediately, which would affect the performance of the cache. When a flush is triggered, these blank entries are removed from the cache. The default value is twice the size of the hash.

ObjectStoreEnvironmentBean.cacheStoreWorkItems
    The maximum number of items that are allowed to build up in the cache before it is flushed. The default value is 100.

ObjectStoreEnvironmentBean.cacheStoreScanPeriod
    Sets the time in milliseconds for periodically flushing the cache. The default is 120 seconds.
ObjectStoreEnvironmentBean.cacheStoreSync
    Determines whether flushes of the cache are synced to disk. The default is OFF. To enable, set to ON.

A.1.1.7. LogStore

This implementation is based on a traditional transaction log. All transaction states within the same process (VM instance) are written to the same log (file), which is an append-only entity. When transaction data would normally be deleted, at the end of the transaction, a delete record is added to the log instead, so the log keeps growing. Periodically, a thread runs to prune the log of entries that have been deleted.

A log is initially given a maximum capacity beyond which it cannot grow. After it reaches this size, the system creates a new log for transactions that could not be accommodated in the original log. Both the new log and the old log are pruned as usual. During the normal execution of the transaction system, there may be an arbitrary number of log instances; these should eventually be garbage collected by the system or the recovery sub-system.

Check the Configuration Options table for how to configure the LogStore.
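The append-only technique described above can be illustrated with a small in-memory sketch. This is not the LogStore implementation: the entry format, class name, and method names are invented for the example; it only demonstrates the idea of appending a delete record instead of deleting, then reclaiming space in a later pruning pass.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// In-memory illustration of an append-only transaction log with delete
// records and periodic pruning (not the actual LogStore).
public class AppendOnlyLog {
    // Each entry is "W:<txId>" (a written state) or "D:<txId>" (a delete record).
    private final List<String> entries = new ArrayList<>();

    public void write(String txId)  { entries.add("W:" + txId); } // append state
    public void remove(String txId) { entries.add("D:" + txId); } // append delete record; nothing is erased

    // Pruning pass: drop every record belonging to a deleted transaction.
    public void prune() {
        Set<String> deleted = new HashSet<>();
        for (String entry : entries)
            if (entry.startsWith("D:")) deleted.add(entry.substring(2));
        entries.removeIf(entry -> deleted.contains(entry.substring(2)));
    }

    public int size() { return entries.size(); }
}
```

Between pruning passes the log only grows, which is what makes writes cheap sequential appends; the cost is the periodic pruning work and, in the real system, the eventual garbage collection of full log instances.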