EMBEDDED OBJECT ORIENTED DATABASE SYSTEMS FOR JAVA AND C#



Introduction

Object oriented database systems are not new. Like object oriented languages, object oriented databases first appeared a long time ago. Although some people expected them to replace relational database systems, this did not happen. What we see now is that object oriented and relational databases are moving toward each other: most OODBMS vendors provide support for SQL-like non-procedural query languages within their products, while vendors of mainstream RDBMSs incorporate OO extensions in their products (OO versions of SQL). We can therefore expect that at some point in time it will not be possible to distinguish relational and object oriented databases (similarly, it is already difficult to find a pure C compiler which doesn't recognize C++).

Here we should talk a little bit about object oriented databases and their main advantages. Most modern programming languages are object oriented, while most mainstream databases are relational. A programmer must therefore work simultaneously with two data models - relational and object-oriented. This significantly complicates the design of an application, because the system architect has to work with two different notions for representing the same entities. Further, the programmer has to implement a lot of code for packing/unpacking data from one representation to another. This is tedious, error-prone work and, even worse, such packing/unpacking routines significantly degrade system performance. Modern modeling tools, although far from ideal, help to somewhat alleviate this problem by generating both application classes and database tables from universal model descriptions.

An object oriented database (as is clear from its name) is able to store objects. So there is no further need to work with different models of data - objects are everywhere. In addition to significant advantages from the system design point of view, an object oriented database can efficiently handle complex relations between objects. Unlike an RDBMS, which has to perform joins to follow such relations, an OODBMS directly represents relations between objects. Therefore, traversing such relations requires significantly less time. This is why object oriented databases are popular in areas requiring manipulation of complex sets of data: such as CAD/CAM or GIS. In such areas, the traditional relational approach is too inefficient, and attempts to represent complex object graphs using two-dimensional tables cause significant trouble.

Ideally, an object oriented database should completely hide database specifics from the application programmer. The programmer should not worry about whether the program works with normal (transient) or persistent (stored in the database) objects. This approach is called transparent persistence. There are many different approaches to reach this goal (discussed later), but an ideal cannot be attained. The main problem is that working with a database involves some aspects not present in normal applications - transactions, locking, and data distribution - for example. Since these cannot be completely hidden from the application programmer, it is not possible to reach complete transparency. In any case, solutions provided by existing OODBMS significantly simplify the life of a programmer, as compared with relational databases.

The Java language, too, can no longer be considered new. C# is certainly newer, but it has more in common with Java than differences, so both can be regarded as representatives of a new generation of programming languages (leaving aside 4GL languages) for high-level professional application development. Compared to C++, Java provides better safety and protection from programming errors. The lack of pointer arithmetic and the presence of implicit memory allocation (with garbage collection) eliminate the most serious C++ errors: dangling references, memory leaks and "walking" pointers. Also, the presence of a large standard class library makes development of applications much faster. Garbage collection and the reflection package make development and integration of different libraries much easier. Static (compile-time) type checking helps eliminate most programming errors at the compilation stage and makes it possible to develop very large and complex projects. These are significant advantages, as compared to other OO languages with dynamic type checking (such as Smalltalk). The speed of modern Java and C# just-in-time compilers is now competitive with C++: optimized, calculation-intensive C++ code, such as matrix multiplication, is only about 2X faster than the Java equivalent. For most real applications the difference is almost negligible.

With Java having so many advantages compared to C++, it is somewhat surprising that so many applications are still written in C++. Certainly there are many factors which restrain popularization of Java: first of all, Sun Microsystems had originally positioned this language for a different purpose. Other factors include the raw state of technology, lack of good development environments, and the necessity to include the complete JRE in the application.

However, the situation is now changing, especially for server-side development. There have been many approaches for accessing databases from Java. The most popular is the JDBC protocol for accessing relational databases. This protocol allows development of applications which can work with DBMS from different vendors. JDBC provides a standard way of retrieving relational data - the programmer creates or prepares a statement containing a query body and parameters, and then iterates through the result using a cursor (a ResultSet). The ResultSet class provides methods for fetching values of particular columns. So the programmer has to convert data from a relational model to the object model used in the application.

There were several attempts to implement an object-to-relational mapping layer on top of JDBC. One of the most recent and most popular tools to accomplish this is Hibernate, developed by the JBoss community. It not only supports all OO features (including inheritance and polymorphic queries) but also performs local caching of objects, which helps to significantly increase performance.

There are multiple approaches for integrating OODBMSs with Java. The primary approach is the ODMG standard developed by the Object Database Management Group (consisting of the main OODBMS vendors). This standard describes object-to-database mapping for different programming languages (currently C++, Java and Smalltalk bindings are provided). Most modern OODBMSs for Java support the ODMG standard.

A more recent approach for object persistence for Java is the Java Data Objects (JDO) standard. The JDO API is a standard interface-based Java model abstraction of persistence. The main idea behind it was to provide independence of the application from object oriented database vendors, as JDBC does for relational databases. The primary difference between the ODMG and JDO approaches is that the first provides a more transparent model of persistence, but requires use of a special runtime or development environment (byte-code or source level preprocessor or specialized Java compiler or specialized JVM). On the other hand, the JDO model can be implemented without using specialized tools but requires the programmer to explicitly load/store persistent object instances.

Aspects of building embedded OODBMS

There are two main contexts for database usage. The more prominent is the server side, where an application must serve a large number of clients. In this area it is difficult to compete with mainstream RDBMSs: they provide a very high level of performance, reliability and scalability, and a number of tools are available for them, including report generators and rapid application development wizards. But there are also standalone applications which need to store data between sessions. Using Oracle, or even MySQL, is not practical in such a situation; what is required is an embedded database engine that is compact and provides high performance. Let's consider ways of creating such embedded object oriented databases for Java applications.

Transparent persistence

Although Java provides good reflection support, there are tasks that cannot be handled by the Java reflection mechanism. While it is possible to create/inspect/update any object at runtime, it is not possible to change the behavior of the object, i.e. change the semantics of a method call or field access. Without these features, it is not possible to implement completely transparent persistence in Java without using some specialized development or runtime environment. Using a specialized development environment is not convenient for the programmer as it prevents him from working within his favorite IDE. Using a specialized runtime (JVM) causes even more problems by impeding the distribution of the product.

An alternative solution is to make the programmer explicitly fetch each persistent object. This approach is error prone and eliminates the main advantage of an OO database - which is to relieve the programmer from having to write extra code for extracting data from the database.

It is possible to recursively load clusters of related objects in memory (as the Java serialization mechanism does). In this case access to all objects will be transparent, except maybe the root object, which has to be accessed in a special way. This approach doesn't work for large databases. In this case all persistent objects will be loaded in memory - taking a significant amount of time and perhaps causing a memory overflow.

A compromise would be to load objects recursively, but allow the programmer to explicitly stop the recursion. In this case the programmer should explicitly load such persistent objects from the database. Also, it is possible to implement container classes in such a way that they retrieve their members on demand. Since objects are usually accessed through containers, there may be no need for such explicit recursion termination at all - it will be done implicitly by the containers. The typical case is that there is a cluster (assembly) of several objects referencing each other, which are usually traversed by references. This assembly has a name by which it can be accessed through some container object. If containers can load their members on demand, and members of a cluster (assembly) can be recursively loaded, then we can get almost transparent persistence without requiring specialized tools.

In Java it is also possible to use the AOP (aspect oriented programming) tools AspectJ and JAssist to provide transparent persistence. The main concept behind AOP is to split different aspects of the application and implement each independently. Subsequently all aspects are merged together by a process called weaving. Weaving can be done statically (at compile time) or dynamically (when the class is loaded). The first approach is used by AspectJ (which provides a special compiler, ajc), and the second by JAssist (which uses a special class loader).

Unfortunately there are no such tools for C#, but .Net contains built-in support for changing object behavior at runtime. For ContextBoundObjects, it is possible to provide attributes derived from ContextAttribute and implement the IContributeObjectSink interface. In this case, invocation of any public method or access to any public component will result in a special message (containing the name of the called method, its parameters...) being sent to the class implementing IMessageSink. This class can perform the required pre/post processing and invoke the target method. So it is possible to implement transparent persistence using standard .Net framework capabilities, but with several significant restrictions, as listed below:

" The persistent class should have only public methods and fields. " The persistent class should be derived from ContextBoundObject. " Invocation of methods through this API is about 100 times slower than normal method invocation. " It is still not possible to handle updates of the object.

Which objects can be persistent?

If we want to support fully transparent persistence, then the answer is obvious - any object can be made persistent. But if we plan not to use special byte-code preprocessors or compilers, then nobody except the programmer can notify the database that the object was changed and has to be saved. Also, as described earlier, the stopping of recursion also requires some action from the programmer.

There is one more issue - performance. It is possible to serialize and unpack an instance of the standard Java Hashtable class, but doing so requires loading all of the hashtable's members into memory. If there are one hundred thousand members, then the size of the hash table array alone will be more than 100,000 elements. Extracting this huge array, as well as all referenced members, will add tremendous overhead. Also, if we need to add or remove some members, then we have to reallocate and write this massive array to the disk. This example demonstrates that we have to use specialized container classes instead of the standard JDK collections.

Therefore, it will not be possible to support "foreign" libraries - classes which were not specially designed to work with the system. First, they will not be able to efficiently save their changes in the database. Also, using standard JDK container classes may cause many problems (memory overflows and very bad performance). And if we accept that persistent classes should be specially designed (thereby compromising the ideal of transparent persistence), then the requirement for persistence-capable classes to be derived from some special class doesn't actually add new restrictions.

Which transaction model should one use?

Transactions are needed to synchronize concurrent access to the database and to preserve database integrity in the event of application or system failures. The first goal is not that important for the Perst embedded database (since only one application will work with the database), but the second is very important. Actually, for embedded databases, preserving database consistency is sometimes even more important than for a server database: for a server working with critical data we can install a UPS to overcome power failures and regularly perform backups to avoid losing data, but it is not possible to predict the use environment of an application storing data in an embedded database. Losing some changes due to a power outage or other failure is unpleasant but survivable; the worst case scenario is one in which the database is corrupted and we lose all the data.

That is why a reliable transaction mechanism is important for embedded database systems; a transaction-in-progress at the time of failure must not be allowed to compromise the integrity of the database. The transaction mechanism should be simple, reliable, fast and automatic. This last requirement deserves explanation:

The traditional implementation of the transaction commit mechanism uses a transaction log. The most popular protocol for committing transactions uses a write-ahead-log, where any changes made by the transaction are first written to the log and only after that can they be placed in the database. In this case, changes made by an uncommitted transaction can always be rolled back, and changes made by a committed transaction can be replayed. Periodically the DBMS performs checkpoints, fixing the consistent state of the database and truncating the log file. There is one issue with this scenario - if the transaction is a very long one, then the size of the log file can also be very large. It can be even larger than the size of the database itself - for example, if some objects were changed multiple times within the transaction. In large RDBMS, like Oracle, the system administrator is responsible for setup of the transaction log (by specifying size and location on disk). However, embedded database systems, for which there is no system administrator, should work without requiring any human administration.

An alternative transaction commit scheme is based on shadow objects. When an object is updated for the first time within a transaction, a shadow copy of the object is created. The original instance of the object is left unchanged and we only update the copy. Objects are accessed indirectly through an object index (i.e., an array whose elements contain the offsets of objects in the file). There are two instances of the object index - current and shadow. During a transaction, only the current object index is changed. When the transaction commit request arrives, we just swap the roles of the two object indices - the current index becomes the shadow, and vice-versa. It is not necessary to copy the object indices themselves - it is sufficient to change the pointer to the current index in the database header. So the database is automatically switched from one consistent state to another. Therefore a log file is not needed at all - the single database file is sufficient. Moreover, a long transaction cannot cause a log overflow. Since the shadow image is created for each object only once per transaction, the growth of the database file is limited. If the application is not using transactions at all (we can consider the whole database session as one large transaction), then the overhead of this transaction commit scheme is minimal: an object is cloned only once and further updates of the object do not cause any additional activity.
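
The commit sequence can be sketched as follows (the helper names are hypothetical; the actual Perst implementation is described later in this article):

    // Sketch: commit by swapping the roles of the two object indices
    void commit() {
        flushModifiedPages();        // make all object copies durable on disk
        header.currentIndex ^= 1;    // the current index becomes the shadow, and vice versa
        flushHeaderPage();           // one atomic write moves the DB to the new consistent state
        synchronizeIndices();        // copy modified handles to the new current index
    }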

Yet another advantage of the shadow object transaction mechanism is the fast recovery time. Unlike log-based RDBMSs, which have to read and analyze the log file and perform rollback of changes on uncommitted transactions (UNDO) and replay changes of committed transactions (REDO), in this case it is only necessary to copy the content of the original (consistent) object index to the new one. This simple operation restores the consistent state of the database.

Actually, shadow page transaction schemes were used even before log-based schemes. The advantages of this scheme are mentioned above; the main drawback is that it is very difficult to scale it to support multiple concurrent transactions. That is why almost all modern DBMSs use a transaction log. But for the Perst embedded database, multi-client access (which requires multiple concurrent transactions) is not needed. That is why a transaction commit scheme based on shadow objects is a very good choice for Perst.

Which object-to-database mapping standard should be supported?

The ODMG standard provides transparent persistence, but requires use of specialized development or runtime environments which, as described in the previous section, are usually not acceptable for embedded databases. JDO is also not a simple standard, and implementing it requires significant effort. A primary goal of OODBMS is to provide transparent or nearly transparent persistence, thereby eliminating (or minimizing) differences in accessing transient and persistent objects. If this goal is reached we do not need special interfaces to access persistent data. Certainly there are database-specific aspects, which cannot be hidden from the developer - for example, dealing with transactions and locking. But these aspects are not so important for embedded applications, which usually access databases in exclusive mode. Another significant part of both the JDO and ODMG specifications is related to constructing queries. This aspect is discussed in the following section.

SQL - is it really needed?

It is not difficult to create a simple SQL engine. However, it is a challenge to make this engine efficiently execute all possible queries. To reach this goal, a sophisticated query optimizer is required. And for this optimizer to make the right choices, there must be sufficient metadata on which to base the decisions. That is why most mainstream RDBMS collect statistics about stored data: the number of records in each table, the distribution of key values, typical queries, etc. Collecting, storing and analyzing all of this metadata is excessive for an embedded database engine. On the other hand, even a simple query optimizer is able to efficiently (using an index instead of a sequential search) execute queries like this:
	select * from T where x='Y';
But why do we need to use SQL for such simple queries? It would be simpler to use, for example, the standard Java HashMap class to get the object with the key value 'Y':
    T findByX(String key) {
        return (T)hash.get(key);
    }

Collections

As shown in the previous section, most simple SQL queries can be implemented as methods of container classes. Certainly we need specialized collections; the standard JDK collections do not fit our needs for several reasons. First, their algorithms are optimized with the assumption that all data is present in main memory. For database collections this is not true - they can contain millions of objects. Second, JDK collections do not support range queries (like "select * from person where age between 20 and 50"). And finally, database collections should fetch their members in a 'lazy' way - only when requested. This is necessary to reduce the time needed to load the collection object itself and to prevent memory overflow for large collections.

Usually database indices are implemented using a B-Tree (a data structure minimizing the number of disk accesses needed for the basic collection operations: insert/find/remove). Using the Java reflection package, it is possible to implement indices on a field of an object: when an index is created we just specify the class name and the name of the field(s) in it. Then, to insert an object into the index, it is sufficient to pass a reference to the object - the value of the key field will be retrieved by the index implementation.
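
For example, using the Perst API described later in this article, such a field index can be created and queried roughly like this (a sketch; the Person class is invented):

    // Sketch: an index on the "age" field of the invented Person class
    FieldIndex ageIndex = db.createFieldIndex(Person.class, "age", false);
    ageIndex.put(person);   // the key value is extracted from the "age" field
    // range query: all persons with age between 20 and 50
    IPersistent[] selected = ageIndex.get(new Key(20), new Key(50));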

Since, in many cases, the size of the database is small enough to fit in main memory, it is also reasonable to provide indices which are optimized with the assumption that all collection members are in memory. Here, another data structure - T-Tree - should be used. Because all elements are directly accessible, there is no need to store keys in the container itself; it can simply contain references to the objects and use a comparator provided by the user to compare objects with each other and with the search key.

It has already been mentioned that OODBMS are widely used in applications working with complex data, such as CAD/CAM and GIS. In many of these systems, objects have spatial coordinates and it is necessary to efficiently locate such objects in space. This task can be solved using Guttman's R-Tree.

The new 1.5 version of Java supports generic classes (generics). Generic containers are especially useful for object oriented database systems, since they often use different types of containers. If the type of a container's elements can be statically checked by the compiler, it greatly reduces the probability of introducing bugs. It also makes the program clearer (no need for typecasts) and self-explanatory (the type of the container includes the type of its elements).
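
For instance, a generic container lets the compiler reject insertion of an element of the wrong type (a sketch, assuming a generic variant of the container interfaces; the Person class is invented):

    FieldIndex<Person> byName = db.createFieldIndex(Person.class, "name", true);
    Person p = byName.get(new Key("Smith"));   // no typecast is needed
    // byName.put(someOrder);                  // would be rejected at compile time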

Schema evolution

Usually an application is significantly changed during its lifetime. Programmers add new features, re-implement existing features, or perform re-factoring. If a programmer changes a class (for example by adding a new variable), this creates incompatibility with instances of this class stored inside the database. The problem is especially critical for embedded databases - if a customer buys a new version of the application, most likely he does not want to lose all the data he has stored using the old version of the program. Additionally, there will be no system administrator who can convert the database to the new format - the application should do everything automatically. Also, the process of reformatting the database should be fast enough (a database can be quite large and the user will not want to wait long for this operation to be completed).

The best solution, therefore, is lazy schema evolution. When an object instance is loaded from the database, its stored class descriptor is compared with the current format of the class in the application. If they are not identical, the object is converted to the new format, and if it is later modified, it is stored in the new format. So reformatting of data is performed on demand and adds no delay when the database is opened.

Architecture of Perst

Persistency by reachability

Perst is designed to implement the persistency-by-reachability approach. An object of any class derived from the Persistent base class is considered persistence-capable. It is automatically made persistent and stored in permanent storage when it is referenced from some other persistent object and the store method of that object is invoked. So, although it is still possible to assign an object to the storage explicitly, there is usually no need to do so.

The storage has one special root object. The root object is the only persistent object accessed in a special way (using the Storage.getRoot method). All other persistent objects are accessed in the normal way: either by traversing references from other persistent objects, or by using object containers (Index, Link or Relation). Unlike many other OODBMSs, there can be only one root in the storage. If you need several named roots, you should create an Index object and use it as the root object.

Perst requires that each persistence-capable class be derived from the Persistent class. It is not possible to store "foreign" classes in storage. This is a small price to pay for the ease of use of Perst and the freedom from requiring any specialized preprocessors or compilers. Also, components of persistence-capable objects are restricted to the following types:

Scalar types
Any valid Java scalar type: boolean, integer or real; for example boolean, int, short, double.
String type
The java.lang.String type
Date type
The java.util.Date type
Reference to a persistent object
Any class inherited from Persistent or any interface extending the IPersistent interface.
Value type
Any class implementing the IValue interface, with the same restrictions on the types of components as for persistence-capable objects. If the Perst.implicit.values property is true, then any class which is not derived from IPersistent will be treated as a value. Values are stored inside the persistent object containing them. A value class should have a default constructor (a constructor with an empty parameter list) or no constructors at all. The declared type of a field referencing a value should be the same as the actual type of the referenced object.
Raw binary type
If the Perst.serialize.transient.objects property is true, then Perst will serialize all objects of non-persistent and non-value classes (classes which are not derived from the IPersistent or IValue interfaces) using the standard Java serialization mechanism. Object closure will be packed into an array of bytes, which is stored in the database. The class should implement the java.io.Serializable interface and should not contain references to persistent objects.
Array type
Single-dimensional arrays with components of the types described above.
Link type
A one-to-many link or, from the implementation point of view, a dynamic array of persistent objects. Links are created using the Storage.createLink method.
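
Taken together, a persistence-capable class using these component types might look like the following sketch (all class and field names, including Employee, are invented):

    import org.garret.Perst.*;

    public class Department extends Persistent {
        String         name;        // string type
        java.util.Date founded;     // date type
        Employee       manager;     // reference to another persistent object
        Link           employees;   // one-to-many link, assigned from Storage.createLink()
        int[]          roomNumbers; // single-dimensional array of a scalar type
    }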

Unfortunately it is not possible to detect if an object is changed without saving the old state of the object and performing a field-by-field comparison with the new state of the object. Understandably, the overhead of such a solution (both space and CPU) is very high. In Perst it is the programmer's responsibility to save the object in storage. This can be done by invoking the Persistent.store or Persistent.modify methods.

The Persistent.store method writes the passed object to the storage, as well as all objects referenced from it that are not yet persistent. So if you create a tree of objects and assign a reference to the root of this tree to some persistent object X, it is only necessary to invoke the store() method on this object X. But if you later update one of the elements in this tree, you should invoke store individually for each such element (X.store() will NOT automatically save referenced objects that are already persistent).

The Persistent.modify method marks an object as modified but doesn't immediately write it to the storage. All objects marked as modified will be stored to the storage during the transaction commit (the store method will be invoked for each modified object). So the modify method is preferable if an object is updated multiple times within a transaction. In this case, instead of storing it several times, it will be stored only once at the end.
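
A short sketch of both methods, reusing the invented Department class from the example above:

    Department dept = (Department)db.getRoot();
    dept.name = "Sales";
    dept.modify();             // deferred: written once, at transaction commit
    dept.manager = newManager;
    dept.store();              // immediate: writes the object and any referenced
                               // objects which are not yet persistent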

Semitransparent object loading

Perst uses no special compiler or preprocessor. Since Java doesn't provide runtime behavior reflection (changing the behavior of an object at runtime), it is not possible to implement completely transparent persistence (where there are no differences between access to transient and persistent objects). Instead, Perst offers transparent behavior in most cases, with some exceptions.

The IPersistent interface declares the recursiveLoading method. The default implementation of this method in the Persistent class always returns true. In this case Perst will recursively load any object referenced from a target object when the target object is loaded, causing implicit loading of the whole cluster of referenced objects from storage into main memory. This is similar to the manner in which the Java serialization mechanism works.

To avoid an overflow of main memory caused by recursive loading of all objects from the storage, the programmer has to override the recursiveLoading method in some classes to return false. Objects referenced from such an object will not be implicitly loaded, and the programmer has to explicitly invoke the Persistent.load method to load them. So the recursiveLoading method can be used to control the loading of objects from storage into main memory.
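
In code, stopping the recursion looks roughly like this (a sketch; the class is invented):

    public class FolderNode extends Persistent {
        FolderNode child;        // potentially a huge tree: do not load it eagerly

        public boolean recursiveLoading() {
            return false;        // objects referenced from this one are not auto-loaded
        }

        public void visitChild() {
            child.load();        // explicit load is required here
            // ... work with the loaded child ...
        }
    }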

Also it is important to note that containers (Link, Relation, Index) always load member objects on demand (they do not perform automatic loading of all objects in the containers). Since access to the container members is always performed through methods of the container class, the programmer will not notice it - container methods will always return loaded object(s).

Perst uses the default constructor (a constructor without parameters) to create an object loaded from the storage.
This means:

  1. All persistence-capable classes should have a default constructor (or have no constructor at all, in which case it will be automatically generated by the compiler). The default constructor does not need to be public, but can have any access type. Perst is also able to generate a default constructor for persistent classes using sun.reflect.ReflectionFactory. This will work only with Sun's JDK.
  2. The default constructor should initialize only transient fields.
  3. The default constructor is used to create an instance of the persistent object loaded from the storage. So at the time of default constructor execution, the fields of the constructed object have not yet been assigned the values stored in the database. If you need these values in order to initialize transient fields, then you should perform this initialization in the onLoad method, which is invoked after all values of the non-transient fields of the persistent object have been fetched from the storage (see the sketch after this list).
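
A minimal sketch of these rules (the class is invented):

    public class TextPage extends Persistent {
        String fileName;            // persistent field, restored by Perst
        transient byte[] buffer;    // transient field, rebuilt after loading

        TextPage() {
            // default constructor: persistent fields are not yet assigned here,
            // so initialize only transient state that doesn't depend on them
        }

        public void onLoad() {
            // non-transient fields (fileName) are already assigned at this point
            buffer = new byte[4096];
        }
    }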

Summarizing the above, the proposed mechanism is convenient and easy to use because it does not require the programmer to explicitly load referenced objects. At the same time, it is flexible, giving the programmer control over object loading. Only for those classes (usually containers) which explicitly control the loading of their members (by overriding recursiveLoading to return false) does the programmer need to call the Persistent.load method.

Automatic schema evolution

Perst supports lazy automatic schema evolution. When a class format is changed, Perst performs conversion of the loaded object to the new format. If this object is modified, it will be stored in its new format. Therefore, an object is converted to the new format on demand. Perst is able to handle the following schema changes:
  1. Compatible changes of scalar types (changes which cannot cause data truncation). For example, it is possible to change an int type to a long, or an int to a float; but changing an int to a short is not possible. More precisely, Perst is able to perform any conversion which is supported by the Java reflection mechanism (a field of type XXX can be changed to type YYY if the java.lang.reflect.Field.setXXX method can be applied to a field of type YYY).
  2. Reordering components within a class or moving components to a base or derived class.
  3. Adding/removing classes from the class inheritance hierarchy.
  4. Changing the format of classes with value semantics.

All other schema modifications (such as renaming fields or incompatible changes of field types) cannot be handled by the Perst automatic schema evolution mechanism. In these cases you can use the Perst XML export/import mechanism: the database can be exported to XML format using the Storage.exportXML method, then recreated with the new class definitions, and the saved data imported back using the Storage.importXML method.
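
A sketch of such a migration, assuming Writer/Reader-based signatures for these methods:

    // export the old database to XML (uses java.io.FileWriter/FileReader)
    Writer writer = new FileWriter("dump.xml");
    db.exportXML(writer);
    writer.close();
    // ...recreate the database file with the new class definitions, then:
    Reader reader = new FileReader("dump.xml");
    db.importXML(reader);
    reader.close();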

Relations

Java references provide a way to implement one-to-one relationships between objects, but in many cases one-to-many and many-to-many relations are needed. Perst provides the Link interface to deal with these. This interface allows you to add, delete and inspect members of the relation. Members of the relation can be accessed by index or extracted as an array of objects.

Relations can be of two kinds: embedded (when references to the related objects are stored in the relation owner object itself) and standalone (when the relation is a separate object, which contains references to the relation owner and relation members). Both kinds of relations implement the Link interface. An embedded relation is created by the Storage.createLink method and a standalone relation (represented by the Relation persistent class) is created by the Storage.createRelation method.

So, a relation between two classes A and B can be implemented in the following way:

Relation type     Object A        Object B
one-to-one        B bref;         A aref;
one-to-many       Link bref;      A aref;
many-to-one       B bref;         Link aref;
many-to-many      Link bref;      Link aref;
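
For example, a one-to-many relation between the invented Vendor and Order classes can be sketched with an embedded Link:

    public class Vendor extends Persistent {
        String name;
        Link   orders;                 // one-to-many: this vendor's orders
    }

    // after creating a vendor:
    vendor.orders = db.createLink();   // create the embedded relation
    vendor.orders.add(order);          // add a member to the relation
    vendor.store();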

Indices

Usually objects are accessed by traversing from one object to another using references or links. However, it is frequently necessary to locate an object by key. In the JDK, the Hashtable or HashMap class can be used for such needs. In databases, a more sophisticated search is usually required. The complete SQL language is not implemented in Perst, because it would immediately make the DBMS bulky and slow. In most cases the application performs only very simple queries, using an exact match or a key range. This is done in Perst with the Index and FieldIndex interfaces. The first is used when the key is specified independently of its associated value. The FieldIndex interface allows us to index an object using one of the fields of the object (the key field).

Indices are created in Perst using the Storage.createIndex or Storage.createFieldIndex methods. There can be several index implementations, but right now only one, based on B+Tree, is provided (because B+Tree is the most efficient structure for disk based databases). Methods of the Index and FieldIndex interfaces allow you to add, remove and locate objects by key. A search can be performed either by specifying the exact key value or by specifying a range of key values (the high boundary, the low boundary, or both can be omitted, and each can be declared as exclusive or inclusive). Therefore it is possible to perform the following types of search:

  1. key is equal to VAL
  2. key belongs to [MIN_VAL, MAX_VAL]
  3. key belongs to [MIN_VAL, MAX_VAL)
  4. key belongs to (MIN_VAL, MAX_VAL]
  5. key belongs to (MIN_VAL, MAX_VAL)
  6. key is greater than MIN_VAL
  7. key is greater than or equal to MIN_VAL
  8. key is less than MAX_VAL
  9. key is less than or equal to MAX_VAL

There are several different ways of selecting objects using an index:

IPersistent get(Key key)
Get an object by key. This method should be used for unique indices, to locate an object by an exact key value.
IPersistent[] get(Key from, Key to)
Get an array of objects with their key belonging to the specified range. Either the 'from' boundary, the 'to' boundary, or both can be null. Both boundaries can be inclusive or exclusive.
Iterator iterator()
Get an iterator which will traverse all objects in the index in ascending order of the key.
Iterator iterator(Key from, Key to, int order)
Get an iterator for objects with their key belonging to the specified range. Either the 'from' boundary, the 'to' boundary or both can be null. Both boundaries can be inclusive or exclusive. Objects can be traversed in ascending or descending order of the key.

If you need a set of persistent objects you should use the Storage.createSet method. Sets are implemented using B-Tree, where the object ID (OID) is used as a key.

Perst also supports spatial indices (org.garret.Perst.SpatialIndex) and generic indices with user-defined comparators (org.garret.Perst.SortedCollection). Spatial indices are implemented using Guttman's R-Tree with the quadratic split algorithm, which allows efficient search of objects in two-dimensional space. The sorted collection provides almost the same methods as FieldIndex but uses a user-defined comparator to compare collection members. A sorted collection is implemented using a T-Tree and is especially efficient for main-memory databases.

The table below summarizes information about all indices supported by Perst:
Interface        | Description                                                                 | Key type                    | Implementation | Created by
Index            | Index with explicitly specified key, used for exact match or range queries | scalar, string or reference | B+Tree         | Storage.createIndex(Class type, boolean unique)
FieldIndex       | Index constructed for one of the object's fields                           | scalar, string or reference | B+Tree         | Storage.createFieldIndex(Class type, String fieldName, boolean unique)
java.util.Set    | Set of persistent objects                                                   | -                           | B+Tree         | Storage.createSet()
SpatialIndex     | Index for spatial objects with integer coordinates                          | Rectangle                   | R-Tree         | Storage.createSpatialIndex()
SpatialIndexR2   | Index for spatial objects with real coordinates                             | RectangleR2                 | R-Tree         | Storage.createSpatialIndexR2()
SortedCollection | Index with user defined comparator                                          | any                         | T-Tree         | Storage.createSortedCollection(PersistentComparator comparator, boolean unique)

Projection

Using Perst indices, a programmer can easily implement most simple SQL queries, like:
    select * from T where x=?;
Perst relations can be used to implement simple joins, like the following SQL query fetching all orders for a particular vendor:
    select * from Order O, Vendor V where O.vendorID = V.id AND V.name='ABC';
In Perst it is possible to first locate the vendor using an index search and then traverse the corresponding relations to locate all orders for this vendor.

Sometimes, though, it is necessary to implement more complex queries. This too is possible in Perst without needing to write a query in some special (non-procedural) language. The Projection class is used to combine the results of several simple operations (such as an index search).

Let us start our explanation of this class with an example. Consider that we need to find all orders containing details whose names start with 'D1' and which were shipped by vendors whose name starts with either 'V1' or 'V3'. An SQL statement which performs this query is:

    select * from Order O, Vendor V, Detail D where D.name like 'D1%' and O.detailID = D.id
        and O.vendorID = V.id and (V.name like 'V1%' or V.name like 'V3%');
And now, this is how the same task can be done in Perst. Consider that we have indices for details and vendors:
    FieldIndex detailIndex = db.createFieldIndex(Detail.class, "name", true);
    FieldIndex vendorIndex = db.createFieldIndex(Vendor.class, "name", true);
The set of requested orders can be obtained in the following way:
    //  Projection from vendors to orders (using the "orders" field of Link type)
    Projection v2o = new Projection(Vendor.class, "orders");
    //  Projection from details to orders (using the "orders" field of Link type)
    Projection d2o = new Projection(Detail.class, "orders");
    // Select vendors with name like 'V1%'
    v2o.project(vendorIndex.getPrefix("V1"));
    // Select vendors with name like 'V3%'
    v2o.project(vendorIndex.getPrefix("V3"));
    // Select details with name like 'D1%'
    d2o.project(detailIndex.getPrefix("D1"));
    // Join the projections
    v2o.join(d2o);
    // Get an array of the requested orders
    Order[] orders = (Order[])v2o.toArray(new Order[v2o.size()]);
So, as you can see, the Projection class is used for several purposes as listed below:
  1. To combine the results of several simple operations (implementing the OR operator).
  2. To eliminate duplicates resulting from such a merge.
  3. To obtain the set of related objects (perform a projection using a specified projection field).
  4. To join several projections (the analogue of an SQL JOIN).

It is possible to derive your own class from Projection to implement more sophisticated projections than are possible using a single projection field.

If you need to sort a selection in some particular order, the most efficient way is to use a FieldIndex for it. When you select objects using an index, the selected objects are sorted by the search key. If you need to sort a selection by a field other than the search key, you can use the Arrays.sort method. For example, if in the query described above we need to sort orders by the price field, it can be done with the following statement:

    Arrays.sort(orders, new Comparator() { 
       public int compare(Object a, Object b) { return ((Order)a).price - ((Order)b).price; }
    });

Transaction model

Perst preserves the consistency of data in case of a system or application failure. A transaction mechanism is used to implement all-or-nothing database updates. Transactions in Perst are started implicitly when an update operation is performed for the first time, and are finished explicitly by the commit, rollback or close methods.

The committing of a transaction causes a synchronous write of changed pages to the disk. Synchronous writes (the application is blocked until data is really flushed to the disk) are very expensive operations. Since the average positioning (seek) time for modern disks is about 10ms, they are usually not able to perform more than 100 writes/second in arbitrary locations on the disk. As a typical transaction consists of several updated pages, it leads to an average performance of about 10 transaction commits per second.

Performance can be greatly increased if you minimize the number of commits (larger transactions). Perst therefore uses a shadow mechanism for transaction implementation. When an object is changed for the first time during a transaction, a shadow of the object is created and the original object is kept unchanged. Even if the object is updated multiple times during the transaction, its shadow is created only once. Because of this use of shadows, Perst does not need a transaction log file. Therefore, in Perst, a long transaction cannot cause a transaction log overflow. Quite the contrary, if you do not call commit at all, Perst functions as a DBMS without transaction support, adding almost no overhead for supporting transactions.

The only drawback of long transactions is the possibility of losing changes in case of a fault. Perst will preserve consistency of the database, but all changes made since the last commit will be lost.

Perst also supports per-thread transactions. Three types of per-thread transactions are supported: exclusive, cooperative and serializable. In an exclusive transaction, only one thread can update the database. In cooperative mode, multiple threads can work concurrently, and the physical commit is performed only when the transactions of all threads have been committed. Serializable transactions can also work concurrently, but unlike cooperative transactions, the threads are isolated from each other: each thread has its own associated set of modified objects, and committing the transaction saves only these objects to the database. To synchronize access to objects in a serializable transaction, the programmer should use the lock methods of the IResource interface: a shared lock should be set before read access to any object, and an exclusive lock before write access. Locks are automatically released when the transaction is committed (the programmer should not explicitly invoke the unlock method), which guarantees that transactions are serializable.
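
The following sketch shows a serializable per-thread transaction; the beginThreadTransaction/endThreadTransaction method names and the transaction-mode constant are assumptions about the API, and Account is an invented class implementing IResource:

    db.beginThreadTransaction(Storage.SERIALIZABLE_TRANSACTION);
    account.exclusiveLock();      // write lock; released automatically at commit
    account.balance += amount;
    account.store();              // added to this thread's set of modified objects
    db.endThreadTransaction();    // commit: only this thread's objects are saved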

Using AspectJ

The main idea behind aspect oriented programming (AOP) is to separate different aspects of systems, implement them independently, and then automatically combine different aspects into a single program. AspectJ is a very popular tool which brings the concepts of AOP to the Java language.

Concerning the interface to the database system, there is the object persistence aspect, which should control the loading of persistent objects from storage and the saving of modified objects back to it. Without AspectJ, Perst uses a programmer-controlled recursive object loading mechanism (when some object is requested, all objects referenced from this object will also be fetched, unless their class redefines the recursiveLoading method to return false), and to store modified objects in the database the programmer should explicitly call the store() or modify() methods. With AspectJ this is not needed: the persistence aspect will automatically load a referenced object and mark it as modified when values of some of the object fields are changed. Also, there is no need to derive all persistence-capable classes from the Persistent base class; it can be easily done using the AspectJ declaration construction. Each persistence-capable class must implement org.garret.Perst.aspectj.AutoPersist, which is a marker interface with no methods. However, the user can avoid this by writing a simple aspect to define persistence-capable classes, like this:

public aspect PersistentClasses {
    declare parents: mypackage.* implements AutoPersist;
}
Then you can build your application like this:
ajc *.java -aspectpath /Perst/lib/Perst_aspectj.jar
Restrictions of the AspectJ API pertinent to Perst are:

  1. Persistence-capable objects still need an empty default constructor (which will be used by the database to create an instance of the object when it is loaded from the storage).
  2. The hashCode() and equals() methods will work correctly only after the object is assigned an OID.
  3. The AspectJ interface is not able to automatically load an object when its fields are accessed from outside the object. So this interface correctly handles only invocation of methods and access to an object's own fields. While in principle it is possible with AspectJ to also automatically handle access to foreign object components, it is done in a very inefficient way, and since it contradicts OOP best practices, the feature is switched off in Perst. (You can switch it on by uncommenting a single code fragment in the PersistenceAspect file.)

Using JAssist

JAssist is yet another popular product for transformation of Java byte-code. It is also possible to build a transparent API to Perst, using the JAssist package. Despite the fact that it is simpler and more compact than AspectJ, its API makes it possible to implement an even more convenient and flexible Perst interface than AspectJ - and with fewer limitations.

With JAssist you can use a normal Java compiler. The Perst translator for JAssist automatically transforms all classes matching a specified name pattern into persistence-capable classes. With this translator, application classes need not be derived from the Persistent class, nor provide an empty default constructor.

To be able to use the JAssist interface, you should use a special JAssist class loader instead of the default Java class loader to load all classes which you want to make persistence-capable. Classes whose fully qualified name matches a specified pattern will be transformed during loading. Other classes are loaded unchanged.

With Perst, the JAssist code is slightly changed to be able to distinguish access to an object's own fields (through this) from access to the fields of foreign objects. This makes it possible to eliminate the extra overhead of an object accessing its own fields. If you plan to use the original JAssist library, then you should comment out the Perst code guarded by the !isSelfReader() condition. In this case Perst will not be able to handle access to foreign (non-this) fields, and you should use getter/setter methods instead. Alternatively, you can replace this condition with true, allowing access to foreign fields but with significant degradation of performance and increased code size, because in this case a call to the load() method will be inserted before ALL accesses to fields of persistence-capable objects. Modified versions of the JAssist files are included in the Perst distribution in the Perst/src/org/garret/jassist/edit directory.

Transformation includes the following steps:

  1. Add a special constructor which will be used by Perst to create the object instance when it is loaded from storage (this is not the same as the default constructor, which may or may not exist).
  2. If the class is not derived from Persistent, and its superclass is java.lang.Object, then replace the superclass with org.garret.Perst.Persistent.
  3. Insert, at the beginning of each instance method, a call to the Persistent.load() method.
  4. Insert, before each write to a field, a call to the Persistent.modify() method.
  5. Insert a recursiveLoading method returning a false value.
  6. Add pack/unpack methods to the class and create a load factory class.

The JAssist interface has only one restriction: in a constructor, you should not access fields of persistent objects other than the object being constructed. Here is an example of using the JAssist interface:

package com.mycompany.mypackage; 
import org.garret.Perst.jassist.PerstTranslator; 
import javassist.*; 
public class MyApp { 
	public static void main(String[] args){
		Translator trans = new PerstTranslator(new String[]{"com.mycompany.*"}); 
		ClassPool pool = ClassPool.getDefault(trans); 
		Loader cl = new Loader(pool); 
		cl.run("Main", args); 
	} 
} 
In this example, all classes from com.mycompany.mypackage except MyApp will be loaded by the JAssist class loader and automatically made persistence-capable.

Memory allocation

Memory allocation is performed in Perst using a bitmap. Memory is allocated in chunks whose size is called the allocation quantum. In the current version of Perst, the allocation quantum is 64 bytes, which means that the sizes of all allocated objects are aligned on a 64 byte boundary. Each 64 bytes of database memory is represented by one bit in the bitmap. To locate a hole of the requested size in the bitmap, Perst sequentially searches the bitmap pages for the corresponding number of successive cleared bits. Perst uses three arrays indexed by bitmap bytes, which make possible fast calculation of hole offset and size within the bitmap.

Perst performs cyclic scanning of bitmap pages. It keeps an identifier of the current bitmap page and the current position within the page. Each time an allocation request arrives, scanning of the bitmap starts from the current position. When the last allocated bitmap page has been scanned, scanning continues from the beginning (from the first bitmap page) up to the current position. When no free space is found after a full cycle through all bitmap pages, a new block of memory is allocated. The size of the new extension is the larger of the size of the allocated object and the extension quantum. The extension quantum is a parameter of the database, specified in the constructor. The bitmap is extended to be able to map the additional space. If virtual memory space is exhausted and no more bitmap pages can be allocated, then an OutOfMemory error is reported.

Allocation of memory using bitmapping provides a high locality of references (objects are mostly allocated sequentially) and also minimizes the number of modified pages. Minimization of the number of modified pages is significant when a commit operation is performed, and all dirty pages should be flushed to the disk. When all cloned objects are placed sequentially, the number of modified pages is minimal and so transaction commit time is also reduced. Using an extension quantum also helps to preserve sequential allocation. Once the bitmap is extended, objects will be allocated sequentially until the extension quantum is completely used up. Only after reaching the end of the bitmap will scanning restart from the beginning, searching for holes in previously allocated memory.

To reduce the number of bitmap page scans, Perst associates a descriptor with each page, which is used to remember the maximal size of the hole on that page. The calculation of the maximal hole size is performed in the following way: if an object of size M cannot be allocated from a bitmap page, then the maximal hole size on that page is less than M, so M is stored in the page descriptor if the previous value of the descriptor is larger than M. For the next allocation of an object of size greater than or equal to M, we will skip this bitmap page. The page descriptor is reset when some object is deallocated within this bitmap page.
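
The heuristic can be sketched as follows (all names are hypothetical; the descriptor holds the smallest allocation size known to fail on its page):

    long allocate(int size) {
        for (int page = 0; page < nBitmapPages; page++) {
            if (size >= failedSize[page]) {
                continue;                       // no hole of this size here: skip the page
            }
            long pos = searchPage(page, size);  // scan for enough successive clear bits
            if (pos >= 0) {
                return pos;                     // found a hole of the requested size
            }
            failedSize[page] = size;            // remember the size that failed
        }
        return extendDatabase(size);            // no free space found: grow the file
    }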

Some database objects (like hash table pages) should be aligned on the page boundary to provide more efficient access. The Perst memory allocator checks the requested size, and if it is aligned on a page boundary, then the address of the allocated memory segment is also aligned on the page boundary. The search for free holes will be done faster in this case, because Perst increases the step of the current position increment according to the value of alignment.

To be able to deallocate the memory used by an object, Perst needs to maintain information about the object's size. The Perst memory allocator deals with two types of objects - normal table records and page objects. All table records are prefixed with a record header, which contains the record size and a pointer to the L2-list (a doubly-linked list) linking all records in the table. So the size of a table record object can be extracted from its record header. Page objects always occupy a whole database page and are allocated at positions aligned on the page boundary. Page objects have no headers; Perst distinguishes page objects from normal objects by using a special marker in the object index.

Shadow transaction mechanism

Each record (object) in Perst has a unique identifier (OID). Object identifiers are used to implement references between objects. To locate an object by reference, its OID is used as an index in the array of object offsets within the file. This array is called the object index, and each element of this array is called an object handle. There are two copies of the object index in Perst, one of which is current and the other a shadow. The header of the database contains pointers to both object indices and indicates which index is current at any given moment in time.

When an object is modified for the first time, it is cloned (a copy of the object is created) and the object handle in the current index is changed to point to the newly-created copy. The shadow index still contains a handle which points to the original version of the object. All changes are done on the object copy, leaving the original object unchanged. In a special bitmap, Perst marks the page of the object index which contains the modified object handle.

When a transaction is committed, Perst first checks if the size of the object index was increased during the current transaction. If so, it also reallocates the shadow copy of the object index. Then Perst frees the memory of all "old objects", i.e. the original versions of objects which were cloned within the transaction. Memory cannot be deallocated before commit, because we want to preserve the consistent state of the database by keeping the original cloned objects unchanged. If we deallocated memory immediately after cloning, a new object could be allocated in the place of the original cloned object and we would lose consistency. Since memory deallocation is done in Perst via the bitmap, using the same transaction mechanism as for normal database objects, deallocation of object space requires clearing some bits in a bitmap page, which should itself be cloned before modification. Cloning the bitmap page would require new space for allocation of the page copy, and we could reuse the space of deallocated objects - which, for the reasons explained above, would cause loss of database consistency. That is why deallocation of an object is done in two steps. When an object is cloned, all bitmap pages used for marking its space are also cloned (if not cloned earlier). Then, when the transaction is committed, we only clear bits in bitmap pages; no new memory allocation requests are generated at that moment.

After deallocating the old copies, Perst flushes all modified pages to disk to synchronize the contents of memory and the disk file. It then toggles the current object index indicator in the database header, switching the roles of the object indices: the index which was current becomes the shadow, and vice versa. Perst then flushes the modified page containing the database header to disk, bringing the database to its new consistent state. Finally, Perst copies all modified handles from the index used during the transaction (now the shadow) to the index which was previously the shadow and has now become current. At this moment the contents of both indices are synchronized, and Perst is ready to start a new transaction.

The bitmap of modified object index pages is used to minimize the time needed to commit a transaction: only the modified pages of the object index, rather than the whole index, have to be copied. After the transaction is committed, the bitmap is cleared.

When a transaction is explicitly aborted by the Storage.rollback method, the shadow object index is copied back to the current index, eliminating all changes made by the aborted transaction. Once copying is complete, both indices are identical again and the database state corresponds to the moment before the start of the rolled-back transaction.
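
The following toy model pulls the pieces together - clone-on-first-write, commit with the index toggle, and rollback - using two in-memory handle arrays in place of the on-disk indices. It is a sketch under obvious simplifications (no page flushes, no space management), and every name in it is an assumption rather than part of Perst's API:

import java.util.BitSet;

class ShadowIndexModel {
    static final int HANDLES_PER_PAGE = 1024;  // assumed page capacity
    final long[][] index = new long[2][];      // the two object indices
    int curr = 0;                              // which index the header marks current
    final BitSet dirtyPages = new BitSet();    // bitmap of modified index pages

    ShadowIndexModel(int nHandles) {
        index[0] = new long[nHandles];
        index[1] = new long[nHandles];
    }

    // Clone-on-first-write: point the working handle at the new object copy;
    // the other (shadow) index still references the original version.
    void updateHandle(int oid, long newCopyOffset) {
        index[curr][oid] = newCopyOffset;
        dirtyPages.set(oid / HANDLES_PER_PAGE);
    }

    void commit() {
        // (real Perst frees the superseded object versions and flushes all
        // modified data pages to disk before this point)
        curr = 1 - curr;  // toggle roles: the working index becomes the shadow
        // (flushing the header with the new indicator is the commit point)
        copyDirtyHandles(index[1 - curr], index[curr]);
        dirtyPages.clear();  // both indices are identical again
    }

    void rollback() {
        // restore the last consistent state from the shadow index
        copyDirtyHandles(index[1 - curr], index[curr]);
        dirtyPages.clear();
    }

    // Copy only those index pages that were recorded as modified.
    private void copyDirtyHandles(long[] from, long[] to) {
        for (int p = dirtyPages.nextSetBit(0); p >= 0; p = dirtyPages.nextSetBit(p + 1)) {
            int start = p * HANDLES_PER_PAGE;
            int len = Math.min(HANDLES_PER_PAGE, from.length - start);
            System.arraycopy(from, start, to, start, len);
        }
    }
}

Note that in real Perst the commit point is the flush of the header page with the toggled indicator; everything after it merely re-synchronizes the new current index.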

Allocation of object handles is done via the free-handle list. The header of this list is also shadowed, and two instances of the list header are stored in the database header; the switch between them is done in the same way as the switch of the object indices. When there are no free elements left in the list, Perst allocates handles from the unused part of the index, and when there is no space left in the index, the index is reallocated. The object index is thus the only entity in the database which is not cloned on modification; instead, two copies of it are always maintained.
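
A common way to implement such a free list - and only a hedged guess at how Perst actually does it - is to thread the free slots through the index itself, each free slot storing the OID of the next free one:

class HandleAllocatorSketch {
    long[] handles = new long[1024];  // the object index (toy, in memory)
    int freeHead = 0;                 // head of the free-handle list; 0 = empty
    int used = 1;                     // next never-used slot (OID 0 is invalid)

    int allocateHandle() {
        if (freeHead != 0) {
            int oid = freeHead;
            freeHead = (int) handles[oid];  // unlink the slot from the free list
            return oid;
        }
        if (used == handles.length) {       // index exhausted: reallocate it
            handles = java.util.Arrays.copyOf(handles, handles.length * 2);
        }
        return used++;
    }

    void freeHandle(int oid) {
        handles[oid] = freeHead;  // push the slot onto the free list
        freeHead = oid;
    }
}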

There are some predefined OID values in Perst. OID 0 is reserved as the invalid object identifier, and OIDs starting from 1 are reserved for bitmap pages. The number of bitmap pages depends on the maximum virtual space of the database: for one terabyte of virtual space, an 8 Kb page size and a 64 byte allocation quantum, 32K bitmap pages are required, so 32K handles are reserved in the object index for the bitmap. Bitmap pages are allocated on demand as the database grows. The OID of the user's first object will therefore be 0x8002.

The recovery procedure in Perst is trivial. There are two instances of the object index: one is current, while the other corresponds to the last consistent database state. When the database is opened, Perst checks the database header to detect whether the database was closed normally. If not (i.e. the dirty flag is set in the database header), Perst performs recovery. Recovery is very similar to rolling back a transaction: the current index indicator in the database header is used to determine the index corresponding to the last consistent state, and object handles from that index are copied to the other index, eliminating all changes made by the uncommitted transaction. Since the only action performed by recovery is copying the object index (actually, only handles that differ between the two indices are copied, to reduce the number of modified pages), and the object index is small, recovery can be very fast. A fast recovery procedure reduces the "out-of-service" time of the application.
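
Sketched in the same two-array model as above, the recovery pass reduces to a guarded copy of handles. Which of the two indices holds the last consistent state is decided by the header's indicator, so here it is simply passed in; the names are assumptions:

class RecoverySketch {
    // Copy the consistent handles over the working ones, touching only
    // the handles that actually differ (and hence fewer index pages).
    static void recover(long[] consistent, long[] working) {
        for (int oid = 0; oid < consistent.length; oid++) {
            if (working[oid] != consistent[oid]) {
                working[oid] = consistent[oid];  // discard the uncommitted change
            }
        }
    }
}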

Comparison of existing systems

There are not too many embedded OODBMSs for Java and C#. This discussion compares six:
ObjectStore PSE Pro from Progress Software,
FastObjects from Versant,
Berkeley DB JE from Sleepycat,
JISP from CoyoteGulch,
db4o from db4objects,
and Perst.

We do not claim that this is an unbiased comparison: the aspects of building an embedded OODBMS for Java and C# discussed in the previous sections reflect the primary goals in developing Perst. We will only briefly compare the features provided by these six systems and their relative performance.

ObjectStore PSE Pro
  Supported languages:    C++, Java
  Memory allocation:      explicit or implicit (garbage collection)
  Transaction support:    transaction log file
  Schema evolution:       through serialization
  Query language:         query predicates for searching in collections
  Collections:            ODMG collections (arrays, lists, bags and sets) with B-tree and hash table indexing
  Byte code preprocessor: required

db4o
  Supported languages:    Java, C# (Standard, Compact and Mono frameworks)
  Memory allocation:      only explicit
  Transaction support:    no protection from power failure
  Schema evolution:       fields can be added and removed, classes can be changed back and forth between versions, and classes and fields can be renamed
  Query language:         no query language, but an API for constructing query objects is provided
  Collections:            special database-aware collections; standard JDK collections can also be made persistent, but the depth of a collection is limited to 5 (loading of collection members stops after the fifth recursive invocation)
  Byte code preprocessor: none

Berkeley DB JE
  Supported languages:    Java
  Memory allocation:      only explicit
  Transaction support:    optional
  Schema evolution:       no
  Query language:         no
  Collections:            primary and secondary indices; the database is represented as a set of <key,value> pairs
  Byte code preprocessor: none

JISP
  Supported languages:    Java
  Memory allocation:      only explicit
  Transaction support:    no
  Schema evolution:       no
  Query language:         no
  Collections:            B-Tree
  Byte code preprocessor: none

FastObjects
  Supported languages:    Java, C#, J#, VB
  Memory allocation:      only explicit
  Transaction support:    yes
  Schema evolution:       yes
  Query language:         JDOQL
  Collections:            JDO collections: persistent analogues of the JDK collections
  Byte code preprocessor: required

Perst
  Supported languages:    Java (including JDK 1.5) and C# (Standard, Compact and Mono frameworks)
  Memory allocation:      explicit or implicit (background garbage collection)
  Transaction support:    shadow objects
  Schema evolution:       lazy schema evolution
  Query language:         no
  Collections:            various collections based on B-Tree, T-Tree and R-Tree (supporting indexing by object fields, user-defined compare methods, and range queries)
  Byte code preprocessor: none; integration with AspectJ and JAssist is possible to provide transparent persistence

To compare performance, a very simple test was ported to all of the systems (the source code can be found in the appendix). The TestIndex benchmark measures the performance of basic operations: storing/fetching objects and locating objects using an index. The test consists of three steps:

  1. Create a specified number of records with random long integer and string keys and include them in long integer and string indices. After all records are inserted, a commit is performed.
  2. Search for each record using both its long integer and string keys.
  3. Search for each record in the indices, remove it from them, and deallocate it from storage.

Time to execute each step is measured in milliseconds. The number of records in each case is the same - 100,000. All tests were executed on the same computer: AMD Athlon 64 (3200+), 1.5Gb RAM, WinXP, and Sun Java JDK version 1.4.2_04.

Product               Language   Create    Search    Remove
Perst                 Java         3 775     1 683     3 275
Perst                 C#           4 446     2 403     3 975
ObjectStore PSE Pro   Java         8 272     9 413     3 104
FastObjects J2        Java        13 399    10 856    38 435
FastObjects.Net       C#          43 012     2 714     7 461
db4o 4.0              Java        18 457     6 279    38 886
db4o 4.0              C#          31 725    41 099    88 517
Berkeley DB JE (*)    Java        15 513    10 755    12 758
JISP                  Java       350 674   343 063   527 248
(*) JE had to be started with the -Xmx128M option to avoid memory overflow. Note also that closing the database took an additional 64 seconds, not included in the figures above.


Appendix

TestIndex sources for ObjectStore PSE Pro

package com.odi.demo.testindex;

import com.odi.*;
import com.odi.util.*;
import java.io.*;
import java.util.*;

public class TestIndex implements ObjectStoreConstants { 
    String strKey;
    long   intKey;

    final static int nRecords = 100000;

    static public void main(String[] args) throws Exception {    
        Session session = null;
        session = Session.create(null, null);
        session.join();

        Database db = Database.create("testindex.odb", ALL_READ | ALL_WRITE);
        Transaction.begin(UPDATE);
        try { 
            db.createRoot("intIndex", new OSTreeMapLong(db));
            db.createRoot("strIndex", new OSTreeMapString(db));
        } catch (DatabaseRootAlreadyExistsException x) {}
        Transaction.current().commit();

        long start = System.currentTimeMillis();
        Transaction.begin(UPDATE);
        OSTreeMapLong intIndex = (OSTreeMapLong)db.getRoot("intIndex");
        OSTreeMapString strIndex  = (OSTreeMapString)db.getRoot("strIndex");

        long key = 1999;
        int i;
        for (i = 0; i != nRecords; i++) { 
            TestIndex rec = new TestIndex();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);
            intIndex.put(rec.intKey, rec);                
            strIndex.put(rec.strKey, rec);                
        }
        Transaction.current().commit();
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        Transaction.begin(READONLY);
        intIndex = (OSTreeMapLong)db.getRoot("intIndex");
        strIndex  = (OSTreeMapString)db.getRoot("strIndex");
        key = 1999;
        for (i = 0; i != nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            TestIndex rec1 = (TestIndex)intIndex.get(key);
            TestIndex rec2 = (TestIndex)strIndex.get(Long.toString(key));
            if (rec1 == null || rec1 != rec2) { 
                throw new Error("Failed to locate record");
            }
        }
        Transaction.current().commit();
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        Transaction.begin(UPDATE);
        intIndex = (OSTreeMapLong)db.getRoot("intIndex");
        strIndex  = (OSTreeMapString)db.getRoot("strIndex");
        key = 1999;
        for (i = 0; i != nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            TestIndex rec = (TestIndex)intIndex.get(key);
            intIndex.remove(key);
            strIndex.remove(Long.toString(key));
            ObjectStore.destroy(rec);
        }
        Transaction.current().commit();
        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        db.close();
    }
}

TestIndex Java sources for db4o

import com.db4o.config.*;
import com.db4o.query.*;
import com.db4o.*;

import java.io.*;
import java.util.*;


public class TestIndex {
    
    public static class Record { 
        String strKey;
        long   intKey;
    };

    public static class Assert { 
        public static void that(boolean condition) { 
            if (!condition) { 
                throw new Error("Assertion failed");
            }
        }
    }
    
    static final String FILE = "testindex.yap";
    
    final static int nRecords = 100000;

    static public void main(String[] args) {
        
        new File(FILE).delete();
        
        Configuration conf = Db4o.configure();
        
        conf.objectClass(Record.class).objectField("strKey").indexed(true);
        conf.objectClass(Record.class).objectField("intKey").indexed(true);

        conf.weakReferences(false);
        conf.discardFreeSpace(Integer.MAX_VALUE);
        conf.automaticShutDown(false);
        conf.lockDatabaseFile(false);
        
        ObjectContainer db = Db4o.openFile(FILE);

        long start = System.currentTimeMillis();
        long key = 1999;
        int i;        
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);
            db.set(rec);
        }
        db.commit();
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Query q = db.query();
            q.constrain(Record.class);
            q.descend("intKey").constrain(new Long(key));
            Record rec1 = (Record)q.execute().next();
            q = db.query();
            q.constrain(Record.class);
            q.descend("strKey").constrain(Long.toString(key));
            Record rec2 = (Record)q.execute().next();
            Assert.that(rec1 != null && rec1 == rec2);
        }
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();

        Query q = db.query();
        q.constrain(Record.class);
        ObjectSet objectSet = q.execute();
        
        while(objectSet.hasNext()){
            db.delete(objectSet.next());
        }
        db.commit();
        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        db.close();
    }
}

TestIndex CSharp sources for db4o

using System;
using System.Collections;
using System.Diagnostics;

using j4o.lang;
using com.db4o;
using com.db4o.query;
using com.db4o.config;

class Record { 
    internal String strKey;
    internal long   intKey;
};

public class TestIndex { 
     const int nRecords = 100000;

    static public void Main(string[] args) {    
        Query q;
        Configuration conf = Db4o.configure();
        
        conf.objectClass(typeof(Record)).objectField("strKey").indexed(true);
        conf.objectClass(typeof(Record)).objectField("intKey").indexed(true);
        
        conf.weakReferences(false);
        conf.discardFreeSpace(int.MaxValue);
        conf.automaticShutDown(false);
        conf.lockDatabaseFile(false);


        ObjectContainer db = Db4o.openFile("testindex.yap");

        DateTime start = DateTime.Now;
        long key = 1999;
        int i;        
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Convert.ToString(key);
            db.set(rec);
        }
        db.commit();
        Console.WriteLine("Elapsed time for inserting " + nRecords + " records: " 
                           + (DateTime.Now - start) + " milliseconds");

        start = DateTime.Now;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            q = db.query();
            q.constrain(typeof(Record));
            q.descend("intKey").constrain(key);
            Record rec1 = (Record)q.execute().next();
            q = db.query();
            q.constrain(typeof(Record));
            q.descend("strKey").constrain(Convert.ToString(key));
            Record rec2 = (Record)q.execute().next();
            Debug.Assert(rec1 != null && rec1 == rec2);
        }
        Console.WriteLine("Elapsed time for performing " + nRecords*2 + " index searches: " + (DateTime.Now - start));
 
        start = DateTime.Now;
        
        q = db.query();
        q.constrain(typeof(Record));
        ObjectSet objectSet = q.execute();

        while(objectSet.hasNext()){
            db.delete(objectSet.next());
        }
        db.commit();
        Console.WriteLine("Elapsed time for deleting " + nRecords + " records: " + (DateTime.Now - start));
        db.close();
    }
}

TestIndex Java sources for Perst

import org.garret.perst.*;

import java.util.*;

class Record extends Persistent { 
    String strKey;
    long   intKey;
};

class Indices extends Persistent {
    Index strIndex;
    Index intIndex;
}

public class TestIndex { 
    final static int nRecords = 100000;
    static int pagePoolSize = 32*1024*1024;

    static public void main(String[] args) {    
        Storage db = StorageFactory.getInstance().createStorage();
        db.open("testidx.dbs", pagePoolSize);
        Indices root = (Indices)db.getRoot();
        if (root == null) { 
            root = new Indices();
            root.strIndex = db.createIndex(String.class, true);
            root.intIndex = db.createIndex(long.class, true);
            db.setRoot(root);
        }
        Index intIndex = root.intIndex;
        Index strIndex = root.strIndex;
        long start = System.currentTimeMillis();
        long key = 1999;
        int i;        
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);
            intIndex.put(new Key(rec.intKey), rec);                
            strIndex.put(new Key(rec.strKey), rec);                
        }
        db.commit();
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");

        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec1 = (Record)intIndex.get(new Key(key));
            Record rec2 = (Record)strIndex.get(new Key(Long.toString(key)));
            Assert.that(rec1 != null && rec1 == rec2);
        }
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec = (Record)intIndex.get(new Key(key));
            Record removed = (Record)intIndex.remove(new Key(key));
            Assert.that(removed == rec);
            strIndex.remove(new Key(Long.toString(key)));
            rec.deallocate();
        }

        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        db.close();
    }
}

TestIndex CSharp sources for Perst

using System;
using System.Collections;
using Perst;
using System.Diagnostics;

public class Record:Persistent
{
    public string strKey;
    public long intKey;
}


public class Root:Persistent
{
    public Index strIndex;
    public Index intIndex;
}

public class TestIndex
{
    internal const int nRecords = 100000;
    internal static int pagePoolSize = 32 * 1024 * 1024;
	
    static public void  Main(string[] args)
    {
        int i;
        Storage db = StorageFactory.Instance.CreateStorage();
        db.Open("testidx.dbs", pagePoolSize);

        Root root = (Root) db.Root;
        if (root == null)
        {
            root = new Root();
            root.strIndex = db.CreateIndex(typeof(String), true);
            root.intIndex = db.CreateIndex(typeof(long), true);
            db.Root = root;
        }
        Index intIndex = root.intIndex;
        Index strIndex = root.strIndex;
        DateTime start = DateTime.Now;
        long key = 1999;
        for (i = 0; i != nRecords; i++)
        {
            Record rec = new Record();
            key = (3141592621L * key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Convert.ToString(key);
            intIndex.Put(new Key(rec.intKey), rec);
            strIndex.Put(new Key(rec.strKey), rec);
        }
        db.Commit();
        Console.WriteLine("Elapsed time for inserting " + nRecords + " records: " + (DateTime.Now - start));
		
        start = DateTime.Now;
        key = 1999;
        for (i = 0; i != nRecords; i++)
        {
            key = (3141592621L * key + 2718281829L) % 1000000007L;
            Record rec1 = (Record) intIndex.Get(new Key(key));
            Record rec2 = (Record) strIndex.Get(new Key(Convert.ToString(key)));
            Debug.Assert(rec1 != null && rec1 == rec2);
        }     
        Console.WriteLine("Elapsed time for performing " + nRecords * 2 + " index searches: " + (DateTime.Now - start));
        start = DateTime.Now;
        key = 1999;
        for (i = 0; i != nRecords; i++)
        {
            key = (3141592621L * key + 2718281829L) % 1000000007L;
            Record rec = (Record) intIndex.Get(new Key(key));
            Record removed = (Record)intIndex.Remove(new Key(key));
            Debug.Assert(removed == rec);
            strIndex.Remove(new Key(Convert.ToString(key)), rec);
            rec.Deallocate();
        }
        Console.WriteLine("Elapsed time for deleting " + nRecords + " records: " + (DateTime.Now - start));
        db.Close();
    }
}

TestIndex Java sources for Berkeley DB JE

import java.io.File;
import java.io.Serializable;

import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.serial.*;
import com.sleepycat.bind.tuple.*;
import com.sleepycat.je.*;

class TestIndex {
    final static int nRecords = 100000;

    public static void main(String argv[]) throws DatabaseException {
        Cursor cursor;
        OperationStatus status;
        File envHomeDirectory = new File("dbenv");

        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setTransactional(true); 
        envConfig.setAllowCreate(true);    
        Environment exampleEnv = new Environment(envHomeDirectory, envConfig);
        
        Transaction txn = exampleEnv.beginTransaction(null, null);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setTransactional(true); 
        dbConfig.setAllowCreate(true);
        Database exampleDb = exampleEnv.openDatabase(txn, "bindingsDb", dbConfig);

        DatabaseConfig catalogConfig = new DatabaseConfig();
        catalogConfig.setTransactional(true); 
        catalogConfig.setAllowCreate(true);
        Database catalogDb = exampleEnv.openDatabase(txn, 
                                                     "catalogDb",
                                                     catalogConfig);
        StoredClassCatalog catalog = new StoredClassCatalog(catalogDb);
        
        EntryBinding keyBinding =
            TupleBinding.getPrimitiveBinding(Long.class);

        EntryBinding dataBinding = new SerialBinding(catalog, Record.class);
        
        EntryBinding secKeyBinding = TupleBinding.getPrimitiveBinding(String.class);

        SecondaryConfig secConfig = new SecondaryConfig();
        secConfig.setTransactional(true); 
        secConfig.setAllowCreate(true);
        secConfig.setSortedDuplicates(true);
        secConfig.setKeyCreator(new MyKeyCreator(secKeyBinding, dataBinding));
        SecondaryDatabase exampleSecDb = exampleEnv.openSecondaryDatabase(txn, 
                                                     "bindingsSecDb",
                                                     exampleDb,
                                                     secConfig);
        txn.commit();

        DatabaseEntry keyEntry = new DatabaseEntry();
        DatabaseEntry dataEntry = new DatabaseEntry();


        long start = System.currentTimeMillis();
        long key = 1999;
        int i;        
        txn = exampleEnv.beginTransaction(null, null);
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);

            LongBinding.longToEntry(key, keyEntry);
            dataBinding.objectToEntry(rec, dataEntry);

            status = exampleDb.put(txn, keyEntry, dataEntry);
            if (status != OperationStatus.SUCCESS) {
                throw new DatabaseException("Data insertion got status " + status);
            }
        }
        txn.commit();
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");

        start = System.currentTimeMillis();
        key = 1999;
        txn = exampleEnv.beginTransaction(null, null);
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            String strKey = Long.toString(key);
            StringBinding.stringToEntry(strKey, keyEntry);
            status = exampleSecDb.get(txn, keyEntry, dataEntry, LockMode.DEFAULT);
            if (status != OperationStatus.SUCCESS) {
                throw new DatabaseException("String key search failed with status " + status);
            }
            Record r1 = (Record)dataBinding.entryToObject(dataEntry);
            if (r1.intKey != key || !r1.strKey.equals(strKey)) { 
                throw new DatabaseException("Wrong string key");
            }

            LongBinding.longToEntry(key, keyEntry);
            status = exampleDb.get(txn, keyEntry, dataEntry, LockMode.DEFAULT);
            if (status != OperationStatus.SUCCESS) {
                throw new DatabaseException("Long key search failed with status " + status);
            }
            Record r2 = (Record)dataBinding.entryToObject(dataEntry);
            if (r2.intKey != key || !r2.strKey.equals(strKey)) { 
                throw new DatabaseException("Wrong long key");
            }            
        }
        txn.commit();
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");


        start = System.currentTimeMillis();
        txn = exampleEnv.beginTransaction(null, null);        
        cursor = exampleDb.openCursor(null, null);
        key = Long.MIN_VALUE;
        for (i = 0; cursor.getNext(keyEntry, dataEntry, LockMode.DEFAULT) == OperationStatus.SUCCESS; i++) {
            Record rec = (Record)dataBinding.entryToObject(dataEntry);
            if (rec.intKey < key) {
                throw new DatabaseException("Wrong long key order");
            }
            key = rec.intKey;
        }
        cursor.close();

        cursor = exampleSecDb.openCursor(null, null);
        String strKey = "";
        for (i = 0; cursor.getNext(keyEntry, dataEntry, LockMode.DEFAULT) == OperationStatus.SUCCESS; i++) {
            Record rec = (Record)dataBinding.entryToObject(dataEntry);
            if (rec.strKey.compareTo(strKey) < 0) {
                throw new DatabaseException("Wrong string key order");
            }
            strKey = rec.strKey;
        }
        cursor.close();
        txn.commit();
        System.out.println("Elapsed time for iterating through " + (nRecords*2) + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
 

        start = System.currentTimeMillis();
        txn = exampleEnv.beginTransaction(null, null);        
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            LongBinding.longToEntry(key, keyEntry);
            status = exampleDb.delete(txn, keyEntry);
            if (status != OperationStatus.SUCCESS) {
                throw new DatabaseException("Long key search failed with status " + status);
            }
        }
        txn.commit();
        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        catalogDb.close();
        exampleSecDb.close();
        exampleDb.close();
        exampleEnv.close();
        System.out.println("Elapsed time for database close " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
    }

    private static class Record implements Serializable {
        public long   intKey;
        public String strKey;
    }

    private static class MyKeyCreator implements SecondaryKeyCreator {

        private EntryBinding secKeyBinding;
        private EntryBinding dataBinding;

        MyKeyCreator(EntryBinding secKeyBinding, EntryBinding dataBinding) {

            this.secKeyBinding = secKeyBinding;
            this.dataBinding = dataBinding;
        }

        public boolean createSecondaryKey(SecondaryDatabase secondaryDb,
                                          DatabaseEntry keyEntry,
                                          DatabaseEntry dataEntry,
                                          DatabaseEntry resultEntry)
            throws DatabaseException 
        {

            Record rec = (Record) dataBinding.entryToObject(dataEntry);
            secKeyBinding.objectToEntry(rec.strKey, resultEntry);
            return true;
        }
    }
}

TestIndex Java sources for JISP

import java.io.*;
import java.util.*;
import com.coyotegulch.jisp.*;

public class TestIndex { 
    final static int nRecords = 100000;
    final static int btreeOrder = 33;

    static class Record implements Serializable {
        public long   intKey;
        public String strKey;
    }

    static public void main(String[] args) throws Exception {    
        IndexedObjectDatabase database = new IndexedObjectDatabase("testindex", true);
        BTreeIndex intIndex = new BTreeIndex("intindex", btreeOrder, false);
        BTreeIndex strIndex = new BTreeIndex("strindex", btreeOrder, false);
        database.attachIndex(intIndex);
        database.attachIndex(strIndex);
 
        long start = System.currentTimeMillis();
        long key = 1999;
        int i;        
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);
            OrderedObject [] keyArray = new OrderedObject[2];
            keyArray[0] = new LongKey(rec.intKey);
            keyArray[1] = new StringKey(rec.strKey);
            database.write(keyArray, rec);
        }
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");

        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            String strKey = Long.toString(key);
            Vector result = database.read(new LongKey(key), intIndex);
            if (result.size() != 1) { 
                throw new Error("Long key search failed");
            }            
            Record rec = (Record)result.get(0);
            if (rec.intKey != key || !strKey.equals(rec.strKey)) {                
                throw new Error("Long key search failed");
            }            
            result = database.read(new StringKey(Long.toString(key)), strIndex);
            if (result.size() != 1) { 
                throw new Error("String key search failed");
            }            
            rec = (Record)result.get(0);
            if (rec.intKey != key || !strKey.equals(rec.strKey)) {                
                throw new Error("String key search failed");
            }          
        }
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
 
        start = System.currentTimeMillis();
        BTreeIterator iterator = new BTreeIterator(intIndex);
        key = Long.MIN_VALUE;
        i = 0; 
        do { 
            Vector result = database.read(iterator);
            if (result.size() != 1) { 
                throw new Error("Long key search failed");
            }            
            Record rec = (Record)result.get(0);
            if (rec.intKey < key) { 
                throw new Error("Invalid long key order");
            }
            key = rec.intKey;
            i++;
        } while (iterator.moveNext());
                
        if (i != nRecords) { 
            throw new Error("Incorrect number of records: " + i);
        }
            
        String strKey = "";
        iterator = new BTreeIterator(strIndex);
        i = 0;
        do { 
            Vector result = database.read(iterator);
            if (result.size() != 1) { 
                throw new Error("Long key search failed");
            }            
            Record rec = (Record)result.get(0);
            if (rec.strKey.compareTo(strKey) < 0) { 
                throw new Error("Invalid long key order");
            }
            strKey = rec.strKey;
            i++;
        } while (iterator.moveNext());

        if (i != nRecords) { 
            throw new Error("Incorrect number of records: " + i);
        }
        System.out.println("Elapsed time for iterating through " + (nRecords*2) + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L; 
            database.remove(new LongKey(key), intIndex);
        }
        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
    }
}

TestIndex Java sources for FastObjects

import java.util.*;
import javax.jdo.*;
import com.poet.jdo.admin.DatabaseAdministration;

public class TestIndex { 
    final static int nRecords = 100000;

    static public void main(String[] args) throws Exception {    
        Properties properties = new Properties();
        properties.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                               "com.poet.jdo.PersistenceManagerFactories");
        properties.setProperty("javax.jdo.option.ConnectionURL", "fastobjects://LOCAL/MyBase");
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
        
        PersistenceManager pm = null;
        try
        {
            pm = pmf.getPersistenceManager();
        } 
        catch ( JDOUserException e )
        {
            java.util.Properties prop = new java.util.Properties();
            prop.put("schema","MySchema");
            DatabaseAdministration.create(pmf.getConnectionURL(), prop);
            pm = pmf.getPersistenceManager();
        }

        pm.currentTransaction().begin();
        long start, key;
        Map intIndex, strIndex;
        int i;

        Indices root = null;
        Extent extent = pm.getExtent(Indices.class, false);
        Iterator iterator = extent.iterator();
        if (iterator.hasNext()) {
            root = (Indices)iterator.next();
        } else { 
            root = new Indices();
            root.strIndex = new TreeMap();
            root.intIndex = new TreeMap();
            pm.makePersistent(root);
        }
        extent.close(iterator);
        pm.currentTransaction().commit();

        pm.currentTransaction().begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        
        start = System.currentTimeMillis();
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Long.toString(key);
            intIndex.put(new Long(rec.intKey), rec);                
            strIndex.put(rec.strKey, rec);                
        }
        pm.currentTransaction().commit();
        System.out.println("Elapsed time for inserting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");

        start = System.currentTimeMillis();
        pm.currentTransaction().begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec1 = (Record)intIndex.get(new Long(key));
            Record rec2 = (Record)strIndex.get(Long.toString(key));
            if (rec1 == null || rec1 != rec2 || rec1.intKey != key) { 
                throw new Error("Index search failed");
            }
        }
        pm.currentTransaction().commit();
        System.out.println("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        pm.currentTransaction().begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        iterator = intIndex.values().iterator();
        key = Long.MIN_VALUE;
        for (i = 0; iterator.hasNext(); i++) { 
            Record rec = (Record)iterator.next();
            if (rec.intKey < key) { 
                throw new Error("Invalid order for long key");
            }
            key = rec.intKey;
        }
        if (i != nRecords) { 
            throw new Error("Invalid number of records");
        }
        iterator = strIndex.values().iterator();
        String strKey = "";
        for (i = 0; iterator.hasNext(); i++) { 
            Record rec = (Record)iterator.next();
            if (rec.strKey.compareTo(strKey) < 0) { 
                throw new Error("Invalid order for string key");
            }
            strKey = rec.strKey;
        }
        if (i != nRecords) { 
            throw new Error("Invalid number of records");
        }
        pm.currentTransaction().commit();
        System.out.println("Elapsed time for iterating through " + (nRecords*2) + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        
        start = System.currentTimeMillis();
        pm.currentTransaction().begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec = (Record)intIndex.get(new Long(key));
            Record removed = (Record)intIndex.remove(new Long(key));
            if (removed != rec) { 
                throw new Error("Remove operation is failed");
            }
            strIndex.remove(Long.toString(key));
            pm.deletePersistent(rec);
        }
        pm.currentTransaction().commit();
        System.out.println("Elapsed time for deleting " + nRecords + " records: " 
                           + (System.currentTimeMillis() - start) + " milliseconds");
        pm.close();
    }
}

TestIndex CSharp sources for FastObjects.Net

using System;
using System.Collections;
using FastObjects;
using System.Diagnostics;

[Persistent(HasExtent=false)]
public class Record
{
    public long intKey;
    public string strKey;
}

[Persistent]
public class Indices
{
    [ItemType(typeof(long), typeof(Record))]
    public SortedList intIndex;
    [ItemType(typeof(string), typeof(Record))]
    public SortedList strIndex;
}

public class TestIndex { 
    const int nRecords = 100000;


    static public void Main(string[] args) 
    {    
        IObjectScope os = Database.Get("FastObjects://LOCAL/MyBase").GetObjectScope();

        os.Transaction.Begin();
        DateTime start;
        long key;
        SortedList intIndex, strIndex;
        int i;

        Indices root = null;
        IExtent extent = os.GetExtent(typeof(Indices));
        IDBObjectEnumerator iterator = extent.GetEnumerator();
        if (iterator.MoveNext()) {
            root = (Indices)iterator.Current;
        } else { 
            root = new Indices();
            root.strIndex = new SortedList();
            root.intIndex = new SortedList();
            os.Add(root);
        }
        iterator.Dispose();
        os.Transaction.Commit();

        os.Transaction.Begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        
        start = DateTime.Now;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            Record rec = new Record();
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            rec.intKey = key;
            rec.strKey = Convert.ToString(key);
            intIndex[rec.intKey] = rec;                
            strIndex[rec.strKey] = rec;                
        }
        os.Transaction.Commit();
        Console.WriteLine("Elapsed time for inserting " + nRecords + " records: " + (DateTime.Now - start));

        start = DateTime.Now;
        os.Transaction.Begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec1 = (Record)intIndex[key];
            Record rec2 = (Record)strIndex[Convert.ToString(key)];
            Debug.Assert(rec1 != null && rec1 == rec2 && rec1.intKey == key);
        }
        os.Transaction.Commit();
        Console.WriteLine("Elapsed time for performing " + nRecords*2 + " index searches: " 
                           + (DateTime.Now - start));
        
        start = DateTime.Now;
        os.Transaction.Begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        key = Int64.MinValue;
        i = 0;
        foreach (Record rec in intIndex.Values) {
            Debug.Assert(rec.intKey >= key);
            key = rec.intKey;
            i += 1;
        }
        Debug.Assert(i == nRecords);
        string strKey = "";
        i = 0;
        foreach (Record rec in strIndex.Values) {
            Debug.Assert(rec.strKey.CompareTo(strKey) >= 0);
            strKey = rec.strKey;
            i += 1;
        }
        Debug.Assert(i == nRecords);
        os.Transaction.Commit();
        Console.WriteLine("Elapsed time for iterating through " + (nRecords*2) + " records: " + (DateTime.Now - start));
        
        start = DateTime.Now;
        os.Transaction.Begin();
        intIndex = root.intIndex;
        strIndex = root.strIndex;
        key = 1999;
        for (i = 0; i < nRecords; i++) { 
            key = (3141592621L*key + 2718281829L) % 1000000007L;
            Record rec = (Record)intIndex[key];
            intIndex.Remove(key);
            strIndex.Remove(Convert.ToString(key));
            os.Remove(rec);
        }
        os.Transaction.Commit();
        Console.WriteLine("Elapsed time for deleting " + nRecords + " records: " + (DateTime.Now - start));
        os.Dispose();
    }
}