Distributed Information System (DIS)

Time value encoding in DIS

5/26/2008


One fundamental question is the encoding of a time value. A time value has two types of use: one as a time stamp, the other as a general time reference.

Requirements

On one hand, a time stamp requires a well defined and controlled precision, while the covered time span may be limited (e.g. +/- 200 years). On the other hand, a general time reference needs to cover a very large time span, with fewer constraints on precision.

Options

For the general time reference value one could use a double precision float with seconds as units. All arithmetic operations come right out of the box and are generally hardwired in the processor. Conversion to calendar time is trivial: one simply extracts the integer part of the value and converts it to a time_t value. From there one can use the common calendar time conversion and formatting functions.
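As an illustration, here is a minimal C++ sketch of that conversion, assuming the double holds seconds relative to the usual January 1970 epoch (the value used is made up):

    #include <cmath>
    #include <cstdio>
    #include <ctime>

    int main() {
        double ref = 1211760000.25;        // hypothetical time reference value
        double ip;
        double frac = std::modf(ref, &ip); // split integer and fractional parts
        time_t secs = static_cast<time_t>(ip);
        char buf[32];
        std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", std::gmtime(&secs));
        std::printf("%s +%.3f s\n", buf, frac);
        return 0;
    }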

For time stamps, using integers seems preferable. But we still have a choice between a split encoding like the timeval structure, a 64-bit fixed point encoding, or an integer with a very small time unit (e.g. nanoseconds).

Discussion

There is not much to discuss about the general time reference: using a double precision float is an optimal solution. For time stamps, however, we have three candidate solutions.

From my experience, a split time encoding like the timeval structure is not convenient when doing time arithmetic. It is even error prone if the user has to program the operations himself.
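To illustrate, here is the kind of carry handling that even a simple timeval addition needs on a POSIX system; forgetting the normalization step is a classic mistake, and negative values require yet more care:

    #include <sys/time.h>

    timeval add(timeval a, timeval b) {
        timeval r;
        r.tv_sec  = a.tv_sec  + b.tv_sec;
        r.tv_usec = a.tv_usec + b.tv_usec;
        if (r.tv_usec >= 1000000) { // carry overflowed microseconds into seconds
            r.tv_sec  += 1;
            r.tv_usec -= 1000000;
        }
        return r;
    }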

I also tried to implement a fixed point time encoding class with the binary point between bits 29 and 30. But this is tricky to get right and some operations are not trivial to implement correctly, because fractional computation requires normalization and careful handling of rounding errors.

A 64-bit integer using nanoseconds as time unit is apparently the simplest and most straightforward time stamp encoding. Converting to seconds is a simple 64-bit integer division, which is also hardwired in most recent processors. Conversion to other time units like microseconds, milliseconds, days or weeks is just as accurate and simple. Multiplication or division by scalar values is also trivial.

Another advantage of 64-bit integer nanosecond values is that no special functions are needed for conversions or operations. A programmer can easily figure out what to do using conventional arithmetic operations.
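A short sketch of what this looks like in practice (the time stamp value is made up):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const int64_t NS_PER_SEC = 1000000000LL;
        int64_t t = 1211760000LL * NS_PER_SEC + 250000000LL; // hypothetical stamp
        int64_t secs = t / NS_PER_SEC;             // to seconds
        int64_t ms   = t / 1000000LL;              // to milliseconds
        int64_t days = t / (86400LL * NS_PER_SEC); // to days
        int64_t in5s = t + 5 * NS_PER_SEC;         // a 5 second delay
        std::printf("%lld s  %lld ms  %lld days  %lld\n",
                    (long long)secs, (long long)ms, (long long)days,
                    (long long)in5s);
        return 0;
    }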

With a 64-bit signed integer and nanosecond units, the covered time span is over a +/- 292 years range (2^63 ns ≈ 9.2e9 s, or about 292.5 years). One can thus afford to keep the current time_t January 1970 epoch and push the wrapping limit far away.

Conclusion

In DIS, we'll thus use a double precision float for general time reference values and a 64-bit integer with nanosecond units for time stamp and delay encoding.

Note: I've seen the use of a double precision float for time encoding in some Windows operating system APIs. I have yet to see the use of a 64-bit signed integer with nanosecond units. It would make sense as an upgrade of time_t, which is needed since we are getting close to its wrapping limit.

Update: It has been brought to my attention that Java stores time values in a signed 64-bit integer with milliseconds as time unit, relative to January 1, 1970. The covered time span is thus about +/- 292 million years. I'll stay with nanosecond units for time stamps.


Progress status and cardinality encoding....

5/12/2008


It is time for a new communication duty on my project. Progress is steady but not as fast as I would like. I now use the latest version of libgc. I spent most of my time last month searching for the source of a major memory leak. I finally found out that it was caused by the STL containers. I changed the code to use gc_allocator; even strings had to be changed. Now the client and server run without any memory leak. I thought of changing language (e.g. to D) but I didn't want to cut myself off from the C++ community as potential users. So I had to sort out the problem, and I finally did.
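For the record, here is roughly what the container change looks like with libgc's gc_allocator; this is a minimal sketch, not the actual DIS code:

    #include <gc/gc.h>
    #include <gc/gc_allocator.h>
    #include <string>
    #include <vector>

    // Strings must be re-declared so that their character buffer is
    // allocated from the collected heap as well.
    typedef std::basic_string<char, std::char_traits<char>,
                              gc_allocator<char> > gc_string;

    int main() {
        GC_INIT();
        std::vector<int, gc_allocator<int> > v; // storage comes from the GC heap
        v.push_back(42);
        gc_string s("collector managed string");
        return 0;
    }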

The client-service communication model is now finalized and under test. The system is ready to support transmitted data processing (compression, authentication and encryption).

In this note I'll explain the encoding of a key data type I call cardinality. A cardinality is the number of elements in a sequence. Because of its efficient encoding, I extended its use to other types of information. What makes a cardinality different from a classical unsigned integer is that small values are much more frequent than big values. Consider strings, for instance: most strings are less than 256 bytes long, but from time to time big strings show up.

We could thus benefit from an encoding that is compact for small values, even if it grows longer for big values. I spent some time investigating all the possible encodings I could find, and the most interesting ones are in BER and in ICE.

BER's cardinality encoding

BER stands for Basic Encoding Rules and is used in SNMP and X.509 certificate encoding. In this encoding the cardinality value is chopped into 7-bit chunks, stored in as many bytes in big endian order. Leading bytes with value 0 are dropped. The most significant bit is set to one in every byte except the last, where it is set to 0 to signal the end of the value.
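Here is a minimal sketch of this encoding in C++ (no bounds checking, for illustration only):

    #include <cstdint>
    #include <vector>

    void encode(uint64_t v, std::vector<uint8_t>& out) {
        uint8_t tmp[10];
        int n = 0;
        do {
            tmp[n++] = v & 0x7F; // collect 7-bit chunks, least significant first
            v >>= 7;
        } while (v != 0);
        while (n-- > 1)
            out.push_back(tmp[n] | 0x80); // big endian, continuation bit set
        out.push_back(tmp[0]);            // last byte has its high bit clear
    }

    uint64_t decode(const uint8_t*& p) {
        uint64_t v = 0;
        while (*p & 0x80)
            v = (v << 7) | (*p++ & 0x7F);
        return (v << 7) | *p++;
    }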

I implemented a BER encoder and decoder a long time ago and, as attractive as this encoding might be, it is not trivial to implement and requires bit manipulation on every byte, which makes it suboptimal in terms of performance.

ICE's cardinality encoding

ICE is an inter-object communication protocol which is really worth looking at, and possibly using until DITP is available ;). Check their web site for more information.

ICE was apparently initially inspired by IIOP, which simply doesn't have a cardinality: sequence lengths are encoded as straight 4 byte integers. But ICE encodes class and method names as strings, which are generally short, so the overhead of the 4 byte encoding was very visible. The designers of ICE's encoding therefore added a small change that significantly increased its efficiency.

If the cardinality value is less than 255, the value is stored in a single byte. Otherwise a byte with the value 255 is written, followed by the 4 byte value encoding the cardinality. Encoding and decoding is trivial and much more efficient than the BER encoding. It is not as compact for values greater than or equal to 255, but the overhead is insignificant when considering the amount of data that follows (i.e. the string).
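A sketch of this encoding, assuming little endian byte order for the 4 byte value (the actual byte order is defined by the ICE specification):

    #include <cstdint>
    #include <vector>

    void encodeSize(uint32_t v, std::vector<uint8_t>& out) {
        if (v < 255) {
            out.push_back(static_cast<uint8_t>(v)); // common case: one byte
        } else {
            out.push_back(255);                     // escape marker
            for (int i = 0; i < 4; ++i)
                out.push_back(static_cast<uint8_t>(v >> (8 * i)));
        }
    }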

IDR's cardinality encoding

IDR extends the idea found in ICE by supporting 8 byte encoding and the intermediate 2 byte integer size. As in ICE, if the value is smaller than 255, the cardinality is encoded as a single byte. If it is less than 65535, it is encoded in a total of three bytes: the first byte holds the value 255 and the cardinality is stored in the following 2 bytes as a classical unsigned integer. If the value is bigger still, the same escape mechanism is applied again with a 4 byte value, and so on up to the 8 byte integer.

The encoding is as trivial and efficient as ICE's, provided we take care to detect and encode small values first. It is more compact for values up to 65535, and becomes less compact beyond that because big values have a longer encoding. But as already pointed out, this overhead is insignificant when considering the amount of data that follows.
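Here is a sketch of one possible reading of this scheme, in which each size class's maximum value serves as the escape announcing the next, wider integer; the byte order and the exact escape chaining are my assumptions, not the IDR specification:

    #include <cstdint>
    #include <vector>

    static void put(std::vector<uint8_t>& out, uint64_t v, int bytes) {
        for (int i = 0; i < bytes; ++i)
            out.push_back(static_cast<uint8_t>(v >> (8 * i)));
    }

    void encodeCardinality(uint64_t v, std::vector<uint8_t>& out) {
        if (v < 0xFF)       { put(out, v, 1); return; } // 1 byte
        put(out, 0xFF, 1);                              // escape to 2 bytes
        if (v < 0xFFFF)     { put(out, v, 2); return; } // 3 bytes total
        put(out, 0xFFFF, 2);                            // escape to 4 bytes
        if (v < 0xFFFFFFFF) { put(out, v, 4); return; } // 7 bytes total
        put(out, 0xFFFFFFFF, 4);                        // escape to 8 bytes
        put(out, v, 8);                                 // 15 bytes total
    }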

Use of cardinality in IDR

The cardinality encoding is so efficient that its use has been extended to other types of information. Here is a short list of them.

- Object references in serialized object aggregates are encoded as cardinalities, because a reference is simply the object's index in the sequence of serialized objects. The value 0 is reserved for the null reference, so the first serialized object is identified by the reference value 1. Small reference values are expected to be the most frequent.

- Methods are identified by a number instead of a string, as is commonplace with inter-object communication protocols. The encoding is much more compact and efficient to handle and, on the server side, method call dispatching becomes simple and efficient because the identifier can be used as an index into a method pointer table. Encoding the method identifier as a cardinality was an obvious choice since small values will be the norm.

- A channel can host multiple concurrent client-service connections. These bindings are identified by a unique number, starting at 1 and incrementing for each new binding. We can expect the most frequently used binding identifiers to have small values, so the cardinality encoding imposed itself here as well.

Progress may be slow, but the benefit is that taking the time to think through and explore the possible solutions for every choice yields a better thought out product.
