Distributed Information System (DIS)

Median value selection (Fixed)

12/20/2017

0 Comments

 
In 2009 I presented a heap-based median selection algorithm. It was original, and was apparently very fast when compiled with the Intel compiler (icc). Since I no longer have the Intel compiler, I can't test its performance. It is slower than the nthElement code given below when compiled with g++-7 -O3.

Here is the fixed code. It returns the median of 27 float values (each of the two heaps holds up to 13 values, the remaining value being the median candidate).

float fixedHeapMedian (float *a) {
  const unsigned char HEAP_LEN = 13;
  float left[HEAP_LEN], right[HEAP_LEN], *p, median;
  unsigned char nLeft, nRight;

  // pick first value as median candidate
  p = a;
  median = *p++;
  nLeft = nRight = 0;

  for (;;) {
    //dumpState(left, nLeft, median, right, nRight, p, 27 - (p-a));
    //assert(stateIsValid(left, nLeft, median, right, nRight));

    // get next value
    float val = *p++;

    // if value is smaller than median, append to left heap
    if (val <= median) {
      // move biggest value to the top of left heap
      unsigned char child = nLeft++, parent = (child - 1) / 2;
      while (child && val > left[parent]) {
        left[child] = left[parent];
        child = parent;
        parent = (parent - 1) / 2;
      }
      left[child] = val;

      // if left heap is full
      if (nLeft == HEAP_LEN) {
        //cout << "---" << endl;
        // for each remaining value
        for (unsigned char nVal = 27-(p - a); nVal; --nVal) {
          //dumpState(left, nLeft, median, right, nRight, p, nVal);
          //assert(stateIsValid(left, nLeft, median, right, nRight));
          // get next value
          val = *p++;
          // discard values falling in other heap
          if (val >= median) {
            continue;
          }
          // if val is bigger than biggest in heap, val is new median
          if (val >= left[0]) {
            median = val;
            continue;
          }
          // biggest heap value becomes new median
          median = left[0];
          // insert val in heap
          parent = 0;
          child = 2;
          while (child < HEAP_LEN) {
            if (left[child-1] > left[child]) {
              child = child-1;
            }
            if (val >= left[child]) {
               break;
            }
            left[parent] = left[child];
            parent = child;
            child = (parent + 1) * 2;
          }
          left[parent] = val;
        }
        return median;
      }
    } else {
      // move smallest value to the top of right heap
      unsigned char child = nRight++, parent = (child - 1) / 2;
      while (child && val < right[parent]) {
        right[child] = right[parent];
        child = parent;
        parent = (parent - 1) / 2;
      }
      right[child] = val;

      // if right heap is full
      if (nRight == HEAP_LEN) {
        //cout << "---" << endl;
        // for each remaining value
        for (unsigned char nVal = 27-(p - a); nVal; --nVal) {
          //dumpState(left, nLeft, median, right, nRight, p, nVal);
          //assert(stateIsValid(left, nLeft, median, right, nRight));
          // get next value
          val = *p++;
          // discard values falling in other heap
          if (val <= median) {
            continue;
          }
          // if val is smaller than smallest in heap, val is new median
          if (val <= right[0]) {
            median = val;
            continue;
          }
          // heap top value becomes new median
          median = right[0];
          // insert val in heap
          parent = 0;
          child = 2;
          while (child < HEAP_LEN) {
            if (right[child-1] < right[child]) {
              child = child-1;
            }
            if (val <= right[child]) {
              break;
            }
            right[parent] = right[child];
            parent = child;
            child = (parent + 1) * 2;
          }
          right[parent] = val;
        }
        return median;
      }
    }
  }
}
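
For reference, here is a minimal sketch of an nthElement style median of 27 values using std::nth_element; this is only a sketch, the exact code used for the comparison above may differ.

#include <algorithm>

// Copy the 27 values and let std::nth_element place the 14th smallest
// (index 13), which is the median, at the middle position.
float nthElementMedian (const float *a) {
  float tmp[27];
  std::copy(a, a + 27, tmp);
  std::nth_element(tmp, tmp + 13, tmp + 27);
  return tmp[13];
}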

C source code for MSB encoding and decoding

11/2/2015

1 Comment

 
For a detailed explanation see Efficiently encoding variable-length integers in C/C++.

#include <stdint.h>
#include <string.h>

// Little endian encoding
size_t encodeMSBlittleEndian(uint64_t value, uint8_t* out) {
    uint8_t *p = out;
    while (value > 127) {
        *p++ = value | 0x80;
        value >>= 7;
    }
    *p++ = value;
    return p - out;
}

// Little endian decoding
size_t decodeMSBlittleEndian(uint64_t *value, uint8_t* in) {
    // locate end of int
    uint8_t *p = in;
    while (*p++ & 0x80);
    size_t size = p - in;
    //decode int
    uint64_t ret = 0;
    do {
      ret = (ret << 7) | (*--p & 0x7F);
    } while (p != in);
    *value = ret;
    return size;
}

Note that little endian encoding makes encoding fast but requires more work to decode. When encoding the integer once and decoding it many times, big endian encoding should be favored.


// Big endian encoding
size_t encodeMSBbigEndian(uint64_t value, uint8_t* out) {
    uint8_t buf[10], *p = buf + 10;  // a 64 bit value may need up to 10 bytes (ceil(64/7))
    *--p = value & 0x7F;
    while (value >>= 7) {
      *--p = value | 0x80;
    }
    size_t size = buf + 10 - p;
    memcpy(out, p, size);
    return size;
}

// Big endian decoding
size_t decodeMSBbigEndian(uint64_t *value, uint8_t* in) {
    uint8_t *p = in;
    uint64_t ret = *p & 0x7F;
    while (*p & 0x80) {
        ret = (ret << 7) | (*++p & 0x7F);
    }
    *value = ret;
    return p - in + 1;
}
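
Here is a small round trip check, as a sketch of how the functions above can be used. The 10 byte buffer is the maximum size of an encoded 64 bit value.

#include <assert.h>
#include <stdio.h>

int main(void) {
    uint64_t value = 300, decoded;
    uint8_t buf[10];   // 10 bytes are enough for any 64 bit value

    size_t n = encodeMSBlittleEndian(value, buf);
    assert(decodeMSBlittleEndian(&decoded, buf) == n && decoded == value);

    n = encodeMSBbigEndian(value, buf);
    assert(decodeMSBbigEndian(&decoded, buf) == n && decoded == value);

    printf("300 is encoded on %d bytes in both variants\n", (int)n);
    return 0;
}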

Presenting the Timez data type

9/21/2015

0 Comments

 
I'm currently working on an implementation of the date time stamp I described in this post, which I decided to name Timez. It seemed trivial to implement at first, but I then discovered the particular property of the system time regarding leap seconds and the problem it represents.

Before explaining the problem, let me briefly explain what a Timez is. The idea is simple but brilliant (thanks).
[Photo: Clocks displaying the local time of different locations of the world. http://sapling-inc.com]

The Timez stamp

A time stamp is generated at a particular location on the surface of the globe. It thus has a specific local time offset relative to UTC. A user in a different time zone may want to
  1. view the stamp time with the local time of the place where it was produced;
  2. sort stamps by UTC time, thus ignoring the local time offset of their origin;
  3. view the stamp time with his own local time offset;
  4. view the stamp time with the local time offset of another location in the world.
Use cases include, for instance, a web forum with messages from people in different time zones, where messages have to be sorted by time regardless of the sender's local time offset, or a messaging system like mail. ISO 8601 defines a standard ASCII time representation convenient for humans, but it is neither compact nor efficient.

The solution I came up with is to combine, into a 64 bit signed integer, a time expressed as a number of microseconds relative to an epoch and the local time offset, expressed in minutes, of the place where the stamp was generated.
The number of microseconds is a signed integer. When it is negative, it represents the number of microseconds left to elapse to reach the epoch. When positive, it is the number of microseconds elapsed since the epoch. The time range covered by this value is +/- 142 years relative to the epoch's year.

The time offset is an unsigned integer. It is the time offset in minutes plus 1024, and its range is +/- 17 hours (+/- 1023 minutes). If the bits of the time offset field are all zero, the time offset value is -1024 and the Timez value is invalid or undefined.

Up to here, it's all simple and straightforward. So I started implementing the Timez data type in C.
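
A minimal sketch of the packing in C, assuming the microsecond count goes into the 53 most significant bits and the offset plus 1024 into the 11 least significant bits, as in the encoding described in the date time stamp post below. The function names are only illustrative.

#include <stdint.h>

typedef int64_t timez_t;

// Pack a signed microsecond count relative to the Timez epoch and a local
// time offset in minutes (-1023..+1023). The shift is done on the unsigned
// representation to avoid signed overflow.
timez_t timez_make(int64_t usec, int offset_min) {
    return (timez_t)(((uint64_t)usec << 11) | (uint64_t)(offset_min + 1024));
}

// Extract the microsecond count; relies on the arithmetic right shift of
// negative values provided by common compilers.
int64_t timez_usec(timez_t t) {
    return t >> 11;
}

// Extract the local time offset in minutes; a field value of 0 means invalid.
int timez_offset(timez_t t) {
    return (int)(t & 0x7FF) - 1024;
}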

Time in Timez is without leap seconds correction

The time without leap second correction is called TAI (International Atomic Time). The GPS (Global Positioning System) time is also uncorrected by leap seconds; they differ only by their epoch. I decided that Timez refers to these clocks to avoid the problems resulting from the leap second correction.

But to my big surprise, there is actually no way in POSIX (all Unix flavors) or Windows to get the time without leap second correction, a.k.a. TAI or GPS time. The system time you get with the functions time(), gettimeofday() or clock_gettime() is the number of seconds elapsed since 1970-01-01T00:00:00 UTC minus the leap seconds.

Investigating this further, it appears that time handling on computers is actually a rabbit hole. I learned a lot in the process, but also wasted a significant amount of time. It is frustrating because it clearly results from lagging standard definitions and support.

Happily, things are changing, but only very slowly. Since Linux kernel 3.10 there is a new clock id that can be used with clock_gettime(), named CLOCK_TAI. But on my computer it currently still returns the same time as CLOCK_REALTIME. Apparently you need a version of NTP newer than 4.2.6 to get the CLOCK_TAI clock adjusted.

What is then still missing is a conversion between leap second corrected time and uncorrected time. I plan to provide such a function so that Timez can be used with operating systems that don't provide TAI or GPS time. I'll unfortunately have to hard code the table of leap seconds because there is no easy access to a dynamically updated table.

If you want to learn more about leap seconds, I suggest reading this section in Wikipedia. An interesting part is about the proposal to drop the leap second correction. I also encourage you to watch this short video on the time and time zone problem from Computerphile.

The epoch of Timez

This was a difficult decision to make. CLOCK_TAI uses the same epoch, 1970-01-01T00:00:00 UTC, as CLOCK_REALTIME. This means that a negative count of seconds covers the period before this epoch. With 64 bit integers encoding the count of seconds, this is not a problem.

With 53 bits encoding the number of microseconds, we are short. We can only cover +/- 142 years around the epoch. By picking the same epoch as CLOCK_TAI we would only have ~100 years left until the Timez time counter would wrap.

I then identified three options.
  1. Epoch = 1970-01-01T00:00:00 UTC + 2^52 microseconds: the covered time range is then from 1970 to 2254;
  2. Epoch = 2050-01-01T00:00:00 TAI: the covered time range is then from 1908 to 2192;
  3. Epoch = 1970-01-01T00:00:00 UTC + 2^52 - 2^31 * 1000000 microseconds: the covered time range is then from 1902 to 2186.

Option 1 has the advantage of pushing the wrapping limit the farthest away in the future. The disadvantage is that it can't represent times before 1970. The epoch offset is a value that is easy to remember.

Option 2 has the advantage of allowing times in the past to be represented. But the epoch offset would be an obscure magic integer corresponding to the number of microseconds between 1970 and 2050.
 
Option 3 has the advantage of covering the time span of 32 bit signed integer time_t values. Timez would thus be backward compatible with time_t values. The epoch offset is still a magic number, but one more easily obtained than that of option 2. However, conversion between corrected and uncorrected time is not well defined before 1972.

Considering the pros and cons of the different options, I chose option 1. The Timez epoch is 1970-01-01T00:00:00 UTC + 2^52 microseconds. The value 2^52 is the Timez epoch offset, in microseconds, relative to the POSIX time epoch. Note that 1970-01-01T00:00:00 UTC is 1970-01-01T00:00:10 TAI.

To convert a CLOCK_TAI value to a Timez microsecond count, use the following expression:
#include <stdint.h>
#include <time.h>

#define TIMEZ_EPOCH 0x10000000000000LL  /* 2^52 microseconds */

struct timespec tp;
if (clock_gettime(CLOCK_TAI, &tp)) { /* handle the failure */ }
int64_t t = (int64_t)tp.tv_sec * 1000000 + tp.tv_nsec / 1000 - TIMEZ_EPOCH;

hostname and hostname --fqdn mystery

2/10/2013

0 Comments

 
I have just installed a fresh Ubuntu 12.04 LTS server named home. Why not. Preparing the installation of PHP with fastcgi and nginx, inspired by this tutorial, I was puzzled by the fact that I get home with both the hostname and the hostname -f commands.

I expected to receive the fully qualified domain name when using the hostname -f command.

It took some time and manual page reading to find out that this result is normal. Indeed, the /etc/hostname file must contain the server name, not the fully qualified domain name.

The right command to use to get the fully qualified domain names is
hostname --all-fqdns, not hostname -f. There is no need to change the /etc/hosts file. It should contain
127.0.0.1       localhost
127.0.1.1       home
The 127.0.1.1 entry is weird, but it was set this way by default.

Date time stamp binary encoding

12/18/2012

2 Comments

 
[Photo: Infinite Clock II by Robbert van der Steeg]

A date time stamp is a reference in time. This post considers only date time stamps used as time references in computer systems, with a limited time span like now +/- 100 years.

It presents a binary encoding with microsecond resolution for absolute times, including time zone information, or for relative time intervals used in arithmetic time computation.



Introduction

Operating systems classically represent time as an integer value corresponding to the number of seconds elapsed since 1970-01-01 00:00. Unfortunately, the coarse time resolution and the absence of time zone information make it inconvenient to use as a time reference for worldwide communicating applications.

Rationale

The rationale of this encoding choice is to favor efficient date time comparison, local time computation, and UTC time and time zone extraction, using simple to remember and trivial operations. Arithmetic operations on time should also be straightforward.

Time zone encoding

As per ISO 8601, the international standard for date time representation, the time offset relative to UTC has a minute granularity. According to this bug report, the smallest time zone offset value relative to UTC may be -15:56:00 in Asia/Manila and the biggest 15:13:42 in America/Metlakatla. We may round this to -16:00 to +15:59. This time span represents 2 x 960 = 1920 minutes, thus 11 bits are sufficient to encode the time zone offset. The value is encoded as an unsigned integer offset by 1024: -40 minutes is encoded as 1024 - 40 = 984 and +40 minutes as 1024 + 40 = 1064. An hour is 60 minutes, so an offset of 2:04 corresponds to 2 x 60 + 4 = 124 minutes, encoded as 1024 + 124 = 1148.

Time encoding

If we use 64 bit integers, this leaves 53 bits for the time encoding. The obvious choice is to use UTC time as the universal reference and the time elapsed since 1970-01-01 00:00 in some unit to get an integer representation. This provides a well normalized and easy to remember time reference. It also simplifies conversion from the existing (old) 32 bit system time encoding. Reserving one bit as the sign bit so that a 64 bit signed integer data type can be used, we have 52 bits left. Using microsecond time units, the time value covers a year range of 1970 +/- 142. This leaves about 100 years ahead of us.

Encoding summary

The time is encoded in a 64 bit signed integer. The 53 most significant bits represent a signed time delay in microsecond time units.

When the value represents a time interval or the result of some time computation, the 11 least significant bits are 0, so that conventional signed integer arithmetic operations can be used for time computation. The only constraint is with time interval division, where the 11 least significant bits of the result must be cleared.

When the time is an absolute time, the 53 most significant bits encode the time interval relative to the 1970-01-01 00:00 UTC time. The 11 least significant bits encode the local time offset relative to UTC in minutes, offset by 1024 so that it is encoded as an unsigned integer value. The value 0 (-1024) is not a valid time offset.

Time operations

  • Testing whether a time value is an absolute time or an interval is performed by testing whether the 11 least significant bits are all 0.
  • To perform time computation, first clear the 11 least significant bits, then use conventional integer addition, subtraction and multiplication.
  • To perform time interval division, use the normal integer division operation and clear the 11 least significant bits of the result.
  • Comparing absolute times can be done as a conventional integer comparison, and the same holds for time intervals. Comparing an absolute time with a time interval doesn't make sense unless the time interval is relative to the 1970-01-01 00:00 UTC time.
  • Extracting the time zone offset in minutes is performed by clearing the 53 most significant bits and subtracting 1024 from the resulting value.
  • Conversion to double precision floats with second units is trivial and without loss of precision, but it will lack the time zone information.
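
These operations translate to a few one liners in C. The names below are only illustrative; they are not an established API.

#include <stdint.h>

typedef int64_t dts_t;  // 53 bit microsecond count + 11 bit time zone field

// A value is a time interval when its 11 least significant bits are all 0.
int dtsIsInterval(dts_t t) { return (t & 0x7FF) == 0; }

// Clear the time zone field before doing integer arithmetic on times.
dts_t dtsToInterval(dts_t t) { return t & ~(dts_t)0x7FF; }

// Time interval division: divide, then clear the 11 least significant bits.
dts_t dtsDivInterval(dts_t interval, int64_t divisor) {
    return (interval / divisor) & ~(dts_t)0x7FF;
}

// Local time offset relative to UTC in minutes, for an absolute time.
int dtsOffsetMinutes(dts_t t) { return (int)(t & 0x7FF) - 1024; }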

Final remarks

This encoding is trivial to understand and to manipulate using conventional integer arithmetic, comparison or bitwise operations. A value may represent an absolute time or a time interval, with the possibility to distinguish between these two types of values. Time comparison and arithmetic operations in this representation are more efficient than with a double float encoding.

This encoding is well suited for date time stamping within the defined limited range, for using such encoded dates as index keys in a database, or when sorting stamped information is needed. It allows displaying any absolute time using the ISO 8601 convention or any country specific representation.

However, this time encoding has two limitations, which are minor weaknesses. The first is the restricted time span covered by the encoding. The second is the inability to encode daylight saving time information. The latter does not impair absolute time comparison because UTC time is used as the reference; the problem is only the inability to determine whether the time zone offset includes daylight saving time or not. But this is also the case with the ISO 8601 representation.

Base32 encoding proposal

11/26/2012

1 Comment

 
Base64 is a popular data encoding used to represent binary data as a sequence of ASCII characters. Less popular is the Base32 encoding, because it generates a less compact output.

However, Base32 has the benefit of providing an encoding that is easy to handle "manually" by humans. I suggest using Base32 to provide a compact encoding for identifiers that users have to remember and may have to spell out to other people.

It is for this reason that I would prefer to use such a Base32 encoding for the user identifier keys of a web service.

With 4 Base32 characters one can encode one value in about a million. With 5 characters, we can encode one value in about 33 million, and with 6 characters one value in about a billion. I can't wait for my users to need 5 or even 6 characters in their identifiers.

My proposed encoding is the same as Crockford's Base32 alphabet, with the difference that the letter U is preserved and the letter W is removed. I guess the letter U was removed to avoid confusion with two 1s in sequence. But I find that removing the W is preferable because it is less convenient to memorize and spell out.

Note: with a Base64 encoding, only 5 characters are needed for one value in a billion, but the need to remember, spell out and distinguish upper and lower case letters makes it impractical.
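
As an illustration, here is a minimal sketch of an encoder. The alphabet below is my interpretation of the description (Crockford's Base32 with U kept and W dropped); the exact character ordering is an assumption.

#include <stdint.h>

// Assumed alphabet: the digits, then the letters without I, L, O and W.
static const char base32Alphabet[33] = "0123456789ABCDEFGHJKMNPQRSTUVXYZ";

// Encode value in Base32, most significant digit first.
// buf must hold at least 14 characters (13 digits plus the terminator).
char *encodeBase32(uint64_t value, char buf[14]) {
    char *p = buf + 13;
    *p = '\0';
    do {
        *--p = base32Alphabet[value & 0x1F];
        value >>= 5;
    } while (value);
    return p;
}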

IDR encoding compared to Go language encoding

5/12/2012

0 Comments

 
As the author of the IDR encoding (yet unpublished), I was very curious to see how it compares to the data encoding proposed in the Go language designed by the Google team (gobs of data).

There are two fundamental differences between the two.

Value encoding

Gobs encodes a value as a tag byte followed by a compact byte encoding of the value. The tag identifies the type of the value and its encoded byte length. The byte encoding drops the trailing 0 bytes of the value.

IDR uses the most common computer internal representation of data as its encoding and thus requires no marshaling work.

Advantages

Gobs has two major benefits. The first is that the type of the data is provided with the value, which allows anyone to decode the values of a message without prior knowledge of its content. The second is that data can be split into blocks anywhere, since decoding proceeds byte after byte.

IDR has the advantage of fast and trivial marshaling as in RPC and IIOP.

Disadvantages

The price to pay with Gobs is the additional tag byte and the marshaling work. With IDR, it is the code complexity needed to ensure the atomicity of the base values if a data stream has to be split, and the absence of base value type information with the data.

Type encoding

Gobs provides the maximum type information with the message so that it is self describing. This makes the encoding more complex since conciseness competes with expressiveness.

RPC, IIOP and ICE rely on the context to determine the type of encoded data. Since these encodings mainly target use in communication, this optimization makes sense to some extent.

IDR precedes any message with a type reference. The type reference is a key into a distributed database, similar to the DNS, from which a description of the data contained in the message may be obtained. It is possible to obtain a concise form so that a program can efficiently parse the data, or a detailed, expressive form with comments to be used by humans.

The IDR data type description strategy seems the most efficient because the data type description is written once. But decoupling the type description from the data exposes to the risk of losing access to the description if it gets deleted.

Conclusion

There are some good and bad points on both sides and there is no easy way to merge the good points into a new optimal encoding.

My experience is that the IDR encoding, while simple and efficient on some aspects, was quite complex to develop.

Today I still favor IDR's choice because of the marshaling efficiency. Olivier Pisano managed to translate the C++ IDR library to the D language in a very short time. So maybe it is just the conception and validation of IDR that took so much time.

I very much like the smart encoding of the base values in Go, but not the fact that all floating point values are forced to be encoded as a double precision float (64 bit). I hope they'll change that.

There are other differences between IDR and Gob which have not been detailed here. What they have in common is that both may use their encoding to support persistence. IDR may use it with its distributed database.




Numbering schema yielding identical lexical and numerical ordering

2/4/2012

2 Comments

 
It may be desirable in some situations to assign a numerical reference (an integer) to a resource, with the particular property that the string representation of the reference preserves the numerical ordering. This blog post presents a numbering schema that has this property while avoiding zeros or spaces added in front of the numbers, thus keeping the strings short. The price to pay is that there are gaps in the numbering sequence. The numbers in these gaps are invalid in this schema and may easily be recognized and used for error detection.

The problem: You probably experienced that sorting a list of strings representing the integer sequence "1", "2", "3", ..., "10", "11", ... "20", "21", ... yields the weird result "1", "10", "11", ... "2", "20", "21", ... "3", ... This shows up, for instance, when naming files by numbers. We get this result because strings are sorted in lexicographical order which means they are ordered by digit value, one by one from left to right. So in a lexicographical order, "10" is smaller than "2" which is the opposite of the numerical order.

In situations where this is an unacceptable nuisance, we have a set of solutions to pick from.

One solution is to use a specially crafted sorting algorithm able to detect that it is dealing with numbers in ASCII representation instead of plain text strings. In some contexts, changing the sorting algorithm is not possible (e.g. file names).

Another possibility is to add zeros or spaces in front of the number in its ASCII representation. The problem with this method is knowing how many zeros or spaces should be added. There should be at least as many as the number of digits in the biggest number we need to represent. In some contexts it is not possible to know the biggest number we will have to deal with, and this introduces a highest value constraint which is preferable to avoid if possible.

The solution: The proposed solution is to use a numbering schema where we simply prepend to the number its count of digits. For instance, the number 123 has 3 digits; it is then coded as "3123" in the proposed schema, where the leading 3 is the digit count added in front of the number.

A number is valid if the string contains only digits and its first digit equals the string length minus one. The value 0 is represented as "0". For negative numbers, if you need them, the digit count must be inserted between the minus sign (-) and the number.

There is also an upper limit on the number of digits a number can have: since the digit count must itself fit in a single digit, the biggest number that may be represented with this numbering schema is one billion minus one (999,999,999).

With this numbering schema, the sequence "1", "2", "3", ..., "10", "11", ... "20", "21" becomes "11", "12", "13", ..., "210", "211", ... "220", "221", with the digit count added in front. The lexicographical sorting of this sequence preserves the numerical order.

The price to pay is that the numbering sequence is not compact. It has gaps containing invalid numbers (e.g. 23, 123, ...). This may be considered an inconvenience, but it also has the benefit of making it possible to detect errors and invalid values.

Generating such numbers is trivial, as is checking their validity, as the sketch below shows.
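
Here is a small sketch of both operations in C; the function names are only illustrative.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

// Write the length prefixed representation of n (0 <= n <= 999999999).
void encodeOrdered(unsigned n, char buf[12]) {
    char digits[11];
    int len = sprintf(digits, "%u", n);
    if (n == 0) {
        strcpy(buf, "0");               // 0 is represented as "0"
    } else {
        sprintf(buf, "%d%s", len, digits);
    }
}

// A string is valid if it contains only digits and its first digit
// equals the string length minus one.
int isValidOrdered(const char *s) {
    size_t len = strlen(s), i;
    for (i = 0; i < len; i++) {
        if (!isdigit((unsigned char)s[i])) return 0;
    }
    return len > 0 && (size_t)(s[0] - '0') == len - 1;
}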

Application example: I "invented" this coding schema when looking for an optimal way to numerically reference resources assigned incrementally for a web service (e.g. userId, documentId, imageId, ...). The numbering provides a direct mapping to a numerical table index value as well as a compact string representation. The size of the reference grows smoothly as needed with the number of references.

Another application is as document ids in NoSQL databases like CouchDB, MongoDB, etc., keeping the ids compact and sorted.

Using a Base64 like encoding

A more compact coding would use a Base64-like encoding. Conversion between the ASCII and binary representations would not be as straightforward, but identifiers would be much more compact and would still sort identically in their ASCII and binary representations.

To generate such an encoding, split the binary representation into groups of 6 bits, starting from the least significant bit (rightmost) toward the most significant bit. Then replace all the leftmost chunks whose bits are all zero with a single chunk coding the number of 6 bit chunks left. For instance, ...00000|110010|010011 becomes 000010|110010|010011 because there are only two significant chunks in the number and 2 is encoded on 6 bits as 000010. The last step is to replace each 6 bit chunk of the resulting sequence with the ASCII code given in the following table.

[Figure: Mapping between the chunk's 6 bit binary integer value and the ASCII letters used for encoding]
The resulting encoding is very similar to Base64 but has the particular properties that the sorting order of the chunk integer values matches the sorting order of the associated ASCII codes, and that only ASCII codes usable in URLs or filenames are used. Except for the value 0, the ASCII representation will never start with a '-'.

Conversion between the ASCII representation and the binary representation is more complicated, especially when it has to be done by humans. A benefit of this coding is that its ASCII representation is short for small numbers: the ASCII coding has n+1 characters for numbers with n significant chunks. For numbers of up to 24 bits (over 16 million values), the longest ASCII encoding is 5 characters.
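
The exact letter mapping is given in the table above. As a sketch, assuming an order preserving, URL safe alphabet made of '-', the digits, the uppercase letters, '_' and the lowercase letters (my assumption, not necessarily the table of the figure), an encoder could look like this:

#include <stdint.h>

// Assumed order preserving alphabet: '-' < digits < uppercase < '_' < lowercase.
static const char chunkAlphabet[65] =
    "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

// Encode value as a chunk count followed by its significant 6 bit chunks,
// most significant chunk first. buf must hold at least 13 characters.
char *encodeSortable(uint64_t value, char buf[13]) {
    char *p = buf + 12;
    *p = '\0';
    int chunks = 0;
    while (value) {
        *--p = chunkAlphabet[value & 0x3F];
        value >>= 6;
        chunks++;
    }
    *--p = chunkAlphabet[chunks];   // the leading chunk codes the chunk count
    return p;
}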

Distributed Version Control System (DVCS) usage model

3/28/2010

1 Comment

 
Subversion has been my software version control system for years now. It is simple and straightforward, but it is inappropriate for some usage patterns that require sharing intermediate development code between developers or combining an official release track with one or more development tracks.

Distributed Version Control Systems like Git, Mercurial or Bazaar solve these problems. The best way to understand this is to read Vincent Driessen's blog post titled "A successful Git branching model". It presents a usage model for a Distributed Version Control System (DVCS) using Git, but it works as well with Mercurial or Bazaar.

The Mercurial tutorial written by Joel Spolsky provides a very good introduction which explains why DVCSs are better than centralized version control systems like Subversion.

I still have to choose between the three. For now my preference is Git, for technical reasons. The ergonomic aspect is important too, but for this I usually rely on desktop integrated tools like TortoiseGit. I'm currently a very happy user of RabbitVCS, which currently supports only Subversion. I hope they will support Git or Mercurial soon.


Log structured database

3/1/2010

1 Comment

 
The distributed information system (DIS) needs a database to store its information, and a simple key value database would do the job. Today, Tokyo Cabinet seems the best choice for this type of database.

Why a log structured database ?

My attention was recently caught by the blog post Damn cool Algorithms: log structured storage. The white paper presenting RethinkDB provides a more exhaustive view of the benefits of this data structure and some disadvantages too. The LWN.net article Log-structured file systems: There's one in every SSD covers the use of log structure in SSD file systems.

While surfing the web to get more information on log structured databases, I found a blog note presenting the experimental YDB log structured database, with some interesting benchmarks showing that YDB is roughly 5.6 times faster than Tokyo Cabinet and 8 times faster than Berkeley DB for random writes. These numbers justify some deeper investigation.

The performance benefit is mainly due to constraining write operations to the end of the file, because read accesses can benefit from memory caches while writes cannot. With writes at random locations, the disk head needs to move into position (seek), and this has a huge latency compared to transistor state changes or data transmission speed.

Reducing disk head movements may thus yield a significant performance increase. Note that this is no longer true with SSD disks, but other constraints come into play for which a log structured database may still be attractive (evenly distributed and grouped writes).

The Record Index

As you may guess, writing data to the end of the file implies that modified records are copied. The record offset then changes, which implies an update of the index too. If the index, generally tree structured, is also stored in the log database, this results in a cascade of changes which increases the amount of data to write to disk.

This makes log structured databases less attractive, especially if the index is a BTree of record keys. A BTree key index is not very compact and is not trivial to manipulate, especially if keys are of varying length.

I finally found a better solution, derived from reading the white paper presenting the PrimeBase XT Transactional Engine, which describes a log structured table with ACID properties for an RDBMS, and more recently the article Using Uninitialized Memory for Fun and Profit, which describes a simple data structure using an uninitialized array.

The idea is to use an intermediate record index, which is basically a table of record offsets and sizes, as sketched below. The entry index in the table is the record identifier and is used as the key to locate the record in the file. The record identifier is associated with a record for its lifetime and may be reused for a new record after the record has been deleted.
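
A sketch of what such a record index could look like in C; the structure and field names are illustrative, not taken from an actual implementation.

#include <stdint.h>

// One entry per record identifier: where the latest version of the
// record is located in the log file.
typedef struct {
    uint64_t offset;   // byte offset of the record in the log file
    uint32_t size;     // record size in bytes, 0 for a free or deleted slot
} RecordIndexEntry;

typedef struct {
    RecordIndexEntry *entries;   // entries[id] describes the record id
    uint32_t capacity;           // number of allocated entries
} RecordIndex;

// Locating a record by its identifier is a simple array lookup.
static RecordIndexEntry *recordLookup(RecordIndex *idx, uint32_t id) {
    return id < idx->capacity ? &idx->entries[id] : 0;
}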

Benefits of the record index

The record index is stored as a tree where non leaf nodes hold the offsets of the lower level nodes. Changing an offset in a leaf node still implies a change in all the nodes up to the root of the tree, but the index is much more compact than a conventional BTree associating the record key with its offset and size. The record identifier doesn't need to be stored in the index because it is its relative position in it.

Another benefit of this intermediate record index is that the record key index now refers to the record identifier, which doesn't change when the record is modified. It is then possible to have multiple indexes to the records, or to use the record identifier inside the user data to support record reference graphs (e.g. linked lists, etc.).

By storing the record identifier along with the record data, the garbage collector or the crash recovery process can easily determine whether a record is valid or not: it simply has to compare the record offset and size with those found in the record index. If they are the same, the record is the latest valid version.

Snapshots and recovery points

The dirty pages of the record index only need to be saved at snapshot time. In case of a process or system crash, the database is restored to the last saved snapshot. A snapshot corresponds to a coherent state of the database, and one is saved any time the user closes the database. Restoring the database to a saved snapshot state boils down to truncating the file after the last valid record of the file.

If snapshot saving is very frequent and crash recovery very rare, it is possible to use lightweight snapshots. For such a snapshot, only a small record is appended to the record stream, tagging the point in the file where the snapshot occurred. When the database is recovered at some saved snapshot point, the recovery process can continue beyond that point by replaying all the changes up to the last valid lightweight snapshot. The state of the database is then restored to the latest lightweight snapshot, but with a slightly bigger effort than a saved snapshot recovery.

Garbage collector

For the garbage collector (GC), the classical method may be applied, which consists in opening a secondary log file and progressively copying valid records into it in the background while the database is in use. A database backup is as simple as copying the file.

When the lifetime of records varies a lot, it might be better to use generational log files, a technique used by memory garbage collectors. The idea is to avoid repeatedly copying long lived records because of the garbage generated by other, short lived or frequently changing records. Records are grouped according to their change frequency into separate log structured databases.

A first log structured database contains all new or changed records. The garbage collector then progresses at the same speed as records are written to the end of the file. Every valid record it finds is copied into a second generation log file; these records have lasted a GC cycle without a change. Additional generation databases may be added for even slower changing records.

The use of multiple log files induces some disk head movements, but this is balanced by saving the effort of repeatedly copying constant records.

Conclusion

It is not my intent to implement this shortly. I just wanted to document the method which seems to be the canonical way to handle the record index problem and for which I couldn't find a description on the web.

    Author

    Christophe Meessen is a computer science engineer working in France.

    Any suggestions to make DIS more useful? Tell me by using the contact page.
