Distributed Information System (DIS)

Distributed Version Control System (DVCS) usage model

3/28/2010

1 Comment

 
Subversion has been my software version control system for years now. It is simple and straightforward, but it is inappropriate for some usage patterns that require sharing intermediate development code between developers or combining an official release track with one or more development tracks.

Distributed Version Control Systems such as Git, Mercurial or Bazaar solve these problems. The best way to understand this is by reading Vincent Driessen's blog post titled "A successful Git branching model". It presents a usage model for a Distributed Version Control System (DVCS) using Git, but it works just as well with Mercurial or Bazaar.

The Mercurial tutorial provided by Joel Spolsky is a very good introduction that explains why DVCSs are better than centralized version control systems like Subversion.

I still have to choose between the three. For now my preference is Git, for technical reasons. The ergonomic aspect is important too, but for this I usually rely on desktop-integrated tools like TortoiseGit. I'm currently a very happy user of RabbitVCS, which currently supports only Subversion. I hope they will support Git or Mercurial soon.


The 8 fallacies of distributed computing

10/17/2009

4 Comments

 
The following two paragraphs are the introductory paragraphs of the document Fallacies of distributed computing (pdf) by Arnon Rotem-Gal-Oz, which presents the 8 fallacies of distributed computing.

"Distributed systems already exist for a long tThe software industry has been writing distributed systems for several decades. Two examples include The US Department of Defense ARPANET (which eventually evolved into the Internet) which was established back in 1969 and the SWIFT protocol (used for money transfers) was also established in the same time frame [Britton2001].

Nevertheless, in 1994, Peter Deutsch, a Sun fellow at the time, drafted 7 assumptions architects and designers of distributed systems are likely to make, which prove wrong in the long run, resulting in all sorts of troubles and pains for the solution and the architects who made the assumptions. In 1997 James Gosling added another such fallacy [JDJ2004]. The assumptions are now collectively known as "The 8 fallacies of distributed computing" [Gosling]:
  1. The network is reliable
  2. Latency is zero
  3. Bandwidth is infinite
  4. The network is secure
  5. Topology doesn't change
  6. There is one administrator
  7. Transport cost is zero
  8. The network is homogeneous
..."

While designing a new distributed information system, it is a good idea to check how it positions itself with regard to these 8 fallacies.

The network is reliable

DIS uses TCP, which was designed to be reliable and robust. Reliable means that data is transmitted uncorrupted to the other end, and robust means that it may resist a certain amount of errors. There is however a limit to the robustness of a TCP connection, and in some conditions connecting to a remote service may not even be possible.

DITP, the communication protocol of DIS, is of course designed to handle connection failures. Higher level and distributed services will have to take them into account too.

Making a distributed information system robust implies anticipating connection failures at any stage of the communication. For instance, a flock of servers designed to synchronize with each other may suddenly be partitioned into two or more unconnected flocks because of a network failure, and be connected back together later.
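To make this concrete, here is a minimal Python sketch, not part of DIS or DITP and with invented names, of a client that anticipates connection failures by retrying with exponential backoff instead of assuming the network is reliable:

    import socket
    import time

    def connect_with_retry(host, port, max_attempts=5):
        """Open a TCP connection, retrying with exponential backoff."""
        delay = 1.0
        for attempt in range(1, max_attempts + 1):
            try:
                return socket.create_connection((host, port), timeout=10)
            except OSError:
                if attempt == max_attempts:
                    raise  # give up: the service is unreachable for now
                time.sleep(delay)
                delay *= 2  # back off instead of hammering a failing network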

Latency is zero

Latency was a major focus in the design of the DITP protocol because DIS is intended to be used for Wide Area Network (WAN) applications. DITP reduces the impact of latency by supporting asynchronous requests. These requests are batched and processed sequentially by the server in the order of emission. If a request in the batch is aborted by an exception, the subsequent requests of the batch are ignored. This provides a fundamental building block for transactional applications.
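The following Python sketch illustrates these batching semantics; it is an illustration of the principle, not the actual DITP implementation, and the request objects and names are hypothetical:

    class RequestAborted(Exception):
        """Marks a request that failed or was skipped within a batch."""

    def process_batch(requests):
        """Process a batch of request callables sequentially, in emission order.

        If one request raises an exception, the remaining requests of the
        batch are ignored, giving the batch transaction-like semantics.
        """
        results = []
        for request in requests:
            try:
                results.append(request())
            except Exception as exc:
                results.append(RequestAborted(exc))
                break  # subsequent requests of this batch are ignored
        return results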

In addition to this, DIS may also support the ability to send code to be executed by a remote service. This provides the same functionality as JavaScript code embedded in web pages and executed by browsers, which makes it possible to implement powerful and impressive web 2.0 applications.

With DIS, remote code execution is taken care of by services that the server manager may choose to make available. These services may then process different types of pseudo-code: JavaScript, Haxe, JVM bytecode, Python, ... Many different pseudo-code services may thus coexist and evolve independently of DIS. Such functionality is of course also exposed to security issues. See the secure network fallacy below for an insight into how DIS addresses them.

Bandwidth is infinite

This fallacy is the rationale behind the Information Data Representation (IDR) design. IDR uses a binary, native data representation. In addition to being very fast and easy to marshal, it is also very compact.
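IDR itself is not shown here, but the following Python sketch illustrates why a binary, native representation is compact and cheap to marshal; the record layout is a made-up example:

    import struct

    # Hypothetical record: id (uint32), temperature (float64), flags (uint8).
    RECORD = struct.Struct("<IdB")  # little-endian, 13 bytes per record

    def marshal(record_id, temperature, flags):
        return RECORD.pack(record_id, temperature, flags)

    def unmarshal(data):
        return RECORD.unpack(data)

    packed = marshal(42, 21.5, 0b101)
    assert len(packed) == 13  # an equivalent XML element would take several times more
    assert unmarshal(packed) == (42, 21.5, 0b101)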

DITP also supports user-defined processing of transmitted data, so that compression algorithms may be applied to it. DITP also multiplexes concurrent communication channels over the same connection, and a different transmitted data processing may be applied to each channel. By choosing the channel, the user may decide whether or not to compress the transmitted data.
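As an illustration of per-channel data processing, here is a small Python sketch; the framing format is invented for the example and is not DITP's actual wire format:

    import zlib

    class Channel:
        """One multiplexed channel with its own optional data processing."""

        def __init__(self, channel_id, compress=False):
            self.channel_id = channel_id
            self.compress = compress

        def encode(self, payload):
            if self.compress:
                payload = zlib.compress(payload)
            # Frame: channel id (2 bytes) + payload length (4 bytes) + payload.
            return (self.channel_id.to_bytes(2, "big")
                    + len(payload).to_bytes(4, "big")
                    + payload)

    bulk = Channel(1, compress=True)      # e.g. large documents
    control = Channel(2, compress=False)  # e.g. small latency-sensitive messages
    frame = bulk.encode(b"some large repetitive payload " * 100)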

The network is secure

A distributed system designed for worldwide usage must obviously take security into account. This means securing the transmitted data by means of authentication and encryption, as well as authenticating the communicating parties and enforcing access or action restriction rules.

Communication security is provided by the DITP protocol by means of the user-specified transmitted data processing. Like data compression, this processing can also handle data authentication and encryption. Different authentication and encryption methods and algorithms can coexist in DIS and may evolve independently of the DITP protocol.

Authentication and access control may use conventional password methods as well as user identification certificates. But instead of using X.509 certificates, DIS uses IDR-encoded certificates corresponding to instances of certificate classes. Users may then derive their own certificates through class inheritance. They may extend the information carried in the certificate or combine different certificate types together.
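The idea of deriving certificate types can be sketched with plain class inheritance; the classes and fields below are illustrative only, not the actual IDR certificate classes:

    from dataclasses import dataclass

    @dataclass
    class Certificate:
        """Minimal base certificate carrying the common fields."""
        subject: str
        issuer: str
        valid_until: str
        signature: bytes

    @dataclass
    class MemberCertificate(Certificate):
        """A derived certificate type extending the base with extra fields."""
        domain: str        # e.g. the organization granting membership
        access_level: int  # additional information carried by this subclass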

Authentication based on password checking or user identity certificate matching doesn't scale well for a worldwide distributed system because it needs access to a reference database. With distributed services, accessing a remote database introduces latencies, and replicating it (i.e. caches) weakens its security by multiplying the number of breach points.

The authentication mechanism favored in DIS uses member certificates. These certificates are like club or company member access cards. When trying to access a service, the user presents the corresponding certificate and the service simply needs to check the certificate's validity.

With such an authentication mechanism, the service can be scattered all over the Internet and remain lightweight, as is required for embedded applications (e.g. smartphones, car computers, ...). The authentication domain can handle billions of members as easily as a few. Member certificates may be extended to carry specific information and connection parameters.
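The following self-contained Python sketch shows the principle of such a local validity check: the service holds only the issuer's verification key and needs no user database. HMAC stands in here for a real public-key signature scheme, and all names are hypothetical:

    import hashlib
    import hmac
    from dataclasses import dataclass

    @dataclass
    class MemberCertificate:
        subject: str
        domain: str
        valid_until: str
        signature: bytes

    ISSUER_KEY = b"hypothetical-issuer-key"  # a real system would use public keys

    def issue_certificate(subject, domain, valid_until):
        payload = f"{subject}|{domain}|{valid_until}".encode()
        sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
        return MemberCertificate(subject, domain, valid_until, sig)

    def certificate_is_valid(cert):
        """Validate the presented certificate locally, without a database lookup."""
        payload = f"{cert.subject}|{cert.domain}|{cert.valid_until}".encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, cert.signature)

    cert = issue_certificate("alice", "example-club", "2011-01-01")
    assert certificate_is_valid(cert)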

Topology doesn't change

The ability to handle network topology changes initiated the conception of DIS in 1992. It was thus designed from the start to address this issue in a simple, robust and efficient way. It is not a coincidence that the DIS acronym resembles that of DNS. DIS is a distributed information system just as the DNS is a distributed naming system. DIS uses the proven architecture of the DNS and applies it to generic information, with additional functionality such as remote management of the information. The DNS is known to be a cornerstone of the solution to network topology changes, as DIS will be.

There is one administrator

Like the DNS, DIS supports distributed administration. Information domain administrators have full liberty and authority in the way they organize and manage their information domain, as long as the interface to DIS respects some standard rules. As for the DNS, there will be a central administration that defines the operational rules and controls their application. If DIS becomes a broadly adopted system, the central administration will be composed of democratically elected members and will coordinate with the Internet governance administration, if such a structure happens to be created.

Transport cost is zero

The transport cost is indeed not zero, but most of it is distributed and shared by the users. There remains however a residual cost for the central services and administration, for which a revenue has to be identified. The DIS system will make it possible to obtain such a revenue, and there is a rational reason why it ought to.

Imposing a financial cost on some domains or features of DIS that are limited or artificially limited resources provides a means to apply perceptible pressure on misbehaving users (e.g. spammers).

The network is homogeneous

DITP is designed to support different types of underlying transport connections. The information published in DIS is treated as an opaque byte block and may be of any type, as may its description language. It may be XML with its DTD description, binary with a C-like description syntax, Python pickles or anything else. Of course it will also contain IDR-encoded information with its Information Type Description.

Conclusion

The conclusion is that DIS, DITP and IDR have been designed without falling into any of the common fallacies. This is partly due to the long maturation process of their conception. While this may be considered a shortcoming, it may also be a strength, since it allowed all aspects to be examined wisely over time.

A Distributed Information System? Nice, but what for?

9/5/2009

0 Comments

 
Here is a (long) blog note I would recommend reading: "Snakes on the web", written by Jacob Kaplan-Moss (September 4, 2009). It is a talk given at PyCon Argentina and PyCon Brazil, 2009.

It presents an analysis of the current situation of web publishing and of desirable properties for future systems.

My impression, and this is not a coincidence, is that DIS matches most of these requirements, since it was designed to address the shortcomings of current systems.

DITP and The black triangle

7/11/2009

0 Comments

 

A Hacker News submission references the "The black triangle" blog note. I can only back up the author, since I have experienced this many times.

In short, with some programs the visible part is merely a black triangle, while the invisible part may be complex or may have required a lot of effort to achieve. The black triangle is then generally just a simple visual example proving that the underlying system works.

That is the state of progress of DITP. I'm working to get the black triangle to become visible. In doing so I'm also writing the protocol specification so that the protocol may be reviewed and implemented by third parties in other languages or libraries.

The black triangle is like the first fruit of a fruit tree that may sometimes take a long time to grow before it is able to bear fruit.


"A note on distributed computing" (1994)

7/7/2009

0 Comments

 


"A note on distributed computing"


Jim Waldo, Geoff Wyant, Ann Wollrath, Sam Kendall. Nov 1994.

Abstract:

We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.

We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible.

We conclude by discussing what is required of both systems-level and application-level programmers and designers if one is to take distribution seriously.


What is wrong with HTTP?

6/25/2009

0 Comments

 

Here is a document presenting a review of what is good and bad about HTTP. It sheds some light on the choices I made for DIS. I couldn't identify the author's name in the text. Sorry.

What is wrong with HTTP?


In this essay, the first of a pair on browser apps, I explore how they are better than traditional desktop apps in some ways, but worse in others. Some of the disadvantages of browser apps are deeply rooted in the use of HTTP URLs for naming. In the second essay, I will present a design sketch for a new platform, a replacement for HTTP combining both styles' advantages. Right now, we're seeing a massive shift to browser apps, largely server-side browser apps. As I warned in "People, places, things, and ideas," [18] this move to server-side browser apps imperils our software freedom; I outlined how to solve this problem in "The equivalent of free software for online services." [19] This pair of essays represents more detail on this problem and proposed solution.

read more ...


Climbing the Mountain [Paul Buchheit]

4/2/2008

0 Comments

 

Paul Buchheit provides additional thoughts on what makes a startup successful. A good team? A good idea? Good execution? Which one matters more, which less? He compares it to climbing a mountain with a pot of gold at the top. A good analogy.


How to correctly define a standard...

3/19/2008

0 Comments

 

The "Martian headset" is a long but very interesting article on software standards published on the Joel on Software blog.

I've learned that it is not enough to publish a standard specification document. At least one reference implementation is required. Java did this with its compiler and thereby managed to ensure interoperability. It could even resist attempts to break the standard.

Lesson learned !


Startup success fundamentals...

3/10/2008

0 Comments

 

As a follow-up to the previous blog note, I found this blog note by Paul Buchheit very enlightening.


Better than Free

2/3/2008

0 Comments

 

All along the development of DITP and DIS, the question of which business model to apply has been on my mind. Here is a blog note I found really enlightening on this issue.

"Better Than Free" by Kevin Kelly


    Author

    Christophe Meessen is a computer science engineer working in France.

    Any suggestions to make DIS more useful? Tell me using the contact page.
