Minimizing latency was one of the main concerns driving the design of the DITP protocol. You may check Wikipedia for a definition of latency in the networking context.
While network speed still has a lot of room to grow, network latency doesn't. Today 80 GB/s backbone fiber links are becoming common, and there is no hard limit preventing us from reaching terabyte/s speeds or beyond. We can get there by frequency multiplexing or by adding more parallel fibers. So we have plenty of room to increase the number of bytes per second we can send, and we can expect a network speed gain of a factor of 100, or maybe even 1000, in the next 25 years.
On the other hand, we face a hard and very close limit on transmission latency. This is all the fault of the speed of light (~300,000 km/s). Consider the numbers: the distance between Paris (France) and New York (USA) is roughly 6,000 km, so it takes at least 20 ms for a single bit to reach the other end, and there is no way to lower this time unless we find some "wormhole" in our knowledge of physics.
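To put numbers on this bound, here is a small Python sketch. The 6,000 km Paris–New York distance and the 20 ms figure come from above; the fiber variant is an added assumption (light in glass fiber travels roughly 1.5 times slower than in vacuum, refractive index ~1.5):

```python
# Speed-of-light lower bound on one-way latency between Paris and New York.
C_VACUUM_KM_S = 300_000               # speed of light in vacuum, ~300,000 km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5    # assumption: glass fiber, refractive index ~1.5

distance_km = 6_000                   # Paris - New York, roughly

one_way_vacuum_ms = distance_km / C_VACUUM_KM_S * 1000   # -> 20 ms
one_way_fiber_ms = distance_km / C_FIBER_KM_S * 1000     # -> 30 ms

print(f"one way, vacuum:   {one_way_vacuum_ms:.0f} ms")
print(f"one way, fiber:    {one_way_fiber_ms:.0f} ms")
print(f"round trip, fiber: {2 * one_way_fiber_ms:.0f} ms")
```

Real routes are longer than the great-circle distance and add switching delays, which is why observed latencies still sit above this floor.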
We are still above this lower bound, but only by a factor of 10 or less. Thus a protocol designed for WAN (Wide Area Network) applications should really care about latency.
Here is how I learned the lesson. I had designed a protocol to transmit 2 MB blocks between computers. The protocol was trivial and worked very well in LAN (Local Area Network) applications. We then had the opportunity to test it on a leased 6 GB/s long-distance connection between CERN (Geneva, Switzerland) and the University of Alberta (Canada). The surprise was that the bandwidth usage never exceeded 2%. We found out that the cause was the network latency, which was ~500 ms. In this context, the handshake time dominated the transmission time, something we never saw on a LAN. So our protocol had to be redesigned! Since that day I have understood how important and critical network latency can be.
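A back-of-the-envelope sketch shows why. Assuming, for illustration, a simple stop-and-wait exchange where each 2 MB block waits one handshake round trip before the next block is sent (our actual protocol details are not reproduced here), utilization collapses on a high-latency link:

```python
# Why a handshake-per-block protocol collapses on a high-latency link.
# Stop-and-wait model: each block costs its serialization time plus one RTT.

BLOCK_BYTES = 2 * 1024**2       # 2 MB blocks, as in the story
LINK_BYTES_S = 6 * 1000**3      # 6 GB/s leased line

def utilization(rtt_s: float) -> float:
    """Fraction of link capacity used when each block waits one RTT."""
    serialize_s = BLOCK_BYTES / LINK_BYTES_S   # time to push the block's bits
    return serialize_s / (serialize_s + rtt_s)

print(f"LAN, RTT 0.5 ms: {utilization(0.0005):.1%}")   # ~41% - looks fine
print(f"WAN, RTT 500 ms: {utilization(0.5):.4%}")      # ~0.07% - disaster
```

The exact figure depends on how many round trips the handshake really takes, but the conclusion is the same: on a 500 ms link it is the round trips, not the bytes, that set the throughput.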
The first lesson we may learn from this analysis is that when designing a modern protocol for potential WAN usage, minimizing network latency is much more important than encoding concision.
The second important lesson is that inter-object communication will be impaired by the network latency of long-distance connections. In such a context I expect the agent model to be more efficient. In this model, a piece of code, a program, or even a virtual robot is sent to the remote host, where it then interacts locally instead of paying one round trip per exchange. This is exactly what is already happening today with JavaScript code in web pages. I expect this tendency to develop and extend over the next 10 years.
DITP is ready for this, since it can be used as the transport layer for such agent transmission. All we need is a special remote service object acting as an agent host. Because this sits on top of the DITP communication layer, many different types of agents can coexist, and the technology can evolve while preserving backward compatibility. There are other reasons why DITP has good potential for this usage model, but it is still a bit early to expose them.
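As a rough illustration (the names and interface below are hypothetical, not DITP's actual API), such an agent host could be just another remote service object: it receives an agent's code together with a declared agent type and dispatches it to a matching runtime, so new agent types can be added without touching the transport layer at all:

```python
# Hypothetical sketch of an agent-host service object (not DITP's real API).

class AgentRuntime:
    """One runtime per agent type; new types plug in without transport changes."""
    def execute(self, code: bytes) -> bytes:
        raise NotImplementedError

class AgentHost:
    """A remote service object that receives and runs agents."""
    def __init__(self) -> None:
        self._runtimes: dict[str, AgentRuntime] = {}

    def register(self, agent_type: str, runtime: AgentRuntime) -> None:
        self._runtimes[agent_type] = runtime

    def receive(self, agent_type: str, code: bytes) -> bytes:
        # The transport layer (e.g. DITP) only delivers the bytes;
        # the host picks the runtime from the declared agent type.
        runtime = self._runtimes.get(agent_type)
        if runtime is None:
            raise ValueError(f"unsupported agent type: {agent_type}")
        return runtime.execute(code)
```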
Network latency is an often disregarded parameter, but things might change in the near future as we get closer to the hard limit!