Conservatism in Computer Science

I attended a seminar discussing how to encourage reproducibility in scientific research on the Internet. Obviously, everybody agrees that it is desirable for research findings to be reproduced by independent studies (and I mean reproduced, not just repeated, although repeated is better than nothing). The question, however, is how to get there. For me this is largely an issue of incentives, and I am sure there are a couple of things that can be done to increase the incentives to reproduce research. I particularly liked the idea of organizing repathons, reproducibility hackathons where students reproducing work can meet the authors of the papers they are trying to reproduce, and I believe research funding organizations will need to use their power to make it easier to reproduce research.

IETF (still) too slow (followup)

I am following up on my previous post on this topic, which focused on issues related to the timely development of YANG modules. There are other key factors that determine the speed at which work completes, and they are often ignored in the IETF when people discuss work to be taken on and define milestones.

A big part of it is the management of human resources. Yes, this may sound strange given that the IETF is a volunteer organization and hence does not directly “control” human resources. Still, if a WG starts a new project, the WG and the WG leadership should be clear about the resources needed to finish the project. In particular, the following matters a lot:

RFC #42 is RFC 8342

My 42nd RFC has been published, and it got the number RFC 8342. Despite the funny number, I believe this is one of the more important RFCs I have worked on, since it tells us how to think about configurations and their relationship to operational state. A few more RFCs will appear in the coming weeks, providing the technology extensions that allow us to use the new framework in practice. Work on this document started with a trip to Stockholm in May 2016, but the discussions have a much longer history, and it feels good to have them settled and the document published.
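The core idea of RFC 8342 - that the configuration you intended and the state the device actually operates with are separate datastores that can diverge - can be illustrated with a toy comparison. Plain dictionaries stand in for datastores here; this is my own illustrative sketch, not RFC 8342's actual data model or encoding:

```python
# Toy illustration of the NMDA idea: intended configuration and
# operational state are distinct datastores, and tooling can compare
# them to spot values that have not (yet) been applied.
def drift(intended, operational):
    """Return keys whose operational value differs from the intended one."""
    out = {}
    for key, want in intended.items():
        have = operational.get(key)
        if have != want:
            out[key] = (want, have)
    return out

# Hypothetical interface settings: the configured MTU change has not
# taken effect yet, and oper-status exists only in operational state.
intended = {"mtu": 9000, "admin-status": "up"}
operational = {"mtu": 1500, "admin-status": "up", "oper-status": "up"}
print(drift(intended, operational))  # the mtu value has not been applied yet
```

In the real architecture such a comparison would run over YANG-modeled data retrieved from the <intended> and <operational> datastores, but the point is the same: the two views are allowed to differ, and the framework finally lets us talk about that difference precisely.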

Closed source causes gear to die early

I am using a Garmin running watch that is now seven years old. I had to replace the battery once, but otherwise it just works. Recently, however, Apple removed the serial driver code needed for this running watch from macOS High Sierra, and hence I am no longer able to read out data. Luckily, someone wrote a Linux command line utility some time ago to read out the data from Garmin watches (the tool is currently broken on newer versions of Ubuntu, but people seem to be working on fixing this). This story is another example demonstrating that only open source tools can be relied on in the long term. This is where open source really shines: as long as a program is doing something useful for some people, someone will step up to maintain and fix it, or even improve it. Commercial software simply becomes unusable due to business decisions. And many people in the software industry do not care about long-term data archives - except those companies that like to get your data in order to turn it into valuable assets of their own. I will now extract all my GPS running data from the various programs I have used over time into a common format, say goodbye to closed source running applications, and contribute some patches to open source programs.
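The natural common format for such an archive is GPX, which is plain XML and hence readable without any vendor tool. As a rough sketch of the conversion step (the trackpoint tuples and creator string are made up for illustration; real exports also carry elevation, heart rate, and more):

```python
# Illustrative sketch: turn trackpoints extracted from a proprietary
# log into a minimal GPX 1.1 document using only the standard library.
import xml.etree.ElementTree as ET

def trackpoints_to_gpx(points):
    """points: iterable of (latitude, longitude, ISO 8601 timestamp)."""
    gpx = ET.Element("gpx", version="1.1", creator="example-converter",
                     xmlns="http://www.topografix.com/GPX/1/1")
    trk = ET.SubElement(gpx, "trk")
    seg = ET.SubElement(trk, "trkseg")
    for lat, lon, iso_time in points:
        pt = ET.SubElement(seg, "trkpt",
                           lat=f"{lat:.6f}", lon=f"{lon:.6f}")
        ET.SubElement(pt, "time").text = iso_time
    return ET.tostring(gpx, encoding="unicode")

# Hypothetical sample points from one run.
points = [(53.1667, 8.6333, "2017-11-05T09:00:00Z"),
          (53.1670, 8.6340, "2017-11-05T09:00:10Z")]
print(trackpoints_to_gpx(points))
```

Once everything is in GPX, any future tool - open source or not - can read the archive, which is exactly the point.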

More crappy little rules please

I observe that people at a certain age get enthusiastic about creating rules in an attempt to make this a better world, or to improve engineering, or to organize people more efficiently, or whatever else they love to create rules for. And then I observe that people further down the road of life become more relaxed again, most likely because they have learned that long catalogues of rules simply do not achieve much (the more rules there are, the less likely it becomes that they will be read, understood, and followed). Yes, I have done my own experiments in this space as well - after all, it is good to have experimental evidence, no?

German trains and stormy weather

We recently had a storm in the northern part of Germany - not very unusual in fall. Three days later I wanted to go home by train via Hannover. Some train tracks were still closed, but the train company offered an ‘alternative connection’. Before boarding the alternative connection train in Hannover, travelers were informed that they would receive more information on the train. (If you happen to know the German train system, you will know that the people working on the trains usually have the least information.) Before arriving at the station in a small village where the train had to terminate due to the closed tracks, travelers were told that there would be a connecting bus service. Upon leaving the station, there was no bus service, nor anybody to talk to. Just rain.

IETF (still) too slow

The IETF is slow. The IETF is too slow. We know that, no news here.

There was a discussion about this topic today in the OPS area meeting at the 99th IETF meeting, mostly driven by the fact that the networking industry (not just the IETF) is hit by a wave of YANG modules. Some ideas were presented to help organize this process. I personally do not think the IETF has a problem due to a lack of version numbers, nor do I think the IETF has a real problem with format conversions of artefacts. There are many formats people use to produce content, which then gets converted into the IETF-blessed I-D format. What I personally find lacking is the following:

Network Slicing

It seems the networking industry has created another hot buzzword: network slicing. The story, however, is more than 20 years old. Telcos want to use their infrastructure (5G nowadays) to provide different services targeted at specific network use cases. Sounds good? Well, the telcos leave out that this is done not just to improve the network usage experience but to create new business models that allow them to charge different amounts of money for the different network slices. I consider this a technology created to support business models.

Farewell PlanetLab

After joining Jacobs University, I ordered two computers in order to join PlanetLab, a truly innovative distributed computing and experimentation infrastructure at that time. The two computers became part of PlanetLab about 10 years ago, and recently they asked to be retired. I think 10 years of service for mostly unknown researchers running even more unknown experiments is a great achievement, and so I pulled the plug. I hope the silicon can enjoy the remaining time in the rack (with less heat) until it is time to make the space available for others to come.