Monday, November 26, 2018

Notes from IETF 103 Bangkok




This is my attempt at a sharable summary of the recent Bangkok IETF - the 103rd IETF meeting.

This was the first time IETF met in Bangkok.
Anywhere in Bangkok is hard to get to - the traffic in this metropolis is a nightmare, at all times of the day. But once we got there, the venue functioned very well - adequate size (we were 800ish rather than the usual 1000ish people, so it was a little roomy), lots of nearby restaurants, and by a very wide margin the best break snacks of any IETF ever!

QUIC


The star of the show (at least at the stack layers I usually frequent) was QUIC, as has been true for a couple of years. This group has been “doing final things” for the last year, with the occasional redesign thrown in - the hope is that this time, we’ve got a stable spec.
The long-running battle over the “spin bit” (basically: a connection can volunteer to reveal some information about its performance to third-party monitors in the network) was finally resolved with a consensus call - happiness erupted at this being settled.

DOH


The Domain Name System has been fairly stable for the last few decades, with the slow rollout of DNSSEC and the “presentation layer” of IDN being the biggest changes being contemplated. This changed a bit recently, with two new technologies - DNS over TLS and DNS over HTTPS (DOH) - being defined and actively pushed. DOH in particular seems a simple thing by itself, with laudable goals (don’t tell everyone who can sniff your network what sites you’re looking up). But the deployment patterns that seem likely worry people: most seem to think that the Internet giants already dominating the Web will be the ones to offer the service to “most people”, and that browsers will push users towards those services. This would replace the “little big brother” of your ISP with the “big big brother” of the Internet giants - not necessarily perceived as a boon to privacy.
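The mechanics of DOH itself really are simple: an ordinary DNS query in wire format is base64url-encoded and shipped as an HTTPS GET parameter (RFC 8484). A minimal sketch in Python - the resolver URL here is a placeholder, any DoH endpoint would do:

```python
import base64
import struct

def build_doh_url(resolver_url: str, hostname: str) -> str:
    """Encode a DNS A-record query as an RFC 8484 GET URL.

    resolver_url is a placeholder; any DoH endpoint works.
    """
    # DNS header: ID=0 (recommended for HTTP cacheability), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    wire = header + question
    # base64url without padding, as the RFC requires for the "dns" parameter.
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver_url}?dns={dns_param}"
```

A GET like this is cacheable by ordinary HTTPS infrastructure - which is part of DOH’s appeal, and part of why the big HTTPS operators are so well placed to offer it.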

Crypto everywhere


One interesting observation was that crypto is now part and parcel of just about everything happening in the IETF.
QUIC, in addition to being a transport protocol, is a crypto delivery vehicle that changes the balance of what’s revealed to the network vs what’s private between the participants.
MLS, despite being called “messaging layer security”, is really trying to design an efficient key-sharing protocol for large groups, taking lessons from examples such as Telegram and Signal. And DINRG (the IRTF Decentralized Internet Infrastructure Research Group) was almost solely devoted to ways in which people could use crypto functions - one example was a protocol that claimed to ensure everyone could agree that a group decision had been made, even if (within certain constraints) they didn’t agree on who should be part of making the decisions. More traditional “esoteric” crypto functions, like proving that someone is a member of a group without revealing which member, were also in abundance, of course.

The important point may not be the individual proposals - the point may be that crypto is now such a fundamental part of the Internet that whenever you try to do something or deploy something, the question is not whether you use cryptography - the question is what you hide, what you reveal, what you agree upon and what you can prove to each other - and in all cases, relying on the answers requires serious, competent use of cryptography.

Hacking as part of the IETF process


My first two days at the IETF were largely devoted to the “hackathon” - a gathering of 200ish people (most, but not all, IETFers) devoted to writing code that helps the net move forward.
My particular table’s interest was in test suites for WebRTC protocols - others were doing protocol implementations, testing, or even redesigns - the largest table being (of course) devoted to QUIC interoperation testing.
This get-together is felt by the participants to be important - it lets people argue on the basis of “this works, this does not” rather than “this is not elegant” or “I feel this is not implementable” - and thus serves to connect the whole community back to its “make it work” roots.
It’s also a great place for interacting with people whose primary interest is open-source, not specifications.
Prediction: It will grow.

Reorganizations


The IETF decided a year ago that it would reorganize its structure - having done that ten years back, we knew very well that there were some issues with the system. A new structure is now in place (the “IETF LLC, a disregarded entity of the Internet Society” has been formally incorporated and is in the process of getting its first permanent board), and people are busy at work updating documents to reflect the new structure.
This effort has proved uncontroversial - at the plenary, people were much more interested in worrying about whether we were polite to each other, and whether our disagreements were couched in ways that scare people away. It seemed strange to see a technical organization spend so much time worrying about interpersonal relationships and so little time worrying about technological megatrends and their impact on the organization.



Wednesday, July 2, 2014

A rumination on birthdays and birthday wishes

Another birthday has come and gone. I now have to get used to a new number to give when people ask "How old are you?".
Otherwise, I don't feel much different from who I was last week.

What I remember best from this birthday is the good wishes. From the people who were with me on the day, family and friends - but also all the others - the ones who wished me well.

On my birthday, well-wishes stream in from all over the globe. Most carried by electronic media of some kind, but all of them with a human sending them.
People whose lives have touched mine - some a lot, some a little, some long ago, some recently. On this day, they remember me, and take the time to click a button, type a few words, send some happy thoughts my way.

And I feel touched. Touched that all these people remember me - that I've meant something to them so important that when my name shows up in the "birthday reminder" box of whatever social tool they're using - or when it shows up on their calendar, if that's what they're using - or if they simply remember - they take the time to let me know that they still remember me.

I remember them too. I wish them well. I'm lousy at taking time out from my own schedule to reach out to the people I don't see that often, and perhaps even worse at letting them know I care.
But I am happy that they took this time out of their busy lives to remember me - even if they spent only a few seconds, the fact that they chose to do so touches my heart.

A birthday wish is not a huge thing. But it touches a heart.
Thank you!

Sunday, March 18, 2012

The Byte Stream Fallacy

When discussing the various APIs involved in audio and video in a Web browser, we frequently hear statements that, if taken at face value, translate to “here we pass the data stream from one object to the other”.

Well …. no. We don’t.

It’s easy to fall into that trap, and especially easy to imagine it when we’re using a programming model that looks like “pipes connecting nodes”, with some of the nodes doing things like showing video on a screen, fetching audio from a file, or passing a signal across a network.
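A toy version of that “pipes connecting nodes” model - illustrative Python with no real media code behind it, just to make the mental picture concrete:

```python
class Node:
    """A processing stage in the illusory 'pipeline' model."""

    def __init__(self):
        self.downstream = []

    def connect(self, other):
        """Attach another node to receive this node's output."""
        self.downstream.append(other)
        return other

    def push(self, frame):
        """Hand a frame to every connected downstream node."""
        for node in self.downstream:
            node.receive(frame)

    def receive(self, frame):
        self.push(self.process(frame))

    def process(self, frame):
        return frame  # default: pass the frame through untouched


class Uppercase(Node):
    """Stand-in for a 'transform' stage (think codec, scaler, ...)."""

    def process(self, frame):
        return frame.upper()


class Sink(Node):
    """Stand-in for a terminal stage (screen, file, network)."""

    def __init__(self):
        super().__init__()
        self.frames = []

    def receive(self, frame):
        self.frames.append(frame)
```

Wire `source → transform → sink` together, push a “frame” in at one end, and it pops out the other - exactly the illusion the real APIs sell us.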

One imagines that each step in the process involves a stream of bits and bytes flowing between our objects, carrying our sound and pictures through all the various steps we specify for them.

But it isn’t that way in reality, and we’d better be aware of it, even though our APIs will weave an ever more complete illusion - and even though that illusion is the one we program against using that API.

Consider a really simple scenario, from a videoconferencing situation: A video camera on your computer records your face and a microphone records your voice; it’s passed to my computer and presented on my screen, and I’ve chosen to record the session to a file. The bytes flow; what could be simpler?

Except that this is not what’s happening.

Look at the wire between the camera and the computer. It’s a USB cable, carrying a complex negotiation protocol which, inside it, carries one picture at a time from the camera’s CCD sensor to your computer’s memory buffer, in a format called YUV2 - which is very easy to decode, but takes a *lot* of space.
In your computer, the receiver driver decodes the data and formats it in a suitable form for your memory buffer - which is NOT YUV2; it’s an internal format.
Every 40 msec, a timing signal comes from the camera, signalling that a picture is complete; the driver switches its writing to another memory buffer, leaving the first buffer to the display handlers.
One display handler does a quick transpose of the buffer to a buffer on your graphics card, which will then rescale the image using specialized hardware to display in a corner of your screen as your “self-image”.
Another display handler passes the buffer content to a codec encoding routine, which will carefully compare the image to the previous image transferred, and pick the most efficient mechanism for signalling the differences between the image and the previous one - packing this into a series of smaller “packet” buffers, and equipping each packet with a header that says where it’s coming from, where it’s supposed to go, and a timestamp that says which picture frame it belongs to.
Once the encoding process is complete, control of the packet buffers is handed to the network card, which takes care of sending them across the network - well before the codec is asked to start encoding the next frame.
The connection between the computers is not a simple bit pipe either. Sometimes packets get lost; the logic in the pipe has to deal with figuring out whether the loss matters, and asking the sender to do something about it if it mattered - either send it again (causing delay) or decide to send the next picture in such a way that it can be decoded without reference to the lost packet; the latter requires the network component to reach back into the codec component and tell it to behave differently than it otherwise would.
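The packet header described above is, in practice, usually an RTP header (RFC 3550) - I’m assuming that here, since the text doesn’t name a format. A minimal sketch of packing one, with the timestamp identifying which frame the packet belongs to:

```python
import struct

def make_rtp_header(seq: int, timestamp: int, ssrc: int,
                    payload_type: int = 96, marker: bool = False) -> bytes:
    """Pack a minimal 12-byte RTP-style fixed header (RFC 3550 layout).

    version=2, no padding/extension/CSRC; payload_type 96 is a common
    dynamic type used for video.
    """
    byte0 = 2 << 6                              # version 2, P=X=CC=0
    byte1 = (int(marker) << 7) | payload_type   # marker bit + payload type
    return struct.pack("!BBHII",
                       byte0, byte1,
                       seq & 0xFFFF,            # sequence number, per packet
                       timestamp & 0xFFFFFFFF,  # same for all packets of a frame
                       ssrc & 0xFFFFFFFF)       # identifies the media source
```

Every packet of one picture carries the same timestamp but its own sequence number - which is exactly what lets the receiver reassemble frames and notice losses.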

Once the packets arrive at my machine, the inverse process happens: the stream of packets gets decoded into a memory buffer, and my machine’s display functions blit the buffer into my graphics card memory for transformation and display. Somewhere along the way - which may be either in the decoder or at the memory buffer - some function picks up the incoming stream of data, possibly re-encoding it into another codec’s format, decorates it with the necessary markers for saying which pieces belong where (often the Matroska or AVI container file formats), combines it with the similarly-processed stream of data from the audio, and writes it to a file.

There are a few steps along this very simplified picture of a processing pipeline where we can talk about a stream of bytes: At the USB cable and at the file interface.
At all other points on the processing pipeline, there is complex interaction, timers, buffers, packets and logic that completely confound the “stream”.

And I’ve completely ignored the process of negotiation among the various parties that precedes the transmission, and sometimes renegotiates in the middle; when I resize my display window to be able to see your face better, it’s entirely possible that signalling will go all the way back to your camera and tell it to change its resolution - without any intervention from the controllers.

We need to view this process as a pipeline because that’s a model we can usefully deal with - and the software beneath the surface is capable of transforming that model into a useful set of configurations of components to perform the functions we want.

But we should not forget that it’s only a useful model. It’s not the truth.

(Comment on Google+ if possible, please)

Monday, April 19, 2010

Thunder from the Thunderbird

With my new Linux install, Ubuntu Lucid, I got my first experience of Thunderbird 3.

I've used Thunderbird for years to manage my non-work email; there's a server in a basement somewhere holding about 70 Gbytes of mail that I've accumulated over the years, and Thunderbird's been doing a fairly good job of getting me access to it, and storing offline copies of folders when I wanted it.

Thunderbird 3 continues that tradition, with two little twists:
  • The default setting is now "Make offline copies of mail".
  • The default setting per-mailbox is now "Make a copy of this mailbox".
The result: When I started Thunderbird and left it running for a few hours, I wondered where my computer had gone.... the Thunderbird process had quickly passed multiple gigabytes in size, and the .thunderbird folder had grown to a hefty 14 Gbytes. System nearly unusable.

That's when I searched the options to find the two facts above, and discovered the third little twist:

There's no interface to turn off the backing copy for all folders.

There's also no interface to say what the default value for the "synchronized" bit on newly discovered folders should be; it seems to be always set to "true".

I have to go through my folders one at a time (and as you can imagine, there are a few of them), and turn off the backing store. And then I discover that Thunderbird hasn't even found them all; when I open a folder with subfolders, Thunderbird finds the new ones, and happily decides that these are more folders that need to be copied locally. So it's not even a one-pass operation.

I've given up for now and told Thunderbird not to use offline storage. That pass through the config editor was just too much for me.

Sigh. Mulberry, why did you leave us?

Thursday, November 19, 2009

On being half the way there

Living between places is an interesting experience.

This fall, I'm working in Stockholm, Sweden, while my home remains in Trondheim, Norway.
Most of the time, it works fine, but occasionally, things just weird me out.... such as this week, when I wanted to pay for my gym membership.

First some background:

  • I live in Norway, but work in Sweden. Which means that I have to pay tax in Sweden.
  • Normally, Swedish authorities keep track of people by "personnummer" - the YYMMDD of your birthdate + a 4-digit serial number.
  • A "personnummer" is only given to people who live in Sweden. I don't.
  • So the Swedish authorities have said they won't give me a regular Swedish personnummer; they'll give me a "samordningsnummer" - just like a personnummer, but they add 60 to the day of my birth, just to be different.
So far, so good - my bank allows me to create an account using that, my payroll authorities accept that for paying taxes, and so on. All is well.
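The add-60 rule is easy to sketch. The check digit below uses the standard personnummer Luhn checksum, which I'm assuming carries over unchanged to samordningsnummer:

```python
from datetime import date

def samordningsnummer_days(birthdate: date) -> str:
    """Return the YYMMDD part of a samordningsnummer: day of birth + 60."""
    return birthdate.strftime("%y%m") + f"{birthdate.day + 60:02d}"

def luhn_check_digit(digits: str) -> int:
    """Luhn check digit over the nine digits YYMMDDNNN, as used
    for ordinary personnummer (assumed identical for samordningsnummer)."""
    total = 0
    for i, ch in enumerate(digits):
        n = int(ch) * (2 if i % 2 == 0 else 1)  # double every other digit
        total += n // 10 + n % 10               # sum the digits of each product
    return (10 - total % 10) % 10
```

So someone born 19 November 1959 gets "591179" instead of "591119" as the date part - which is exactly what confuses every system that validates the day-of-month field.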

Then I go down to the gym at Sats Zenit, sign up for a year's membership, and sign the papers that will allow them to deduct the monthly fee from my account.

No can do.

It turns out that their autogiro system (not the bank's, not the authorities', but the gym's) requires both a bank account number and a personnummer in order to deduct money.

And they won't accept my "samordningsnummer".

Not a big deal. I can afford to just pay the year's fees. But somehow, this little episode brought home to me that the way I currently live, I'm indeed between places - not all the way there.

And the world isn't quite used to people doing that.

Thursday, August 6, 2009

Efaktura - how not to do it

My bank has this wonderful feature called "efaktura" - where a company can send me its bill through the bank, I can click it and approve it - all without a single paper being harmed, seemingly to the benefit of all.
But when I logged on to my bank today, I had 6 "eFaktura" waiting, and I quailed.

Why?

Because of what it entails.....

  1. Click the "eFaktura" button. Get a list.
  2. Click the name of the payee. This brings up a popup with the actual bill; checking what you're billed for is usually a good idea.
  3. Click the triangle next to the payee and choose "approve". This brings up a window where I can (among other things) choose which account to pay from. So far, so good.
  4. Click "Approve". Get an error message because the payment date is in the past.
  5. Experiment with the calendar widget until you get to choose "tomorrow". ("ASAP" and "Today" are not options.) Hit "Approve" again.
  6. Wait for the Java applet to start. This brings you a review.
  7. Hit "Next".
  8. Enter your password. (You may also be asked for your ID number and one-time password; the choice seems to be random.)
  9. Get to the "confirmation" screen. There's no link going back to the list of "eFaktura", so you have to go back to "start" and find the "eFaktura" button again.
Oh, and remember to close the popup window.

This is stupid. If I could loop through 2-5 for all my "eFaktura", and then go through 6-9 once, like I do for my regular bills, I would be less disgusted. If the error message for "The payment date is in the past" showed a button for "pay ASAP", I'd feel that the person designing this MIGHT have thought about the user.

As it is, I'm unhappy.

Saturday, December 6, 2008

Where in the world is the earphone plug?



I recently had the joyful experience of travelling on one of KLM's new Boeing 777 planes.
There are certain rituals you get used to when travelling long distances.... you find the plastic bag marked "not a toy for small children", open it, draw out the headset, spend a few minutes hunting for the headset socket, then sit back and listen to the music.

This time, the first parts of the ritual successfully completed, I encountered a problem: I couldn't find the earphone socket!

To make a long story short: I found it at last - inside the back of the armrest, next to where the seatbelt is attached.

There are two ways to get a plug into this socket - the more conventional one seems to be:
  • Hold the plug in your right hand.
  • Place your right elbow at the level of your knee.
  • Twist your arm so that your lower arm is pointing straight back, and a little to the left.
  • From that position, move your hand slightly up and away from the body, underneath the armrest, so that you can insert the plug (no peeking!) into the socket so conveniently placed at the back end of the armrest, an inch from your hip.
The alternative (and slightly less painful) procedure is to get up out of your seat, kneel down in a position appropriate to worshipping the god of hangovers, make eye contact with the socket, and insert the plug with a straight jab - hoping that the contortions required to get back into your seat won't dislodge it.

Now, KLM had obviously realized that this contribution to in-flight exercise wasn't going to endear them to all its customers - so a short extension cord (about 20 cm) had been attached next to the socket. But the installation instructions had left something to be desired.... the extension cord wasn't in fact plugged into the socket - it was looped back on itself, creating another puzzle that needed untangling.


Just to add insult to injury, I had to ask for a new headset in order to get sound in both ears.... twice. It's not a high quality device.

Sometimes I think that certain designers deserve cruel and unusual punishment. In this case, I was thinking that "those who design airplane seats..... should be condemned to travel in them".

One ray of light in an otherwise depressing experience: Their in-flight system DOES run Linux.


Postscript

A few weeks later, I made another trip on a similar plane, with the same kind of seat - this time with Northwest - and realized just what the designer had been thinking.

Northwest varies the ritual of the headphone - when you arrive at your seat, the headphone is already resting in the armrest compartment, plugged in for your use. And when you leave the plane, your headphone is not collected - you leave it, ready for the next passenger.

I'm not sure what they do for ear pad hygiene. But it's clear that the seat designers and the procedure designers had had a talk - just a pity that the seat designers did not talk to enough procedure designers.

Oh well. I know how to find the earphone plug now.