2019 December 14 18:11

It’s been a bad year for writing, and I apologize. I’ve spent a lot of time going down rabbit holes – many of them quite interesting! – and then neglecting to write anything about my experience. I’ve lost track of all the rabbit holes!

Most recently, I watched an amazing “fireside chat” with Steve Jobs from the 1997 WWDC (Worldwide Developers Conference). This was right after Jobs had returned to Apple from NeXT. Apple was cratering, on the verge of bankruptcy, and the developer community was fed up with Apple’s false starts and dead ends. This is reflected in some of the questions that were asked.

The whole video is interesting. Jobs’s perspective on where things should be going – network file systems! your home directory anywhere! – and the whole Rhapsody story (which I had totally forgotten) both make for a fascinating historical moment.

How and why we write software has also changed dramatically. One of the selling points of the NeXT software stack (NeXTStep/OpenStep and WebObjects) was the dramatic acceleration of writing custom software. Now everyone wants to write one app that will run anywhere. The NeXT idea was to write for the one platform that would give you the most leverage, not to write to the lowest common denominator of all the platforms you want to support!


Jaron Lanier

I’ve also been bingeing on Jaron Lanier talks and re-reading his book You Are Not a Gadget. His historical and philosophical views about the computer industry – where it started and where it is now going – are both fascinating and distressing. Fascinating because he remembers the joy and hope in those early days, using these new machines for creative, humanistic work; distressing because the direction that everything has gone is quite the opposite. Now it’s all fake AI, machine learning, and surveillance capitalism: spyware posing as apps and services.

Here is a good talk to get you started. I was very moved by his comments (around the 45:00 mark) about the joys and mysteries of being human and connecting with others – an unusual and refreshing position for a technologist. The talk ends after 51 minutes, followed by a strange Q & A: the questions are in German (untranslated) and the answers, from Jaron, are in English. I tried to make sense of this part and failed, but someone who knows both languages would probably enjoy that part of the conversation. ;-)

His explanation of his book Who Owns the Future? – about how “siren servers” (eg Google and Facebook) are sucking the life out of the middle class – including interesting asides about early Marx (in the role of “technology critic”) and the “machine anxiety” of the nineteenth century – is also thought-provoking:


2FA, FIDO, and Yubikeys

After hearing from several people about the insecurity of using SMS as a second factor in two-factor authentication (2FA), I decided to figure out how Yubikeys work – and discovered the FIDO Alliance and all the work that they have been doing.

I like that the FIDO approach separates authentication into two pieces: a remote protocol (the client – your computer or phone – talking to the remote host, or relying party) and a local protocol (the client talking to the authenticator). Having the authenticator be a small device external to the client is a really good idea. It gives the user control over the process – the authenticator only works if the user presses a capacitive sensor – and it physically isolates the private keys from the huge software attack surface of the much more complex computing device that it’s attached to (the client).
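
To make that separation concrete, here is a toy sketch in Python (using the cryptography package). It is nothing like the real CTAP/WebAuthn wire formats – just the shape of the idea: the relying party only ever sees a challenge and a signature, and the authenticator refuses to sign unless the user-presence check passes.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    class ToyAuthenticator:
        """Stands in for the external device that holds the private key."""

        def __init__(self):
            self._private_key = ec.generate_private_key(ec.SECP256R1())

        def public_key(self):
            return self._private_key.public_key()

        def sign(self, challenge, user_present):
            if not user_present:  # the capacitive-sensor check
                raise PermissionError("user presence required")
            return self._private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Remote protocol: the relying party sends the client a fresh challenge.
    challenge = os.urandom(32)

    # Local protocol: the client forwards the challenge to the authenticator.
    authenticator = ToyAuthenticator()
    assertion = authenticator.sign(challenge, user_present=True)

    # The relying party verifies against the public key it saw at registration;
    # verify() raises InvalidSignature if the assertion is bad.
    authenticator.public_key().verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("assertion verified")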

Of course, once I started reading about U2F I realized that some of these small authenticators – possibly including Yubikeys (I don’t know yet) – don’t have local memory to store private keys. What they do instead is “wrap” (ie, encrypt) these local private keys using a symmetric key burned into the device at manufacture, and embed the wrapped private key into the key “handle” – which is sent off to be stored by the relying party on some cloud server! So now the promise that “your private keys don’t leave the device” is quite a bit weaker, and the security of those keys deeply depends on the strength of the keys and ciphers used in the wrapping process. This has me a bit worried, frankly.
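
To illustrate the wrapping trick, here is a rough Python sketch (again using the cryptography package). This is not Yubico’s actual scheme – just the general pattern: a device-unique secret wraps the per-site private key, and the wrapped blob travels inside the key handle that the relying party stores.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The device-unique secret never leaves the authenticator.
    device_secret = AESGCM.generate_key(bit_length=256)

    def make_key_handle(per_site_private_key, rp_id):
        """Wrap the per-site private key so the relying party can store it."""
        nonce = os.urandom(12)
        wrapped = AESGCM(device_secret).encrypt(nonce, per_site_private_key, rp_id)
        return nonce + wrapped  # this blob does leave the device

    def unwrap_key_handle(key_handle, rp_id):
        """At authentication time the relying party sends the handle back."""
        nonce, wrapped = key_handle[:12], key_handle[12:]
        return AESGCM(device_secret).decrypt(nonce, wrapped, rp_id)

    # Round trip; the "private key" here is just stand-in bytes.
    handle = make_key_handle(b"per-site private key bytes", b"example.com")
    assert unwrap_key_handle(handle, b"example.com") == b"per-site private key bytes"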

But overall the FIDO schemes are a huge improvement over the mess we currently have with passwords and data breaches. A pretty good video about the FIDO approach is this talk by Rolf Lindemann of Nok Nok Labs:

Here is another nice intro to security keys:


2019 September 25 13:49

After Bitbucket announced that they are sunsetting Mercurial support, I went through all my Bitbucket repos and converted the Mercurial ones to Git. I thought I would share my experience of Bitbucket, Mercurial, and the conversion process.


2019 September 25 01:19

A quick update to the previous post. After suffering from disk thrashing and terrible performance – lots of spinning beach balls – I decided to re-install High Sierra but onto an HFS+ partition (case-sensitive of course!) rather than APFS. I almost decided to install El Capitan instead but thought it was worth giving High Sierra another try.

It’s been night and day so far, compared to the APFS install. The machine is snappy and works well.

My advice is: Don’t install APFS on spinning disks! Ever!


2019 March 20 00:48

I just upgraded a mid-2011 iMac (iMac12,1 for you Apple nerds) to High Sierra. I thought I would wait until all the bugs were ironed out. Hah. That worked out really well. ;-)

I’m not here to talk about bugs in the software, but about bugs in the “documentation” – in particular, about APFS (the new Apple File System) and its vaunted encryption feature. The information available about how to use it – and especially about how to do a clean install onto an APFS-encrypted disk – is either missing or contradictory.

“Help” on the machine itself is notoriously useless. But finding authoritative information about APFS from Apple – through any channel – is nearly impossible.

I did find a post on Reddit that claimed that FileVault and APFS encryption are the same thing, and that Apple suggests that, to achieve an encrypted APFS install, we install to an unencrypted disk and turn on FileVault after the fact. This didn’t sound right.

People are talking about this, and everyone is confused:

Then I found this post on discussions.apple.com which seemed to highlight everything that was bothering me about the APFS encryption story – especially the confusing System Preferences >> Security & Privacy UI about turning on FileVault. What does this do? Is it the same thing as somehow turning on APFS encryption using Disk Utility or the diskutil apfs command? Who knows!

This support topic about institutional recovery keys sheds some light on the subject. It suggests that Apple wants us to think that “FileVault” and “encrypted disk” are basically the same thing – even though the mechanisms for achieving that encryption differ among CoreStorage, HFS+, and APFS. Conflating related but distinct notions usually leaves people with a false mental model of the system, and that often leads to frustration or worse – maybe even to catastrophic data loss. This isn’t a good place to cut corners on clarity.

Reading the diskutil man page can be an exercise in frustration as well. There is some discussion of “crypto users” and “crypto keys” but no good definition of either. Also, what does -role do? What do the B, R, and V flags mean? There is no explanation.

Is APFS ready for production use on spinning disks? I installed it on the iMac’s 2011-vintage 500GB Seagate drive. (It even has 512-byte sectors! How totally passé!) Was this a mistake? The machine isn’t sluggish exactly, but it’s not screaming along either. Is APFS slowing it down? Or is it just that the machine is memory-limited (I need to upgrade it beyond its current 4GB of RAM)?

Dear Apple: APFS is a great idea, and long overdue, but it’s kind of a mess! The lack of user-data integrity checking – depending instead on the drives and their firmware to do this! – is a huge mistake. Bryan Cantrill has ranted publicly about ZFS’s checksumming (of everything) turning up all kinds of disk firmware bugs that would otherwise have led to data corruption.

Also, please publish better man pages, some clear manuals, security white papers, and “best practices” deployment notes for APFS.

As I sat down to write this – I have only just finished getting my working environment set up again on this machine – I found a bug. In Vim. I had to build my own version of Vim because the version that ships with High Sierra – 8.0.something – seemed to be ignoring my ~/.vim/after/ftplugin/ files. Rather than debug this I just built a recent 8.1 and it seems to work fine. While this isn’t Apple’s fault exactly, the bug was in the version of Vim that they ship. Oh well.


2019 January 30 13:57

Happy New Year!

I wish you and your loved ones all the best in 2019!

For my part, I’m going to start the year off with a bit of good news for software developers, and two somewhat curious discoveries.

First, the good news. GitHub have recently announced that their free tier will include unlimited private repositories, each with up to three collaborators.

I’ve been using GitHub for public projects and Bitbucket for private ones precisely because Bitbucket has for years offered the option of unlimited private repositories in their free tier. However, Bitbucket limits you to five collaborators across all repositories, whereas GitHub will now allow up to three collaborators for each repository. I like both platforms, so I doubt I’ll move everything over, but this change should put some pressure on Bitbucket.

Now for the curious bits.

A few days ago, while down a rabbit hole related to IPv6, I discovered that broadband internet in Latvia costs a fraction of what it costs here in the USA – where prices stay high thanks largely to monopolies granted many years ago by the FCC.

I’m paying Comcast $82 per month for 150 Mbps cable internet service. In Latvia I could get 100 Mbps for €12 or 300 Mbps for €15.
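
For a rough cost-per-megabit comparison – assuming roughly $1.15 to the euro, which is my own approximation – the arithmetic looks like this:

    # Back-of-the-envelope cost per megabit, assuming roughly $1.15 to the euro.
    plans = {
        "Comcast 150 Mbps": (82.00, 150),
        "Latvia 100 Mbps": (12 * 1.15, 100),
        "Latvia 300 Mbps": (15 * 1.15, 300),
    }
    for name, (dollars_per_month, mbps) in plans.items():
        print(f"{name}: ${dollars_per_month / mbps:.2f} per Mbps per month")

That works out to roughly $0.55 per Mbps for the Comcast plan versus about $0.14 and $0.06 per Mbps for the two Latvian plans.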

Comcast have been “advertising” on NPR that they are helping families get connected to the internet. I think this claim would ring truer if decently fast Comcast broadband cost $15/month rather than $80 or more.

Lastly, having decided to change my site generator to generate HTML instead of XHTML, I had to figure out what was going on in the world of “HTML5” – which I have blissfully ignored for the past few years. I was surprised to discover that the HTML standard has actually forked. There are now two standards bodies – the W3C and WHATWG – both working on and publishing HTML standards. The W3C – who had abandoned HTML in favor of XHTML and other XML-based technologies – have jumped back onto the HTML bandwagon; but whereas the WHATWG (an organization of browser vendors) want HTML to be a “living standard”, the W3C want it to be, as it has been in the past, a versioned standard.

The WHATWG has this to say about the difference in approaches (from the HTML spec introduction):

For a number of years, both groups [W3C and WHATWG] then worked together. In 2011, however, the groups came to the conclusion that they had different goals: the W3C wanted to publish a “finished” version of “HTML5”, while the WHATWG wanted to continue working on a Living Standard for HTML, continuously maintaining the specification rather than freezing it in a state with known problems, and adding new features as needed to evolve the platform.

Since then, the WHATWG has been working on this specification (amongst others), and the W3C has been copying fixes made by the WHATWG into their fork of the document (which also has other changes).

You can choose to follow either standard, but since your HTML will be, in all likelihood, consumed by a web browser (or a web browser engine such as Electron), the WHATWG standard is more likely to be accurate and useful.

To compare and contrast: W3C’s latest HTML standard vs WHATWG’s “living” HTML standard


Read the 2018 journal.