2018 February 10 18:23

While I’m in a microblogging mood I thought I would add something about CWEB and literate programming that I didn’t say before.

I’ve tried reading (parts of) the source to TeX and Metafont (by running weave and TeX on the .web source files and looking at the resulting DVI or PostScript file) and found that I was annoyed and distracted by the “fancy” typography. In order to understand the code, I had to translate – in my head – the fancy symbols back into the sequences that I might type at the keyboard.

I’m sure I’m not alone in this. In fact, here is Norman Ramsey, the author of noweb, on the subject:

Most of my programs are edited at least as often as they are read, and it is distracting to have to switch between plain ASCII for editing and fancy fonts and symbols for reading. It is much better for the literate-programming tool to display the code almost exactly as written. (I do believe in typographical distinction for chunk names.)

The problem, fundamentally, is that I’m mostly going to be sitting in the editor – not reading the PostScript/PDF – and I want what I see in that context to be readable. (Hence, I think, the current popularity of “syntax coloring”. It’s a lightweight way to add a tiny bit of semantic annotation to the text without being too intrusive, and without requiring graphics or a GUI.)

I think the idea of switching back and forth – between the “authoring” format and the rendered output – is unnecessary cognitive overhead.

Another thing: WEB and CWEB were developed, in part, so that the exposition of a program to a human reader could be done in a comfortable, sensible pedagogical sequence, rather than in whatever sequence the compiler required. tangle (for WEB and Pascal) and ctangle (for CWEB and C) reorder the code to suit the compiler. This gives the author the freedom to choose any order of exposition.
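
(To make the reordering concrete, here’s a tiny, made-up CWEB fragment – not from Knuth’s sources – that shows the trick. ctangle starts from the unnamed @c section and expands the named sections in the order they are used, not the order they are written, so the #include can come last in the exposition but first in the tangled .c file.)

    @* A tiny example.  The unnamed section below is the root; everything
    else can be presented in whatever order suits the reader.

    @c
    @<Header files@>@;
    @<The main program@>@;

    @ We can explain the main program first, even though it relies on a
    header that is only introduced later in the source.

    @<The main program@>=
    int main(void)
    {
      printf("Hello from ctangle!\n");
      return 0;
    }

    @ The include appears last in the exposition, but ctangle emits it
    first, because that is where the root section asks for it.

    @<Header files@>=
    #include <stdio.h>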

I’ve mostly been writing Forth code for the last few years, and Forth interpreters and compilers tend to be one-pass, so it’s difficult to do forward references. Hence, one tends to build things bottom-up. Because of the granular nature of Forth (lots of small words that do one thing) it’s also easy to test bottom-up. This has the huge advantage that at each layer you’re building on a lower layer that has already been tested, and each layer of your language has richer semantics. It’s like a layer cake of domain-specific languages (DSLs). This is one of the “features” of Forth that makes it so powerful. (Of course, one could write C code the same way, but Forth has the advantage that the syntax (what little there is) is extensible, and there is no distinction between system-level and application-level code, nor is there a distinction between “built-in” and user-written words. Everything is made of the same stuff.)

Forth is really a meta-language, rather than a language. One tends to start with the built-in words and then build an application-specific (domain-specific) vocabulary in which the rest of the system is written. But again, what’s strange about this is that it’s a continuum. Every word you write gets you closer to the perfect set of words for your application (if you’re doing it right).

So why this long aside?

Bottom-up is also, I think, a great order for exposition/exegesis/explanation to a human reader. You bring them along with you as you build your DSL – or layers of DSLs.

And so it has always seemed to me that Forth doesn’t really need special “literate programming” tools. If written well, and commented well, Forth code can naturally be literate code.
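
Here, for instance, is a tiny made-up sketch (not from any real project) of what I mean: each definition is short, carries a stack comment, and builds only on the words above it, so the commentary and the code tell the same bottom-up story.

    \ Layer 1: raw arithmetic.  Convert a Celsius temperature to Fahrenheit.
    : c>f   ( celsius -- fahrenheit )   9 *  5 /  32 + ;

    \ Layer 2: output, in the application's own vocabulary.
    : .degrees   ( fahrenheit -- )   .  ." degrees F" ;

    \ Layer 3: the top of the layer cake reads almost like prose.
    : report-temperature   ( celsius -- )   c>f .degrees ;

    \ Trying it out:
    \   100 report-temperature   ( prints: 212 degrees F )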


2018 February 09 17:17

Michael Fogus (author of Functional JavaScript) a few years ago published a good list of CS/programming papers worth reading (at least twice). I thought I’d include it here. His list includes Out of the Tar Pit, which I was just thinking about; I came across his post while searching for a PDF of that paper.


2018 February 09 12:20

Last night I watched two interesting talks by George Neville-Neil about using FreeBSD in teaching and research. The first is about using FreeBSD and DTrace to “peer inside” a running operating system, in order to learn its workings; the second is about using FreeBSD as a substrate for research.


2018 February 08 19:12

Latest rabbit hole: from RISC-V to MMIX via the Zork Z-machine, the Crowther/Woods Adventure game, and Knuth’s CWEB.

How did it happen?

I’m a fan of RISC-V, so I watched this video – hopefully the first in a series! – about building a RISC-V processor using modern “TTL” logic chips:

Robert does a great job of explaining the RISC-V instruction set and his design choices for the registers. Too bad he’s concentrating on the least interesting part of RISC-V. Once he starts talking about the instruction decoder and the ALU, it should get interesting.

I enjoyed the video, so I decided to see what else he’s been up to. I found a video about building a CPU in an FPGA. Sounds interesting, right?

Part way through that video – it turned out to be about implementing the Zork Z-machine – I decided that the Z-machine was too complicated. Writing an adventure in Forth would be much more interesting. Hmm – where is the source for the Crowther/Woods Adventure game anyway?

The first version I found was a port (from Don Woods’s FORTRAN code) by Don Knuth. Knuth’s version is written in CWEB, his literate programming tool for C.

I had forgotten that there was another version of Adventure, written by Jim Gillogly, that used to be a part of every BSD system. I’m not sure about the other BSDs, but FreeBSD got rid of most of the games a number of years ago. DragonFlyBSD still has the Gillogly version of Adventure in its base system.

Thinking it would be fun to try Knuth’s version, I went to find a copy of CWEB. In order to compile a program written using CWEB you need ctangle, a tool that extracts the C code in a compiler-friendly form.

Knuth’s CWEB page has a broken link to the CWEB source. I ended up downloading CWEB version 3.64c from CTAN.

You have to be careful untarring the CWEB source. Unlike most source packages, cweb.tar.gz does not create a subdirectory when untarred. You have to do that yourself. Compiling with a recent version of GCC (I’m using 6.4.0) generates a lot of warnings. (There is a patched version of CWEB on GitHub.)

I didn’t bother to install it. After gunzipping Knuth’s advent.w.gz I pointed ctangle at it, got a .c file, and compiled that. (More GCC warnings.)

I think, if I were going down this path again, I would instead try to build the Gillogly BSD version.

However! While poking around on Knuth’s homepage I rediscovered MMIX. I may, some time in the future, write a muforth target compiler for MMIX, for two reasons:

We’ll see if this happens. ;-)


2018 February 01 01:08

I’ve just added support for asciinema and asciicasts to my web site generator! Here is a simple example, recorded on my Acer Chromebook 14.

And here I am doing a pointless “demo” of my Lua-based static site generator (the engine behind this site and the muforth site):


2018 January 30 19:40

I got an email this morning from Google, announcing that the new Search Console (BETA!) will solve all of my problems.

(Search Console is what Google now calls what used to be called Google Webmaster Tools.)

Yes, I was “excited” by the idea that I could finally learn why Google has been refusing to index a good chunk of this site.

I feel like I’ve been doing everything right. Link rel=canonical metadata? Check. Trailing slash on all URLs? Check. Sitemap.xml? Check. But nothing seemed to help. Every time I checked, a quarter to a third of the pages on the site were not indexed.

So I checked out the new Search Console. Sure enough, 45 of my pages are not in the index. But no explanation why. However, I can request, one-by-one, to have the pages indexed! But but but... I’ve already submitted a carefully-crafted sitemap file that describes exactly all the URLs I would like indexed!

Several of the URLs – page URLs lacking a trailing slash, which I don’t use at all on my site – have “crawl problems” because they exhibit a “redirect”. Yes: GitHub Pages is (correctly) issuing 301 (permanent) redirects for these URLs. But Google refuses to follow them for a random fraction of my site?!?

Oh, and no surprise: the Search Console (BETA!) is the usual Google user-interface dumpster fire.

“Hey Google! Code much?”

I was thinking, before this announcement, that I might switch to relying on Yahoo/Bing instead for my “webmaster tools” experience.

Given what I’ve just seen – and even notwithstanding my terrible previous experience with Bing Webmaster Tools – maybe that isn’t a bad idea.

<sigh>


2018 January 28 22:47

I should also mention that I’ve made some aesthetic changes to the site, bringing it closer in style to the muforth site than previously. In fact, except for the choice of heading font, they are almost indistinguishable. (This is, IMHO, a bug and not a feature. Each site needs its own color and design scheme.)

I hope that the change away from the beautiful but somewhat hard-to-read Alegreya typefaces – I was using the serif and sans – is an improvement. It makes me sad to admit failure, but perhaps Alegreya is better suited to print than to digital screens.

The current trio of fonts is: Encode Sans (normal and condensed) for headings, Droid Serif for the body text, and Fira Mono for code and other monospaced text.

(Hmm – just noticed that there is an Encode Sans semi-condensed. Maybe I should try that too...)

I hope these changes improve the readability of the site.

And – and this is just weird – I noticed, as I was searching for a URL to link to, that all the Droid fonts have vanished from Google Fonts and are now only available from Monotype! And yet, I’m still using Droid Serif, here and on the muforth site... Are the fonts suddenly going to stop working? Do I need to start searching for a new body font?!? Argh!


2018 January 28 21:35

I decided to publish an identicon demo that I put together a while ago. It’s totally out of context (I’ve been meaning to publish a page explaining the history and current usage of identicons, but haven’t yet) but you might find it intriguing, or perhaps even beautiful.

Also, sharing it might shame me into publishing my other thoughts on the subject!


2018 January 26 23:22

Happy New Year!

I thought I’d ring in 2018 by deleting a third of my web site.

Google can’t seem to be bothered to index all of it (they only tell me that they have indexed 110-ish out of 140-ish pages, but won’t tell me which ones) and there was a lot of old mossy junk in there that I don’t care about anymore, and that no one else probably ever cared about.

I doubt I’m done purging, but I thought I would make a first pass at an early spring cleaning.

I also hope to write more in 2018! It’s been a bit ridiculous. I have several long rants I want to write that I can never seem to get around to. I’d love to change that this year. A weekly rant? Wouldn’t that be nice?

Maybe none of this matters, anyway. But it’s something to do. ;-)


Read the 2017 journal.