Today, exactly 50 years after Douglas Engelbart’s Great Demo, I’m watching the recordings in their entirety for the first time. Sure, I’ve seen parts of it many times and easily gained an intuitive understanding of what some of NLS’s capabilities are and how it performed them, but I almost have to apologize to people in the hypertext community that I never watched the recordings in full. There are some reasons for that: I think others can’t imagine how difficult and time-consuming it is for me (keep in mind that it demands active, focused, prime/quality time in front of the computer + Internet connection, which is usually reserved for productive work). For me, the 1:30 hours translate into many hours, and I find it very difficult to watch things happening on a screen produced by a system I don’t have and can’t have, so it’s impossible to arrive at some understanding of, or feel for, how and why the things happen and work the way they do – and even figuring that out is outright irrelevant in the absence of the system, of course. It’s more like passively watching/experiencing/consuming an art installation that has little to do with any practical problem solving or tools. Even more so, the obscure hardware, programming languages and meta compilers don’t exist any more, so it’s like reading an old handbook on writing assembler code for one particular industrial machine that went extinct a long time ago – an exercise that can only have historical/archeological value.
On the other hand, it feels great to see some of my technical intuition confirmed; to watch the recording in a dark room with the typical black and white shining bright from the screen as if it came from the projector at the San Francisco Civic Auditorium during the Fall Joint Computer Conference (but also alone and with quite some sadness, in a dark and cold winter month – the legacy feel of the demo is very sterile anyway, not nearly as lively as it must have been in color and with an audience); to be able to do it on the exact same day 50 years later; to not care much about the confused claims, misunderstandings and future goals voiced during the anniversary festivities; and most importantly, to find that – as assumed – the text capabilities shouldn’t be too difficult to build. I find it truly surprising that most people from today’s hypertext “community” (?) I’ve encountered so far seem to be pretty unaware of or confused about the mechanics of NLS, despite it being a primary subject of study and the 1968 demo recording expected watching. Anyway, what follows are a few notes from watching the recording.
1/3, 0:02 Note that a line above a character means that it is uppercase. Output seems to be upper case only (but then, how did they get the line above the character?), which of course leads to a lot of text ending up being written case-insensitively (lower case), because if you don’t see it anyway, why type it? The printed ARPANET Network Information Center Journal, on the other hand, is case-sensitive.
1/3, 5:18 FILE OWNER ENGELBART, CREATED 12/7/68. That’s probably his user account created on/for the demo environment (longer display name and short initials DCE seem to be part of the account information).
1/3, 5:34 How does the system know that the copy operation should create a second statement, and not just append the text of the first statement to the end of the first statement, leaving the result to be one statement? Either he did it explicitly by using a statement copy command instead of a text copy command, or a statement is identified by a blank line (with an implicit blank line at the end of the text), so the system has no trouble identifying statements for outline collapse and automatic numbering etc.
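If the blank-line guess holds, statement identification is a trivial parse. A minimal sketch in Python (the function and its rules are my reconstruction of that guess, not anything from NLS):

```python
def split_statements(text):
    """Split text into "statements" (paragraphs) separated by blank lines,
    mirroring the guess that a statement is delimited by a blank line,
    with an implicit blank line at the end of the text."""
    statements = []
    current = []
    for line in text.splitlines() + [""]:  # implicit trailing blank line
        if line.strip():
            current.append(line)
        elif current:
            statements.append(" ".join(current))
            current = []
    return statements

shopping = "SOUP\n\nASPIRIN\n\nPRODUCE"
print(split_statements(shopping))  # → ['SOUP', 'ASPIRIN', 'PRODUCE']
```

With that rule, outline collapse and automatic numbering fall out for free, because the system always knows where one statement ends and the next begins.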
1/3, 6:45 Done manually, all in text. Up to this point, it seems like NLS has no knowledge about the “statement” (I’m pretty sure that it is just their synonym for “sentence”/“paragraph”, indicated by the outline collapse, which didn’t only include the start of the “statement”/sentence but also the first few words of the paragraph – as many of them as the terminal width can hold on a single line), so formatting isn’t done as a semantic ViewSpec here, with the system recognizing an (outline) title up to the colon.
1/3, 8:05 I don’t understand how he accidentally deleted everything – did he select all of the text, or what?
1/3, 8:16 So there’s no versioning except for saving manually. Maybe saved states are versioned – that’s how the NIC Journal did it – but no change tracking or undo on the character level, as it seems. Maybe they added it later; this is the early 1968 implementation.
1/3, 9:45 The command is INSERT BRANCH; maybe it went to the end of the list. At 1/3, 9:58, the command changes to INSERT CHARACTER, and “PRODUCE” goes to where his cursor was before moving “ASPIRIN” down, or to a new, extra click (maybe done to insert the new “branch” (= “statement”/“paragraph”/collapsed outline title as the “list item”) right there). Just guessing. Update: at 1/3, 10:43, it becomes clear that the first INSERT BRANCH of “PRODUCE” went under the statement “SOUP” as a sub-statement. At 1/3, 11:18 it becomes clear that this is the default way the INSERT BRANCH operation/command works.
1/3, 11:09 What happens if we run out of letters in the alphabet to number the second level?
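One plausible answer (pure speculation on my part, the demo doesn’t show it) is spreadsheet-style bijective lettering, where Z is followed by AA:

```python
def letter_label(n):
    """Bijective base-26 label for the 1-based index n: 1→A, 26→Z, 27→AA.
    One possible answer to "what comes after Z"; whether NLS actually
    did this is my guess, not something shown in the demo."""
    label = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        label = chr(ord("A") + rem) + label
    return label

print([letter_label(i) for i in (1, 2, 26, 27, 28)])  # → ['A', 'B', 'Z', 'AA', 'AB']
```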
1/3, 13:10 That means that the numbers/names can never be changed once they have been initially assigned, can they? Not so much because they’re not in the data of the target and are therefore calculated by the system, but because a merely textual reference to a particular number doesn’t know that it is a number – other than by the user invoking a jump command on it – so it can’t be found/updated if the number of the target changes. It would also be bad for remembering them. Update: at 1/3, 16:58, after re-arranging the list, 2A4 doesn’t exist any more.
1/3, 14:38 Note that the syntax for encoding names was indicated at 1/3, 5:17 with “NAME DELIMITERS ARE ‘[’ AND ‘)’”, probably in this awkward combination to leave regular [] and () free for use in the text. It seems to be a per-file setting and could probably be overwritten if [) was already used by something else in the text, so for jumping to names/labels the delimiters don’t matter, but the author(s) need to be aware of them when writing.
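The extraction itself would then be a simple delimiter scan with the per-file delimiters as parameters. A sketch of that idea in Python (my reconstruction, not NLS code):

```python
import re

def find_names(text, open_delim="[", close_delim=")"):
    """Extract "names"/labels enclosed in the per-file delimiters
    (defaults match the demo's NAME DELIMITERS ARE '[' AND ')').
    The per-file setting is just a pair of parameters here."""
    pattern = (re.escape(open_delim)
               + r"([^" + re.escape(close_delim) + r"]*)"
               + re.escape(close_delim))
    return re.findall(pattern, text)

print(find_names("[soup) get two cans, see [produce) below"))  # → ['soup', 'produce']
```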
1/3, 20:49 So it must be a pixel/raster screen/display (in contrast to text terminals/screens of dedicated word processor machines), because there was the shopping graphic, the mouse cursor and different font sizes. Text is reflowable.
1/3, 22:15 So a “link” is basically a “name”/“label” + ViewSpec settings/instructions to be applied onto the target.
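Under that reading, a link could be modeled as nothing more than a target name plus the ViewSpec codes to apply after the jump. A hypothetical sketch (all names and the view representation here are invented for illustration):

```python
# Hypothetical model: a link is a target "name"/"label" plus ViewSpec
# codes that get applied to the view on arrival at the target.
class Link:
    def __init__(self, target_name, viewspec=""):
        self.target_name = target_name
        self.viewspec = viewspec

def follow(link, names, view):
    """Jump to the named statement, then apply the link's ViewSpec codes."""
    view["position"] = names[link.target_name]  # jump to the target
    for code in link.viewspec:                  # each code toggles a view setting
        view["codes"].add(code)
    return view

names = {"produce": "2A"}  # name → statement number, invented for the example
view = follow(Link("produce", "t"), names, {"position": None, "codes": set()})
print(view)  # → {'position': '2A', 'codes': {'t'}}
```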
1/3, 28:32 One has to wonder whether the text is still editable, still ViewSpec-able, still a file; where the lines belong; whether they’re absolutely or relatively positioned; how one would draw a line; and what happens if the text changes and causes problems for the positioning of the lines.
1/3, 28:48 Note how the lines at “COMPOSE, STUDY, MODIFY” change while moving to the next view (same at 1/3, 29:18); it’s probably similar to a “presentation slide” in today’s terms (LibreOffice Impress) or a static image, with clickable links in it (HyperCard, XHTML image maps, etc.).
1/3, 32:03 It’s funny how their mice look a little bit like the brick attached to the pen ;-).
2/3, 13:18 The picture didn’t scale the text, did it? On the other hand, Jeff Rulifson said that the “labels” are also “statement names” (probably manually inserted into the picture – or are they jump link “names”/“labels”, or extracted live from the “statement”/paragraph/outline title?).
2/3, 22:14 So the code is organized and written exactly in the same way the user interacts with it, instead of both separated from each other and making the code and inner workings inaccessible to the user and the developers/programmers/implementers as well. Imagine you’re clicking through a GUI and a separate code editor window would narrow down or bring up what parts of code get executed, instead of stepping through things in a disconnected debugger with breakpoints etc. as we do today.
2/3, 26:23 Too bad that I split text and definition areas vertically and not horizontally in my glossary capability prototype :-(. A horizontal splitter solves an issue with left-to-right, top-to-bottom reading: when reading left to right, the eye expects the text to flow horizontally, and a vertical splitter in the center is just confusing because it breaks away from the expected direction of movement, raising the question of whether the top, the left or the right side should be read first. With a horizontal splitter, left-to-right is never interrupted, and top-to-bottom isn’t either. On the other hand, vertical splitting is better for alignment, for parallel matching texts (because the lines of use and definition can be aligned, whereas with a horizontal splitter the eye has to match and jump between separated lines all the time). It could be a ViewSpec option to define what’s preferred. Another option is inline expansion (or popups/overlays, if you’re not annoyed by them).
3/3, 8:12 They’re doing it in analog, aren’t they? The resizing might be an indicator. Or does the computer actually stream video footage data to a raster/pixel canvas? Sure, Engelbart might see Bill on his screen, but that’s probably because of the 3/3, 7:47 “hardware-wise available feature”, i.e. a camera overlay on the display. Mouse pointers and the NLS screen might be shared via the computer, but as both look at the same shared screen, the question is whether Bill sees himself from the relay of Doug’s display, or sees Doug in exchange, or empty space (as I would suspect). The fact that NLS needs to make space, in terms of windowing, for the camera overlay is another indicator that NLS isn’t aware of the camera feed and doesn’t reflow the text automatically based on the canvas window size (and there’s no indication that the resizing of the camera overlay is done by a command or anything; those are probably analog controls of the camera/display, very similar to how they overlaid Doug’s face, hands and mouse feeds earlier).
3/3, 14:16 OK, now they’re drawing free-form – or is the text snapping to a grid? Is it still a text file, and what would be the order of the text nodes – based on pixel position or on order of creation? What ViewSpecs can be executed on such an image or on the text within it – the same ViewSpec commands, addition of “names”/“labels”, applying the Content Analyzer, or different operations? It might be just a free-form drawing canvas with no or little relation to the usual NLS text capabilities (but making text that’s probably overlaid/rendered above the drawing into a link is clearly possible, as demonstrated with the shopping list map at 1/3, 16:08, so the text is there and positioned as an entity/node and not rastered into static pixels; the lines/drawings are either stored as a rasterized/pixel static image, or retain their attributes/existence as nodes/entities, similar to SVG/path/scene/object rendering in contrast to framebuffer-based techniques). This distinction, or the potential lack of it, between sole text controls/interaction and the concerns of visual drawing is quite important for understanding the nature/design of a system and the capabilities that are supported or prevented by it.
3/3, 16:41 So a “catalog” is just the collapsed outline list of everything, or did they create/curate/maintain that manually? “Keywords” seem to be the xFiles indirect pointers (we’re learning about indirect retrieval still) that can make collections/lists regardless of the actual full catalog list in the file, structure or order (“keywords” might be a virtual list in comparison to the “catalog” as the actual list in a file).
3/3, 18:00 Ah, that’s how they do it. So the “names”/labels at the start are the already known link targets (anchors in XHTML), and then they add their own, individual keyword names to the “statement”/paragraph, so they can filter/search for all “statements”/paragraphs that were marked with the same keyword. The question is whether that happens on a per-user basis (but what if they want to share their keyword assignments?) or whether keywords have to be globally unique to avoid conflicts. They seem to be ordinary “names”/labels anyway. “Keywords” seem to be an operation/command to add/assign “names”/labels quickly by simply clicking, without the need to write them out every time (and maybe without changing the file for all the other users, instead just adding the personal, user-based keyword to the file metadata (xFile structure); or every user can see all the keywords by the others and can then decide to add his own or accept the existing ones, as demonstrated by Bill accepting the official “name”/label/keyword present at the beginning of the “statement”/paragraph).
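Whatever the storage details, the retrieval side of that guess is simple: statements carry a set of labels, and a keyword search filters on them. A sketch under that assumption (the data layout is mine, not the xFile structure):

```python
def filter_by_keyword(statements, keyword):
    """Return all statements tagged with the given keyword, modeling the
    guess that "keywords" are ordinary names/labels used as filter terms.
    Each statement is a (text, set-of-labels) pair – my invented layout."""
    return [text for text, labels in statements if keyword in labels]

statements = [
    ("Meeting notes", {"agenda", "network"}),
    ("Compiler docs", {"network"}),
    ("Shopping list", {"produce"}),
]
print(filter_by_keyword(statements, "network"))  # → ['Meeting notes', 'Compiler docs']
```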
3/3, 19:16 Looks like, by clicking, they draw a circle over the currently selected node, or replace the clicked character with an “O”.
3/3, 19:18 Being able to weight keywords probably makes them different from ordinary, unweighted, unweightable “names”/labels.
So, after having watched the entire recording, it’s very apparent once again that the demonstrated capabilities are very, very simple. Most of them are artificially constructed by the users themselves, using just a few simple, basic, fundamental commands and ViewSpec settings, which in clever combination allow the useful structuring and navigation of texts. In fact, almost all of it seems to be made up of text. It’s very telling that the first example and introduction to the whole system is a copy operation with the help of markers, a long forgotten and abandoned technique from the earlier era of dedicated word processor machines. In general, without fancy graphics because of the lack of computing power, most innovation had to go into text, so the demo shows interactions based around concepts like interpreting a click on a character as selecting the entire word or the adjacent whitespace, up to the next character of a different type. None of that is coincidence – take Ted Nelson’s JOT, for example. The separate command line, which lists the input so far, narrows down the options and informs the user about what’s currently going on as a sort of interactive/live documentation, helps to not confuse writing activities with system interaction/navigation/operation.
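That click-to-select behavior is easy to reconstruct: expand the clicked position to the surrounding run of characters of the same class. A sketch of the idea (my reconstruction, with coarse character classes):

```python
def char_class(c):
    """Coarse character classes: letters/digits, whitespace, everything else."""
    if c.isalnum():
        return "word"
    if c.isspace():
        return "space"
    return "other"

def select_run(text, index):
    """Expand a click at `index` to the whole run of characters of the
    same class, i.e. a click anywhere in a word selects the word, a click
    in whitespace selects the whitespace run."""
    cls = char_class(text[index])
    start = index
    while start > 0 and char_class(text[start - 1]) == cls:
        start -= 1
    end = index + 1
    while end < len(text) and char_class(text[end]) == cls:
        end += 1
    return text[start:end]

print(select_run("INSERT BRANCH", 9))  # → 'BRANCH'
```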
WORDPRESS AND THE WEB/BROWSERS/SERVERS IN GENERAL ARE SUCH A SHIT, AT 3:50AM I AGAIN LOST TEXT I’VE TYPED INTO IT, maybe just because of what’s new/different in the 5.0 update or the new stupid behavior of visual vs. “code” editor (very likely it was the autop “feature” again, because a closing </p> was missing – as I would have typed it later – and WordPress tried to be clever by closing or removing the paragraph automatically, removing it together with my text; or it was the so-called “autosave” feature that only knows about intervals of a minute and not about window.sessionStorage – just because it’s a totally confused joke of an “editor” to begin with), and it was of course a total mistake of mine to ever entrust it with any writing to begin with, so I shouldn’t post anything to this instance again and should replace it completely. This post had a nice end and conclusion about why there can’t be any excuse for not having decent text capabilities 50 years later, despite nobody wanting them any more for various reasons + a lot of other more exciting/lucrative opportunities presenting themselves. Its loss demands that tools be built which make sure this can never happen again.
My only two contributions to Doug@50/#thedemoat50: the glossary prototype (wait until it’s loaded: a completion message will appear after ~1-2 minutes, then click the light-blue italic words) and tracking changes on the character level while writing + visualizing them. It’s not too difficult to imagine both capabilities being combined/integrated, what category of problems might arise and which sort of solutions would be required. More on the background of the glossary capability (there are also some blog posts: 1, 2 and 3) and the versioning capability. Try them yourself: the glossary capability (might not work locally because of the stupid Same-Origin Policy restriction in browsers) and the change tracking text editor + history visualization (requires Java 1.6 or higher).
It’s pretty obvious that these are my personal results/solutions and not the outcome of a collaborative group effort. I just wasted a full year trying to find a single individual in the contemporary hypertext community who would be interested in discussing, designing, building or using a capability infrastructure that powers a hypertext system architecture, likely because of incompatible differences in approach, paradigms, capacity, perspectives and goals.