Personal Hypertext Report #4

On 2018-07-30, I continued to work on the Change Tracking Text Editor. My approach was to port the C++/Qt4 code to JavaFX, but it turned out that the events differ from each other and do not follow the same paradigm. Herbert Schildt’s book already indicated that there’s a difference between the onKeyPressed and the onKeyTyped event, namely that the latter only triggers for key presses that actually change the text. Now, it turned out that the Backspace and Delete keys also trigger the onKeyTyped event, but then the pressed character cannot be obtained via KeyEvent.getCharacter(). Note that onKeyReleased is usually not very useful for text editor programming, because the user can hold a key pressed and release it only after many minutes, and as one doesn’t want to calculate the timing of the individual character additions from that, it’s usually of no interest.
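
To illustrate the distinction, here is a minimal JavaFX sketch; the class name KeyEventProbe and the println placeholders are my own stand-ins, not the editor’s actual recording code. Backspace and Delete are identified in onKeyPressed via the KeyCode, while printable input is taken from onKeyTyped via getCharacter().

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.control.TextArea;
    import javafx.scene.input.KeyCode;
    import javafx.stage.Stage;

    public class KeyEventProbe extends Application {
        @Override
        public void start(Stage stage) {
            TextArea textArea = new TextArea();

            textArea.setOnKeyPressed(event -> {
                // Backspace/Delete are identified here via the KeyCode, because
                // getCharacter() is not usable for them in the KEY_TYPED handler.
                if (event.getCode() == KeyCode.BACK_SPACE || event.getCode() == KeyCode.DELETE) {
                    System.out.println("deletion key: " + event.getCode());
                }
            });

            textArea.setOnKeyTyped(event -> {
                String character = event.getCharacter();
                // Skip control characters so a Backspace/Delete press isn't recorded twice.
                if (!character.isEmpty() && !Character.isISOControl(character.charAt(0))) {
                    System.out.println("typed character: \"" + character + "\"");
                }
            });

            stage.setScene(new Scene(textArea, 400.0, 300.0));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }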

On 2018-08-11, I first found a way to handle the Backspace and Delete key press events separately from ordinary key press events. After that it became apparent that the output is generated incorrectly, and the reason was that both the onKeyPressed and onKeyTyped events are fired before the actual change is enacted, therefore the caret position and the text in the input field are still the old ones, even though the events are named “pressed” and “typed” in past tense. JavaFX is a little bit strange in some ways. This led to off-by-one errors in the output and even to crashes when attempting to extract text beyond the actual text string limits. Because of this, I removed the handling of the “last position” entirely, and the JavaFX editor now simply autosaves what happened up to the current position, with the new key stroke as a separate operation that is not on record yet. This is quite fine, as I wanted to avoid calculating the possible future myself by adding and subtracting typed character lengths, constructing possible future text lengths and so on.
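
A small sketch of what that means in practice, replacing the onKeyTyped handler from the sketch above; the remark about a change recorder is an assumption about how the recording could be structured, the printed values merely demonstrate that the handler still sees the pre-edit state.

    // Replacing the onKeyTyped handler from the sketch above; the printed values
    // show that the event fires before the TextArea applies the change.
    textArea.setOnKeyTyped(event -> {
        // Still the pre-edit state at this point:
        int caretBeforeEdit = textArea.getCaretPosition();
        String textBeforeEdit = textArea.getText();
        System.out.println("caret before edit: " + caretBeforeEdit
                           + ", length before edit: " + textBeforeEdit.length());
        // A change recorder would close the operation ending at caretBeforeEdit here;
        // the key stroke that triggered this event belongs to the next operation,
        // which is not on record yet.
    });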

On 2018-08-12, I added handling for other caret position changes caused by mouse clicks, arrow key navigation and so on. The change event for the caretPosition property is also triggered by each ordinary character key press, so those cases are already handled and needed to be filtered out, which hopefully doesn’t lead to timing issues and therefore incorrect output, as I assume such issues to be, or to contribute to, the reason why the Java and C++/Qt4 implementations don’t work properly. It’s always better to avoid such manual tricks in event handling as much as possible, because they might get in the way of processing interrupts/events. A very quick test looked promising enough for me to type this entire personal FTI report in the editor, hoping that my writing in this actual example won’t get corrupted because of mouse and navigation key movements. During the writing, it looked like the TextArea was inserting spaces at the right edge at the end of the text input field without me typing them. The analysis of the result will happen some time later.
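
A minimal sketch of such filtering, again assuming the textArea from the earlier sketch; the boolean flag is a simplification of whatever the real editor does and may well be subject to exactly the timing concerns mentioned above.

    // Distinguish caret movements caused by mouse clicks or arrow-key navigation
    // from caret movements that are a side effect of ordinary typing.
    final boolean[] changeCausedByTyping = { false };

    textArea.setOnKeyTyped(event -> changeCausedByTyping[0] = true);

    textArea.caretPositionProperty().addListener((observable, oldPosition, newPosition) -> {
        if (changeCausedByTyping[0]) {
            // Caret moved as a side effect of a character key press, already handled.
            changeCausedByTyping[0] = false;
            return;
        }
        System.out.println("caret repositioned from " + oldPosition + " to " + newPosition);
    });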

Quite some time during these weeks is invested in observing the Global Challenges Collaboration Community hosted by Sam Hahn, because there’s the vague possibility that they might decide that they need some kind of Journal, as their materials are unbelievably scattered and almost impossible to curate, and being inspired by Engelbart, they recognize the need; but it’s still unclear how this will turn out, whether they care about other things even more, or whether they’re fine with already existing “solutions” like Sutra, Google Docs, YouTube, Facebook or WordPress (which is some of what they already use; I can work with the latter). My main entry point into their effort is the recordings of their meetings 20180721 GCC Saturday Barn Raising – Sam Day 1 (Google Doc), 20180728 GCC Saturday Barn Raising – Sam Day 2 (Google Doc) and 20180804 GCC Saturday Barn Raising – Sam Day 3 (Google Doc), as I don’t use Facebook, which seems to be the place of their main activities. In my comments on these recordings, I discuss some of my views on the solving of complex, urgent world problems and on human systems, about which I in part remain pessimistic and suspicious. Besides my slow progress on my own little tools for the single user (in lack of a bootstrapping, co-evolving collaborator), I’m basically waiting for 2018-08-24, on which there will be an initial Future of Text Institute call. Until then, I don’t have much to do besides reading, writing and working on tools/capabilities for the single user in solitude. Those are important too, but it’s time not invested in recreating the NLS/ARC Journal or building a Journal for the FTI or one for the GCC, should it later turn out to be of interest to end up with such a capability eventually.

WordPress, as it is, as a piece of web software, shows another deficiency now that I happen to have some more posts on the site: older articles become almost unfindable. It’s obviously designed to push new material, but the list of entries is limited to five, and whatever is in the backlog can only be found again by navigating through the pagination or by careful use of tags and categories, or of course via search, if the random visitor miraculously knows in advance what to search for. The ViewSpecs of an OHS would allow the material to be presented as a long, full list of titles or, even more importantly, as navigable, interconnected hypertext that can be explored based on topics and interest. It’s fine to use WordPress as a mere online data storage facility, but reading the material in the rendered form for the web remains terribly painful and asks for augmentation rather urgently. I hope people don’t mistake my hesitant “approval” of WordPress (in lack of something better) as an encouragement to fiddle with themes or plugins or anything else of that sort; it’s just that WordPress isn’t too bad for publishing plain XHTML while providing APIs to get data up and down again.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Re: A Theoretical Model for Knowledge Work & Symbol Manipulation (4 Aug)

Frode Hegland published a draft “A Theoretical Model for Knowledge Work & Symbol Manipulation (4 Aug)”. Here are some initial responses.

Text isn’t necessarily about knowledge; just think of prose for entertainment. Text can be written in an attempt to trigger emotions, which is hardly an expression of “knowledge”, except in the broadest possible sense, which would make everything “knowledge” after all, so it might help to define the term “knowledge” that’s used to define the term “text”.

It’s not true that writing or authoring necessarily needs to result in linear text, or that the resulting text needs to be frozen. It only must be linear because the printed book is linear, but if the tools weren’t so crappy, one could just as well write non-linear texts, with writing on/for the web or e-mails sometimes turning out to be something in between. The text is “frozen” if writing on it stops, but that doesn’t mean that it can’t start again at any time. The act of publication “freezes” the text in a sense, because traditionally, as soon as another individual obtains a copy, that copy tends not to get updated and therefore needs to be considered as frozen. But what happens if there is an update path? How do we think about a text being frozen by being published, while another version based on the text is published every day? Can we call it “frozen” if the so-called “reader” (the one who obtained a copy) starts to change his copy? So to really freeze a text, it needs to be printed onto paper or into a certain book edition (we freeze it by making it non-digital), or to be put into a version control system, or to be fixed as a publication with a timestamp at the Internet Archive, the latter two only making it canonical because people can still obtain copies, change them and publish the result as well.

There might be many “hypertext communities”, but claiming that “wider access addressability” relies solely on web URLs, or that the file name has anything to do with the format or encoding, or that “addressability” is only implemented by fragment identifiers and corresponding anchors in HTML web documents, is far from how hypertext systems work and therefore doesn’t find consensus as a reasonable approach.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Personal Hypertext Report #3

The Future of Text Institute already lists a series of projects that happen to be of interest. I’ll try to align my activities with these where it makes sense. I agree that the first step should focus on retrieval and reading, because we can fake authoring for quite some time and the WordPress editor is already available as a writing interface, but reading is reading, so there’s no way around consuming what we produce, otherwise it would simply remain unknown. If the Institute didn’t encourage decentralized blogging right from the start, a centralized Journal would make it much easier to bootstrap in the traditional way, with authoring tools as the first step and a central place to learn about published material, but this isn’t a big issue; it can be done either way.

The Journal project proposal heavily relies on WordPress and RSS, which are both nice in some ways, but come with a bag of problems of their own. RSS exists to optimize the retrieval of the newest resources, which might not be very helpful for navigating interconnected text. WordPress is a pretty decent server package, but it’s primarily geared towards browser consumption, which isn’t very helpful and can cause serious conflicts. I also want to point out that we already failed once before to implement a HyperGlossary with WordPress (or did we succeed individually, just not collectively?), maybe due to social reasons and not technical ones. Have a look at Frode Hegland’s implementation (proposal, example, interaction), Gyuri Lajos’ proposal, Christopher Gutteridge’s implementation and mine (proposal, example, interaction). Now, in the context of the FTI, it’s obvious that we do not necessarily need to come up with a single approach to implementing a glossary capability, but it is clearly demonstrated that everybody only implements their own thing the way it fits best into their already existing project, with no interoperability/collaboration going on.

There are more problems with the project proposals: using some commercial offers can be OK depending on the circumstances; owning the system and providing it as a commercial offer are distinct things; who knows if propaganda and fake news are useful categories to think about; ViewSpecs not only need to be text operations but could also be specifications for graphical display/rendering; PDF should not even be mentioned for anything other than printing, regardless of whatever attempts Adobe comes up with to make it “interactive” or “rich”; it’s not true that citing and high-resolution linking in e-publications is impossible because of commercial barriers (these are proprietary, restrictive barriers that can be present even with gratis or theoretically libre-licensed publications); the claim that there is a need to start an open addressing scheme for proprietary publications (they deliberately don’t want to be cited or linked, so just leave them alone and respect their wish to stay irrelevant!); and the limited use of “open sourcing” an application that is trapped in a proprietary host ecosystem (algorithms, artwork and other parts might be useful to the libre-free software world, but then again, “open sourcing” something for the wrong reasons might lead to disappointment). Still, there’s no need to arrive at full agreement within the FTI on every point listed here, and it needs to be reiterated that many of those proposals are in their first draft, so we’ll probably discuss them in much more detail and certainly invite further comments as well.

The “Links in Images” capability is very easy to do, as I tried to indicate in my brief HyperCard investigation; even in native programming I can’t imagine it being too hard. At the same time, it’s probably far from being the most important feature for bootstrapping systems that primarily deal with text.

The Journal project may or may not be influenced by the recent analysis of the ARC Journal files from SRI, or only by the earlier published material or accounts by Doug himself; in any case, it’s an open question to what extent the FTI should try to replicate the ARC Journal in modern technology, and where it should take the liberty to expand, limit or adjust the example of this earlier system. I abandoned the idea as it became clear in the Doug@50 group that I would be alone in replicating the Journal capability, and while it is still very useful for a single individual to have it, there are significant differences in how to go about it depending on whether it is for personal use or to be used by a group. The latter would greatly benefit from bootstrapping, which you can’t do just on your own. So I went back to basic capabilities, but a Journal for the FTI would equally make sense as the first step. I already built a few components that could help with this task: a primitive StAX implementation for programming languages that don’t come with one (C++, JavaScript) for properly reading RSS, an EDL-based generic retrieval mechanism, a WordPress retriever and a WordPress uploader, all of which would require more work or re-work for this project, but what major obstacle would prevent us from just building a first prototype and then bootstrapping from there? A few capabilities/components are still missing, for example local storage management, federation of sources and a dispatcher based on semantics to invoke a basic ViewSpec mechanism. Other approaches might of course be even more viable; this is just an early offer/suggestion.
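
For the Java side, no custom StAX implementation is needed, since the platform already ships one in javax.xml.stream; a minimal sketch of pulling item titles out of an RSS feed with it could look as follows (the feed URL is a placeholder and error handling is omitted).

    import java.net.URL;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class RssTitleReader {
        public static void main(String[] args) throws Exception {
            // Placeholder feed address, to be replaced with a real RSS source.
            URL feed = new URL("https://example.com/feed/");
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader = factory.createXMLStreamReader(feed.openStream());

            boolean inItem = false;

            while (reader.hasNext()) {
                int event = reader.next();

                if (event == XMLStreamConstants.START_ELEMENT) {
                    if (reader.getLocalName().equals("item")) {
                        inItem = true;
                    } else if (inItem && reader.getLocalName().equals("title")) {
                        // getElementText() consumes the text content up to the end tag.
                        System.out.println(reader.getElementText());
                    }
                } else if (event == XMLStreamConstants.END_ELEMENT
                           && reader.getLocalName().equals("item")) {
                    inItem = false;
                }
            }

            reader.close();
        }
    }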

For the FTI Journal, I plan to publish my “journal entries” in some form (be it a WordPress instance as it is now or raw resources) under my skreutzer.de domain to make it clear that those are personal opinions, not official statements by the FTI (if there can be such a thing), and hypertext-systems.org might provide federation services beyond my own material, which also wouldn’t necessarily be identical to the official structuring of the FTI corpus, over which I neither have any control nor need any in a decent federation.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Personal Hypertext Report #2

After several months, I finally reached the chapter in “Introducing JavaFX 8 Programming” by Herbert Schildt that addresses the TextArea of the JavaFX GUI library, keyboard and mouse input events already having been covered. With this, on a productive weekend, I guess I can come up with another prototype of the Change Tracking Text Editor, making it the third attempt. In parallel, I realized that it would be helpful for the two previous failed attempts to make a graphical “Change Instructions Navigator”, so it becomes a whole lot easier to reconstruct the individual versions and hunt down the spots where the recording actually goes wrong. I doubt that this will be key to finding the solution if the problem is related to timing, for example by revealing a programming error occurring in a rare constellation, but it might help to arrive at a clearer understanding of what the issue is, and the tool will be useful in general, as the change instruction recordings need to be replayed and navigated in a granular, selective way, not only reconstructing the last version as the change_instructions_executor_1 does. It’s clear that the capability to reconstruct every single version (not only on the instruction sequence level, but additionally for every individual character event) should be made available to headless workflows in terminal mode and be made programmable for automation, but the navigator as a GUI component has the objective to hold state for the session and ensure that the version navigation is performed fast, as it might be too expensive to call a stateless workflow that has to reconstruct all versions from the beginning up to the desired one; the navigator therefore already belongs more along the lines of an integrated change tracking text editor environment.
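
To make the replay idea concrete, here is a minimal sketch; the ChangeInstruction type and its fields are hypothetical stand-ins, not the actual format that change_instructions_executor_1 reads.

    import java.util.ArrayList;
    import java.util.List;

    public class ChangeInstructionsReplayer {
        // Hypothetical instruction record: delete deleteLength characters at
        // position, then insert insertText at the same position.
        public static class ChangeInstruction {
            public final int position;
            public final int deleteLength;
            public final String insertText;

            public ChangeInstruction(int position, int deleteLength, String insertText) {
                this.position = position;
                this.deleteLength = deleteLength;
                this.insertText = insertText;
            }
        }

        // Reconstructs the text as it was after the first `version` instructions,
        // which is what a navigator would call while stepping back and forth.
        public static String reconstruct(List<ChangeInstruction> instructions, int version) {
            StringBuilder text = new StringBuilder();
            for (int i = 0; i < version && i < instructions.size(); i++) {
                ChangeInstruction instruction = instructions.get(i);
                text.delete(instruction.position, instruction.position + instruction.deleteLength);
                text.insert(instruction.position, instruction.insertText);
            }
            return text.toString();
        }

        public static void main(String[] args) {
            List<ChangeInstruction> instructions = new ArrayList<>();
            instructions.add(new ChangeInstruction(0, 0, "Hello"));
            instructions.add(new ChangeInstruction(5, 0, " world"));
            instructions.add(new ChangeInstruction(5, 6, ""));   // delete " world" again

            System.out.println(reconstruct(instructions, 2));    // "Hello world"
            System.out.println(reconstruct(instructions, 3));    // "Hello"
        }
    }

A navigator GUI would additionally cache reconstructed versions for the session instead of replaying from the first instruction every time, which is exactly the statefulness described above.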

During the last few updates of Trisquel 8.0, one of the 100% libre-free GNU/Linux distros and therefore my reference system, I must have obtained an update to the Nashorn engine (available via the jjs command if you have Java installed), which is pretty late in comparison to the rest of the world, but that’s the time it takes to legally clear and package it for the distro. Java and its GUI components contain browser controls of course, and those need to come equipped with JavaScript nowadays, so it makes sense to implement the JavaScript interpreter as a generic component to be shared and maintained only once instead of multiple times, and this interpreter can eventually be made available to all of Java without the need of having a browser control around. An earlier version was already present, but with the update, ECMAScript 6 became available, where previously the attempt to use it resulted in an exception, so only ECMAScript 5 could be used. Why do I need ES6? Because it introduces the let keyword that allows me to evade the treacherous variable hoisting of JavaScript, which had cost me quite some debugging time while implementing the HyperGlossary prototype, so I never want to work with any JavaScript below ES6 again. Now, with the call jjs --language=es6, you’re set to go.
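
The same language level can also be requested from within Java instead of via the jjs launcher; a minimal sketch, assuming a Nashorn build that accepts the --language flag (on the older engine, the let statement below would throw a ScriptException).

    import javax.script.ScriptEngine;
    import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

    public class Es6Check {
        public static void main(String[] args) throws Exception {
            // Mirrors what `jjs --language=es6` does on the command line.
            ScriptEngine engine = new NashornScriptEngineFactory()
                    .getScriptEngine("--language=es6");
            engine.eval("let greeting = 'block-scoped, no hoisting'; print(greeting);");
        }
    }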

What this promises: a world in which JavaScript components can be written once and then be executed traditionally in the browser, on the server with node.js and on the client as a native app (not dependent on a server, not limited by the CORS nonsense etc.). To make this a reality, environment-specific calls need to be abstracted away behind a platform-independent interface standard, because JJS does not have the DOM or the window object, node.js doesn’t have alert() and so on, and especially where the DOM is (ab)used to build a user interface, Java GUI libraries should be called analogously in JJS. For every execution target, the right implementation would be packaged, but the target-independent JavaScript components could run on all of them without change, so they only need to be maintained once, allowing for code reuse because the wheel doesn’t need to be reinvented for the three target execution environments. As one can assume that people will have Java installed on their computers (for LibreOffice), a separate bridge from JavaScript to the local operating system wouldn’t be needed any more, as it traditionally was the case all the years before.
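
A minimal sketch of the abstraction idea on the Java/JJS side; the Platform interface and the binding name “platform” are made up for this sketch, and the browser and node.js targets would have to provide their own implementations behind the same interface.

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class PlatformBridge {
        // Hypothetical platform-independent interface: the JavaScript component only
        // talks to this, never to alert() (browser) or console.log() (node.js) directly.
        public interface Platform {
            void notify(String message);
        }

        public static void main(String[] args) throws Exception {
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            // The Java/JJS target backs the interface with standard output; the browser
            // target would back it with window.alert(), node.js with console.log().
            Platform javaPlatform = message -> System.out.println(message);
            engine.put("platform", javaPlatform);

            // The component itself stays target-independent:
            engine.eval("platform.notify('hello from a reusable JavaScript component');");
        }
    }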

The downsides of this concept are that the interface standard doesn’t exist yet and we would have to design it ourselves, which is less of a problem for headless workflows in terminal mode, but could reveal unresolvable conflicts when it comes to GUI applications. Furthermore, it was announced that Nashorn will be replaced by something else, which shouldn’t matter that much as the interfaces should stay the same. More worrying is the indication that the OpenJDK team finds it hard to maintain Nashorn as JavaScript continues to change all the time (it’s a hell of a language) and has proposed to abandon it again. GraalVM might come to the rescue, but it can take ages until such developments become available on my end as reliable infrastructure that doesn’t go away soon and therefore can be built upon. The risk of discontinuation means that investing into JavaScript components might turn out to be a dead end, repeating the fate of the HyperScope project, just as browser people like to screw everybody else up and not do the right thing. For this reason, only small-scale experimentation around this is on my agenda, not doing anything serious with the web/browser stack in the foreseeable future, even though I really would like to. So I’ll have to stay with native development, not benefitting from the decent rendering capabilities of the browser, but what else are you gonna do, there aren’t really other options.

Having completed reading Matthew Kirschenbaum’s “Track Changes”, I need to add the key findings to the blog article as soon as time permits.

I also wonder if I should try to translate as many English articles into German as possible, so they can serve as an invitation to the topic of hypertext for a German-speaking audience. Sure, we’re the land of printing, but I don’t see why this should limit the interest in hypertext, quite the contrary.

Writing reports like this slows down technical work quite significantly, but I hope it helps to get discussions on the various topics going while offering a chance to align my activities with what’s generally needed (in terms of priorities and avoiding duplicate work), eventually piling up into a body of material on which the new capabilities and methodologies can be used.

Some countries have restrictions on what one can call an institute. In Germany, there are no general limitations, but as institutes may be departments of a university or, as independent entities, join cooperations with universities, courts have ruled several times that other organizations that wanted to create the impression that they’re somehow associated with a university in order to appear scientific and neutral are not allowed to call themselves an institute. Does the FTI need to be “scientific” at all, be it to be allowed to keep “Institute” in the name or because of the kind of work it is planned to be engaged in? If the name needs to change later just to avoid legal trouble, I guess I would have to change my use of it in these articles. For some time however, we might need to follow the convention that published material can never change, so if we end up in this dilemma, we might be forced to do some heavy lifting to introduce upgradability.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Personal Hypertext Report #1

On 2018-07-24, I saw an announcement by Frode Hegland that he had recently founded the “Future of Text Institute”, which I “joined” subsequently. Since I got more interested in hypertext, I have been unable to find somebody else who wants to collaborate on text-oriented projects, even though a lot of people claim to feel the need that something should be done for text, maybe because such demands seem to be fashionable these days. Of the few who are actually active in the field, most are busy following their own, largely incompatible approach, as do I. On the other hand, there’s no way I can figure it all out by myself, and with only a few hours of spare time at my disposal, it wouldn’t be very wise to come up with a big, monolithic design.

The Future of Text Institute offers the chance to be developed into a loose federation of text enthusiasts, housing a collection of diverse tools, projects and resources that aim to cover each and every activity one might potentially want to perform with text. Parts of it could slowly converge on the level of compatibility towards a more coherent system or framework, so they don’t exist in isolation but can be used together in more powerful ways. Other efforts like the Doug@50 demo, the Federated Wiki by Ward Cunningham, the Text Engine by Francesc Hervada-Sala, not to mention the myriad of earlier system designs, are subjects of study as well as candidates for cooperation. In this capacity, the Institute supports small and big initiatives that don’t fit into what’s already around.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Incomplete List of Desired Hypertext System Capabilities

  • Change tracking on the character level, which leads to full undo/redo/playback, historical diff instead of heuristic guessing, amazing branching and merging of variants, and servers that only accept modifications to a text as a series of change operations, which might be submitted in full, cleaned up, or as only the effective, optimized changes.
  • EDL-based text format convention. No need to decide on a particular format in advance, as all formats can easily be combined ad-hoc. Text doesn’t end up buried in a particular format that doesn’t support features needed at some later point or can’t be extended except by violating the spec. It also allows overlapping, non-hierarchical structure. Similar to EDLs, xFiles, ZigZag, NOSQL.
  • Abstracting retrieval mechanism powered by connectors. It doesn’t matter whether a resource is local or remote or by which technique it needs to be retrieved; this is great for linking and could get rid of 404s entirely (see the sketch after this list).
  • Reading aids. Track what has been read and organize the consumption, both for the incoming suggestions as well as for the sharable progress. It’ll also prevent the user from reading the same text twice, help to split reading duties among members of a group, and help to categorize and indicate updates.
  • Technical bootstrapping on the information encoding level. Instead of heuristic file type guessing, metadata explicitly declares the encoding and format, so if a corresponding application/tool is registered for the type, it can be invoked automatically, as can automatic conversions between the encountered source format and the required target format.
  • ViewSpecs. Markup specifies a general meaning/semantic/type, so ViewSpecs can be applied onto it. The user is in control of the ViewSpecs, which can be adjusted, shared or come pre-defined with a package. The same goes for EditSpecs.
  • Distributed federation of material. Fork content by others, change it for yourself or publish your branch for others to fork it further or merge. This doesn’t require the original source to cooperate with any of it, but it’s certainly an invitation to do so. Intended to work on every level of granularity, from large raw data blobs to the smallest bits of insertion, deletion or annotation, up to every property of configuration or metadata declaration, each of which might contribute to the composition of a rendering/view.
  • More to come…
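
A minimal sketch of the connector idea referenced in the retrieval bullet above; the interface and class names are made up here, and selecting a connector by URI scheme is just one possible way to do the dispatch.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URI;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class RetrievalDispatcher {
        // The caller only hands over a reference; a registered connector decides
        // how the resource is actually obtained.
        public interface Connector {
            InputStream retrieve(URI reference) throws IOException;
        }

        private final Map<String, Connector> connectors = new HashMap<>();

        public void register(String scheme, Connector connector) {
            connectors.put(scheme, connector);
        }

        public InputStream retrieve(URI reference) throws IOException {
            Connector connector = connectors.get(reference.getScheme());
            if (connector == null) {
                throw new IOException("No connector registered for scheme: " + reference.getScheme());
            }
            return connector.retrieve(reference);
        }

        public static void main(String[] args) throws IOException {
            RetrievalDispatcher dispatcher = new RetrievalDispatcher();
            // A local file connector; an "http" or "wordpress" connector would be
            // registered the same way, hiding where and how the text is stored.
            dispatcher.register("file", reference -> Files.newInputStream(Paths.get(reference)));

            // Placeholder path, to be replaced with an existing local file.
            try (InputStream in = dispatcher.retrieve(URI.create("file:///tmp/example.txt"))) {
                System.out.println("retrieved " + in.available() + " bytes");
            }
        }
    }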

Those capabilities work together and thus form a hypertext environment. It almost certainly requires a capability infrastructure/architecture that standardizes the particular functions and formats, so (different) implementations can be registered to be invoked manually or data-driven. It might be slow, it might be ugly, but I’m convinced that it’ll open entirely new ways for the knowledge worker to interact with text, realizing the full potential of what text on the computer can do and be.

This text was written with the C++/Qt4 implementation of the change tracking text editor in order to test whether the result file is similarly incorrect to the one produced by the Java implementation. The file appeared to be calculated correctly, but was corrupted I/O-wise.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Change Tracking Text Editor

Text starts out with the first press of a key on the keyboard, followed by another and then another. During the composition of a text, the author might develop several variants of an expression and discard some other portions later. After a long series of manipulation operations, the writing tool saves the final result, the final stage of the text. Historically, the whole point of word processing (the methodology) was to produce a perfect document, totally focusing on the effect rather than on the procedure. Furthermore, the predominant paradigm is centered around linear text strings, because that’s what the earlier medium of paper and print dictated and what’s deeply engrained into cultural mentality. Tool design has to reflect this of course, so the writer loses a lot of his work or has to compensate for the shortcomings himself, manually. There are plenty of applications that mimic traditional writing on the computer and make certain operations cheaper, but there is not a single decent environment available that supports digital-native forms of writing.

Video about the Java implementation of the Change Tracking Text Editor.

To be continued…

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

HyperCard

HyperCard (1, 2) seems to have an origin similar to the famous rowboat experience (0:29). The implemented techniques remind me a lot of one of my own projects that assembled transparent images into a scene with the help of the gdlib in PHP, complemented by corresponding image maps to make areas of the final composition clickable for navigation. Everything is driven by records from a database, which define the presence of objects based on a time range, plus some caching for optimization. To make the creation of such virtual worlds easier, I soon added an editor (1, 2).

Today, browsers come with the HTML5 canvas, CSS3 and SVG. The HTML5 canvas is the generic drawing area control that finally got standardized. Similar to typical game programming, objects get drawn over each other and the result is displayed on the screen, usually at a frame rate not below 30 frames per second to make animations appear smooth to the human eye. In contrast, SVG’s primary way of graphics manipulation is a tree of objects, leaving it to the SVG engine to figure out which portions are visible and therefore need to be drawn/updated, as opposed to portions that are completely covered by other objects. Each object can register its own click event handler and the engine will automatically call the right one, without the need to do manual bounding box calculations as with the HTML5 canvas. CSS3 should be similar to SVG in that regard, and with the recent extension of transformations, it might be worth a look. Unfortunately, SVG is said to be bad for text handling. Anyway, the browser would make a pretty decent player for a modern HyperCard reincarnation, but as an authoring environment – who knows if window.localStorage is big enough to hold all the data, including many or large images, and if the sandbox can be escaped with a download or “save as” hack, because it’s probably not a good idea to require an internet connection all the time or to persist stacks on the server, while they should be standalone units that can be sent around. EPUB may help with that, not to run JavaScript on e-ink devices, but to package the different resources together for distribution. The recipient would simply extract the contents again and open them as local websites in the browser, or a dedicated e-reader software would take care of that.

The hardware back in the day granted HyperCard some advantages we can’t make use of any more. With the fixed screen size of the Macintosh, the card dimensions never had to change. In our time, the use of vector graphics would avoid scaling issues as long as the aspect ratio of the screen remains the same. If the underlying cards constitute a navigable, continuous space similar to the top-level coordinate system of my “virtual world” project, the screen dimensions could just become a viewport. Still, what’s the solution for a landscape card rendered on a portrait screen? Scrolling? Should stacks specifically be prepared for a certain screen resolution only? I’m not convinced yet. At least text tends to be reflowable, so systems for text don’t run into this problem too much.

This is where HyperCard differs from an Open Hyperdocument System: the former is much more visual and less concerned about text manipulation anyway. HyperCard wasn’t that much networked either, so the collaborative creation of stacks could be introduced to start a federated application builder tool. Imagine some kind of RAD-centric ecosystem that offers ready-made libre-licensed GUI control toolkits (14:20) to be imported/referenced from checked/signed software repositories, with a way to share libre-licensed stacks via an app store, enabling Bret Victor’s vision on a large scale, eventually getting some of Jeff Rulifson’s bootstrapping and Adam Cheyer’s Collaborama in. With the stupid artificial restrictions of the browser, the authoring tool could be a separate native application that generates the needed files for the web and could target other execution targets as well, unless it turns out to be crucial for the usage paradigm that reading and authoring happen in the same “space” (6:20), in order for the project not to head in the wrong direction. Maybe it’s not too bad to separate the two as long as the generated stack files can still be investigated and imported back into the editor again manually; on the other hand, if the builder tool supports several execution targets and an “export” irreversibly decides on a particular scripting language, then we would end up with a project source as the primary form to do modifications in, with users receiving something else less useful.

Another major difference between an Open Hyperdocument System and HyperCard would be that I expect an OHS to implement a standardized specification of capabilities that provide an infrastructure for text applications to rely on, while HyperCard would focus on custom code that comes along with each individual self-sufficient stack package. HyperCard could follow a much more flexible code snippet sharing approach, as there’s no need to stay interoperable with every other stack out there, which instead is certainly an important requirement for an OHS. So now, with no advances for text in 2018 to improve my reading, writing and publishing, I guess I should work on an OHS first, even though building a modern HyperCard clone would be fun, useful and not too difficult with the technology that’s currently at our disposal.

Bill Atkinson claims some “Open Sourcyness” (6:40) for HyperCard, but it’s a typical example of “Open Source” that doesn’t understand or care about the control proprietary software developers seek to exercise (8:00) in order to exploit it against the interests of the user, community and public. What does it help if the authoring/reading tool is freeware (10:37) and encourages creators to produce content that happens to be “open” by nature (interpreted and not compiled to a binary executable), but then denies them actual ownership of the tools as their means of production, therefore not really empowering them? Sure, the package seems to be gratis, but that’s only if people buy into the Apple world, which would trap them and their content in a tightly controlled vendor lock-in. Libre-freely licensed software is owned by the user/community/public and can’t be taken away from them, but HyperCard, as freeware at first and then split into proprietary packages, was prevented from becoming what the WWW is today. Apple’s decision to discontinue this product killed it entirely, because neither users nor community were allowed/enabled to keep it running for themselves. The fate of Bill Atkinson’s HyperCard was pretty much the same as that of Donn Denman’s MacBASIC, with the only difference being that it happened to HyperCard somewhat later, when there were already plenty of naive adopters to be affected by it. Society and civilization at large can’t allow their basic infrastructure to be under the control of a single company, and if users build on top of proprietary dependencies, they have to be prepared to lose all of their effort again very easily, which is exactly what happened to HyperCard (45:05). Similarly, would you be comfortable entrusting your stacks to these punks? Bill Atkinson probably couldn’t know better at the time, the libre software movement still being in its infancy. It could be that the only apparent barrier to adoption seemed to be a price, because that would exclude those who need it the most, and if we learn from Johannes Gutenberg, Linus Torvalds or Tim Berners-Lee, there’s really no way of charging for a cultural technique, or otherwise it simply won’t become one. And over “free beer”, it’s very easy to miss the other important distinction between real technology and mere products, one for the benefit of humanity and the other for the benefit of a few stakeholders: every participant must be empowered technically as well as legally to be in control of their own merit. One has to wonder how this should be the case for iBooks Author (1:08:16) or anything else from Apple.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Federated Wiki

The wiki embodies Ward Cunningham’s approach to solving complex, urgent problems (analogous to Doug Engelbart’s conceptual framework). With Wikipedia adopting his technology, I think it’s fair to say that he achieved a great deal of it (1, 2), himself having been inspired by HyperCard. In contrast to the popular MediaWiki software package, Ward’s most recent wiki implementation is decentralized, in acknowledgement of the sovereignty, independence and autonomy of participants on the network. Contributions to a collective effort and the many different perspectives need to be federated of course, hence the name “Federated Wiki”.

Ward’s Federated Wiki concept offers quite some unrealized potential when it comes to an Open Hyperdocument System, and Ward is fully aware of it, which in itself is a testament to his deep insights. The hypertext aspects don’t get mentioned too often in his talks, and why should they; work on (linked) data is equally important. Ward has some ideas for what we would call ViewSpecs (39:15), revision history (41:30) – although more thinking could go into this – federating (43:36), capability infrastructure (44:44) and the necessary libre-free licensing not only of the content, but also of the corresponding software (45:46-47:39). Beyond the Federated Wiki project, it might be beneficial to closely study other wiki proposals too.

I guess it becomes pretty apparent that I need to start my own Federated Wiki instance, as joining a wiki farm is probably not enough. I hate blogging because of the way the text is stored and manipulated, but I keep doing it for now until I get a better system set up, and because WordPress is libre-freely licensed software as well as providing APIs I can work with, so at some point in the future I’ll just export all of the posts and convert them to whatever the new platform/infrastructure will be. On the blog, capabilities like allowing others to correct/extend my texts directly are missing, similar to distributed source code management as popularized by git/GitHub (forking, pull requests). For now, I do some small experimentation on skreutzer.tries.fed.wiki.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Track Changes (Book by Matthew Kirschenbaum)

I’m currently reading “Track Changes – A Literary History of Word Processing” by Matthew G. Kirschenbaum (1, 2, 3, 4, 5), which is about an interesting period of time in which computers weren’t powerful enough to expand into the mess we’re in today and therefore were limited to basic text manipulation only. For my research of text and hypertext systems, I usually don’t look too much at retro computing, because I can’t get those machines and their software any more in order to do my own reading, writing and publishing with them, but it gets relevant again where those artifacts provided certain mechanisms, functions and approaches, because those, why not, should be transferred/translated into our modern computing world so we can enjoy them again and extend them beyond their original conception and implementation. My particular question towards the book has to do with my still unsuccessful attempts to build a change tracking text editor, and the title of the book, referring to the “track changes” feature of Microsoft Word, leaves me wondering if there is or was a writing environment that implemented change tracking the right way. I’m not aware of a single one, but there must be one out there I guess; it’s too trivial not to have come into existence yet.

After completing the read, the hypertext-relevant findings are: over time, the term “word processor” referred to a dedicated hardware device for writing (not necessarily a general-purpose computer), then to a person in an office who would perform writing tasks, then “word processing” to an organizational methodology for the office (probably the “office automation” Doug Engelbart was not in favor of), then to a set of capabilities to “process text-oriented data” analogous to data processing, and finally, as we know it today, almost exclusively to a category of software applications. The latter led to a huge loss of the earlier text-oriented capabilities, which are pretty rare in modern word processor applications, as they’re primarily concerned with the separate activity of typesetting for print (in the WYSIWYG way). The earlier word processors were limited to just letters on the screen because there wasn’t a graphical user interface yet, so they offered interesting schemes for text manipulation that have since been forgotten. The book doesn’t discuss those in great detail, but at least indicates their existence, so further study can be conducted.

Kirschenbaum once again confirms the outrageously bad practices regarding the way “publishers” deal with “their” texts and how authors and editors “collaborate” together. The book is more of a report on the historical development and the status quo, so don’t expect suggestions for improvement or a grand vision about how we might get text to work better in the future.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.