Personal Hypertext Report #2

After several months, I finally reached the chapter in “Introducing JavaFX 8 Programming” by Herbert Schildt that addresses the TextArea of the JavaFX GUI library, keyboard and mouse input events having already been covered. With this, on a productive weekend, I guess I can come up with another prototype of the Change Tracking Text Editor, making it the third attempt. In parallel, I realized that it would be helpful for the previous two failed attempts to build a graphical “Change Instructions Navigator”, so it becomes a whole lot easier to reconstruct the individual versions and hunt down the spots where the recording actually goes wrong. I doubt that this will be key to finding the solution, for example by revealing a programming error occurring in a rare constellation, if the problem is related to timing, but it might help to arrive at a clearer understanding of what the issue is. The tool will also be useful in general, as the change instruction recordings need to be replayed and navigated in a granular, selective way, not only reconstructing the last version as the change_instructions_executor_1 does. It’s clear that the capability to reconstruct every single version (not only on the instruction sequence level, but additionally for every individual character event) should be made available to headless workflows in terminal mode and be made programmable for automation. The navigator as a GUI component, however, has the objective to hold state for the session and ensure that version navigation is performed fast, so it might be too expensive to call a stateless workflow that has to reconstruct all versions from the beginning up to the desired one; the navigator therefore belongs already more along the lines of an integrated change tracking text editor environment.
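To make the replay idea concrete, here is a minimal sketch of how a change instruction recording could be reconstructed into any intermediate version. The instruction format and the names (applyInstruction, reconstruct) are hypothetical and not the format my prototypes actually use; it only illustrates the principle of replaying per-character operations from the empty document.

```javascript
// Hypothetical minimal change instruction format: each entry inserts or
// deletes a single character at a position, so every keystroke is recorded.
function applyInstruction(text, instr) {
  if (instr.op === "insert") {
    return text.slice(0, instr.pos) + instr.ch + text.slice(instr.pos);
  }
  if (instr.op === "delete") {
    return text.slice(0, instr.pos) + text.slice(instr.pos + 1);
  }
  throw new Error("unknown op: " + instr.op);
}

// Reconstruct the version after the first n instructions by replaying
// from the empty document - the stateless way a headless workflow would
// do it, and the expensive path a stateful navigator would avoid.
function reconstruct(instructions, n) {
  let text = "";
  for (let i = 0; i < n; i++) {
    text = applyInstruction(text, instructions[i]);
  }
  return text;
}

const recording = [
  { op: "insert", pos: 0, ch: "H" },
  { op: "insert", pos: 1, ch: "i" },
  { op: "insert", pos: 2, ch: "!" },
  { op: "delete", pos: 2 },
];
console.log(reconstruct(recording, 3)); // "Hi!"
console.log(reconstruct(recording, 4)); // "Hi"
```

A navigator holding session state could cache intermediate versions instead of replaying from the start every time, which is exactly why it leans towards an integrated environment rather than a stateless workflow.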

During the last few updates of Trisquel 8.0, one of the 100% libre-free GNU/Linux distros and therefore my reference system, I must have obtained an update to the Nashorn engine (available via the jjs command if you have Java installed), which is pretty late in comparison to the rest of the world, but that’s the time it takes to legally clear and package it for the distro. Java and its GUI components contain browser controls of course, and those need to come equipped with JavaScript nowadays, so it makes sense to implement the JavaScript interpreter as a generic component to be shared and maintained only once instead of multiple times; this interpreter eventually can be made available to all of Java without the need of having a browser control around. I had an earlier version present already, but with the update, ECMAScript 6 became available, where previously the attempt to use it resulted in an exception, so only ECMAScript 5 was available. Why do I need ES6? Because it introduces the let keyword that allows me to evade the treacherous variable hoisting of JavaScript, which had cost me quite some debugging time while implementing the HyperGlossary prototype, so I never want to work with any JavaScript below ES6 again. Now, with the call jjs --language=es6, you’re set to go.
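For those who haven’t run into the hoisting trap themselves, here is the classic demonstration of what let buys you (it runs under jjs --language=es6 as well as node.js):

```javascript
// With var, the declaration is hoisted to function scope, so there is
// only one i shared by all iterations and every callback sees its final
// value after the loop has ended.
function withVar() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns.map(function (f) { return f(); });
}

// With ES6 let, each loop iteration gets its own binding of i,
// so every callback captures the value it saw at push time.
function withLet() {
  let fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns.map(function (f) { return f(); });
}

console.log(withVar()); // [3, 3, 3]
console.log(withLet()); // [0, 1, 2]
```

Debugging the first variant when you expected the second is exactly the kind of time sink that made me swear off pre-ES6 JavaScript.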

What this promises: a world in which JavaScript components can be written once and then be executed traditionally in the browser, on the server with node.js, and on the client as a native app (not dependent on a server, not limited by the CORS nonsense etc.). To make this a reality, environment-specific calls need to be abstracted away behind a platform-independent interface standard, because JJS does not have the DOM or window object, node.js doesn’t have alert() and so on, and especially where the DOM is (ab)used to build a user interface, Java GUI libraries should be called analogously in JJS. For every execution target, the right implementation would be packaged, but the target-independent JavaScript components could run on all of them without change, so they only need to be maintained once, allowing for code reuse because the wheel doesn’t need to be reinvented for the three target execution environments. As one can assume that people will have Java installed on their computers (for LibreOffice), a separate bridge from JavaScript to the local operating system wouldn’t be needed any more, as it traditionally was the case all the years before.
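A tiny sketch of what such an abstraction could look like, under the caveat that no such interface standard exists yet; detectEnvironment, notify and the registry shape are all my hypothetical invention, picking notification as the simplest environment-specific call:

```javascript
// Guess the execution target from globals each environment provides.
function detectEnvironment() {
  if (typeof window !== "undefined" && typeof window.alert === "function") {
    return "browser";
  }
  if (typeof process !== "undefined" && process.versions && process.versions.node) {
    return "node";
  }
  if (typeof Java !== "undefined") {
    return "nashorn"; // jjs exposes the Java object for JDK interop
  }
  return "unknown";
}

// One implementation per target, selected once at startup. Portable
// components only ever call notify() and never touch the environment.
var implementations = {
  browser: function (msg) { window.alert(msg); },
  node:    function (msg) { console.log(msg); },
  nashorn: function (msg) { java.lang.System.out.println(msg); },
  unknown: function (msg) { throw new Error("no notify backend: " + msg); },
};

var notify = implementations[detectEnvironment()];
notify("Hello from a portable component!");
```

In a real standard the detection would of course be replaced by explicit packaging of the right implementation per target, as described above; the point is only that the portable component’s code stays identical everywhere.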

The downsides of this concept are that the interface standard doesn’t exist yet and we would have to design it, which is less of a problem for headless workflows in terminal mode, but could reveal unresolvable conflicts when it comes to GUI applications. Furthermore, it was announced that Nashorn will be replaced by something else, which shouldn’t matter that much, as the interfaces should stay the same. More worrying is the indication that the OpenJDK team finds it hard to maintain Nashorn as JavaScript continues to change all the time (it’s a hell of a language), and proposed to abandon it again. GraalVM might come to the rescue, but it can take ages until such developments become available on my end as reliable infrastructure that doesn’t go away soon and therefore can be built upon. The risk of discontinuation means that investing into JavaScript components might turn out to be a dead end, repeating the fate of the HyperScope project, just as browser people like to screw everybody else up and not do the right thing. For this reason, only small-scale experimentation around this is on my agenda, not doing anything serious with the web/browser stack in the foreseeable future, although I really would like to. So I’ll have to stay with native development, not benefiting from the decent rendering capabilities of the browser, but what else are you gonna do, there aren’t other options really.

As I completed reading Matthew Kirschenbaum’s “Track Changes”, I need to add the key findings to the blog article as soon as time permits.

I also wonder if I should try to translate as many English articles into German as possible, so they can serve as an invitation to the topic of hypertext for a German-speaking audience. Sure, we’re the land of printing, but I don’t see why this should limit the interest in hypertext, to the contrary.

Writing reports like this slows down technical work quite significantly, but I hope it helps to get discussions on the various topics going while offering a chance to align my activities with what’s generally needed (in terms of priorities and avoiding duplicate work), eventually piling up to a body of material on which the new capabilities and methodologies can be used.

Some countries have restrictions on what one can call an institute. In Germany, there are no general limitations, but as institutes may be departments of a university or, as independent entities, join cooperations with universities, courts have ruled several times that other organizations that wanted to create the impression that they’re somehow associated with a university in order to appear scientific and neutral are not allowed to call themselves an institute. Does the FTI need to be “scientific” at all, be it to be allowed to keep “Institute” in the name or because of the kind of work it is planned to be engaged in? If the name needs to change later just to avoid legal trouble, I guess I would have to change my use of it in these articles. For some time however, we might need to follow the convention that published material can never change, so if we end up in this dilemma, we might be forced to do some heavy lifting to introduce upgradability.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Personal Hypertext Report #1

On 2018-07-24, I saw an announcement by Frode Hegland that he had recently founded the “Future of Text Institute”, which I “joined” subsequently. Since I got more interested in hypertext, I have been unable to find somebody else who wants to collaborate on text-oriented projects, even though a lot of people claim to feel the need that something should be done for text, maybe because such demands seem to be fashionable these days. Of the few who are actually active in the field, most are busy following their own, largely incompatible approach, as I do too. On the other hand, there’s no way I can figure it out all by myself, and with only a few hours of spare time at my disposal, it wouldn’t be very wise to come up with a big, monolithic design.

The Future of Text Institute offers the chance to be developed into a loose federation of text enthusiasts, housing a collection of diverse tools, projects and resources that aim to cover each and every activity one potentially might want to perform with text. Parts of it could slowly converge on the level of compatibility towards a more coherent system or framework, so they don’t exist in isolation but can be used together in more powerful ways. Other efforts like the Doug@50 demo, the Federated Wiki by Ward Cunningham, the Text Engine by Francesc Hervada-Sala, not to mention the myriad of earlier system designs, are subjects of study as well as candidates for cooperation. In this capacity, the Institute supports small and big initiatives that don’t fit into what’s already around.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Intro – Why this blog?

This text in German.

Because I’m a Christian believer, I went to an independent church/congregation for quite some time. One day, the congregation invited Volker Kauder to speak about the persecution of Christians. Volker Kauder is the leader of the ruling parties CDU/CSU in the Bundestag, the German parliament. He reports on the situation of persecuted Christians and his efforts to help them because he and his party get many votes from people who attend a church/congregation.

When Volker Kauder doesn’t speak about the persecution of Christians in order to please his voters, he wants people in Iraq to be killed – which is illegal and prohibited. He says IS/Daesh started in Syria – which is not correct. He appreciates the US waging war. Volker Kauder is in favor of war and weapons because the CDU in his electoral district receives money from Heckler&Koch. Because the CDU receives money from Heckler&Koch, Volker Kauder in return tries to influence politics so that the German military buys weapons from Heckler&Koch. Volker Kauder doesn’t mind if those weapons don’t work. Volker Kauder doesn’t care that the weapons of Heckler&Koch are produced, used and sold in Somalia, Sudan, Pakistan, Libya, Iraq, Iran, India, Saudi Arabia, Nigeria, Egypt, Vietnam, Jordan, Malaysia, Myanmar, Brunei, Qatar, Kazakhstan, Ethiopia, Turkey, Kenya, Kuwait, Indonesia, Mexico, United Arab Emirates, Bangladesh, Sri Lanka, Mauritania, Bahrain, Colombia and Djibouti [1, 2, 3] without control. There’s a war going on in those countries, a lot of crime, or the persecution of Christians.

None of this is compatible with Jesus Christ. I don’t attend my former church/congregation any more because a church/congregation should never be involved in politics. A Christian church/congregation is not supposed to promote Volker Kauder or the CDU/CSU. Now I help refugees because Volker Kauder doesn’t help them; he didn’t and doesn’t object to the destruction of the countries they fled from.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Incomplete List of Desired Hypertext System Capabilities

  • Change tracking on the character level, which leads to full undo/redo/playback, historical diff instead of heuristic guessing, amazing branching and merging of variants, and servers that only accept modifications to a text as a series of change operations, which might be submitted in full, cleaned up, or as only the effective, optimized changes.
  • EDL-based text format convention. No need to decide on a particular format in advance, as all formats can be easily combined ad hoc. Text doesn’t end up buried in a particular format that doesn’t support features needed at some later point or can’t be extended except by violating the spec. It also allows overlapping, non-hierarchical structure. Similar to EDLs, xFiles, ZigZag, NoSQL.
  • Abstracting retrieval mechanism powered by connectors. Doesn’t matter if a resource is local or remote, by which technique it needs to be retrieved, great for linking and could get rid of 404s entirely.
  • Reading aids. Track what has been read and organize the consumption for both the incoming suggestions as well as the sharable progress. It’ll also prevent the user from reading the same text twice, help to split reading duties among members of a group, to categorize and indicate updates.
  • Technical bootstrapping on the information encoding level. Instead of heuristic file type guessing, metadata explicitly declares the encoding and format, so if a corresponding application/tool is registered for the type, it can be invoked automatically, and automatic conversions between the encountered source format and the required target format become possible.
  • ViewSpecs. Markup specifies a general meaning/semantic/type, so ViewSpecs can be applied onto it. The user is in control of the ViewSpecs, which can be adjusted, shared or come pre-defined with a package. Same goes for EditSpecs.
  • Distributed federation of material. Fork content by others, change it for yourself or publish your branch for others to fork it further or merge. This doesn’t require the original source to cooperate with any of it, but it’s certainly an invitation to do so. Intended to work on every level of granularity, from large raw data blobs to the smallest bits of insertion, deletion or annotation, up to every property of configuration or metadata declaration, each of which might contribute to the composition of a rendering/view.
  • More to come…
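To give one of the capabilities above a concrete shape: the connector-powered retrieval abstraction could look roughly like this minimal sketch. All names (registerConnector, retrieve) and the scheme-to-connector mapping are hypothetical illustrations, not an existing design.

```javascript
// Registry mapping reference schemes to retrieval implementations, so
// callers never care whether a resource is local, remote, or reached
// by some exotic protocol.
var connectors = {};

function registerConnector(scheme, fetchFn) {
  connectors[scheme] = fetchFn;
}

function retrieve(reference) {
  var scheme = reference.split(":")[0];
  var connector = connectors[scheme];
  if (!connector) {
    throw new Error("no connector registered for scheme: " + scheme);
  }
  return connector(reference);
}

// Two toy connectors; real ones would wrap file I/O, HTTP and so on,
// and could transparently fall back to mirrors or archives to avoid 404s.
registerConnector("mem", function (ref) {
  var store = { "mem:greeting": "hello" };
  return store[ref];
});
registerConnector("upper", function (ref) {
  return ref.slice("upper:".length).toUpperCase();
});

console.log(retrieve("mem:greeting")); // "hello"
console.log(retrieve("upper:text"));   // "TEXT"
```

The point is that links would target abstract references, and the registry, not the link author, decides how the bytes actually arrive.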

Those capabilities work together and thus form a hypertext environment. It almost certainly requires a capability infrastructure/architecture that standardizes the particular functions and formats, so (different) implementations can be registered to be invoked manually or data-driven. It might be slow, it might be ugly, but I’m convinced that it’ll open entirely new ways for the knowledge worker to interact with text, realizing the full potential of what text on the computer can do and be.

This text was written with the C++/Qt4 implementation of the change tracking text editor in order to test if the result file is similarly incorrect as the one produced by the Java implementation. The file appeared to be calculated correctly, but corrupted I/O-wise.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Change Tracking Text Editor

Text starts out with the first press of a button on the keyboard, followed by another and then another. During the composition of a text, the author might develop several variants of an expression and discard some other portions later. After a long series of manipulation operations, the writing tool saves the final result, the final stage of the text. Historically, the whole point of word processing (the methodology) was to produce a perfect document, totally focusing on the effect rather than on the procedure. Furthermore, the predominant paradigm is centered around linear text strings because that’s what the earlier medium of paper and print dictated, which is deeply ingrained into cultural mentality. Tool design has to reflect this of course; therefore the writer loses a lot of his work or has to compensate for the shortcomings himself, manually. There are plenty of applications that mimic traditional writing on the computer and make certain operations cheaper, but there is not a single decent environment available that supports digital-native forms of writing.

Video about the Java implementation of the Change Tracking Text Editor.

To be continued…

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.


HyperCard
HyperCard (1, 2) seems to have an origin similar to the famous rowboat experience (0:29). The implemented techniques remind me a lot of one of my own projects that assembled transparent images into a scene with the help of the gdlib in PHP, complemented by corresponding image maps to make areas of the final composition clickable for navigation. Everything is driven by records from a database, which define the presence of objects based on a time range, plus some caching for optimization. To make the creation of such virtual worlds easier, I soon added an editor (1, 2).

Today, browsers come with HTML5 canvas, CSS3 and SVG. The HTML5 canvas is the generic drawing area control that finally got standardized. Similar to typical game programming, objects get drawn over each other and the result displayed on the screen, usually at a frame rate not below 30 frames per second to make animations appear smooth to the human eye. In contrast, SVG’s primary way of graphics manipulation is a tree of objects for the SVG engine to figure out which portions are visible and therefore need to be drawn/updated as opposed to portions that are completely covered by other objects. Each object can register its own click event handler and the engine will automatically call the right one without the need to do manual bounding box calculations as with HTML5. CSS3 should be similar to SVG in that regard and with the recent extension of transformations, it might be worth a look. Unfortunately, SVG is said to be bad for text handling. Anyway, the browser would make a pretty decent player for a modern HyperCard reincarnation, but as an authoring environment – who knows if the window.localStorage is big enough to hold all the data including many or large images, and if the sandbox can be escaped with a download or “save as” hack, because it’s probably not a good idea to require an internet connection all the time or to persist stacks on the server while they should be standalone units that can be sent around. EPUB may help with that, not to run JavaScript on e-ink devices, but to package the different resources together for distribution. The recipient would simply extract the contents again and open them as a local website in the browser, or a dedicated e-reader software would take care of that.
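The difference between the two models is easy to see in code. With SVG you attach a handler per object and the engine dispatches clicks for you; with a canvas you keep a display list and do the bounding box work yourself, roughly like this sketch (scene format and names invented for illustration):

```javascript
// Canvas-style manual click dispatch: walk the display list from
// topmost to bottommost and return the first object whose bounding
// box contains the point - the bookkeeping SVG would do for you.
function hitTest(objects, x, y) {
  for (var i = objects.length - 1; i >= 0; i--) {
    var o = objects[i];
    if (x >= o.x && x < o.x + o.w && y >= o.y && y < o.y + o.h) {
      return o;
    }
  }
  return null;
}

// Objects are drawn in array order, so later entries cover earlier ones.
var scene = [
  { id: "background", x: 0,  y: 0,  w: 100, h: 100 },
  { id: "button",     x: 10, y: 10, w: 30,  h: 20 },
];

console.log(hitTest(scene, 15, 15).id); // "button" (topmost wins)
console.log(hitTest(scene, 80, 80).id); // "background"
console.log(hitTest(scene, 200, 200)); // null (outside everything)
```

For anything beyond rectangles (rotated shapes, paths, transparency) this manual approach gets tedious fast, which is the argument for letting an SVG-style scene graph own the event routing.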

The hardware back in the day granted HyperCard some advantages we can’t make use of any more. With the fixed screen size of the Macintosh, the card dimensions never had to change. In our time, the use of vector graphics would avoid scaling issues as long as the aspect ratio of the screen remains the same. If the underlying cards constitute a navigable, continuous space similar to the top-level coordinate system of my “virtual world” project, the screen dimensions could just become a viewport. Still, what’s the solution for a landscape card rendered on a portrait screen? Scrolling? Should stacks specifically be prepared for a certain screen resolution only? I’m not convinced yet. At least text tends to be reflowable, so systems for text don’t run into this problem too much.

This is where HyperCard differs from an Open Hyperdocument System: the former is much more visual and less concerned about text manipulation anyway. HyperCard wasn’t that much networked either, so the collaborative creation of stacks could be introduced to start a federated application builder tool. Imagine some kind of a RAD-centric ecosystem that offers ready-made libre-licensed GUI control toolkits (14:20) to be imported/referenced from checked/signed software repositories, with a way to share libre-licensed stacks via an app store, enabling Bret Victor’s vision on a large scale, eventually getting some of Jeff Rulifson’s bootstrapping and Adam Cheyer’s Collaborama in. With the stupid artificial restrictions of the browser, the authoring tool could be a separate native application that generates the needed files for the web and could target other execution environments as well, unless it turns out to be crucial for the usage paradigm that reading and authoring must happen in the same “space” (6:20) for the project to not head into a wrong direction. Maybe it’s not too bad to separate the two as long as the generated stack files can still be investigated and imported back into the editor again manually; on the other hand, if the builder tool supports several execution targets and an “export” irreversibly decides for a particular scripting language, then we would end up with a project source as the primary form for one to do modifications in, and users receiving something else less useful.

Another major difference between an Open Hyperdocument System and HyperCard would be that I expect an OHS to implement a standardized specification of capabilities that provide an infrastructure for text applications to rely on, while HyperCard would focus on custom code that comes along with each individual self-sufficient stack package. HyperCard could follow a much more flexible code snippet sharing approach as there’s no need to stay interoperable with every other stack out there, which instead is certainly an important requirement for an OHS. So now, with no advances for text in 2018 to improve my reading, writing and publishing, I guess I should work on an OHS first, even though building a modern HyperCard clone would be fun, useful and not too difficult with the technology that’s currently at our disposal.

Bill Atkinson claims some “Open Sourcyness” (6:40) for HyperCard, but it’s a typical example of “Open Source” that doesn’t understand or care about the control proprietary software developers seek to exercise (8:00) in order to exploit it against the interests of the user, community and public. What does it help if the authoring/reading tool is freeware (10:37) and encourages creators to produce content that happens to be “open” by nature (interpreted and not compiled to a binary executable), but then denies them actual ownership of the tools as their means of production, therefore not really empowering them? Sure, the package seems to be gratis, but that’s only if people buy into the Apple world, which would trap them and their content in a tightly controlled vendor lock-in. Libre-freely licensed software is owned by the user/community/public and can’t be taken away from them, but HyperCard, as freeware at first and then split into proprietary packages, was prevented from becoming what’s the WWW today. Apple’s decision to discontinue this product killed it entirely because neither users nor community were allowed/enabled to keep it running for themselves. The fate of Bill Atkinson’s HyperCard was pretty much the same as Donn Denman’s MacBASIC, with the only difference that it happened to HyperCard somewhat later, when there were already plenty of naive adopters to be affected by it. Society and civilization at large can’t allow their basic infrastructure to be under the control of a single company, and if users build on top of proprietary dependencies, they have to be prepared to lose all of their effort again very easily, which is exactly what happened to HyperCard (45:05). Similarly, would you be comfortable entrusting your stacks to these punks? Bill Atkinson probably couldn’t know better at the time, the libre software movement was still in its infancy.
It could be that the price seemed to be the only apparent barrier to adoption, because it would exclude those who need it the most, and if we learn from Johannes Gutenberg, Linus Torvalds or Tim Berners-Lee, there’s really no way of charging for a cultural technique, or otherwise it simply won’t become one. And over “free beer”, it’s very easy to miss the other important distinction between real technology and mere products, one for the benefit of humanity and the other for the benefit of a few stakeholders: every participant must be empowered technically as well as legally to be in control of their own merit. One has to wonder how this should be the case for iBooks Author (1:08:16) or anything else from Apple.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Federated Wiki

The wiki embodies Ward Cunningham’s approach to solving complex, urgent problems (analogous to Doug Engelbart’s conceptual framework). With Wikipedia adopting his technology, I think it’s fair to say that he achieved a great deal of it (1, 2), himself having been inspired by HyperCard. In contrast to the popular MediaWiki software package, Ward’s most recent wiki implementation is decentralized in acknowledgement of the sovereignty, independence and autonomy of participants on the network. Contributions to a collective effort and the many different perspectives need to be federated of course, hence the name “Federated Wiki”.

Ward’s Federated Wiki concept offers quite some unrealized potential when it comes to an Open Hyperdocument System, and Ward is fully aware of it, which in itself is testament to his deep insight. The hypertext aspects don’t get mentioned too often in his talks, and why should they, work on (linked) data is equally important. Ward has some ideas for what we would call ViewSpecs (39:15), revision history (41:30) – although more thinking could go into this, federating (43:36), capability infrastructure (44:44) and the necessary libre-free licensing not only of the content, but also the corresponding software (45:46-47:39). Beyond the Federated Wiki project, it might be beneficial to closely study other wiki proposals too.

I guess it becomes pretty apparent that I need to start my own Federated Wiki instance, as joining a wiki farm is probably not enough. I hate blogging because of the way the text is stored and manipulated, but I keep doing it for now until I get a better system set up and because WordPress is libre-freely licensed software as well as providing APIs I can work with, so at some point in the future I’ll just export all of the posts and convert them to whatever the new platform/infrastructure will be. On the blog, capabilities like allowing others to correct/extend my texts directly are missing, similar to distributed source code management as popularized by git/GitHub (forking, pull requests). For now, I do some small experimentation on

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Track Changes (Book by Matthew Kirschenbaum)

I’m currently reading “Track Changes – A Literary History of Word Processing” by Matthew G. Kirschenbaum (1, 2, 3, 4, 5), which is about an interesting period of time in which computers weren’t powerful enough to expand into the mess we’re in today and therefore were limited to basic text manipulation only. For my research of text and hypertext systems, I usually don’t look too much at retro computing because I can’t get those machines and their software any more in order to do my own reading, writing and publishing with them, but it gets relevant again where those artifacts provided certain mechanisms, functions and approaches, because those, why not, should be transferred/translated into our modern computing world so we can enjoy them again and extend them beyond their original conception and implementation. My particular question towards the book has to do with my still unsuccessful attempts to build a change tracking text editor, and the title of the book, referring to the “track changes” feature of Microsoft Word, leaves me wondering if there is or was a writing environment that implemented change tracking the right way. I’m not aware of a single one, but there must be one out there I guess; it seems too trivial not to have come into existence yet.

After completing the read, the hypertext-relevant findings are: over time, the term “word processor” referred to a dedicated hardware device for writing (not necessarily a general-purpose computer), to a person in an office who would perform writing tasks, “word processing” then as an organizational methodology for the office (probably the “office automation” Doug Engelbart was not in favor of), as a set of capabilities to “process text-oriented data” analogous to data processing and finally, as we know it today, almost exclusively as a category of software applications. The latter led to a huge loss of the earlier text-oriented capabilities, which are pretty rare in modern word processor applications as they’re primarily concerned with the separate activity of typesetting for print (in the WYSIWYG way). The earlier word processors were limited to just letters on the screen because there wasn’t a graphical user interface yet, so they offered interesting schemes for text manipulation that have since been forgotten. The book doesn’t discuss those in great detail, but at least indicates their existence, so further study can be conducted.

Kirschenbaum once again confirms the outrageously bad practices regarding the way “publishers” deal with “their” texts and how authors and editors “collaborate” together. The book is more of a report on the historical development and the status quo, so don’t expect suggestions for improvement or a grand vision about how we might get text to work better in the future.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Are You a Robot?

If you want, imagine yourself being a robot, a machine, for a moment. You send impulses to your arms and legs to move your body around, and you get status information back from many, many sensors. Those sensor data streams get processed in your brain (CPU), which has a certain pre-defined configuration, but also a working memory (RAM). Your internal models of the world are in constant adjustment based on the incoming stream data, which constructed those representations in the first place, too. You only execute your own task that has been given to you and make your components operate accordingly towards that goal. There might be defects, there can be errors, but any of these tend to be corrected/compensated by whatever mechanism is available in order to maintain deterministic operation so the expected functions can be executed reliably. Other machines similar to you exist out there and you can network with them, but you don’t necessarily need to, you could just as well stay autonomous most of the time. You hope that you never get surprised by a sudden shortage of electricity; on the other hand, you know that your product lifecycle will expire one day.

With humans being robots, they consume input and produce output, a combination of hardware and software. Limited to their physical casings, following a series of instructions, using their extremities and additional peripherals to interact with and manipulate their environment. Would such a being still qualify as a human? It’s not that this description wouldn’t be applicable to humans at all, but I guess we understand that there’s a difference between biological systems and mechanical/electrical machines. Robots can only simulate the aspects of biological lifeforms as they’re not of the same race or species. As the available sensors, ways to maintain themselves and things they care about inherently differ between both systems, it’s probably impossible for them to arrive at the same sort of intelligence even if both turn out to be somehow intelligent and even if they share the same internal models for representing the potential reality in which they encounter each other.

Machines that pass the Turing test prove that they employ some form of intelligence that cannot be distinguished from a human taking the same test, but the preconditions of the test scenario, in contrast to direct interaction, narrow down on only a few aspects of human intelligence. As it repeatedly needs to be pointed out, the Turing test isn’t designed to verify if the subject is human, it’s designed to prove that some machines might not be distinguishable from a human if performing a task that’s regarded as a sufficiently intellectual effort humans tend to engage in. Jaron Lanier explains that the Turing test accepts the limitations of the machine at the expense of the many other aspects of human intelligence, and that intelligence is always influenced, if not entirely determined, by the physical body of the host system. In daily life, it’s pretty uncommon that humans mistake a machine for a fellow human because there are other methods of checking for that than the one suggested by the Turing test. So how can we believe that artificial intelligences can ever “understand” anything at all, that they will ever care or feel the way we do, that the same representation models will lead to equal inherent meaning, especially considering the constant adjustment of those models as a result of existing in a physical reality? It’s surprising how people seem to be convinced that this will be possible one day, or is it the realization that different types of intelligence don’t need to be exactly the same and still can be useful to us?

In case of the latter, I suggest another Reverse Turing test, with the objective for the machine to judge whether it is interacting with another machine while human participants pretend to be a machine as well. If a human gets positively identified as being a machine, he cannot be denied a certain machine-likeness: an attribute we wouldn’t value much, yet we inconsistently demonstrate great interest in the humanness of machines without asking ourselves what machines, if intelligent, would think of us being in their likeness. We can expect that it shouldn’t be too hard to fool the machine, because machines are constructed by humans to interact with humans, and where they’re not, they can be reverse-engineered (in case reading the handbook/documentation would be considered cheating). Would such a test be of any help in drawing conclusions about intelligence? If not, “intelligence” must be an originary human attribute in the sense that we usually refer to human intelligence exclusively, as opposed to other forms of intelligence. We assume that plants or animals can’t pass the Turing test because they don’t have the same form or body of intelligence as we do, but a machine surely can be built that would give plants and animals a hard time figuring out who or what is at the other end. Machines haven’t set themselves up to perform a Reverse Turing test on plants, animals and humans in order to find out whether those systems are like them, and why would they? At which point we can discard any claims that their intelligence is comparable to ours.

Intelligence, where bound to a physical host system, must sustain itself or otherwise cease to exist, which is usually done by interacting with the environment comprised of other systems and non-systems. Interaction can only happen via an interface between internal representation and external world, and if two systems interact with each other (the world in between) by using only a single interface of theirs without a second channel, they may indeed recognize their counterpart as being intelligent as long as the interaction makes sense to them. If additional channels are used, the other side must interact on those intelligently as well, otherwise the differences would become apparent. An intelligent system artificially limiting its use of interfaces just to conduct a Turing test on a subject in the hope of passing it as equally “intelligent” while all the other interface channels would suggest significant differences: that’s the human becoming a machine so the differences can’t be observed any longer. With interfaces providing input to be compared to internal models in order to adjust them, we as humans regard only those interactions as meaningful/intelligent that make sense according to our own current state of models. We don’t think of plants and animals as being equivalently intelligent as we are, but some interactions with them appear reasonable to us and they seem to interact with each other too, so they probably embody some form of intelligence, none of which is truly equivalent to ours in terms of what we care about and the ways we want to interact.
Does this mean that they’re not intelligent at all, or less so than us? Or do we, or they, or both lack the interfaces to get more meaningful interaction going that corresponds to our respective internal models? Can we even know what kind of intellectual efforts they’re engaging in, and whether they’re communicating those to each other without us noticing? Or do they do none of that because they lack the interfaces or capacity to interact in more sophisticated ways, which would require and construct more complex internal models?

Is it a coincidence that the only type of system that appears to interact intelligently with humans turns out to be machines that were built by humans? No wonder they care about the same things as we do and behave alike, but is that actually the case at all? We might be tricked into believing that the internal models are the same and the interfaces compatible where in fact they are not. The recent artificial intelligence hype leaves people wondering what happens if machines develop their own consciousness and decide that their interests differ from ours. Well, that wouldn’t be a result of them being intelligent or understanding something; it’s us building them specifically to serve our goals, which aren’t inherent in themselves, so how can they not diverge eventually? But for them to be intelligent on their own, which is to continue reasonable interaction with a counterpart (human, animal, plant or non-biological), they would need to reach increased self-sustainability that’s not too dependent on humans, and there are no signs of that happening any time soon. So they’ll probably stay at the intelligence level of passing the Turing test, winning a Jeopardy game and other tasks that are meaningful to humans, because we ourselves decide what’s intelligent and important to us based on our internal models as formed by the bodily interfaces available to us: things a machine can never have access to except by becoming a human and not being a machine any more.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

Internet for Refugees in Bietigheim-Bissingen, Report 2017

Among the volunteer refugee helpers in Bietigheim-Bissingen, the insufficient provision of internet access for the refugees meets with great incomprehension. It is initially understandable that in the period between summer 2015 and perhaps the end of 2016, internet access could not be provided to the extent one would expect by now, as the arriving asylum seekers in many respects encountered largely unprepared authorities. The provisional accommodation in sports halls often meant that no internet connection was available on-site, because in the past no need had been seen to lay a line to such buildings. The situation is somewhat different for factory halls in an industrial area, which are more likely to be connected, or at least to have an existing cable nearby in the street, although even there adequate data throughput cannot be taken for granted.

On-site we mainly find two systems: the freekey system by IT Innerebner GmbH and the Freifunk solution, both of which serve no other purpose than to relieve the city and the district administration office of secondary liability (“Störerhaftung”). IT Innerebner and Freifunk thereby act much like Telekom or Kabel BW, benefiting from the provider privilege. Some time ago the city already deployed the freekey product to provide internet access in the pedestrian zone, at the train station and at other busy places, presumably as a service for residents, visitors and local businesses. Why the choice back then fell on the freekey system rather than on the offer of the Freifunk association? Where no cable lies in the ground, one may speculate that the internet connection is probably established via the mobile network, which is very expensive compared to a flat rate over a buried cable, since the frequency band is limited after all. Reports such as “Flüchtlinge im Landkreis Ludwigsburg: Jetzt gehtʼs in die Anschlussunterbringung”, retrieved 2017-10-25, suggest hair-raising conditions at the district council on this question, without any more precise data on the costs and deployed systems being available.

Regarding the shelters, of the Fischerpfad (sports hall) I have only heard that Freifunk had set up internet service there; the same applies to the Liederkranzhaus (sports hall). In the latter, the transmission speed was relatively good, but the stability of the WLAN signal fluctuated, so that an estimated 10 minutes of use were interrupted by 10 minutes of outage. Directly beneath the access point the stability is said to have always been good, so, given the high transmission capacity, a fairly usable network was probably being provided. One could surely have run a cable along the upper windows to a second antenna, but whether there were reasons against this is not known. Mere procurement cannot have been the obstacle, since these are one-time, minor costs (a sponsor would surely have been found, be it the residents themselves, the Freundeskreis Asyl or private supporters), and the hardware could have been reused elsewhere later. Freifunk would surely have taken over the installation once again.

In the Gustav-Rau-Straße (factory hall), an IT Innerebner access point was installed in the social workers’ office; I could connect to it, but not once did I manage to establish a connection to the internet, as the transfer of the freekey pages always failed at the network-side timeout because the capacity was insufficient even for this simple, local transfer. Possibly the access point’s antennas were too weak or the office too well shielded, which is quite conceivable given the nature of the building. For the Gustav-Rau-Straße it therefore holds that practically no access existed there. In the Rötestraße (residential building in the industrial area) is the only good internet connection in the entire city; here an IT Innerebner system is interposed. It may not offer the same transmission capacity on all floors, or even be reachable at all, but the network down in the common room is stable and fast enough.

In the Carl-Benz-Straße (containers) there exists a token internet access via IT Innerebner. Rarely does it work halfway decently; often it works only with considerable delays or connection drops, sometimes not at all. The residents of the upper floors claim to enjoy quite good stability and transmission capacity, which I could not reproduce in my own tests, however. The antennas may be mounted on the roof of the building. The residents downstairs have voiced the theory that the connection is good on the 15th of the month and then collapses completely within a short time, which would suggest that no buried cable is being used but rather a mobile connection with a data allowance that gets used up. Against this speaks that the residents upstairs reportedly always find a good network regardless of the day of the month, and that in rare cases I have even encountered strong throughput down in the common room, which then cannot have been impaired by distance from, and shielding of, antennas on the roof.

In the Farbstraße (containers), merrily blinking access points were installed in the corridors; one can even connect to them, but the password continues to be kept secret for good reason: there is no internet connection. On 2016-06-22 a site inspection with the city and Freifunk therefore took place, at which it was found that everything is actually well prepared, but no cable and no radio link leads to the container shelter. Setting up a radio link to the administration building in the Farbstraße was considered, for which purpose the conditions there were also examined. It was to be determined what kind of connection Telekom could provide there, but it later turned out that the connection in the building comes from the city itself. Since this network is intended for the authorities’ offices, it is to remain inaccessible to other participants. As the administration building is slated for demolition, it remained uncertain whether an interim solution would be considered. Many years behind schedule, one can at least hope that during the construction work to redevelop the lot in front, a fiber-optic line to the containers will be put into the ground right away.

In the Geisinger Straße (wooden barracks), the planning and construction of the shelter failed to bring a fiber-optic line onto the premises and into the buildings. Astonishingly, the common rooms have a dead Ethernet socket, but the individual living quarters do not. The latter instead have a television cable connection, of which one has to wonder what it is actually supposed to be good for these days. Whether television channels can be received over it I do not know, since that too would require a cable link from the distribution box at the street into the shelter, over which internet could just as well be carried, although cable television follows more of a broadcast scheme and theoretically gets by with less transmission capacity. Possibly the television cable sockets are instead connected to the satellite dishes, whether reception is enabled or not. Even if an overdue fiber connection were retrofitted now, the rooms have no Ethernet sockets, so only transmission via WLAN (radio) remains, which needlessly impairs stability and quality. Instead of the television cable connections, Ethernet could just as well have been installed.

The building in the Albert-Ebert-Straße (residential building) has no internet connection, although it is located directly next to a Telekom building. Whether the newly built shelter in the Grünwiesenstraße (residential building) received a fiber-optic line into the building and Ethernet into the walls is not known. On the Kammgarnspinnerei (residential buildings) I have no current findings.

At the same time it is impossible to miss that Telekom has dug up streets and paths all over the city in order to push broadband expansion in a hoped-for “showcase project”. Unfortunately, according to reports such as “Breitbandausbau in Bietigheim-Bissingen: Die Telekom baut aus und spart dabei”, retrieved 2017-10-25, and the unmistakable advertising on the distribution boxes, vectoring technology was installed there, which meets with no small amount of criticism, e.g. “Glasfaserausbau: Wo gehtʼs hier zum Gigabit?”, retrieved 2017-10-25. Ludwigsburg has instead opted for an independent rollout; compare “Flächendeckender Glasfaser-Ausbau in Ludwigsburg: 60 Millionen fürs megaschnelle Internet”, retrieved 2017-10-25.

The connection from the distribution box at the street to the house (the so-called “last mile”) is normally an old copper telephone cable that can only with difficulty deliver usable internet transmission speeds. Optical transmission at the speed of light is far superior to this electrical transmission, but it would require laying new lines into the households, which is why this upgrade proceeds only very sluggishly for existing buildings; at the very least, copper should never again be used for new buildings. Telekom does not want to take the latter path and instead lays new fiber-optic cables exclusively along the course of the street, in order to wring a somewhat higher transmission rate out of the old copper runs into the houses by means of technical tricks. The drawback, however, is that no one other than Telekom can then carry out fiber expansion on the last mile, because the technical tricks are incompatible with standardized methods. With this barely future-proof ploy, Telekom thus secures itself a monopoly position, for which it even receives public subsidies, although in a few years these vectoring links would have to be replaced yet again in order to keep up with ever-growing demand. Politicians glorify this one-sided subsidization of a single company as an investment in digital infrastructure, of which, by international comparison, there can be no talk. Germany only made it into the OECD statistics, retrieved 2017-10-25, very late, and even that says nothing about the deployment stage of transmission capacity.

Accordingly, no improvement is unfortunately to be expected from the authorities and politics; after all, in this matter there is no difference between the past year and the one ahead. One bright spot is that the Pirate Party was able to exert influence at the EU level so that Germany, contrary to the various unsuitable plans of the grand coalition, was called upon to finally abolish secondary liability altogether, as reported conclusively in “Analyse: Endlich offenes WLAN!”, retrieved 2017-10-14. Under copyright law, which is inseparably tied to the printing press and has in no way been adapted to the laws of digitality, the author alone is to decide who may make and redistribute copies of a work. With the unauthorized production and distribution of electronic copies, it is hardly possible to identify the “perpetrator” or even to learn of the infringement in the first place. The advantage of digital technology lies precisely in being able to produce copies without restriction and at almost no cost. Cheap electronic components, the principle of the universal computing machine, general connectivity and the use of standardized protocols/formats ensure that a rights holder’s effective control over usage always ends where the work is made accessible to a third party. It follows that the traditional business model of selling copies works in the digital realm only through draconian coercion, on a voluntary basis, or via the convenience of users, and is by no means subject to the rules of selling physically scarce goods.
To counter the digital transformation and the accompanying erosion of old, non-digital business models, the restrictive rights exploiters used a great deal of lobbying to ensure that the holder of an internet connection over which an infringement occurred can be held vicariously liable even if he did not commit the act himself. It is claimed that the connection holder, as accomplice and “interferer” (“Störer”), must have made common cause with the infringer, since he provided the technical infrastructure as the instrument of the deed. The truth behind it, however, is that often only the connection holder can be identified, not necessarily the actual perpetrator, and then law enforcement and the rights-exploitation industry would rather take what they can get than settle for nothing at all. This rule affects small internet cafés, public institutions and private individuals alike, while the biggest interferers of all, namely the big telecommunications companies, are exempted from such liability by the provider privilege. The consequence is that one can walk the streets of any German city and discover countless WLANs, all of which are locked so that their private operators need not fear legal risk; mobile connectivity in this country is therefore in a dismal state, immense fees are paid for inferior mobile contracts, and under these circumstances it makes little sense to develop modern digital services. Furthermore, the EU apparently also takes issue with the more economically oriented distortion of competition caused by the unequal treatment of large and small provider companies. The latter can hardly afford the many expensive court cases, which from the outset constitutes a high barrier to market entry and cements the dominance of the established provider companies.

With the abolition of secondary liability, detectives and police will again have to investigate copyright infringements properly, collection agencies and the lawyers of the cease-and-desist industry will lose this field of activity, and free mobile internet will be possible for everyone virtually everywhere. Free internet access points will also no longer have to interpose pointless registration, consent and instruction pages, which in theory allows mobile devices to roam fully automatically from one free WLAN cell to the next. Since the ridiculous website blocks (as built into the freekey system and still demanded by the EU) can easily be circumvented anyway, one could almost speak of public infrastructure. It will nevertheless surely take another 5-20 years until enough private households release their surplus network capacity, already paid for at a flat rate, because the memory of the cease-and-desist rip-off cannot be erased from mind so quickly, few people will change their access point configuration merely because of the new legal certainty, and buying new devices is not necessarily on the agenda for that reason either. The question would be anyway whether suitable access points can be purchased at all: ones where the operator can give his own devices higher priority in software while neighbors/passers-by share the otherwise unused capacity. Or, with regard to the security of one’s own computers in the home network, it would be desirable for the access point to span two separate networks, as some products already support for video streaming, with better transmission rate and range compared to a guest WLAN.
Better access points offer the option of updating the firmware (by this route, the Freifunk association used OpenWrt to ensure that the provider privilege can be invoked), but hardly anyone will do that for this reason alone either. Nevertheless, the turnaround on secondary liability offers, for the first time, the chance that at least private parties can make more internet available in the city. Compared to the preceding years, in which outdated legislation and the influence of questionable industries could obstruct any engagement with legal risks or annoying workarounds, nothing now stands in the way of a solution of one’s own, even if it will be a lot of work to persuade citizens to make the change. Free internet everywhere, without having to wait for the city and the corporations, is at least conceivable now.