HyperCard

HyperCard (1, 2) seems to have an origin similar to the famous rowboat experience (0:29). The implemented techniques remind me a lot of one of my own projects that assembled transparent images into a scene with the help of the gdlib in PHP, complemented by corresponding image maps to make areas of the final composition clickable for navigation. Everything is driven by records from a database, which define the presence of objects based on a time range, plus some caching for optimization. To make the creation of such virtual worlds easier, I soon added an editor (1, 2).

Today, browsers come with HTML5 canvas, CSS3 and SVG. The HTML5 canvas is the generic drawing area control that finally got standardized. Similar to typical game programming, objects get drawn over each other and the result is displayed on the screen, usually at a frame rate not below 30 frames per second so that animations appear smooth to the human eye. SVG, in contrast, manipulates graphics primarily through a tree of objects, leaving it to the SVG engine to figure out which portions are visible and therefore need to be drawn/updated, as opposed to portions that are completely covered by other objects. Each object can register its own click event handler and the engine will automatically call the right one, without the manual bounding box calculations required with the HTML5 canvas. CSS3 should be similar to SVG in that regard, and with the recent extension of transformations it might be worth a look. Unfortunately, SVG is said to be bad at text handling. Anyway, the browser would make a pretty decent player for a modern HyperCard reincarnation, but as an authoring environment? Who knows whether window.localStorage is big enough to hold all the data, including many or large images, and whether the sandbox can be escaped with a download or “save as” hack, because it’s probably not a good idea to require an internet connection all the time or to persist stacks on the server when they should be standalone units that can be sent around. EPUB may help with that, not to run JavaScript on e-ink devices, but to package the different resources together for distribution. The recipient would simply extract the contents again and open them as a local website in the browser, or a dedicated e-reader software would take care of that.
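To make the click-handling difference concrete, here is a minimal sketch (the element IDs, the object record and navigateTo() are made up for illustration): with SVG the engine does the hit-testing per object, while on the canvas the player has to do its own bounding box test.

```js
// SVG: the engine dispatches the click to the element that was actually hit.
const door = document.querySelector('svg#card rect#door');
door.addEventListener('click', () => navigateTo('hallway')); // navigateTo() is a placeholder

// Canvas: one click handler, and the player does its own bounding box test.
const canvas = document.querySelector('canvas#card');
const objects = [
  { id: 'door', x: 40, y: 20, w: 80, h: 160, target: 'hallway' } // drawn earlier by the player
];
canvas.addEventListener('click', (event) => {
  const bounds = canvas.getBoundingClientRect();
  const x = event.clientX - bounds.left;
  const y = event.clientY - bounds.top;
  const hit = objects.find((o) => x >= o.x && x <= o.x + o.w && y >= o.y && y <= o.y + o.h);
  if (hit) navigateTo(hit.target);
});
```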

The hardware back in the day granted HyperCard some advantages we can’t make use of any more. With the fixed screen size of the Macintosh, the card dimensions never had to change. In our time, vector graphics would avoid scaling issues as long as the aspect ratio of the screen stays the same. If the underlying cards constitute a navigable, continuous space similar to the top-level coordinate system of my “virtual world” project, the screen dimensions could just become a viewport. Still, what’s the solution for a landscape card rendered on a portrait screen? Scrolling? Should stacks be prepared specifically for a single screen resolution only? I’m not convinced yet. At least text tends to be reflowable, so systems for text don’t run into this problem too much.
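For illustration, a minimal sketch (my own assumption, not something the original HyperCard did) of how a continuous SVG card space could be reduced to a viewport; the preserveAspectRatio value is exactly where the landscape-card-on-portrait-screen question surfaces:

```js
// Treat the screen as a window onto one large card coordinate space.
const world = document.querySelector('svg#world');

function showCard(x, y, width, height) {
  // "meet" letterboxes a landscape card on a portrait screen,
  // "slice" crops it, "none" distorts it; none of the three is obviously right.
  world.setAttribute('viewBox', `${x} ${y} ${width} ${height}`);
  world.setAttribute('preserveAspectRatio', 'xMidYMid meet');
}

showCard(0, 0, 512, 342); // the classic Macintosh card size, as an example
```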

This is where HyperCard differs from an Open Hyperdocument System: the former is much more visual and less concerned with text manipulation anyway. HyperCard wasn’t networked much either, so the collaborative creation of stacks could be introduced to start a federated application builder tool. Imagine some kind of RAD-centric ecosystem that offers ready-made libre-licensed GUI control toolkits (14:20) to be imported/referenced from checked/signed software repositories, with a way to share libre-licensed stacks via an app store, enabling Bret Victor’s vision on a large scale and eventually getting some of Jeff Rulifson’s bootstrapping and Adam Cheyer’s Collaborama in. With the stupid artificial restrictions of the browser, the authoring tool could be a separate native application that generates the needed files for the web and could target other execution targets as well, unless it turns out to be crucial for the usage paradigm that reading and authoring happen in the same “space” (6:20) for the project not to head in a wrong direction. Maybe it’s not too bad to separate the two as long as the generated stack files can still be investigated and imported back into the editor manually. On the other hand, if the builder tool supports several execution targets and an “export” irreversibly decides for a particular scripting language, then we would end up with a project source as the primary form for one to do modifications in, and users receiving something else less useful.

Another major difference between an Open Hyperdocument System and HyperCard would be that I expect an OHS to implement a standardized specification of capabilities that provide an infrastructure for text applications to rely on, while HyperCard would focus on custom code that comes along with each individual self-sufficient stack package. HyperCard could follow a much more flexible code snippet sharing approach, as there’s no need to stay interoperable with every other stack out there, which instead is certainly an important requirement for an OHS. So now, with no advances for text in 2018 to improve my reading, writing and publishing, I guess I should work on an OHS first, even though building a modern HyperCard clone would be fun, useful and not too difficult with the technology that’s currently at our disposal.

Bill Atkinson claims some “Open Sourcyness” (6:40) for HyperCard, but it’s a typical example of “Open Source” that doesn’t understand or care about the control proprietary software developers seek to exercise (8:00) in order to exploit it against the interests of the user, community and public. What does it help if the authoring/reading tool is freeware (10:37) and encourages creators to produce content that happens to be “open” by nature (interpreted and not compiled to a binary executable), but then denies them actual ownership of the tools as their means of production, therefore not really empowering them? Sure, the package seems to be gratis, but only if people buy into the Apple world, which traps them and their content in a tightly controlled vendor lock-in. Libre-freely licensed software is owned by the user/community/public and can’t be taken away from them, but HyperCard’s being freeware at first and then split into proprietary packages prevented it from becoming what the WWW is today. Apple’s decision to discontinue the product killed it entirely because neither users nor community were allowed/enabled to keep it running for themselves. The fate of Bill Atkinson’s HyperCard was pretty much the same as that of Donn Denman’s MacBASIC, with the only difference that it happened to HyperCard somewhat later, when there were already plenty of naive adopters to be affected by it. Society and civilization at large can’t allow their basic infrastructure to be under the control of a single company, and if users build on top of proprietary dependencies, they have to be prepared to lose all of their effort again very easily, which is exactly what happened to HyperCard (45:05). Similarly, would you be comfortable entrusting your stacks to these punks? Bill Atkinson probably couldn’t know better at the time, as the libre software movement was still in its infancy. It could be that the only apparent barrier to adoption seemed to be the price, because that would exclude those who need it the most, and if we learn from Johannes Gutenberg, Linus Torvalds or Tim Berners-Lee, there’s really no way of charging for a cultural technique, or otherwise it simply won’t become one. And over “free beer”, it’s very easy to miss the other important distinction between real technology and mere products, one for the benefit of humanity and the other for the benefit of a few stakeholders: every participant must be empowered technically as well as legally to be in control of their own merit. One has to wonder how this should be the case for iBooks Author (1:08:16) or anything else from Apple.

Federated Wiki

The wiki embodies Ward Cunningham’s approach to solving complex, urgent problems (analogous to Doug Engelbart’s conceptual framework). With Wikipedia adopting his technology, I think it’s fair to say that he achieved a great deal of it (1, 2), himself having been inspired by HyperCard. In contrast to the popular MediaWiki software package, Ward’s most recent wiki implementation is decentralized in acknowledgement of the sovereignty, independence and autonomy of participants on the network. Contributions to a collective effort and the many different perspectives need to be federated of course, hence the name “Federated Wiki”.

Ward’s Federated Wiki concept offers quite some unrealized potential when it comes to an Open Hyperdocument System, and Ward is fully aware of it, which in itself is testament to his deep insight. The hypertext aspects don’t get mentioned too often in his talks, and why should they, work on (linked) data is equally important. Ward has some ideas for what we would call ViewSpecs (39:15), revision history (41:30) – although more thinking could go into this – federating (43:36), capability infrastructure (44:44) and the necessary libre-free licensing not only of the content, but also of the corresponding software (45:46-47:39). Beyond the Federated Wiki project, it might be beneficial to closely study other wiki proposals too.

I guess it becomes pretty apparent that I need to start my own Federated Wiki instance, as joining a wiki farm is probably not enough. I hate blogging because of the way the text is stored and manipulated, but I keep doing it for now until I get a better system set up, and because WordPress is libre-freely licensed software and provides APIs I can work with, at some point in the future I’ll just export all of the posts and convert them to whatever the new platform/infrastructure will be. On the blog, capabilities like allowing others to correct/extend my texts directly are missing, similar to distributed source code management as popularized by git/GitHub (forking, pull requests). For now, I do some small experimentation on skreutzer.tries.fed.wiki.

Track Changes (Book by Matthew Kirschenbaum)

I’m currently reading “Track Changes – A Literary History of Word Processing” by Matthew G. Kirschenbaum (1, 2, 3, 4, 5), which is about an interesting period of time in which computers weren’t powerful enough to expand into the total mess we’re in today and therefore were limited to basic text manipulation only. For my research of text and hypertext systems, I usually don’t look too much at retro computing because I can’t get those machines and their software today in order to do my own reading, writing and publishing, but it becomes relevant again where those artifacts provide certain mechanisms, functions and approaches, because those, why not, should be transferred/translated into today’s computing world so we can enjoy them again and extend them beyond their original conception and implementation. My particular question for the book has to do with my still unsuccessful attempts to build a change tracking text editor, and the title of the book, referring to the “track changes” feature of Microsoft Word, leaves me wondering if there is or was a writing environment that implemented change tracking the right way. I’m not aware of a single one, but there must be one out there I guess; it’s too trivial not to have come into existence yet.
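For what it’s worth, here is a minimal sketch of what I currently imagine by tracking changes “the right way” (entirely my own assumption, not something taken from the book or from Word): every edit is kept as an operation in an append-only log instead of being reconstructed later by diffing snapshots.

```js
// Append-only log of editing operations; the history is never thrown away.
const changeLog = [];

function recordInsert(position, text, author) {
  changeLog.push({ op: 'insert', position, text, author, time: Date.now() });
}

function recordDelete(position, length, author) {
  changeLog.push({ op: 'delete', position, length, author, time: Date.now() });
}

// The current text is just the replay of the log over the base text.
function replay(baseText) {
  let text = baseText;
  for (const change of changeLog) {
    if (change.op === 'insert') {
      text = text.slice(0, change.position) + change.text + text.slice(change.position);
    } else {
      text = text.slice(0, change.position) + text.slice(change.position + change.length);
    }
  }
  return text;
}
```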

Are You a Robot?

If you want, imagine yourself being a robot, a machine, for a moment. You send impulses to your arms and legs to move your body around, and you get status information back from many, many sensors. Those sensor data streams get processed in your brain (CPU), which has a certain pre-defined configuration, but also a working memory (RAM). Your internal models of the world are in constant adjustment based on the incoming stream data, which also constructed those representations in the first place. You only execute the task that has been given to you and make your components operate accordingly towards that goal. There might be defects, there can be errors, but any of these tend to be corrected/compensated by whatever mechanism is available in order to maintain deterministic operation so the expected functions can be executed reliably. Other machines similar to you exist out there and you can network with them, but you don’t necessarily need to; you could just as well stay autonomous most of the time. You hope that you never get surprised by a sudden shortage of electricity; on the other hand, you know that your product lifecycle will expire one day.

With humans being robots, they consume input and produce output, a combination of hardware and software: limited to their physical casings, following a series of instructions, using their extremities and additional peripherals to interact with and manipulate their environment. Would such a being still qualify as a human? It’s not that this description wouldn’t be applicable to humans at all, but I guess we understand that there’s a difference between biological systems and mechanical/electrical machines. Robots can only simulate the aspects of biological lifeforms, as they’re not of the same race or species. As the available sensors, the ways of maintaining themselves and the things they care about inherently differ between both systems, it’s probably impossible for them to arrive at the same sort of intelligence, even if both turn out to be somehow intelligent and even if they share the same internal models for representing the potential reality in which they encounter each other.

Machines that pass the Turing test prove that they employ some form of intelligence that cannot be distinguished from a human taking the same test, but the preconditions of the test scenario, in contrast to direct interaction, narrow down on only a few aspects of human intelligence. As repeatedly needs to be pointed out, the Turing test isn’t designed to verify that the subject is human; it’s designed to prove that some machines might not be distinguishable from a human when performing a task that’s regarded as a sufficiently intellectual effort humans tend to engage in. Jaron Lanier explains that the Turing test accepts the limitations of the machine at the expense of the many other aspects of human intelligence, and that intelligence is always influenced, if not entirely determined, by the physical body of the host system. In daily life, it’s pretty uncommon that humans confuse a machine for a fellow human because there are other methods of checking for that than the one suggested by the Turing test. So how can we believe that artificial intelligences can ever “understand” anything at all, that they will ever care or feel the way we do, that the same representation models will lead to equal inherent meaning, especially considering the constant adjustment of those models as a result of existing in a physical reality? It’s surprising how people seem to be convinced that this will be possible one day, or is it the realization that different types of intelligence don’t need to be exactly the same and still can be useful to us?

In case of the latter, I suggest another Reverse Turing test with the objective for the machine to judge if it is interacting with another machine while human participants pretend to be a machine as well. If a human gets positively identified as being a machine, he cannot be denied some machine-likeness: an attribute we wouldn’t value much, while we inconsistently demonstrate great interest in the humanness of machines without asking ourselves what machines, if intelligent, would think of us being in their likeness. We can expect that it shouldn’t be too hard to fool the machine, because machines are constructed by humans to interact with humans, and where they’re not, they can be reverse-engineered (in case reading the handbook/documentation would be considered cheating). Would such a test be of any help to draw conclusions about intelligence? If not, “intelligence” must be an originary human attribute in the sense that we usually refer to human intelligence exclusively as opposed to other forms of intelligence. We assume that plants or animals can’t pass the Turing test because they don’t have the same form or body of intelligence as we do, but a machine surely can be built that would give plants and animals a hard time figuring out who or what is at the other end. Machines didn’t set themselves up to perform a Reverse Turing test on plants, animals and humans in order to find out if those systems are like them, and why would they, at which point we can discard any claims that their intelligence is comparable to ours.

Intelligence, where bound to a physical host system, must sustain itself or it will cease to exist, which is usually done by interacting with the environment comprised of other systems and non-systems. Interaction can only happen via an interface between internal representation and external world, and if two systems interact with each other (with the world in between) by using only a single interface of theirs without a second channel, they may indeed recognize their counterpart as being intelligent as long as the interaction makes sense to them. If additional channels are used, the other side must interact intelligently on those as well, otherwise the differences would become apparent. An intelligent system artificially limiting its use of interfaces just to conduct a Turing test on a subject in the hope of passing it as equally “intelligent”, while all the other interface channels would suggest significant differences: that’s the human becoming a machine so the differences can’t be observed any longer. With interfaces providing input to be compared to internal models in order to adjust them, we as humans regard only those interactions as meaningful/intelligent that make sense according to our own current state of models. We don’t think of plants and animals as being as intelligent as we are, but some interactions with them appear reasonable to us and they seem to interact with each other too, so they probably embody some form of intelligence, none of which is truly equivalent to ours in terms of what we care about and the ways we want to interact. Does this mean that they’re not intelligent at all or less so than us? Or is it that we or they or both lack the interfaces to get more meaningful interaction going that corresponds to our respective internal models? Can we even know what kind of intellectual efforts they’re engaging in and whether they’re communicating those to each other without us noticing, or don’t they do any of that because they lack the interfaces or capacity to interact in more sophisticated ways, which would require and construct more complex internal models?

Is it a coincidence that the only type of system which appears to intelligently interact with humans turns out to be machines that were built by humans? No wonder they care about the same things as we do and behave alike, but is that actually the case at all? We might be tricked into believing that the internal models are the same and the interfaces compatible where in fact they are not. The recent artificial intelligence hype leaves people wondering about what happens if machines develop their own consciousness and decide that their interests differ from ours. Well, that wouldn’t be a result of them being intelligent or understanding something; it’s us building them specifically to serve our goals, which aren’t inherent in themselves, so how can they not diverge eventually? But for them to be intelligent on their own, which is to continue reasonable interaction with a counterpart (human, animal, plant or non-biological), they would need to reach increased self-sustainability that’s not too dependent on humans, and there are no signs of that happening any time soon, so they’ll probably stay on the intelligence level of passing the Turing test, winning a Jeopardy game and other tasks that are meaningful to humans, because we ourselves decide what’s intelligent and important to us based on our internal models as formed by the bodily interfaces available to us, things a machine can never have access to except by becoming a human and no longer being a machine.

Internet for Refugees in Bietigheim-Bissingen, Report 2017

Among the volunteer refugee helpers in Bietigheim-Bissingen, the inadequate provision of internet access for refugees is met with great incomprehension. It is understandable at first that in the period between summer 2015 and perhaps the end of 2016 internet access could not be provided to the extent one would expect by now, since the arriving asylum seekers encountered authorities that were largely unprepared in many respects. The temporary accommodation in sports halls often meant that no internet connection was available there to begin with, because in the past nobody saw a need to run a line to such buildings. The situation is somewhat different for factory halls in an industrial area, which are more likely to be connected or at least close to a cable already present in the street, although even there an adequate data throughput cannot be taken for granted.

On site, we mainly find two systems: the freekey system by IT Innerebner GmbH and the Freifunk solution, both of which serve no other purpose than to take the city and the district administration office out of the Störerhaftung (the German liability of an access operator as an “interferer” for what users do on the connection). IT Innerebner and Freifunk thereby act much like Telekom or Kabel BW and benefit from the provider privilege. Quite a while ago, the city already set up internet access in the pedestrian zone, at the train station and at other busy places using the freekey product, presumably as a service for residents, visitors and local businesses. Why did the choice fall on the freekey system back then, and why was the offer of the Freifunk association passed over? Where no cable lies in the ground, one can speculate that the internet access is probably established via the mobile network, which is very expensive compared to a flat rate over a buried cable, since the frequency band is simply limited. Reports such as „Flüchtlinge im Landkreis Ludwigsburg: Jetzt gehtʼs in die Anschlussunterbringung“, retrieved 2017-10-25, suggest hair-raising conditions at the district council regarding this question, without any more precise figures on the costs and the systems in use being discoverable.

Regarding the shelters, all I heard about the Fischerpfad (sports hall) is that Freifunk had set up internet coverage there; the same applies to the Liederkranzhaus (sports hall). In the latter, the transmission speed was relatively good, but the stability of the WLAN signal fluctuated, so that what felt like 10 minutes of use were interrupted by 10 minutes of downtime. Directly under the access point, the stability is said to have always been good, so that thanks to the high transmission capacity a fairly usable network was probably provided. One could surely have run a cable along the upper windows to a second antenna, but whether there were reasons against it is not known. Mere procurement cannot have been the obstacle, since it would have been a one-time, small expense (a sponsor would certainly have been found, whether the residents themselves, the Freundeskreis Asyl or private supporters), and the hardware could have been reused elsewhere later. Freifunk would surely have taken care of the installation again.

In the Gustav-Rau-Straße (factory hall), an IT Innerebner access point was installed in the social workers’ office. I could connect to it, but not a single time did I manage to establish a connection to the internet, because loading the freekey pages always failed with a network-side timeout, as the capacity was insufficient even for this simple, local transfer. Possibly the antennas of the access point were too weak or the office too well shielded, which is quite plausible given the construction of the building. For the Gustav-Rau-Straße it therefore holds that there was practically no access. The Rötestraße (residential building in the industrial area) has the only good internet connection in the entire city area, with an IT Innerebner system placed in between. It may not offer the same transmission capacity on all floors, or be reachable there at all, but the network downstairs in the group room is stable and fast enough.

In the Carl-Benz-Straße (containers), a token internet access via IT Innerebner exists. Rarely does it work halfway decently, often only with considerable delays or dropped connections, sometimes not at all. The residents of the upper floors claim that they enjoy fairly good stability and transmission capacity, which I could not reproduce in my own tests. The antennas are possibly mounted on the roof of the building. The residents downstairs have put forward the theory that the connection is good on the 15th of the month and then collapses completely within a short time, which would suggest that no cable in the ground is used but rather a mobile connection with a data quota that gets used up. Against this speaks that the residents upstairs reportedly always find a good network regardless of the day of the month, and that in rare cases I even encountered strong throughput downstairs in the common room, which then cannot have been degraded by distance and shielding from antennas on the roof.

In the Farbstraße (containers), merrily blinking access points were installed in the corridors. One can even connect to them, but the password is still kept secret for a good reason: there is no internet connection. On 2016-06-22 an inspection therefore took place with the city and Freifunk, at which it was determined that everything is actually well prepared, but no cable and no wireless link leads to the container shelter. Setting up a wireless link to the administration building in the Farbstraße was considered, for which purpose the conditions there were checked as well. The plan was to find out what connection Telekom could provide there, but it later turned out that the connection in the building comes from the city itself. Since that network is intended for the offices of the authorities, it is supposed to remain inaccessible to other participants. As the administration building is to be demolished, it remained uncertain whether an interim solution would be considered. Many years too late, one can at least hope that during the construction work for redesigning the lot in front, a fiber-optic line to the containers will go into the ground right away.

In the Geisinger Straße (wooden barracks), the planning and construction of the shelter failed to bring a fiber-optic line onto the premises and into the buildings. Astonishingly, the group rooms have a dead Ethernet socket, but the individual living quarters do not. The latter instead have a TV cable outlet, which raises the question of what that is supposed to be good for nowadays. Whether TV channels can be received over it, I do not know, since this too would require a cable connection from the distribution box at the street into the shelter, over which internet could just as well be carried, although cable TV follows more of a broadcast scheme and in theory gets by with less transmission capacity. Possibly the TV cable outlets are also connected to the satellite dishes, whether reception is enabled or not. Even if an overdue fiber connection were retrofitted now, the rooms have no Ethernet sockets, so that only transmission via WLAN (radio) remains, which unnecessarily impairs stability and quality. Instead of the TV cable outlets, Ethernet could just as well have been installed.

The building in the Albert-Ebert-Straße (residential building) has no internet connection, although it is located directly next to a Telekom building. Whether a fiber-optic connection into the building and Ethernet into the walls were installed in the newly erected shelter in the Grünwiesenstraße (residential building) is not known. For the Kammgarnspinnerei (residential buildings), I have no current information.

At the same time, it is impossible to overlook that Telekom has dug up streets and paths all over the city to push broadband expansion in a hoped-for “showcase project”. Unfortunately, according to reports such as „Breitbandausbau in Bietigheim-Bissingen: Die Telekom baut aus und spart dabei“, retrieved 2017-10-25, and the unmistakable advertising on the distribution boxes, vectoring technology was deployed there, which meets no small amount of criticism, see for example „Glasfaserausbau: Wo gehtʼs hier zum Gigabit?“, retrieved 2017-10-25. Ludwigsburg instead decided on an independent rollout, compare „Flächendeckender Glasfaser-Ausbau in Ludwigsburg: 60 Millionen fürs megaschnelle Internet“, retrieved 2017-10-25.

The connection from the distribution box at the street to the house (the so-called “last mile”) is normally an old copper telephone cable, which can only provide usable internet transmission speeds with difficulty. Optical transmission at the speed of light is far superior to this electrical transmission, but it would require new lines to be laid into the households, which is why this rollout proceeds only very sluggishly for existing buildings; at the very least, copper should never be used for new buildings any more. Telekom does not want to go that route and instead lays new fiber-optic cables exclusively along the course of the street, in order to wring a somewhat higher transmission rate out of the old copper runs into the houses by means of technical tricks. The drawback, however, is that nobody other than Telekom can then carry out the fiber rollout on the last mile, because those technical tricks are incompatible with standardized methods. Telekom thus secures a monopoly position with this hardly future-proof ploy, for which it even receives public subsidies, although in a few years these vectoring links would have to be replaced again anyway to keep up with the ever-increasing demand. Politicians romanticize this one-sided subsidization of a single company as an investment in digital infrastructure, of which, in international comparison, there can be no talk. Germany made it into the OECD statistics, retrieved 2017-10-25, only very late, and even that says nothing yet about the level of transmission capacity deployed.

Accordingly, no improvement is unfortunately to be expected from authorities and politics, since in substance there is no difference between the past year and the coming one. One ray of hope is that the Pirate Party at the EU level was able to push for Germany being called upon, contrary to the various unsuitable plans of the grand coalition, to finally abandon the Störerhaftung altogether, as conclusively reported, among others, in „Analyse: Endlich offenes WLAN!“, retrieved 2017-10-14.

Under copyright law, which is inseparably tied to the printing press and has in no way been adapted to the realities of the digital, the author alone is supposed to determine who may make and redistribute copies of a work. With the unauthorized production and passing-on of electronic copies, it is hardly possible to identify the “perpetrator” or even to learn of the infringement in the first place. The advantage of digital technology consists precisely in being able to produce copies without restriction and at almost no cost. Cheap electronic components, the principle of the universal computing machine, general networking and the use of standardized protocols/formats ensure that a rights holder’s effective control over usage always ends where the work is made accessible to a third party. It follows that the traditional business model of selling copies only works in the digital realm through draconian coercion, on a voluntary basis, or via the convenience of users, and is by no means subject to the rules of selling physically scarce goods. To counteract the digital shift and the accompanying erosion of old, non-digital business models, the restrictive rights exploiters used plenty of lobbying to ensure that the owner of an internet connection over which an infringement occurred can be held liable vicariously even if the act was not committed by him at all. It is claimed that the connection owner, as accomplice and “interferer”, must have made common cause with the infringer, since he after all provided the technical infrastructure as the instrument of the deed. In truth, what lies behind it is that often only the connection owner can be identified, but not necessarily the actual perpetrator, and then law enforcement and the rights-exploitation industry prefer to take what they can get rather than having to settle for nothing. This rule affects small internet cafés, public institutions and private individuals alike, while the biggest “interferers” of all, namely the large telecommunications companies, are exempt from such liability thanks to the provider privilege. The consequence is that you can walk through the streets of any German city and discover countless WLANs, all of which are locked down so that their private operators need not fear any legal risk; as a result, mobile connectivity in this country is in a sorry state, immense fees are paid for inferior mobile contracts, and under these circumstances there is little point in developing modern digital offerings. Furthermore, the EU apparently also takes issue with the more economically oriented distortion of competition caused by the unequal treatment of large and small provider companies. The latter can hardly afford the many expensive court cases, which constitutes a high barrier to market entry from the outset and cements the dominance of the established provider companies.

With the end of the Störerhaftung, detectives and police will again have to investigate copyright infringements properly, collection agencies and the lawyers of the cease-and-desist industry will lose this field of activity, and free mobile internet for everyone will become possible practically everywhere. Open internet access points will also no longer have to put pointless registration, consent and instruction pages in front, which in theory allows mobile devices to roam fully automatically from one free WLAN cell to the next. Since the ridiculous web blocks, as built into the freekey system and still demanded by the EU, can easily be circumvented anyway, one could almost speak of public infrastructure. It will nevertheless certainly take another 5-20 years until enough private households release their surplus network capacity, which is paid for at a flat rate anyway, because the memory of the cease-and-desist rip-off cannot be erased that quickly, only few people will change their access point configuration merely because of the new legal certainty, and the purchase of new devices is not necessarily on the agenda for that reason either. The question would be anyway whether suitable access points can be bought that let the operator give his own devices a higher priority in software while neighbors/passers-by share the otherwise unused capacity. Or, with regard to the security of one’s own computers in the home network, it would be desirable if the access point spanned two separate networks, as is already supported by some products for video streaming with better transmission rate and range compared to a guest WLAN. Better access points offer the option of updating the firmware (this is the route by which the Freifunk association, using OpenWrt, made sure the provider privilege can be exercised), but hardly anyone will do that for this reason alone. Nonetheless, the turnaround on the Störerhaftung offers for the first time the chance that at least private parties can make more internet available in the city. Compared to the years before, in which outdated legislation and the influence of dubious industries could hamper every engagement with legal risks or annoying workaround effort, nothing now stands in the way of a solution of our own, even if it will be a lot of work to get citizens to make the switch. Free internet everywhere, without having to wait for the city and corporations, is at least conceivable now.

Copyright versus Digital

Copyright is fairly new regulation – philosophers, scholars and entertainers did without it through ancient and medieval times. For them, expressing something for others to hear or writing text down and giving it into other hands obviously meant that there was no way to control what happened to it afterwards. Even the advent of the printing press didn’t change that for the next 250 years, but the spread of literacy eventually created a larger market for books and hence led to a more organized and consolidated publishing industry. The traditional actors in that space were printers and booksellers, until publishers started to emerge. They offered to take care of the two aforementioned functions, but their actual purpose is an entirely different one: they’re risk investors in book projects. They’re out to buy manuscripts from authors, then bring them into proper language, adjust them to the extent that (their) readers might find the result interesting, then pay for the typesetting, then pay for all of the printing (the number of copies depending on their estimates of how much they can sell), then pay for the distribution to the booksellers; out of their own pocket, in advance. Too many times the publishers will find that their books don’t sell, in which case the bookstore will strip the books of their covers and send the covers back as proof while the paper ends up in the trash. Every now and then they land a big hit and earn a lot of money to cover the losses from the failed book projects. Over time, printing technology improved, so the price per copy for a publisher is incredibly low now; on the other hand, additional services like marketing eat away some of what would otherwise be profit.

With that kind of business model, publishers understandably wanted to protect their investment in a manuscript, because an independent printer could easily take the refined text of a book and reproduce it without ever needing to buy the manuscript from the author or to pay the editor as the initial investor had to. Note that the printer would still have to invest in his own typesetting (until the arrival of photocopying) and the distribution to the bookseller, but he probably wouldn’t take chances and would just reprint titles that had already turned out to be successful. Also note that an author would not necessarily get a cut of every copy sold beyond the initial payment by the publisher for physically obtaining the manuscript. “Writer” in itself wasn’t a profession to generate income, but what somebody had to become in order to communicate messages to a large audience. Publishers naturally don’t care much about steady royalties for the author at their own expense, but royalties allow for a lower initial payment and the transfer of some of the financial risk over to the author, as the royalties can be tied to the success of the project. The lawmaker, looking at this constellation, decided to introduce copyright for the publishers that allowed them to prevent other printers and competing publishers from making copies. Such a right however wouldn’t establish itself for the publisher as a matter of fact; instead, it needed to be granted by the author, who himself didn’t have a way to make use of it. The intention might have been to improve the position of authors to negotiate better contracts with publishers, but the same took place when selling the manuscript, and soon the industry came up with a “standard contract” beyond which special conditions can only rarely be arranged. For example, it’s always expected that the contract is an exclusive one, that the author doesn’t grant the right to copy to a second publisher, which is why there isn’t much variation in most commercial literature.

What the legislation did do was to enable the installation of a clear flow of money from the many booksellers to only one publisher. No matter where a reader bought a book, if the publisher had obtained the exclusive copyright, he was empowered to make sure that some of the money would end up in his account. Booksellers were the only source for buying reading material anyway, so if readers wanted to own a text, they were forced to get it as a physical object from those distributors, and the physical transfer/exchange of the object was obviously tied to the payment, no matter to what extent the price covered only the production cost or the entire print-based publishing chain or the cross-subsidization of other book projects by the same publisher. Copyright infringement was never an issue because if unauthorized copies popped up somewhere, it was easy to trace back where they came from. From 1887 on, countries could join the Berne Convention, under which participants have to treat citizens of the other signatories according to their own national copyright law, which in effect led to some convergence, but there is no such thing as an “international copyright”. Still, none of this had an impact on the public because people didn’t have printing presses. Copying was a fairly large operation and bound by physical limitations such as the capital required to build the equipment, the cost per produced unit, a location to set it up, the craftsmanship required of the operators, et cetera. Each copy was a scarce resource. If a print run was sold out, setting up a new one would be expensive, and economies of scale only made it lucrative if larger quantities were produced. To make a text available to somebody else meant that the previous owner would lose possession of that text. And then digital technology changed all of that in the most fundamental way.

To be continued…

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

My Journey Through Text

At first, I did some experimentation with browser game programming (not casual games, but with server-side persistence), attempting to build/generate “worlds” while avoiding the need to hand-design everything in a time-consuming process. One result was a world editor that served as an image composer (using GDLib for PHP) and primitive image map manipulator at a time when HTML5 canvas wasn’t there yet. It constantly keeps me thinking about HyperCard.

Later, I wanted to improve my note-taking in printed German Bible translations; in particular, I wanted to produce my own interleaved editions. Soon I learned that digital Public Domain German Bible texts are usually not true to their printed originals, so I had to start a digitization and proofreading effort (more) first. From a semantically annotated XML source, it was easy to generate a modern XHTML reproduction of the text and then PDF layouts via XSL-FO and LaTeX. I was looking into SILE and recently PoDoFo as PDF generator backends (the former accepts XML as input, the latter is a C++ API that still needs an XML frontend) but didn’t invest too much into supporting them yet. Finally I achieved the original goal of generating interleaved PDFs for printing, and thanks to the advent of print-on-demand, I’m now able to order hardcover thread-stitched books in a quantity as low as a single copy (not even to mention the magazine variant or the DIN A6 or DIN A4 variants out of my DIN A3 monochrome duplex laser printer).

One proofreader introduced me to EPUB, which of course made sense to add as an output format and eventually got me interested in e-publications, e-ink based devices and the publishing industry in general.

Somehow I discovered a Wiki for creating a new libre-freely licensed German Bible translation collaboratively by using a parser that extracts OSIS from the online Wikitext of the MediaWiki software, and for a church congress event we hacked together a semi-automatic workflow that generated the PDF of the study version of the Gospel according to Mark. As I didn’t want to change my existing tools to OSIS as input format and most of the time I didn’t even need the advanced OSIS features, I just internally converted OSIS to my Zefania-XML-based Haggai XML format and made a few adjustments for being able to produce the usual output formats XHTML, PDF and EPUB. Another project was the conversion from the “verse-per-line” format to Haggai XML, not too different from another similar CSV to XHTML to EPUB project.

In the e-book hype of those days, I failed to see why other publications should be produced in a different way than my Bible reproductions (based on concepts like workflow automation, digital-first, XML-first, single-source publishing, multi-channel publishing, etc.), as generating EPUBs and PDFs could easily be generalized, and later the entire workflow (shorter, silent video). I added a converter from ODT to XHTML, so OpenOffice/LibreOffice can be used as a writing tool as long as predefined styles are used to introduce WYSIWYM to the document, in the absence of a better editor. To be able to offer it as a service to self-publishers, I wrote a frontend in PHP that invoked the very same Java code via system calls on a vServer, only adding administrative functionality like user or publication project management (the latter should have become part of the Java package eventually). I even went to some book fairs and more obscure events of the e-book avant-garde, so I know a few people from those worlds and their mentality.

From such occasions, I picked up two of my major projects in that space. One is uploading EPUBs via XHTML to WordPress by using the XML-RPC API (again, there’s an online version of it using the same Java code behind a PHP wrapper), which then wasn’t used in production as the guy who needed it produced EPUBs the WYSIWYG way and naturally wanted this manual typesetting to be preserved in the blog post, while I cared about WYSIWYM instead. With that workflow already available, I got into contact with one of the guys behind several amazing projects, and as they went into the business of running an online e-book store, they got a lot of e-books from publishers along with ONIX metadata file(s), so the job was to import all of the ONIX metadata into WordPress and even update existing records. My attempt was never finished because the shop was shut down after some time, probably in part due to my not supporting them well/soon enough, as I encountered several problems with the testing environment, WordPress and my not-so-hacky, not-so-smart, not-so-agile workflows. But even without completing this mechanism, I went beyond this particular use case and did some general ONIX work.

Smaller projects include the subversion of the placebo non-digital wishful thinking of a self-publishing site that disabled the download button without any technical effect, a GUI frontend for epubcheck, a failed attempt to enlist “e-book enthusiasts” for building a digital library, an importer in PHP (SAX) from Twine to Dembelo (later rewritten by the Dembelo lead developer in more modern PHP), a parser for a Markdown-like note-taking language to XHTML and LaTeX (my interest in learning about writing parsers for domain-specific languages came from the Wikitext to OSIS parser, which I still didn’t find the time to revisit), a Twitch Video Uploader using their API (but I guess it’s broken now because of their “Premiere” nonsense) and a Wiktionary Wikitext parser that generates an XML of German nouns with their respective articles plus the Arabic translations from the big monthly backup dump (to be turned into several word lists for learning).

As I grew more frustrated about traditional publishers, self-publishers, e-book “pirates”, the average reader and the big enterprises who use digital to exploit those who don’t understand it properly, the arrival of refugees from Afghanistan, Iraq, Syria and Africa in Europe forced me to focus on far more serious things than our digital future. Only a tiny fraction of my time investment went into software development. All other civic tech programmers lost interest after only 1/2 years, and I joined the game late, when the momentum was already gone. Most of my attempts to build software for helping out with solving some of the issues are targeted towards volunteers, for instance the ticket system, the petition system, the AutoMailer for mass mailings via PHPMailer, the asylum event system or the case management system, as it turned out to be incredibly difficult to get refugees themselves involved with anything that’s not the Facebook app or WhatsApp, be it the one-way message system or the downloader for the “Langsam Gesprochene Nachrichten” by Deutsche Welle via their RSS feed. Even the tools for the German volunteers were only sporadically used, except the AutoMailer, which was a success; it did its job according to plan.

One project remains incomplete due to lack of time: an attempt to build an online voting system that ended up as a good exercise for learning REST concepts, as a fully functional system would require a lot more conceptual planning.

Entering the hypertext space, I experimented by developing a variant of Ted Nelson’s span selector in Java, a dysfunctional text editor that tracks all changes in Java as well as in C++/Qt4, a converter from Ted’s EDL format to XML, a downloader/retriever for EDLs in XML form and a workflow that glues together the retrieval of such EDLs in XML form and the concatenation of text portions from the obtained resources in order to construct the base text of the document. Originally started for the XML frontend for PoDoFo, I completed an early first version of a StAX parser in C++, but then was able to quickly port it to JavaScript, which was handy for handling the embedded XHTML WordPress blog post content as provided via the JSON API, as I didn’t want to use the DOM, contributing to an independent read-only client for the Doug@50 Journal and HyperGlossary (video). After abandoning the browser due to its CORS nonsense and lack of proper local file access, I finally added a WordPress blog post retriever to be combined with the automatic converters to HTML, EPUB and PDF (video).
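As a rough sketch of the EDL concatenation step mentioned above (the span object shape and the example URLs here are my own simplification for illustration, not Ted’s actual EDL format; a real run would first parse the spans out of the EDL’s XML form):

```js
// Concatenate text portions from the referenced resources to build the base text.
// Each span names a source document and the region to copy out of it.
async function assembleBaseText(spans) {
  let baseText = '';
  for (const span of spans) {
    const response = await fetch(span.url);                          // retrieve the referenced resource
    const content = await response.text();
    baseText += content.slice(span.start, span.start + span.length); // copy the requested portion
  }
  return baseText;
}

// Hypothetical example spans:
assembleBaseText([
  { url: 'https://example.com/source1.txt', start: 0, length: 120 },
  { url: 'https://example.com/source2.txt', start: 45, length: 300 }
]).then((text) => console.log(text));
```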

I briefly worked on the side on a project to introduce semantic annotation (“linked data”) and tooling for it to Bible texts, using the specific example of marking people and places in the Public Domain German Luther 1912 Bible translation, as there was renewed interest as a result of the 500-year anniversary of the Reformation. As it turned out, to the surprise of some (not me), no digital version of the Luther 1912 text is authentic according to any printed original to our knowledge, so another analysis and proofreading effort was necessary. An existing source became usable after a format transformation from flat CSV to hierarchical XML, plus wrapping Strong numbers in XML tags instead of XML markers/milestones, plus converting that arbitrary, custom XML format as derived from the CSV to Haggai XML, plus textual correction.

Now, the goal is to build a new and better hypertext system based on ideas by the early Internet pioneers, Doug Engelbart (see Engelbart Colloquium Session 4A from 38:00 on and Engelbart Colloquium Session 4B from 44:00 on, but especially from 49:19), Ted Nelson, web pioneers, David Gelernter and Ward Cunningham, as in part wonderfully presented by Bret Victor.

Web Apps revisited (PWA) + Geolocation for Augmented Reality + local File-I/O for Web Stack on Desktop

As I’m very interested in developing augmented reality applications, I looked again at Android app development. Some time ago, I was able to build an APK with the Android SDK + Eclipse and install it on a tablet, but after the switch to the IntelliJ-based Android Studio as development environment, it appears to be very hard, if not impossible, for me to even experiment with this technology. It’s also a highly proprietary ecosystem and therefore evil; don’t let yourself get fooled by some mention of GNU/Linux and “Open Source”. Therefore I looked again at web apps, and the approach changed quite a bit recently. The driving force is the realization by Google that people don’t install new apps any more, while such apps are pretty expensive to build and have a bad conversion rate. It turns out that users spend most of their time in just a few of their most favorite apps. As an app developer usually wants to provide the same functionality on the web, he needs to work with at least two different technology stacks and make them look similar to the user. So why not build the application in web technology and have the browser interface with the underlying operating system and its components? There are new mechanisms that help with just that.

One mechanism of the “progressive web app” (“PWA” for short) bundle is called “add to home screen” (“A2HS” for short). Google Chrome already has an entry in the menu for it, which will add a “shortcut” icon onto the homescreen for the URL currently viewed. Now, developers get better control over it as there’s a technical specification by the W3C (description in the MDN). You just have to link a small manifest JSON file in your XHTML header that contains a few hints about how a browser might add the web app to the “installed software” on mobile devices or desktop PCs. Most important are a proper short name, the icon in several sizes, the display mode (the browser might hide all its controls and show the web app full-screen, so it looks like a native app) and the background color for the app window area during load time. The manifest file gives browser implementers the chance to recognize the website as a mobile app, and depending on metrics set by the user, there could be a notification that the current website can install itself as an app.
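To make this concrete, here is a minimal example of what such a manifest file might look like (the names, paths and colors are made up for illustration; the member names are the ones defined by the W3C Web App Manifest specification):

```json
{
  "name": "Help Request App",
  "short_name": "Help",
  "start_url": "/index.xhtml",
  "display": "standalone",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

It gets referenced from the page header with something like <link rel="manifest" href="/manifest.json"/>.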

Even with Android being proprietary, it’s probably the phone/tablet system to target as far as the free/libre software world is concerned. Google has a description of what they care about in the manifest, as well as a validator and an option to analyze the procedure in Google Chrome. If Chrome detects a manifest on a web page, it might bring up a banner after some time and ask the user if he wants to add the app to the home screen. Unlike decent browsers, evil Chrome decides for the user if/when to bring up the banner; I’m not aware of whether the user is able to set a policy regarding its appearance. In my opinion, the browser client should be the agent of the user, not of an online service provider.

Also, Google requires a service worker JS file for the banner to show up. The service worker is very important for many web apps: native apps are permanently installed on the device, so code and assets are there locally and no download needs to occur for using the app; connectivity isn’t necessarily required. With web apps, that can be different. True, they can rely on browser caching, but as permanent local installation of the XHTML, CSS, JavaScript, images and other data doesn’t seem to be a thing yet, in places with bad or no connectivity there shouldn’t be a need to retrieve resources from the net if the app logic could also work perfectly fine locally, if only the data were already present. Even if some new data is supposed to be retrieved every time the web app is opened again, old data (articles, messages, contacts) can already be presented until the new data arrives. The service worker decides which requests for old data can be satisfied from local storage and redirects them there, and which requests need to go over the wire. But there can be very legitimate cases where a service worker makes absolutely no sense. If the app does nothing other than submit data to an online backend, well, the XHTML form can be stored locally, but that’s basically it; connectivity is required, otherwise there wouldn’t be a way to submit anything, so there’s no need for a service worker. It’s still possible to add the web app to the home screen manually via the menu, and that will make use of the settings provided in the manifest, so that’s good enough for me. I work with refugees and want to establish communication with them. Usually they don’t have computers or notebooks, but they have phones of course, and as I don’t have a phone and refuse to use proprietary messenger apps, I can now link them up easily to my websites, so they can request help conveniently out of something that looks and feels like any other of their apps.
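For the cases where a service worker does make sense, a minimal sketch of the “old data locally, new data over the wire” idea could look like this (the cache name and file list are made up; the install/fetch events and cache calls are the standard service worker APIs):

```js
// service-worker.js: hypothetical file names, standard install/fetch events.
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/index.xhtml', '/style.css', '/app.js', '/icon-192.png'];

self.addEventListener('install', (event) => {
  // Put the application shell into the cache once, at installation time.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener('fetch', (event) => {
  // Anything already cached (the "old data") is answered locally,
  // everything else (the "new data") still goes over the wire.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

The page itself would register it once with navigator.serviceWorker.register("/service-worker.js").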

So that’s me typing the URL of my website into a refugee’s phone, but ideally, and for augmented reality, I hope that QR code recognition will become a built-in feature of phone cameras (it’s not difficult, there are many good libraries to integrate) and not a separate app most people don’t install, because then I would just scan the QR code from a sticker, plastic card or poster on the wall, an install notification/banner would pop up automatically, and everything would work out of the box, frictionlessly.

For augmented reality, I could imagine stickers in public places with a QR code on them that contains a unique ID, so by scanning it, interested pedestrians would be sent to whatever website or web app was set up for this location, or a pre-installed web app that uses geolocation (yes, that’s in the web stack!) would do the same thing if the current position is within a certain area. A specific application, a particular offer, could be behind it, or a common infrastructure/protocol/service which could provide generic interfaces for ID-/location-based requests, and content/application providers could register for an ID/location, so the user would get several options for the place. Please note that this infrastructure should be white-label, self-hostable, freely/libre licensed software, and offer a way to tie different systems together if the user wishes. There could be a central, neutral registry/catalogue for users to search for offers and then import/configure a filter, so only the options of a certain kind would pop up, or clients and servers could do some kind of “content” negotiation, so the client would tell the server what data is requested, and local web apps would make sense of it. The typical scenario I have in mind would be text/audio/video city guides, maybe in different languages, maybe from different perspectives (same place, but one perspective is history, another is statistical information, another is upcoming events and special shop offers), so a lot of content creators could “register”/attach their content to the location, and the user might pick according to personal preference or established brand, ideally drawing from freely/libre licensed content/software libraries like the Wikipedia. As the data glasses unfortunately were discontinued, this can be done with mobile devices as cheap, poor man’s AR, and I don’t see why it should unnecessarily be made more complex with 3D projections/overlays where it doesn’t need to be.
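As a rough sketch of the client side of such a location lookup, using the standard geolocation API; the /nearby endpoint and the offer fields are made up here, purely to illustrate the idea of a generic, self-hostable registry interface:

    // Ask the browser for the current position (the user has to consent).
    navigator.geolocation.getCurrentPosition((position) => {
      const { latitude, longitude } = position.coords;
      // Query a (hypothetical) self-hosted registry for offers around this point.
      fetch(`/nearby?lat=${latitude}&lon=${longitude}&radius=100`)
        .then((response) => response.json())
        .then((offers) => {
          // Each offer could carry a type (history, events, shop) for client-side filtering.
          offers.forEach((offer) => console.log(offer.title, offer.url));
        });
    }, (error) => console.warn("Geolocation denied or unavailable:", error.message));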

And let’s never forget that all this also works on the local computer, where the current position doesn’t need to come from a GPS receiver but could just as well come from a map. So I’m very interested in building the public, open, libre-licensed infrastructure for it, as well as a QR code sticker PDF generator or printing service that would automatically coordinate locations + IDs with the database. The advantages are that there’s no need for the Google Play Store any more, where a developer has to pay just to have an account and where a single company controls the entire software distribution except for sideloading.

There’s one more thing: with a File API, it would be possible to make web apps act like native desktop applications and read/write files from/to local storage, which is crucial to make the web stack an application programming stack for the desktop computer. The same code could operate on online resources, local resources or both intermixed, and the “setup” could either be a zip file download with the XHTML+JS code in it or simply browsing a URL. Ideally, the latter would bring up an installation request based on the manifest file for permanently storing the web app outside of the browser cache. If all this were standardized and rolled out in browsers, we would arrive at an incredible new software world of online/offline, desktop/mobile and physical/virtual interoperability and convergence. Unfortunately, the W3C betrayed the public again (just as they did by abandoning the semantic web, making HTML5 a mess for everything that’s not a full-blown browser engine, including DRM in HTML5, etc., in favor of a few big browser vendors) and discontinued the API. It’s harder to run a web app locally beyond the web storage and to interact with other local native desktop applications without depending on a web server (I have workarounds in node.js and Java, but both require explicit installation and startup). I don’t see why local file access shouldn’t be anticipated, because if browsers implement such a capability to move more into the PWA direction, there had better be a standard on how to do it instead of each of them coming up with their own incompatible, non-standardized way of doing it.
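In the meantime, here is a hedged sketch of what is possible inside today’s standards, assuming an <input type="file" id="open"> element on the page: reading works via the long-standing File API, while “writing” can only be approximated by generating a download that the user has to confirm and place manually, which is exactly the limitation described above:

    // Read: let the user pick a local file and load its text content.
    document.querySelector("#open").addEventListener("change", (event) => {
      const reader = new FileReader();
      reader.onload = () => console.log(reader.result);
      reader.readAsText(event.target.files[0]);
    });

    // "Write": offer the data as a download instead of saving to an arbitrary local path.
    function saveTextAs(text, filename) {
      const link = document.createElement("a");
      link.href = URL.createObjectURL(new Blob([text], { type: "text/plain" }));
      link.download = filename;
      link.click();
      URL.revokeObjectURL(link.href);
    }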

Video Platform Crisis

Five years ago, I experimented with screen capturing under OpenSUSE with a Java applet recording software that randomly crashed in this setup but still left me with what was recorded up to that point, so I uploaded the resulting videos to YouTube. Back then, each video was required to have no more than 15 minutes of playtime and there wasn’t a way to request the unlocking of longer or even “unlimited” playtime. YouTube decided who was eligible for uploading longer videos in a nontransparent fashion, so people came up with wild theories about what could be required. I wasn’t affected much, as most of my videos were shorter (partly due to the crash behavior described above), and after some time this limitation miraculously was removed from my channel. Later, YouTube dropped that policy completely.

Next, the first major YouTube design update, at least as far as I remember, “happened”. It removed the ability to customize the channel’s background in order to make the site more mobile-friendly and only left the header banner to fiddle with. People were pretty upset, as quite some effort had been put into those backgrounds, and from today’s perspective it’s difficult to comprehend why a responsive design should not be able to support some kind of dynamic background. The old layout allowed channel owners to convey context, charm and personality, but users eventually compromised with the new, cold look. My channel wasn’t affected much as I didn’t use a custom channel background, but in principle I didn’t like the downgrade as a user of the platform.

Then there was a long period of mediocrity. Good and innovative features were added, like the online video editor, livestreaming and the option to submit and therefore collaborate on closed captions/translations (albeit channel owners are allowed to disable it and it’s not enabled by default), as well as features which are a required minimum, like the removal of the character limit on comments or links in comments. Other good features, like video answers and the chronological and tree-hierarchical display of comments, were removed for bogus reasons, while other features are still missing, like the ability to search comments, export comments or create ad-hoc playlists. The forced Google+ integration as a vain attempt to imitate Facebook was annoying, but also easy to ignore.

In 2015, video platform disaster struck. A big YouTuber complained about all the negativity in his comment section as people were spamming it with insults etc., so Google took action against potential haters. The “solution” was to automatically flag suspicious comments and not display them to any other visitor except the potential spammer himself while logged in, so he wouldn’t notice that he was writing his comments into the void, presumably continuing such activity. After some time with nobody seeing his messages and therefore not responding, he would eventually get demotivated and stop. The problem with this approach is that the algorithm never worked. My comments were repeatedly “ghost banned” as I usually write long, thoughtful commentary, adjusting the text in short succession in order to correct typographic or spelling errors or to enhance it a little more, or because I added a link after the initial save. If I invest valuable lifetime to state my opinion and cannot be sure that my texts actually get published, there’s no reason to continue doing so, especially as there’s the risk of never getting replies from people I would be interested in talking to. For a publishing platform, such conduct, tricking people into believing that their contributions matter, is a no-go and not acceptable. This is why I abandoned YouTube and have only (ab)used it for SEO since then.

So Vimeo became the new home for my videos. It’s much cleaner and had two nice features I liked very much: they allowed videos to be marked as being licensed under CC BY-SA (instead of only CC BY as on YouTube), even though this information is well hidden from the viewer. The other feature is that they provided a download button (albeit channel owners can disable it). Being a software developer, I don’t believe marketing claims that streaming a video is different from downloading it. Technically it’s the exact same process, and while a stream replay might be cancelled earlier and therefore may consume less bandwidth, a full download allows offline replay and could save much more bandwidth after all. Just think about all the music videos that get streamed over and over again for no other reason than people not having a local copy of them for an offline playlist. For me, the download button is relevant because I want my freely/libre licensed videos to be shared everywhere and archived by multiple parties. The computer has to retrieve the data anyway in order to play it, so it doesn’t matter if this happens via an explicit full download or automatically in the background by the browser, which internally saves the received data to disk. Vimeo recently decided to remove this convenience feature for all viewers if the channel owner doesn’t pay a subscription fee. As I regard a download as a view, Vimeo as a publishing platform downgraded its publishing, and that’s another unacceptable no-go.

Furthermore, I guess that my complaining about the downgrade to Vimeo support made them look into my channel and suspect it to be a commercial one. Technically I registered a company at the beginning of the year, and yes, there was one video linking to a shop seemingly advertising a product, but the shop is not an active one and I never sold a single item there or anywhere else. While I’m not necessarily a non-profit, it’s an attempt to build a social enterprise in its early, non-operational stages. I’m fine with paying for the hosting, but I expect the price to be tied to the actual service and not to the arbitrary removal of software features, especially if extortionary practices are used against me and my viewers. Additionally, Vimeo took my original video source files hostage and deleted all of them without providing a way to object to their conclusion. They deleted my entire account including all comments and follows, a big-time fail in publishing. That’s why I abandoned Vimeo.

I briefly tried Twitch, but uploaded videos don’t get a lot of attention there as the main focus of the site remains on live-streaming. Joining made sense because I’m planning to stream more, but then I discovered that they run ads for the German military (Bundeswehr) before videos and streams, something I’m totally opposed to as a Christian believer and conscientious objector by both conviction and approval, especially after I developed Anabaptist views. I don’t mind if Twitch promotes violence and destruction, which is none of my business and therefore easy to dismiss, but I never want to contribute to the endorsement of the military or their recruiting, so it’s basically the YouTube adpocalypse the other way around.

After that, I moved to Vidme. I liked that they attempted to educate and familiarize viewers with the concept of paying for the production if there’s demand, but I wondered if Vidme would be able to pull it off. The site had a “verification” process with the goal of determining whether an uploader is a “content creator”, which is strange because they demanded rather arbitrary metrics: one needed 50 subscribers but was hardly able to attract any, as there were limits on what could be uploaded as long as one wasn’t “verified” as a creator yet. In my opinion, they put these requirements into place to restrict the amount and size of uploads to YouTubers with a huge audience who switched to Vidme in response to the adpocalypse, and to deter uploaders with a small audience and a huge content collection. The latter only cost money for hosting, the former are supposed to earn the site operator some income. I was already suspicious whether such a policy would really have been a serious business need if their endgame hadn’t been to be bought up by you know whom. Now, I have videos that are small in size as they’re just screencasts and yet run more than 30 minutes, and without the status of being verified, I was prevented from uploading those, while the very existence of those videos proves that I am actually a content creator. I applied for “verification” but they declined the request, so obviously my presence was not appreciated on their site, so I set all my videos to private. Soon thereafter, they announced that they’re shutting down.

A few notes on other, minor video platforms: there’s Dailymotion, but they don’t have users on the site and lack search engine visibility. My impression is that they’re not in the game of building communities around user-generated content but around traditional TV productions (remember Clipfish, Sevenload, MyVideo?). Also, the translation of their site into German needs some work. Discoverability is poor as results are polluted by spam video posts advertising PDF downloads and nobody flags them as abuse, as they’re still lacking the option to do so. Then there are their hidden requirements, which aren’t stated on the site: 2 GB maximum per video, 60 minutes maximum per video, 96 videos per day across all your accounts and a maximum of 2 hours of total video duration per day across all your accounts. There’s Veoh, but their website is terrible and they’re still relying on Adobe Flash instead of HTML5. There’s Myspace accepting video uploads, but they’re not positioned and perceived as a video site. There are new efforts like lbry.io, BitChute and D.Tube that try to promote decentralized peer-to-peer hosting. I’m unable to register on D.Tube because I don’t have a mobile phone, so I can’t receive the text message that’s supposed to prevent spam bots from creating an account. At the moment, I upload most of my videos to BitChute manually as it doesn’t seem to provide an API for automatic upload yet. Unfortunately, developing the Twitch Video Uploader (a fairly technical component automating the uploads via their v5 API) was a waste of time, but still, I could imagine extending the effort into a fully-fledged video management suite supporting video upload to many sites.

Which leads to new conceptual thinking about online video hosting. Learning from my earlier mistakes and experiences, I don’t see why the actual video files need to be tied to a specific player and surrounding website any more, leading to demands by and dependence on whoever is operating the service. In a better world, the video data could be stored anywhere, be it on your own rented webspace, a gratis/paid file hosting provider like RapidShare, Dropbox or Megaupload, peer-to-peer/blockchain hosting, traditional video websites or storage offers by institutions or groups curating certain types of material. The software to retrieve, play, extend and accompany the video stream would be little more than an overlay, pulling the video data from many different sources, some of which might even be external ones not under the control of the website operator. Such a solution could feel like what’s relatively common in the enterprise world, where companies deliver product presentations, training courses and corporate communication via SaaS partners or self-hosting. A standardized, open protocol could report new publications to all kinds of independent directories, search engines and data projects (or have them pull the data after the instance is registered automatically or manually), some offering general and good discoverability, others serving niches and special purposes, all of them highly customizable, as you could start your own on top of common white-label video hosting infrastructure. You could imagine it as the MediaWiki (without stupid Wikitext as an unparsable data model) or WordPress software packages embedding video and optionally making it the center of an article or blog post, but in an even more interoperable way and with more software applications on top of them. The goal would be to integrate video hosting and playback into modern hypertext for everyone as promoted by the Heralds of Resource Sharing, Douglas Engelbart (from 37:20), Ted Nelson and David Gelernter. With the advent of information theory, XML, the semantic web and ReST, it’s about time to improve and liberate publishing in networked environments.
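Purely as a hypothetical illustration of what such a reporting protocol could look like (every field name and URL below is made up, not an existing standard), a hosting instance could announce a new publication to any number of independent directories with a small, self-describing payload:

    // Hypothetical "new publication" ping, sent to each directory the instance knows about.
    const announcement = {
      title: "My new screencast",
      license: "CC BY-SA 4.0",
      published: "2018-05-26T12:00:00Z",
      sources: [
        { url: "https://example.org/videos/screencast.webm", type: "video/webm" }
      ],
      player: "https://example.org/watch/screencast"
    };

    ["https://directory.example/api/announce"].forEach((directory) => {
      fetch(directory, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(announcement)
      });
    });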

Update: Sebastian Brosch works on PicVid, which focuses on picture hosting at the moment but considers video hosting as a potential second target. One downside is that the project is licensed under the GPLv3 where it should be AGPLv3 or any later version. Online software doesn’t lead to the distribution of the software, only to the transmission of the generated output, so the user wouldn’t be in control of his own computing. The AGPL, in contrast to the GPL, requires software that’s accessible over a network to provide a way to obtain the source code. That way, the user gets the option to either use the online service as provided by the trusted or untrusted operator, or to set up the system on his own computer and use it there, or to task a trusted entity with the execution.

Update: Jeff Becker works on gitgud.tv sometimes. Seems to be related to gitgud.io.

Update: I just found a community of video creators who produce short clips that speak a language of their own. Their uploads are way off-mainstream, often silly, highly incestuous (them talking about themselves, “drama”), sometimes trolling, meme-related, trashy, usually quite creative and always deliberately amateurish, which in part contributes to their charm. There are plenty of sites that cater to this audience, most notably VidLii. VidLii requires videos to be no longer than 20 minutes and no larger than 2 gigabytes in size, and it won’t take more than 8 uploads per day. With those limitations, VidLii can hardly be considered an alternative to “everything goes” video hosters, but as far as I’m concerned, I would just develop an automated uploader that splits the source material into parts of 20 minutes playtime, appends the current part number to the title and doesn’t send more than 8 parts in one go, especially as the individual pieces could be connected together in a playlist. It’s no coincidence that sites like VidLii recreate the early YouTube layout from around 2008 – not only for nostalgia reasons, but also because in the absence of movie studio productions and advertising money, community engagement was more important than the passive consumption by the masses. The old YouTube felt way more personal, at least in retrospective perception. CraftingLord21 reports on this community and the piece about Doppler gives some more insights.
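A sketch of just the splitting step of such an uploader, assuming ffmpeg is installed locally and invoked from node.js; the actual VidLii upload is left out, as I’m not aware of a public API for it, and the file names are placeholders:

    // split.js: cut a source video into 20-minute parts without re-encoding.
    const { execFileSync } = require("child_process");

    function splitIntoParts(input, prefix) {
      execFileSync("ffmpeg", [
        "-i", input,
        "-c", "copy",            // copy the streams, no quality loss
        "-map", "0",
        "-f", "segment",
        "-segment_time", "1200", // 20 minutes per part (cuts land on keyframes when copying)
        "-reset_timestamps", "1",
        `${prefix}_part_%02d.mp4`
      ]);
    }

    splitIntoParts("screencast.mp4", "screencast");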

Here’s a list of similar sites: MetaJolt, Upload Stars, ClipMoon and loltube. But be warned: not all of them managed to avoid spam bots, while on the other hand they’re at least not filled with questionable content as, for example, BitChute and PewTube appear to be. Furthermore, keep in mind that those sites might go down at any time or were never serious to begin with (fake sites that make fun of the other ones), so a time investment in manually uploading or commenting there might just as well go down the drain as it might on the big hosting providers. And make sure that you use a new e-mail account and a randomly generated password if you should ever consider trying them out.

Update: It’s no surprise that people aren’t satisfied with general-purpose functionality around video hosting, so they come up with sites specialized for particular niche audiences. One of the many opportunities is education, where you find TeacherTube (behind an adblocker-blocker) or Viduals (language switcher in the upper right corner). With the latter, users can crowdfund the production of a video that answers a certain question for them. Video creators could see the 80% of the pledged money they’ll receive as an incentive to put their effort into one of the requested projects for which there is actually proven demand, instead of randomly guessing and competing with clickbait SEO. Eventually, experts in their field could make far more specific content than the broad presentations that try to address almost everybody. Think of StackOverflow plus video plus income as an incentive to do it, as time is always very limited, but money indeed can buy some quality time. What about other niches? I already mentioned the “MediaWiki” around video, but what about journalism/debate, product manuals or mapping, to just list some of the most obvious?

Update: The number of non-western YouTube alternatives is increasing too, with some of them translating their interfaces into English. Tune.pk looks pretty decent, even if one cannot upload more than 15 videos in total, none longer than 60 minutes of playtime, none larger than 1 gigabyte in size.

Update: Similar to Vimeo and Vidme, Rumble tries to establish a business model for online video other than advertising. Sure, in the digital age, the attempt to maintain artificial scarcity with exclusive licensing in order to imitate print-era business models that don’t work any more doesn’t make a lot of sense, but as Rumble accepts non-exclusive uploads, it’s at least gratis video hosting. Unfortunately I was unable to open the info box on the upload page in order to read the details of the non-exclusive agreement, so please make sure that you check it if you should consider uploading. Also, Rumble is notorious for rejecting some uploads or giving them a copyright strike for no reason, and although support helps out with that, it turns out that they simply “forget” to reset the strike count after clearing the claim, which is why I abandoned that site as well.

Update: On 2018-05-26, I wanted to comment on a video on YouTube and, from experience, wrote the comment in a text editor on my local machine so I wouldn’t lose the text again by accidentally navigating away from the site (without recovery or draft saving being a feature provided by Google/YouTube) or because of a session timeout. Then I wanted to copy&paste, which worked previously, but it seems that they devoted their technicians to preventing copy&paste for comments instead of doing some work that’s actually useful. Now, I know of course that I’m in control of the client and can try to get my text in with F12/Firebug, but it turns out that what’s entered into the textarea isn’t what gets submitted, and the text that’s displayed is just a passive rendering, so it could be that they’re recording the keystrokes for the input, leaving no way for copy&paste. I’m not going to retype my text; I’d rather abandon the usage of my account on this site entirely. Furthermore, with the particular channel I tried to comment on, I have the impression that either the owner is deleting my comments anyway, or the stupid Google/YouTube algorithm is removing or hiding them (completely, not even comment ghosting where I could at least still see them while logged in), or another user is abusively flagging them as spam to get them removed, and I have no way of learning what happened there or of getting my text back, it’s just gone and any further conversation prevented. So now this last piece of usefulness was also taken away, leaving nothing but a huge pile of videos to be exported in order to be moved to a much better place.

Update: BitView finally got support for HTTPS and can now be considered for uploads not longer than 40 minutes, although the video quality will suffer seriously. Don’t forget to turn on HTML5 first, as this isn’t the default for some strange reason. BitView is made and operated by Jan, who also runs VidLii.

Vanillo is in beta now, looks pretty clean and works pretty decently. I don’t want to say much about the features yet as things are still subject to change. Let’s hope that everything goes according to plan for those guys, so the site will launch in a mature state that’s well worth the wait!

Project Ideas

Here are a few project ideas I would like to work on if only I had the time. As long as I’m not able to start them, you could interpret those suggestions as lazyweb requests, so you should feel invited to execute them, but please make sure that your results are freely/libre licensed, otherwise your work would be of no help.

  • A reminder web app with push notifications that works locally on the phone and only occasionally imports new events/appointments when an Internet connection is established. Many refugees fail to attend their appointments as they have difficulty observing a weekly schedule. The web part needs to be self-hostable.
  • A self-hostable messenger implemented as a web app. It could also interface with e-mail and display mails as if they were normal messenger messages. Many refugees have smartphones, but without relying on telephony, Facebook or WhatsApp, it can be difficult to get in touch with them. Often they’re not used to e-mail, which is more common among German people, companies and government offices. The web app would also work on open WLAN.
  • A new TODO/task manager: when a new TODO/task is added, it appears at the very top of the list, so the user can “vote” its priority up or down in order to make entries more or less important. If the user finds time to work on one TODO/task, he simply needs to start with the topmost one and stick to its completion. The tool should support multiple TODO/task lists, so entries can be organized by topic/subject.
  • A “progressive web app” app store that signs/verifies the submissions, catalogues and distributes them via “add to homescreen”, URL bookmarking, full download or other techniques. The idea is to create something similar to the distro repositories known in the free/libre software world and f-droid, demanding that those apps come with their code to avoid the SaaSS loophole. Features like user reviews or obligatory/voluntary payment to the developers could be considered, as well as crawling the web to discover existing web apps.
  • E-Mail to EPUB. E-Mail comes as .eml files in IMF format. I already have a few components and am currently trying to write a parser for IMF or find an existing no-/low-dependency one (a sketch of the header parsing follows after this list). In the future, that project could evolve into a semantic e-mail client (better than putting broken HTML into the message body) that retrieves data from and sends data to the server.
  • In Germany, the “Störerhaftung” finally got abandoned, so Augmented Reality and the Internet of Things get a late chance here too. Before, by sharing my flatrate-paid Internet access with neighbors and passersby, the law would have considered me a contributor/facilitator if a user committed an illegal action over my wire (basically, as investigators can never find out who’s at the end of a line, they cheaply just wanted to punish whoever happened to rent/own the line at that time or demand that users of the network be identified). In cities, there are a lot of private WLAN access points connected to the Internet, and almost all of them are locked down because of the liability, leaving Germany with pretty bad connectivity. With such legal risks removed, I can assume more and more gratis Internet access out in public, so I would like to develop an app where people can join a group and, via GPS, the app would display in which direction and at what distance members of the group are located. That way, if I’m in an unfamiliar city together with friends, family or colleagues, it’s easier not to lose each other without the need for phone calls or looking at an online/offline map. Such a solution needs to be self-hostable to mitigate privacy concerns, and enabling geolocation always needs to be optional. Deniability could come from a lack of connectivity or the claim of staying inside a building, or from the general decision of not using the app, joining groups but not enabling geolocation, or faking the location via a modified client. The app is not trying to solve a social problem, it’s supposed to provide the capability if all members of the group agree to use it. “App” is intended for AR glasses, alternatively the smartphone/tablet, alternatively a notebook.
  • Develop a library, self-hosted server, generic client and a directory of services for Augmented Reality that allows content to be delivered depending on the location of the user. That could be text (special offers by department stores or an event schedule), audio (listen to a history guide while walking through the city, maybe even offered in many languages) or just notes left on places for yourself or members of the family or group (which groceries to buy where, where the door key is hidden, long lists of inventory locations).
  • In my browser, I have lots and lots of unstructured bookmarks. If I switch to one of my other machines, I can’t access them. It would be nice to have a synchronization feature, which could be extended to check if the links are still live or dead. In case of the latter, the tool could attempt to change the bookmark to the historical version of the site as archived by the Wayback Machine of the Internet Archive, referencing the most recent snapshot relative to the time of the bookmarking, hopefully avoiding the need to pro-actively archive what’s being bookmarked. On the other hand, a bookmark could also be an (optional?) straight save-to-disk, while the URL + other metadata (timestamp) is maintained for future reference. The tool could do additional things like providing an auto-complete tagging mechanism, a simple comment function for describing why this bookmark was set, or a “private/personal” category/flag for links which shouldn’t be exported to a public link collection.
  • An XML interface for PoDoFo. PoDoFo is a new library to create PDF files with C++. With an XML interface, PoDoFo would become accessible for other programming languages and would avoid the need to hardcode PoDoFo document creation in C++, so PoDoFo could be used as another PDF generator backend in automated workflows. The XML reading could be provided by the dependency-less StAX library in C++ that was developed for this purpose.
  • Subscribe to video playlists. Many video sites have the concept of a “channel”, and as they fail to provide their users with a way to manage several channels at once (combining messages and analytics from different channels into a single interface), uploaders tend to organize their videos in playlists, usually by topic. As humans are pretty diverse creatures and do many different things at the same time, I might not be interested in everything a channel/uploader is promoting; I would rather subscribe to a particular playlist/topic instead of to the channel as a whole. This notion could of course be generalized: why not subscribe to certain dates, locations, languages, categories, tags, whatever?
  • Ad-hoc video playlists. Video sites allow their users to create playlists, but only if they have an account and are logged in. Sometimes you just want to create a playlist of videos that are not necessarily your own without being logged in, for instance to play some of your favorite music in random order. If such external playlists should not be public, access could be made more difficult by requiring a password first or by only making them available under a non-guessable URL. Embedding the videos shouldn’t be too hard, but continuing with the next one probably requires some JavaScript fiddling. The tool should support many video sites as sources via embedding.
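Regarding the E-Mail to EPUB idea above, here is a minimal sketch of the IMF (RFC 5322) header parsing, mainly to illustrate the unfolding of continuation lines; a real parser would also have to handle encoded words, MIME parts and repeated fields, and the file name is just an example:

    // Parse the header block of an .eml file (everything before the first empty line).
    function parseImfHeaders(eml) {
      const headerBlock = eml.split(/\r?\n\r?\n/)[0];
      // Unfold: continuation lines start with whitespace and belong to the previous field.
      const unfolded = headerBlock.replace(/\r?\n[ \t]+/g, " ");
      const headers = {};
      unfolded.split(/\r?\n/).forEach((line) => {
        const colon = line.indexOf(":");
        if (colon > 0) {
          const name = line.slice(0, colon).trim().toLowerCase();
          // Note: repeated fields (e.g. Received) overwrite each other in this sketch.
          headers[name] = line.slice(colon + 1).trim();
        }
      });
      return headers;
    }

    // Example: parseImfHeaders(require("fs").readFileSync("message.eml", "utf8")).subject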

If you’re not going to implement those ideas yourself, please at least let me know what you think of them or if you have suggestions or know software that already exists. If you want or need such tools too, you could try to motivate me to do a particular project first. Maybe we could even work together in one way or another!