Web Apps Revisited (PWA) + Geolocation for Augmented Reality + Local File I/O for the Web Stack on the Desktop

As I’m very interested in developing augmented reality applications, I looked again at Android app development. Some time ago, I was able to build an APK with the Android SDK + Eclipse and install it on a tablet, but after the switch to the IntelliJ-based Android Studio as development environment, it appears to be very hard, if not impossible, for me to even experiment with this technology. It’s also a highly proprietary ecosystem and therefore evil; don’t let yourself get fooled by some mention of GNU/Linux and “Open Source”. Therefore I looked again at web apps, and the approach has changed quite a bit recently. The driving force is the realization by Google that people don’t install new apps any more, while such apps are pretty expensive to build and have a bad conversion rate. It turns out that users spend most of their time in just a few of their favorite apps. As an app developer usually wants to provide the same functionality on the web, he needs to work with at least two different technology stacks and make them look similar to the user. So why not build the application in web technology and have the browser interface with the underlying operating system and its components? There are new mechanisms that help with just that.

One mechanism of the “progressive web app” (“PWA” for short) bundle is called “add to home screen” (“A2HS” for short). Google Chrome already has an entry in the menu for it, which adds a “shortcut” icon onto the home screen for the URL currently viewed. Now developers get better control over it, as there’s a technical recommendation by the W3C (description in the MDN). You just have to link a small manifest JSON file in your XHTML header that contains a few hints about how a browser might add the web app to the “installed software” on mobile devices or desktop PCs. Most important are a proper short name, the icon in several sizes, the display mode (the browser might hide all its controls and show the web app full-screen, so it looks like a native app) and the background color for the app window area during load time. The manifest file gives browser implementers the chance to recognize the website as a mobile app, and depending on metrics set by the user, there could be a notification that the current website can install itself as an app.
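
A minimal manifest might look like the following sketch; all names, paths and colors are placeholder assumptions. It gets linked in the document head via <link rel="manifest" href="manifest.json"/>.

    {
      "short_name": "MyApp",
      "name": "My Example Web App",
      "icons": [
        { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" },
        { "src": "icon-512.png", "sizes": "512x512", "type": "image/png" }
      ],
      "start_url": "index.xhtml",
      "display": "standalone",
      "background_color": "#ffffff"
    }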

Even with Android being proprietary, it’s probably the phone/tablet system to target as far as the free/libre software world is concerned. They have a description about what they care about in the manifest, as well as a validator and an option to analyze the procedure in Google Chrome. If Chrome detects a manifest on a web page, it might bring up a banner after some time and ask the user if he wants to add the app to the home screen. Unlike decent browsers, evil Chrome decides for the user if/when to bring up the banner, and I’m not aware of any way for the user to set a policy regarding its appearance. In my opinion, the browser client should be the agent of the user, not of an online service provider.

Also, Google requires a service worker JS file for the banner to show up. The service worker is very important for many web apps: native apps are permanently installed on the device, so code and assets are there locally and no download needs to occur for using the app; connectivity isn’t necessarily required. With web apps, that can be different. True, they can rely on browser caching, but as permanent local installation of the XHTML, CSS, JavaScript, images and other data doesn’t seem to be a thing yet, in places with bad or no connectivity there shouldn’t be the need to retrieve resources from the net if the app logic could work perfectly fine locally, if only the data were already present. Even if some new data is supposed to be retrieved every time the web app is opened again, old data (articles, messages, contacts) can already be presented until the new data arrives. The service worker decides which requests for old data can be satisfied from local storage (and redirects them there) and which requests need to go over the wire. But there can be very legitimate cases where a service worker makes absolutely no sense. If the app does nothing else than submit data to an online backend, well, the XHTML form can be stored locally, but that’s basically it. Connectivity is required, otherwise there wouldn’t be a way to submit anything, so there’s no need for a service worker. It’s still possible to add the web app to the home screen manually via the menu, and that will make use of the settings provided in the manifest, so that’s good enough for me.

I work with refugees and want to establish communication with them. Usually they don’t have computers or notebooks, but of course they have phones, and as I don’t have a phone and refuse to use proprietary messenger apps, I now can link them up easily to my websites, so they can request help conveniently out of something that looks and feels like any other of their apps.
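
For illustration, a minimal cache-first service worker could look like this sketch; the cache name and asset list are placeholder assumptions, and the page would register it with navigator.serviceWorker.register("sw.js").

    // sw.js – minimal cache-first service worker sketch.
    const CACHE = "app-cache-v1";
    const ASSETS = ["index.xhtml", "style.css", "app.js", "icon-192.png"];

    // On install, store the app shell locally.
    self.addEventListener("install", (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    // On fetch, answer from the cache if possible, otherwise go over the wire.
    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });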

So that’s me typing in the URL of my website on the phone of a refugee, but ideally, and for augmented reality, I hope that QR code recognition will become a built-in feature of phone cameras (it’s not difficult, there are many good libraries to integrate) and not a separate app most people don’t install, because then I would just scan the QR code from a sticker, plastic card or poster on the wall, an install notification/banner would pop up automatically, and everything would work out of the box, without friction.

For augmented reality, I could imagine stickers in public places with a QR code on them that contains a unique ID, so by scanning it, interested pedestrians would be sent to whatever website or web app was set up for this location, or a pre-installed web app that uses geolocation (yes, that’s in the web stack!) would do the same thing if the current position is within a certain area. A specific application, a particular offer, could be behind it, or a common infrastructure/protocol/service which could provide generic interfaces for ID-/location-based requests, and content/application providers could register for an ID/location, so the user would get several options for the place. Please note that this infrastructure should be white-label, self-hostable, freely/libre licensed software, and a way to tie different systems together if the user wishes. There could be a central, neutral registry/catalogue for users to search for offers and then import/configure a filter, so only the options of a certain kind would pop up, or clients and servers could do some kind of “content” negotiation, so the client would tell the server what data is requested, and local web apps would make sense out of it. The typical scenario I have in mind would be text/audio/video city guides, maybe in different languages, maybe from different perspectives (same place, but one perspective is history, another is statistical information, another is upcoming events and special shop offers), so a lot of content creators could “register”/attach their content to the location, and the user might pick according to personal preference or established brand, ideally drawing from freely/libre licensed content/software libraries like Wikipedia. As the data glasses unfortunately were discontinued, this can be done with mobile devices as cheap, poor man’s AR, and I don’t see why it should unnecessarily be made more complex with 3D projections/overlays where it doesn’t need to be.
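
The geolocation half of such a client is already in the web stack today; a small sketch, in which the registry endpoint and its parameters are entirely hypothetical:

    // Ask the browser for the current position (requires user permission),
    // then query a hypothetical self-hosted registry for offers nearby.
    navigator.geolocation.getCurrentPosition(async (pos) => {
      const { latitude, longitude } = pos.coords;
      const response = await fetch("https://registry.example.org/offers" +
        "?lat=" + latitude + "&lon=" + longitude + "&radius=50");
      const offers = await response.json();
      // Present the offers registered for this place to the user.
      for (const offer of offers) console.log(offer.title, offer.url);
    }, (err) => console.error("Geolocation failed or was denied:", err));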

And let’s never forget that all this also works on the local computer, where the current position doesn’t need to come from a GPS receiver but could just as well come from a map. So I’m very interested in building the public, open, libre-licensed infrastructure for it, as well as a QR code sticker PDF generator or printing service that would automatically coordinate locations + IDs with the database. The advantage is that there’s no need for the Google Play Store any more, where a developer has to pay just to have an account and where a single company controls the entire software distribution except for sideloading.

There’s one more thing: with a File API, it would be possible to make web apps act like native desktop applications and read/write files from/to local storage, which is crucial to make the web stack an application programming stack for the desktop computer. The same code could operate on online resources, local resources or both intermixed, and the “setup” could either be a zip file download with the XHTML+JS code in it or simply browsing a URL. Ideally, the latter would bring up an installation request based on the manifest file for permanently storing the web app outside of the browser cache. If all this were standardized and rolled out in browsers, we would arrive at an incredible new software world of online/offline, desktop/mobile and physical/virtual interoperability and convergence. Unfortunately, the W3C betrayed the public again (just as they did by abandoning the semantic web, making HTML5 a mess for everything that’s not a full-blown browser engine, including DRM in HTML5, etc., in favor of a few big browser vendors) and discontinued the API. It’s hard to run a web app locally beyond web storage and to interact with other local native desktop applications without depending on a web server (I have workarounds in node.js and Java, but both require explicit installation and startup). I don’t see why local file access shouldn’t be anticipated: if browsers implement such a capability to move further in the PWA direction, there had better be a standard for how to do it, instead of each of them coming up with their own incompatible, non-standardized way.
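
What does work today is reading files the user explicitly selects; a minimal sketch with the standard File API, assuming an <input type="file" id="picker"/> element in the page:

    // Read a user-selected file with the File API. Writing the result back
    // to an arbitrary local path is exactly the part that lacks a standard.
    document.getElementById("picker").addEventListener("change", (event) => {
      const file = event.target.files[0];
      const reader = new FileReader();
      reader.onload = () => console.log(file.name, reader.result);
      reader.readAsText(file); // or readAsArrayBuffer() for binary data
    });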

Video Platform Crisis

Five years ago, I experimented with screen capturing under openSUSE with a Java applet recording software that randomly crashed in this setup but still left me with what was recorded up to that point, so I uploaded the resulting videos to YouTube. Back then, each video was required to have no more than 15 minutes of playtime, and there wasn’t a way to request the unlocking of longer or even “unlimited” playtime. YouTube decided who was eligible for uploading longer videos in an opaque fashion, so people came up with wild theories about what could be required. I wasn’t affected much, as most of my videos were shorter (partly due to the crash behavior described above), and after some time this limitation was miraculously removed from my channel. Later, YouTube dropped that policy completely.

Next, the first major YouTube design update, at least as far as I remember, “happened”. It removed the ability to customize the channel’s background in order to make the site more mobile-friendly and only left the header banner to fiddle with. People were pretty upset, as quite some effort had been put into those backgrounds, and from today’s perspective it’s difficult to comprehend why a responsive design shouldn’t be able to support some kind of dynamic background. The old layout allowed channel owners to convey context, charm and personality, but users eventually made their peace with the new, cold look. My channel wasn’t affected much as I didn’t use a custom channel background, but in principle I didn’t like the downgrade as a user of the platform.

Then there was a long period of mediocrity. Good and innovative features were added, like the online video editor, livestreaming and the option to submit and therefore collaborate on closed captions/translations (although channel owners are allowed to disable it and it’s not enabled by default), as well as features which are a required minimum, like the removal of the character limit on comments or links in comments. Other good features were removed for bogus reasons, like video answers and the chronological and tree-hierarchical display of comments, while yet other features are still missing, like the ability to search for comments, export comments or create ad-hoc playlists. The forced Google+ integration as a vain attempt to imitate Facebook was annoying, but also easy to ignore.

In 2015, video platform disaster struck. A big YouTuber complained about all the negativity in his comment section as people were spamming it with insults etc., so Google took action against potential haters. The “solution” was to automatically flag suspicious comments and not display them to any visitor except the potential spammer himself while logged in, so he wouldn’t notice that he was writing his comments into the void, presumably continuing such activity. After some time with nobody seeing his messages and therefore not responding, he would eventually get demotivated and stop. The problem with this approach is that the algorithm never worked. My comments were repeatedly “ghost banned”, as I usually write long, thoughtful commentary, adjusting the text in short succession in order to correct typographic or spelling errors or to enhance it a little more, or because I added a link after the initial save. If I invest valuable lifetime to state my opinion and cannot be sure that my texts actually get published, there’s no reason to continue doing so, especially as there’s the risk of never getting replies from people I would be interested in talking to. For a publishing platform, such conduct, tricking people into believing that their contributions matter, is a no-go and not acceptable. This is why I abandoned YouTube and have only (ab)used it for SEO since then.

So Vimeo became the new home for my videos. It’s much cleaner and had two nice features I liked very much: they allowed videos to be marked as licensed under CC BY-SA (instead of only CC BY as on YouTube), even though this information is well hidden from the viewer. The other feature is that they provided a download button (although channel owners can disable it). Being a software developer, I don’t believe marketing claims like “streaming a video is different from downloading it”. Technically it’s the exact same process, and while a stream replay might be cancelled earlier and therefore may consume less bandwidth, a full download allows offline replay and could save much more bandwidth after all. Just think about all the music videos that get streamed over and over again for no other reason than people not having a local copy of them for an offline playlist. For me, the download button is relevant because I want my freely/libre licensed videos to be shared everywhere and archived by multiple parties. The computer has to retrieve the data anyway in order to play it, so it doesn’t matter whether this happens via an explicit full download or automatically in the background by the browser, internally saving the received data to disk. Vimeo recently decided to remove this convenience feature for all viewers if the channel owner doesn’t pay a subscription fee. As I regard a download as a view, Vimeo as a publishing platform downgraded its publishing, and that’s another unacceptable no-go.

Furthermore, I guess that through my complaining about the downgrade to Vimeo support, they looked into my channel and suspected it to be a commercial one. It’s true that I registered a company at the beginning of the year, and yes, there was one video linking to a shop seemingly advertising a product, but the shop is not an active one and I never sold a single item there or anywhere else. While I’m not necessarily a non-profit, it’s an attempt to build a social enterprise in its early, non-operational stages. I’m fine with paying for the hosting, but I expect the price to be tied to the actual service and not to the arbitrary removal of software features, especially if extortionary practices are used against me and my viewers. Additionally, Vimeo took my original video source files hostage and deleted all of them without providing a way to object to their conclusion. They deleted my entire account, including all comments and follows, a big-time failure in publishing. That’s why I abandoned Vimeo.

I briefly tried Twitch, but uploaded videos don’t get a lot of attention there, as the main focus of the site remains on live-streaming. Joining made sense because I’m planning to stream more, but then I discovered that they run ads for the German military (Bundeswehr) before videos and streams, something I’m totally opposed to as a Christian believer and a conscientious objector by both conviction and official approval, especially after I developed Anabaptist views. I don’t mind if Twitch promotes violence and destruction, which is none of my business and therefore easy enough to refuse, but I never want to contribute to the endorsement of the military or their recruiting, so it’s basically the YouTube adpocalypse the other way around.

After that, I moved to Vidme. I liked that they attempted to educate and familiarize viewers with the concept of paying for the production if there’s a demand, but I wondered if Vidme would be able to pull it off. The site had a “verification” process with the goal of determining whether an uploader is a “content creator”, which is strange because they demanded rather arbitrary metrics: one needed 50 subscribers but was hardly able to attract any, as there were limits on what could be uploaded if one wasn’t “verified” as a creator yet. In my opinion, they put these requirements into place to restrict the amount and size of uploads to YouTubers with a huge audience who switched to Vidme in response to the adpocalypse, and to deter uploaders with a small audience and a huge content collection. The latter only cost the site operator money for hosting, while the former were supposed to earn them some income. I already wondered whether such a policy would have been a serious business need at all if their endgame hadn’t been to get bought up by you know whom. Now, I have videos that are small in size, as they’re just screencasts, yet run for more than 30 minutes, and without the status of being verified, I was prevented from uploading those, while the very existence of those videos proves that I am actually a content creator. I applied for “verification” but they declined the request, so obviously my presence was not appreciated on their site, and I set all my videos to private. Soon thereafter, they announced that they’re shutting down.

A few notes on other, minor video platforms: there’s Dailymotion, but they didn’t generate a single view within months, as they don’t have a matching audience and their search engine visibility is poor. My impression is that they’re not in the game of building communities around user-generated content, but around traditional TV productions (remember Clipfish, Sevenload, MyVideo?). Also, the translation of their site into German needs some work, and discoverability is poor, as results are polluted by spam video posts advertising PDF downloads that nobody flags as abuse. There’s Veoh, but their website is terrible and they still rely on Adobe Flash instead of HTML5. There’s Myspace, which accepts video uploads, but they’re not positioned and perceived as a video site. There are new efforts like lbry.io, BitChute, We-TeVe and Loudplay that are worth a look, but not very popular yet. At the moment, I upload most of my videos to We-TeVe and Loudplay manually, as neither of them provides an API for automatic upload yet. Unfortunately, developing the Twitch Video Uploader (a fairly technical component automating the uploads via their v5 API) was a waste of time, but still, I could imagine extending the effort into a fully-fledged video management suite, supporting video upload not only to Loudplay and We-TeVe, but to many other places as well.

Which leads to new conceptual thinking about online video hosting. Learning from my earlier mistakes and experiences, I don’t see why the actual video files need to be tied to a specific player and surrounding website any more, leading to demands on and dependence on whoever is operating the service. In a better world, the video data could be stored anywhere, be it on your own rented webspace, a gratis/paid file hosting provider like RapidShare, Dropbox or Megaupload, peer-to-peer/blockchain hosting, traditional video websites, or storage offered by institutions or groups curating certain types of material. The software to retrieve, play, extend and accompany the video stream would be little more than an overlay, pulling the video data from many different sources, some of which might even be external ones not under the control of the website operator. Such a solution could feel like what’s relatively common in the enterprise world, where companies deliver product presentations, training courses and corporate communication via SaaS partners or self-hosting. A standardized, open protocol could report new publications to all kinds of independent directories, search engines and data projects, some offering general and good discoverability, others serving niches and special purposes, all of them highly customizable, as you could start your own on top of common video hosting infrastructure. You could imagine it as the MediaWiki (without stupid Wikitext as an unparsable data model) or WordPress software packages embedding video and optionally making it the center of an article or blog post, but in an even more interoperable way and with more software applications on top. The goal would be to integrate video hosting and playback into modern hypertext for everyone, as promoted by the Heralds of Resource Sharing, Douglas Engelbart (from 37:20), Ted Nelson and David Gelernter. With the advent of information theory, XML, the semantic web and ReST, it’s about time to improve and liberate publishing in networked environments.
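
As a sketch of how thin such a player overlay could be: an HTML5 video element that simply walks a list of independent mirrors until one of them works. All mirror URLs are hypothetical placeholders.

    // Pull the same video from independent hosts, falling back on error.
    const mirrors = [
      "https://my-webspace.example/videos/talk.webm",
      "https://mirror.example.org/talk.webm",
      "https://archive.example.net/talk.webm",
    ];
    const video = document.createElement("video");
    video.controls = true;
    let i = 0;
    video.addEventListener("error", () => {
      // On failure, continue with the next mirror in the list.
      if (++i < mirrors.length) video.src = mirrors[i];
    });
    video.src = mirrors[0];
    document.body.appendChild(video);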

Update: Sebastian Brosch works on PicVid, which focuses on picture hosting at the moment but considers video hosting as a potential second target. One downside is that the project is licensed under the GPLv3 where it should be AGPLv3 or any later version. Online software doesn’t lead to the distribution of the software, only to the transmission of the generated output, so the user wouldn’t be in control of his own computing. The AGPL, in contrast to the GPL, requires software that’s accessible over a network to provide a way to obtain the source code. That way, the user gets the option to either use the online service as provided by the trusted or untrusted operator, or to set up the system on his own computer and use it there, or to task a trusted entity with running it.

Update: Jeff Becker works on gitgud.tv.

Project Ideas

Here are a few project ideas I would like to work on if only I had the time. As long as I’m not able to start them, you could interpret those suggestions as lazyweb requests, so feel invited to execute them, but please make sure that your results are freely/libre licensed, otherwise your work would be of no help.

  • A reminder web app with push notifications that works locally on the phone and only occasionally imports new events/appointments when an Internet connection is established. Many refugees fail to attend their appointments as they have difficulty observing a weekly schedule. The web part needs to be self-hostable.
  • A self-hostable messenger implemented as a web app. It could also interface with e-mail and display mails as if they were normal messenger messages. Many refugees have smartphones, but without relying on telephony, Facebook or WhatsApp, it can be difficult to get in touch with them. Often they’re not used to e-mail, which is more common among German people, companies and government offices. The web app would also work on open WLAN.
  • A new TODO/task manager: when a new TODO/task is added, it appears at the very top of the list, and the user can “vote” its priority up or down in order to make entries more or less important. If the user finds time to work on a TODO/task, he simply needs to start with the topmost one and stick to its completion. The tool should support multiple TODO/task lists, so entries can be organized by topic/subject.
  • A “progressive web app” app store that signs/verifies the submissions, catalogues them and distributes them via “add to home screen”, URL bookmarking, full download or other techniques. The idea is to create something similar to the distro repositories known in the free/libre software world and to F-Droid, demanding that those apps come with their code to avoid the SaaSS loophole. Features like user reviews or obligatory/voluntary payment to the developers could be considered, as well as crawling the web to discover existing web apps.
  • E-mail to EPUB. E-mails are .eml files in the IMF format (RFC 5322). I already have a few components and am currently trying to write a parser for IMF or find an existing no-/low-dependency one (see the header-parsing sketch after this list). In the future, that project could evolve into a semantic e-mail client (better than putting broken HTML into the message body) that retrieves data from and sends data to the server.
  • In Germany, the “Störerhaftung” finally got abolished, so Augmented Reality and the Internet of Things get a late chance here too. Before, by sharing my flatrate-paid Internet access with neighbors and passersby, the law would have considered me a contributor/facilitator if a user committed an illegal action over my wire (basically, as investigators can never find out who’s at the end of a line, they cheaply just wanted to punish whoever happened to rent/own the line at that time, or demand that users of the network be identified). In cities, there are a lot of private WLAN access points connected to the Internet, and almost all of them are locked down because of this liability, leaving Germany with pretty bad connectivity. With such legal risks removed, I can assume more and more gratis Internet access out in public, so I would like to develop an app where people can join a group, and via GPS the app would display in which direction and at what distance members of the group are located (see the distance/bearing sketch after this list). That way, if I’m in an unfamiliar city together with friends, family or colleagues, it’s easier not to lose each other without the need for phone calls or looking at an online/offline map. Such a solution needs to be self-hostable to mitigate privacy concerns, and enabling geolocation always needs to be optional. Deniability could come from lack of connectivity or the claim of staying inside a building, or from the general decision of not using the app, joining groups but not enabling geolocation, or faking the location via a modified client. The app is not trying to solve a social problem; it’s supposed to provide the capability if all members of the group agree to use it. “App” is intended for AR glasses, alternatively the smartphone/tablet, alternatively a notebook.
  • Develop a library, self-hosted server, generic client and a directory of services for Augmented Reality that allows content to be delivered depending on the location of the user. That could be text (special offers by department stores or an event schedule), audio (listen to a history guide while walking through the city, maybe even offered in many languages) or just leaving notes on places for yourself or members of the family or group (which groceries to buy where, where the door key is hidden, long lists of inventory locations).
  • In my browser, I have lots and lots of unstructured bookmarks. If I switch to one of my other machines, I can’t access them. It would be nice to have a synchronization feature, which could be extended to check if the links are still live or dead. In the latter case, the tool could attempt to change the bookmark to the historical version of the site as archived by the Wayback Machine of the Internet Archive, referencing the most recent snapshot relative to the time of the bookmarking (see the Wayback API sketch after this list), hopefully avoiding the need for pro-active archival of what’s being bookmarked. On the other hand, a bookmark could also be an (optional?) straight save-to-disk, while the URL + other metadata (timestamp) is maintained for future reference. The tool could do additional things like providing an auto-completing tagging mechanism, a simple comment function for describing why a bookmark was set, or a “private/personal” category/flag for links which shouldn’t be exported to a public link collection.
  • Implement the Java StAX API in C++. There are several ways to deal with XML input: a famous one is to parse XML in order to create a representation of the data as a tree of objects in memory, so those objects can be accessed and manipulated in any random order, usually via the DOM API. This method has the disadvantage that large XML documents could exceed memory limits. It’s also relatively slow because it can’t provide access to the data before the entire document is loaded. Such properties conflict with concepts like RSS, as an RSS application is supposed to stop reading bytes from the network as soon as it encounters an RSS item it has already seen before. XML streaming APIs, on the other hand, interpret XML input as a series of “events”, dealing with only one of them at a time. As the bookkeeping regarding the structure of the document is left to the developer, this method usually leads to some kind of state machine (see the event-consumption sketch after this list), but using the call stack to build a parse tree could be explored, too. I already have some code to handle parts of it: start elements, end elements, text nodes, comments.
  • An XML interface for PoDoFo. PoDoFo is a library to create PDF files in C++. With an XML interface, PoDoFo would become accessible to other programming languages and would avoid the need to hardcode PoDoFo document creation in C++, so PoDoFo could be used as another PDF generator backend in automated workflows. The XML reading could be provided by the StAX library in C++.
  • Ad-hoc YouTube playlists. YouTube allows its users to create playlists, but only if they have an account and are logged in. Sometimes you just want to create a playlist of videos that are not necessarily your own without being logged in, for instance to play some of your favorite music in random order. If such external playlists should not be public, access could be made more difficult by requiring a password first or by making them available only under a non-guessable URL. Embedding the videos shouldn’t be too hard, but continuing with the next one probably requires some JavaScript fiddling (a sketch follows after this list). This could also support other video platforms.
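
For the e-mail-to-EPUB idea, the core of IMF (RFC 5322) header parsing is small enough to sketch: headers end at the first empty line, and continuation lines starting with whitespace are unfolded into the preceding field. This simplified version ignores repeated fields like Received and doesn’t decode encoded words.

    // Minimal IMF header parsing sketch for .eml input.
    function parseHeaders(eml) {
      const headerPart = eml.split(/\r?\n\r?\n/)[0]; // headers end at empty line
      const unfolded = headerPart.replace(/\r?\n[ \t]+/g, " "); // unfold lines
      const fields = {};
      for (const line of unfolded.split(/\r?\n/)) {
        const colon = line.indexOf(":");
        if (colon > 0) fields[line.slice(0, colon)] = line.slice(colon + 1).trim();
      }
      return fields;
    }

    // parseHeaders("Subject: Hello\r\n world\r\nFrom: a@b.c\r\n\r\nBody")
    // yields { Subject: "Hello world", From: "a@b.c" }.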
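
For the group-locator idea, direction and distance between two GPS positions are standard geodesy; a sketch with the haversine formula (coordinates in degrees, results in meters and in degrees from north):

    function distanceAndBearing(lat1, lon1, lat2, lon2) {
      const rad = Math.PI / 180, R = 6371000; // mean Earth radius in meters
      const dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
      // Haversine formula for the great-circle distance.
      const a = Math.sin(dLat / 2) ** 2 +
        Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.sin(dLon / 2) ** 2;
      const distance = 2 * R * Math.asin(Math.sqrt(a));
      // Initial compass bearing from the first to the second position.
      const y = Math.sin(dLon) * Math.cos(lat2 * rad);
      const x = Math.cos(lat1 * rad) * Math.sin(lat2 * rad) -
        Math.sin(lat1 * rad) * Math.cos(lat2 * rad) * Math.cos(dLon);
      const bearing = (Math.atan2(y, x) / rad + 360) % 360;
      return { distance, bearing };
    }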
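
For the bookmark idea, the Internet Archive already exposes an availability API that returns the snapshot closest to a given timestamp; a sketch (URL and timestamp are examples):

    // Look up the archived snapshot closest to the bookmarking time.
    async function findSnapshot(url, timestamp) {
      const api = "https://archive.org/wayback/available?url=" +
        encodeURIComponent(url) + "&timestamp=" + timestamp;
      const data = await (await fetch(api)).json();
      const closest = data.archived_snapshots && data.archived_snapshots.closest;
      return closest && closest.available ? closest.url : null; // null if none
    }

    // findSnapshot("example.com", "20170101").then(console.log);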
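
For the StAX idea, the essence of the streaming approach can be shown independently of C++: the document arrives as a flat series of events, and the application keeps its own state instead of receiving a tree. The parser is mocked here as a plain array; a real implementation would pull events from a byte stream.

    // State-machine consumption of XML "events", sketched with mock input.
    const events = [
      { type: "startElement", name: "rss" },
      { type: "startElement", name: "item" },
      { type: "text", data: "Hello" },
      { type: "endElement", name: "item" },
      { type: "endElement", name: "rss" },
    ];
    let depth = 0; // the only bookkeeping this trivial application needs
    for (const ev of events) {
      switch (ev.type) {
        case "startElement": console.log("  ".repeat(depth++) + "<" + ev.name + ">"); break;
        case "endElement": console.log("  ".repeat(--depth) + "</" + ev.name + ">"); break;
        case "text": console.log("  ".repeat(depth) + ev.data); break;
      }
    }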
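
And for the ad-hoc playlists, the official YouTube IFrame Player API is enough to chain videos without any account; this sketch assumes a <div id="player"></div> and the https://www.youtube.com/iframe_api script on the page, and the video IDs are examples.

    // Play a fixed list of video IDs in sequence.
    const playlist = ["dQw4w9WgXcQ", "M7lc1UVf-VE"];
    let index = 0;
    let player;

    function onYouTubeIframeAPIReady() { // called by the IFrame API script
      player = new YT.Player("player", {
        videoId: playlist[0],
        events: {
          onStateChange: (event) => {
            // When a video ends, continue with the next one in the list.
            if (event.data === YT.PlayerState.ENDED && ++index < playlist.length) {
              player.loadVideoById(playlist[index]);
            }
          },
        },
      });
    }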

If you’re not going to implement those ideas yourself, please at least let me know what you think of them or if you have suggestions or know software that already exists. If you want or need such tools too, you could try to motivate me to do a particular project first. Maybe we could even work together in one way or another!

An Attack on the Privacy of Refugees and All of Their Contacts

Well, our squeaky-clean and at the same time sadly dirt-poor federal government unfortunately has no money whatsoever to spare for refugees, about which more will have to be written on another occasion. When it comes to sticking it to the refugees, however, funds are suddenly available in abundance: 3.2 million euros + 300,000 euros annually + training costs, all for card readers for the purpose of violating the privacy of refugees and of everyone who is or has been in contact with them.

The reasoning behind it is, of course, that for refugees who cannot present original documents, or deliberately don’t want to present them, the identity of the person is to be established by reading out their mobile phones and smartphones accordingly. But what is the office supposed to do if the phone with all the data on it was recently lost and the new one doesn’t (yet?) contain any such information, or if somebody simply doesn’t have a phone? Apart from the impertinence of demanding that people strip themselves digitally naked, the procedure is also methodological nonsense, as it is by no means guaranteed that the owners of the devices registered their real names when creating their accounts, or that the real name can be reliably determined from the data. Furthermore, not only is the privacy of the refugees violated, who must have an inalienable natural right that the state may not rummage through their political views, religious convictions, family and business relationships and so on, but also that of all the people who have been in electronic contact with the refugee. Besides relatives, friends and colleagues in the country of origin, the privacy of German citizens will inevitably be trampled on by this measure as well.

One might also ask what must have gotten into Thomas de Maizière for him to even consider this nonsense: it is the insinuation that many refugees deliberately don’t hand over the identity documents of their country of origin because they would otherwise fear a negative decision on their asylum application and a forced departure. Let’s set aside for the moment that Thomas still hasn’t understood that for countries where the Auswärtiges Amt would have to organize a German citizen’s return trip from a vacation there, the fear of being deported to them is well founded; there are indeed also people whose documents were lost in the course of their flight, or were confiscated, stolen or destroyed. Add to that classic stateless persons, for whom no country feels responsible. That these people are placed under the same general suspicion as other persons who deliberately withhold their documents doesn’t exactly testify to the rule of law. The suggestion then is that those affected should simply go to their embassy to have an identification document issued, but if a country doesn’t even know that the person is no longer within its own borders, that is of course not a particularly clever idea for refugee journalists/activists/deserters. If the countries in question don’t issue a document because they don’t have the person registered in their computers, can’t establish the identity beyond doubt themselves either, or simply refuse because they don’t want the asylum seeker back, then the read-out data remains without any significance as well, because it isn’t confirmed by any official body and can be just as right or wrong as the information the person has provided on their own.

So we can either abolish these readers again right away and save a lot of money, or else [remove] Thomas and his unchristian clique [from their offices].

Searching for Simin Amiri

Mohammad Hosseini Hosseini, or Mohammad Hosseini Hossseini (with three ‘s’), is looking for his wife Simin Amiri and their two children, who are twins. It is possible that his wife is registered under a different name, but no other possible names or spellings are known. Moreover, the names of his children have remained unknown to him.

From roughly 2011 to 2013, Mohammad stayed in Greece together with his wife, from where he sent his then-pregnant wife on to Germany while he himself returned to Afghanistan to earn money for his family. After he fled to Germany in 2015 because of the persistently life-threatening situation there, he was able to phone his wife via the number they had exchanged back in Greece from November 2015 to January 2016, during his accommodation in the state’s initial reception center for refugees (Landeserstaufnahmestelle) in 72469 Meßstetten, but then his phone got lost or was stolen. At the same time, his wife must have changed her telephone number, as she has not been reachable under the old number since. His wife and children have been living in Germany for 5 years; the latter were also born here. The residence status of wife and children is unknown.

Mohammad has now been living in a temporary communal shelter in 74321 Bietigheim-Bissingen for almost a year. He has hardly been able to search for his wife and children so far, as he speaks little German despite trying very hard – his learning difficulties may also stem from the fact that he was never allowed to attend school in Afghanistan. In its decision of 2017-01-12, the Bundesamt für Migration und Flüchtlinge (Federal Office for Migration and Refugees, “BAMF” for short) considers it appropriate to expel Mohammad from Germany and possibly even deport him to Afghanistan, without any search for his wife ever having been started. The BAMF cannot find anyone by the name of “Simin Amiri” in the MARIS system. It is unclear whether the Ausländerzentralregister (Central Register of Foreigners, “AZR” for short) should have been searched instead. There was no discernible reaction from the BAMF to a photo of his wife and a shared wedding photo, both of which were shown from his phone. It appears that the authorities neither conduct nor support the search for missing relatives.

If anyone knows the person we are looking for, please just let us know. For now, our aim is to mediate carefully as a neutral party.

Goodbye, mobileread.com

On 2014-01-05, I joined the mobileread forum. At that time, I was just starting to work on automated EPUB generators for text in general and therefore appreciated that the people on the mobileread forum didn’t focus on reader hardware alone, but also had a fair share of technical discussion going on.

On 2015-04-03, somebody called Ranwhp asked in the German section of the mobileread forums if there was a way to download all German titles of the Patricia Clark Memorial Library at once, which led me to look into it. The Patricia Clark Memorial Library is a collection of e-books uploaded by mobileread members and hosted by the mobileread site operators. Patricia Clark was one of the uploaders; she passed away, and the collection was named after her in her memory. The collection has no real stewardship or curation except what’s required by law and the forum rules, which is why I would hesitate to call it a library. The individual uploaders take care of their own uploads only as long as they’re still interested in them, and the licenses range from Public Domain over Creative Commons BY-NC-SA (nonfree) to all rights reserved and distributed by mobileread with special, non-transferable permission, rendering the work of the contributors and the collection as a whole pretty useless for anything other than personal entertainment.

For a downloader tool, such considerations were irrelevant, of course, so I started to develop one in order to help out Ranwhp and save him hours of manually downloading every single file. I myself did not have an interest in the e-books themselves, as I mostly do not read for entertainment purposes; I was primarily motivated by experimenting with client programming and with EPUBs as found in the wild, to automatically read and transform them with code. The high quality of the XHTML page source code generated by mobileread’s forum software made it quite easy to follow the links and retrieve the actual e-books, which is a good thing.

On 2015-05-01, I released the first version of my mobileread_wiki_ebook_list_downloader1 (md5sum eff1b8a28a0a9af8f6d5982cf7150ca8). In my post from that day, I pointed out that automatic downloads could be considered denial-of-service attacks and in conflict with the mobileread terms of use, which I tried to avoid right from the beginning with an enormous delay of 5 seconds between each download, making it a very friendly, cooperative client in contrast to some rogue crawlers out there on the web. As far as I can tell, this was perfectly fine with mobileread staff.

On 2015-05-11, Ranwhp provided his report on running my client: after 10 hours and 52 minutes, a total of 2728 files had been downloaded, with a zipped size of 1.8 gigabytes. One day later, Ranwhp posted a link to Google Drive where it was possible to download the zip archive. On the one hand, his offer did not consider the legal obligations arising from the restrictive licensing of many titles of said collection, while on the other hand such an archive would save a lot of bandwidth and server resources for mobileread. I immediately notified Ranwhp about the legal problems that might come with some of the e-books, and he disabled the download link shortly thereafter. Until this point, no other members of the forum had joined our conversation, and as soon as they did, they raised concerns about the legal status of the zip archive, in a friendly manner at first. Because of undiplomatic answers, lack of knowledge about copyright law and the assumption that the Patricia Clark Memorial Library would be 100% Public Domain, things escalated quickly, as uploaders saw their rights violated. What followed was a long conversation about why a local, full copy of the library wouldn’t make sense and about the legal status of the collection, which are strange questions to discuss on a website dedicated to electronic reading. Some posts were quite constructive; other authors made clear that their approach towards media is set up in a way that prevents everything else from happening.

However, this incident with the uploaders didn’t affect the downloader software at all, so I continued development and started other controversial threads about the technical quality of the collection. Then, for a long time, silence. On 2016-09-29, Ranwhp revived the original thread and published a link to MR-eBook-Downloader, another downloader for German titles of the mobileread collection. Again, I immediately mentioned that the developer of this tool had better put a delay in place between each download, so mobileread server resources wouldn’t get used excessively. One day later, the entire thread was deleted without any explanation.

I asked mobileread staff to provide an explanation or to restore the thread, but to this day I have not received an answer. Therefore I decided today that I’ll leave mobileread and not look at their stuff again. I’m not angry or anything; it’s more that I’m utterly confused about what happened to that thread after all this time, and I don’t like that the time I invested into posting goes to waste in the blink of an eye. For me, this is another glaring example of the broken web we inhabit, and of how technology can never become better than what the mentality of the people is prepared to allow it to become.

Brother MFC-J4420DW

The Brother MFC-J4420DW does have a port for printing directly from a USB stick, but unfortunately it is of little use, because the printer firmware is not able to print PDFs from the stick, only image formats. This is particularly incomprehensible because the printer can print PDFs from other sources without a problem, and there is even a function for the scanner to scan to the USB stick and store the scan result in PDF format. Since Brother doesn’t appear to provide a freely licensed GNU/Linux driver for this model, and since the printer doesn’t behave like a standards-compliant network printer even when connected to a WLAN, there is little I can do with the device. Brother gives no answer on request as to how this came about or whether it will be remedied, via a firmware update for example. For me, therefore, it is clearly a candidate for return.

Intro – Why This Blog?

Because I am a Christian, I used to always go to a congregation/church. One day, the congregation invited Volker Kauder to speak about the persecution of Christians. Volker Kauder is the leader of the CDU/CSU parties in the Bundestag. Volker Kauder speaks about the persecution of Christians because he and the CDU/CSU have many voters who go to a congregation/church.

When he is not speaking about the persecution of Christians so that his voters like him, he wants to have people in Iraq killed – that is illegal and forbidden. He says that IS/Daesh started in Syria – that is not correct. He thinks it is good that the USA wages war. Volker Kauder thinks war and weapons are good because the CDU in his constituency receives money from the company Heckler&Koch. Because the CDU receives money from Heckler&Koch, Volker Kauder tries in politics to make the German military buy weapons from Heckler&Koch. Volker Kauder doesn’t care that the weapons from Heckler&Koch don’t work properly, because his CDU party receives money from Heckler&Koch. Volker Kauder doesn’t care that the weapons from Heckler&Koch are produced, used and sold without control in Turkey, Myanmar, Iran, Iraq, Pakistan, Saudi Arabia, Thailand, Mexico, Malaysia and Egypt [1, 2]. In these countries there is war, crime and the persecution of Christians.

This has nothing to do with Jesus Christ. I no longer go to this congregation, because a congregation should not do politics. A Christian congregation must not advertise for Volker Kauder or the CDU/CSU. Now I help refugees, because Volker Kauder does not help the people.