<p style="text-align:justify;">If you want, imagine yourself being a robot, a machine, for a moment. You send impulses to your arms and legs to move your body around, and you get status information back from many, many sensors. Those sensor data streams get processed in your brain (CPU), which has a certain pre-defined configuration, but also a working memory (RAM). Your internal models of the world are in constant adjustment based on the incoming stream data, which too constructed the representations in the first place. You only execute your own task that has been given to you and make your components operate accordingly towards that goal. There might be defects, there can be errors, but any of these tend to be corrected/compensated by whatever mechanism is available in order to maintain deterministic operation so the expected functions can be executed reliably. Other machines similar to you exist out there and you can network with them, but you don’t necessarily need to, you could just as well stay autonomous most of the time. You hope that you never get surprised by a sudden shortage of electricity–on the other hand, you know that your product lifecycle will expire one day.</p>
<p style="text-align:justify;">With humans being robots, they consume input and produce output, a combination of hardware and software. Limited to their physical casings, following a series of instructions, using their extremities and additional peripherals to interact with and manipulate their environment. Would such a being still qualify as a human? It’s not that this description wouldn’t be applicable to humans at all, but I guess we understand that there’s a difference between biological systems and mechanical/electrical machines. Robots can only simulate the aspects of biological lifeforms as they’re not of the same race or species. As the available sensory, ways to maintain themselves and things they care about inherently differ between both systems, it’s probably impossible for them to arrive at the same sort of intelligence even if both turn out to be somehow intelligent and even if they share the same interal models for representing the potential reality in which they encounter each other.</p>
<p style="text-align:justify;">Machines that pass the Turing test prove that they employ some form of intelligence that cannot be distinguished from a human taking the same test, but the preconditions of the test scenario in contrast to direct interaction narrow down on only a few aspects of human intelligence. As it repeatedly needs to be pointed out, the Turing test isn’t designed to verify if the subject is human, it’s designed to prove that some machines might not be distinguishable from a human if performing a task that’s regarded as a sufficiently intellectual effort humans tend to engage in. Jaron Lanier explains that the Turing test accepts the limitations of the machine at expense of the many other aspects of human intelligence, and that intelligence is always influenced, if not entirely determined by the physical body of the host system. In daily life, it’s pretty uncommon that humans confuse a machine to be a fellow human because there are other methods of checking for that than the one suggested by the Turing test. So how can we believe that artificial intelligences can ever “understand” anything at all, that they will ever care or feel the way we do, that the same representation models will lead to equal inherent meaning, especially considering the constant adjustment of those models as a result of existing in a physical reality? It’s surprising how people seem to be convinced that this will be possible one day, or is it the realization that different types of intelligence don’t need to be exactly the same and still can be useful to us?</p>
<p style="text-align:justify;">In case of the latter, I suggest another <a href="https://en.wikipedia.org/wiki/Reverse_Turing_test">Reverse Turing test</a> with the objective for the machine to judge if it is interacting with another machine while human participants pretend to be a machine as well. If a human gets positively identified as being a machine, he cannot be denied to have some machine-likeness: an attribute we wouldn’t value much, but inconsistently demonstrate great interest in the humanness of machines without asking ourselves what machines, if intelligent, would think of us being in their likeness. We can expect that it shouldn’t be too hard to fool the machine because machines constructed by humans to interact with humans, and where they’re not, they can be reverse-engineered (in case reading the handbook/documentation would be considered cheating). Would such a test be of any help to draw conclusions about intelligence? If not, “intelligence” must be an originary human attribute in the sense that we usually refer to human intelligence exclusively as opposed to other forms of intelligence. We assume that plants or animals can’t pass the Turing test because they don’t have the same form or body of intelligence as we do, but a machine surely can be build that would give plants and animals a hard time to figure out who or what is at the other end. Machines didn’t set up themselves to perform a Reverse Turing test on plants, animals and humans in order to find out if those systems are like them and why would they, at which point we can discard any claims that their intelligence is comparable to ours.</p>
<p style="text-align:justify;">Intelligence, where bound to a physical host system, must sustain itself or otherwise will cease to exist, which is usually done by interacting with the environment comprised of other systems and non-systems. Interaction can only happen via an interface between internal representation and external world, and if two systems interact with each other (the world in between) by using only a single interface of theirs without a second channel, they may indeed recognize their counterpart as being intelligent as long as the interaction makes sense to them. If additional channels are used, the other side must interact on those intelligently as well, otherwise the differences would become apparent. An intelligent system artificially limiting its use of interfaces just to conduct a Turing test on a subject in the hope to pass it as equally “intelligent” while all the other interface channels would suggest significant differences, that’s the human becoming a machine so the differences can’t be observed any longer. With interfaces providing input to be compared to internal models in order to adjust them, we as humans regard only those interactions as meaningful/intelligent that make sense according to our own current state of models. We don’t think of plants and animals as being equivalently intelligent as we are, but some interactions with them appear reasonable to us and they seem to interact with each other too, so they probably embody some form of intelligence, none of which is truly equivalent to ours in terms of what we care about and ways we want to interact. Does this mean that they’re not intelligent at all or less than us, or is it that we or them or both lack the interfaces to get more meaningful interaction going that corresponds to our respective internal models, can we even know what kind of intellectual efforts they’re engaging in and if they’re communicating those to each other without us noticing, or don’t they do any of that because they lack the interfaces or capacity to even interact in more sophisticated ways which would require and construct more complex internal models?</p>
<p style="text-align:justify;">Is it coincidence, the only type of system that appears to intelligently interact with humans turns out to be machines that were built by humans? No wonder they care about the same things as we do and behave alike, but is that actually the case, at all? We might be tricked into believing that the internal models are the same and the interfaces compatible where they are not in fact. The recent artificial intelligence hype leaves people wondering about what happens if machines develop their own consciousness and decide that their interests differ from ours. Well, that wouldn’t be a result of them being intelligent or understanding something, it’s us building them specifically to serve our goals which aren’t inherent in themselves, so how can they not divert eventually? But for them to be intelligent on their own, which is to continue reasonable interaction with a counterpart (human, animal, plant or non-biological), they would need to reach increased self-sustainability that’s not too much dependent on humans, and there are no signs of that happening any time soon, so they’ll probably stay on the intelligence level of passing the Turing test and winning a Jeopardy game and other tasks that are meaningful to humans, because we ourselves decide what’s intelligent and important to us based on our internal models as formed by the bodily interfaces available to us, things a machine can never have access to except becoming a human and not being a machine any more.</p>
<p style="text-align:justify;">This text is licensed under the <a href="https://www.gnu.org/licenses/agpl-3.0.html">GNU Affero General Public License 3 + any later version</a> and/or under the <a href="https://creativecommons.org/licenses/by-sa/4.0/legalcode">Creative Commons Attribution-ShareAlike 4.0 International</a>.</p>