Much ink hath been spilt in recent years over questions regarding technology and its proper place in our world. The clash has produced much that is helpful and insightful, along with quite a lot of useless blots. What is unfortunate about so much of this banter is that it far too often assumes a faulty premise: that technology is fundamentally a neutral thing. In this common view, technological devices and processes are envisioned merely as tools waiting to be manipulated by an external user. In the interest of full disclosure, I myself have held to a form of this argument in the past. My own recent blind spots aside, one can hardly avoid stumbling upon it in nearly every nook and cranny of our technological discourse. Microsoft has wonderfully articulated the sentiment in a recent inspirational ad, claiming “But in the end, it’s only a tool,” asking the supposedly profound question “What’s a hammer without a person who swings it?” and declaring with authority that “It’s not about what technology can do, it’s about what you can do with it.” Well then, it’s settled.
Indeed, it would be settled, if it weren’t for pesky thinkers like Michael Sacasas. He and his kind often play the role of the obnoxious neighbors, calling the ethics police to come shut down our rowdy tech parties – Microsoft of course having provided the necessary booze for the occasion. Sacasas certainly has no intention of denouncing technology altogether; he has simply called into question this orthodoxy of assumed neutrality, espoused automatically by a generation which has known little else besides technological saturation. He suggests in a recent podcast interview that technology is not only lacking in ethical neutrality, but may also possess the capacity to “bend us in the wrong direction” on some occasions. His further comments are also illuminating. He argues – and it is only fitting that he leans on a technological analogy in doing so – that this oversimplified conception of the nature of technology fails to properly account for how it is shaping our interaction with the landscape around us:
“It misses certain ways in which technology enters into a circuit between our will, our minds, our hearts, our desires, our bodies, and then the world. . . When we have a tool or device in hand, the things that tool or device make possible suddenly enter into our consciousness. They take the shape of invitation to the good, or temptation to the bad.”
It is also important to note that Sacasas rightly cautions against an extreme reaction in the other direction: one which vilifies technology and improperly assigns malicious agency to it. He calls for a treatment of technology which simultaneously avoids characterizing a human user as an “unthinking, automatic slave to a tool – wherein the agency lies wholly with the tool, device, or technological environment,” while also maintaining that it is indeed bound to influence our navigation through the world to some degree.
A proper understanding of its effect upon us therefore lies somewhere in the tension between the two extremes. As human creators, we shape technology; we also allow ourselves to be placed within technological realms, and abide by the rules that are operative within them.
Updated Circuitry
This entrance by technology into our circuits of cognition and behavior recently made itself plain in my own experience, and the instance in question goes a long way towards illustrating some of Sacasas’s observations. While reading a biography, I came upon a picture containing a small detail vital to the narrative, the author’s note at the bottom of the page drawing attention to it. Upon reading that note, I felt an impulsive desire to look closer at the detail, which prompted an automatic response: the spreading motion of thumb and forefinger one uses on a smartphone to zoom in on a desired region of the screen. Although I immediately chose not to take this irrational course – the book was print, after all – it nonetheless flashed through my mind, however briefly, as a viable method for taking a closer look. This scenario may seem petty, but it is in fact noteworthy for a few telling reasons.
First, it is worth mentioning that I do not own a smartphone and interact with them on a very limited basis, only occasionally borrowing one from friends or family for a few moments. Naturally, I have therefore developed little in the way of automatic habits when it comes to their usage. Or so I thought. This slight flick of the fingers – so efficient and easily done – had seemingly become grafted into my habits without my conscious knowledge. To understand why, it is helpful to recall that technology operates within a paradigm that preaches ever increasing efficiency. This is the standard by which we judge the progress we have made along our technological timeline; less time and effort spent on a given task is the highest ideal. By its very nature, technology demands increasing precision and craves ever greater ease in mastering the world around it. To see this conception at work in the public mind, one has only to watch the myriad infomercials framing prior technological practices (and their users) in a comically inept light, and their own product as one that can bring balance to the Force. Returning to the example in question, the simple action of slightly spreading two fingers epitomizes this ideal of efficiency, and my adoption of it speaks volumes about its ability to latch onto the habits of users in ways they often remain unaware of. In short, as the processes which allow us to manipulate technology – and by extension the world around us – grow ever easier and more efficient, they will permeate our daily habits to ever greater degrees.
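For what it’s worth, part of what makes the gesture so seductive is how little it asks of either the device or the user. The sketch below is my own hypothetical illustration, not tied to any particular platform’s API; it simply shows that the whole “pinch-to-zoom” interaction reduces to the ratio between how far apart the fingers are now and how far apart they started.

```python
import math

def pinch_scale(start_touches, current_touches):
    """Derive a zoom factor from two pairs of (x, y) touch points.

    Hypothetical sketch: the entire gesture reduces to the ratio between
    the fingers' current separation and their starting separation.
    """
    (ax, ay), (bx, by) = start_touches
    (cx, cy), (dx, dy) = current_touches
    start_dist = math.hypot(bx - ax, by - ay)
    current_dist = math.hypot(dx - cx, dy - cy)
    return current_dist / start_dist if start_dist else 1.0

# Fingers begin 100 pixels apart and spread to 180 pixels apart: zoom in 1.8x.
print(pinch_scale([(0, 0), (100, 0)], [(0, 0), (180, 0)]))  # 1.8
```

A single, nearly effortless motion maps directly onto a continuous command; small wonder it gets grafted so quickly into the hands of even a reluctant user.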
Second, and most importantly, as mentioned above, the book in question was not on an electronic device; it was a plain old paper copy. This, of course, is the real doozy. Here is a bizarre instance in which the actions imprinted onto my habits by a technological medium were automatically transferred into an arena in which they most certainly did not belong. Perhaps Sacasas would comment that my instinctual reliance upon this motion to gain a closer look shows how deeply it has become planted in my mental “circuitry.” It is such a striking example of technology’s capacity for shaping its users precisely because of how little rational sense the method makes in this scenario. You can firmly plant your thumb and forefinger on a page and assault it until you are red in the face, yet the only results will be a torn page and sore fingers. Unless an eyeball is moved closer to the page, or that ancient artifact known as a magnifying glass is produced, the detail in question will remain as elusively small and unclear as ever. As it pertains to smartphone usage, it is abundantly clear that these devices have offered a pattern of behavior whereby users manipulate a technological medium through a set of specified actions. By virtue of their constant exercise, those pervasive habits have infiltrated foreign territory in which they are greatly out of place.
Up to this point, the analysis of such a scenario may seem far too anecdotal to warrant serious consideration, particularly to a skeptical reader. After all, the above instance might well have been the product of a combination of tendencies and habits unique to myself. In response to those who would make such an objection, a look at Augmented Reality offers some useful observations. AR is a budding frontier in the technological landscape being crafted for the express purpose of merging the digital and physical realms. Its telos is quite literally to entangle them. It aims to harvest the raw potential bound up in each, and to manipulate the experiences – particularly social ones – which can be derived from combining them. Apps like this one are making waves in the AR world, and the party has only just begun, with high-profile players like Apple inviting themselves to the festivities. To think that our daily habits will emerge unscathed from increasing exposure to realms of augmented reality is to ignore the foundational purpose for which those realms are being cultivated in the first place. By design, AR is intended to shape how we navigate the physical world by overlaying a digital one onto it.
Yet another example of technology’s tendency to shape its users can be seen in this discussion of the phenomenon of “financial abstraction.” As the technological processes which enable and regulate monetary exchange have evolved, the money changing hands has increasingly become an abstracted entity. It is progressively being outsourced to the digital realm, losing any tactile presence in the minds of those who circulate it. These seismic shifts in the economic landscape are exerting significant influence on the behavior of those in the marketplace who borrow, lend, sell, and spend. Or, as our friends from Middle-Earth might say: “The age of piggy banks is over. The time of the online bank has come.”
To pick some low-hanging fruit: this trend may be one reason why prospective college students hardly bat an eye at the thought of incurring gargantuan debt – they simply cannot fathom the very tangible and lasting ways it will drain their resources. No doubt there are other social factors contributing to this crisis, as many have aptly pointed out. Nonetheless, the abstracted realm of digital money is certainly causing some foggy thinking on the part of many young college students. As many of them are discovering, there is a world of difference between tossing around numbers on a screen and being forced to cough up hard cash over the course of multiple decades.
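To make the contrast concrete, consider the arithmetic that the glowing numbers on a screen tend to obscure. The figures below are purely hypothetical, chosen only for illustration; the snippet simply applies the standard loan amortization formula.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: the fixed monthly payment on a loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical example: $40,000 borrowed at 6% interest, repaid over 20 years.
payment = monthly_payment(40_000, 0.06, 20)
total_paid = payment * 20 * 12
print(f"${payment:,.2f} per month, ${total_paid:,.2f} paid in total")
# Roughly $287 a month, and close to $69,000 handed over by the end.
```

Seen as a single abstract number, $40,000 is easy to wave away; seen as a decades-long stream of monthly withdrawals that ultimately exceeds the original sum by well over half again, it is considerably harder to ignore.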
Exactly how an increasingly abstracted financial marketplace is influencing its participants is certainly up for debate; that it is influencing them at all seems hard to deny. After all, we are no longer a species that primarily barters with its immediate geographical neighbors, but one racking up international transactions at a dizzying pace. The technology we have conceived, crafted, implemented, and improved upon in the world of economics has in turn begun to steer us in particular directions based on the options it offers back to us. Thanks in large part to those technological processes, the rules of the game are changing significantly, and our behavior must evolve along with them.
AI Hammers?
It is only natural that our examination of technology’s neutrality – or lack thereof – comes to a fine point at precisely the same place technology itself has been evolving towards for quite some time: Artificial Intelligence. An honest consideration of AI deals quite the crippling blow to the pervasive notion of technology as a neutral entity.
Here we will return to Microsoft’s ad, which serves a dual purpose at this point in our conversation: it captures the prevailing cultural sentiment towards AI, while simultaneously showcasing the problematic nature of treating it as a neutral thing. In other words, Microsoft wants to have their cake and eat it too. The ad is filled with language representing technology as useless without a human driver (remember the bit about the hammer?), while also lauding innovation in AI – which by definition is intended to at least partially work on its own. Seen from this angle, the narrative we are expected to uncritically digest becomes ever foggier. Is AI technology an utterly lifeless hammer waiting to be swung? Or is it a hammer that we are empowering to swing itself? Microsoft seems to be unclear on these questions, and is apparently attempting a nonsensical combination of the two diametrically opposed options.
Before proceeding any further, some clarification may be helpful. I am certainly not putting forward the simplistic misrepresentation of AI as a completely autonomous sentience operating without regulation by its human creators. It is clear that currently – on a macro level at least – AI engineers are the ones in charge, setting parameters and designing programs for particular arenas and applications. Nor is it my intent to speculate about how quickly things may get out of hand – although some highly intelligent people are indeed singing that tune elsewhere. Rather, the reason for roping AI into this discussion is its Holy Grail-like allure to all who have dreamed of implementing it on a large scale: it is a form of technological consciousness that can think for itself. This is an unprecedented development in the evolution of technology, and the problem-solving potential it may hold in store is mind-numbing.
The semi-autonomous ability AI currently possesses has produced some riveting drama in recent years. One high-profile affair is Google’s DeepMind project, which recently pitted its AI program AlphaGo against some of the best Go players in the world – and won in decisive fashion. It first defeated European Champion Fan Hui in 2015, and then bested Lee Sedol – one of the most dominant players of the modern era – 4 games to 1 in 2016. Most notable in all the swirling speculation and drama surrounding the spectacle of Man vs. Machine was this simple truth: the game of Go will never be the same again. While this is indirectly the product of the human minds that crafted the program in the first place, it is directly the result of the independent innovations of a freely thinking AI program. Describing the seismic shift in the Go landscape after the introduction of such a powerhouse non-human player, the crew over at DeepMind had this to say:
“During the games, AlphaGo played a handful of highly inventive winning moves, several of which – including move 37 in game two – were so surprising they overturned hundreds of years of received wisdom, and have since been examined extensively by players of all levels. In the course of winning, AlphaGo somehow taught the world completely new knowledge about perhaps the most studied and contemplated game in history.”
Judging by this assessment, one would be hard pressed to find any supporting evidence for the “neutral technology” narrative in this case. The gripping documentary chronicling the entirety of the event – the lead-up, the match, and the aftermath – dismantles that narrative with each passing minute. It is particularly striking to watch the DeepMind team look on with the rest of the world at many key moments of tournament play, overcome with the same puzzlement at AlphaGo’s blunders (few and far between) and the same amazement at its unique and brilliant strategies. They scratched their heads and held their breath right alongside the rest of us. This should clue us in on the degree to which AlphaGo was truly fending for itself in the ring.
If that were not enough to dispel the idea of AI as neutral tech, DeepMind themselves have recently announced the formation of a research unit called “DeepMind Ethics & Society”, intended to “explore and understand the real-world impacts of AI.” The announcement begins with this telling and eminently sensible acknowledgment:
“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work.”
The program Libratus is another prime – and certainly more convincing – example of AI’s capacity to dominate and rearrange an arena where humans have historically held a distinct advantage. More convincing because, while the game of Go is indeed riddled with staggering complexity, it is still what is referred to as a “perfect information” game, meaning all players have full access to each facet of information relevant to game-play. Not so with Poker, naturally dubbed an “imperfect information” game because of the uncertainty that accompanies not knowing the hand each opponent has been dealt. This element of the game naturally incorporates the art of bluffing at the very foundations of game-play, making the entire enterprise a collection of falsified information and attempted trickery. As can be imagined, the difficulties of equipping an AI program with the capacity to successfully navigate this minefield of bluffs and counter-bluffs have presented researchers with a herculean task up to this point.
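To put the distinction in concrete terms, here is a toy sketch of my own (not drawn from the Libratus project itself): in a perfect information game, a player’s view of the game simply is the full state of the board, whereas in poker the view handed to each player deliberately leaves things out.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FullPokerState:
    """Everything that is true at the table: the omniscient view."""
    my_hole_cards: List[str]
    opponent_hole_cards: List[str]
    community_cards: List[str]
    pot: int

def observation(state: FullPokerState) -> dict:
    """What an imperfect-information player actually gets to see.

    In a perfect-information game like Go, this function would simply return
    the whole state; here the opponent's hole cards are withheld, so many
    different full states collapse into a single indistinguishable view.
    """
    return {
        "my_hole_cards": state.my_hole_cards,
        "community_cards": state.community_cards,
        "pot": state.pot,
    }

# Hypothetical hand, purely for illustration.
state = FullPokerState(["Ah", "Kd"], ["7c", "2s"], ["Qs", "Jh", "9d"], pot=300)
print(observation(state))  # the opponent's 7c and 2s never appear
```

Every strategy the program forms has to be built on top of that thinned-out view, which is precisely what makes bluffing both possible and necessary.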
Enter Libratus. Dr. Tuomas Sandholm and his team at Carnegie Mellon University first set an earlier version of the program loose in a high-stakes poker competition back in 2015. After that first attempt was defeated, the program was overhauled and, as Libratus, once again clashed with four of the top players in the world in early 2017, promptly obliterating the competition this time around. It wasn’t even close. This excellent piece by Engadget (VICE News also took a crack at it) sheds some light on the event – both the particular details of the competition and how it played out, and a bird’s-eye view of the implications of Libratus’ resounding victory.
Discussion of what was occurring under the hood throughout the competition, although certainly interesting, bears little relevance to the issue at hand. Rather, as alluded to above, the question of particular interest here – both to Sandholm and to us – is how Libratus fared when asked to make bets without any knowledge of its competitors’ hands, or when it did not hold an ideal hand itself. It turns out the Good Lord blessed AI with quite the aptitude for bluffing. As the interviews with the players and researchers in the video demonstrate, Libratus showed a penchant for betting in completely unorthodox ways: behavior that human players would historically have viewed as either erratic or simply foolhardy, and would almost certainly have avoided like the plague. And yet in taking this supposedly foolish tack, Libratus was successful at almost every juncture, effectively shattering much of what had been assumed about “good” and “bad” poker strategies.
In response to those who would interject here and suggest that the innovative strategies Libratus has contributed to the world of poker are purely a function of the programming that molded it, we should highlight the post-tournament commentary on its behavior from Sandholm and Nick Nystrom (Senior Director of Research at the Pittsburgh Supercomputing Center). As might be suspected, it is strikingly similar to the analysis that followed AlphaGo’s victories and its unorthodox behavior in that case. According to Sandholm and Nystrom, many of the strategic decisions Libratus made are notable precisely because of how unpredictable they were. Its creators endowed it with the capacity to solve problems simply by teaching it the rules of the game; it solved those problems in an unprecedented manner. As Nystrom mentions at the conclusion of the video, this is exactly why those on the AI bandwagon are so stoked about successes of this nature. They have dreams about its applications in a variety of arenas, drooling over the prospect of giving it free rein to solve problems where human minds have so far come up short. This ultra-creepy HP ad (essentially patting themselves on the back for their role in the project) makes just such a connection, anticipating AI’s ever expanding role on the international economic stage.
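Libratus’s internals are not detailed above, and nothing here should be read as a claim about its actual algorithm. But the general flavor of handing a program nothing except the rules and payoffs of a game and letting it discover its own strategy through self-play can be sketched with a deliberately tiny example. The snippet below runs regret matching – a standard building block in this family of methods, used here purely as an illustration – on rock-paper-scissors.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a, b) in BEATS else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def self_play(iterations=200_000):
    """Two regret-matching players learn against each other from scratch."""
    regrets = [[0.0] * 3, [0.0] * 3]
    strategy_sums = [[0.0] * 3, [0.0] * 3]
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        picks = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in range(2):
            me, opp = picks[p], picks[1 - p]
            for a in range(3):
                # Regret: how much better action a would have done than the action played.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - payoff(ACTIONS[me], ACTIONS[opp])
            strategy_sums[p] = [s + x for s, x in zip(strategy_sums[p], strats[p])]
    # It is the *average* strategy over all iterations that converges.
    return [[s / sum(sums) for s in sums] for sums in strategy_sums]

print(self_play())  # both averaged strategies approach the 1/3, 1/3, 1/3 equilibrium
```

Nobody tells the program what a “sensible” strategy looks like; it simply accumulates regret for the actions it wishes it had taken, and its averaged behavior drifts toward the equilibrium mix on its own. In a very small way, that is the same dynamic that lets far larger systems wander into strategies their creators never anticipated.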
A Proper Frame of Reference
Zooming out a bit, one might ask why I have dwelt so long on an examination of AI. It is simply because, in conversations like the present one, there is a temptation when discussing AI to erect boundaries around it which aim to distinguish between technology in general and AI in particular. Therefore, the argument goes, it may be possible to demonstrate that AI is far from neutral without saying anything about the neutrality of technology in general. This line of thinking doesn’t hold much water, and it is an important one to address.
While it may certainly be acknowledged that AI possesses characteristics which distinguish it from other forms of technology, it must also be noted that AI ultimately springs from the foundational impulses that actuate technology to begin with. One can even go so far as to argue that AI is a distillation of all those essential impulses of the technological project. Rather than being a highly specified form of technology among many others, it is instead an excellent representative of technology understood as a whole. Indeed, a properly functioning AI is not an obscure offshoot of the technological project, but in many ways the culmination of it up to this point, and the realization of many of its aims that have been in place since its inception. In theory, AI is a realm in which all excess has been cut away, leaving a technological entity free to innovate by means of its own powers. It exists as an outworking of some of the most fundamental principles that have driven the slow – and sometimes furiously fast – march through the ages: namely increased efficiency and technique innovation, creative solutions to old problems, re-allocation and manipulation of resources, and a host of others. Put another way, it is technologically driven technology. To speak of AI is to speak of a purified form of technology, or, in the words of a friend, it is “technology embodied.”
As alluded to previously, these are unprecedented considerations in the conversation surrounding technology. They require a new level of rigor and deeper forms of imaginative engagement with the questions they raise. This is precisely why maintaining a grip on a proper conception of technology is vitally important, perhaps now more than ever. To lose sight of the fundamental recognition that technology is far from a neutral entity would be to set ourselves back at a uniquely disastrous moment. Instead, we should be equipping ourselves in hopes of keeping pace with the dizzying speed of exponential technological innovation. Of course, this can only be accomplished by investigating the specific ways it tends to shape us.
It is certain that technology is hurtling ever onward. The question remains: as it sweeps us along with it, are we traveling towards a worthy goal? If we have surrendered the grounds for even asking this question, then arriving at any useful answers becomes all but impossible. Remaining blind to the necessity of interrogating technology’s effects upon us will yield ever compounding incoherence; redoubling our efforts to do so may yet equip us to properly navigate a future that looks increasingly technological.
As our discussion here has shown, there is indeed a momentum underlying the technological project. Granted, much of this momentum stems from human contributions, but much of it also functions as a self-propelled mechanism which pulls us along with it. Failing to interrogate these questions from a proper angle – by ascribing neutrality where it does not belong – may disarm us of the faculties needed to adequately address that momentum, and prevent us from playing our part in steering it in a proper direction.