Technology: A tool or an agent?

Last week, the CEO of Google, Sundar Pichai, presented the capabilities of their new digital assistant, Google Duplex, in a quite impressive keynote. They showed how the assistant can call businesses to book appointments or make restaurant reservations while carrying on quite natural-sounding conversations. It was able to understand and react to ambiguity and could deal with answers that did not quite match what it originally asked for, which is quite a leap forward from the very limited interactions and tasks that, for example, Apple’s Siri used to be able to perform.

Sparked by this presentation, this article juxtaposes the two dominant views on the role of technology currently espoused by the major technology providers:

One view is that technology mainly serves as a tool for people, a sort of “bicycle for the mind” that enhances human capabilities. This view is notably embodied in the philosophies of Apple and Microsoft. Both companies were the main drivers of personal computing and emerged in the same era, the 1970s.

The other view is that technology should act as an agent, carrying out tasks for you independently. This view can be attributed to Google and Facebook, companies of the internet era that are much younger than Apple and Microsoft. In fact, this is exactly what the new abilities of Duplex are meant to demonstrate: it can carry out tedious tasks, such as making appointments, on your behalf. Autonomous cars also fall into this second category.

One of the main implications of these differing viewpoints is their different ethical setup. In the tool model, the users, the ones literally using the tools, are always in control and therefore assume responsibility for the actions performed with these tools. In the agent model, by contrast, the question of responsibility becomes less clear, because the technology has its own agency. What if this technological agent, say your autonomous digital assistant or your autonomous car, does something that goes against your intentions? Who is at fault: you, the technology provider, or maybe the agent itself? And what if it actually acted upon something that you wanted, but only subconsciously?

Therefore, defining the exact scope, the precise intentions, and the possible means of the actions performed by such an agent seems crucial. In my opinion, making these three things transparent is what we as users should demand from all our “agent providers”, namely Google and Facebook.
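
To make this demand a bit more concrete, here is a minimal sketch, entirely my own illustration and not anything Google or Facebook actually publishes, of what a transparent “agent manifest” could look like: a machine-readable declaration of scope, intention, and means that an agent would have to check before acting. All names in it (AgentManifest, permits, the example tasks) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AgentManifest:
    """A transparent declaration of what an agent may do, and how."""
    scope: frozenset[str]    # which tasks the agent may perform
    intention: str           # the user goal it is acting on
    means: frozenset[str]    # which channels or actions it may use

    def permits(self, task: str, channel: str) -> bool:
        """An action is allowed only if both task and channel were declared."""
        return task in self.scope and channel in self.means


# Example: a Duplex-style assistant restricted to reservations by phone.
manifest = AgentManifest(
    scope=frozenset({"book_restaurant", "book_appointment"}),
    intention="reserve a table for the user at the requested time",
    means=frozenset({"phone_call"}),
)

assert manifest.permits("book_restaurant", "phone_call")
assert not manifest.permits("buy_tickets", "phone_call")  # outside declared scope
assert not manifest.permits("book_restaurant", "email")   # undeclared channel
```

The point of the sketch is simply that all three elements can be stated explicitly and checked before the agent acts, rather than being left implicit in the provider’s code.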