Friday, April 10
Who’s Johnny? Anthropomorphic framing in human-robot interaction, integration, and policy
Kate Darling, Massachusetts Institute of Technology (MIT) Media Lab
As we increasingly create spaces where robotic technology interacts with humans, our tendency to project lifelike qualities onto robots raises questions around use and policy. Based on a human-robot-interaction experiment conducted in our lab, this paper explores the effects of anthropomorphic framing in the introduction of robotic technology. It discusses concerns about anthropomorphism in certain contexts, but argues that there are also cases where encouraging anthropomorphism is desirable. Because people respond to framing, framing could serve as a tool to separate these cases.
Ken Goldberg will be the discussant for Kate Darling’s paper, Who’s Johnny? Object Personification and the Significance of Framing in Robot Design, Integration, and Policy, on April 10 at 8:40 am. He is a Professor of Industrial Engineering and Operations Research in the College of Engineering at the University of California at Berkeley. He is also a Professor of Radiation Oncology at the University of California at San Francisco and the Faculty Director of the CITRIS Data and Democracy Initiative. He has published extensively on robots in medicine. He is also an inventor, artist, and documentarian.
This paper will be presented at 9:00 am on April 10.
Robot Passports
Anupam Chander, UC Davis
If the Internet of Things offers eyes and ears and robots add arms and legs, both these revolutionary technologies often depend on brains and memories located far away. This is the nature of the remote sensor/server architecture utilized by both the Internet of Things and cloud robotics. Thus, both the Internet of Things and robots rely on the free flow of information across national borders. But this global free flow of data is increasingly at risk from claims that such flows jeopardize privacy and security. Increasingly, national laws restrict the transfer of information outside the home country. A Dropcam, a Fitbit, a Nest thermostat and even a Google car all depend on the flow of data to the home country of their creators. The Internet of Things and cloud robotics may thus find themselves foiled by national borders, victim to a new privacy-based non-tariff barrier to trade.
Can international trade law, which after all seeks to liberalize trade in both goods and services, help stave off attempts to erect border barriers to this new type of trade? The smart objects of the 21st century consist of both goods and information services, and thus are subject to multiple means of government protectionism, but also trade liberalization. This paper is the first effort to locate and analyze the Internet of Things and modern robotics within the international trade framework.
Ann Bartow will be the discussant for Anupam Chander’s paper, Robot Passports on April 10 at 10:15 am. She is a Professor of Law at Pace Law School with specialties in intellectual property law and gender studies. She is an Advisory Board member of the Electronic Privacy Information Center (EPIC), a member of the American Law Institute, and the past Chair of the American Association of Law Schools Executive Committee of the Defamation and Privacy Section. Her research has focused on intellectual property law, particularly copyright, both in the United States and abroad.
This paper will be presented at 10:15 am on April 10.
Panel: Robotics Governance
Regulating Robots: A Multi-scale Approach to Developing Robot Policy and Technology
Peter Asaro, The New School
There has been increasing discussion about how best to approach the question of developing regulatory policy alongside technological innovation in the field of robotics. Approaches range from the individual approach of traditional engineering ethics, to corporate self-policing and internal ethics review boards, such as those Google and DeepMind have embarked upon, to a national regulatory commission, to attempts at international regulation of certain kinds of robotic systems. In this paper I will examine the unique challenges of regulating robots and autonomous systems at each scale, as well as prospects for an approach that addresses the regulation of robots across these scales.
Running through each scale of regulation are concerns about the predictability of complex systems and unanticipated risks, as well as the challenge of establishing and articulating the fundamental values, public interests and norms which engineers, firms and policymakers ought to strive to meet. Meeting these concerns and challenges will require a creative mix of expertise, modeling, testing and the education and involvement of the public in discussing the values integral to the development of advanced robotic technologies. The paper will reflect on the ways in which this might be achieved to the advantage of both the technology industries and the public interest.
Sketching an Ethics Evaluation Tool for Robot Design and Governance
Jason Millar, Carleton University
As we march down the road of automation, robots are being programmed to make an increasing number of decisions autonomously, that is, without direct human control. However, those decisions can have a direct impact on the people using the robots. Autonomous cars are being programmed to decide how best to navigate dangerous, often lethal situations, thus directly impacting drivers/users. Internal Cardiac Defibrillators (ICDs) must decide if and when to administer potentially life-saving electrical shocks to a recipient’s heart, thus directly impacting recipients. And social media sites like Facebook contain automation algorithms that “curate” users’ identities in ways that are often unknown to the user, and are often designed to subtly manipulate them, thus directly impacting them. These examples demonstrate that as we delegate an increasing number of decisions to robots, we find that many of those instances of delegation are of a type that directly impact users’ personal moral lives.
This paper will provide a useful addition to the interdisciplinary field of robot ethics: a detailed sketch of a philosophically informed design tool that can be used to guide an evaluation of any robotic technology. A solution of this sort will benefit engineers, designers, ethicists, lawyers, and policymakers by: (i) sketching a standard cross-disciplinary language for use in analyzing a particular class of automated decision-making in robotics; (ii) sketching a standardized set of documentation and processes for use in performing a proportional analysis; and (iii) identifying a preliminary set of key evaluation criteria for use in performing a proportional analysis, making transparent the underlying philosophical model for discussion and critical analysis in an interdisciplinary context. Ultimately the tool could help engineers and designers make robots that are more trustworthy and trusted, and could provide guidance to policymakers concerned with the governance of robotics.
Driving Lessons: Learning from the History of Automobile Regulation to Inform Domestic Drone Regulation
Kristen Thomasen, University of Ottawa
Surveillance drones have transformative potential. Pervasive, tireless, and alternately imposing or discreet, drones equipped with cameras or other surveillance devices can, among other things, provide a vicarious sense of freedom and escape to their operators, offer new opportunities for expression, creativity and exploration, constitute luxury items for consumption, and, more importantly, change the lived experience of public space and provide commercial enterprises, personal users and the state with an opportunity to obtain extensive visual information for profit, criminal justice, or personal uses.
Drones will not be the first technology to have these cultural impacts. The car, another transformative technology, had many of these same effects on society. The automobile has become iconic in North American culture, associated with freedom, escape and exploration, prestige and luxury, and a pervasive impact on public space (which ultimately led to the redesign of the modern city). The car also serves as a major source of profit for private and public actors. Given the similarities between these technologies, the drone has the potential to be the car of the 21st century in terms of its cultural impact, generating changes in lifestyle, urban design, and the experience of privacy in public, among other things. The regulatory history of the car can therefore offer insights into the social and political challenges involved in integrating a new transformative technology, the drone, into society. This paper will examine the history of the North American regulation of automobile safety in order to make recommendations for the regulation of drone surveillance.
David G. Post will moderate the Robotics Governance Panel on April 10 at 11:30 am. Post is currently a Senior Fellow at the New America Foundation’s Open Technology Initiative, as well as a Non-Resident Fellow at the Center for Democracy and Technology and an Adjunct Scholar at the Cato Institute. Until his retirement in Fall 2014, he was the I. Herman Stern Professor of Law at the Beasley School of Law at Temple University, where he taught intellectual property law and the law of cyberspace; he also has taught at the law schools at Georgetown and George Mason University. Post is the author of In Search of Jefferson’s Moose: Notes on the State of Cyberspace (Oxford), a Jeffersonian view of Internet law and policy, and (co)-author of Cyberlaw: Problems of Policy and Jurisprudence in the Information Age (West), and has published numerous scholarly articles on intellectual property law, the law of cyberspace, and complexity theory, including the most-frequently-cited law review article published in the last 75 years in the field of intellectual property, Law and Borders: The Rise of Law in Cyberspace. His writings and additional information can be found online at http://www.davidpost.com.
The Robotics Governance Panel will be at 11:30 am on April 10.
Regulating Healthcare Robots in the Hospital and the Home: Considerations for Maximizing Opportunities and Minimizing Risks
Drew Simshaw et al., Indiana University and Duke University
Some of the most dynamic areas of robotics research and development today are healthcare applications. Demand for these robots will likely increase in the coming years due to their effectiveness and efficiency, an aging population, the rising cost of healthcare, and the trend within the industry toward personalized medicine. But all-purpose “healthcare companions” and robotic “doctors” will not be available for purchase or be deployed in our hospitals any time soon. Rather, robots will enter healthcare through a gradual evolution over the coming decades.
There are basic, pressing issues that need to be addressed in the nearer future in order to ensure that robots are able to sustain innovation with the confidence of providers, patients, consumers, and investors. We will only be able to maximize the potential of robots in healthcare through responsible design, deployment, and use. This must include consideration of potential issues that could, if overlooked, manifest themselves in ways that harm patients and consumers, diminish the trust of key stakeholders in healthcare robots, and stifle long-term innovation by provoking overly restrictive, reactionary regulation. In this paper, we focus on the issues of patient and user safety, security, and privacy, and specifically the effect of medical device regulation and data protection laws on robots in healthcare.
Discussant: Cindy Jacobs, University of Washington
This paper will be presented at 2:00 pm on April 10.
Legal and Ethical Issues in the Use of Telepresence Robots: Best Practices and Toolkit
Nathan Matias & Chelsea Barabas, MIT Center for Civic Media
Christopher T. Bavitz, Berkman Center for Internet & Society, Harvard Law School
Cecillia Xie & Jack Xu, Harvard Law School Cyber Law Clinic Students, Spring 2015
The Cyberlaw Clinic, Berkman Center for Internet & Society, Harvard Law School
The deployment of telepresence robots creates enormous possibilities for enhanced long-distance interactions, educational opportunities, and bridging of social and cultural gaps. One can imagine scenarios in which telepresence robots may be used to foster political inclusion by enabling citizens to remotely attend gatherings such as city council meetings or to improve access to healthcare by enabling doctors to check on patients in remote locations. Telepresence robots may also be used to promote information exchange by enabling multiple users in a remote location to attend an event (such as a class or conference) and actively participate in ways that go well beyond what could be accomplished via a mere livestream.
The use of telepresence robots raises some legal and ethical issues, however. Telepresence robots present tort issues, intellectual property questions, and wiretap potential. Telepresence robots may also be vulnerable to malicious actors. This proposal outlines the development of a law and ethics toolkit directed to those who operate and allow others to operate telepresence robots, describing some of the potential legal and ethical issues that arise from their use and offering proposed responses and means of addressing and allocating risk.
Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame and Fellow at Notre Dame’s Reilly Center for Science, Technology, and Values. Her research interests include robotics, social signal processing, health informatics, and roboethics. She focuses on designing autonomous robots able to sense, respond, and adapt to human behavior. Her work also tackles real-world problems in healthcare, by creating novel sensing and robotics technology to improve patient safety. Riek has received the NSF CAREER Award, a Qualcomm Research Scholar Award, several best paper awards, and five recognition awards during her eight-year tenure as a Senior Artificial Intelligence Engineer / Roboticist at MITRE. She was recently named as one of ASEE’s 20 Faculty under 40. Riek serves on the editorial board of IEEE Transactions on Human Machine Systems, as well as numerous conference program committees. She received her Ph.D. in Computer Science from the University of Cambridge, and her B.S. in Logic and Computation from Carnegie Mellon University.
This paper will be presented at 3:15 pm on April 10.
Saturday, April 11
Unfair and Deceptive Robots
Woodrow Hartzog, Samford University’s Cumberland School of Law; Affiliate Scholar, Center for Internet and Society at Stanford Law School
Robots like household helpers, personal digital assistants, automated cars, and personal drones are or will soon be available to consumers. These robots raise common consumer protection issues, such as fraud, privacy, data security, and risks to health, physical safety and finances. Robots also raise new consumer protection issues, or at least call into question how existing consumer protection regimes might be applied to such emerging technologies. Yet it is unclear which legal regimes should govern these robots and what consumer protection rules for robots should look like.
The thesis of the article is that the FTC’s grant of authority and existing jurisprudence make it the preferable regulatory agency for protecting consumers who buy and interact with robots. The FTC has proven to be a capable regulator of communications, organizational procedures, and design, which are the three crucial concepts for safe consumer robots. Additionally, the structure and history of the FTC show that the agency is capable of fostering new technologies, as it did with the Internet. The agency defers to industry standards, avoids dramatic regulatory lurches, and cooperates with other agencies. Consumer robotics is an expansive field with great potential. A light but steady response by the FTC will allow the consumer robotics industry to thrive while preserving consumer trust and keeping consumers safe from harm.
Ryan Calo will discuss Woodrow Hartzog’s paper, Unfair and Deceptive Robots, on April 11 at 8:30 am. He is an Assistant Professor at the University of Washington School of Law and an Assistant Professor at the University of Washington Information School. He is a co-director of the University of Washington Tech Policy Lab and an Affiliate Scholar at the Stanford Law School Center for Internet and Society and the Yale Law School Information Society Project. He is on the advisory boards of the Electronic Privacy Information Center, the Electronic Frontier Foundation, and the Future of Privacy Forum. He has published on robotics and privacy.
This paper will be presented at 8:30 am on April 11.
The Presentation of the Machine in Everyday Life
Tim Hwang, Data & Society Research Institute; Karen Levy, NYU School of Law, Data & Society Research Institute
In this paper, we consider the distance between how autonomous systems present themselves to users and how they “really are” – in other words, the optics of autonomous systems vs. their internal mechanisms and capabilities. Intelligent systems are commonly designed to portray themselves as less autonomous or capable than they technically are. We describe these design elements as “theaters of volition” – signals that might, for instance, make users feel as though they are in more control of an autonomous system than they in fact are. (Consider, for instance, a nonfunctioning steering wheel on a self-driving vehicle, or a “door close” elevator button that is not actually functional.)
In thinking about theaters of volition, we apply Erving Goffman’s classic sociological conception of the “presentation of self in everyday life.” Goffman (1959) famously characterized social identity as a juxtaposition of “front stage” and “back stage” elements. In the front stage, the actor performs rituals that have social meaning to the audience; in the back stage, the “real” actor can emerge. In the same way, theaters of volition are dramaturgical performances by machines, aimed at smoothing social interaction by increasing their acceptability.
As policy concerns around intelligent and autonomous systems come to focus increasingly on transparency and usability (for instance, recent calls for “algorithmic literacy”), the time is ripe for an inquiry into the theater of autonomous systems. When do (and when should) law and policy explicitly regulate the optics of autonomous systems (for instance, as Calo (2012) describes, requiring electric vehicle engines to “rev” audibly for safety reasons) as opposed to their actual capabilities? What are the benefits and dangers of doing so? What economic and social pressures compel a focus on system theater, and what are the ethical and policy implications of such a focus?
Evan Selinger will be the discussant for Karen Levy & Tim Hwang’s paper, The Presentation of the Machine in Everyday Life, on April 11 at 9:45 am. He is an Associate Professor of Philosophy and the Head of Research Communications, Community & Ethics at the Media, Arts, Games, Interaction, Creativity (MAGIC) Center at Rochester Institute of Technology. Much of his research focuses on ethical dimensions of science and technology, with a growing emphasis on privacy. Deeply committed to public philosophy, Evan routinely supplements his scholarly publications with contributions to magazines, newspapers, and blogs, including the Christian Science Monitor, Wired, The Atlantic, and Slate. During 2015-2016, he will spend a sabbatical year as a Senior Fellow at The Future of Privacy Forum. More information can be found at http://eselinger.org.
This paper will be presented at 9:45 am on April 11.
I Did It My Way: On Law and Operator Signatures for Teleoperated Robots
Tamara Bonaci et al., University of Washington
Teleoperated robotic systems are those where a human operator controls a remote robot through a communication network. In surgery, bomb disposal, underwater exploration, and other applications, institutions such as courts, agencies, and firms will want to determine and verify the identity, skill level, and other traits of the remote operator. The concept of operator signature represents a new approach to monitor, analyze, and validate operators’ performance. This approach is based on the assumption that each operator interacts with a remote robot in a unique way, thus generating a unique biometric (signature), which can be extracted and used for further validation.
This paper discusses legal liability and evidentiary issues that operator signatures could occasion or help to resolve. We first provide background on teleoperated robotic systems, and introduce the concept of operator signatures. We then discuss some cyber-security risks that may arise during teleoperated procedures, and describe the three main tasks operator signatures seek to address—identification, authentication, and real-time monitoring. Third, we discuss the legal issues that arise for each of these tasks and the legal problems operator signatures help mitigate. We then focus on liability concerns that may arise when operator signatures are used as a part of a real-time monitoring and alert tool. We consider the various scenarios where actions are conducted on the basis of an operator signature alert. Finally, we provide preliminary guidance on how to balance the need to mitigate cyber-security risks with the desire to enable adoption of teleoperation.
Margot Kaminski will be the discussant for the Tamara Bonaci et al. paper, I Did It My Way: On Law and Operator Signatures for Teleoperated Robots, on April 11 at 11:00 am. She is an Assistant Professor of Law at Ohio State University, Moritz College of Law. She researches and writes on law and technology. She is a graduate of Harvard University and Yale Law School. Professor Kaminski’s research and policy work focuses on media freedom, online civil liberties, international intellectual property law, legal issues raised by AI and robotics, and surveillance. She has written on law and technology for the popular press, and appeared on NPR’s On the Media and other radio shows and podcasts. From 2011 to 2014, Professor Kaminski served as the executive director of the Information Society Project at Yale Law School, an intellectual center addressing the implications of new information technologies for law and society. She remains an affiliated fellow of the Yale ISP.
This paper will be presented on April 11 at 11:00 am.
Panel: Robot Economics
Colin Lewis is a behavioral economist and data scientist, who studies the impact of behavior, economics and culture on the future by exploring the interactions between technology and society.
Andra Keay is the Managing Director of Silicon Valley Robotics, an industry group supporting the innovation and commercialization of robotics technologies. She is also founder of Robot Launch, a global robotics startup competition, and co-founder of Robot Garden, a new robotics hackerspace. Andra is Director of Industry & Startup Relations at Robohub.org, the global site for news and views on robotics. Andra graduated as an ABC film, television and radio technician in 1986 and obtained a BA in Communication from the University of Technology, Sydney (UTS) Australia, in 1998. She obtained her MA in Human-Robot Culture at the University of Sydney, Australia in 2011, building on a background as a robot geek, STEM educator and film-maker.
Garry Mathiason, Esq., is a senior partner with Littler Mendelson, the largest global law firm exclusively devoted to labor and employment law. He originated and co-chairs Littler’s Robotics, Artificial Intelligence (AI) and Automation Practice Group, providing legal advice and representation to the robotics industry, as well as employers deploying this technology in the workplace. His robotics and AI practice includes workplace safety standards, privacy requirements, robot collaboration and human displacement, anti-discrimination law, and legislative and regulatory developments. He is widely recognized as a futurist and one of the leading authorities on employment law trends in the United States. He routinely advises Fortune 1000 employers regarding workplace law compliance, class action litigation, employee skill requirements, and retraining programs.
Mr. Mathiason has been named one of the top 100 most influential attorneys in the nation by the National Law Journal and has received the highest rankings from Chambers USA, Who’s Who Legal, and The Best Lawyers in America. In 2013, he was among ten attorneys recognized in Human Resource Executive’s inaugural Hall of Fame as one of the nation’s most powerful employment attorneys. He has argued cases before the U.S. and California Supreme Courts. Mr. Mathiason is a founder of NAVEX Global, the ethics and compliance experts, which provides superior legal compliance solutions through an array of GRC products and services.
Dan Siciliano is the moderator of the Robot Economics Panel on April 11 at 1 pm. He is a Professor at Stanford Law School and the Associate Dean for Executive Education and Special Programs. He is the Faculty Director of the Arthur and Toni Rembe Center for Corporate Governance and a Co-Director of Stanford’s Directors’ College. In addition to being a law professor, Mr. Siciliano is an entrepreneur and serves as a governance consultant and trainer to board directors of several Fortune 500 companies.
The Robot Economics Panel will be at 1:00 pm on April 11.
Personal Responsibility in the Age of User-Controlled Neuroprosthetics
Patrick Moore et al., University of Washington
The fields of robotics and medical prosthetics are natural allies—each is concerned with designing and building systems capable of interacting with the physical world in a human-like fashion. Roboticists draw from nature in designing manipulators, mobility systems, and sensors; prosthetics in turn use these technologies to mimic biological systems and repair or augment the human body.
But what of the mind? If one accepts the principle that the mind is a product of physical processes which occur within the human brain, it follows that our concepts of self, of free will, and of social responsibility also flow from those processes. What happens when human minds develop prosthetics capable of altering a user’s cognition, emotional states, and perception of the world by tampering directly with the brain/mind mechanism?
Such devices are not fiction. With an installed base of over 100,000 users, so-called “brain pacemakers,” or deep-brain stimulation (DBS) systems, operate at the cutting edge of our ability to integrate computer technology directly into the human brain. A typical DBS system consists of a control and power unit implanted in the user’s chest cavity, connected to a set of transcranial electrodes which extend into the user’s brain. Current implementations of DBS technology, however, have no sensors—and thus lack the capacity to determine whether a user is currently experiencing pathological symptoms. Such open-loop systems must be continuously active while the user is conscious in order for the system to grant any therapeutic benefits. This continual stimulation overexposes users to the side-effects of DBS. In addition to physical sensations such as tingling, burning, and proprioceptive distortions, users also report neuropsychiatric effects, including cognitive and speech dysfunction, impulsivity, and changes in self-image. Though many of the neuropsychiatric effects of DBS are unintentional consequences of today’s primitive systems, it is easy to imagine more advanced systems designed to cause deliberate changes in any aspect of the human mind. While the engineering challenges surrounding closed-loop DBS (CL-DBS) are the subject of ongoing research and development, we believe that this technology opens new doors in the realms of law and ethics. The spaces beyond those doors must be explored if this technology is to have a role to play in the future of our society. The threshold question: who will be in control? The computer, or the brain?
This paper investigates whether giving users volitional control over a CL-DBS system is ethically and legally permissible. We believe that it is not only permissible—it is in fact advantageous when compared to the alternative of making the system’s operation entirely automatic. From an ethical perspective, volitional control maintains the integrity of the self by allowing the user to view the technology as restoring, preserving, or enhancing one’s abilities without the fear of losing control over one’s own humanity. This preservation of self-integrity carries into the legal realm, where giving users control of the system keeps responsibility for the consequences of its use in human hands.
Dr. Meg Leta Jones will be the discussant for the Patrick Moore et al. paper, Personal Responsibility in an Age of User-Controlled Neuroprosthetics on April 11 at 2:45 pm. She is an assistant professor in Georgetown University’s Communication, Culture & Technology department where she researches and teaches in the area of technology law and policy. Her research interests cover a wide range of technology policy issues including comparative censorship and privacy law, engineering design and ethics, legal history of technology, robotics law and policy, and the governance of emerging technologies. Prof. Jones received her B.A. and J.D. from the University of Illinois and her Ph.D. from the University of Colorado, Engineering & Applied Science, (Technology, Media & Society).
This paper will be presented at 2:45 pm on April 11.