Kevin L. Miller
Researching, Writing, Inventing and Practicing
At the Intersection of Law and Technology

Photo taken from the top of Mt. Ruapehu at the southern end of the Taupo Volcanic Zone in New Zealand. Elevation: 9,177'


Publications and Research

A Framework for Unified Privacy Management in a Multi-Actor Environment

• Panelist at WE ROBOT 2017—Yale Univ., New Haven (Apr. 2017)
• Presenter at DATA FOR POLICY Conference—London (Sept. 2017)

The presence of robotic devices in our environment gives rise to unique privacy problems unlike those in other domains. Despite rapid advancement in the perception, movement, and learning capabilities of robots, robot privacy still lacks an effective research program. This research advances the conversation by proposing technological solutions aimed at the nexus between privacy as a legal and sociological concept and robot control in multi-actor environments. Its goal is to investigate technological approaches that lay the groundwork for addressing the unique and very real privacy management challenges arising when humans coexist with robots and other devices. As applied to the field of robotics, the purpose of privacy management is to govern the observation, movement, and recording activities of a robot in accordance with the expectations of the humans with whom it interacts.

This paper considers the basic robot privacy problem from an essentially cybernetic viewpoint: aligning command and control of robots with human expectations in an environmental context. We develop a reference technical architecture, or “framework,” necessarily incomplete but arrayed as a multi-pronged research agenda, to define structural concerns and implementation options that can assist in meeting the privacy challenges entailed by this new robotic environment. While predominantly a technical framework, this work uses legal and sociological understandings to design a model that exposes systemic assumptions and neutrally adapts norms to account for cultural and contextual subtleties.  More specifically, the objective is to ensure that robot control functions—namely, sensor activation and recording, as well as movement and action—meet the contextually sensitive privacy expectations of individuals coinhabiting the robot’s zone of influence.

To that end, a taxonomic schema is described that can be accessed by robotic device makers to inform sensor collection, data collection, storage parameters and constraints, and the permissible range of movements, motions, and activities of a robot based on individualized, context- and role-sensitive privacy preference rules. A privacy preference enunciator device and associated transport mechanisms are introduced that allow individuals and the robots they encounter in ad hoc environments to exchange privacy preference data in accordance with the taxonomic schema. Privacy preference rule selection and comprehensive resolution protocols are developed that allow for the automated or interactive resolution of conflicts arising between individuals in multi-actor environments or ambiguous contexts. Accountability and audit mechanisms are discussed, as are trust and security models for mitigating secondary privacy harms.
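The schema, enunciator, and resolution protocols described above are conceptual, but a minimal sketch can make the idea concrete. The field names and the most-restrictive resolution policy below are illustrative assumptions, not drawn from the paper's actual taxonomic schema:

```python
from dataclasses import dataclass

# Hypothetical encoding of a context- and role-sensitive privacy
# preference rule; the fields are illustrative, not the paper's schema.
@dataclass(frozen=True)
class PrivacyRule:
    context: str                  # e.g. "home", "hospital", "street"
    role: str                     # role of the observed individual
    permitted_sensors: frozenset  # sensors the robot may activate
    may_record: bool              # whether recording is allowed

def resolve(rules):
    """One plausible resolution protocol for multi-actor conflicts:
    the robot may activate only sensors that every present individual
    permits, and may record only if every individual allows it."""
    if not rules:
        return frozenset(), False
    sensors = frozenset.intersection(*(r.permitted_sensors for r in rules))
    recording = all(r.may_record for r in rules)
    return sensors, recording

# Two people share the robot's zone of influence with conflicting rules.
alice = PrivacyRule("home", "resident", frozenset({"camera", "microphone"}), True)
bob = PrivacyRule("home", "guest", frozenset({"camera"}), False)

sensors, recording = resolve([alice, bob])
print(sorted(sensors), recording)  # only the camera; no recording
```

A deployed framework would of course need richer context matching, interactive (rather than purely automated) conflict handling, and the audit and trust mechanisms the paper discusses; this sketch shows only the shape of rule exchange and resolution.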

What We Talk About When We Talk About "Reasonable Cybersecurity": A Proactive and Adaptive Approach

90(8) The Florida Bar Journal 22 (Sept/Oct 2016)
Rpt. in The Computer & Internet Lawyer (March 2017)

The current U.S. legal framework for cybersecurity is a patchwork, consisting of a number of overlapping federal standards aimed at regulated entities in various sectors, state cyber-breach notification laws, state statutes, and caselaw arising from consumers’ actions against companies. Despite the lack of a comprehensive standard, a requirement for organizations to implement affirmative cybersecurity practices has arisen as a result of the body of administrative law stemming from Federal Trade Commission (FTC) enforcement actions. Although the FTC lacks any specific statutory authority to regulate cybersecurity policy, it has repeatedly used its broad authority to prohibit “unfair or deceptive acts or practices in or affecting commerce” to enforce data protection standards against companies.

The near ubiquity of state cyber-breach notification laws is testament to the practically universal belief that organizations should notify individuals when hackers steal their data. This bare statutory duty has in some cases disoriented companies with respect to their deeper legal obligations under a reasonable cybersecurity standard. Companies have become quite adept at enacting incident response plans that notify customers and relevant agencies, provide a year of credit monitoring, and hire cyber-defense contractors to review and secure their data systems after the fact. However, such plans address what to do after one’s defenses have failed, rather than implementing reasonable cybersecurity to avoid problems in the first place. This article argues that an organization has affirmative responsibilities to protect key customer data, and that the notion of reasonable security is shaped by, and evolves with, technology, regulatory guidelines, and common practices in a business sector. These responsibilities, and a company’s burden to implement a process that adapts to changing practice over time, must be proactive, rather than reactive, at their core.

Total Surveillance, Big Data, and Predictive Crime Technology: Privacy's Perfect Storm

19(1) Jour. of Technology Law & Policy 105 (June 2014)

Since the first widespread uses of computer databases in the 1970s, experts have warned of the Orwellian “computer state” in which governments and private corporations collect, store, and share vast troves of data about citizens. In the last decade or so, new technologies have been brought to bear upon the information management challenge posed by this deluge of data. These new techniques have targeted three distinct, but related, areas. First, they have enabled the cataloging of human behaviors that were previously ephemeral. These enhanced cataloging powers have coincided with an increasing willingness by law enforcement agencies to conduct—and courts to condone—widespread, total surveillance of citizens in the name of national security. Second, semantic query systems and “big data” analytical engines have introduced an approach to discerning patterns in data that prior systems lacked. The methodology underlying these approaches is tacit, but, I will argue, likely flawed. Third, these new techniques of surveillance gathering and data analysis have begun to transition into their next phase, prediction and scoring of individuals’ risk of criminal behavior. Individualized suspicion of criminal activity once triggered a review of a person’s data portfolio, but now the data portfolio triggers individualized suspicion.

While predictive techniques have been used in targeted areas of criminology for decades, this article argues that the move toward predictive policing using automated surveillance, semantic processing, and analytics tools magnifies each technology’s harms to privacy and due process, while further obfuscating the systems’ technological and methodological limitations. Furthermore, they do so with little offsetting diminishment of the risk of criminal activity or terrorism. The time is right to revisit predictive systems in light of these new advancements.

Legal protections for individual privacy are at a low ebb in the United States, as countless commentators and the recent release of long-secret FISA court opinions have demonstrated. A long string of cases interpreting the First and Fourth Amendments has shown that those legal doctrines are mostly inadequate to meet the challenges posed by the use of modern, technologically amplified surveillance and prediction techniques. My purpose here is to consider the legal, technical, and methodological issues raised by surveillance-fed predictive systems that may substantiate policy arguments against their widespread adoption. If this policy position is convincing, then legal and economic arguments could be brought to bear to discourage the conditions which have fostered the explosive growth and abuse of these systems.

With those objectives in mind, the paper proceeds in four parts. Part II describes the paradigm of the “triple threat” to privacy which stems from total surveillance, big data analytics, and actuarial trends in policing. Part III surveys methodological problems with big data analytics and predictive policing which make these tools much less useful than advertised. Part IV considers the difficulties of using traditional First and Fourth Amendment doctrine in the light of technological advances. Finally, Part V discusses the possible methods of curbing the use of these flawed tools in the pre-crime prediction arena by exploring various expanded legal and economic approaches.

The Kampala Compromise and Cyberattacks: Can there be an International Crime of Cyber-Aggression?

23 Univ. of Southern California Interdisciplinary Law J. 217 (Winter 2014)

At the Kampala Review Conference in 2010, after decades of delay and debate, the States Parties to the Rome Statute finally agreed on a definition of the crime of aggression acceptable for prosecuting individuals at the International Criminal Court (ICC). Reactions to the new definition have been mixed, and many scholars have expressed concern that the new crime will have narrow applicability to modern conceptions of warfare. Cyberattacks, drone strikes, and chemical and biological attacks, conducted by both state and non-state actors, fit poorly into conceptions of warfare born during World War II and the Cold War.

This Article considers whether the definitions adopted at Kampala can be applied to cyberattacks and hence can be used to decelerate the arms race in an increasingly aggressive cyberspace. After briefly reviewing the history behind the Crime of Aggression, this Article examines several recent cyberattacks with the goal of clarifying the key differences between cyberattacks and conventional attacks. It argues that the definitions at Kampala can be flexibly interpreted by the ICC judges to encompass cyber-aggression. However, the meanings of key terms in the definition of aggression inevitably will undergo further shaping by international standard-setting and State practice. The Article argues that U.S. practice, in particular, is expanding the definition, paradoxically making U.S. actions more likely to be perceived as cyber-aggression, and that U.S. policy should be reshaped in light of its influence on this developing area of international law. The Article then considers whether the Crime of Aggression's leadership clause forms a barrier to the prosecution of cyberattacks. Ultimately, the author concludes that, at least for the time being, practical and jurisdictional barriers will likely forestall the Crime of Aggression from being applied in the context of cyber-aggression. Consequently, several other methods should be simultaneously pursued by international groups to promote a peaceful cyberspace.

Cyberattacks, the Laws of War, and the Crime of Aggression

22 ILSA Quarterly 21 (Oct. 2013).

Categorizing state-sponsored cyberattacks using classical descriptions of war and weaponry has proven challenging for the international community. How is a “cyber-weapon” classified when it has no physical manifestation other than inconvenience? How is data loss quantified? When a nation uses a computer virus to attack another nation's infrastructure, is the attacker breaking any laws? Is the victim state justified in responding in self-defense? Assuming a nation has the right to counterattack, how do planners evaluate the proportionality of their response, especially if the counterattack includes traditional munitions? International law is far from settled in this area. This article examines both the traditional laws of war and the newly-drafted ICC crime of aggression in the context of state-sponsored cyberattacks.

Professional NT Services: A comprehensive guide to professional service design and development

(Wiley/Wrox, 1998)

This book describes the design and implementation of highly scalable, multi-tiered systems using Windows NT services, a core part of the Windows architecture. It teaches you how to implement robust NT service designs that fit coherently into the system architecture of which they are a part. The book covers the basic theory and structure of NT services and how the different architectural components of a system interconnect, along with a survey of specific usage patterns. A bare-bones service architecture and code structure are analyzed to clarify what distinguishes a service from other types of executable objects in NT. The book covers event logging: how to report errors and other status information, how to add event logging to a C++ service class, and how to retrospectively monitor a usage pattern's implementation. It outlines the basics of NT's security architecture and how to control security programmatically from inside services. It introduces the concept of resource pooling by way of ODBC's implementation and discusses expanding and improving on that implementation through the Quartermaster usage pattern, in which a service pre-initializes database connections to make database query functions more efficient in large client/server or Internet systems. It shows you how to use the Active Template Library (ATL) to create services that can host COM objects, and uses the Quartermaster implementation to demonstrate the Business Object pattern. It covers debugging and tuning services, along with how to write administration programs for services. Finally, it provides a checklist for developing your own services, covering issues such as control, communication, security, and threading.
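The core of the Quartermaster idea, pre-initializing expensive resources so clients borrow and return them instead of paying setup cost per request, is language-independent. The book's own examples are C++ against the Win32 and ODBC APIs; the following is only a minimal Python sketch of the borrow/return discipline, with a stand-in for a real database connection:

```python
import queue

class Quartermaster:
    """Minimal sketch of the resource-pooling idea: expensive resources
    (e.g. database connections) are created up front, so clients borrow
    and return them rather than constructing one per request."""
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # pre-initialize every resource

    def acquire(self, timeout=None):
        # Blocks until a pooled resource is free.
        return self._pool.get(timeout=timeout)

    def release(self, resource):
        self._pool.put(resource)

# Stand-in for a pre-initialized database connection.
class FakeConnection:
    def query(self, sql):
        return f"ran: {sql}"

pool = Quartermaster(FakeConnection, size=4)
conn = pool.acquire()
print(conn.query("SELECT 1"))  # prints "ran: SELECT 1"
pool.release(conn)
```

A production pool, as the book describes for NT services, also needs per-resource health checks, security context handling, and thread-safe shutdown; the blocking queue here stands in for that machinery.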