Technology Partners, Big Data and the Future of Privacy
Technology has had a transformative impact on administrative practices in higher education. Self-service, 24/7 availability of basic administrative transactions — ranging from admissions to registration to financial and student account issues — has vastly improved the student experience over the long lines, inconvenient hours and “arena events” that typified these functions a generation ago. Beyond administrative functions, the core functions of teaching and learning have likewise seen enormous improvements. Aside from more timely and broader access to information, course content and educational material, colleges can now offer much richer content, online lectures, real-time tutoring and technology-enabled learning analytics in ways that promise to transform higher education fundamentally. These improvements should certainly be celebrated to the extent that they make higher education more effective, efficient, affordable and broadly available.
As with other transformational shifts, however, it is important to embrace the new pragmatically while paying realistic attention to its potential risks. The ubiquity of technology on campus — often implemented through contracts with outside vendors — is generating massive amounts of very sensitive and commercially valuable personal data about students, and it is increasingly allowing third parties to access, warehouse, mine and use the data in ways that threaten well-established precepts of good privacy practice.
Such seemingly pedestrian functions as simple storage of enrollment records can quickly transform into massive longitudinal individual tracking systems that generate information previously known only to the students themselves and, in some cases, information that even the subjects do not know about themselves. Enrollment tracking data repositories, for example, can now trace individuals across multiple institutions — from K-12 through graduate school — and even follow them into the labor market. Various efforts are underway to link these data marts with other data systems — health records, juvenile justice, police and court records, military service and unemployment insurance — to create “human capital” data systems that could be used to promote more effective policy and planning. The Big Brother undertones of even the most basic deployments of such systems are fairly obvious. What’s more, the well-intentioned aspirations of these attempts at social engineering often involve providing access to third parties, typically IT firms and often start-ups with promising insights but without the deep pockets or track record to ensure they will be around to face the consequences of possible breaches or internal misuse of the data entrusted to them.
Insufficient privacy protections in the implementation of IT on campus can quickly lead to dystopian extremes that exceed Orwell’s worst fears. Consider, for example, the growing use of web-based instructional materials and e-textbooks. As more analytics are embedded into instructional material, more information about students’ habits, intellectual strengths and weaknesses, and modes of engagement with the subject matter is being collected, analyzed and stored. While this information can certainly lead to breakthroughs in teaching and the advancement of learning, its possible misuses and its potentially detrimental impact on individual privacy are equally obvious. Colleges and universities understand the importance of protecting their students’ privacy, and realize that their excellent track record in that regard is the reason for the high levels of trust students and families place in them.
But with the rapid growth of technology on campus, institutional attention to privacy has too frequently degenerated into platitudes and abstract contract provisions that confuse security with privacy. Security, of course, addresses the prevention of unauthorized access to information, which is a huge problem in itself. Privacy, in contrast, concerns authorized access, and contracts should at a minimum articulate, with specificity and intentionality, what information is collected, why it is collected, who will get to access and use it, whether and under what circumstances it may be re-disclosed, how long it will be retained, how it will be disposed of, what rights the subjects of the information have to review and correct their records, and what obligations and liabilities accrue to custodians of personal information. In the United States, many IT contracts fall far short of this level of concreteness, instead making vague references to confidentiality and the federal educational privacy law known as FERPA, the Family Educational Rights and Privacy Act of 1974, as amended.
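To make that checklist concrete, here is a minimal sketch, purely illustrative and with hypothetical field and function names not drawn from any actual contract template or statute, of how an institution might encode these provisions as structured data and flag any that a proposed vendor agreement leaves unanswered:

```python
from dataclasses import dataclass

# Hypothetical record of the privacy terms a vendor contract should pin
# down, mirroring the questions enumerated above. Field names are
# illustrative assumptions, not taken from any real contract or law.
@dataclass
class PrivacyTerms:
    data_collected: str       # what information is collected
    purpose: str              # why it is collected
    authorized_users: str     # who will get to access and use it
    redisclosure_rules: str   # whether/when it may be re-disclosed
    retention_period: str     # how long it will be retained
    disposal_method: str      # how it will be disposed of
    subject_rights: str       # subjects' rights to review and correct records
    custodian_liability: str  # obligations/liabilities of custodians

def unresolved(terms: PrivacyTerms) -> list:
    """Name every provision left blank: a contract that cannot fill these
    in is trading on vague confidentiality language, not real privacy."""
    return [name for name, value in vars(terms).items() if not value.strip()]
```

Under these assumptions, counsel or a registrar’s office could instantiate one such record per contract and decline to sign while unresolved() still returns anything.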
Using FERPA compliance as a substitute for real privacy is sloppy practice for various reasons, not least because Big Data enthusiasts, in their zeal to access records, convinced the Obama administration to significantly rewrite and weaken FERPA in 2011. Beyond what privacy advocates (correctly) view as the wholesale evisceration of FERPA, amorphous references to a legal baseline are a poor replacement for articulating privacy ‘dos and don’ts’ in a clearheaded and intentional manner. Proper privacy contract provisions should ideally track the Fair Information Practice Principles, which the U.S. Department of Homeland Security, among others, has articulated in connection with its mission. As medical researchers have learned and demonstrated through their development of Institutional Review Boards, sound privacy practices — far from being an obstacle to legitimate uses of data — promote public confidence in, and greater systemic integrity of, data-driven research and quality improvement.