By Kas R. DeCarvalho

In recent years, legal and regulatory developments in privacy law and the misuse of personal data have begun to catch up with the massive growth in global electronic connectivity and internet access. Measures passed in the United States and Europe now guarantee internet users certain controls over their personal data, most notably the European Union’s 2018 General Data Protection Regulation (GDPR).

With that said, related technological advancements continue to open new avenues of business and consumer concern, together with a growing landscape of potential areas for increased legislation and regulatory control. Chief among these, and arguably the Wild West of developing jurisprudence, is the field of Artificial Intelligence, or “AI.”

AI is the catch-all term for the array of emerging computing technologies that perform tasks traditionally requiring human input, such as learning and decision-making. The often-touted promise of AI lies in the potentially transformative benefits and innovations it offers across virtually every industry and level of government. More and more products are hitting the market that offer extraordinary new ways of helping consumers run their homes and daily lives, and of extracting businesses from previously murky, subjective areas of decision-making in health care, insurance, credit assessment and other major commercial fields.

As companies increasingly embed AI in their products, services, and decision-making processes, the many proposed benefits have been accompanied by predictable concerns about the risks of misuse and/or unintended consequences inherent in any nascent technology. This has prompted a myriad of related discussions and recent regulatory initiatives aimed at developing applicable standards, as both the private and public sectors seek to provide reliable, robust and transparent AI systems. Attention is now shifting quickly from the relatively well-established privacy debates and their ensuing regulations to how that data is used by software, particularly by the complex, evolving algorithms that might approve loans or home purchases, diagnose physical and mental health conditions, drive automobiles, recommend prison sentences, or screen job applicants: in other words, Artificial Intelligence.

To this end, we have observed a number of lawmakers weighing the challenges and benefits of AI, together with a growing body of new regulations introduced to study, and in some cases reasonably mitigate, the related concerns.

Little or no legislation regulating the use of AI has yet been enacted at the Federal level, but we have observed a significant increase in the number of State initiatives and proposed regulatory frameworks across the country. Collectively, these suggest reasonable grounds for projecting the likely direction of both State and Federal regulation across the many industries now scaling up AI-driven initiatives.

In 2020, general AI bills or resolutions were introduced in at least 13 States, and in Utah, a resolution was passed to create a related tech initiative in higher education.

By the end of 2021, general AI bills or resolutions were introduced in 17 States—including four bills pending in Massachusetts, relating to state agency automated decision-making, various individual data and privacy rights, and the creation of a commission on transparency in the use of AI—with regulations enacted in Alabama, Colorado, Illinois and Mississippi.

A recent bill under consideration by the Rhode Island General Assembly contemplates restricting insurers’ use of algorithms and predictive models, and would direct the Health Insurance Commission and the R.I. Dept. of Business Regulation to engage in a stakeholder process related to the adoption and implementation of regulations.

Policymakers’ general regulatory concerns have centered on the potential lack of transparency, both with respect to the algorithms themselves and effective notice to the public of the use of AI, and on the potential for AI-based systems to inadvertently introduce bias and discrimination into decision-making.

In this rapidly developing landscape, best practices for companies will necessarily include robust new processes and tools for system audits and testing, documentation and traceability of both data and related processes, together with revised diversity and ethics awareness training protocols to assess whether products and services affected by AI remain aligned with company values and/or raise regulatory concerns. BMW, Google, Microsoft and a host of other forward-looking companies are developing formal AI policies with a renewed focus on diversity, fairness, privacy and consumer safety. Unless companies, including those not directly involved in AI development and/or implementation, proactively engage these challenges, they will risk potentially catastrophic regulatory and consumer-trust issues in the future.

PLDO remains committed to serving our clients’ needs by monitoring emerging regulations and consulting on applicable best practices as this area continues to evolve. If you have questions pertaining to AI or other business matters, please contact PLDO Partner Kas R. DeCarvalho at 401-824-5100 or email