The Artificial Intelligence (“AI”) industry is rapidly emerging as one of the most significant new technologies, poised to impact a broad array of industries. AI is well beyond the nascent stages of development and is quickly taking center stage as a profit driver for forward-thinking business leaders. In short, AI has arrived.
While the roots of the current wave of technologies stretch as far back as the 1950s, more recent developments have provided both the commercial rationale and the technical underpinnings for today’s advancements. Industry observers will recall that as recently as three to five years ago, the buzzword in the technology space was “Big Data,” springing out of advancements in the Internet and media fields. Big Data describes the gathering and storage of previously unimaginable amounts of data across virtually every consumer platform. The subsequent issue was not a lack of data, but rather a need for new and faster tools to effectively manipulate and use that data; without those tools, Big Data collection and storage was meaningless. The new computing architectures developed to solve this problem define the essence of today’s Artificial Intelligence.
More specifically, the modern landscape of cutting-edge AI tools rests on three basic elements: a) gathering and storing massive amounts of information; b) training computing systems to analyze that information for purpose-specific outcomes, such as predictive analyses of risk profiles and likely consumer trends; and c) generating nimble, current, and understandable reporting for human end users.
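For readers who want a concrete sense of how these three elements fit together in practice, the short sketch below is a deliberately simplified illustration rather than a production system: it uses Python with the NumPy and scikit-learn libraries, and the “risk profile” data and labels are hypothetical values generated purely for demonstration.

```python
# Minimal illustrative sketch of the three elements described above.
# All data here is synthetic; field meanings are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# (a) Gather and store information: a synthetic table of applicant
#     attributes standing in for real collected data.
rng = np.random.default_rng(seed=42)
features = rng.normal(size=(1_000, 3))               # 1,000 records, 3 attributes
risk_label = (features.sum(axis=1) > 0).astype(int)  # toy rule: 1 = higher risk

# (b) Train a computing system for a purpose-specific outcome:
#     a classifier that predicts the risk label from the attributes.
X_train, X_test, y_train, y_test = train_test_split(
    features, risk_label, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# (c) Generate understandable reporting for human end users:
#     a plain-text summary of how the trained model performs.
predictions = model.predict(X_test)
print(classification_report(y_test, predictions,
                            target_names=["lower risk", "higher risk"]))
```

In a real deployment, each step would of course be far more involved (data governance, model validation, and ongoing monitoring), but the basic gather-train-report structure is the same.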
Predictably, the banking and insurance fields have been early adopters, driving investment in faster and more effective methods of evaluating immediate coverage risks and borrower viability using an astonishing array of information parameters, ranging from digital imaging of post-accident automobiles to geographic data, consumer spending habits, political affiliation, and virtually limitless other potential factors.
Is AI for you? Address the business problem first.
A frustration commonly voiced by AI developers is the prevalence of C-suite directives for the technical corps to “get up to date with Artificial Intelligence.” The typical response is some variation of, “…to do what?”
This is not to suggest a limit to the array of potential uses for AI, but it does underscore a basic premise: AI is not merely a new software or hardware platform to be draped over any existing enterprise seeking an uptick in profitability. Rather, the current slate of AI technologies functions to address a specific business problem, and the most effective deployment of any AI platform requires first targeting the problem to be solved. That said, the array of currently active uses of AI is eye-opening. They include tailoring educational and tutoring services to individual students for best results; a popular streaming service’s platform for individualizing the screen interface for each consumer browsing the site (with reporting metrics and follow-up data showing materially improved viewership); bleeding-edge improvements to robotic and prosthetic technologies; improvements to the curation of social media user interfaces; fraud prediction in lending and consumer credit markets; instructing self-piloting drones to avoid obstacles in real time; identifying “fake media” reporting; and many others across dozens of verticals, including the health care, retail, and security industries. In short, before undertaking a material investment, the critical exercise is to determine what aspect of a given business can potentially be improved with an Artificial Intelligence solution, and then to build and/or train the AI platform to suit that specific need.
Navigating the legal and regulatory issues with AI
As quickly as AI is disrupting industries across multiple verticals, we have already observed a rise in related regulatory issues, beginning with the inherent problems of over-reliance on machine-generated decision-making. Where the design, training, and/or testing of an AI system is faulty, critical variables can be overlooked, yielding poor results. Alternatively, a given system may function perfectly, yet any AI system can only be as good as the data it is given. By way of example, in 2017 a government agency in Australia erroneously sent thousands of debt recovery letters to welfare recipients based upon data errors in the agency’s automated assessment system. Closer to home, Facebook announced in its April 2019 earnings report that the company expects to be fined as much as $5 billion by the Federal Trade Commission over privacy issues, at least some of which relate to company activities likely involving AI-driven determinations around targeted advertising to users.
Ironically, we can also expect a material rise in the use of Artificial Intelligence by regulators to seek out and identify potential misconduct and improve market oversight. Earlier this month, the Financial Times reported that the Australian Securities and Investments Commission is pursuing AI and natural language processing technology as a viable compliance and enforcement tool for the financial industry. This trend will predictably continue around the globe, and across virtually every major market sector.
PLDO attorneys are carefully following how plaintiffs and government agencies have begun to challenge uses of Artificial Intelligence in U.S. courts. On a closely related front, we are tracking the national landscape of privacy-related regulations (notably the California Consumer Privacy Act, the first material U.S. equivalent of the EU’s GDPR, which goes into effect on January 1, 2020), all of which will potentially intersect with the use of AI and machine learning platforms. Can the GDPR’s “right to an explanation” requirement be interpreted broadly enough to require full reporting of AI decision matrices otherwise protected as trade secrets? Will Facebook survive the U.S. government’s argument that the company violated federal and state antidiscrimination laws with its targeted job and housing advertisements?
The tech industry appears to recognize that broader adoption of AI technology requires building consumer trust, which in turn drives a need for fair, interpretable, safe, and transparent systems. Consequently, industry efforts to combat system bias and to create responsible Artificial Intelligence platforms are among the most important conversations currently taking place in the development space.
As our team continues to monitor these developments and the continuing evolution of Artificial Intelligence in the marketplace, we will advise on best practices for anticipating and mitigating the related legal and regulatory risks. If you have questions about this issue or other business matters, please contact Attorney Kas R. DeCarvalho at 401-824-5100 or email KD@pldolaw.com.
Disclaimer: This blog post is for informational purposes only. This blog is not legal advice and you should not use or rely on it as such. By reading this blog or our website, no attorney-client relationship is created. We do not provide legal advice to anyone except clients of the firm who have formally engaged us in writing to do so. This blog post may be considered attorney advertising in certain jurisdictions. The jurisdictions in which we practice license lawyers in the general practice of law, but do not license or certify any lawyer as an expert or specialist in any field of practice.