DEVELOPING AN EFFECTIVE “AI” POLICY

By Joel K. Goloskie

April 11, 2025

Artificial Intelligence (“AI”) is now a very real part of our business ecosystem, and will play an increasing role in many companies’ continued growth.  By leveraging a business’s data assets and the skills of its workforce, AI can help generate the efficiency and innovation that bolster a company’s leadership position.  Likewise, companies increasingly find themselves being served by vendors who use AI to augment the services they provide.  Any use of AI by or on behalf of a company must be undertaken responsibly and ethically, and a well-tailored AI Policy is an important step toward ensuring that critical goal.

The use of AI creates unique opportunities and risks, and should be undertaken only in accordance with an AI Policy tailored to the company and its particular activities.  Before using any product containing or utilizing AI on behalf of the company, associates should be required to read the Policy and ensure their compliance with it.  At a minimum, any associate who uses AI or an AI-enabled tool should be responsible for:

(1) ensuring that such use is consistent with the company’s AI Policy;
(2) monitoring the data input into any such tool; and
(3) confirming the accuracy of the output of such tool.

AI should be used as leverage, not as a shortcut.  Each associate who uses AI must ensure that its output is free of hallucinations, and that the quality of the output is at least as high as would have been attained without the use of AI.

As a cardinal rule, any use of AI by or for a company must ensure the privacy and security of the information processed.  This may seem self-evident with regard to, say, HIPAA-protected health information, but it applies equally to other personal information such as employee data.  Just as critical, AI use must protect a company’s intellectual property, including policies and procedures, business plans, and similar documents.  If this information is processed by software that utilizes AI, there is a very real possibility that its privacy and security will be compromised absent adequate safeguards.  Moreover, AI must be used responsibly and ethically, and never in a way that intentionally or unintentionally causes or supports impermissible discrimination.

Just as the capabilities of AI continue to evolve, so does the potential value of AI to a company and its customers.  PLDO continues to explore the potential and the risks inherent in this transformative technology, and will provide updates as appropriate.  In the meantime, if you have any questions or would like assistance in developing an AI Policy and related procedures, please contact PLDO Partner Joel Goloskie at (617) 771-1154 or [email protected].
