DISCLAIMER: The information provided in this article, other knowledge base articles, and the Compliancy Group website do not, and are not intended to, constitute legal advice. All information, content, and materials in the Knowledge Base and on the Compliancy Group website are for general informational purposes only.
The HIPAA Security Rule contains no mention of the phrase "AI" or the phrase "artificial intelligence." The Department of Health and Human Services, which enforces HIPAA, has yet to release guidance on the use of AI in a HIPAA environment. Whether the use of generative AI is compliant with HIPAA depends upon a number of factors. The use of AI poses HIPAA Security Rule and Privacy Rule concerns that must be addressed before use can be considered. Some common issues with the use of generative AI in a HIPAA environment are addressed in this article. Whether to use AI in any particular circumstance should be decided only after careful evaluation of all relevant factors, and with the input of appropriate internal and external personnel.
1. The HIPAA De-Identification Standard and AI:
If input of PHI into an AI platform is not desired, the PHI should be de-identified before it is disclosed to the AI platform. De-identified data is not subject to the HIPAA Privacy Rule. Failure to ensure proper de-identification may result in a breach of unsecured PHI. In an environment where de-identification of PHI before disclosure to an AI platform is sought, consider training employees, and those in the organization responsible for de-identification, on how the de-identification process works.
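As a loose illustration of the pattern described above, the sketch below strips direct identifier fields from a record before it would be disclosed to an AI platform. The field names are illustrative assumptions only; actual HIPAA de-identification requires either Expert Determination or removal of all 18 Safe Harbor identifier categories, and this sketch is not a substitute for either method.

```python
# Hypothetical sketch: removing direct identifier fields from a record
# before disclosure to an AI platform. The field list below is an
# illustrative assumption, not the complete Safe Harbor identifier set.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "date_of_birth": "1980-04-12",
    "diagnosis": "Type 2 diabetes",
    "medication": "metformin",
}

print(deidentify(patient))  # identifiers removed; clinical fields remain
```

Note that free-text fields (such as clinical notes) can still contain identifiers and would require additional review or redaction before disclosure.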
2. Business Associate Agreement Considerations:
A covered entity must enter into a signed business associate agreement with any AI vendor that the covered entity seeks to use to create, maintain, receive, and/or transmit PHI. Many AI vendors do not enter into business associate agreements, or enter into one only if a specific level of service is purchased.
A business associate agreement might be found invalid if the sole function the AI service performs is using a covered entity's PHI to teach or refine the model, as opposed to providing a service or performing an operation for the covered entity.
3. Minimum Necessary Standard:
The minimum necessary standard should be observed when considering what data to input into an AI model.
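One way to apply the minimum necessary standard in practice is an explicit allowlist of the fields a given AI task actually needs. The task name and field names below are assumptions for illustration; the point is the allowlist pattern, under which any new field is excluded by default.

```python
# Hypothetical sketch of a minimum necessary filter: only the fields a
# given AI task requires are passed along. The allowed set below is an
# assumed example for a summarization task, not a recommendation.
SUMMARY_TASK_FIELDS = {"diagnosis", "medication"}

def minimum_necessary(record: dict, allowed_fields: set) -> dict:
    """Return only the fields the task requires."""
    return {k: v for k, v in record.items() if k in allowed_fields}
```

An allowlist is generally a safer design than a denylist here: if the source record later gains a new field, it is withheld by default rather than disclosed by default.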
4. Secure Data Storage and Transmission:
Secure data storage and transmission methods should be used when an AI model is processing PHI. These methods include encrypting data at rest and in transit, and ensuring the AI language model is hosted on a secure and compliant infrastructure. Entities may consider using private clouds, on-premises servers, or HIPAA-compliant cloud services.
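The "encrypt in transit" requirement can be enforced at the application level with a simple guard that refuses to send data to any endpoint that is not using TLS. The sketch below uses only the Python standard library; the vendor URL is a made-up placeholder, and encryption at rest would typically be handled by the hosting infrastructure (encrypted volumes or a HIPAA-eligible cloud service) rather than in application code like this.

```python
# Hypothetical guard illustrating encryption in transit: refuse to
# transmit data over any URL that does not use HTTPS (TLS).
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Raise ValueError if the URL does not use TLS; otherwise return it."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing to transmit PHI over non-TLS URL: {url}")
    return url

# The endpoint below is a placeholder, not a real vendor API.
require_https("https://api.example-ai-vendor.com/v1/chat")   # passes
# require_https("http://api.example-ai-vendor.com/v1/chat")  # would raise
```

A guard like this only addresses the transport scheme; certificate validation, cipher configuration, and at-rest encryption still depend on the client library and hosting environment.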
5. Purpose of Loading Information:
If information is being loaded into the AI platform for treatment, payment, or healthcare operations, patient authorization might not be necessary. Where patient authorization for loading information into AI, and for use or disclosure of that information, is required, it must be obtained. Use of AI that constitutes a "recording" under state law must comply with state and federal recording consent laws.