Huski AI's Forward-Looking Approach to Regulating AI Usage

At Huski AI, we are excited about the promise of generative artificial intelligence to enhance how we serve our clients. However, as a company of AI and product experts, we are also cognizant of the risks that accompany the surge in AI services. That is why we are taking a proactive stance to ensure our professionals use these groundbreaking technologies responsibly while upholding our strict commitment to safeguarding data privacy for our clients and ourselves.

An Internal Policy on Generative AI Utilization

Our new policy governs our employees' use of generative AI services, such as tools built on LLMs, in our company's work. To be clear, by "generative AI" we specifically mean products and services powered by large language or image models that ingest prompts to generate content. In the spirit of transparency and knowledge-sharing, we are sharing the full text of the policy below this article.

Our approach to regulating internal AI use has three main themes:

Confidentiality – Our company classifies data according to our data classification matrix, and data classified as confidential or above may not be entered into prompts by default (a minimal sketch of such a gate appears after this list). This helps ensure that client data and sensitive firm data are not inadvertently disclosed or used to train these systems.
Responsible Use – Our people are accountable for AI-generated content as if it were their own, and they must carefully review that content for accuracy and for any issues that could adversely affect third parties.
Service-specific Review – Not all generative AI services are created equal. Whether our professionals may use a particular service depends on how that service approaches data processing, compliance, and legal terms.
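
To make the confidentiality theme concrete, here is a minimal sketch of a deny-by-default prompt gate in Python. The classification tiers and function name are illustrative assumptions only; our actual data classification matrix and enforcement tooling are internal.

    from enum import IntEnum

    class Classification(IntEnum):
        # Illustrative tiers; the actual data classification matrix is internal.
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    def may_enter_prompt(level: Classification, exception_approved: bool = False) -> bool:
        """Deny by default: data classified confidential or above stays out of
        prompts unless an explicit exception has been approved."""
        if level >= Classification.CONFIDENTIAL:
            return exception_approved
        return True

    assert may_enter_prompt(Classification.INTERNAL)
    assert not may_enter_prompt(Classification.CONFIDENTIAL)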

How We Implement Our Policy to Mitigate Risks

When Huski employees identify a vendor offering generative AI services or integrations that they would like to use, our security and legal teams assess how the specific system works and review the governing legal terms to determine the level of risk. We have found significant differences between systems. Huski AI ensures that any vendor leveraging generative AI is scrupulous about its use of client data: we determine what data will be processed, how it will be used, and what safeguards are in place to protect our data. We allow our professionals to use a vendor's generative AI tools only if we are satisfied with the answers to these questions.

We hold ourselves to the same high standards when developing our own integrations with generative AI tools. Our company recently announced AI Assist™, which allows users to instantly generate redlines to documents in our platform. This feature is built on a partnership with OpenAI, one of the leading AI platforms. Before entering the partnership, we thoroughly vetted OpenAI to ensure we were satisfied with its security posture and with the legal terms protecting our data and our clients' data. For instance, text fed into our HuskiGPT tool is covered by a data processing agreement with OpenAI and is not used to train OpenAI's large language models.
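
For illustration only, a redline-suggestion request to OpenAI's Chat Completions API might look like the sketch below. The helper name, model choice, and prompts are assumptions for this example, not the actual AI Assist implementation; as noted above, text sent through the API is covered by a data processing agreement and is not used for model training.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_redlines(contract_text: str) -> str:
        # Hypothetical helper; in practice, a classification gate like the one
        # sketched earlier would run before any text leaves our systems.
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are a contract reviewer. Propose redline edits."},
                {"role": "user", "content": contract_text},
            ],
        )
        return response.choices[0].message.content

    print(suggest_redlines("Term: This agreement renews automatically forever."))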

Looking to the Future

While our focus is on mitigating risks to our company, we are also mindful of not over-regulating the use of generative AI. We believe in the value provided by these tools, and we look forward to as-yet-unknown use cases that our professionals will develop.

Generative AI is still in its infancy, and Huski AI is committed to staying at the forefront of responsible use as the technology evolves. We expect to update this policy over time, potentially often. We hope that by sharing it we will continue the conversation about the most effective ways to manage AI risk for legal practices. Discussion and feedback are welcome.