Massive Interest in Generative AI (Gen AI) Leads to ‘Shadow AI’
Gen AI usage is growing rapidly among knowledge workers: 75% of them now use it. But in the absence of clear guidance from their organizations, most of these employees (nearly 80%) are bringing their own AI tools to work, and 52% are reluctant to admit to using them. These are among the findings of a survey of 31,000 employees across 31 countries conducted by Microsoft and LinkedIn. Such “shadow AI” usage is commonplace and poses several risks and challenges to organizations.
Risks of a ‘Free-for-All’ AI
Information entered into gen AI tools could, for example, become public or weaken the organization’s intellectual property rights and trade secrets. The organization may be exposed to legal risk if non-public information, confidential customer data, or private employee data is leaked through the use of gen AI tools. Another concern is that the tools may provide incorrect or biased information, which can damage the organization’s reputation or create legal liability. In addition, there are open questions (and pending lawsuits) about whether AI vendors may have used data without permission and whether the generated content may violate owners’ copyrights. Not surprisingly, many organizations have restricted, or even banned outright, the use of gen AI.
Lack of Organizational AI Policies Is a Ticking Time Bomb
All in all, this is clearly an area where organizations should tread cautiously. Yet several industry surveys find that most organizations do not have formally defined policies for employees’ gen AI usage. A survey by software reviews site G2 finds that only a third of organizations have such a policy; a Conference Board survey finds that three-quarters of companies lack one; and IT governance industry body ISACA finds that only 10% of organizations have a formal AI policy. In the face of rising gen AI adoption and the shadow AI phenomenon, this is a clear and present danger.
Components of an AI Policy
The road to an organization’s success with AI starts with putting an acceptable-use policy in place. By establishing what is permitted and what is prohibited, such a policy can promote responsible use and help ensure legal compliance.
The components of an AI policy include the following (a brief, illustrative sketch of how these might be tracked follows the list):
- Policy scope and applicability
- Permitted AI uses (and prohibited uses)
- Permitted AI tools (and prohibited tools)
- Responsibilities of users (and their managers)
- Data security controls to be put in place
- Data privacy and governance requirements
- Legal and regulatory compliance requirements
- Reporting protocols and procedures
- Review mechanisms
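These components need not live only as prose in a document. As a purely illustrative sketch (the structure, field names, and review interval below are hypothetical, not drawn from any standard), an organization could also track its policy sections as structured data so that ownership and review dates are easy to audit:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for tracking AI-policy sections; all names are illustrative.
@dataclass
class PolicySection:
    title: str
    owner: str           # functional team accountable for the section (e.g., IT, legal)
    last_reviewed: date  # when the section was last reviewed

@dataclass
class AIPolicy:
    sections: list = field(default_factory=list)

    def overdue_sections(self, today: date, max_age_days: int = 365):
        """Return sections whose last review is older than the allowed interval."""
        return [s for s in self.sections
                if (today - s.last_reviewed).days > max_age_days]

policy = AIPolicy(sections=[
    PolicySection("Permitted and prohibited AI uses", "Business", date(2024, 1, 15)),
    PolicySection("Data security controls", "Cybersecurity", date(2023, 6, 1)),
    PolicySection("Legal and regulatory compliance", "Legal", date(2024, 3, 10)),
])

for section in policy.overdue_sections(today=date(2024, 9, 1)):
    print(f"Review overdue: {section.title} (owner: {section.owner})")
```

A small script like this is not a governance tool in itself; the point is simply that each component has a named owner and a review date that can be checked automatically.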
Developing an AI policy requires input from multiple stakeholders and functional teams across the organization, such as business, IT, HR, legal, compliance, risk, cybersecurity, and governance. It is common for one of these teams (for example, IT or risk) to act as the project lead in developing the policy. The AI policy may require changes or updates to other existing policies, such as information security, data privacy, procurement guidelines, and risk management. It may also impose obligations and requirements on third-party vendors and partners, which should be reflected in the relevant contracts.
Implementing the Policy
A policy alone won’t suffice. Employees need to be trained both on the policy and in the right use of the tools. Once tools have been vetted and validated as fit for purpose, an approved list can be drawn up to ensure consistency and clarity. Of course, as the technology advances, both this list and the policy itself will have to be revisited periodically. Governance mechanisms with such review processes should be part of the organization’s overall AI governance.
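To make the idea of an approved list concrete, here is a minimal, hypothetical sketch of how a team might encode such an allowlist and check a requested tool against it; the tool names, fields, and review dates are invented for illustration only:

```python
from datetime import date

# Hypothetical allowlist of vetted gen AI tools; names, uses, and dates are illustrative.
APPROVED_TOOLS = {
    "internal-chat-assistant": {"approved_uses": ["drafting", "summarization"],
                                "next_review": date(2025, 6, 1)},
    "code-completion-plugin":  {"approved_uses": ["code suggestions"],
                                "next_review": date(2025, 1, 15)},
}

def check_tool(tool_name: str, intended_use: str, today: date) -> str:
    """Return a human-readable verdict for a requested tool and use case."""
    entry = APPROVED_TOOLS.get(tool_name)
    if entry is None:
        return f"'{tool_name}' is not on the approved list; request a review before using it."
    if today >= entry["next_review"]:
        return f"'{tool_name}' is approved but due for re-review; check with the governance team."
    if intended_use not in entry["approved_uses"]:
        return f"'{tool_name}' is approved, but not for '{intended_use}'."
    return f"'{tool_name}' is approved for '{intended_use}'."

print(check_tool("internal-chat-assistant", "drafting", today=date(2024, 9, 1)))
print(check_tool("unvetted-image-generator", "marketing art", today=date(2024, 9, 1)))
```

In practice such a list would live in whatever inventory or governance system the organization already uses; the sketch only shows the kinds of checks (is the tool approved, for this use, and still within its review window) that the policy implies.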
In the U.S., there is already a variety of AI regulations. The European Union’s AI Act came into force in August 2024, and other countries have their own rules. These applicable regulations will shape the organization’s AI policy.
There Is No One-Size-Fits-All Policy
There will also be domain- or industry-specific requirements that influence the AI policy. In a business context, productivity may be paramount; in an educational setting, students’ learning goals and outcomes must be considered. If students turn in schoolwork produced with AI, and instructors use AI to evaluate that work, where does that leave us? Each scenario will require a nuanced consideration of what is deemed acceptable.
Even within a single industry, two organizations may take completely different yet equally justifiable approaches. For example, two schools may differ on whether and how students can use AI tools, and even within the same school, two faculty members may have contrasting approaches. Context certainly matters. What is critical is that any policy decision be made by design, not by default.
Different aspects of the policy will carry different weight depending on the industry. For a law firm, factual accuracy reigns supreme and determines the contours of AI usage, while an advertising firm may have some leeway there. A government agency or a heavily regulated industry may emphasize security or privacy in its AI policies.
In conclusion, an acceptable-use policy for AI is a tangible artifact of the organization’s AI strategy and is a key component of AI governance. Get cracking on that, pronto!