As AI is adopted across more industries, the organizations that use it must strike a delicate balance between making efficient use of the technology and protecting their customers' privacy. A common best practice is to be transparent about where AI is used and how it reaches its outcomes. However, this transparency has both a good side and a bad side. Here is what you should know about the pros and cons of AI transparency, along with some possible ways to strike this difficult balance. AI increases efficiency, fuels innovation, and streamlines processes. Being transparent about how it works and how it calculates results can lead to several societal and business advantages, including the following.
Increased Justice: The uses of AI have continued to expand over the last several years. AI has even entered the justice system, doing everything from fighting traffic tickets to being considered as a more impartial decision-maker than a jury. When companies are transparent about their use of AI, they can increase users' access to justice. People can see how AI gathers key information and reaches its outcomes, and they gain access to better technology and more information than they typically would without it.
Avoidance of Discrimination: One of the original drawbacks of AI was the possibility of discriminatory outcomes when it was used to detect patterns and make assumptions about users based on the data it gathered. Today, however, AI has become much more sophisticated and has even been used to detect discrimination. It can help ensure that every user's information is included and that every voice is heard. In this regard, AI can be a great equalizer.
Instilled Trust: When companies are upfront about their use of AI and explain it to their customer base, they are more likely to instill trust. People want to know how companies reach their results, and being transparent can help bridge the gap between businesses and their customers. Customers are willing to embrace AI: 62% of those surveyed for Salesforce's State of the Connected Consumer report said they were open to AI that improved their experiences. Businesses are willing to meet this demand: 72% of executives say they try to gain customer trust and confidence in their product or service by being transparent about their use of AI, according to a recent Accenture survey. Companies that can be transparent about their use of AI, and about the security measures they put in place to protect users' data, stand to benefit from this openness.
More Informed Decision Making: When people know they are interacting with an AI system, rather than being tricked into believing it is a human, they can adapt their own behavior to get the information they need. For example, they may type keywords into a chat box instead of complete sentences. Users who understand the benefits and limitations of these systems can make a conscious decision about whether to interact with them.
Drawbacks: While transparency can bring about some of the positive outcomes discussed above, it also has several drawbacks, including the following:
Lack of Privacy: A significant argument against AI transparency is the potential lack of privacy. AI often gathers big data and uses a unique algorithm to assign a value to that data. To obtain results, however, AI often tracks each and every online activity: keystrokes, searches, and use of the business's website. Some of this information may also be sold to third parties. Additionally, AI is often used to track people's online behavior, from which it may be possible to discern critical information about a person, including his or her:
- Race or ethnicity
- Political beliefs
- Religious affiliations
- Gender
- Sexual orientation
- Health conditions
Even when people choose not to share this sensitive information with anyone online, AI may still be able to infer it from their behavior. Additionally, AI may aggregate publicly available information, and without a human checking its accuracy, one person's information may be confused with another's.
Hacked Explanations: When companies publish explanations of how their AI works, hackers may use that information to manipulate the system. For example, they may be able to make slight changes to the code or to an input to produce an inaccurate outcome, turning a company's own transparency against it. When hackers understand the reasoning behind an AI system, they may be able to influence its algorithm, and these systems are not typically equipped to detect this kind of fraud on their own. The system is therefore easier to manipulate unless stakeholders put additional safeguards in place.
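To make this risk concrete, here is a minimal sketch, in Python with purely hypothetical weights and inputs, of how knowing a model's internals lets an attacker flip its decision with a small input change, in the spirit of the fast gradient sign method:

```python
import numpy as np

# Toy fraud filter: a transaction is flagged when w @ x + b > 0.
# The weights below are hypothetical; the point is that a published
# explanation of the model can hand an attacker exactly this detail.
w = np.array([1.5, -2.0, 0.5])   # feature weights (assumed leaked)
b = -0.25

def classify(x):
    return "flagged" if w @ x + b > 0 else "approved"

x = np.array([0.6, 0.1, 0.4])    # an input the model correctly flags
print(classify(x))               # -> flagged (score = 0.65)

# FGSM-style evasion: nudge every feature a small step in the
# direction that lowers the score, i.e. against the sign of its weight.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))           # -> approved (score = -0.15)
```

The same idea scales to real models: given the gradient, or even a good guess at it, an attacker can search for the smallest perturbation that crosses the decision boundary, which is why published explanations should be paired with input validation and anomaly monitoring.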
Intellectual Property Theft: Another potential problem with being transparent about AI is that a company's trade secrets or other intellectual property may be stolen. Hackers may be able to study a company's published explanations and recreate its proprietary algorithm, to the detriment of the business.
Vulnerability to Attacks: With so much information readily available online, many Americans already feel exposed in terms of cybersecurity. When companies spell out how they use AI, they may make it easier for hackers to access consumers' information or engineer a data breach. Such breaches can lead to identity theft, as in the notorious Equifax data breach that compromised 148 million Americans' private records.
Susceptibility to Regulation: Disclosures about AI may also invite more stringent regulation. When AI is confusing and inaccessible, regulators may not understand it well enough to regulate it. But when businesses are transparent about the role AI plays, they may invite a more significant regulatory framework governing how AI can be used. In this manner, innovators may be punished for their innovation.
Easier Target for Litigation: When businesses are clear about how they protect consumers' data in the interest of transparency, they may unwittingly make themselves more vulnerable to legal claims from consumers who allege that their information was not used properly. Clever lawyers can carefully review AI transparency disclosures and then develop creative legal theories about the business's use of AI.
They may focus, for example, on what the business did not do to protect a consumer's privacy, and then allege that the business was negligent in its actions or omissions. Additionally, companies that commit to transparency may favor simpler, easier-to-explain models, and these less sophisticated algorithms may omit certain information or produce errors in certain situations. Experienced lawyers may be able to identify additional problems the AI causes to substantiate their legal claims against the business.
The Truth Behind AI Transparency: Anyone who has seen a Terminator movie, or virtually any apocalyptic film, knows that even technology developed for the noblest of reasons can be weaponized or otherwise end up damaging society. Because of this potential for harm, many laws already require certain companies to be transparent about their use of AI. For example, financial services companies must disclose the major factors used in determining a person's creditworthiness and explain why they take adverse action on a lending decision. Even if a business is not yet required to be transparent about its use of AI, the time may soon come when it has no choice in the matter. In response, some businesses are being proactive, establishing internal review boards that test their AI and identify the ethical issues surrounding it. They may also collaborate with their legal departments and developers to create solutions to the problems they identify.
By carefully assessing their potential risk and establishing solutions to problems before disclosure becomes mandatory, businesses may be better situated to avoid the risks associated with AI transparency.