We have officially entered the era of AI-driven conversational platforms. ChatGPT has exploded onto the AI scene, generating intense interest in this rapidly growing technology. Despite the perceived novelty of this now (in)famous AI chatbot, commercial use of these types of applications has been happening for a while – from customer service chatbots and virtual assistants to social media and training bots, millions of these and other AI conversational platforms are launched each year.
Conversational AI offers several clear benefits, including 24/7 customer service, increased efficiency, and cost-effectiveness, and these are driving its integration into business applications at breakneck speed – at least 67% of the global population used chatbots for customer support in the last year.
While conversational AI technology is clearly good for business, it is also the source of a lot of fear, mistrust, and urban myth. And so the era of AI proliferation is also ushering in the era of AI conversation compliance.
AI conversation compliance is a necessary step that organizations have to take to ensure that all interactions with an AI platform adhere to appropriate ethical and legal standards.
It is a critical part of earning customers' trust and acceptance of this technology, and it is a key way to protect the organization from legal and reputational risks as we navigate new ethical waters.
Ethics Concerns in AI Communications and How to Address Them
Safe and Sound: Data Privacy and Security
Conversational AI systems do not escape the general growing concerns around individual rights to privacy and data security. AI communications platforms save vast amounts of personal data, and this data can be vulnerable to breaches or cyber-attacks.
Data is also often collected without the user’s knowledge and explicit consent, further raising concerns about the misuse of personal information.
Standards regarding privacy and security concerns are well documented and legislated – over 120 countries now have data privacy laws, and the expectations of AI conversational systems are no different from any other use case. AI systems have to be developed with a focus on data privacy and security, and need to follow the rules of best practice – minimize data collection, obtain clear and explicit consent, ensure secure data storage, limit data sharing with third parties, allow for opt-out of data collection, and delete data any time it is requested.
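The best practices above – explicit consent, minimal collection, opt-out, and deletion on request – can be sketched in code. The following is a minimal illustration only, using a hypothetical in-memory `UserDataStore` class (not a real library); a production system would back this with encrypted, access-controlled storage.

```python
from datetime import datetime, timezone

class UserDataStore:
    """Hypothetical in-memory store illustrating privacy best practices:
    explicit consent before collection, opt-out, and deletion on request."""

    def __init__(self):
        self._records = {}  # user_id -> {"consent": bool, "data": dict, ...}

    def grant_consent(self, user_id):
        # Clear, explicit consent is recorded before any data is collected.
        self._records[user_id] = {
            "consent": True,
            "data": {},
            "consented_at": datetime.now(timezone.utc).isoformat(),
        }

    def collect(self, user_id, field, value):
        # Refuse to collect anything without consent on file.
        record = self._records.get(user_id)
        if not record or not record["consent"]:
            raise PermissionError("No explicit consent on file for this user")
        record["data"][field] = value  # collect only what is needed

    def opt_out(self, user_id):
        # Opting out revokes consent and blocks further collection.
        if user_id in self._records:
            self._records[user_id]["consent"] = False

    def delete(self, user_id):
        # Deletion requests are honored immediately and completely.
        self._records.pop(user_id, None)

    def has_data(self, user_id):
        return user_id in self._records
```

The key design choice is that collection is impossible by construction without recorded consent, rather than being enforced by policy alone.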
There’s No Such Thing As Neutral: Bias and Discrimination
Human bias has long been an issue in almost every aspect of human interaction – every decision-making process is colored by our assumptions, both conscious and unconscious. Conversational AI systems are not immune to this bias.
The path to unbiased conversational AI is complicated, but it always starts with ensuring a diverse set of viewpoints when designing an application and setting its training parameters. Businesses also need to test algorithms on diverse groups, or the outcomes can be alarming – as illustrated by Amazon's recruitment bot, which was trained on past resumes to learn the criteria the organization looked for in technical hires and to screen applicants accordingly. Because women had historically been underrepresented in technical roles, the model learned that the organization preferred male applicants. Consequently, the bot gave female applicants lower ratings.
It’s also important to keep humans involved in the monitoring and testing process. In this way, any biases or discriminatory behaviors that emerge over time because of the training dataset can be identified quickly and addressed. Finally, developers must ensure that the language used by conversational applications is inclusive, respectful, and culturally sensitive.
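One common way to test outcomes across diverse groups, as recommended above, is the "four-fifths rule" used in employment law: compare each group's selection rate to the most-favored group's, and flag ratios below 0.8. This is a minimal sketch, not an official compliance tool; the function names and the toy data are illustrative.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs. Returns rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        if was_selected:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 commonly flag possible adverse impact."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]
```

A human reviewer would investigate any flagged ratio rather than relying on the threshold alone – which is exactly the "humans in the loop" point made above.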
From Black Box to Glass: Transparency and Accountability
AI systems use complex algorithms and machine learning models that make it almost impossible for users to understand how the system is making decisions and generating responses. This 'black box' – a system whose inputs and outputs are visible but whose inner workings are not – limits a user's ability to judge how much weight to give the information it produces.
Lack of transparency is also concerning when users don’t know they are talking to an AI communications system, and believe they are interacting with a human. This can change the type of responses they give or the kind of information they are prepared to share, as well as how willing they are to take suggestions or give consent for certain actions.
AI doesn't have to be as unintelligible as it seems. Explaining to users, in plain terms, how conversational AI works helps them understand and weigh how to use the system's outputs.
It is also important to prioritize transparency in the deployment of conversational AI systems. This is done by making it clear when a user is engaging with AI instead of a human, and obtaining their explicit consent to do so. It may also be useful to program the AI conversational system to be transparent about its nature which would help reduce confusion and ensure that users have a more accurate understanding of the AI’s capabilities and limitations. A good example of this is when ChatGPT is asked what it thinks about something, it responds with the following: “As an AI language model, I don’t have personal opinions or thoughts, but I can provide general information based on my training data and programming.”
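In practice, making it clear that a user is engaging with AI can be as simple as prepending a disclosure to the first message of every conversation. This is a minimal sketch; the wording and the `disclose_then_reply` helper are illustrative, not a standard API.

```python
DISCLOSURE = ("You are chatting with an automated AI assistant, "
              "not a human agent.")

def disclose_then_reply(reply, is_first_turn):
    """Prepend an AI disclosure to the first message of a conversation,
    so users always know they are not talking to a person."""
    if is_first_turn:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply
```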
Legal Compliance in AI Conversations
Technology and the legislative process seem to work at opposite speeds, and so currently there are no specific laws aimed at addressing the use of AI conversational systems. Instead, organizations need to look at legislation regulating related areas to get guidance and set up best practices. Some of these laws include:
- Consumer protection acts
- Data privacy and security acts such as the GDPR
- Intellectual property and copyright laws
- Inequality and discrimination acts
- Product liability acts
- Surveillance and security acts
There are some unique legal risks that organizations that use AI for conversation face, including:
- Deceptive Trade Practice Risks: If a user believes that they are interacting with a human when in fact they are dealing with a chatbot, or if an AI-generated product is marketed as being made by a person, the organization could be breaking legislation that prohibits deceptive trade practices.
- Intellectual Property (IP) Risks: These can arise in several ways. The training data used for AI systems will inevitably include third-party IP, which means the AI system could produce an output that infringes on someone else's IP rights. Disputes can also arise over who owns the IP generated by an AI system, particularly if multiple parties contributed to its development.
- Validation Risks: AI output can be so human-like that errors are easy to miss. AI 'hallucinations' occur when the model generates an output that is different from what is expected – in other words, it gets something wrong or makes a false statement while presenting it confidently. This raises issues of liability if a consumer acts on incorrect information.
One of the major challenges for organizations is keeping track of all the AI conversational platforms they use, and storing all the interactions so they can be monitored and searched when needed. For example, when one bank recently inventoried all of its systems that use AI-powered algorithms, it found a total of 20,000. Given the massive number of interactions across multiple channels, huge volumes of data must not only be stored but also quickly accessed and analyzed.
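The core requirement described above – capture every interaction and make it searchable for monitoring and audits – can be sketched as an append-only archive. This hypothetical `ConversationArchive` class is illustrative only; a real deployment would use a dedicated archiving platform with immutable, access-controlled storage.

```python
from datetime import datetime, timezone

class ConversationArchive:
    """Hypothetical append-only archive of user/bot messages supporting
    keyword search, the minimum needed for monitoring and audit requests."""

    def __init__(self):
        self._messages = []

    def log(self, channel, sender, text):
        # Every message is timestamped and retained; nothing is overwritten.
        self._messages.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "channel": channel,
            "sender": sender,
            "text": text,
        })

    def search(self, keyword, channel=None):
        # Case-insensitive keyword search, optionally scoped to one channel.
        keyword = keyword.lower()
        return [m for m in self._messages
                if keyword in m["text"].lower()
                and (channel is None or m["channel"] == channel)]
```

Scoping searches by channel matters because, as noted, interactions arrive across multiple channels and auditors typically need to reconstruct one conversation thread at a time.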
Keep the AI Conversation Going by Prioritizing Compliance
As conversational AI continues to advance and become integrated into most interactions between organizations and their customers, it is essential to ensure that these systems are designed and used ethically. Failure to do so can have serious consequences, including reputational damage, legal liabilities, and negative impacts on users and society. Prioritizing compliance is therefore critical for all organizations using AI for conversation.
LeapXpert’s federated architecture enables seamless integration with AI chatbots and allows you to keep a complete record of all conversations between users and your AI systems to ensure that data privacy and governance standards are met. Integrated with leading third-party archiving, surveillance, and analytics platforms, all records are securely stored and available alongside all the existing business data.