Chatbots are active AI. Artificial intelligence (AI) imitates human behavior: it recognizes information and responds in some way to achieve a desired goal. Chatbots work much like voice-activated virtual assistants such as Alexa or Google Home; these are all forms of “active” AI (as opposed to passive). They are often used to gather personal information, and with consumer trust comes great responsibility. To use this data ethically, and legally, here are a few important points for media and technology companies to be aware of:
1. Privacy policies are crucial.
According to Daniel M. Goldberg, counsel to the Privacy & Data Security Group at the firm Frankfurt Kurnit, there are several general privacy principles to keep in mind: transparency, consumer choice (having to opt in, and being able to opt out), having “reasonable” security measures in place, and collecting limited information so that you retain only the minimum necessary for the product or service. Companies can face steep fines from the Federal Trade Commission (FTC), including under the Children’s Online Privacy Protection Act (COPPA), for not having proper, accurate and complete privacy policies and collection practices in place.
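The opt-in and data-minimization principles above can be made concrete in code. This is a minimal sketch, not a compliance tool: the field names and the `collect` helper are hypothetical, and deciding which fields are truly "necessary" is a product and legal judgment, not a programming one.

```python
# Sketch of opt-in gating plus data minimization for a chatbot backend.
# ALLOWED_FIELDS is a hypothetical whitelist of the minimum data the
# service needs; everything else a user shares is dropped, not stored.

ALLOWED_FIELDS = {"user_id", "order_id"}

def collect(profile: dict, opted_in: bool) -> dict:
    """Store nothing without explicit opt-in; keep only whitelisted fields."""
    if not opted_in:
        return {}  # consumer choice: no opt-in means no collection at all
    return {k: v for k, v in profile.items() if k in ALLOWED_FIELDS}
```

Keeping the whitelist explicit also supports transparency: the privacy policy can enumerate exactly the fields the code is capable of retaining.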
2. The law wasn’t built for any of this.
Privacy takes on an extra level of importance considering that, unlike Europe (with the GDPR) and other jurisdictions, the U.S. does not yet have a comprehensive privacy law at the federal level. States, however, do have legislation for issues like data breaches, dictating how companies must respond, and how quickly.
As noted in “Chatbots and AI: Business, Legal and Ethical Concerns,” technology is a rapidly moving hare, and the law is a slowly advancing tortoise. Laws written for traditional media are constantly having to be interpreted, adapted and rewritten for newer platforms. A new California bot law going into effect this summer will have national implications: it requires explicit disclosure upfront when a user is messaging with a machine rather than a human.
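In practice, the upfront-disclosure requirement means the bot's very first message identifies it as automated, before any substantive reply. A minimal sketch of that flow, assuming a hypothetical message-sending callback (the disclosure wording here is illustrative, not legal advice):

```python
# Sketch of upfront bot disclosure: the disclosure is always the first
# message sent, before the bot responds to anything the user said.

BOT_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def handle(user_message: str) -> str:
    # Placeholder for the bot's actual response logic.
    return f"Thanks for your message: {user_message!r}"

def start_conversation(send_message, first_user_message: str) -> None:
    """Open a session by disclosing machine identity, then reply."""
    send_message(BOT_DISCLOSURE)           # disclosure comes first
    send_message(handle(first_user_message))
```

Putting the disclosure in the session-opening path, rather than in individual handlers, makes it hard for any conversation to begin without it.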
3. Without the right parameters in place, things can turn ugly quickly.
A cautionary tale brought up in multiple panels was Microsoft’s bot Tay, which within 24 hours began spewing sexist, racist and anti-Semitic remarks. It’s important to have the right governance in place for any AI/chatbot technology. You need the right filters in place so that hateful and negative rhetoric doesn’t rise to the top simply because it drives the most engagement through shock value. Rest assured, though, there are plenty of positive applications of chatbot technology. From answering questions about orders placed, to checking account activity and getting product information, the applications are far and wide. Brands have even built chatbots to keep insomniacs company and answer medical questions.
Companies have a responsibility to use the data they gather responsibly and legally. Privacy must be taken seriously, and best-practice guidelines for safeguarding personal information should be followed. AI technology may be efficient, but for certain sensitive or personal matters, it still probably makes sense to deal with a real human being.