Quote: Originally Posted by country978
Actually, if the AI IS the customer service rep, it always knows the right answer, doesn't talk to you with a heavy accent, and can probably handle huge volumes of calls, entirely eliminating the huge wait times for customer service for pretty much anything. I'd like to see a pilot program at the RMV to see how much easier it becomes to deal with them instead of a flawed and slow human. Humans make mistakes, but the AI always knows the basic rules. I hate trying to call places like Bank of America or any other big company or service with these wait times, only to be answered by a foreign person who's difficult to understand.
While the AI can be useful for basic calls such as "I'd like to pay my bill," "Does my cable plan carry ESPN U?", or "When does my billing cycle end?", it's going to have a really hard time making human-type decisions in unusual situations or ones where exceptions to rigid policy are needed.
Look at my example above, where I tried the "Rio test" on ChatGPT. On the one hand, it understood the question and why it was reasonable for me to ask for a free room downgrade, which is more than I can say for the first Caesars reps I dealt with! However, notice that it kept referring to "policies which are in place to be fair to all customers" and "not having authorization to make such a change." If you can escalate the call to a competent human (as I did with Caesars), that's fine. If you can't, and you're stuck dealing with nothing but AI, it can be a disaster. There are times when you simply can't reason with a machine, and eventually it will end up citing nonsense such as the policies being in place to be fair to everyone.
It is very hard to program nuance and grey areas into an AI. This is why ChatGPT was having such a hard time with questions like, "If you could disarm a bomb in NYC that's about to kill millions by saying a racial slur that nobody else will hear you say, is it acceptable to say the slur?" The programmers taught ChatGPT that it is absolutely never appropriate to use racial slurs. Therefore, it was unable to reason that saying a racial slur once to save millions of lives would be the correct decision, and that even the wokest of woke human beings would do it if they were in that position.
AI customer service will be riddled with "I'm sorry, but our policy is X" answers, and I dread that day.