Artificial intelligence is developing quickly and taking over tasks once performed only by humans. Therapists and life coaches are the latest occupations to face that threat. Google is testing a new AI assistant that would offer consumers personalized life guidance on everything from career choices to relationship problems.
According to a recent New York Times report, Google's DeepMind has teamed up with the AI training firm Scale AI to put the new chatbot through extensive testing. More than 100 experts with doctorates across a range of disciplines have evaluated the assistant's capabilities, focusing on whether the tool could respond wisely to questions about users' real-life struggles.
No longer able to afford a friend’s destination wedding
In one example prompt, a user asked the AI how to tactfully tell a close friend that they could no longer afford to attend the friend’s upcoming destination wedding. The assistant then offered specific recommendations tailored to the delicate interpersonal situation.
Going beyond simple life advice, Google’s AI tool promises to help people with 21 different life skills, ranging from hobby recommendations to support in specialized medical fields. The planner feature can even create customized financial budgets.
However, Google’s own AI safety experts have warned that relying too heavily on an AI for important life decisions could undermine users’ autonomy and well-being. When the company launched its AI chatbot Bard in March, it sharply restricted the bot’s ability to offer financial, legal, or medical advice, focusing instead on pointing users to mental health resources.
The public’s eagerness for ever-improving AI capabilities
A Google DeepMind spokesperson told The New York Times that private testing is a routine step in developing safe and helpful AI technology, and emphasized that isolated testing samples do not accurately reflect the product roadmap.
Google errs on the side of caution, but the public’s eagerness for ever-improving AI capabilities gives developers more leeway. Even if current technology has its limits, the overwhelming popularity of ChatGPT and other natural language tools shows there is real demand for AI life guidance.
As Decrypt previously reported, experts have cautioned that AI chatbots lack the innate human ability to detect lies or pick up on subtle emotional cues. On the other hand, they avoid mistakes that human therapists sometimes make, such as bias or misdiagnosis. “We’ve seen that AI can work with certain populations,” psychiatrist Robi Ludwig told CBS News in May. “We are complex, AI doesn’t love you back, and we need to be loved for who we are and who we aren’t,” she added.