Google Bard could soon become your new AI life coach

Generative artificial intelligence (AI) tools like ChatGPT have gotten a bad rap recently, but Google is apparently trying to serve up something more positive with its next project: an AI that can offer helpful life advice to people going through tough times.

If a fresh report from The New York Times is to be believed, Google has been testing its AI tech with at least 21 different assignments, including “life advice, ideas, planning instructions and tutoring tips.” The work spans both professional and personal scenarios that users might encounter.

It’s the result of Google merging its DeepMind research lab with its Brain AI team and is “indicative of the urgency of Google’s effort to propel itself to the front of the AI pack,” the report states.

According to one example cited in The Times, Google has been working on how to answer a query from a user who wants to attend a close friend’s wedding but is unable to afford the travel costs to do so.

Aside from that, the AI’s tutoring function could help people improve their skills or learn new ones, while its planning aspect may be able to aid users in creating a financial budget or whipping up a meal plan.

User wellbeing

The move to help users with their most pressing personal challenges is a stark change from Google. In December 2022 — shortly after rival OpenAI’s ChatGPT was unleashed on the world — an internal Google slide deck cautioned against encouraging people to get too emotionally attached to AI tools, according to the report from The New York Times.

In fact, Google’s own safety experts warned in December that taking life advice from AI could result in “diminished health and well-being” and a “loss of agency,” with the potential for some users to mistakenly believe the AI was sentient and capable of understanding them the way a human can.

As recently as the Google Bard launch in March 2023, Google said the tool was forbidden from advising users on medical, financial, or legal matters. If the company goes ahead and builds these capabilities into its AI tools, it will mark a striking turnaround — and could raise questions over whether Google is prioritizing primacy in the AI race over users’ wellbeing.

Winning at all costs

A life coach is not the only AI-based tool Google is apparently working on. Its other projects include tools that can generate scientific and creative writing, help journalists write headlines, and find and extract patterns from text.

Yet even ideas like these were criticized by Google just months ago when the company said there was a risk of “deskilling” creative writers through the use of generative AI.

Whether any of these tools will become a reality is unclear at the moment, but Google seems determined to pull ahead in the AI race. Doing so could come at a cost, though — as its own experts have pointedly argued.

Alex Blake