
Newegg wants you to trust ChatGPT for product reviews

AI-generated review on Newegg's website.
Jacob Roach / Digital Trends

Newegg, the online retailer primarily known for selling PC components, has pushed AI into nearly every part of its platform. The latest area to get the AI treatment? Customer reviews.

On select products, Newegg is now showing an AI summary of customer reviews. It sifts through the pile of reviews, including the written text and any listed pros and cons, and uses that to generate its own pros and cons list along with a summary. Currently, Newegg is testing the feature on three products: the Gigabyte RTX 4080 Gaming OC, MSI Katana laptop, and Ipason gaming desktop.

I’ve previously covered Newegg’s mishaps with AI via its ChatGPT-driven PC builder. The company confirmed it’s using ChatGPT once again to generate these product review summaries, and they are filled with issues.
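Newegg hasn’t detailed how the pipeline is built, but the general approach is simple: collect a product’s existing reviews, hand them to the model, and ask for a short summary plus pros and cons. A minimal sketch of that idea, assuming the OpenAI Python SDK, a placeholder product_reviews list, and the gpt-4o-mini model (none of which Newegg has confirmed), might look like this:

```python
# Hypothetical sketch only: Newegg hasn't published its implementation, so the
# model choice, prompt wording, and data shape below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder reviews; a real pipeline would pull these from the product page.
product_reviews = [
    {"rating": 5, "pros": "runs cool", "cons": "loud fans", "text": "Great laptop overall."},
    {"rating": 2, "pros": "fast in games", "cons": "gets hot", "text": "Thermals are disappointing."},
]

review_text = "\n\n".join(
    f"Rating: {r['rating']}/5\nPros: {r['pros']}\nCons: {r['cons']}\n{r['text']}"
    for r in product_reviews
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed; the article only says Newegg uses ChatGPT
    messages=[
        {
            "role": "system",
            "content": "Summarize these customer reviews into a short paragraph "
                       "followed by bulleted pros and cons.",
        },
        {"role": "user", "content": review_text},
    ],
)

print(response.choices[0].message.content)
```

Nothing in a pipeline like this reconciles conflicting reviews; the model simply compresses whatever it’s given, which is one way a summary can end up praising a cooling system while also flagging fan noise and heat.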

Smart tech, dumb summaries

AI-generated review summary on Newegg.
Jacob Roach / Digital Trends

For instance, the MSI Katana summary contradicts itself. It says the laptop is both “recommended for beginners” and for “those seeking a high-performance laptop.” It also credits the machine with an “effective cooling system” early on, only to cite “loud fan noise” and “hot running temperatures” later.

Elsewhere, with the Ipason desktop, the summary does a decent job listing the pros and cons from customer reviews. However, it fails to mention that the desktop’s 1TB hard drive arrives split into three separate partitions, which was a common complaint among the real customer reviews. Instead, it just says the machine has “limited storage space,” likely because the AI was confused by the partitioning complaints.

In addition to the summary, Newegg is now showing “review bytes” under the product photo. These are small quotes pulled from the AI-generated summary, and, critically, nothing in the display indicates they were generated by AI.

Newegg's AI-generated Review Bytes.
Jacob Roach / Digital Trends

If you actually follow these links and go to the AI-generated section, there’s a disclaimer you can click on that reads: “Efforts have been made to ensure accuracy, but individual experiences, opinions, and interpretations may vary and influence the generated content.”

Problems brewing

The AI isn’t writing its own review out of thin air, but it is serving as a replacement for reading customer reviews that may carry more nuance. Those individual experiences shouldn’t be discounted, either. In the case of the Ipason desktop, the AI-generated summary covers a product that isn’t sold by Newegg itself but by a third-party marketplace seller. Things like customer service and support matter for marketplace items, and the AI largely glosses over that area.

Digital Trends asked Newegg whether this feature will be rolled out broadly across its website or target specific items first, and hasn’t yet received a response. For products like the Gigabyte RTX 4080 Gaming OC, the summary feature works (even if it isn’t as helpful as a full RTX 4080 review). There are still some quirks, though. For instance, the RTX 4080 summary lists “no video output on initial setup due to defective power adapter” as a con. Care to elaborate? Because that con alone overshadows the overwhelming number of pros these AI-generated summaries list.

My main concern is for marketplace items where the potential for fake reviews is higher. It’s always tough to know if customer reviews are fake or real, even with verified badges. We don’t have any data for Newegg, but recent research by the UK government suggests that upwards of 15% of reviews on Amazon aren’t real. Maybe that’s why Amazon hasn’t released its AI-driven summary tool yet.

Ultimately, customer reviews on the internet aren’t the most reliable way to make buying decisions, and using them to generate an AI summary makes them even less reliable.
