
ChatGPT may have helped someone win the lottery. Could it be true?

A man from Thailand claims that he has used ChatGPT to generate numbers that helped him win the lottery.

Patthawikorn Boonrin recently went viral after sharing details on TikTok of how he used the AI chatbot developed by OpenAI to generate numbers that he then played in the lottery and won. His strategy involved feeding ChatGPT a few hypothetical questions along with some prior winning numbers as part of his query, according to Mashable.
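For readers curious what that kind of query might look like in practice, here is a minimal sketch using OpenAI's Python client. The model name, prompt wording, and past draws are placeholders for illustration only; Boonrin has not published his exact prompts, and (as ChatGPT itself reportedly pointed out) the output is no more likely to win than any other guess.

```python
# Illustrative only: prompting an OpenAI model with some past lottery results.
# The model name, prompt text, and prior draws are assumptions, not Boonrin's
# actual inputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prior_draws = ["57 27 29 99", "12 45 03 88"]  # hypothetical past results

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Here are some recent Thai lottery draws: "
                + "; ".join(prior_draws)
                + ". Hypothetically, if you had to pick, which two-digit "
                "numbers would you play next?"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```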


The winning numbers for Boonrin’s draw were 57, 27, 29, and 99, and he won 2,000 Thai Baht (about US$59). While the prize wasn’t large, he told a local publication that he has used this strategy to generate lottery numbers before. He added that ChatGPT warned him not to get “too obsessed” with the method, noting that winning the lottery is a matter of luck, and even suggested he go out and get some exercise.


Boonrin plans to share more on TikTok about his experience using ChatGPT to generate lottery numbers, and he will surely garner even more attention if he scores another, bigger win. That could also draw lottery operators into the ongoing conversation about the ethics of ChatGPT.

Opinions about the ethics of and issues with ChatGPT have been circulating since its launch in November 2022. Institutions such as colleges and universities have banned the AI chatbot on the premise that it could ramp up plagiarism and cheating on campuses. Meanwhile, several industries, including journalism, communications, art, and technology, have embraced the service.

However, that adoption hasn’t been without missteps. Publications quietly using AI to generate articles have been caught publishing pieces with inaccurate information, AI artwork meant to look human-made has turned up missing limbs and digits, and GPT-based language models deployed by other companies have gone rogue once released to the public.

There’s no telling what issues could arise if more people try to use ChatGPT to generate lottery numbers, especially those chasing the ego boost of sharing online how they earned their winnings.
