A recent report by The Register raises concerns about the potential misuse of AI models like ChatGPT in phishing schemes. In tests of GPT-4.1, the model returned the correct official URL for companies in the finance, retail, tech, and public service sectors only 66% of the time.
More alarming still, in 29% of cases the links the model supplied pointed to domains that were inactive or suspended. While this might seem like a harmless mistake, it opens the door for malicious actors: cybercriminals can register these unclaimed domains and build convincing fake websites that appear legitimate, tricking unsuspecting users into handing over passwords, credit card numbers, or other personal data.
This finding highlights the importance of verifying website links independently, especially when they are suggested by automated tools. Users are encouraged to double-check URLs before clicking and always access websites through trusted sources or official channels.
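As a concrete illustration of that first check, the Python sketch below (standard library only; the URLs in it are placeholders, not endorsements) tests whether a suggested URL's hostname resolves in DNS at all. A name that resolves is not necessarily safe, since a registered phishing domain resolves too, but a name that does not resolve is exactly the kind of unclaimed address an attacker could later register.

```python
import socket
from urllib.parse import urlparse

def host_resolves(url: str) -> bool:
    """Return True if the URL's hostname currently has a DNS record."""
    host = urlparse(url).hostname
    if not host:
        return False  # Malformed URL: nothing to look up.
    try:
        # getaddrinfo asks the resolver for any address record for the host.
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        # Lookup failed (e.g. NXDOMAIN): the name is dead or unregistered,
        # which is the red flag described above.
        return False

print(host_resolves("https://www.example.com"))       # True: a live, well-known domain
print(host_resolves("https://no-such-host.invalid"))  # False: the .invalid TLD never resolves
```

Resolution alone is a weak signal, so the safer habit remains reaching sites through a bookmark, a search of the official brand name, or another trusted channel rather than a link generated on the fly.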
As AI tools become more integrated into daily life, ensuring their outputs are accurate and secure is essential, not just for user experience but for public safety as well.


