AI Assistants Share Sensitive User Data

Artificial intelligence (AI) assistants are collecting and sharing sensitive user data, such as medical and banking records, without proper safeguards, a study released Wednesday concludes.

Researchers from University College London in the United Kingdom and the University of Reggio Calabria in Italy analyzed ten AI assistants, which must be downloaded and installed on a computer or mobile phone before use, and found that they collect extensive personal data about users' online activity.

The analysis revealed that several assistants transmitted the full content of web pages, including information visible on the screen, to their servers.

One of the AI assistants, Merlin, captured form inputs such as banking or health information, while ChatGPT, Copilot, Monica, and Sider were able to infer user attributes such as age, gender, income, and interests, and used this information to personalize responses even across different browsing sessions.

"This data collection and sharing is not trivial. Beyond selling or sharing data with third parties, in a world where massive cyberattacks are frequent, there is no way to know what happens to browsing data once it has been collected," warned Anna Maria Mandalari, one of the study's authors, quoted in a statement from University College London.

For the study, researchers simulated real-world browsing scenarios by creating the persona of a wealthy, digital-native millennial male from California, which they used to interact with the assistants while completing common internet tasks, including reading the news, shopping on Amazon, and watching videos on YouTube.

The persona's activity also included private-space tasks, such as accessing a university health portal, logging into a dating service, or viewing pornography, which researchers assumed users would not want tracked, as the data is personal and sensitive.

According to the study, conducted in the United States, some AI assistants, including Merlin and Sider, did not stop recording activity when the user switched to the private space, as they should have, thereby violating US data protection laws by collecting confidential health and education information.

The study's authors argue for the urgent need for regulatory oversight of AI assistants to protect users' personal data and recommend that developers adopt privacy-by-design principles, such as explicit user consent for data collection.

observador
