Helsinki, Finland – June 24, 2021: AI-based recommendation systems are used in many online services we enjoy today, including search engines, online shopping sites, streaming services, and social media. However, their growing influence over what people see and do on the internet has raised concerns about their susceptibility to various types of abuse, such as their active use to spread disinformation and promote conspiracy theories.

Andy Patel, a researcher with cyber security provider F-Secure’s Artificial Intelligence Center of Excellence, recently completed a series of experiments to learn how simple manipulation techniques can affect AI-based recommendations on a social network.

“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information,” said Patel. “Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.”

A PEW Research Center survey* conducted in late 2020 found that 53% of Americans get news from social media. Respondents aged 18-29 identified social media as their most frequent source of news. At the same time, research has highlighted potential risks in relying on social media as a source: a 2018 investigation** found that Twitter posts containing falsehoods are 70% more likely to be retweeted.

For his research, Patel collected data from Twitter and used it to train collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) for use in recommendation systems. He then performed experiments that involved retraining these models on data sets containing additional retweets (thereby poisoning the data) between selected accounts to see how the recommendations changed.

By selecting appropriate accounts to retweet, and by varying both the number of accounts performing retweets and the number of retweets they published, Patel found that even a very small number of retweets was enough to manipulate the recommendation system into promoting accounts whose content was shared through the injected retweets.

While the experiments were performed using simplified versions of the AI mechanisms that social media platforms and other websites are likely to employ when providing users with recommendations, Patel believes Twitter and many other popular services are already dealing with these attacks in the real world.

“We performed tests against simplified models to learn more about how the real attacks might actually work. I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organizations to be certain this is what’s happening because they’ll only see the result, not how it works,” said Patel.

According to F-Secure Vice President of Artificial Intelligence Matti Aksela, it’s important to acknowledge and address the potential challenges with the security of AI. “As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse. Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results,” said Aksela.
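The retweet-injection experiment described in the release can be illustrated with a minimal sketch. The matrix values, the choice of an item-based cosine-similarity recommender, and the attack sizes below are illustrative assumptions, not Patel’s actual models or data; the sketch only shows how a handful of injected retweets that tie a popular account to a promoted account can change what a simple collaborative filtering system recommends.

```python
import numpy as np

def recommend(retweets, user, k=1):
    """Item-based collaborative filtering over a binary user-by-account
    retweet matrix: score the accounts `user` has not retweeted by their
    cosine similarity to the accounts `user` has retweeted."""
    norms = np.linalg.norm(retweets, axis=0)
    norms[norms == 0] = 1.0               # avoid dividing by zero
    unit = retweets / norms               # unit-normalise each account column
    sim = unit.T @ unit                   # account-to-account cosine similarity
    scores = retweets[user] @ sim         # similarity to the user's retweeted accounts
    scores[retweets[user] > 0] = -np.inf  # never re-recommend what was already retweeted
    return np.argsort(scores)[::-1][:k]   # indices of the top-k accounts

# Toy data: rows are users, columns are accounts A0..A3; 1 = "retweeted".
clean = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],   # user 3: the target of the recommendation
    [0, 1, 1, 0],
], dtype=float)

# Poisoning: three attacker-controlled users each retweet both the popular
# account A0 and the account to promote, A3, linking them in the model.
injected = np.tile([1.0, 0.0, 0.0, 1.0], (3, 1))
poisoned = np.vstack([clean, injected])

print(recommend(clean, 3))     # before poisoning: recommends account 1
print(recommend(poisoned, 3))  # after poisoning: recommends account 3
```

Production recommenders use learned embeddings rather than raw cosine similarity, but the attack surface the research points at is the same: the model is retrained on interaction data that anyone, including an attacker, can append to.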