
Machine learning can help to flag risky messages on Instagram while preserving users' privacy


As regulators and providers grapple with the dual challenges of protecting younger social media users from harassment and bullying, while also taking steps to safeguard their privacy, a team of researchers from four leading universities has proposed a way to use machine learning technology to flag risky conversations on Instagram without having to eavesdrop on them. The discovery could open opportunities for platforms and parents to protect vulnerable, younger users, while preserving their privacy.

The team, led by researchers from Drexel University, Boston University, the Georgia Institute of Technology, and Vanderbilt University, recently published its timely work in the Proceedings of the Association for Computing Machinery’s Conference on Human-Computer Interaction: an investigation into which types of input data, such as metadata, text, and image features, are most useful for machine learning models in identifying risky conversations. Their findings suggest that risky conversations can be detected from metadata characteristics alone, such as a conversation’s length and how engaged its participants are.
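The metadata-only approach described above can be sketched in a few lines of code. The following is a minimal illustration, not the team's actual model: the features (message count, conversation duration, reply balance), the synthetic data, the labeling rule, and the choice of a simple logistic-regression classifier are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical metadata features per conversation (no message content):
# [message_count, duration_days, reply_balance]
X = np.column_stack([
    rng.poisson(30.0, n),        # how many messages were exchanged
    rng.uniform(0.1, 30.0, n),   # how long the conversation lasted
    rng.uniform(0.0, 1.0, n),    # share of messages from the other party
])

# Invented labeling rule for the synthetic data: short, intense,
# one-sided conversations are marked "risky".
y = ((X[:, 0] > 35) & (X[:, 1] < 5) & (X[:, 2] < 0.3)).astype(float)

# Standardize features, then fit a logistic regression by gradient descent.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))  # predicted risk probability
    w -= 0.5 * (Xs.T @ (p - y)) / n          # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

# One risk score in [0, 1] per conversation, computed from metadata alone.
risk_scores = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
```

The point of the sketch is that the classifier never sees message text, so it could in principle run on metadata even when conversation content is end-to-end encrypted.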
Their efforts address a growing problem on the most popular social media platform among 13-to-21-year-olds in America. Recent studies have shown that harassment on Instagram is driving a dramatic rise in mental health problems among its youngest users, particularly depression and eating disorders among teenage girls.
“The popularity of a platform like Instagram among young people, precisely because of how it makes its users feel safe enough to connect with others in a very open way, is very concerning in light of what we now know about the prevalence of harassment, abuse, and bullying by malicious users,” said Afsaneh Razi, Ph.D., an assistant professor in Drexel’s College of Computing & Informatics, who was a co-author of the research.
At the same time, platforms are under increasing pressure to protect their users’ privacy, in the aftermath of the Cambridge Analytica scandal and the European Union’s precedent-setting privacy protection laws. As a result, Meta, the company behind Facebook and Instagram, is rolling out end-to-end encryption of all messages on its platforms. This means that the content of the messages is technologically secured and can only be accessed by the people in the conversation.

But this added level of security also makes it more difficult for the platforms to employ automated technology to detect and prevent online risks—which is why the group’s system could play an important role in protecting users.
