
How China’s WeChat is tackling fake news differently from Facebook

November 25, 2017, 03:04

Luzhou, China. Photo credit: Mingwei Li / Unsplash.

In a small, bright office filled with books, Huamin Qu gives me a bird’s-eye view of WeChat, arguably China’s most influential app. His screen shows a red pinwheel of nodes that map how content is shared throughout the enormous social network of almost a billion users.

Called WeSeer, the internal tool is the ultimate gauge of China’s netizen hivemind: it can predict which articles will go viral in the next hour, pinpoint key accounts driving the spread of information, and identify stories of interest for different communities, whether it’s locals in Beijing or people who love AI.

It’s an advertiser’s wet dream – or a powerful tool for information control.

“It’s a double-edged sword,” says Qu, a professor of computer science at Hong Kong University of Science and Technology (HKUST), which opened a joint artificial intelligence lab with WeChat in 2015. Big data analytics can be used to capture criminals, but it can also target other groups of people, he says.

“We are more working on the technical side,” he emphasizes.

A demo video of WeSeer shows how an article spreads through WeChat over time. Image credit: WeChat-HKUST Joint Lab on Artificial Intelligence Technology.

Fake news is the new scourge of the modern world. As people increasingly turn to online platforms to understand the world, fake news has the power to sow doubt almost anywhere, as evidenced by how it disrupted the recent democratic elections in the US. At the center of it all is Facebook, whose platform encompasses a user base of roughly 2 billion monthly active users – larger than the population of any single nation.

But while the Silicon Valley juggernaut has sought to distance itself from the fake news epidemic, Chinese tech firms are taking it head on. In China, there are no debates around the role that companies play in deciding what the public sees. The Great Firewall, which blocks thousands of sites like Facebook and The New York Times, might keep information from the outside world from filtering in, but domestically, the government expects tech giants to take responsibility.

For online content platforms, blocking keywords and taking down ‘illegal content’ – which can range from celebrity gossip to sensitive political topics – is par for the course. The more influential the company, the more culpable they are.

“Effectively, the government authorities are pushing responsibility for content control and other types of information control down to the companies, who then also offload that to users in some sense,” says Masashi Crete-Nishihata, research manager at The Citizen Lab, a research lab at the University of Toronto that has conducted numerous studies on online censorship in China.

“For the companies, they’re trying to balance keeping their business active, keeping users interested in the platform, having a good user experience, and of course, doing all that while staying within the line set by the government,” he adds.

That last point is a key driver behind the development of WeSeer. On top of predicting article popularity, Qu’s research team has been tasked with automating rumor detection, a rising priority for the app as its user base grows to triple the size of the US population.

Huamin Qu, professor of computer science at Hong Kong University of Science and Technology. He specializes in data visualization. Photo credit: Tech in Asia.

Wrangling fake news

In September, the Chinese government fined Tencent, Weibo, and Baidu the maximum penalty under the new cybersecurity law for failing to prevent the spread of harmful information. By flagging rumors before they erupt and reverberate through the platform, WeChat could one day quell fake news before it takes off.

Tencent, operator of WeChat, declined to comment on WeSeer.

Here’s how WeSeer works: as articles are shared between accounts, both on personal newsfeeds and via public accounts, they trace a path through the social network. On WeChat, these traces are especially distinct because the platform is a closed system: unlike Facebook, only first-degree contacts can see your Moments, the app’s newsfeed-like feature.

That means some articles pass through 50 to 60 layers within the social network as they’re shared from account to account, says Qu. By analyzing these propagation paths, you can start classifying articles by their behavior.

Radiating outward over time, the propagation paths look strangely beautiful, like red drops spreading on a page. Other visualizations resemble supernovas, showcasing a bright explosion of activity in the center.

Some paths have an “overall pattern just like a virus,” he explains. “You suddenly capture a lot of attention or you just slowly build up.”
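
To make the propagation-path idea concrete, here is a minimal Python sketch of the kind of features such paths could yield. It is not WeSeer’s code: the Share record, the one-hour “early burst” window, and every name below are assumptions made for illustration. It builds a cascade from individual share events and derives two crude signals: how many layers the article traveled through, and how front-loaded the sharing was (a rough proxy for “suddenly capture a lot of attention” versus “slowly build up”).

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical share record: one hop of an article from one account to another,
# with t_hours measured from the article's first appearance. The field names
# are illustrative only -- this is not WeSeer's actual data model.
@dataclass
class Share:
    source: str
    target: str
    t_hours: float

def propagation_features(shares: list[Share]) -> dict:
    """Derive two crude features from a share cascade:
    depth       -- how many 'layers' the article traveled through
    early_burst -- fraction of shares that happened within the first hour
    """
    children = defaultdict(list)
    targets = set()
    for s in shares:
        children[s.source].append(s.target)
        targets.add(s.target)

    # Roots are accounts that shared the article without receiving it first.
    roots = {s.source for s in shares if s.source not in targets}

    def depth(node, seen):
        if node in seen:  # guard against cycles in messy data
            return 0
        kids = children.get(node, [])
        if not kids:
            return 1
        return 1 + max(depth(k, seen | {node}) for k in kids)

    max_depth = max((depth(r, frozenset()) for r in roots), default=0)
    early_burst = (sum(1 for s in shares if s.t_hours <= 1.0) / len(shares)
                   if shares else 0.0)
    return {"depth": max_depth, "early_burst": early_burst}

# A shallow cascade with a high early_burst resembles the 'sudden attention'
# pattern; a deep cascade with a low early_burst resembles a 'slow build'.
```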

To identify potential falsehoods, Qu’s research group must delve deeper into the accounts themselves to assess how credible they are. For instance, if a computer science professor shares something on AI, perhaps the article should be considered more legitimate. But it’s still a work in progress.

“I think it’s a very challenging problem,” emphasizes Qu. “If you think about it, some rumors are like the truth.”
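
As a purely illustrative sketch of that credibility idea (not part of WeSeer; the accounts, topics, and weights below are invented), one could boost an article’s legitimacy score when the accounts sharing it have expertise relevant to its topic:

```python
# Hypothetical credibility weighting, following the professor example above.
ACCOUNT_EXPERTISE = {
    "cs_professor_01": {"ai", "computer science"},
    "entertainment_blog_99": {"celebrity gossip"},
}

def credibility_boost(sharer: str, article_topics: set[str]) -> float:
    """Return a boost if the sharer's known expertise overlaps the article's
    topics, otherwise a neutral weight of 1.0."""
    expertise = ACCOUNT_EXPERTISE.get(sharer, set())
    return 1.5 if expertise & article_topics else 1.0

def weighted_legitimacy(shares: list[tuple[str, set[str]]]) -> float:
    """Average the boost over every (sharer, article_topics) pair observed."""
    if not shares:
        return 1.0
    return sum(credibility_boost(a, topics) for a, topics in shares) / len(shares)

# Example: an AI article passed along mostly by accounts with AI expertise
# would score above 1.0, while the same article spread by unrelated accounts
# would stay at the neutral baseline.
```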

WeSeer can break down readership by different segments (e.g., location, age, community). This display shows the results of an article published in October 2016: "3 minutes of news for breakfast". Image credit: WeChat-HKUST Joint Lab on Artificial Intelligence Technology.

Cost center

The pressure to control information will only increase for Chinese tech companies, as Beijing tightens its grip on the country’s cyberspace. In August, government regulators released new rules requiring real-name registration for users who post comments. A month later, the Cyberspace Administration of China published another set of regulations that holds creators of online groups accountable for anything discussed in their forums.

That could mean rising costs related to content moderation, as tech companies still largely depend on human moderators. Even at Facebook, taking down gruesome content, such as beheadings and sexual violence, still requires human input – not to mention something as complex as rumor classification or responding to changing government directives.

In preparation for China’s 19th party congress, a key political event that reshuffles top leaders every five years, WeChat began blocking relevant keywords as early as a year prior, according to a report by The Citizen Lab. Due to the sensitivity of the event, researchers found that even seemingly benign keywords were blocked, such as ‘19th Party Congress Power.’

One of the challenges for the lab, which has conducted multiple studies on keyword censorship on WeChat, is that “the censorship is dynamic,” explains Crete-Nishihata. “Something that’s blocked one day is not blocked the next.”

Toutiao, whose news aggregation platform sees over 120 million daily active users, has rapidly grown its team of content auditors and reviewers in Tianjin, a northern city neighboring Beijing. According to a source who spoke to Reuters, the Chinese unicorn has almost a thousand reviewers – up from 30 to 40 two years ago.

Momo, a Chinese social networking app, has also expanded its content moderation team in Tianjin. Since launching its app in 2011, the company has hired more than 400 content reviewers to meet the needs of its growing live streaming business, a spokesperson tells Tech in Asia. Wages for content auditing jobs across different tech firms range from US$455 to US$910 a month, according to job posts on Lagou, a Chinese hiring site.

Toutiao headquarters in Beijing. Photo credit: Tech in Asia.

These tech companies must balance the cost of censorship with running a business. After all, they have their own commercial goals, which may or may not align with the government’s agenda. This tension is illustrated in a report published in October by Blake Miller, a PhD candidate at the University of Michigan, based on a leaked dataset of Weibo censorship logs from 2011 to 2014. According to the logs, Weibo’s content review team disobeyed certain government directives because “they were concerned that censorship on the Sina Weibo platform would drive users to competitor Tencent Weibo’s website,” writes Miller.

“We should not be stricter than Tencent,” states one of the leaked logs, referring to instructions on banning users. “Today maintain user blocks, tomorrow as soon as you receive instructions, release the block.”

In that sense, smarter information controls such as WeSeer could become a competitive advantage. They would not only reduce the cost of human labor but also help companies make judgment calls on what content to block and how to weather the ensuing blowback from readers. Other tech firms, including Toutiao, are also investing in AI-driven analytical tools to help weed out low-quality content and fake news – something state media outlet People’s Daily has slammed Toutiao for in the past.

Currency converted from Chinese yuan. Rate: US$1 = RMB 6.59.

The Robert Bosch Foundation provided travel expenses for Sonja Peteranderl and Eva Xiao to cover this story under the Tandem scholarship.

Source: Tech in Asia
