The messaging platform bans or disables more than 8 million accounts globally on average every month.
“We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred,” WhatsApp said in the report, filed in accordance with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
“The abuse detection operates at three stages of an account’s lifecycle: at registration; during messaging; and in response to negative feedback, which we receive in the form of user reports and blocks. A team of analysts augments these systems to evaluate edge cases and help improve our effectiveness over time,” WhatsApp said.
These accounts were tracked through WhatsApp’s tools and resources to prevent harmful behaviour on the platform, the company said in the report.
The company received 345 grievances between May 15 and June 15, of which a majority, 204, were appeals against bans. It took action on 63 accounts.
In May, the Facebook-owned platform filed a case in the Delhi High Court against the government seeking to block the new IT rules. WhatsApp had opposed the mandate to trace the origin of particular messages sent on the service, saying the service is end-to-end encrypted.
The company said that during the reporting period it banned about 2 million accounts of Indian users, and that more than 95% of such bans were due to the unauthorized use of automated or bulk messaging (“spam”).
It added that it is consistently investing in technology, people and processes to keep users safe and secure on its end-to-end encrypted platform. “In addition to the behavioural signals from accounts, we rely on available unencrypted information including user reports, profile photos, and group photos and descriptions, besides deploying advanced AI tools and resources to detect and prevent abuse on our platform,” the company said. It also said it employs a team of engineers, data scientists, analysts, researchers, and experts in law enforcement, online safety, and technology developments to oversee its user-safety efforts.