Facebook removed 583 million fake accounts in the first quarter


The social network is playing the transparency card on moderation and publishing detailed figures on its interventions for the first time. While it appears to detect manipulation attempts effectively, their scale remains staggering.

Criticized for letting fake news proliferate during the US presidential campaign and for failing to intervene quickly enough against live-streamed abuses by certain users, Facebook is striving to fight more effectively against content that violates its rules. This week it is publishing a first-of-its-kind report taking stock of its moderation actions. The document covers only posts that actually appeared on users' screens and were not blocked beforehand by automatic detection systems. The period covered runs from October 2017 to March 2018. And the numbers are impressive, which is hardly surprising given that the platform has 2 billion active users and that a majority of the population in some Western countries connects to it.

Up to 4% of active users may be fake accounts

The first figure is staggering: 583 million fake accounts were deleted from January to March 2018, compared with 694 million from October to December 2017. Facebook estimates that 3 to 4% of its monthly active users are in fact fake accounts. Some are created manually by individuals, but most are mass-produced by bots or scripts. Their numbers fluctuate cyclically and are strongly influenced by waves of cyberattacks. Facebook stresses that it is imperative to detect them as quickly as possible, as they are often the starting point for other violations such as spam or scams. The platform seems to be fighting this phenomenon effectively: it says it found 98.5% of the fake accounts it acted on in the first quarter of 2018 before any user reported them.

Often linked to fake accounts, spam is another of Facebook's battles. The company defines it as any activity that is "automated (published by bots or scripts, for example) or coordinated (using multiple accounts to disseminate and promote deceptive content)," the report reads. "This includes commercial spam, misleading advertising, fraud, malicious links and the promotion of counterfeit products," the social networking giant adds. Here again, the goal is to intervene as quickly as possible to prevent spam from spreading. Facebook says it took action on 837 million pieces of content from January to March, against 727 million from October to December. In this area it borders on excellence, priding itself on detecting 99.7% of spam before it is reported.

Terrorist propaganda difficult to quantify

Some problems, however, Facebook struggles to quantify. Chief among them is terrorist propaganda. The company simply admits that it cannot reliably assess the number of pieces of content involved, but points out that their share appears extremely low compared with the mass of violent or sexual contributions. Not without self-satisfaction, Facebook also advances the theory that its automatic moderation works so well that most content promoting terrorism is removed before ever being seen. It nevertheless concedes that it took action on 1.9 million pieces of content already online in the first quarter, and prides itself on having detected 99.5% of them before they were reported.


Facebook struggles to moderate hate content

Its weak point is hate speech, which is exceedingly difficult to spot. The problem is understanding context, and for the moment artificial intelligence is of little help. That explains why this category shows the worst moderation performance, with only 38% of such content detected before users reported it at the start of the year.

Another difficulty is the depiction of nudity and sex. The well-known case of Gustave Courbet's painting The Origin of the World, repeatedly censored by Facebook, comes to mind. While the social network now allows some artistic representations of nudity, it remains extremely cautious on this point. It estimates that it took action on 21 million pieces of content in the first quarter.

Finally, there is what Facebook calls "graphic violence", that is, any content that "glorifies violence or celebrates the suffering or humiliation of others". This time, the likelihood of encountering it is much higher: of every 10,000 pieces of content viewed on screen, between 22 and 27 are classified as "violent", a rate of 0.22% to 0.27%. This rate has risen partly because Facebook has improved its detection technology, but the platform also concedes that a greater volume of such content is appearing.

Visiting Paris to host a "Content Summit", Facebook's director of content policy Monika Bickert told the newspaper Le Monde that among the company's 7,500 moderators, a French-speaking team takes turns monitoring content in our language 24 hours a day.
