Weekly New/Digital Media (21)

Mark Zuckerberg: Facebook will fix 'fake news' problem but will not verify stories itself


Summary:
"Facebook's CEO says the social network is taking measures to prevent the spread of misinformation but he and his colleagues do not want to be 'arbiters of truth'" 

This action was deemed necessary amid fears that the outcome of the US election was partly a result of the fake news circulating on Facebook.
The tactics announced by Mr Zuckerberg include making it easier for users to report fake stories, warning readers that a story may be false, and working closely with 'fact checking' organisations. The issue of misinformation has been around for a long time, made worse by the fact that Facebook tries to connect audiences with news stories that will be meaningful to them.
“We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible,” he wrote. He does not want to suppress content and risk discouraging certain opinions by mistakenly labelling them as 'false content'.
One example: in the run-up to the election, The Denver Guardian (a fake news site publishing made-up stories) ran the widely shared story “FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide.”
He claims: “We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact-checking systems and learn from them.”
Facebook also aims to alter its advertising policies to make it more difficult for people to profit from fake news stories (since much fake news is produced for money rather than political influence).


My opinion:
Facebook is known for its slow reactions to flagged and reported content, so this does not necessarily mean it will be quick to act on the fake news stories that are flagged up. Moreover, any system designed to distinguish fake from real news would face the same problems as similar software does now. Facebook's suggested stories, and the removal of the 'Vietnamese girl' image, are already based on algorithms, and those algorithms seem to be getting things wrong.

The software will not be able to detect human emotion or exercise any social intelligence, and it will most likely be based on old existing data for its empirical evidence, so how would it differentiate between a new news story and a false one? Furthermore, some 'news stories' may be based on individual accounts, raising awareness against propaganda and so on, and Facebook is likely to be biased when determining the validity of these various news stories, so should such a system even be put in place?

Yes, false news may be a form of free speech, but does that mean audiences should be manipulated with outright lies?
