Privacy, Technology and Perspective
A Turning Point for Broadcasting Online Misinformation and Disinformation?

For years now, many people have decried the decay of truth. Of particular concern has been the flood of misinformation ("facts" that are not true) and disinformation ("facts" that the speaker knows are not true but publishes anyway) increasingly pouring through every channel of communication. Such falsehoods have been amplified on online social media platforms both by AI-enabled bots and by large-scale broadcast technologies whose algorithms determine what people see when they search or scroll through those platforms.
The issues underlying online misinformation and disinformation are numerous and complex. In our view, the best single source for scholarship in this area is the Misinformation Review, published quarterly by the Shorenstein Center on Media, Politics, and Public Policy at the Harvard Kennedy School of Government:
Its research director, Joan Donovan, recently gave an excellent synopsis of the disinformation crisis, which you can read here:
Commentators have generally suggested that we combat both misinformation and disinformation with more information, social-media literacy, and education in basic civics and critical thinking. But battling this scourge in the digital ecosystem without the aid of the platforms themselves has met with only limited success. And, until now, nothing has convinced the platforms to act.
Yet, after January 6th, and with President Biden’s administration and an evenly split Congress, we wonder whether things are starting to change.
Soon after the mob stormed the Capitol and five people died, the platforms started to act. Facebook suspended then-President Trump’s account and has referred the issue to its powerful Oversight Board, which you can read about here:
After months of applying disclaimers and “this is disputed” labels, Twitter dumped Trump’s account as well. Additionally, Apple and Google dropped Parler from their app stores, and Amazon Web Services (AWS) stopped hosting Parler.
The significance of these recent events, however, lies not in the inevitable pushback from angry users, but in the remarkable fact that the platforms have at last begun to take powerful action. Schools, courts, scholars, and certainly Congress have failed to slow the algorithm-driven flood of misinformation and disinformation that has poisoned our public conversations, and have likewise failed to convince users to be more thoughtful about how they use the platforms.
Perhaps the platforms are starting to see – at last – that because their algorithms are largely responsible for amplifying this flood, they bear at least some responsibility for slowing it, too. And perhaps the political pressure of a new administration is helping with this realization.
Regardless, we only wish it hadn’t taken the recent sights, sounds, and deaths to prompt the platforms to enforce their own terms of service.
Hosch & Morris, PLLC is a boutique law firm dedicated to data privacy and protection, cybersecurity, the Internet and technology. Open the Future℠.