The Gatekeepers of Knowledge Don’t Want Us to See What They Know

Meanwhile, the Conservatives focus solely on Canadian gatekeepers:

We are living through an information revolution. The traditional gatekeepers of knowledge — librarians, journalists and government officials — have largely been replaced by technological gatekeepers — search engines, artificial intelligence chatbots and social media feeds.

Whatever their flaws, the old gatekeepers were, at least on paper, beholden to the public. The new gatekeepers are fundamentally beholden only to profit and to their shareholders.

That is about to change, thanks to a bold experiment by the European Union.

With key provisions going into effect on Aug. 25, an ambitious package of E.U. rules, the Digital Services Act and Digital Markets Act, is the most extensive effort toward checking the power of Big Tech (beyond the outright bans in places like China and India). For the first time, tech platforms will have to be responsive to the public in myriad ways, including giving users the right to appeal when their content is removed, providing a choice of algorithms and banning the microtargeting of children and of adults based upon sensitive data such as religion, ethnicity and sexual orientation. The reforms also require large tech platforms to audit their algorithms to determine how they affect democracy, human rights and the physical and mental health of minors and other users.

This will be the first time that companies will be required to identify and address the harms that their platforms enable. To hold them accountable, the law also requires large tech platforms like Facebook and Twitter to provide researchers with access to real-time data from their platforms. But there is a crucial element that has yet to be decided by the European Union: whether journalists will get access to any of that data.

Journalists have traditionally been at the front lines of enforcement, pointing out harms that researchers can expand on and regulators can act upon. The Cambridge Analytica scandal, in which we learned how consultants for Donald Trump’s presidential campaign exploited the Facebook data of millions of users without their permission, was revealed by The New York Times and The Observer of London. BuzzFeed News reported on the offensive posts that detailed Facebook’s role in enabling the massacre of Rohingyas. My team at ProPublica uncovered how Facebook allowed advertisers to discriminate in employment and housing ads.

But getting data from platforms is becoming harder and harder. Facebook has been particularly aggressive, shutting down the accounts of researchers at New York University in 2021 for “unauthorized means” of accessing Facebook ads. That year, it also legally threatened a European research group, AlgorithmWatch, forcing it to shut down its Instagram monitoring project. And earlier this month, Twitter began limiting all its users’ ability to view tweets in what the company described as an attempt to block automated collection of information from Twitter’s website by A.I. chatbots as well as bots, spammers and other “bad actors.”

Meanwhile, the tech companies have also been shutting down authorized access to their platforms. In 2021, Facebook disbanded the team that oversaw the analytics tool CrowdTangle, which many researchers used to analyze trends. This year, Twitter replaced its free researcher tools with a paid version that is prohibitively expensive and unreliable. As a result, the public has less visibility than ever into how our global information gatekeepers are behaving.

Last month, the U.S. senator Chris Coons introduced the Platform Accountability and Transparency Act, legislation that would require social media companies to share more data with researchers and provide immunity to journalists collecting data in the public interest with reasonable privacy protections.

But as it stands, the European Union’s transparency efforts rest on European academics who will apply to a regulatory body for access to data from the platforms and then, hopefully, issue research reports.

That is not enough. To truly hold the platforms accountable, we must support the journalists who are on the front lines of chronicling how despots, trolls, spies, marketers and hate mobs are weaponizing tech platforms or being enabled by them.

The Nobel Peace Prize winner Maria Ressa runs Rappler, a news outlet in the Philippines that has been at the forefront of analyzing how Filipino leaders have used social media to spread disinformation, hijack social media hashtags, manipulate public opinion and attack independent journalism.

Last year, for instance, Rappler revealed that the majority of Twitter accounts using certain hashtags in support of Ferdinand Marcos Jr., who was then a presidential candidate, had been created in a one-month period, making it likely that many of them were fake accounts. With the Twitter research feed that Rappler used now shuttered, and the platforms cracking down on data access, it’s not clear how Ms. Ressa and her colleagues can keep doing this type of important accountability journalism.
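Analyses like Rappler’s rest on simple aggregate statistics over public account metadata. As an illustrative sketch — not Rappler’s actual method, and with hypothetical data — one could measure how tightly a hashtag campaign’s supporting accounts cluster by creation date, since a large share created in a single month is a common signal of coordinated fake accounts:

```python
from collections import Counter
from datetime import date

def creation_month_concentration(creation_dates):
    """Return the largest share of accounts created in any single
    calendar month, given account creation dates."""
    months = Counter((d.year, d.month) for d in creation_dates)
    return max(months.values()) / len(creation_dates)

# Hypothetical sample: 8 of 10 accounts created in January 2021.
dates = [date(2021, 1, d) for d in range(1, 9)] + [date(2019, 5, 1), date(2020, 8, 15)]
share = creation_month_concentration(dates)
print(round(share, 2))  # 0.8 -- a high share suggests coordinated creation
```

A real investigation would of course need the creation dates in the first place — which is exactly the kind of public metadata that paid or shuttered APIs now put out of reach.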

Ms. Ressa asked the European Commission, in public comments filed in May, to provide journalists with “access to real-time data” so they can provide “a macro view of patterns and trends that these technology companies create and the real-world harms they enable.” (I also filed comments to the European Commission, along with more than a dozen journalists, asking the commission to support access to platform data for journalists.)

As Daphne Keller, the director of the program on platform regulation at Stanford’s Cyber Policy Center, argues in her comments to the European Union, allowing journalists and researchers to use automated tools to collect publicly available data from platforms is one of the best ways to ensure transparency because it “is a rare form of transparency that does not depend on the very platforms who are being studied to generate information or act as gatekeepers.”

Of course, the tech platforms often push back against transparency requests by claiming that they must protect the privacy of their users. Which is hilarious, given that their business models are based on mining and monetizing their users’ personal data. But putting that aside, the privacy interests of users are not being implicated here: The data that journalists need is already public for anyone who has an account on these services.

What journalists lack is access to large quantities of public data from tech platforms in order to understand whether an event is an anomaly or representative of a larger trend. Without that access, we will continue to have what we have now: a lot of anecdotes about this piece of content or that user being banned, but no real sense of whether these stories are statistically significant.
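The distinction between anecdote and trend is ultimately a statistical one. As a minimal sketch with made-up numbers, a two-proportion z-test is one standard way to judge whether, say, a removal rate genuinely differs between two groups of users once you have large public samples:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two sample proportions,
    using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 120 of 2,000 posts removed in group A vs. 80 of 2,000 in group B.
z = two_proportion_z(120, 2000, 80, 2000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

Without bulk data access, journalists can report the individual removals but never compute the denominators this kind of test requires.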

Journalists write the first draft of history. If we can’t see what is happening on the biggest speech platforms in the world, that history will be written for the benefit of platforms — not the public.

Source: The Gatekeepers of Knowledge Don’t Want Us to See What They Know

The Data That Turned the World Upside Down – Motherboard

While much of this is alarming and disturbing – particularly voter suppression – I would love to see some psychometrics complement conventional polling with respect to immigration issues to test different ways of posing questions:

But to what extent did psychometric methods influence the outcome of the election? When asked, Cambridge Analytica was unwilling to provide any proof of the effectiveness of its campaign. And it is quite possible that the question is impossible to answer.

And yet there are clues: There is the fact of the surprising rise of Ted Cruz during the primaries. There was also an increased number of voters in rural areas, and a decline in the number of African-American early votes. The fact that Trump spent so little money may also be explained by the effectiveness of personality-based advertising. So may the fact that he invested far more in digital than TV campaigning compared to Hillary Clinton. Facebook proved to be the ultimate weapon and the best election campaigner, as Nix explained, and as comments by several core Trump campaigners demonstrate.

Many voices have claimed that the statisticians lost the election because their predictions were so off the mark. But what if statisticians in fact helped win the election—but only those who were using the new method? It is an irony of history that Trump, who often grumbled about scientific research, used a highly scientific approach in his campaign.

Another big winner is Cambridge Analytica. Its board member Steve Bannon, former executive chair of the right-wing online newspaper Breitbart News, has been appointed as Donald Trump’s senior counselor and chief strategist. Whilst Cambridge Analytica is not willing to comment on alleged ongoing talks with UK Prime Minister Theresa May, Alexander Nix claims that he is building up his client base worldwide, and that he has received inquiries from Switzerland, Germany, and Australia. His company is currently touring European conferences showcasing their success in the United States. This year three core countries of the EU are facing elections with resurgent populist parties: France, Holland and Germany. The electoral successes come at an opportune time, as the company is readying for a push into commercial advertising.

Kosinski has observed all of this from his office at Stanford. Following the US election, the university is in turmoil. Kosinski is responding to developments with the sharpest weapon available to a researcher: a scientific analysis. Together with his research colleague Sandra Matz, he has conducted a series of tests, which will soon be published. The initial results are alarming: The study shows the effectiveness of personality targeting by showing that marketers can attract up to 63 percent more clicks and up to 1,400 more conversions in real-life advertising campaigns on Facebook when matching products and marketing messages to consumers’ personality characteristics. They further demonstrate the scalability of personality targeting by showing that the majority of Facebook Pages promoting products or brands are affected by personality and that large numbers of consumers can be accurately targeted based on a single Facebook Page.

In a statement after the German publication of this article, a Cambridge Analytica spokesperson said, “Cambridge Analytica does not use data from Facebook. It has had no dealings with Dr. Michal Kosinski. It does not subcontract research. It does not use the same methodology. Psychographics was hardly used at all. Cambridge Analytica did not engage in efforts to discourage any Americans from casting their vote in the presidential election. Its efforts were solely directed towards increasing the number of voters in the election.”

The world has been turned upside down. Great Britain is leaving the EU, Donald Trump is president of the United States of America. And in Stanford, Kosinski, who wanted to warn against the danger of using psychological targeting in a political setting, is once again receiving accusatory emails. “No,” says Kosinski, quietly and shaking his head. “This is not my fault. I did not build the bomb. I only showed that it exists.”

Source: The Data That Turned the World Upside Down – Motherboard