Usher: Viewpoint diversity [at universities]

Sound critique of the methodology used and resulting conclusions:

Last week, the Macdonald-Laurier Institute released a truly bad paper on “viewpoint diversity” at Canadian universities.  How bad was it, you ask?  Really bad.  Icelandic rotting shark bad.  Crystal Pepsi bad.  Final season of Game of Thrones bad. 

The basic thrust of the paper, co-written by Christopher Dummitt and Zachary Patterson, is that

  • The Canadian professoriate is well to the left of the Canadian public
  • Within the academy, those who describe themselves as being on the right are much more likely to say they “self-censor” or find work a “hostile environment”
  • This is an attack on academic freedom
  • There should therefore, in the name of academic freedom, be a significant government bureaucracy devoted to ensuring that right-wingers are hired more often and feel more at home within the academy.

Dummitt and Patterson are not, of course, the first to note that the academy is somewhat left-leaning.  Back in 2008, M.R. Nakhaie and Barry D. Adam, both then at the University of Windsor, published a study in the Canadian Journal of Sociology showing that university professors were about three times as likely to have voted NDP in the 2000 general election as the general population (the NDP got about 8.5% of the vote in that election), about as likely to have voted Liberal, and less likely to have voted Bloc, Conservative, or Reform.   Being at a more prestigious institution made a professor less likely to support the NDP, as did being a professor in business or in the natural sciences. 

(This effect of discipline on faculty political beliefs is not a Canadian phenomenon but a global one.  Here is a summary of US research on the issue, and an old but still interesting article from Australia which touches on some of the same issues). 

Anyways, this new study starts out with a survey of professors.  The sample they ended up with was ludicrously biased: 30% from the humanities, 47% from the social sciences and 23% from what they call “STEM” (where are the health professions?  I am going to assume they are in STEM).  In fact, humanities professors are 13% of the overall faculty, social science profs 23%, and the rest of the professoriate 64%.  Despite having read the Nakhaie/Adam paper, which explains exactly how to get the data that would allow a re-weighting of the sample (you can buy it from Statcan, or you can look up table 3.15 in the CAUT Almanac, which is a couple of years out of date but hardly incorrect), the authors claim that “relatively little information was available for the population of professors in Canada so no weights were developed”.  In other words, either through incompetence or deliberate feigning of ignorance, the authors created a sample which overrepresented the known-to-be most leftist bits of the academy by a factor of two, underrepresented the known-to-be less leftist bits by a similar factor, and just blithely carried on as if nothing were amiss.
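For what it’s worth, the fix is not complicated.  Here is a minimal post-stratification sketch in Python using the disciplinary shares quoted above; the per-discipline “left” shares are invented purely for illustration, since the point is the mechanics of weighting rather than the actual numbers.

    # Minimal post-stratification sketch using the disciplinary shares quoted above.
    # The per-discipline "left" shares below are invented for illustration only.

    sample_share = {"humanities": 0.30, "social_sciences": 0.47, "stem_other": 0.23}
    population_share = {"humanities": 0.13, "social_sciences": 0.23, "stem_other": 0.64}

    # Post-stratification weight for each discipline: population share / sample share
    weights = {d: population_share[d] / sample_share[d] for d in sample_share}

    # Hypothetical share of respondents in each discipline identifying as "left"
    left_share = {"humanities": 0.95, "social_sciences": 0.92, "stem_other": 0.75}

    unweighted = sum(sample_share[d] * left_share[d] for d in sample_share)
    weighted = sum(sample_share[d] * weights[d] * left_share[d] for d in sample_share)

    print(f"unweighted 'left' share:  {unweighted:.2f}")  # dominated by over-sampled fields
    print(f"re-weighted 'left' share: {weighted:.2f}")    # what the Statcan/CAUT shares imply

Each respondent simply gets a weight equal to their discipline’s population share divided by its sample share; the Statcan or CAUT figures are all you need.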

Then – this is the good one if you are familiar with the conventions of Canadian political science – they divided respondents into “left-wing” and “right-wing”, partly by asking them to self-locate on a four-point Likert scale which left no space to self-identify as a centrist, and partly by asking them about their views on various issues or how they self-described on a simple left-right scale.  If they voted Green, Bloc, NDP or Liberal they were “left-wing”, and if they voted Conservative or People’s Party they were “right-wing”.  Both methods came up with a similar division between “left” and “right” among professors (roughly 88-12, though again that’s a completely unadjusted figure).  Now, generally speaking, no one in Canadian political science forces such a left-right choice, because the Liberals really aren’t particularly left-wing.  That’s why there is nearly always room for a “centre” option.  Nakhaie and Adam certainly allowed one – why didn’t Dummitt and Patterson? 

Anyways, having vastly exaggerated the degree of polarization and the pinkness of professorial views, and on this basis declared a “political monoculture”, the authors then go on to note that the embittered right-wing professors appear to have different feelings about the workplace than do the rest of their colleagues.  They are three times more likely, for example, to say their departmental climate is “hostile”, or to say that they “fear negative repercussions” if their political views – specifically, on social justice, gender, and Equity, Diversity and Inclusion – were to become known.  They are twice as likely to say they have “refrained from airing views or avoided pursuing or publishing research” (which is a hell of a conflation of things if what you’re interested in examining is academic freedom).  On the basis of this, plus a couple of other questions that conflate things like job loss with “missed professional opportunities” or that pose ludicrous hypothetical questions about prioritizing social justice versus academic freedom, they declare a “serious crisis” which has “disturbing implications for the ability of universities to continue to act as bastions of open inquiry and rational thought in modern Canada”, one which requires legislation on academic freedom and a raft of measures that would effectively ban universities from anti-racism initiatives.

Look, this is a bad study, full stop.  The methodology and question design are so obviously terrible that it seems hard to avoid the conclusion that its main purpose was to confirm the authors’ biases, and clearly whatever editorial/peer review process the Macdonald Laurier Institute uses to oversee these publications needs major work.  But if a result is significant enough, even a bad methodology can find it: might this be such a case?

Maybe.  Part of the problem is that this paper spends a lot of effort conflating “viewpoint diversity” with “party identification diversity”, which is absurd.  I mean, there are countries which allocate academic places based on party identity, but I doubt those are places where many Canadian academics would want to teach.  Further, on the specific issues where people apparently feel the need to “not share their opinions” – issues concerning race and gender – there are, in addition to a censorious left, a lot of bad-faith right-wing concern trolls, which rather tempers my ability to share the authors’ concern that this is necessarily a “bad thing”.  And finally, the idea that being an academic means you should be able to say whatever you want without any possibility of facing criticism or social ostracism – which I think is implicitly what the authors are suggesting – is a rather significant widening of the concept of academic freedom, one that wouldn’t find universal acceptance.

I think the most you can really say about these issues is, first, that viewpoint diversity should be a concern of every department, but that reducing it to “party identification” diversification or some notion of both-sidesism (anti-vaxxers in virology departments, anyone?) should be seen for the grotesquerie that it is.  Second, yes, society (not just universities) is more polarized around issues like gender and race, and finding acceptable and constructive common language in which to talk about these concepts is difficult; but, my dudes, banging on about how someone who happens to have a teaching position is absolved from the hard work of finding that language because of some abstract notion of academic freedom is not helpful. 

And in any event, you could make such points without the necessity of publishing a methodological omnishambles of a report like this one.  Just sayin’.

Source: Viewpoint diversity

How to Change Minds? A Study Makes the Case for Talking It Out.

Interesting study, although hard to apply in practice (except to ensure that there are no blowhards!):

Co-workers stuck on a Zoom call, deliberating a new strategy for a crucial project. Roommates at the kitchen table, arguing about how to split utility bills fairly. Neighbors at a city meeting, debating how to pay for street repairs.

We’ve all been there — in a group, trying our best to get everyone on the same page. It’s arguably one of the most important and common undertakings in human societies. But reaching agreement can be excruciating.

“Much of our lives seem to be in this sort of Rashomon situation — people see things in different ways and have different accounts of what’s happening,” Beau Sievers, a social neuroscientist at Dartmouth College, said.

A few years ago, Dr. Sievers devised a study to improve understanding of how exactly a group of people achieves a consensus and how their individual brains change after such discussions. The results, recently published online but not yet peer-reviewed, showed that a robust conversation that results in consensus synchronizes the talkers’ brains — not only when thinking about the topic that was explicitly discussed, but related situations that were not.

The study also revealed at least one factor that makes it harder to reach accord: a group member whose strident opinions drown out everyone else.

“Conversation is our greatest tool to align minds,” said Thalia Wheatley, a social neuroscientist at Dartmouth College who advises Dr. Sievers. “We don’t think in a vacuum, but with other people.”

Dr. Sievers designed the experiment around watching movies because he wanted to create a realistic situation in which participants could show fast and meaningful changes in their opinions. But he said it was surprisingly difficult to find films with scenes that could be viewed in different ways. “Directors of movies are very good at constraining the kinds of interpretations that you might have,” he said.

Reasoning that smash hits typically did not offer much ambiguity, Dr. Sievers focused on films that critics loved but that did not draw blockbuster audiences, including “The Master,” “Sexy Beast” and “Birth,” a 2004 drama in which a mysterious young boy shows up at a woman’s engagement party.

None of the study’s volunteers had seen any of the films before. While lying in a brain scanner, they watched scenes from the various movies without sound, including one from “Birth” in which the boy collapses in a hallway after a tense conversation with the elegantly dressed woman and her fiancé.

After watching the clips, the volunteers answered survey questions about what they thought had happened in each scene. Then, in groups of three to six people, they sat around a table and discussed their interpretations, with the goal of reaching a consensus explanation.

All of the participants were students in the same master of business administration program, and many of them knew each other to varying degrees, which made for lively conversations reflecting real-world social dynamics, the researchers said.

After their chats, the students went back into the brain scanners and watched the clips again, as well as new scenes with some of the same characters. The additional “Birth” scene, for example, showed the woman tucking the little boy into bed and crying.

The study found that the group members’ brain activity — in regions related to vision, sound, attention, language and memory, among others — became more aligned after their conversation. Intriguingly, their brains were synchronized while they watched the scenes they had discussed, as well as the novel ones.
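The article doesn’t spell out how “alignment” was measured, but analyses of this kind are typically built on inter-subject correlation: correlate each viewer’s activity time series for a brain region with every other viewer’s, and average over pairs.  A rough sketch of that general idea with simulated data (not the authors’ actual pipeline):

    # Rough inter-subject correlation (ISC) sketch with simulated data.
    # Illustrates the general idea of "neural alignment", not the study's actual method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_timepoints = 5, 200

    # Simulated activity in one brain region: a shared signal plus individual noise.
    shared = rng.standard_normal(n_timepoints)
    subjects = np.array([0.6 * shared + 0.8 * rng.standard_normal(n_timepoints)
                         for _ in range(n_subjects)])

    # ISC: average pairwise Pearson correlation across subjects.
    corr = np.corrcoef(subjects)              # subjects x subjects correlation matrix
    pairs = np.triu_indices(n_subjects, k=1)  # unique subject pairs
    print(f"mean inter-subject correlation: {corr[pairs].mean():.2f}")

The more a shared signal dominates the individual noise, the higher the mean pairwise correlation climbs; that is the intuition behind groups becoming “more aligned” after conversation.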

Groups of volunteers came up with different interpretations of the same movie clip. Some groups, for example, thought the woman was the boy’s mother and had abandoned him, whereas others thought they were unrelated. Although every group had watched the same clips, the brain patterns of one group were meaningfully different from those of another; within each group, however, the activity was far more synchronized.

The results have been submitted for publication in a scientific journal and are under review.

“This is a bold and innovative study,” said Yuan Chang Leong, a cognitive neuroscientist at the University of Chicago who was not involved in the work.

The results jibe with previous research showing that people who share beliefs tend to share brain responses. For example, a 2017 study presented volunteers with one of two opposite interpretations of “Pretty Mouth and Green My Eyes,” a short story by J.D. Salinger. The participants who had received the same interpretation had more aligned brain activity when listening to the story in the brain scanner.

And in 2020, Dr. Leong’s team reported that when watching news footage, brain activity in conservatives looked more like that in other conservatives than that in liberals, and vice versa.

The new study “suggests that the degree of similarity in brain responses depends not only on people’s inherent predispositions, but also the common ground created by having a conversation,” Dr. Leong said.

The experiment also underscored a dynamic familiar to anyone who has been steamrollered in a work meeting: An individual’s behavior can drastically influence a group decision. Some of the volunteers tried to persuade their groupmates of a cinematic interpretation with bluster, by barking orders and talking over their peers. But others — particularly those who were central players in the students’ real-life social networks — acted as mediators, reading the room and trying to find common ground.

The groups with blowhards were less neurally aligned than were those with mediators, the study found. Perhaps more surprising, the mediators drove consensus not by pushing their own interpretations, but by encouraging others to take the stage and then adjusting their own beliefs — and brain patterns — to match the group.

“Being willing to change your own mind, then, seems key to getting everyone on the same page,” Dr. Wheatley said.

Because the volunteers were eagerly trying to collaborate, the researchers said that the study’s results were most relevant to situations, like workplaces or jury rooms, in which people are working toward a common goal.

But what about more adversarial scenarios, in which people have a vested interest in a particular position? The study’s results might not hold for a person negotiating a raise or politicians arguing over the integrity of our elections. And for some situations, like creative brainstorming, groupthink may not be an ideal outcome.

“The topic of conversation in this study was probably pretty ‘safe,’ in that no personally or societally relevant beliefs were at stake,” said Suzanne Dikker, a cognitive neuroscientist and linguist at New York University, who was not involved in the study.

Future studies could zero in on brain activity during consensus-building conversations, she said. This would require a relatively new technique, known as hyperscanning, which can simultaneously measure multiple people’s brains. Dr. Dikker’s work in this arena has shown that personality traits and conversational dynamics like taking turns can affect brain-to-brain synchrony.

Dr. Wheatley agreed. The neuroscientist said she has long been frustrated with her field’s focus on the isolated brain.

“Our brains evolved to be social: We need frequent interaction and conversation to stay sane,” she said. “And yet, neuroscience still putters along mapping out the single brain as if that will achieve a deep understanding of the human mind. This has to, and will, change.”

Source: How to Change Minds? A Study Makes the Case for Talking It Out.

Data sharing should not be an afterthought in digital health innovation

Agree that data sharing is intrinsic to healthcare innovation generally, not just digital innovation.

In Ontario, Epic provides a measure of integration for providers and patients, which I have found useful for my personal health data (and I like the fact that I see my test results sometimes earlier than my doctors!).

But surprisingly, there is no mention of CIHI and its current role in compiling healthcare data, which I have used to analyse trends in birth tourism:

Within Canada and abroad, many health-care organizations and health authorities struggle to share data effectively with biomedical researchers. The pandemic has accentuated and brought more attention to the need for a better data-sharing ecosystem in biomedical sciences to enable research and innovation.

The siloed and often entirely disconnected data systems suffer from a lack of an interoperable infrastructure and a common policy framework for big data-sharing. These are required not only for rapidly responding to emergency situations such as a global pandemic, but also for addressing inefficiencies in hospitals, clinics and public health organizations. Ultimately this may result in delays in providing critical care and formulating public health interventions. An integrated framework could improve collaboration among practitioners and researchers across disciplines and yield improvements and innovations.

Significant investments and efforts are currently underway in Canada by hospitals and health authorities to modernize health data management. This includes the adoption of electronic health record systems (EHRs) and cloud computing infrastructure. However, these large-scale investments do not consider data-sharing needs to maximize secondary use of health data by research communities.

For example, the adoption of Cerner, a health information technology provider, as an EHR system in British Columbia represents the single largest investment in the history of B.C. health care. It promises improved data-sharing, and yet the framework for data-sharing is non-existent.

Operationalization of a data-sharing system is complex and costly, and runs the risk of being both too little and too much of a regulatory burden. Much can be learned from both the SARS and COVID-19 pandemics in formulating the next steps. For example, a national committee was formed after SARS to propose the creation of a centralized database to share public health data (the National Advisory Committee on SARS). A more recent example is the Pan-Canadian Health Data Strategy, which aims to support the effective creation, exchange and use of critical health data for the benefit of Canadians.

New possibilities to help health-care providers and users safely share information are producing innovative solutions that deal with a growing body of data while protecting privacy. The decrease in storage costs, the increase in inexpensive processing power and the advance of platforms as a service (PaaS) via cloud computing are democratizing and commoditizing analytics in health care. Privacy-enhancing technologies (PETs), backed by national statistical organizations, signal new possibilities for providers and users to share information safely.

Researchers as major data consumers recognize the importance of sound management practices. While these practices focus on the responsibilities of research institutions, they also promote sharing of biomedical data. Two examples are the National Institutes of Health’s data-sharing policy and Canada’s tri-agency research data management policy. These policies are based on an understanding of what’s needed in infrastructure modernization, in tandem with what’s needed for robust data-sharing and good management policies.

What about hospitals and health authorities as data producers? Who is forging a new structure and policy to direct them across Canada to increase data-sharing capacity?

Public health organizations operate with a heavy burden to comply with a multitude of regulations that affect data-sharing and management. This challenge is compounded by uncertainty surrounding risk quantification for open data-sharing and community-based computing. This uncertainty often translates into the perception of high risk where risk tolerance is low by necessity. As a result, there is a barrier to investing in new infrastructure and, just as importantly, investing in cultural change in management during decision-making processes related to budgeting.

Better understanding of the system is needed before taking the next steps, particularly when looking at outdated infrastructure governed by policies that never anticipated innovation and weren’t designed to accommodate rapid software deployment. Examining and assessing the current state of the Canadian health-care IT infrastructure should include an evaluation of the benefits of broad data-sharing to help foster momentum for biomedical advances. By looking at the IT infrastructure as it stands now, we can see how inaction costs society time, money and patient health.

One approach is to create a federated system: a common system capable of federated data-sharing and query processing. Federated data-sharing is defined as a series of decentralized, interconnected systems that allow data to be queried or analyzed by trusted participants. These systems must satisfy regulatory and legal requirements, including system security and data protection by design; records of processing activities; encryption; management of data subject consent, personal data deletion and personal data portability; and security of personal data.
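To make “federated query processing” concrete, here is a toy sketch of the pattern: each site holds its records locally and answers only aggregate queries, and a coordinator combines the site-level answers. The sites, records and query are invented; a real deployment would add the authentication, consent, audit and disclosure controls listed above.

    # Toy sketch of federated aggregation: raw records never leave each site;
    # only aggregate counts are returned to the coordinator. Entirely illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        records: list = field(default_factory=list)  # records held locally, never shared

        def count_matching(self, predicate):
            """Answer an aggregate query without exposing individual records."""
            return sum(1 for r in self.records if predicate(r))

    sites = [
        Site("hospital_a", [{"age": 54, "dx": "T2D"}, {"age": 31, "dx": "asthma"}]),
        Site("hospital_b", [{"age": 67, "dx": "T2D"}]),
    ]

    # The coordinator sends the same query to every site and sums the returned counts.
    query = lambda r: r["dx"] == "T2D" and r["age"] >= 50
    total = sum(site.count_matching(query) for site in sites)
    print(f"patients matching the query across the federation: {total}")

The design choice worth noting is that analysis travels to the data rather than the reverse, which is what lets trusted participants query without any site surrendering its records.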

Because much of Canada’s IT infrastructure for health data management is obsolete, there needs to be significant investment. As well, the underlying infrastructure needs to be rebuilt to communicate externally with digital applications through a security framework for continuous authentication and authorization.

Whatever system is used must be capable of ensuring patient privacy. For example, individuals might be re-identified by reverse-engineering data sets that have been cross-referenced. The goal is to minimize ambiguity in assessing the associated risk, so as to allow compliance with privacy protections in law and practice. Widely used frameworks exist that address these issues.
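A small illustration of the cross-referencing risk: a dataset is only as private as the size of its smallest group of records sharing the same quasi-identifiers (postal code, birth year, sex and the like). The records and the choice of quasi-identifiers below are invented; this is a bare-bones k-anonymity check, not a full risk assessment.

    # Bare-bones k-anonymity check: count records sharing each quasi-identifier
    # combination; groups of size 1 are easy to re-identify by cross-referencing.
    from collections import Counter

    records = [
        {"postal": "V6T", "birth_year": 1980, "sex": "F"},
        {"postal": "V6T", "birth_year": 1980, "sex": "F"},
        {"postal": "K1A", "birth_year": 1955, "sex": "M"},
    ]
    quasi_identifiers = ("postal", "birth_year", "sex")

    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    k = min(groups.values())
    risky = [g for g, n in groups.items() if n == 1]
    print(f"dataset is {k}-anonymous; unique (risky) combinations: {risky}")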

The market now offers technologies and cost-effective methods that can enable large-scale data-sharing while meeting privacy-protection criteria. What is needed is the collective will to proceed: to upgrade obsolete data infrastructure and address policy barriers. Initiatives and applications in other jurisdictions or settings face similar challenges, but our research and development can be accelerated to help enhance data sharing and improve health outcomes.

Source: Data sharing should not be an afterthought in digital health innovation

Does the federal government fund and support racially discriminating groups and individuals?

Drawing the contrast between the relative kid-glove treatment of the “Freedom” Convoy and the further platform provided to them in the Rouleau Commission:

Federal funding of hateful messaging has been in the news lately after Prime Minister Justin Trudeau condemned the comments of a senior consultant who was working on a federally funded anti-racist project. As a result of Laith Marouf’s anti-French and anti-Semitic postings, the $130,000 funding of the group, the Community Media Advocacy Centre, was suspended. This happened, despite the federal government apparently knowing about Marouf’s past.

Contrast that to the multi-million-dollar federal Public Order Emergency Commission inquiry into the federal government’s temporary use of the 1988 Emergencies Act this past February to remove the Freedom Convoy protesters from downtown Ottawa.

Yet the Freedom Convoy group’s antics, which paralyzed Ottawa’s downtown core for more than three weeks this past January and February, forced authorities to spend millions of dollars in policing costs. The Freedom Convoy leaders now want even more money than could be granted under Treasury Board guidelines and are asking for $450,000 of their $5-million in donations, currently held in escrow, to be unfrozen. This request may actually be granted, even though it’s a bit rich when such participation brings with it more propaganda for their disruptive causes.

In addition to giving the Freedom Convoy legal standing and funding, the federally funded Rouleau inquiry has made a point of encouraging Freedom Convoy supporters to make written submissions to it.

The federal commission’s lead question on its website asks those on the side of the Freedom Convoy “protest” to describe their experiences. Those who were affected by the “protest” activities are asked only secondarily to offer theirs.

Rouleau is, in effect, placing the Freedom Convoy participants in an important, if not equal, position to those who were affected by the convoy, giving them more space and prominence than they deserve.

Canada’s Public Safety Minister Marco Mendicino, meanwhile, according to highly redacted cabinet minutes, saw the Freedom Convoy protesters falling into two categories: the “harmless and happy with a strong relationship to faith communities,” and the “harder extremists trying to undermine government institutions and law enforcement.” However, it seemed as if the police in Ottawa were siding with the Freedom Convoy protesters during February’s occupation.

Mendicino did not comment on the tacit mix and dynamics of the happy folks and extremists who occupied downtown Ottawa for more than three weeks.

Does this leave federal authorities less than neutral or too easy on the illegal activities of the Freedom Convoy participants? Do we not remember the federal government’s past racist actions with residential schools or the internment of Canadian Japanese citizens during the Second World War?

The federal government continues to mount ineffective anti-racist “campaigns” and decries anti-Semitic activities without taking action.

It’s also standing idly by when it comes to Quebec’s racist Bill 21 and its overt discrimination against minorities – for instance, teachers who wear hijabs being ousted from their jobs.

The Freedom Convoy protesters appear to be treated as official interveners.

So let’s call the feds for what they are: a bunch of yellow-shrinking-stand-by con artists.

Ken Rubin founded the Ottawa People’s Commission to hear from residents about the harm incurred during last February’s Freedom Convoy siege in Ottawa, though the views expressed here are his own personal ones.

Source: Does the federal government fund and support racially discriminating groups and individuals?

Harris: The future of malicious artificial intelligence applications is here

More on some of the more fundamental risks of AI:

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Russian state-owned facility. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. Presidential election. The multimillion dollar operation would reach 120 million people through Facebook alone. 

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia’s “troll factory” — the Russian Internet Research Agency — continues to fuel online radicalization and extremism. Thanks in no small part to their efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI — and despite some positive consequences, it’s also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers created a text generation system called GPT-3. GPT-3 can produce text that’s indistinguishable from human writing — including viral articles, tweets, and other social media posts. GPT-3 was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that AI researchers could follow to radically accelerate AI progress, and build much more capable, humanlike systems. 

But it also opened a Pandora’s box of malicious AI applications. 

Text-generating AIs — or “language models” — can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and be leveraged to create automated bot armies and convincing fake news articles. 
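To underline how low the barrier has become: generating passable text at scale now takes only a few lines of code, even with a small, freely available model. The sketch below uses the open GPT-2 model via the Hugging Face transformers library (far weaker than GPT-3, and chosen here only because it is public); the prompt is an arbitrary example.

    # Illustration of how little code automated text generation requires, using the
    # small open GPT-2 model via Hugging Face transformers (much weaker than GPT-3).
    from transformers import pipeline, set_seed

    set_seed(42)
    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(
        "The city council announced today that",  # arbitrary example prompt
        max_new_tokens=30,
        num_return_sequences=2,
        do_sample=True,
    )
    for out in outputs:
        print(out["generated_text"])

Scaling the same loop over thousands of prompts and accounts is an engineering exercise, not a research problem, which is precisely the concern.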

This isn’t a distant future concern: it’s happening already. As early as 2020, Chinese efforts to interfere with Taiwan’s national election involved “the instant distribution of artificial-intelligence-generated fake news to social media platforms.”

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures based on any text prompt, have become reality this year for the first time. As AI-generated content becomes better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write better phishing emails than humans, and have code-writing capabilities that outperform human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises, rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can’t invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia. 

Our defence and security agencies could follow the lead of the U.K.’s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We’ve got a lot to rethink, and now is the time to get started.

Source: The future of malicious artificial intelligence applications is here

Roose: We Need to Talk About How Good A.I. Is Getting

Of note. The world is going to become more complex, and the potential for AI in many fields will continue to grow, with these tools and programs becoming increasingly able to replace, at least in part, professionals including government workers:

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.

OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.

Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”

DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)

What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
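The “diffusion” idea Roose describes can be caricatured in a few lines: start from random noise and repeatedly nudge the image toward whatever a trained, text-conditioned denoiser considers consistent with the prompt. The sketch below is a deliberate toy, not OpenAI’s implementation; the denoiser is a stub standing in for a trained network.

    # Deliberately simplified sketch of a diffusion-style sampling loop.
    # `denoise_step` is a stub standing in for a trained, text-conditioned network.
    import numpy as np

    rng = np.random.default_rng(0)

    def denoise_step(image, prompt, t, total_steps):
        """Stub: a real model would predict and remove noise conditioned on the prompt."""
        target = np.zeros_like(image)     # pretend the "prompt-consistent" image is all zeros
        blend = 1.0 / (total_steps - t)   # remove a little more noise at each step
        return (1 - blend) * image + blend * target

    def sample(prompt, size=(8, 8), steps=50):
        image = rng.standard_normal(size)  # start from pure random noise
        for t in range(steps):
            image = denoise_step(image, prompt, t, steps)
        return image

    out = sample("a sailboat knitted out of blue yarn")
    print(f"pixel standard deviation after denoising: {out.std():.3f}")  # shrinks toward the target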

DALL-E 2 got a lot of attention when it was announced this year, and rightfully so. It’s an impressive piece of technology with big implications for anyone who makes a living working with images — illustrators, graphic designers, photographers and so on. It also raises important questions about what all of this A.I.-generated art will be used for, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.

But art is not the only area where artificial intelligence has been making major strides.

Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.

Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results.

But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible.

Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about.

But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

Or look at what’s happening with A.I.-generated text.

Only a few years ago, A.I. chatbots struggled even with rudimentary conversations — to say nothing of more difficult language-based tasks.

But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Then there’s Google’s LaMDA, an A.I. model that made headlines a couple of months ago when Blake Lemoine, a senior Google engineer, was fired after claiming that it had become sentient.

Google disputed Mr. Lemoine’s claims, and lots of A.I. researchers have quibbled with his conclusions. But take out the sentience part, and a weaker version of his argument — that LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows.

In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.

“It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”

In the past, A.I. progress was mostly obvious only to insiders who kept up with the latest research papers and conference presentations. But recently, Mr. Clark said, even laypeople can sense the difference.

“You used to look at A.I.-generated language and say, ‘Wow, it kind of wrote a sentence,’” Mr. Clark said. “And now you’re looking at stuff that’s A.I.-generated and saying, ‘This is really funny, I’m enjoying reading this,’ or ‘I had no idea this was even generated by A.I.’”

There is still plenty of bad, broken A.I. out there, from racist chatbots to faulty automated driving systems that result in crashes and injury. And even when A.I. improves quickly, it often takes a while to filter down into products and services that people actually use. An A.I. breakthrough at Google or OpenAI today doesn’t mean that your Roomba will be able to write novels tomorrow.

But the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036.

But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

“A.I. systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ms. Cotra told me. “People should take more seriously that A.I. could change things soon, and that could be really scary.”

There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.

There are also tech optimists who believe that A.I. progress is accelerating, and who want it to accelerate faster. Speeding A.I.’s rate of improvement, they believe, will give us new tools to cure diseases, colonize space and avert ecological disaster.

I’m not asking you to take a side in this debate. All I’m saying is: You should be paying closer attention to the real, tangible developments that are fueling it.

After all, A.I. that works doesn’t stay in a lab. It gets built into the social media apps we use every day, in the form of Facebook feed-ranking algorithms, YouTube recommendations and TikTok “For You” pages. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use A.I. to determine who’s eligible for loans, and police departments use it to investigate crimes.

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

It’s a cliché, in the A.I. world, to say things like “we need to have a societal conversation about A.I. risk.” There are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future.

What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.

I think three things could help here.

First, regulators and politicians need to get up to speed.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier.

We’ve seen a few efforts to close the gap — Stanford’s Institute for Human-Centered Artificial Intelligence recently held a three-day “A.I. boot camp” for congressional staff members, for example — but we need more politicians and regulators to take an interest in the technology. (And I don’t mean that they need to start stoking fears of an A.I. apocalypse, Andrew Yang-style. Even reading a book like Brian Christian’s “The Alignment Problem” or understanding a few basic details about how a model like GPT-3 works would represent enormous progress.)

Otherwise, we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings.

Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks. Right now, many of the biggest A.I. models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate P.R. or buried in inscrutable scientific papers.

Downplaying A.I. risks to avoid backlash may be a smart short-term strategy, but tech companies won’t survive long term if they’re seen as having a hidden A.I. agenda that’s at odds with the public interest. And if these companies won’t open up voluntarily, A.I. engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.

In a broad sense, most people think about A.I. narrowly as it relates to us — Will it take my job? Is it better or worse than me at Skill X or Task Y? — rather than trying to understand all of the ways A.I. is evolving, and what that might mean for our future.

I’ll do my part, by writing about A.I. in all its complexity and weirdness without resorting to hyperbole or Hollywood tropes. But we all need to start adjusting our mental models to make space for the new, incredible machines in our midst.

Source: We Need to Talk About How Good A.I. Is Getting

Why so many people mistrust science and how we can fix it

Some interesting thoughts on how to address mistrust:

Not since the Scopes Monkey Trial a century ago, in which a Tennessee high school science teacher was found guilty of violating the state’s law prohibiting the teaching of Darwin’s Theory of Evolution, have anti-scientific attitudes been so apparent and openly embraced by political leaders in the United States. 

The denial, now decades long, of the evidence of human-induced climate change by a large segment of the population, reinforced by the rhetoric of powerful Republicans like governors Ron DeSantis of Florida and Greg Abbott of Texas, has been matched over the past two-and-a-half years by the wholesale rejection of scientific evidence about COVID-19 by many of these same politicians and much of the American population, approximately 40% of whom reject the science about both. 

Drawing on decades of marketing and psychology research, which show that it is critical to understand your target audience so that a product can be positioned properly in the market, Dr Aviva Philipp-Muller, professor at the Beedie School of Business at Simon Fraser University, and her team determined there are four different reasons why people have anti-science attitudes. 

Having anatomised the principles behind each attitude, “Why are people anti-science, and what can we do about it?”, published last month in the Proceedings of the National Academy of Sciences of the United States of America, proposes strategies to counter each of the four anti-science attitudes.

“Persuasion researchers have known for a little while that getting in your audience’s head and understanding where they’re coming from is step one of trying to win them over. There’s no one-size-fits-all persuasion tactic. So, if you’re not getting through to someone, you might need to reassess why they’re anti-science in the first place and try to speak directly to that basis,” says Philipp-Muller.

Reason one: Suspicions about scientists and experts

The first group Philipp-Muller and her co-authors, Professor Spike WS Lee (Rotman School of Management, University of Toronto) and Social Psychology Professor Richard E Petty (Ohio State University, Columbus), discuss are suspicious of scientists and experts. 

One reason large sections of the population mistrust scientists such as Dr Anthony Fauci is because of the cynicism about elite institutions (including the Centers for Disease Control and Prevention) and the stereotyping of scientists as cold and unfeeling. This view of medical experts contrasts sharply, it is worth noting, with the avuncular characters in television soap operas and films from the 1950s, 1960s, 1970s and 1980s. 

A further contributing factor to the mistrust of scientists harkens back to what prompted Tennessee politicians, who had strong support from their evangelical constituents, to ban the teaching of evolution: modern science’s antipathy to Christian teachings, beliefs and values.

During the COVID-19 crisis, faith in scientists has also been weakened by what many in the public saw as confusing recommendations and even backtracking about masking: from there being no need to wear masks, to saving surgical masks and N95 masks for medical workers, to everyone needing to wear an N95 mask. 

(The fact that the recommendations changed because of new information – ie because that is how science works – Philipp-Muller told University World News, is not relevant to how much of the public responded to the recommendations.)

Reason two: Social identities

Both communications professors and marketers have studied how social identities largely determine recipients’ openness to a message. It comes as no surprise that because in the past they were subjected to (often heinous) experiments without their knowledge, both American Blacks and Indigenous peoples are wary of medical scientists, for example. 

“For individuals who embrace an identity [eg evangelical Christians], scientists are members of the outgroup,” Philipp-Muller writes, and are therefore not to be believed. This can be seen in the way, in the United States and some other countries, televangelists and preachers told their flocks that taking the COVID vaccine showed a lack of faith in the efficacy of prayer.

Social identity dynamics, augmented by social media, Philipp-Muller says, play a major role in the rise of (demonstrably false) conspiracy theories, such as the claim that the COVID-19 vaccine contains microchips.

Reason three: Overturning a world view

Perhaps the most infamous example of the third basis for rejecting science – a message that overturns a world view – is the Catholic Church’s rejection of Copernicus’ discovery that the Earth orbits the sun, clinging for centuries afterwards to an erroneous view of the heavens. 

To avoid “cognitive dissonance”, individuals will hold to an erroneous view even after they are presented with evidence. This is one reason why “fake news” and misinformation are so difficult to counter, notes the study.

Reason four: Epistemic style

The final basis for anti-science thinking, Philipp-Muller and her team discern, occurs when there is a “mismatch between the delivery of the scientific message and the recipient’s epistemic style”; in other words, when information is delivered in a manner at odds with the recipient’s way of thinking. 

For example, people who are more comfortable thinking in concrete terms are more likely to dismiss issues like climate change because it is often presented in abstract terms divorced from the individual’s daily life.

One of the most interesting points Philipp-Muller and her team make is how, for large sections of the public, the rhetorical structures scientists use end up undercutting the authority of their conclusions. 

Since the science is evolving in real time, when speaking of COVID-19 or climate change, scientists “hedge their findings and avoid over-claiming certainty as they try to communicate the preliminary, inconclusive, nuanced or evolving nature of scientific evidence”. 

Partially because the public is poorly educated as to how science operates – famously summarised by the philosopher Karl Popper as working through the Falsification Principle – the rhetorical structures used by scientists lead people with low tolerance for uncertainty to reject both the information and recommendations that scientists like Fauci give. 

(The Falsification Principle holds that, as opposed to an opinion or statement of religious faith, a scientific theory must be testable and structured so that it can conceivably be proven false.)

“There are a lot of people who don’t really have tolerance for uncertainty and really need to be told things in black and white. And so there’s a mismatch between how scientists tend to communicate information and how whole segments of the population tend to process information,” says Philipp-Muller.

The limitations of science education

Improving scientific literacy, the default solution of professors, will only go so far towards solving the problem of anti-science attitudes, says Philipp-Muller, especially if such education is conceived of as teaching students a list of facts. 

“That’s not going to be helpful and, in fact, could backfire,” she told University World News.

Further, for members of the general public who already hold one of the four anti-science attitudes, science education comes too late. Accordingly, the authors propose strategies to counter each of the four anti-science biases.

To counteract the view that scientists as people are not trustworthy, the study suggests three main steps. 

The perceived “coldness” of elite scientists can be countered by recruiting more women into the STEM (science, technology, engineering and mathematics) fields. 

Scientists should also simplify their language and write “lay summaries” that should appear alongside the ubiquitous jargon-laden abstracts. 

Because of the low level of scientific literacy among the general public, Philipp-Muller and her team say, scientists must don a teacher’s cap and “communicate to the public that substantive debate and disagreement are inherent in the scientific process” in clear and unambiguous terms, without falling into the false neutrality of what’s been dubbed “both sideism”. 

Marketing and persuasion research show that being perceived as open to other points of view actually increases openness among recipients. Philipp-Muller’s team suggests “honestly acknowledging any drawbacks of their position [such as the infringement on rights by requirements to mask up because of COVID] while ultimately explaining in clear and compelling terms why their position is still the most supported or justifiable one”.

Countering the ingroup/outgroup attitude requires scientific communicators to find a shared social identity with their intended audience. 

In one town, Philipp-Muller told me, proponents did not counter resistance to a water-recycling programme by piling up the scientific evidence for the plan. Rather, more people supported the proposal when the presenter emphasised the fact that she also lived in the same region and thus shared what’s called a “superordinate identity”. 

One way to earn the trust of racialised communities that are wary of scientists is to “train marginalised individuals to be scientists working within their own communities”. In one paradigmatic project, to overcome the suspicions of the Indigenous community where the scientists were studying the human genome, researchers trained Indigenous individuals to be genome researchers.

On overcoming resistance to the scientific message itself, Philipp-Muller says: “I think science education can be a really useful tool for combating anti-science attitudes, especially with number three, which is when the scientific message’s evidence is contrary to a person’s belief. 

“If we can get in and ensure that people have good scientific reasoning skills so that when they’re presented with new scientific information, they are able to assess whether or not it’s valid, that will help ensure that they can get on board with accurate and valid scientific information and also learn what kind of evidence is shaky.”

An appeal to values

A further strategy to combat anti-science attitudes triggered by the content of the message involves appealing to recipients’ deep-seated values. 

The term Philipp-Muller and her co-authors use for this is “self-affirmation”, which has nothing New Agey about it and nothing to do with radical individualism. Rather, self-affirmation refers to a process during which people focus on the values that matter to them, such as caring for one’s family, in ways unrelated to the conflict or issue at hand. 

The finding of common ground has the effect of reducing “cognitive dissonance” experienced when presented with scientific information that is contrary to one’s ingrained way of thinking. 

Studies have shown that increasing an individual’s sense of self-integrity and security reduces the threat that dissonance poses to their sense of themselves. “Self-affirmation interventions have been used successfully,” says the article, “to reduce defensiveness and increase acceptance of scientific information regarding health behaviours and climate change.”

It is in Philipp-Muller’s discussion of how to overcome the many mismatches between individuals’ epistemic styles and the way scientists present scientific information that the science behind marketing most clearly informs the proposals. 

After noting that large tech companies use “fine-grained, person-specific” data to target people and change their consumer behaviour, and that consumer researchers learned long ago to use rich psychological and behavioural data to segment and target consumers, the authors suggest that “public interest groups could adopt similar strategies and use the logic of target communications with different audiences in mind”. 

For example, abstract messages could be delivered to those who think abstractly and concrete messages to those who think concretely.
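
To make that segment-and-target logic concrete, here is a minimal sketch. The audience trait, message wording and routing rule are invented for illustration; none of this comes from Philipp-Muller’s paper.

```python
# Toy sketch of segment-based message targeting, loosely following the
# consumer-marketing logic described above. The segments, messages and
# routing rule are invented for illustration, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Recipient:
    name: str
    thinking_style: str  # hypothetical trait: "abstract" or "concrete"


# One message variant per audience segment.
MESSAGES = {
    "abstract": "Vaccination lowers population-level transmission and long-term risk.",
    "concrete": "A shot at your local pharmacy cuts your own chance of "
                "hospitalisation and protects the people in your household.",
}


def pick_message(recipient: Recipient) -> str:
    """Route each recipient to the variant matched to their thinking style."""
    return MESSAGES.get(recipient.thinking_style, MESSAGES["concrete"])


if __name__ == "__main__":
    audience = [Recipient("A", "abstract"), Recipient("B", "concrete")]
    for person in audience:
        print(person.name, "->", pick_message(person))
```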

A timely intervention

Philipp-Muller and her co-authors’ analysis and prescriptions for countering anti-science attitudes could not be more timely.

I interviewed her on the morning of 27 July. A few hours later, Vic L McConought, a member of Alberta’s Legislative Assembly (the provincial parliament) who is running to lead the province’s United Conservative Party, which would make him Alberta’s next premier, tweeted about the leadership debate that evening.

Despite the fact that 87% of Albertans are vaccinated, he primed his Twitter followers by writing: “I assume the first question is about Science … My answer is ‘Science will be held to task for its crimes if I am elected leader’.”

Source: Why so many people mistrust science and how we can fix it

‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Legitimate concerns about AI bias (a problem human decision-makers share) also need to address “noise”, the variability in the decisions different people reach on comparable cases:

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.

The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”

While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries.

Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ultimately cut costs.

That legislation, known as the Artificial Intelligence Act, will have consequences beyond EU borders, and like the EU’s General Data Protection Regulation, will apply to any institution, including UK banks, that serves EU customers. “The impact of the act, once adopted, cannot be overstated,” said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute.

Depending on the EU’s final list of “high risk” uses, there is an impetus to introduce strict rules around how AI is used to filter job, university or welfare applications, or – in the case of lenders – assess the creditworthiness of potential borrowers.

EU officials hope that with extra oversight and restrictions on the type of AI models that can be used, the rules will curb the kind of machine-based discrimination that could influence life-altering decisions such as whether you can afford a home or a student loan.

“AI can be used to analyse your entire financial health including spending, saving, other debt, to arrive at a more holistic picture,” Sarah Kocianski, an independent financial technology consultant said. “If designed correctly, such systems can provide wider access to affordable credit.”

But one of the biggest dangers is unintentional bias, in which algorithms end up denying loans or accounts to certain groups including women, migrants or people of colour.

Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”

Furthermore, some models are designed to be blind to so-called protected characteristics, meaning they are not meant to consider the influence of gender, race, ethnicity or disability. But those AI models can still discriminate as a result of analysing other data points such as postcodes, which may correlate with historically disadvantaged groups that have never previously applied for, secured, or repaid loans or mortgages.
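
To see how that proxy effect can arise, here is a minimal synthetic sketch: the data, features and biased approval rule are all invented for illustration and are not drawn from any real lender. A logistic regression that is never shown the protected attribute still scores the disadvantaged group lower, because postcode stands in for it.

```python
# Synthetic illustration of proxy bias: the model never sees the protected
# attribute, but a correlated feature (postcode) lets it reproduce the
# historical disadvantage anyway. All data here is invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute, e.g. membership of a historically disadvantaged group.
group = rng.integers(0, 2, size=n)

# Postcode correlates strongly (90%) with group membership.
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical approvals were biased: half of the qualifying applicants in
# group 1 were denied regardless of income.
income = rng.normal(50, 10, size=n)
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

# Train only on "neutral" features -- the protected attribute is excluded.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)

proba = model.predict_proba(X)[:, 1]
print("mean predicted approval, group 0:", round(float(proba[group == 0].mean()), 3))
print("mean predicted approval, group 1:", round(float(proba[group == 1].mean()), 3))
# The second figure comes out noticeably lower, even though `group` was
# never an input to the model.
```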

And in most cases, when an algorithm makes a decision, it is difficult for anyone to understand how it came to that conclusion, resulting in what is commonly referred to as “black-box” syndrome. It means that banks, for example, might struggle to explain what an applicant could have done differently to qualify for a loan or credit card, or whether changing an applicant’s gender from male to female might result in a different outcome.

Circiumaru said the AI act, which could come into effect in late 2024, would benefit tech companies that managed to develop what he called “trustworthy AI” models that are compliant with the new EU rules.

Darko Matovski, the chief executive and co-founder of London-headquartered AI startup causaLens, believes his firm is among them.

The startup, which publicly launched in January 2021, has already licensed its technology to the likes of asset manager Aviva, and quant trading firm Tibra, and says a number of retail banks are in the process of signing deals with the firm before the EU rules come into force.

The entrepreneur said causaLens offers a more advanced form of AI that avoids potential bias by accounting for, and controlling for, discriminatory correlations in the data. “Correlation-based models are learning the injustices from the past and they’re just replaying it into the future,” Matovski said.

He believes the proliferation of so-called causal AI models like his own will lead to better outcomes for marginalised groups who may have missed out on educational and financial opportunities.

“It is really hard to understand the scale of the damage already caused, because we cannot really inspect this model,” he said. “We don’t know how many people haven’t gone to university because of a haywire algorithm. We don’t know how many people weren’t able to get their mortgage because of algorithm biases. We just don’t know.”

Matovski said the only way to protect against potential discrimination was to use protected characteristics such as disability, gender or race as an input but guarantee that regardless of those specific inputs, the decision did not change.

He said it was a matter of ensuring AI models reflected our current social values and avoided perpetuating any racist, ableist or misogynistic decision-making from the past. “Society thinks that we should treat everybody equal, no matter what gender, what their postcode is, what race they are. So then the algorithms must not only try to do it, but they must guarantee it,” he said.
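
A minimal sketch of the kind of guarantee Matovski describes is a counterfactual check: feed the protected attribute in as an input, flip it for every applicant, and verify that no decision changes. This is not causaLens’s actual method or code; the model below is a deliberately trivial stand-in.

```python
# Sketch of a counterfactual invariance check: include the protected attribute
# as a model input, flip it for every applicant, and confirm no decision flips.
# The model below is a trivial stand-in, not causaLens's technology.

import numpy as np


def decisions(model, X: np.ndarray) -> np.ndarray:
    """Binary approve/deny decisions for a batch of applicants."""
    return (model.predict_proba(X)[:, 1] >= 0.5).astype(int)


def counterfactual_invariance(model, X: np.ndarray, protected_col: int) -> float:
    """Fraction of applicants whose decision is unchanged when the (binary)
    protected attribute in column `protected_col` is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return float(np.mean(decisions(model, X) == decisions(model, X_flipped)))


if __name__ == "__main__":
    class IncomeOnlyModel:
        """Scores applicants on income alone, ignoring the protected column."""
        def predict_proba(self, X):
            score = 1 / (1 + np.exp(-(X[:, 0] - 50) / 10))
            return np.column_stack([1 - score, score])

    rng = np.random.default_rng(1)
    X = np.column_stack([rng.normal(50, 10, 200),      # income
                         rng.integers(0, 2, 200)])     # protected attribute
    print(counterfactual_invariance(IncomeOnlyModel(), X, protected_col=1))  # 1.0
```

Note that this flip test only catches direct dependence on the protected input, which is what Matovski describes; catching proxy effects like the postcode example above requires more than a simple flip.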

While the EU’s new rules are likely to be a big step in curbing machine-based bias, some experts, including those at the Ada Lovelace Institute, are pushing for consumers to have the right to complain and seek redress if they think they have been put at a disadvantage.

“The risks posed by AI, especially when applied in certain specific circumstances, are real, significant and already present,” Circiumaru said.

“AI regulation should ensure that individuals will be appropriately protected from harm by approving or not approving uses of AI and have remedies available where approved AI systems malfunction or result in harms. We cannot pretend approved AI systems will always function perfectly and fail to prepare for the instances when they won’t.”

Source: ‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives

Klein: I Didn’t Want It to Be True, but the Medium Really Is the Message

Good long read on the impact of social media, harking back to McLuhan (and Innis) on how the medium and means of communication affect society:

It’s been revealing watching Marc Andreessen, the co-founder of the browsers Mosaic and Netscape and of A16Z, a venture capital firm, incessantly tweet memes about how everyone online is obsessed with “the current thing.” Andreessen sits on the board of Meta and his firm is helping finance Elon Musk’s proposed acquisition of Twitter. He is central to the media platforms that algorithmically obsess the world with the same small collection of topics and have flattened the frictions of place and time that, in past eras, made the news in Omaha markedly different from the news in Ojai. He and his firm have been relentless in hyping crypto, which turns the “current thing” dynamics of the social web into frothing, speculative asset markets.

Behind his argument is a view of human nature, and how it does, or doesn’t, interact with technology. In an interview with Tyler Cowen, Andreessen suggests that Twitter is like “a giant X-ray machine”:

You’ve got this phenomenon, which is just fascinating, where you have all of these public figures, all of these people in positions of authority, in a lot of cases great authority, the leading legal theorists of our time, leading politicians, all these businesspeople. And they tweet, and all of a sudden, it’s like, “Oh, that’s who you actually are.”

But is it? I don’t even think this is true for Andreessen, who strikes me as very different off Twitter than on. There is no stable, unchanging self. People are capable of cruelty and altruism, farsightedness and myopia. We are who we are, in this moment, in this context, mediated in these ways. It is an abdication of responsibility for technologists to pretend that the technologies they make have no say in who we become. Where he sees an X-ray, I see a mold.

Over the past decade, the narrative has turned against Silicon Valley. Puff pieces have become hit jobs, and the visionaries inventing our future have been recast as the Machiavellians undermining our present. My frustration with these narratives, both then and now, is that they focus on people and companies, not technologies. I suspect that is because American culture remains deeply uncomfortable with technological critique. There is something akin to an immune system against it: You get called a Luddite, an alarmist. “In this sense, all Americans are Marxists,” Postman wrote, “for we believe nothing if not that history is moving us toward some preordained paradise and that technology is the force behind that movement.”

I think that’s true, but it coexists with an opposite truth: Americans are capitalists, and we believe nothing if not that if a choice is freely made, that grants it a presumption against critique. That is one reason it’s so hard to talk about how we are changed by the mediums we use. That conversation, on some level, demands value judgments. This was on my mind recently, when I heard Jonathan Haidt, a social psychologist who’s been collecting data on how social media harms teenagers, say, bluntly, “People talk about how to tweak it — oh, let’s hide the like counters. Well, Instagram tried — but let me say this very clearly: There is no way, no tweak, no architectural change that will make it OK for teenage girls to post photos of themselves, while they’re going through puberty, for strangers or others to rate publicly.”

What struck me about Haidt’s comment is how rarely I hear anything structured that way. He’s arguing three things. First, that the way Instagram works is changing how teenagers think. It is supercharging their need for approval of how they look and what they say and what they’re doing, making it both always available and never enough. Second, that it is the fault of the platform — that it is intrinsic to how Instagram is designed, not just to how it is used. And third, that it’s bad. That even if many people use it and enjoy it and make it through the gantlet just fine, it’s still bad. It is a mold we should not want our children to pass through.

Or take Twitter. As a medium, Twitter nudges its users toward ideas that can survive without context, that can travel legibly in under 280 characters. It encourages a constant awareness of what everyone else is discussing. It makes the measure of conversational success not just how others react and respond but how much response there is. It, too, is a mold, and it has acted with particular force on some of our most powerful industries — media and politics and technology. These are industries I know well, and I do not think it has changed them, or the people in them (myself included), for the better.

But what would? I’ve found myself going back to a wise, indescribable book that Jenny Odell, a visual artist, published in 2019. In “How to Do Nothing: Resisting the Attention Economy,” Odell suggests that any theory of media must first start with a theory of attention. “One thing I have learned about attention is that certain forms of it are contagious,” she writes.

When you spend enough time with someone who pays close attention to something (if you were hanging out with me, it would be birds), you inevitably start to pay attention to some of the same things. I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.

I think Odell frames both the question and the stakes correctly. Attention is contagious. What forms of it, as individuals and as a society, do we want to cultivate? What kinds of mediums would that cultivation require?

This is anything but an argument against technology, were such a thing even coherent. It’s an argument for taking technology as seriously as it deserves to be taken, for recognizing, as McLuhan’s friend and colleague John M. Culkin put it, “we shape our tools, and thereafter, they shape us.”

There is an optimism in that, a reminder of our own agency. And there are questions posed, ones we should spend much more time and energy trying to answer: How do we want to be shaped? Who do we want to become?

Source: I Didn’t Want It to Be True, but the Medium Really Is the Message

Paul: There’s More Than One Way to Ban a Book

Significant and worrisome:

In the 1950s, Vladimir Nabokov’s “Lolita” was banned in France, Britain and Argentina, but not in the United States, where its publisher, Walter Minton, released the book after multiple American publishing houses rejected it.

Minton is part of a noble tradition. Over the years, American publishers have fought back against efforts to repress a wide range of works — from Charles Darwin’s “On the Origin of Species” to Maya Angelou’s “I Know Why the Caged Bird Sings.” Just last year, Simon & Schuster defended its book deal with former Vice President Mike Pence, despite a petition signed by more than 200 Simon & Schuster employees and other book professionals demanding that the publishing house cancel the deal. The publisher, Dana Canedy, and chief executive, Jonathan Karp, held firm.

The American publishing industry has long prided itself on publishing ideas and narratives that are worthy of our engagement, even if some people might consider them unsavory or dangerous, and for standing its ground on freedom of expression.

But that ground is getting shaky. Though the publishing industry would never condone book banning, a subtler form of repression is taking place in the literary world, restricting intellectual and artistic expression from behind closed doors, and often defending these restrictions with thoughtful-sounding rationales. As many top editors and publishing executives admit off the record, a real strain of self-censorship has emerged that many otherwise liberal-minded editors, agents and authors feel compelled to take part in.

Over the course of his long career, John Sargent, who was chief executive of Macmillan until last year and is widely respected in the industry for his staunch defense of freedom of expression, witnessed the growing forces of censorship — outside the industry, with overt book-banning efforts on the political right, but also within the industry, through self-censorship and fear of public outcry from those on the far left.

“It’s happening on both sides,” Sargent told me recently. “It’s just a different mechanism. On the right, it’s going through institutions and school boards, and on the left, it’s using social media as a tool of activism. It’s aggressively protesting to increase the pain threshold, until there’s censorship going the other way.”

In the face of those pressures, publishers have adopted a defensive crouch, taking pre-emptive measures to avoid controversy and criticism. Now, many books the left might object to never make it to bookshelves because a softer form of banishment happens earlier in the publishing process: scuttling a project for ideological reasons before a deal is signed, or defusing or eliminating “sensitive” material in the course of editing.

Publishers have increasingly instituted a practice of “sensitivity reads,” something that first gained traction in the young adult fiction world but has since spread to books for readers of all ages. Though it has long been a practice to lawyer many books, sensitivity readers take matters to another level, weeding out anything that might potentially offend.

Even when a potentially controversial book does find its way into print, other gatekeepers in the book world — the literary press, librarians, independent bookstores — may not review, acquire or sell it, limiting the book’s ability to succeed in the marketplace. Last year, when the American Booksellers Association included Abigail Shrier’s book, “Irreversible Damage: The Transgender Craze Seducing Our Daughters,” in a mailing to member booksellers, a number of booksellers publicly castigated the group for promoting a book they considered transphobic. The association issued a lengthy apology and subsequently promised to revise its practices. The group’s board then backed away from its traditional support of free expression, emphasizing the importance of avoiding “harmful speech.”

A recent overview in Publishers Weekly about the state of free expression in the industry noted, “Many longtime book people have said what makes the present unprecedented is a new impetus to censor — and self-censor — coming from the left.” When the reporter asked a half dozen influential figures at the largest publishing houses to comment, only one would talk — and only on condition of anonymity. “This is the censorship that, as the phrase goes, dare not speak its name,” the reporter wrote.

The caution is born of recent experience. No publisher wants another “American Dirt” imbroglio, in which a highly anticipated novel was accused of capitalizing on the migrant experience, no matter how well the book sells. No publisher wants the kind of staff walkout that took place in 2020 at Hachette Book Group when the journalist Ronan Farrow protested its plan to publish a memoir by his father, Woody Allen.

It is certainly true that not every book deserves to be published. But those decisions should be based on the quality of a book as judged by editors and publishers, not in response to a threatened, perceived or real political litmus test. The heart of publishing lies in taking risks, not avoiding them.

You can understand why the publishing world gets nervous. Consider what has happened to books that have gotten on the wrong side of illiberal scolds. On Goodreads, for example, vicious campaigns have circulated against authors for inadvertent offenses in novels that haven’t even been published yet. Sometimes the outcry doesn’t take place until after a book is in stores. Last year, a bunny in a children’s picture book got soot on his face by sticking his head into an oven to clean it — and the book was deemed racially insensitive by a single blogger. It was reprinted with the illustration redrawn. All this after the book received rave reviews and a New York Times/New York Public Library Best Illustrated Children’s Book Award.

In another instance, a white academic was denounced for cultural appropriation because trap feminism, the subject of her book “Bad and Boujee,” lay outside her own racial experience. The publisher subsequently withdrew the book. PEN America rightfully denounced the publisher’s decision, noting that it “detracts from public discourse and feeds into a climate where authors, editors and publishers are disincentivized to take risks.”

Books have always contained delicate and challenging material that rubs up against some readers’ sensitivities or deeply held beliefs. But which material upsets which people changes over time; many stories about interracial cooperation that were once hailed for their progressive values (“To Kill a Mockingbird,” “The Help”) are now criticized as “white savior” narratives. Yet these books can still be read, appreciated and debated — not only despite but also because of the offending material. Even if only to better understand where we started and how far we’ve come.

Having both worked in book publishing and covered it as an outsider, I’ve found that people in the industry are overwhelmingly smart, open-minded and well-intentioned. They aren’t involved in some kind of evil plot. Book people want to get good books out there, and to as many readers as possible.

An added challenge is that all of this is happening against the backdrop of a recent spate of shameful book bans that comes largely from the right. According to the American Library Association, of the hundreds of attempts to remove books from schools and libraries in 2021, a vast majority were made in response to content related to race and sex — red meat for red states, with Texas and Florida ranking high among those determined to quash artistic freedom and limit reader access. Republican politicians, for so long forces of intolerance, are now deep in the book-banning business.

We shouldn’t capitulate to any repressive forces, no matter where they emanate from on the political spectrum. Parents, schools and readers should demand access to all kinds of books, whether they personally approve of the content or not. For those on the illiberal left to conduct their own campaigns of censorship while bemoaning the book-burning impulses of the right is to violate the core tenets of liberalism. We’re better than this.

Source: There’s More Than One Way to Ban a Book