A mathematician walks into a bar (of disinformation)

Disinformation, misinformation, infotainment, algowars — if the debates over the future of media the past few decades have meant anything, they’ve at least left a pungent imprint on the English language. There’s been a lot of invective and fear over what social media is doing to us, from our individual psychologies and neurologies to wider concerns about the strength of democratic societies. As Joseph Bernstein put it recently, the shift from “wisdom of the crowds” to “disinformation” has indeed been an abrupt one.

What is disinformation? Does it exist, and if so, where is it and how do we know we are looking at it? Should we care about what the algorithms of our favorite platforms show us as they strive to squeeze the prune of our attention? It’s just those sorts of intricate mathematical and social science questions that got Noah Giansiracusa interested in the subject.

Giansiracusa, a professor at Bentley University in Boston, is trained in mathematics (focusing his research on areas like algebraic geometry), but he’s also had a penchant for looking at social topics through a mathematical lens, such as connecting computational geometry to the Supreme Court. Most recently, he’s published a book called How Algorithms Create and Prevent Fake News to explore some of the challenging questions around the media landscape today and how technology is exacerbating and ameliorating those trends.

I hosted Giansiracusa on a Twitter Space recently, and since Twitter hasn’t made it easy to listen to these talks afterwards (ephemerality!), I figured I’d pull out the most interesting bits of our conversation for you and posterity.

This interview has been edited and condensed for clarity.

Danny Crichton: How did you decide to research fake news and write this book?

Noah Giansiracusa: One thing I noticed is there’s a lot of really interesting sociological and political science discussion of fake news and these types of things. And then on the technical side, you’ll have things like Mark Zuckerberg saying AI is going to fix all these problems. It just seemed like it’s a little bit difficult to bridge that gap.

Everyone’s probably heard this recent quote of Biden saying, “they’re killing people,” in regards to misinformation on social media. So we have politicians speaking about these things where it’s hard for them to really grasp the algorithmic side. Then we have computer science people that are really deep in the details. So I’m kind of sitting in between; I’m not a real hardcore computer science person. So I think it’s a little easier for me to just step back and get the bird’s-eye view.

At the end of the day, I just felt I kind of wanted to explore some more interactions with society where things get messy, where the math is not so clean.

Crichton: Coming from a mathematical background, you’re entering this contentious area where a lot of people have written from a lot of different angles. What are people getting right in this area, and where have they perhaps missed some nuance?

Giansiracusa: There’s a lot of incredible journalism; I was blown away at how many journalists really were able to deal with pretty technical stuff. But I would say one thing that struck me (not that they got anything wrong) was that a lot of times an academic paper comes out, or even an announcement from Google or Facebook or one of these tech companies, and they’ll kind of mention something, and the journalist will maybe extract a quote and try to describe it, but they seem a little bit afraid to really try to look at it and understand it. And I don’t think it’s that they weren’t able to; it really seems like more of an intimidation and a fear.

One thing I’ve experienced a ton as a math teacher is people are so afraid of saying something wrong and making a mistake. And this goes for journalists who have to write about technical things, they don’t want to say something wrong. So it’s easier to just quote a press release from Facebook or quote an expert.

One thing that’s so fun and beautiful about pure math is that you don’t really worry about being wrong; you just try ideas, see where they lead, and see all these interactions. When you’re ready to write a paper or give a talk, you check the details. But most of math is this creative process where you’re exploring and just seeing how ideas interact. You’d think my training as a mathematician would make me apprehensive about making mistakes and very precise, but it kind of had the opposite effect.

Second, a lot of these algorithmic things are not as complicated as they seem. I’m not sitting there implementing them; I’m sure programming them is hard. But just at the big-picture level, so much of this nowadays is based on deep learning. So you have some neural net; it doesn’t really matter to me as an outsider what architecture they’re using. All that really matters is: what are the predictors? Basically, what are the variables that you feed this machine learning algorithm? And what is it trying to output? Those are things that anyone can understand.
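To make that framing concrete, here is a minimal sketch of the “predictors in, prediction out” view he describes, written in Python. The feature names, labels, and tiny network are illustrative assumptions for the sake of the example, not any platform’s actual inputs or architecture.

```python
# A minimal sketch of the "predictors in, prediction out" framing.
# The features and target below are hypothetical illustrations, not any
# platform's actual inputs or architecture.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row is one user-item pair:
# [watch_time_minutes, past_clicks_on_topic, account_age_days, shares_last_week]
X = np.array([
    [12.0, 3, 400, 1],
    [55.0, 9,  30, 7],
    [ 2.5, 0, 900, 0],
    [40.0, 6,  15, 5],
])
# The output the model is asked to predict: did the user click the recommended item?
y = np.array([0, 1, 0, 1])

# The internal architecture matters less to an outside observer than the two
# questions above: what goes in (the predictors) and what comes out (the prediction).
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

new_user = np.array([[30.0, 4, 60, 2]])
print(model.predict_proba(new_user))  # estimated probability of a click
```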

Crichton: One of the big challenges I think of analyzing these algorithms is the lack of transparency. Unlike, say, the pure math world, which is a community of scholars working to solve problems, many of these companies can actually be quite adversarial about supplying data and analysis to the wider community.

Giansiracusa: It does seem there’s a limit to what anyone can deduce just from the outside.

So a good example is with YouTube: teams of academics wanted to explore whether the YouTube recommendation algorithm sends people down these conspiracy theory rabbit holes of extremism. The challenge is that because this is the recommendation algorithm, it’s using deep learning, and it’s based on hundreds and hundreds of predictors drawn from your search history, your demographics, the other videos you’ve watched and for how long — all these things. It’s so customized to you and your experience that all the studies I was able to find use incognito mode.

So they’re basically a user who has no search history, no information, and they’ll go to a video, then click the first recommended video, then the next one, and see where the algorithm takes people. That’s such a different experience than that of an actual human user with a history. And this has been really difficult. I don’t think anyone has figured out a good way to algorithmically explore the YouTube algorithm from the outside.
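A rough sketch of the incognito-style audit he describes might look like the following. The get_recommendations helper is a hypothetical stand-in for however a particular study fetches the recommended videos for a given video (scraping, an API, browser automation); the core of the method is simply the “always click the top recommendation” walk.

```python
# A sketch of an incognito-style audit: start from a seed video with no
# personalization, repeatedly follow the top recommendation, and record the path.
# get_recommendations is a hypothetical stand-in for however a study actually
# fetches recommendations; it is not a real YouTube API call.
from typing import Callable, List

def crawl_recommendations(
    seed_video_id: str,
    get_recommendations: Callable[[str], List[str]],
    depth: int = 20,
) -> List[str]:
    """Follow the first recommendation `depth` times and return the visited path."""
    path = [seed_video_id]
    current = seed_video_id
    for _ in range(depth):
        recs = get_recommendations(current)
        if not recs:
            break
        current = recs[0]  # always click the top recommendation
        path.append(current)
    return path

# The limitation raised in the interview: this simulates a user with no history,
# no demographics, and no watch time, which may look nothing like a real user's feed.
```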

Honestly, the only way I think you could do it is just kind of an old-school study where you recruit a whole bunch of volunteers and sort of put a tracker on their computer and say, “Hey, just live life the way you normally do with your histories and everything, and tell us the videos that you’re watching.” So it’s been difficult to get past this fact that a lot of these algorithms, almost all of them, I would say, are so heavily based on your individual data. We don’t know how to study that in the aggregate.

And it’s not just me or anyone else on the outside who has trouble because we don’t have the data. Even people within these companies who built the algorithm and who know how the algorithm works on paper don’t know how it’s going to actually behave. It’s like Frankenstein’s monster: they built this thing, but they don’t know how it’s going to operate. So the only way I think you can really study it is if people on the inside with that data go out of their way and spend time and resources to study it.

Crichton: There are a lot of metrics used to evaluate misinformation and measure engagement on a platform. Coming from your mathematical background, do you think those measures are robust?

Giansiracusa: People try to debunk misinformation. But in the process, they might comment on it, they might retweet it or share it, and that counts as engagement. So with a lot of these measurements of engagement, are they really looking at positive engagement or just all engagement? You know, it kind of all gets lumped together.

This happens in academic research, too. Citations are the universal metric of how successful research is. Well, really bogus things like Wakefield’s original autism-and-vaccines paper got tons of citations. A lot of them were people citing it because they thought it was right, but a lot of it was scientists who were debunking it; they cite it in their paper to say, we demonstrate that this theory is wrong. But somehow a citation is a citation. So it all counts toward the success metric.

So I think that’s a bit of what’s happening with engagement. If I post something in the comments saying, “Hey, that’s crazy,” how does the algorithm know if I’m supporting it or not? They could use some AI language processing to try, but I’m not sure if they are, and it’s a lot of effort to do so.
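As a toy illustration of the measurement problem he raises, the sketch below shows how a raw engagement count lumps debunking and supportive interactions together. The stance labels are hand-assigned for the example; inferring them automatically is exactly the language-processing effort he says platforms may or may not be making.

```python
# A toy illustration: a raw engagement count treats a debunking reply the same as
# an approving share. The stance labels here are hand-assigned for illustration.
from collections import Counter

interactions = [
    {"type": "share",   "stance": "supportive"},
    {"type": "comment", "stance": "debunking"},   # e.g. "Hey, that's crazy"
    {"type": "retweet", "stance": "debunking"},   # a quote-tweet correcting the claim
    {"type": "like",    "stance": "supportive"},
]

raw_engagement = len(interactions)                      # what typically gets counted
by_stance = Counter(i["stance"] for i in interactions)  # what you'd need to separate

print(raw_engagement)  # 4 -- debunking and support lumped together
print(by_stance)       # Counter({'supportive': 2, 'debunking': 2})
```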

Crichton: Lastly, I want to talk a bit about GPT-3 and the concern around synthetic media and fake news. There’s a lot of fear that AI bots will overwhelm media with disinformation — how scared or not scared should we be?

Giansiracusa: Because my book really grew out of teaching a class, I wanted to try to stay impartial and just kind of inform people and let them reach their own decisions. I decided to try to cut through that debate and really let both sides speak. I think the newsfeed algorithms and recommendation algorithms do amplify a lot of harmful stuff, and that is devastating to society. But there’s also a lot of amazing progress in using algorithms productively and successfully to limit fake news.

There’s these techno-utopians who say that AI is going to fix everything: we’ll have truth-telling and fact-checking, and algorithms that can detect misinformation and take it down. There’s some progress, but that stuff is not going to happen, and it will never be fully successful; it’ll always need to rely on humans. But the other thing we have is a kind of irrational fear: this hyperbolic AI dystopia where algorithms are so powerful, kind of like singularity-type stuff, that they’re going to destroy us.

When deepfakes were first hitting the news in 2018, and then when GPT-3 was released a couple of years later, there was a lot of fear that, “Oh shit, this is gonna make all our problems with fake news and understanding what’s true in the world much, much harder.” And I think now that we have a couple of years of distance, we can see that they’ve made it a little harder, but not nearly as significantly as we expected. And the main issue is kind of more psychological and economic than anything.

So the original authors of GPT-3 have a research paper that introduces the algorithm, and one of the things they did was a test where they pasted some text in and expanded it into an article, and then they had some volunteers evaluate and guess which article was the algorithmically generated one and which was the human-written one. They reported that the volunteers got very, very close to 50% accuracy, meaning barely above random guessing. So that sounds, you know, both amazing and scary.
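As a back-of-the-envelope check on why accuracy “very, very close to 50%” amounts to guessing, here is a small sketch. The exact accuracy and number of judgments below are illustrative assumptions, not figures from the GPT-3 paper.

```python
# A rough check: with accuracy near 50%, the evaluators are statistically hard to
# distinguish from coin-flippers. The numbers below are illustrative assumptions.
from scipy.stats import binomtest

n_judgments = 600          # hypothetical number of real-vs-generated decisions
observed_accuracy = 0.52   # "very, very close to 50%"
correct = round(n_judgments * observed_accuracy)

# Under pure guessing, each judgment is a fair coin flip (p = 0.5).
result = binomtest(correct, n_judgments, p=0.5, alternative="greater")
print(f"correct: {correct}/{n_judgments}, one-sided p-value vs. guessing: {result.pvalue:.3f}")
```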

But if you look at the details, they were extending something like a one-line headline to a paragraph of text. If you tried to do a full Atlantic-length or New Yorker-length article, you’re going to start to see the discrepancies; the thought is going to meander. The authors of this paper didn’t mention this; they just kind of did their experiment and said, “Hey, look how successful it is.”

So it looks convincing; they can make these impressive articles. But here’s the main reason, at the end of the day, why GPT-3 hasn’t been so transformative as far as fake news and misinformation and all this stuff are concerned: fake news is mostly garbage. It’s poorly written, it’s low quality, and it’s so cheap and fast to crank out that you could just pay your 16-year-old nephew to churn out a bunch of fake news articles in minutes.

It’s not so much that math helped me see this. It’s just that somehow, the main thing we’re trying to do in mathematics is to be skeptical. So you have to question these things and be a little skeptical.
