I think big data companies only like good news. So I think they're just hoping that they don't get sued, essentially.
Especially from my experience as a quant in a hedge fund - I naively went in there thinking that I would be making the market more efficient and then was like, oh my God, I'm part of this terrible system that is blowing up the world's economy, and I don't want to be a part of that.
Most people don't have any association in their minds between what they do and ethics. They think they somehow moved past the questions of morality or values or ethics, and that's something I've never believed to be true.
There might never be that moment when everyone says, "Oh my God, big data is awful."
Occupy provided me a lens through which to see systemic discrimination.
I don't think anybody's ever notified that they were sentenced to an extra two years because their recidivism score had been high, or notified that this beat cop happened to be in their neighborhood checking people's pockets for pot because of a predictive policing algorithm. That's just not how it works.
I set up an algorithmic auditing company myself. I have no clients.
I think what's happened is that the general public has become much more aware of the destructive power of Wall Street.
The Facebook algorithm designers chose to let us see what our friends are talking about. They chose to show us, in some sense, more of the same. And that is a design decision they could have made differently. They could have said, "We're going to show you stuff that you've probably never seen before." I think they probably optimized their algorithm to make the most money, and that probably meant showing people stuff that they already sort of agreed with, or were more likely to agree with.
Google is so big you have no idea what a given person does.
An insurance company might say, "Tell us more about yourself so your premiums can go down." When they say that, they're addressing the winners, not the losers.
I think there's an inherent issue that models will literally never be able to handle, which is that when somebody comes along with a new way of doing something that's really excellent, the models will not recognize it. They only know how to recognize excellence when they can measure it somehow.
People are starting to be very skeptical of the Facebook algorithm and all kinds of data surveillance.
People felt like they were friends with Google, and they believed in the "Don't Be Evil" thing that Google said. They trusted Google more than they trusted the government, and I never understood that.
I want to prevent people from giving them too much power. I see that as a pattern, and I want it to come to an end as soon as possible.
The NSA buys data from private companies, so the private companies are the source of all this stuff.
The national conversation around white entitlement, around institutionalized racism, the Black Lives Matter movement, I think, came about in large part because of the widening and broadening of our understanding of inequality. That conversation was begun by Occupy.
We've learned our lesson with finance because they made a huge goddamn explosion that almost shut down the world. But the thing I realized is that, with big data, there might never be an explosion on the scale of the financial crisis.