
Video: Opinion | Why Are Palantir and OpenAI Scared of Alex Bores?

If you are living in New York’s 12th Congressional District, you may have seen these endless attacks on Alex Bores, one of the Democrats running there. “He made hundreds of thousands of dollars building and selling the tech for ICE, enabling ICE and powering their deportations while making bank. ICE is powered by Bores’s tech.” Yikes. Bores did work for Palantir. The rest of that attack is not what you might call true, but what interests me is who is paying for it: the super PAC Leading the Future and its subsidiary, Think Big. Who funds Leading the Future? Well, among their big donors are co-founders of OpenAI, Andreessen Horowitz and, wait for it, Palantir. So why is a co-founder of Palantir, Joe Lonsdale, in this case, funding a super PAC to try to destroy a candidate on the grounds that he once worked for Palantir? The reason is that Leading the Future is a super PAC dedicated to destroying anyone who might regulate the tech industry in general, or AI specifically, in a way these funders don’t like. And Bores, a member of the New York State Assembly, co-authored and passed the RAISE Act, one of the first pieces of AI regulation passed in any major state. There is a principle here that is much more important than any single congressional seat. You’ll hear it honestly if you just listen to AI founders talk. They say they believe in it. Sam Altman, a co-founder of OpenAI, who, it should be said, has been horribly targeted in recent violent attacks by anti-AI individuals, was trying to cool down temperatures here, writing, “It is important that the democratic process remains more powerful than companies.” It is important that the democratic process remains more powerful than companies. 
Altman is right, but it’s his co-founder Greg Brockman, one of the major donors to Leading the Future, who is trying to make sure the democratic process is subordinate to the companies, and who is trying to do it by funding a super PAC that can unleash enough money to crush any legislators who cross them. Bores in general has been a pretty effective legislator. In just over three years in the New York State Assembly, he’s passed 30 bills and has been recognized by the Center for Effective Lawmaking as one of the most effective freshman legislators. But it’s his ideas on regulating AI that particularly interest me, in part because I think they make sense and are worth discussing, things like an AI dividend. But in part because I just really do not want to live in the world that Leading the Future is trying to create, a world where the AI industry hoovers up enough money that it can then destroy anyone who might regulate it. And what’s funny about all this is, you’ll hear it, Alex Bores is not an anti-AI kind of guy. I think he gets AI pretty well. I think he’s trying to balance its risks and its possibilities. But if you’re looking for a pure AI backlash candidate, he’s not it. And I think that tells you something: that what Leading the Future, and super PACs and groups like it that might emerge, are actually trying to do is to stop anyone from legislating on AI. So if the democratic process is actually going to mean something here, ideas are going to have to speak louder than this kind of money. So I wanted to hear what Bores would actually do if given the chance. As always, my email [email protected]. Alex Bores, welcome to the show. Thanks for having me. So I want to begin a bit in your early political memories. How did your politics begin? Well, it began with something that I wouldn’t necessarily call politics. Only in retrospect would I put that phrase on it. 
But it was with my parents in union fights. In second grade, my dad and his colleagues were locked out by Disney for fighting for better health care. There were contract disputes for over a year, and Disney wouldn’t budge. And finally, the workers went on strike. And in response, Disney locked them out for three months and cut off their health care benefits, including for my dad’s friend, who was about to start chemotherapy. And thankfully, the union stepped in, and they paid for the treatment, and he survived. But my dad would pick me up from second grade and bring me to the picket line, and that was my first experience of people working together for change. He would put me in front of the Disney store, and when people walk past picket lines, it’s not hard to do. It’s a lot harder to walk past an eight-year-old with a sign that says Disney is mean to my dad. And so that was my first lesson: both that health care needs to be universal, but also that the way we win is by working together. If you’re one worker, one person, one anything advocating alone, it’s easy to get crushed. But if you have a union, an organization, a campaign, a movement, well, then you stand a chance. What did your dad do for Disney? My dad was a worker for Monday Night Football at the time, so he did graphics and videotape and instant replay. He worked in the trucks, eventually became a technical director, but he was one of the people actually sending out the signal before it hits your TV. And so you then study industrial and labor relations at Cornell and then get a computer science degree. I’m curious about what those two very different disciplines taught you. Well, they sound very different, but every day they seem to be more and more intertwined. At the School of Industrial and Labor Relations, I learned economic theory. I learned collective bargaining. 
I learned how to run campaigns and organizations in ways that actually can change power and win things. And I learned to stand up for working people and to view a lot of interactions in the world through that lens. Wait, be specific about that. What did you learn about how to stand up for working people? Well, my freshman year, we ran a campaign against Nike. Cornell was sponsored by Nike; our athletic teams were sponsored by Nike. So I was part of a group called Cornell Students Against Sweatshops. It was affiliated with USAS, United Students Against Sweatshops, and they taught us how to build a campaign over time. We learned how to be strategic. So you start with a clear demand. In this case, it was that Nike had laid off 1,800 workers in Honduras without giving them legally mandated severance pay. And we argued that the Cornell code of conduct required that Nike be responsible for their subcontractors’ actions, that they make the workers whole. So we put that into a demand. Then you build up over a period of educating. And so we’d have teach-ins; we’d have ridiculous actions to grab attention. We did a “working out for workers’ rights” in the quad, just playing ’80s music and getting people to ask, hey, what’s going on? Oh, well, let me talk to you about what’s going on in Honduras. And then you build up to more aggressive actions that require a reaction from the administration. We ended up being successful in that campaign. Cornell decided it was going to cut its contracts. And I think something like three weeks after Cornell made that announcement, Nike did an about-face, paid the workers all the money they were owed and gave them job training and health care for a year. So you’re telling me about how you learned to do activism in college, which is interesting. But I want to go a level deeper than that. You’re doing industrial and labor relations. Yeah. 
What is the deeper theory or thesis of the relationship between workers and corporations, between labor and capital, that you came out of that with? There’s so much that’s in contention between workers and capital. But in the best world, you’re actually working together to grow the economy. Workers are not out there to bankrupt any company; they want the company to grow. And so there are fights over how you distribute the pie, but theoretically, both want to grow that pie. And then there are really interesting relationships internationally. One of the things that I discovered was that for so many of the countries where we thought labor conditions were awful, the laws on the books were actually quite good. The question was enforcement, and if the home countries actually tried to do enforcement, the factories would just up and leave and go somewhere else. So the lever where maybe you can change that is in the countries that are buying most of the goods. And so we would apply pressure in the US about holding countries to the standards they had already set up for their workers. So I feel like you’re describing to me the education of a young radical here. You’re walking picket lines at 8, you’re studying industrial and labor relations, doing anti-corporate-malfeasance campaigns, skeptical of globalization. How do you end up at Palantir? So I really wanted to be a lawyer. But every lawyer I spoke to told me not to be a lawyer. That was my experience, too. Or to take time off in between, make sure that’s what you want to do. And so I went to an economic litigation consulting firm called Cornerstone Research, where we were preparing expert witnesses for trial. And so we were doing economic modeling and playing with data, but I was interacting with lawyers all the time. So I was building a skill set but could see what they were doing. And I found I really enjoyed the economic modeling. I really enjoyed playing with data. And also, to speak to that ideology: as I’m growing up, 
I’m a Democrat. I believe that government can and should be a force for good, but that also means we take on the burden of proving it. And so I was a young believer in, I probably wouldn’t have put it in these terms back then, but expanding government capacity and making sure government is actually delivering. And Palantir in 2014, in the Obama administration, was about how can we expand government capacity while protecting privacy and civil liberties. And so at the time, it felt very much like the natural fit. So I want to stay in this 2014 moment, because this is a period when there is a lot of optimism that technology is going to solve some very fundamental problems of democracy, that you’re going to have all this civic tech, that the interface between citizens and the government is going to be much smoother, much better, that these companies are fundamentally good. Google doesn’t want to be evil. Facebook wants to connect the world. Palantir wants to make your data comprehensible. And I think there’s also an underlying view that the answers to our problems are out there somewhere in these masses of data, and if you can just make the whole thing legible, you could get the answers. And something gets poisoned pretty quickly, I’d say after 2014. That really feels like a different ideological moment than the one we’re in now entirely. What was wrong about that? Or what would you add or change in my rendition of that optimism? A lot of that is true. The Palantir story that was told to prospective employees, and Alex Karp would do this a lot, was that he most feared fascism, that he had just finished being a German philosophy student and he was most afraid of fascism developing. And fascism happens when government fails to provide for its citizens and they start blaming someone else for it. And people then feed that hunger and that hatred. And he couldn’t do anything about the latter, but he could do something about government failing to deliver. 
And so the reason that he wanted to do Palantir was, after 9/11, after this real rise in a feeling of being unsafe, could we build the systems that would allow government to make people feel safe, but build them in such a way that protected privacy and civil liberties? That was the pitch. That was the fundamental idea: we were there in many ways to stop fascism. And how’d it work? Trump’s elected in 2016. That was a weird bit for… With the aggressive support of Peter Thiel, one of the early Palantir investors. I mean, I don’t know, would you call Peter Thiel a Palantir co-founder? I think so. I think that’s the phrase that is given. But Alex Karp was very much fighting for Hillary at the time. And if you look at donations of employees at Palantir, they tell a very skewed story towards the Democrats as well. Yeah, Silicon Valley is very Democratic in this period. Absolutely, absolutely. You have a lot of Obama administration figures; they can’t go to Wall Street anymore. That’s not kosher for a Democrat. But you can go to Silicon Valley. Yep. But that election in 2016, and even more so his reelection in 2024, is a real failure of that mission. And to now see leaders of the company, and of Silicon Valley broadly, throwing their lot in with what I think is a fascist regime is a really disappointing switch. So you’re at Palantir from 2014 to 2019. You start, I think, as a data scientist; by the end, you’re one of the people leading the relationship with the government. Yeah, I focused on the federal civilian side. So what is that work? So that was work with the Department of Justice, with the CDC to track epidemics, with Veterans Affairs to better staff their hospitals and give veterans the care they deserve and need. It was helping a lot of the federal civilian agencies. How much is what we now think of as AI and generative AI starting to come into the work you all are doing then? Not at all. And here’s what I mean by that. Palantir was aggressively anti-AI in that period. 
It believed that data integration was the true source of value, and that AI was a magic layer that would be applied on top. And it was all marketing, and we were doing the real work that was getting data to come together. And can you describe what the difference is in those two views? Yeah. What is data integration versus whatever they thought AI was? Yeah, well, so AI, in a very naive sense, I mean, we’ll talk about it in other ways now, but this is before agentic models and all of this. AI is doing analysis of data. And before you can do the analysis of that data, it needs to be organized in a way that AI can make sense of it. But the actual thing that’s difficult is organizing all your data together. That requires hard work, and there’s no magic to do that yet. And the software, plus engineers going on site and doing a lot of that hard work to do the manual hookups, that was always going to be the true source of value. So you’re at Palantir across the end of the Obama administration and into the first Trump administration. Yeah. Now, Palantir working with the government is a different animal depending on which government it’s working with. Very much so. How does that change? I was leading the work at the Loretta Lynch, Barack Obama DOJ, and then all of a sudden the Jeff Sessions, Donald Trump DOJ, and priorities changed pretty drastically. The work with the banks was probably wrapping up anyway just because of time, but clearly there was no more interest in that work. The contract that we had had us choose three mutually agreed-upon case types. And so I met with the new leadership after the transition, this is early 2017, and said, what do you want to prioritize? What do you want to work on? And they said, the opioid epidemic. We said, great, we definitely want to do that work. They said violent crime. Cool, as long as it’s not a dog whistle. Yeah, we’d love to work on that. And then they said civil immigration. And I said, we’re not touching that. 
That’s not the work that we are building this for. And I was empowered as the lead of the project to do that. I had a contract that allowed me to, because it was three mutually agreed-upon case types. And while I was there, on the DOJ project, we didn’t do any of that work. That’s not how the decision went at every customer or in every project. So Palantir, during this period, does begin working on immigration with the Trump administration. I never worked on any of those projects, and so I was never cleared on it. But to the best of my understanding, during that time, it was not stopping the Trump administration from using it for immigration. I don’t think there was building of features specifically for deportations, but I could be wrong about that. But even the fact that they weren’t going to stop it from being used in that way got a number of employees, myself included, quite upset. You leave Palantir in 2019. Why? Separately from me, on a project that I never worked on, Palantir had signed a contract with a department within ICE called HSI, Homeland Security Investigations, that during the Obama administration was focused on anti-human trafficking, anti-drug trafficking, sometimes counterfeiting, things that are not controversial and that everyone would support. And then when Trump comes in, in 2017, they try to change the nature of that work. They tried to get another part of ICE called ERO, Enforcement and Removal Operations, the part that everyone thinks of as ICE, to get access to the software and to use it for deportations. And there were a lot of conversations internally at Palantir about what was actually happening. Us employees couldn’t always see that if we weren’t cleared on the project. And a fundamental question came up of, well, why not write into the contract those same protections that we have elsewhere, where we can say, don’t use it for deportations. 
And eventually executives made clear to us that they were not going to do that, that they were going to renew the contract without putting in those guardrails. And so I made plans to quit. So there was a Bloomberg story that questioned this, clearly coming from somewhere inside Palantir. And it says that shortly before you left, I think it said five days before you left, there was a warning from HR about sexually explicit comments you had made to a coworker. And then, separately, that when you did your exit interview, you said you were actually leaving because you were burnt out and there was too much travel. So I want to take these as pieces. Was there a sexual harassment claim against you at Palantir? And is that why you left? No and no. This came out of an attack from executives at Palantir who are upset that I am pushing for AI regulation and that I’ve called out Palantir’s work in the past. As I told Bloomberg when they reached out, I had expressed my concerns about the work with ICE internally. I had begun interviewing months and months before. I had an offer in hand. I then had retold a story of something that had happened to me on the job. Someone didn’t like that retelling and had talked to HR. HR had one conversation with me where I shared exactly what had happened, and that was the end of it. There was no file, no letter, none of the things that are claimed in that story. They dropped the matter immediately. You weren’t disciplined inside the company or something. Nothing like that. And this seemed like what the Bloomberg story said, but I want to check it. The infraction was a story you told or something you said, not something done with or towards a colleague. Correct. It was, I mean, the story goes into it. Well, see, now, can I retell the story here? That’s sort of the question. It was a paper goods manufacturer that was talking about uses of tissues. It sold tissues. The marketing department was talking about how tissues are used. 
And I retold that example from the presentation, on how tissues were being used, among odd things that had happened while working at the company. And then the burnout and travel side of it. The argument there is that you’re making this claim that you took a moral stand against the way it was being used, but actually you were just kind of tired of working there. As has been cited in multiple sources, multiple current Palantir employees have backed me up that they heard me talk about ICE and stand up and do all of that. I have no idea what notes they took from the exit interview. I asked to see them. I was told by the Bloomberg reporter she didn’t actually have them, that this had just been told to her by the executives, so they could claim whatever they want on top of notes that, again, I never saw. I know what I had said before and during, and that I had brought this up many times. And a year after I left, Palantir emailed and called me, begging me to come back. Feels like if there had actually been a real thing there, they probably wouldn’t have done that. So no, you just heard me be fairly critical of Palantir. I had been before as well. The executives there didn’t take kindly to that. And the super PAC that is attacking me is against any regulation on AI. And this is just another desperate hit by them. I have been amused that the super PAC which is attacking you, which is partially funded by Joe Lonsdale, a Palantir co-founder, has as one of its core attacks on you that you worked at Palantir. Correct. That’s a pretty strong level of political shamelessness. I would agree, I would agree. I mean, I would say it’s lying about an employee’s record. But they are very terrified. They are very afraid of me in office. 
And beyond that, they’ve said publicly that they are trying to make an example out of me, that they want to beat up on me so badly that when the idea of regulating AI comes up in the future, politicians run in the opposite direction. And so they’re not primarily concerned with what is honorable or what is true; they are concerned with causing pain. So in 2022, you’re elected to the New York State Assembly. In 2025, you passed the RAISE Act, which gets us into the AI regulations you’re alluding to. This is one of the first major pieces of AI legislation passed by any state in the country. Before we get into what it does: What was the philosophy behind it? When you were working on that bill, and I know you had co-sponsors on it, what were you all seeing, and what were you all trying to achieve? We were seeing AI develop extremely rapidly and industry themselves warning about what was coming. This is after the letter that was signed by so many executives saying that we should treat the risk of extinction from AI as on par with global nuclear war, and promoting perhaps a pause. Many of them had signed voluntary commitments with the Biden White House saying, we are going to take certain safety precautions, and this is the first step towards binding federal regulation. And then we saw no binding federal regulation come. And we had also heard from companies themselves that they were O.K. with certain safety standards, but they were in a competitive marketplace, and if they saw their competitors starting to skimp on safety and cut corners, they would be forced to as well. So when you hear that call, you say, O.K., you should establish some baseline that people can’t go below, so that there are some established safety standards that everyone is playing by. What’s the baseline you tried to establish? There were a few provisions in there. 
One was that you had to have a safety plan that you made public and actually stuck to, one that largely followed best practices in the industry around how you were going to test the models for specific risks, how you were going to record those tests, and what you would do with that information. Another was that you had to report to the government critical safety incidents, which we specifically defined in the bill: if it goes wrong in these sorts of ways, it may not have harmed anyone yet, but it could suggest something is coming, and you have to let us know about it. And those provisions largely survived till the end. There were two others that were in the original that ended up getting cut out. One of them was that you can’t release a model if it fails your own safety test. It was basically designed for the way the tobacco companies operated, where they were the first to know that cigarettes cause cancer but denied it publicly and continued to release their products, or fossil fuel companies that knew oil caused climate change but denied it. We’re saying if you knew your model was particularly risky, you have to take action on that. And the last provision was third-party audits. It was saying that you can put up whatever standard you want, you can assert that you’re going to follow it, but someone else should check your work. Not the government, but a different party should come in, the same way we have financial audits, the same way we have SOC 2 security audits, where another party needs to look at it and say, yes, you are following this. And presumably you’re working on this bill, what, 2024, 2025, before it passes? Yeah. How have your views on AI, the risks it poses, the questions it raises, changed with the subsequent pace of model releases? I think things have happened much faster than I thought they would. And I think our ability to pass legislation has moved much slower than I thought it would. 
And so that difference in speed, between how AI is advancing and how government reacts, is wider than I was expecting when I started on this process. How have you thought about the change in public opinion? Because it looks to me like we’re seeing a pretty powerful AI backlash rising. You have polls showing now more Americans are worried about AI than are enthusiastic about it. There’s a lot of counter-data-center energy. Yeah, playing out throughout the country. What have you made of how quickly the politics have shifted? That surprised me, both how many people have focused on it, but also how bipartisan it’s remained. You of all people know about polarization: most issues end up polarized, and this one hasn’t so far. And it has resisted that longer than I thought it would. If you talk to voters, you see pretty similar attitudes across Republicans, Democrats and independents, pretty similar attitudes across state legislators, and even in Congress there’s more bipartisanship than you would think. I mean, surveys regularly show that about 10 percent of people want to put the genie back in the bottle and pretend it never existed. And I empathize, but I don’t think that’s the way forward. Ten percent of people, represented by the super PAC Leading the Future, want to just let it rip. That’s the super PAC that’s attacking you. Yes, they want to just let it rip. They don’t care how many people it hurts, just how fast it moves. And 80 percent of Americans want to see some benefits, but see a lot of risk and think it’s moving too fast and want to have some say in its development. The fact that it stayed so bipartisan has surprised me, and also the fact that it’s risen up in people’s minds so much. Has the pessimism around it surprised you? And we were talking earlier about the period when there was a lot of optimism about tech, about software, about the internet. 
And I think you can really look from, I mean, early computers, early internet, all the way pretty late into the social media era. Probably around Trump, I think, things begin to turn: Cambridge Analytica, algorithmic feeds. But that’s a long time when these systems and technologies are present for people, and there’s a fundamental optimism about them. AI, ChatGPT, I think, is when this really burst into public consciousness. That’s 2023. We’re here in 2026, and the polling has already turned negative. I mean, the week before we recorded this, Sam Altman was targeted in two separate violent attacks. There was a Molotov cocktail thrown into his home. Awful. Two other people shot at his door. I was a little shocked to see people celebrating these attacks online, saying, where can we support the bail fund? Yeah, this has moved into fury and fear and pessimism really, really quickly. Why do you think that is? Well, there was a separate split in AI around capabilities. The debate used to be: Is this real, or is it stochastic parrots? But usually even before that: Is it just slop that is never going to actually replace a human? Fancy autocomplete. Exactly. So we had these debates on one dimension, which was like, is it good for people or is it bad for people? And then there was this other dimension of how big an impact is it going to have. And I think that debate’s been collapsed. People are not skeptical of its power anymore, or some are, but fewer and fewer each day. And so the intensity with which we’re having that first debate has really ramped up. But I think it’s also been that we saw what happened with social media. We saw what happened with these previous revolutions that were supposed to change everything for the better. And we’ve seen platforms established with great promise, and then over time, once they get power, really turn on their users. And so people are no longer willing to believe the story that is told about a technology or a platform always benefiting people. 
And you see this argument from some of the AI founders. They say, well, it’ll create material abundance for everyone. There’ll be no more poverty. Everyone will have everything. And everyone’s looking around saying, of course that’s not what’s going to happen. You’re a private company; you’re going to profit. You’re going to keep it all for yourself. Like, how are we going to force it to? Sam Altman recently said it’ll be like a utility. It’s like, utilities are really highly regulated. And so people are just not willing to believe that spin anymore, and yet they’re seeing really quick changes in their lives. Jasmine Sun, the AI writer, just wrote this kind of interesting piece on AI populism, and I thought the way she defined it was interesting and a little more subtle than you normally hear. She wrote: I define populism as a worldview in which AI is viewed not only as a normal technology, but as an elite political project to be resisted. And what she’s getting at there is that AI populism, I think, and the AI backlash tend to include two dimensions. One is that this technology is being overhyped. The other, as it’s often put to me in emails, is that it’s being pushed down our throats, that it’s not a thing people want. It is a thing being forced upon them. Now there’s all this investment behind it, so the investment needs to be paid off, so the companies really have to do it. And if you take the power seriously, you see it in a different way: that almost any version of having AI in the economy is going to be just a way of paying off these huge investments, that we’re not getting a technology we want, we are having a new paradigm forced upon us. How do you think about that? I think it’s a beautiful description. I think what I hear from my neighbors is very much the feeling that this is moving so quickly that we don’t have control, and the American people so far have not had a say in it. 
So, yeah, I think the first part of that definition, the skepticism of its capabilities, that part is shrinking as part of the dialogue as we’re seeing it do more and more. But the fact that it is being thrown at us and we currently don’t have control, I think, is what’s motivated so many people to be thinking about AI. It has always struck me that if you listen to the founders and leaders of these companies, they are very specific on the harms, and the gains are very general sounding. So you’ll hear Dario Amodei talking about 50 percent of entry-level white-collar workers seeing their jobs automated away. There actually are Waymos on the streets now; you can see that those could take jobs from taxi drivers and Uber drivers. There has been all this talk about existential risk, the sense that you could build something smart enough to disempower human beings. And then it’s like, there’s a lot of specificity on replacing coders, and then you get these very vague claims: It’s going to help with drug development. It’s going to solve material scarcity. And I think if you’re a normal person being offered this technology that might make sure your 13-year-old son has an AI porn bot before he has a real girlfriend, and you might lose your job, and maybe there’s some chance the human race doesn’t maintain control over its own future, why wouldn’t you want to pause on that? Absolutely, absolutely. When you’re seeing the harms day by day. Whether it’s your kid: the pedagogy at schools hasn’t been updated, and some people still think that assigning take-home essays teaches critical thinking. It doesn’t anymore. And on top of that, you see chatbots, and you see some of the truly horrific stories that have happened to teenagers. And maybe you go to your job, and your company now has a hiring freeze. They’re not laying people off yet, but they’re not doing their usual hiring. And you’re worried about what’s coming from that. Are you all going to be necessary in the future? 
And then you see your utility bill go up, and maybe a data center is built near you. Maybe it wasn't, but you're starting to think about what's causing that. And then on top of that you see people saying, oh yeah, and it might kill everyone. These are the news stories that are coming in, and you're maybe not seeing the benefit. And there are benefits. This is not a story of a technology that is just bad, but it's moving really, really quickly. And a few people are controlling the direction. And many people have lost confidence in government's ability to steer it. It becomes a question of whether democratic institutions can govern this technology before it governs us. Right now? I think pretty clearly, no. Well, I'm running a campaign to change that. I guess we'll talk about that. But I think being worried about how fast these systems are moving, and having any awareness at all of how fast the U.S. government now moves, should make one worried. Absolutely. And so one thing you do see is proposals emerging to try to slow AI down by functionally choking off some of the inputs. So there's a Bernie Sanders-AOC bill to just have a data center moratorium. There's some bipartisan interest in this. Ron DeSantis in Florida has a bill that would be very restrictive on data center construction. What do you think about a data center moratorium? The Bernie Sanders-AOC proposal is a moratorium until we pass real regulation that protects people. I agree with that. I think we should pass real regulation today. Do you agree with the data center moratorium until we do? Well, I think what they are calling for is that we need the real regulation. They don't think that bill is going to pass in this split Congress. They are setting the terms of the debate, which says: why are we going forward with this until we've done the real work? And I think that's the right question to ask. If I could wave a magic wand and pass any bill I'd want, it wouldn't be the moratorium.
It would be the regulations that the moratorium is calling for. But putting that out as a negotiating tactic, I think, is meeting the moment and the scale. Bernie talks about the potential benefits of AI and also talks about the risks and the downsides. I think he's been the clearest communicator on it. But you're right, it's a bipartisan issue. It is not one that is left-right. So in your framework for AI regulation, you have a somewhat different approach to data centers. You seem to see them as a kind of opportunity. An opportunity for what? They could be an opportunity. And this is, again, you need the regulation first. It's not, oh yeah, this will work in the future. And given the political power of these companies, I would be very skeptical of them doing it unless we pass regulation with teeth. But the idea is that our electric grid is so outdated and so in need of updates throughout the country, and even here in New York. And it also slows down the renewable energy transition, because if you want to have solar on homes, you need a grid that is more responsive to generation happening in a distributed manner. And it's not right now. And we've tried to upgrade the grid. We need funds to do it. And the only options on the table are that the government pays for it, which is taxpayers, you and I, or it adds to our utility bills, which is ratepayers, again, you and I. And here comes an industry with, for all intents and purposes, unlimited private capital that is really willing to pay for time. They are desperate for speed in building these out. And so what I'm saying is you can set the incentives such that if you want to build a data center, and you're doing X percentage renewable, and it should be a very high percentage, you will pay not just for the connection to the grid and all the infrastructure that's needed for that, but you'll also pay, on top of that, a fee to make the grid more resilient and help the upgrades elsewhere.
So you pay above and beyond the infrastructure upgrades, so that you can truly make the grid more green and more reliable. Well, then we'll move you to the front of the interconnection queue. And by doing that, we'll push your competitors to the back of the interconnection queue, and you set up an incentive to actually build things in a way that benefits us. Is that possible to do, given the way our build-outs and infrastructure really work? And the reason I've developed some cynicism here is I remember being promised the smart grid of the future in the 2009 American Recovery and Reinvestment Act. Yeah. And we didn't quite get that. No, I don't think anybody said at the end of that that our grid was now smart. And then we passed the Inflation Reduction Act and the bipartisan infrastructure bill, which between the two of them had a lot of thoughts about energy generation and other things, and were meant to work on the grid. And I'm not saying there were no upgrades made to the grid anywhere, but I am saying that I keep getting promised gigantic grid overhauls and then being told a couple of years later, whoops, that somehow our grid is still this archaic mess where the biggest problem for getting new green energy online is we can't connect it. Your cynicism is warranted, 100 percent. And, I dare say, you wrote a whole book on ways that we could make that easier to do. But maybe the difference here is you have private capital coming up to do it, and the whole proposal is being precise on ways that we can expedite, and by expediting, shifting the ones that are dirty and not paying their way to the back of the line. So as I understand the theory underneath the data center approach, it's really that if all this money is going to flood into AI, and AI is going to be, at least in part, built on the collective commons of the entire culture that came before it, then we should benefit.
That it is not just that Sam Altman created some magic algorithm. Sam Altman and OpenAI and Anthropic and Grok and so on inhaled the entire internet, ate up my books and the books of everybody else around, and trained these systems on them. You have an idea in there that I think tracks this theory more closely than other things I've seen, which is an AI dividend. Talk me through that. So the AI dividend starts from thinking about how we can give Americans a real stake in the AI economy. And it starts with humility, that we don't know exactly how it's going to go. We don't know how disruptive it's going to be, but right now is the time to plan for the potential outcomes that could come. And there's always been this conversation. In classes at ILR, it was that, oh, every technology revolution has always created more jobs than it's destroyed. Arguable, maybe, but this is the first time someone's building a technology and stating that the goal is to replace all human labor. It is to be better than humans at everything, and the metric by which we understand how good the technology is getting is, functionally, how well it is capable of mimicking different forms of human labor. Exactly right. And then exceeding them. Exactly right. I mean, you are creating a replacement-for-human-labor machine. Exactly. And it's the first time that has been tried, and it doesn't mean it will succeed, but it certainly means government needs to take it seriously. And so the idea of the AI dividend is: What if we end up in that world where all human labor is replaced, or just a significant portion of it is displaced? How do you have a society that is actually functioning then? You have to start talking about universal basic income, and the idea is to make sure that we are setting up the structures now that would lead to Americans being protected if we end up in that future. And I have a lot of ideas about how we can prevent that future, change it, et cetera.
But the AI dividend is almost that insurance policy. And you could fund it via boring things, like a wealth tax, that have been talked about. You could fund it via a token tax, so putting a tax on the usage of AI, maybe limited to commercial uses where you're replacing human labor, or not. And that's a fine policy as long as investment in capital always leads to more jobs, which has been economic theory for hundreds of years. But maybe AI is shifting that. And if it's shifting that, we need to shift our tax policy to be taxing AI and to be discounting hiring humans, and a token tax starts to get at that. But then the other funding mechanism that I talk about for the AI dividend is actually taking warrants in these companies, large, out-of-the-money warrants, where you say, if the value of these AI companies were to go up an enormous amount, then the government would have the right to buy shares at a set price. They basically only pay off if one or multiple of the companies are wildly successful, basically, if they are replacing all human labor. And if you institute that now, then VCs celebrate it and say you're participating in the upside. If you try to implement it after one of them is successful, then you're seizing the means of production, seizing wealth. And so my idea is you go down all of these paths, and you start to find ways to have the revenue to actually fund universal basic income, or investments in job retraining, or just a broader safety net, but do it in ways that automatically scale and adjust and kick in at the speed of AI. Here's a concern I've always had about this set of policies, or this set of answers to the problem of AI and job displacement. I've been very, very near the universal basic income debate a long time.
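[Editors' note: the mechanics of the out-of-the-money warrants Bores describes can be sketched in a few lines. All numbers here, the strike price, share prices, and warrant count, are hypothetical illustrations, not figures from any actual proposal.]

```python
def warrant_payoff(share_price: float, strike: float, num_warrants: float) -> float:
    """Payoff of call-style warrants: worthless until the share price
    exceeds the strike, i.e. while they remain 'out of the money'."""
    return max(0.0, share_price - strike) * num_warrants

# Hypothetical numbers: the government holds 10 million warrants struck
# far above today's share price. They cost the company nothing today.
today = warrant_payoff(share_price=100.0, strike=500.0, num_warrants=10e6)
boom = warrant_payoff(share_price=2000.0, strike=500.0, num_warrants=10e6)
print(today)  # 0.0
print(boom)   # 15000000000.0 -- $15 billion toward an AI dividend
```

The timing point Bores makes falls out of the shape of this payoff: instituted now, while the warrants are far out of the money, they cost the companies nothing unless the extreme-success scenario actually arrives.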
My wife, Annie Lowrey, wrote a book on universal basic income called "Give People Money." I used to work closely with Dylan Matthews, who did a lot of writing on universal basic income. And the trick of universal basic income, to me, is this. Maybe you support it on its own merits, which is fine. But under any plausible scenario of AI job displacement, the displacement is happening to some people and not all people. And, I see you looking skeptical, but I don't see a world in which one day we wake up and everybody's jobs are gone. It's going to start with some people's jobs. It'll start with some people's jobs. So if I thought it was going to be everybody's job all at once, I wouldn't worry about it, because then we would just figure out a policy to compensate everyone. But imagine you're a teamster and you drive a truck, right? And you're making $80,000, $120,000 a year. And the autonomous truck companies put you and your fellow teamsters out of work. And don't worry, we've actually passed universal basic income. No, totally. And you're now getting $37,000 from your universal basic income. Yes, 100 percent. And I'm getting $37,000 from the universal basic income, and I'm still here in my podcasting studio. You got screwed. I got a check. What worries me the most is, I don't think we're going to a world of full automation. But even if you believed we were, there is a transition, and some people are going to really lose out and other people are going to be unaffected or gain. And I don't hear policy ideas that seem to know what to do with the people who are losing out along the way. The people who are actually getting displaced, not the world where everybody's displaced, but the world where, if you're graduating with a marketing degree, you are now three times more likely to be unemployed than you were before, or coders are suddenly seeing a contraction in demand for their services, but some coders are making a ton of money. Yeah. Like, how do you think about the differentials here?
Universal basic income by itself is insufficient. And I would love to understand why you think we're not headed to a world of full automation, because it's tough for me to see where that stops once we start on it. But we can come back to that. There will be a period of transition either way. I don't think it'll be all at once. And so the idea is not just, oh yeah, we're all going to have this basic income, because you're right, people would be screwed by that. The idea is to do a number of things simultaneously, which include changing the tax code so that we're actually charging for the use of AI and discounting the use of labor. And that's a way to protect jobs and slow down the transition itself. It's investments not just in universal basic income, but in job retraining programs and in structures that help people go into new careers. Now, granted, those have a really bad track record. This is my concern, a really bad track record. But it doesn't mean you shouldn't still be investing in community colleges and finding ways to improve them as much as possible. But you're right to say that just, oh, we're going to give a universal basic income, is not enough. We have to think about other ways of adjusting to that transition. Which could include, when you have people who have a permit or training or license that takes a number of years to acquire, maybe you still require that for the transition for five years or 10 years, so people can turn that training into equity, and that's another way that they have a stake in the AI economy. We're going to need a lot of policy solutions. That's why the framework I put out has 43 different ideas in it. But let's get very specific on this. And I want to come back to the question of full automation. But New York City is facing a near-term question here, which is Waymo, the autonomous vehicle company.
They have had permits to do the mapping and testing here needed to eventually roll out Waymo in New York City, the way it's been rolled out in San Francisco and Phoenix and other places, and that set of permits has expired. And Mayor Mamdani has been, I would say, very noncommittal about whether or not he wants to extend them. He said: if a company like Waymo finds itself in New York City, what they will also find is a city government that is committed to delivering for the workers who keep the city running, and those workers include our taxi drivers. So here you have this very near question. I mean, Waymo is a technological advance. They are nice to ride in. They are safer, from all the data we have. They also will, if you roll them out en masse in the coming years, displace taxi drivers, Uber drivers, Lyft drivers. How do you balance that? It's a tough and ongoing question that the speed of the transition only makes worse. There are ways of, again, maybe you require a medallion for Waymos for a set amount of time, and that's what enables some bit of transition. But then you're only protecting the medallion owners and not the drivers. But that's maybe a piece of what that transition looks like, especially for those that have gone into a huge amount of debt to buy that medallion. You think about job retraining and other places people can go. You think about a broader safety net. But we don't have a full policy solution for any disruption that happens this quickly. It just hasn't been developed. And we need people in government who are willing to take that problem seriously and look for solutions that aren't just stop or go, because this technology is coming. So what's your version of that solution for Waymo? Because Waymo is interesting to me, or autonomous vehicles are, right? You can think of many different companies trying to do this. Even more so than, I think, at least the public conversation around generative AI, where I think the gains, which we can talk about,
it has sometimes been hard to see what they are in the way people talk about it. Driverless cars really do have gains. A world of driverless cars is safer. There are a lot of people who have mobility issues right now, or discrimination issues in getting picked up, and all kinds of things where they could really be helped. They are just fascinating technology. You're not going to have people falling asleep and then hitting somebody on the road. So slowing them down has a cost, a cost in the convenience people might experience, but also a cost in safety, potentially a cost in lives. And speeding them up has a cost in displacement. So you said we need politicians willing to take this seriously. You're a politician. You're looking to take this seriously. Yeah. What do you do? Well, I said a few different options and things that we can do together. Which is what, for Waymo? Keep going. Is that the answer? You'll charge Waymo for medallions. That money goes into the coffers. Who gets that money? I think you can specifically be focused on job retraining and on people who are displaced, and trying to share the benefits in that way is a portion of the answer that we have to go to. But the real question is, should we be investing in Waymos or in public transit? We have a great system to move people around, and we actually need investment in improving that. I took a Waymo for the first time in L.A., and it was a light rain by New York City standards, but I think a thunderstorm by L.A. standards. And I got in the Waymo, and it went 20 feet, and it pulled over to the side of the road and just said: dialing support. It didn't say why it was calling, et cetera. And I found out later, it turns out almost every Waymo in the city had done the same thing at the same time, because it couldn't handle rain. And so support timed out, and I was sitting there for 12 minutes, my first Waymo I ever rode. And I went to call an Uber or Lyft or something.
And finally support came through, and the person was like, oh yeah, it seems like you're stuck. I'll drive you out of there. And so I have questions about how they would function in the rain in New York City. And I have questions about, when the backup is human drivers, it seems like it's another form of outsourcing as well. So, yes, in the long-term theoretical, will autonomous vehicles be safer than humans? In most cases, yes. But to say that we are definitely there right now, I wouldn't say we're necessarily there. It's only in the conditions in which they're willing to operate, which are quite limited. There you go. You can't take a Waymo from San Francisco to Phoenix. You can only take one inside San Francisco or Phoenix. So all of that is to say, I think this hypothetical of, they're ready to go and be safer right now, is not right. But I think they're safer in the places they drive. And the reason I'm pushing on this is not because I'm pro-Waymo or anti-Waymo. It's that there is a question that public officials are facing right now about how quickly to move forward into that world. Zohran Mamdani could extend the permits and accelerate Waymo coming to New York City. Or he could drag his feet and keep it out of New York City. And then there are some ideas in the middle, like maybe you could have Waymo paying high prices. But even to the extent you're doing that, what you're doing is pulling Waymo in. I think people sometimes don't quite want to face up to that. There is a yes-or-no question on some of these issues. And in the long run, do you want to protect the jobs of taxi drivers, or do you want to have autonomous vehicles operating inside of your city? It is a kind of yes-or-no question. I think, as Keynes says, in the long run we're all dead. There's a question of speed, not yes or no. And I think, on a scale from 0 to 100, most people here are somewhere between 40 and 60, and we're being described as yes or no.
I think it's not ready right now for the environment of New York City. It will be ready sometime in the future, and we need to be thoughtful on that transition, on how it benefits people and how it hurts them. I think it is almost easier to imagine ways of handling the financial consequences of AI for people, even though I don't actually think we've figured that out, than the consequences for their dignity, for their purpose. People train for jobs. That job is part of their identity, and then all of a sudden it's getting taken from them, and you're going to say, hey, taxi worker, over here at the community college you can retrain to be a home health aide. There's something here that we're going to have to balance: the economic efficiencies this pushes forward, against the basic deal we offer people in this country and in this economy, which is that you study for something, you learn how to do a job, you apprentice, and we value you for doing that. We're supposed to treat that as having value. I feel like we don't talk about this dignity dimension enough, so I'm curious how you think about it. I think that, for so long, humans have been defined by their jobs, and that's become a piece of the dignity, that you, in this worldview, have purpose, have value, because of the thing that you do. And that's been ingrained in people for a while. And if we keep that mindset, then UBI is an extremely disappointing answer to it, and I think, for lots of reasons, it's not the full solution. The world that is painted by the AI optimists is that we're going to get to this post-work era where people no longer derive their purpose from work. I'm skeptical. We'll be like the British gentry. I'm skeptical. I'm skeptical. But you believe in full automation. So then you think we're going to dystopia on our current path? Yeah, but I think we have the chance to change it. When you throw the ball down the field mentally, if you're skeptical, what is the good outcome here?
What is the good outcome if we have automated away, which you seem to think is very possible, all, or at least a very large percentage, of the economy's jobs, and yet what we have is something better than at least where we've been or where we are? It would have to be at the point where it's not just that your basic material needs are met, but the standard of living is higher than it is now, where you can go about your day and be in a better place than you are right now. And this isn't a perfect analogy, AI is different in all kinds of ways, but if you look 100 years ago, the average American worked 60 hours a week and had a much lower standard of living. Now the average American works 40 hours a week and has a higher one. We could get to one where we work 20 hours or 10 hours and have a higher one yet. But we were able to make that transition because workers had power, because Americans had political power, because we were able to shape that technology to work for us, either directly through legislation or by organizing unions and doing it indirectly at the workplace. If this transition happens too quickly and we lose that political power, it doesn't just happen. So I want to talk about something where we are already seeing the effects of this. And you talk about it very early in your plan, which is kids. One of my theories of legislating, having covered a lot of this, is that sometimes a crucial thing in building legislative capacity is to just find places where there's enough consensus to legislate a bit, so people learn about the issue and learn how to legislate on it. There's all kinds of experiments consenting adults can run on themselves. But I am pretty worried about the situation with AIs and kids, and we really don't know what it's going to mean for kids to have relationships with AIs and to grow up where they've got AI friends and so on. What is your approach to kids and generative AI? I agree with you.
I think kids in some ways need more protection, and we don't know a lot of the impacts that AI will have. That doesn't mean we don't look at places where it can benefit kids. I mean, I could imagine a world where having a personalized tutor at exactly your level in each subject, able to communicate with you in exactly the way you like to learn, as a supplement to what you're getting from teachers in the classroom and your parents, is a helpful thing. But teachers and parents need a view into all of those interactions, and we need strong data protections. And I think, broadly, a lot of these products, even when you're thinking about whether some teenagers should be allowed on or not, need to be thoughtful about the mental health impacts. This is a really scary period. We've seen the big stories about chatbots, but then we've also seen things like ChatGPT integrated into teddy bears, things that just feel really unnecessary. So what's in your plan on this? What do you actually want to do? So, age verification for certain aspects of these interactions. The mental health checks, as I said. Engaging with and updating pedagogy. Making sure that teachers and parents have a view into any interaction that goes through AI. Broad protections on training on kids' data, and the data privacy aspects as well. And, yes, we need to prepare kids for the jobs of the future. I don't think you should shut off access to AI. People should be exposed to these tools as they are in high school and college and getting there. But we should be really thoughtful about what those interactions are. When you say updating pedagogy, how do you want to update it? Well, you can still assign essays, but if you just do a take-home essay, people are just putting it into ChatGPT, and everyone knows this. I've done a few things where high school students come up to Albany, and when the teacher leaves the room, I say, how many of you use ChatGPT to write an essay? And every hand goes up. So should we be requiring essays written by hand?
Should we require them written in Google Docs, or a program like it, so you can actually watch keystrokes being entered? Just updating for the tools that are out there and making sure the old way of teaching is still teaching. I'm hiring for something right now, and it has really disoriented me that cover letters are now completely useless. I've been involved in the hiring for hundreds of positions now, given my time at Vox, and cover letters were always quite important to me as a way of sussing out maybe somebody whose qualifications were less obvious for the role, but where you could see in the way they wrote an unusual mind at work. And now, I'm not saying that's completely impossible. You can still write a great cover letter, although increasingly it's getting a little... It is getting harder and harder to know what you're looking at. Are you looking at somebody who is a great mind at work, or are you looking at somebody who's cyborging it with an AI system? And maybe that's fine, because that's the world, and somebody who's very facile at using these systems is actually showing they have a skill that others don't. But on the other hand, I actually want to know how the person thinks, not how good they are at prompting. It completely knocks out our ability to evaluate somebody's writing skills. Can I ask, not about any of your current employees, obviously, but people you've interviewed: Have you noticed a loss of just skill in writing? I haven't noticed it yet, but I would say I have not hired since it got good enough. I've definitely noticed it. And I think people underestimate this because they're used to the quirks of poorly prompted ChatGPT writing, and that is incredibly, incredibly easy to spot. Yeah. But if you know how to use the systems and you're better at it, and you're using more advanced forms of ChatGPT or Claude or Gemini, people can't tell. But I think when you ask people to write things, it's just not there.
I think there's been a few years now where that skill is not being taught. And you have pointed out that writing is how many people strengthen their ideas, that the work that goes into it is part of the work of thinking. And I have noticed, again, not speaking of anyone I've hired, but among people who have applied and others, that I think there has been a decrease in people's ability to write well, express their thoughts clearly and do the editing work. So one thing in your AI framework that I thought was interesting was that you want to expand the government's capacity on AI. What does that mean? It means making sure that we have the expertise within government to understand this technology and help contribute in a positive way to its development. And this has been horribly underinvested in, because we're not taking this technology as seriously as we need to. This is the first major technology that has developed basically without any government work in it. Al Gore didn't invent the internet, but DARPA did develop the ARPANET that became the internet. And even the space race was obviously primarily government-led. AI was completely developed in the private sector. I mean, there were some grants for research, but it was done outside the structures of government. And so we need to be hiring that expertise into government if we are going to help govern this and lead to good outcomes here. Can we do that with the way government hires? I've run into this question before, talking to people inside the federal government and inside state governments. Government hiring, for very good reasons, has structured pay scales and worries about horizontal equity and a million things that make sense when you're very worried about corruption and patronage and favoritism. But the market for top AI talent is insane, right? What Meta will pay you, what Google and Alphabet will pay you, what OpenAI and Anthropic will pay you, what they can pay you.
I don't think any of them are going to pay me. But yeah, not you specifically, but one... There's a question of not cutting funding for the parts of government trying to do this, but there's also the question of how you make sure the government has the staffing talent to keep up in a market that's hot. We absolutely should make it easier for government to hire experts and to pay more in order to compete in that way. I mean, we've found ways to let states directly fund high pay; the highest-paid state employee is usually the football coach. I'd rather it be a real AI expert who's working to make this future actually work for Americans. I want to get you to expand on this a bit, because we're hearing a lot of reports about Anthropic's latest models, which I have not had access to, so I don't know how good they really are at hacking every computer system on the planet, but Anthropic is saying they are very capable at that. And I think, really quickly, if we're going to have AI companies creating what are functionally cyber superweapons, the ability of the government to actually oversee these systems becomes pretty paramount very quickly. I think Anthropic is an interesting place, and it is posing a lot of governance challenges in opposite directions at the same time. On the one hand, you can't just have a private company creating cyber superweapons and hope for the best. On the other hand, we just watched the Anthropic and Department of Defense, or Department of War, controversy. When you're dealing with the Trump administration, do you really want this kind of quasi-nationalization of the labs? I think we're seeing simultaneously that it is uncomfortable having these systems as private as they are, and it is uncomfortable recognizing that if the government gets its hands on them, they could be used for whatever a particular government's purposes might be. And so it's left a lot of us, I think, who care about regulation and care about governance in an awkward spot.
It is deeply uncomfortable, because we are talking about such extreme power, and it's a question of where that power lies. If you take as a given that there will be a superintelligence developed, and I don't see any reason why there won't be at this point, then of course it's an uncomfortable question about where that sits, because you're talking about something that is smarter than any human ever. That is a real power question. And this is a real question that needs to be settled by policy, that needs to be settled by law. Because if you're just leaving it up to the whims of an executive branch where there are no restrictions on it, or private companies where there's no law, both of those feel deeply uncomfortable. This is why we need Congress to step up to the plate and actually decide how this division should happen. So in the answers you've given me, two things have become clear in the background of the way you think about this. One, you seem to believe we're going to full automation, not necessarily tomorrow, but you reacted with a lot of skepticism when I said I didn't think we would get there. I think there's a significant likelihood, and we should take it seriously. And superintelligence is also a real possibility, that we're not necessarily going to stop at human level, or even a bit beyond your average worker, that we could soon be dealing with something well beyond that. I think for a lot of people, they would hear that and say: So why not stop it? Why do you want to create the machine god that will put us all out of work, when we all agree we don't have good policy answers for what that would mean? Why do we want a superintelligence that we have no guarantee we will know how to control? If this is your set of views, why move forward, as opposed to trying to throw your body on the train tracks? Well, I don't think right now metaphorically throwing your body on the train tracks will make a strong difference.
And I do think we should slow down development until we’ve made a lot more progress on the alignment problem. I do think we’re getting into really risky territory. What you need, and one of the sections of the plan is about diplomacy, is international action. We should be engaging with other countries. We should be engaging with China. We should be building universal verification systems on what is happening, both at the chip level, where you can look at the geography and how it’s being used, and in the models themselves. We should be trying to lower the temperature on there being an arms race. Even at the height of the Cold War, we had the red phone to Moscow. So yeah, I am worried. If I had a magic wand, I would slow things down until we had better guarantees about what we were stepping into and where we were going. So now I want to flip the valence of this conversation. We’ve been talking, as I think most of the AI conversation does, about what I would call AI harm reduction: if this technology is moving forward, how do we make sure it causes as little harm as possible? But I think for people to want this technology to move forward, for it to even be conceptually a good idea for this technology to move forward, the case has to be better than that. And we were talking earlier about, in many ways, the absence of a positive vision for AI. These companies have to make back a lot of investment in the coming years. And as best I can tell, the business model they’ve come up with is replacing white-collar workers, and to some degree, subscription fees for people asking ChatGPT to look at a mole. What I have been wondering about for some time is all these promises of AI for drug development, AI for energy innovation. What would it look like to have a public agenda that actually tried to make that real, that actually tried to make it such that there was more AI development that went in those directions and that we got more out of it? 
So, I mean, I’ve heard you talk before about your interest in AI drug development. I want to hear your thinking, even if it’s not a full policy agenda, on what it would mean to have a positive agenda where the public sector is shaping this toward social good as opposed to simply private profit. We would build out an initiative that we’ve done in New York called Empire AI, in which the state government bought a large cluster of GPUs, committed to continuing to build that out, and gave our public universities access to it so they could run experiments at a much cheaper rate. It made a public investment on the research front to go after lots of things, including AI alignment and AI safety. But we could be directing grants to that specific research, and we could be building the infrastructure in the government to make that cheaper. I absolutely believe we should be trying to use AI for good, and New York was the first state to do this. Others are following, but the federal government has the resources to really make a deep investment here. And yeah, for a while AI’s benefits have been riding on the story of AlphaFold solving protein folding, which was an incredible advance and has sped up drug discovery. But there could be more like that out there. There are definitely more like that out there. If there are not, then we’ve been sold a bill of goods here, and I think the government should be making use of this technology for good and directing research in that way. That doesn’t, by the way, solve alignment problems. It could be that you want it to do really good things, and then in actually pursuing that, it goes off in a whole other direction. But yes, that is a good use of public investment. 
So let’s focus on drug development for a minute, because it’s in some ways the clearest case. Let’s say you imagine what certainly seems possible, which is that in the next, call it three to five years, AI systems begin generating molecules worthy of investigation at a rapid pace, either new molecules or existing molecules where the AI systems scour the data and realize they might have other uses. And if you know anything about drug development, you have choke points all across that process. There’s what the FDA can do. There’s getting everything from rats to monkeys to humans for trials. A world in which we suddenly had more good candidates would be a world where the choke points became something very different. And this gets a little bit more toward the way you were thinking, I think, about the grid: if AI is going to create, if we imagine AI will create, all this pressure for investment, and it will create all this demand for something, how do you use that pressure to open up parts of the system that have been clogged, that have fallen somewhat into disrepair? How would you make it possible for your economy to actually benefit from AI? That requires operating knowledge not just in the world of probabilistic predictions but actually in the world of things, of steel, of cement, of human beings who are willing to sign up for a drug trial. Well, that’s why there’s more to my platform than just the AI piece. You’re giving me a good opportunity to talk about it here. But we have to cut red tape and cut regulations. One of the ways that I have used AI already is that I put every statute in New York State through an LLM and asked it to identify laws that are out of date, that require paper when we could do something digitally, a bunch of ways of checking whether we have requirements that are just getting in the way of getting things done. 
What Jen Pahlka might call the policy cruft that develops over time. I have now put together a 60-page bill for this session just pulling out a bunch of these old requirements that are getting in the way of doing things. We can do a similar thing with regulations, not just with statutes: where have we developed practices that are now in the way of moving forward, in drug discovery or more broadly? Yeah, we need to change policies that stop government from getting things done. And sometimes that’s in technology doing the thing more efficiently, sometimes that’s in using the technology or not, but finding ways to identify choke points and finding ways to alleviate them. Or, we’re talking during tax week. A lot of us waited until the end or paid our taxes this week, and it has long been possible for the IRS to pre-fill a tax form for most Americans who have pretty straightforward taxes. Lobbying has made that very hard, and the Trump administration has made it harder. But it would be, as a technical matter, fundamentally trivial for there to be, through the IRS, a tax preparation AI system that every American had access to, where they uploaded their forms, it was cross-checked with IRS data, and it did their taxes for them in seconds, saving people a lot of time and energy. We have the capacity to actually give every American an AI accountant under the auspices of the IRS. If we don’t do it, it’s not because we can’t. There’s a real question of whether or not the lobbyists allow people to do that. But the relationship between people and the state could really be transformed if government chose to transform it. A hundred percent. And I think we need to make that a priority. 
So I have a bill that I’ve been pushing for a few years to make it easier for different agencies within New York City to share data that you give to them for the purpose of signing you up for benefits, so that if they sign you up for one benefit, you can automatically be signed up for another one. Right now that is restricted, and we should change that. Obviously, New York City invested something like $100 million in building a portal, but actually what we need are changes on the back end, in laws that make it easier to share that data. I’ll go a step further. I was speaking with the tax department in New York State and advocating: O.K., free file makes it easy for you, you don’t need other software, but why can’t we just do it for New Yorkers? We have a lot of the information as the New York State tax department. And the answer I got back is that so much of the information they have is actually wrong. They had this need to improve the data internally first. And I said, O.K., why don’t you just find the companies whose data is wrong or build systems to help them fix it? And they were like, we’re working on that, but give us five years. That’s where they want to get so that they can automate it. So maybe it does come back around to data integration and just having the data correct. It might no longer be that the technical aspect of how to do your taxes is the limitation, but just whether the underlying data we’re feeding in is accurate enough. I guess the principle I’m trying to get at here is this: to the extent one doesn’t believe we’re going to pause, I’m not saying you don’t, but one doesn’t, and that we are going to move forward at some pace here, which seems likely, I think actually benefiting from AI as a public is a harder challenge than people have given it credit for. I don’t think that just because the systems get better, there is necessarily a public benefit. There could be individual benefits and individual harms. 
But if we want drug discovery to accelerate, we need to open up the systems that would allow drug discovery to move faster. If we want the relationship between people and the state to get cleaner, we need to actually create the conditions for it and overhaul very, very difficult, archaic, multilayered, error-filled government databases. And it’s interesting, because right now throughout the private sector you see companies, with greater and lesser degrees of success, trying to figure out what it means to rebuild themselves to use AI, everything from how teams are structured to how their data works. The government, because it doesn’t get competed out of business by new governments, is working on much older systems, and it’s very, very hard to rebuild them. But I think for AI to be worth it, you’re going to need a lot more of this kind of investment at a much higher level of ambition. And right now, we don’t even seem to be able to legislate on the harms very effectively, so I’m not confused as to why we are focusing there. But I do worry a bit about it, because there’s a world where we’ve done some reasonable harm-reduction legislation and gotten very little benefit from AI, and that’s a world where we’ve kind of pushed AI toward being a worker-replacement machine as opposed to having a public vision for what we want from it. I 100 percent agree. And this is the hard work of governing. These are maybe not the easy places where we can build the legislative muscle. I would hope so; I think that’s probably around kids. But these are parts of the places where we have to work together to change that. And part of it will be on AI and setting up incentives, and part of it will be building the infrastructure that allows that to happen. We’re talking a lot about pretty high concepts here. 
One of my first bills in the state legislature was to help the state get onto cloud computing, because it mostly uses mainframes. It mostly uses mainframes? In 2023? Yes, yes. The speaker of the assembly codes in Fortran, and I always joke that his retirement plan is going to be fixing all the state systems, because they still run on Fortran. There’s just work that needs to be done on modernizing to allow us to take advantage of the benefits, and that will require both direct investments and a lot of legislating to encourage that direction. So one of the reasons I wanted to have this conversation with you is that you’ve ended up, whether you wanted to or not, a bit of a test case for how all this is going to work. You’re running for Congress, and there is, as I’ve mentioned before, the super PAC that’s funded by co-founders of Palantir, OpenAI and Andreessen Horowitz. They’ve spent a million dollars opposing your campaign so far. It’s 2.5 so far. Oh, 2.5 million, and they’ve suggested they might spend up to 10 million. At the same time, I’ve looked at some of their statements. Greg Brockman, who is one of the OpenAI founders and a major donor to this PAC, has said that being pro-AI does not mean being anti-regulation; it means being thoughtful, crafting policies to secure AI’s transformative benefits while mitigating risks and preserving flexibility as the technology continues to evolve rapidly. So what’s their problem with you? If they really, truly believed in having one national framework that regulates AI and balances the benefits and risks, they’d be supporting me. I think it’s a difference between what they say for marketing purposes and what they actually believe, and their actions betray that. So OpenAI last week released a policy document that mirrors a lot of my policies. The emphases are different, and I wouldn’t say that about all of it, but parts of it. 
Parts of it, yeah. They’re like, we believe in a 32-hour work week. Yeah, yeah. But they did say they wanted third-party audits, but sometime in the future. I think we’re already there. And there was much more of an emphasis on society dealing with the problems after the fact, as opposed to restrictions on the developers. I’m not saying it’s a match, but they put forward some policies there. And later in the week they also put out policies specifically around kids that included safe-harbor provisions, included testing, encouraged red-teaming of models. So when you red-team a model, or red-team any software, you get people to try to intentionally break it and to do something it’s not supposed to do. And you might want to red-team it around producing child sexual abuse material, to make sure that it can’t get out in the world. And right now, in every state in the country, red-teaming it and producing that material would be illegal. We have a zero-tolerance policy on the production of that material. Now, obviously no D.A. is going to go after you for that, but one of the things they talk about there is that they want to extend safe-harbor provisions so that you can actually encourage red-teaming. Yeah. I mean, this is my concern, and I’ve heard this from people on the Hill, people in the Senate. Elissa Slotkin said a version of this to me on the record: that at the exact moment that AI is becoming so powerful that it would be irresponsible for Congress not to be starting to construct regulations, legislative structures, transparency, protections for kids, the AI industry now has so much money that, much as crypto did before it, it’s able to create a kind of super PAC that has a Death Star-like capability. Now, it’s weird, because Anthropic is one of the funders of another PAC that is more pro-regulation and is supporting you. 
So you have players on both sides. But a world where AI will have this much money and the political system is this permeable to money is a world where, in order to regulate AI, you’re going to have to sign up your own AI patron to support you. And so I feel like there is some bigger question of political economy and power here that has ended up getting a bit of a test case in this race, which I think is quite worrisome. I just think we could very, very quickly end up in a scenario where politicians are terrified of the issue, and that’s the goal of Leading the Future. The goal, as they’ve stated, is to extract so much pain in this race and to beat me up so badly that when the idea of AI regulation is proposed in the future, politicians run in the other direction. I mean, they have said publicly that they want to make an example out of me. Think about what that means. Not that, oh, we have a different view and so we want to make an example out of Alex Bores. And they want to do that not because I have ideas that are outside the mainstream. When I proposed my framework, I got praise from those on the left, and the chief futurist of OpenAI retweeted it. They’re coming after me because I successfully passed the bill. Frameworks? There are lots of frameworks. Those are cheap. Who’s going to put political capital forward and get something actually done? And they tried to prevent any states from moving forward by putting this preemption language in legislation, and that failed. So they instead got this executive order from Donald Trump to target states that want to regulate AI and try to extract punishment: that they would cut off funding, that they would sue the states. And it targeted the RAISE Act, along with a few other bills throughout the country. So why are they coming after me? Because I might actually get a bill passed. This goes back a little bit in our conversation, but what in the RAISE Act do they actually fight? 
Because as somebody who cares about AI regulation, and I think it’s a good start, what actually got enacted there is a pretty soft bill. It is. It is also the strongest AI safety bill in the country, and I’m embarrassed by that fact, when it should be much stronger. When they come after it, when they’re trying to get it changed, what are they so upset about? It’s that there’s any regulation whatsoever. That really is the challenge: that there is any regulation, that they have to play by any rules, is such an anathema to them. And they don’t have to win forever. They only have to push this off for an election cycle or two. Given the speed with which AI is developing, the amount of political power, let alone capital, that they will have to deploy in the future will be unbounded. We already have elected officials who are terrified to take up this cause, despite how popular it is, because they see all the money on the other side and they’re risk-averse. I’m running for Congress. I talk to every member of Congress I can, and I hear from them in quiet conversations: yeah, we’re watching this race. We want to see if this is an issue that you can win on standing with people, or if the money just swamps everything. And the lesson that will be learned by members of Congress if the super PAC wins is: run the other way. Don’t actually touch this. Maybe you can give a speech on it. Maybe you can go on a podcast about it. But don’t try to pass the bill, because they will end your career. I think that’s a place to end. Always our final question: what are three books you’d recommend to the audience? The first is my favorite book of all time, and I know you have thoughts on this book, but it’s “A Theory of Justice” by John Rawls. I think it does the best job of setting up a broad framework of rights for humans while also understanding when inequalities could be justified. And I think it’s the best place to start for political philosophy. So I know you’ve tried it a few times. 
I will point out that in the intro he says, this is the third of the book that you have to read to get the basics of it, and here’s the half of the book you have to read to really deeply understand it, and the rest is for the academics. And so I’d encourage you to give it another try. The second one is “World Eaters” by Catherine Bracy, which is marketed as this deeply anti-VC book but is actually written by a tech insider and takes a much more nuanced approach to the incentives that venture capital sets up, which are always growth, growth, growth, and don’t think about the social consequences. And I’ll add that that’s because VC is always pushing for a company that will scale no matter what. I saw this happen to my wife, who is a YC founder and built a business that probably could have been fine on its own, but it had the venture investment, and it was scale or die. And so a lot of negative externalities have come from that. I think it’s a really timely look as we are building out AI. And the last one is, I think, a little more whimsical, but it goes back to our conversation about the skill of writing. It’s “Bird by Bird” by Anne Lamott, which is just a delightful read and is a good reminder for any procrastinators to just break down your work and do it bird by bird. That’s where the title comes from. But it is so well written that it leads by example in its instructions on the art of writing. And I’d encourage people, especially when our skill of writing is being degraded, to be intentional in that practice and to read that book. Alex Bores, thank you very much. Thanks for having me.
