One Big Bad Bill

Jacob Haimes
This episode was recorded June 15, 2025.

Jacob Haimes
Welcome to Muckrakers, where we dig through the latest happenings around so-called AI. In each episode, we highlight recent events, contextualize the most important ones, and try to separate muck from meaning. I'm your host, Jacob Haimes, and joining me is my co-host, Igor Krawczuk.

Igor Krawczuk
Thanks, Jacob. Normally we do a little bit of lighthearted news at the beginning of these episodes, but the world of AI was not only kind of depressing the last couple of weeks, but also boring. So as your little bit of lighthearted news and learning for this week: the Swiss have their National Gymnastics Festival this week in my hometown of Lausanne, and it happens only every six years.

Apparently the Swiss have a thing for festivals held many years apart, because there's also a wine festival, and it happens only every 20 years. So if you're looking for an extra special vacation in six years, or in 18 years, now you know where to go. But now, sadly, we have to return from the world of Swiss festivals to Trump's one big bad bill, which has already passed in the House, is currently in the Senate, and is hotly discussed in American media, and we're going to join in on it.

Jacob Haimes
Yeah, so first of all, I guess just to mention explicitly that we will not be going over all aspects of the bill. As is typical with this government, the one big bad bill is designed, as Steve Bannon put it, to flood the zone. So while that is the case, there are some general themes. One is that it's a pork...

Igor Krawczuk
For those who aren't familiar with the term: flooding the zone means they're gonna be doing so much bad stuff in it that nobody has the ability to catch all of it.

Jacob Haimes
Yes. So yeah, essentially it's a sort of authoritarian strategy of putting out just a ton of changes, a ton of content. And then it is practically impossible, or actually impossible, for a single person to take everything in and comprehend it in a meaningful way. You need to have trusted experts who know specific portions of it, whom you can defer to when their area of expertise becomes relevant.

And that's sort of the only way, as far as I know, to deal with this strategy. But even then it can be really difficult, even when you have that set up. Anyways, there are general themes for the bill. The first is that it's a pork bill for their supporters. This was a term I didn't know, but Igor brought it up: essentially it's just giving money, or leniency, to the people who support you, which there's been a ton of in this government. Especially towards the industrial complex, sorry, military-industrial complex, and surveillance. It's also integrating and consolidating power in the executive branch, including removing checks and balances and defunding a lot of programs in the classic starve-the-beast strategy, and rolling back Biden/Obama-era and earlier civil protections in the US.

So, like I said, these range from repealing Environmental Protection Agency regulations on greenhouse gas emissions and other pollutants, to the investigation and prosecution of immigrants, to auctioning off portions of the radio spectrum that we might want for future military uses or other things, or just to have as a buffer. And also petty things, like specifically shutting down the free electronic direct filing system of the IRS.

There are lots of reasons to think this bill is absurd and to be concerned and mad about it. But those things are not our specialty. We will be focusing on areas that pertain to AI in some way. Hopefully this can serve as the, you know, trusted expert for this space. And then I strongly encourage you to go out, look for other sources, and look into the other aspects of the bill, so that you know what else is going on in those regards.

Igor Krawczuk
Yeah, so one source that I, as a non-American, like, and also because it aligns with my politics, so, you know, make your own judgment, is LegalEagle. He's a law YouTuber whose work is relatively unpartisan, it's not really Democrat or Republican, but very institutionalist and liberal in the classical sense.

And he's of course horrified by the whole Trump thing, but he does actual lawyer analyses of various bills, and he's covering this bill among others. So that's a lawyer's angle on the bill that I personally can recommend.

Jacob Haimes
Yeah. So, within this bill, there is so much that even the stuff about AI can be split into a bunch of different categories, so we're just going to give a quick overview of what those are. One is general funding allocation for rollout. This is a budgetary bill; its main purpose is supposed to be giving money to the government to do things.

And so a lot of this is saying: hey, let's give money to different portions of the government to spend on developing AI. That's a big portion of what this bill is. That also includes research specifically. But it's important to note that when the government is funding research, that basically means the Department of Defense.

And, yeah, state-sponsored tech research is just abysmal in the United States, and I think most places. So they're trying to push that and say, let's spend more money on this sort of thing. Then there are also requirements for what you can spend the money on: specific things within the military, and specific things in general government functions as well.

So there are required uses for this allocation in certain areas. There are also tech tax credits. I don't have as much knowledge on this one, but Igor, you seemed to feel it was essentially a time bomb that could be used as leverage against private tech companies.

Igor Krawczuk
We're going to get into that when we reach that section. This is also me picking up on the chatter in the tech community, but basically, from the perspective of the startup bubble that I'm embedded in, this actually has some relation to AI, how it relates to jobs, and the velocity that people will be going at.

Jacob Haimes
And then the last section is the moratorium, or Peter Thiel's wet dream. You know, JD Vance is in the White House for a reason, and I guess it was this for Peter Thiel, because this is in the bill somehow, which doesn't make sense and actually might make it not pass. We'll get into that later as well. And then there's, you know, other random stuff; AI is showing up all over the place. There are various other things, but those are the big sections. So we're going to start with one of the items that some of this AI-allocated funding is required to be spent on, and that is specifically Medicaid fraud detection systems.

Igor Krawczuk
Trump has made big promises to not touch Medicaid and Medicare, and that all of the magical money savings he's gonna do to fix the budget, blah blah, are gonna somehow be possible without reducing social spending and while lowering taxes. But the only reason they can claim this is because, famously, DOGE set out to find two trillion dollars of waste in the federal budget, and they totally found it, like, in the time that Elon Musk had set for that. And there was no awkward conversation about this at all. But there's more, probably.

So this is one of the ways they're trying to seed what they would call efficiency in the government, by having automated systems. And, you know, fraud is always the boogeyman that they bring out, with the welfare queens and the strawman of: we need to really put a lot of effort into making sure nobody gets any money they shouldn't be getting. Without doing a cost-benefit analysis of: maybe it's actually fine if 0.5% scam their way into free healthcare, if it costs more to shut that down. And this in particular gives them money to roll out AI integration, which they have already gone on the record saying they hope will work in the IRS, where they already fired a ton of people and just said, yeah, we're gonna replace them with AI that will check the filings and detect fraud for us, or detect people who don't file. And this will now also be done in healthcare.

Jacob Haimes
And this is all in the name of efficiency, supposedly. But when you take this in conjunction with the fact that they're removing the e-filing system, which is, you know, very efficient and very valuable for the government, it doesn't make any sense. So really this is more about giving surveillance companies like Palantir, and other companies doing this sort of monitoring, more access, and then giving the state that access as well, so that they can really take control, essentially, of these processes.

And even in the best scenario. So let's assume that this isn't misused by the government, and isn't misused or poorly handled by the private companies handling the data, both of which of course have horrible track records here. For a second, let's assume the best version of this: it actually works. Even that is a problem.

And so there's a lot of research on this. I think I've mentioned this a couple of times, and Igor has as well, but the way that machine learning systems are designed is inherently unfair because of how society is right now.

Igor Krawczuk
So to make this a bit more precise, it's two things. Palantir comes into the mix because Palantir is one of the leading IT providers for governments, and they're already doing deals with, for example, the UK, where they're trying to roll out similar systems for the administration, and where the hospitals are saying: no, we don't want this, this doesn't actually help. But it's being pushed anyway, because it's a centralization of control.

So that's why we say Palantir gets pork here; it's Palantir and also other conservative tech people who have plays in software for managing large systems. The reason we say ML systems are inherently biased is because that's how the world is built. If you try to do the from-first-principles, "unbiased" thing, you don't put any of your own goals into the system, you say: I will let the data decide. Well, then you're imitating the current world. And if the current world has a bias, you're also imitating the bias. That's the whole reason the field of fair ML exists.

The problem, and we're going to put citations in the show notes here, is with trying to compensate for how the real world is right now as a way of changing it. That does work; it's called performative prediction, and it's a whole ongoing field of research. But you will make more mistakes, and you're also possibly not going to work as well on the favored group. Because you can say: I will use only the diagnostic criteria that work on, for example, white people to detect skin cancer. And of course that will be easier, because I just ignore a whole chunk of the population, and so I will have a much higher detection rate, which looks good in the metrics, and, you know, all the minorities can get fucked. But if you then say, I want it to be fair: it is almost impossible to bump up the performance on a minority group without sacrificing performance on the majority group.

And then you're in effect penalizing one group without even benefiting the minority groups that much, because you can't make up the missing data that you don't have on them, because they are a minority. And the design choice of doing this or not doing this is inherently political, right? There's nothing objective or a priori fair about it, and different jurisdictions differ on this. One of the papers that we will link is an EU non-discrimination analysis, and the EU and the US have very different takes on how they approach non-discrimination.
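
To make that trade-off concrete, here is a minimal sketch on synthetic data (everything below is hypothetical and purely illustrative, not any of the systems discussed in this episode): a model trained for overall accuracy is dominated by the majority group, and reweighting to help the minority visibly costs accuracy on the majority.

```python
# A minimal sketch on synthetic data (all numbers hypothetical): a model
# trained for overall accuracy is dominated by the majority group, and
# reweighting to help the minority costs accuracy on the majority.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, tilt):
    # Two features; the label depends on them differently per group,
    # standing in for diagnostic criteria that transfer imperfectly.
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] + tilt * x[:, 1] > 0).astype(int)
    return x, y

x_maj, y_maj = make_group(9000, tilt=0.0)  # majority: 90% of the data
x_min, y_min = make_group(1000, tilt=2.0)  # minority: a different signal

X = np.vstack([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

# Plain training optimizes overall accuracy, which the majority dominates.
plain = LogisticRegression().fit(X, y)
print("plain:      majority", plain.score(x_maj, y_maj),
      "minority", plain.score(x_min, y_min))

# Upweighting the minority to equal total weight moves the shared decision
# boundary: minority accuracy rises, majority accuracy drops.
w = np.concatenate([np.ones(9000), np.full(1000, 9.0)])
fair = LogisticRegression().fit(X, y, sample_weight=w)
print("reweighted: majority", fair.score(x_maj, y_maj),
      "minority", fair.score(x_min, y_min))
```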

So that's kind of like the fundamental conundrum from a theoretical perspective. And this also kind of matters in practice. I'm not sure, do you want to talk about the Dutch volunteering to be the control group for this or the test group, depending on how you phrase it?

Jacob Haimes
No, you can bring up this one. I have a lot more to say about the next thing we want to talk about. So it works.

Igor Krawczuk
Okay. So then I'll try to keep it brief. Lighthouse Reports is an amazing journalistic collective that has done top-notch reporting on tech; I refuse to keep saying AI if it's not required for context. They, amongst other things, looked at the city of Rotterdam a couple of years ago, which had done, you know, AI before it was cool, a.k.a. linear regression, for fraud detection.

Well, it might have been a random forest model. Anyway, would it surprise you to learn that the AI system that was trained by a company for the city of Rotterdam had a bias against immigrants, single mothers, and basically all the minority groups you can think of?

Jacob Haimes
Wow, I'm shocked.

Igor Krawczuk
Yeah, like, shocked. And now, with this knowledge, Amsterdam tried, in collaboration with Lighthouse Reports and MIT Technology Review, to actually prove it could be done: we can truly make a fair automated fraud detection system. And they documented the whole thing. Would it surprise you to hear that they failed?

Jacob Haimes
Not really.

Igor Krawczuk
Well, I think I've used up that joke. Yeah, they also failed. And they touch on a thing that one of the main researchers on the papers we talked about said in a pop-science article: the old system is biased, against one group or another. If you have an imbalanced world, and you build either in reaction to that bias or on data shaped by that bias, you know, either you get exactly the same bias or you put in a counter-bias, and you're probably going to overshoot and put in a different type of bias. And so the only way to really make an equitable AI system would be to create an equitable world before you deploy the AI, and then train it on the equitable data.

Or you use this very cutting-edge research called performative prediction, which tries to rigorously analyze how you have to nudge the predictions in order to nudge the world towards a fair state. But let's be real here, this is not what's gonna happen. So all of these things in combination are why, even though they will say, yeah, we're doing our best efforts, we're getting the best experts, we're paying, you know, to get only fair ML, which, let's also be real here, they might not do...
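
For intuition on the performative prediction idea mentioned here, a toy sketch (the setup is hypothetical, not from the papers linked in the show notes): the deployed model changes the data it is later retrained on, and repeated retraining converges to a point that is optimal for exactly the world the model itself induces.

```python
# A toy model of performative prediction (hypothetical setup): deployment
# shifts the population, and repeated retraining settles at a
# "performatively stable" point.
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0  # the deployed decision threshold

for step in range(12):
    # The world reacts to deployment: the feature distribution shifts by
    # a fraction of the current threshold (e.g. strategic adaptation).
    x = rng.normal(loc=1.0 + 0.3 * theta, scale=1.0, size=5000)
    # "Retrain" on the induced distribution; the sample mean minimizes
    # squared loss, standing in for empirical risk minimization.
    theta = x.mean()
    print(step, round(theta, 3))

# theta converges to ~1/(1 - 0.3) = 1.43, a point that is optimal for
# exactly the distribution its own deployment creates.
```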

This program is a really, really bad idea. And it leads a bit into the next topic as well, which Jacob can talk about: if you have an AI system handling something like the medical system, that is one step towards having the centralized database that is very easy to connect different parts of people's lives into. Which, you know, as a German, there's a bit of history there on why that is a very bad idea. And I will hand over to Jacob now, who has a different perspective on the same thing, but also highly relevant.

Jacob Haimes
Yeah, and we'll get back to the German perspective as well, because I think it's important to address that and remind everyone about it. But essentially, it's not just the medical aspects either. There's a "one big beautiful dataset" as well, which is essentially trying to bring together all aspects of the data that the government keeps on United States citizens.

The justification, I guess the reasoning behind it, as far as I can tell, the main thing is they're saying it "desilos," which is of course a fun word to throw around, because people love those kinds of words. But what it will do is create this dataset, which is essentially being handed over to Palantir, as well as whoever else the government wants to contract to do the work, because they can't do it themselves, because the government doesn't understand tech. And that's just true, as noted with the...

Igor Krawczuk
It is also true because of consistent pressure to not build up capacity in the federal government, as part of the aforementioned starve-the-beast strategy, right? This is something I would like to contextualize strongly, because I feel like that often gets lost in the American context.

Jacob Haimes
Yeah, you have to take into account that they're setting this up so that the government is reliant on the private sector; that's part of this push. In addition to that, it is centralizing all of this data so that it can then be used against citizens.

And this is not a far-fetched kind of concern. Yes, it occurs, you know, essentially in Minority Report, which is one of my favorite AI touchstones. But people have been talking about this for decades. The ACLU has articles from like 2002, 2003 saying we shouldn't be doing national ID cards, we shouldn't be centralizing all this information, because of all the ways it can go bad.

But it's not just groups like the ACLU either. The Cato Institute, which I did some research on, is a right-leaning think tank with very high credibility ratings, like the highest you can possibly get. They recently, relatively recently, I think this one was from 2018 or so, or maybe it was just a couple of months ago.

It doesn't matter. There are articles that are just a couple of months old, articles that are a couple of days old, and articles from many years ago, all of which are saying: if you value your privacy, resist national ID cards. If you value the government not being able to know everything about you whenever they want, regardless of where they are coming from, resist master files like the one being proposed by Trump with the one big beautiful dataset.

There are very strong arguments here; it's really bad to be consolidating all of this information.

Igor Krawczuk
To make this a bit concrete again, when we say master file: the problem if you consolidate everything like this is that there's no place left for internal checks and balances. The idea is that if there is no national ID card, only a state ID, let's say, then going from state to state to evade the federal government, if the federal government becomes tyrannical, remains a possibility, right?

Because if the federal government captures one governor who is willing to ignore the constitution and go along with its agenda, then other governors can still resist. That's the whole idea of independent states and having a small government, which I thought one of the parties was really for, but I guess that was a hallucination.

Jacob Haimes
Yeah, and I guess the last thing to bring in here is that this idea, that we shouldn't have these far-reaching government arms that are able to know where everyone is at any point in time, is a very typically right-leaning sentiment. That is something that people on that side have said: we don't want big government, we don't want to be handing over this power and our money. Because it's not like this is going to be free; it's going to cost money, and that money is going to go to the private sector. And it is the exact opposite of small government and letting states decide. So if you are anti-big-government, if you are right-leaning at all, if you're conservative about how big the government should get and how involved it can be in your life, this should be incredibly concerning to you.

I used to think that the people who were really intense about data privacy were freaks. I thought that, you know, everyone's got all the data already, which is true to a certain extent, but that doesn't mean we shouldn't be really concerned about it, because this is the way we get into those sci-fi scenarios like the one in Minority Report. So I do just want to make sure that's very clear: this is how we get started in that direction.

Igor Krawczuk
Also, it's much less sci-fi, because this is in the past. I'm from Germany. The Americans, when they waltzed in and liberated Germany from the Nazis, rolled back some of Hitler's reforms, which were exactly stuff like this. He called it the Gleichschaltung, the synchronization of the state.

They basically enforced that every state in Germany has its own police force, and that these forces by design are not allowed to co-locate or consolidate their records. They need to be separate, and there needs to be a procedure for handing off one state's files to another state. And you can try to make that exchange efficient, but the difference with a very efficient exchange system is that there's a record of who accesses what. Every time there's a centralized database instead, you get cases like, you know, a cop looks up a woman he gave a warning to and then sends her a weird SMS about whether she doesn't want to hang out some time. That's a thing that's on German social media right now; stuff like this happens. And by reducing the blast radius of this, even without a full fascist takeover, you can diminish the impact.

But if you have a fascist takeover...

Jacob Haimes
Okay... So, Hitler and the Nazis had a system where it was easy to access everything.

Igor Krawczuk
They created it. One of their first moves was to make sure that all of the media had one line, to Goebbels, and the propaganda minister was able to dictate what was the correct and the not-correct thing. By the way, this was one of the things that Trump floated during the Harvard case, when different reporters were differently mean to him.

Another thing they did is they integrated all of the police forces, because every state had their own, and they created one big Reich police system, because, again, centralized seat of power. And they did this in every branch of the government, where they tried to make sure that the whole thing was really unified under the one unitary executive at the top, which was der Führer. And then when the US came in, they broke that apart.

They broke it apart, they rolled it back, and they also added new additional safeguards. And just the trauma of it all was also enough to really enshrine this thing of: you don't keep data, data is a liability. It's called data frugality. You only ever have the data that you need, and you delete everything that you don't need. It was a guideline of the German state for 50 years, and they only now started to roll it back to compete with the Americans.

And the reason for that is the classic "you should not store data you don't need" story in Europe, which is the Dutch health record system. Just in case they might need it, you know, as a utility, it asked you your age and what your health issues were, but it also asked your religion. And that meant that when the Nazis came in, they had a detailed list of every Jew in the Netherlands, including where they lived. One of the big things that the resistance did in the Netherlands was to try to burn that shit. And they failed; they burned only like a quarter or a third of it. And so the Holocaust was extremely efficient in the Netherlands. Because, you know, it's easy: you just have a list, you go through it, and you collect everyone. And if you have no nice big master file at all, that avoids that.

Whereas if that big master file is broken up, and, you know, the health insurance data is not directly part of the executive branch, then they need to go through a procedure, and the people trained in accessing the systems are not fully on board with you. It's again this internal safeguard thing. So we don't need to go to sci-fi, future-crime, whatever things, even though that also sucks and is also already being tested in different cities.

But like...

Jacob Haimes
It already happened.

Igor Krawczuk
It already happened. And if a single government official can type into government Claude: hey, pull up all of the records of the people I don't like, including what we can use as leverage against them, they are traitors to the country and we need to make sure we get them. If they can just do that, that's a huge accelerant for abuse. Yeah. Which brings us to the last point we prepared for this section, which I hand over to you, Jacob.

Jacob Haimes
Yeah, so also buried in, you know, Anthropic's recent announcements is Claude for national security, right? So the thing that they said they weren't ever going to do is just a thing that they're doing now.

And I guess the reason to bring this up here, in conjunction with all of this other information about oversight, lack of oversight, and integration, is that a lot of it is happening right before our eyes. So we really need to be paying attention to this stuff.

It also ties in nicely with the next mention we wanted to make; we aren't going to do as deep of a dive on this one. But a lot of the money being put forward in this bill is for military integration of AI. So one thing that's notable is that there's a lot of automation wording in there about drones and other things like that.

That's essentially the same thing: it's the automating of warfare. Now, there are legitimate things to say about the positives and negatives of having that. I mean, I think there are a lot more negatives, given the tendencies of the people who use those systems, as can be seen in the Israel-Hamas war and the abuse of AI systems there, leaning on them as if they were fact in order to increase the efficiency of operations. I think that's an excellent example. It's not just AI though, it's all tech. I talked about this in a recent Into AI Safety episode, so if you're interested in that, definitely go check it out. But...

They are bringing more tech executives and more AI applications and automation into the military.

Igor Krawczuk
Some context for that: they are literally having the tech execs pass through the same program that military doctors and other military experts pass through. So they will become direct commissioned officers, so that they can operate embedded into the military context and know how to behave there, in order to streamline the integration and push the product, basically.

And I assume it's also so they can steal the valor and call themselves, you know: I have served, I am an officer in the military, blah, blah, blah, I went through the officer school. On Hacker News I saw it described as a dinner school or something like that. Basically, if you're a highly skilled person who already has the skills they need, which for these people will be their tech background, it's just an etiquette course and a navigational course. I wanted to give this context just so people can call bullshit when one of these tech bros talks about: I went to this really tough officer school.

But the main gist here is strong integration. And they also passed an executive order to directly launch that, which I quickly went through before the show. It's nothing super dramatic; it's basically just accelerating unmanned drone programs, reshuffling some rules, and focusing on made-in-America stuff. It's a push towards, again, fully automated warfare, fewer humans in the loop, less accountability. There are already whistleblowers talking about how the human in the loop is already processing, you know, every kill request in like three to ten seconds. So they have three to ten seconds to click a button of: yes, kill this person. And what happens is: click, click, click, click, click, exactly how you would expect from somebody who is basically a vibe-coding dev.

Jacob Haimes
And I guess the last thing is, again, this isn't new. People have been relying on tech at the expense of human oversight to make these life-threatening decisions for a long time. There were scandals in the 2000s, when the US started using metadata to inform whether or not someone should be killed, and then people relied on just that. And that's bad. Yeah, of course it is. But it's all sort of the same story here: over-reliance on the tech and not seeing it as fallible.

Igor Krawczuk
Yeah, one thing is that this executive order basically reduces a lot of the constraints and pushes stuff forward, and it also includes the civilian sector. They also have...

Jacob Haimes
This is the Unleashing American Drone Dominance executive order, which is mostly what we're talking about now, but we're bringing it up because it ties heavily into the use of AI and automation in the military.

Igor Krawczuk
Yeah, but it is tied together; it goes hand in hand. The bill provides the funding and the very strong mandates, and the executive order makes it more precise. And it includes, again, AI-expedited waiver reviews and relaxations of what you can do with the drones, even for the commercial sector. Like, previously drone operators would have to actually keep an eye on the drones. Now, if they're, say, a camera operator or whatever, it's legal for the drone to go out of their view, and even if it crashes into somebody, that's in line with the deregulation.

I'll do the next one very briefly, because it's a bit technical. Basically, a lot of tech salaries can be classified as R&D under an accounting standard. That means they can be offset against operating income, so that taxable income is reduced.

Jacob Haimes
So specifically here, we're talking about how the one big bad bill gives incentives and tax credits to tech by allowing a lot of these tech positions to be classified as R&D, which can then be counted against your...

Igor Krawczuk
To be precise, you can always classify things as R&D, and then there are rules for how you can amortize them. It used to be that you could just call it R&D, and in the same year that you pay a salary, you deduct it from your income. That's good. It means that if you're running a business that barely breaks even, where your net profit is basically zero, that actually works out for you. You can get just enough money to break even and keep improving your product, and then at some point you have built up a good product, you make profits, and then you get taxed.

But in 2017 that changed: the law forced companies to amortize over five years. So you pay the salary now, but you can only put 20% of that money against your income, which means, you know, 80% of the money that you got now counts as profit. Trump put a pause on this regime into action in 2017, which expired in 2022, and now they are bringing that pause back.
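
To see what that 20% figure does to a break-even company, here is a back-of-the-envelope calculation with illustrative numbers (straight-line amortization as described above, ignoring the half-year convention that applies in practice):

```python
# Back-of-the-envelope version of the arithmetic described above, with
# illustrative numbers (straight-line 20%/year, ignoring the half-year
# convention that applies in practice).
revenue = 1_000_000
rnd_salaries = 1_000_000   # engineering pay classified as R&D

# Old regime: expense R&D fully in the year it is paid.
taxable_expensed = revenue - rnd_salaries
print("expensed:", taxable_expensed)    # 0 -> break-even startup owes no income tax

# Post-2022 regime: amortize over 5 years, so only 20% is deductible now.
taxable_amortized = revenue - 0.20 * rnd_salaries
print("amortized:", taxable_amortized)  # 800000.0 -> taxable "profit" despite zero cash
```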

This is relevant for AI in the sense that, if it goes through, a lot of AI R&D spending becomes fully tax-deductible against profit in the same year, instead of only partially. So it's an accelerant for AI progress.

And you can have opinions about whether it's reasonable to do this or not. But the main reason we're calling it out is that they're not just reclassifying things. They're not saying: hey, forcing this to be amortized over five years is bad for innovation, it should always be expensable R&D. They're not doing that. They're doing: for Trump's presidency, this gets to be R&D, and afterwards, let's see. Which is a control technique in my eyes.

Jacob Haimes
Yeah, absolutely.

Igor Krawczuk
And I think that deserves to be called out. It's also relevant context: if you've seen the tech layoff wave happening in the last couple of years, a good chunk of that is not AI, even though people frame it as that. It's just high interest rates and this rule being in effect for a while, with people trimming their R&D budgets because they can't actually justify them at the pure bean-counter accounting level.

That's basically this part of the bill. It's not necessarily bad, but the way it is being done is bad. And depending on your position, it might also be intrinsically bad. And then, what is intrinsically bad...

Jacob Haimes
Yeah, and then we have the last section. So this is one section, and there are actually two parts to it. The first part is essentially just giving money to upgrade and update systems, I guess. So that fits in well with all the other stuff we mentioned about, you know, national ID cards, the one big dataset, all that stuff.

But then after that, there is a section which creates a moratorium on all regulations of AI by states. So what this would do is it would prevent any states from enforcing any regulations on AI for the next 10 years. Essentially, allowing AI companies to do whatever the fuck they want for 10 years.

And this will have direct impacts immediately; there are laws that are already on the books. There's one in Colorado, which is where I'm based, which is probably one of the more innovative laws out there on AI, I would say, that is in late stages or potentially has already been passed. I don't remember exactly where it is, but it hasn't gone into effect yet at the very least. That would be prevented from being applied. There are also others; I think there's one in, let's see, California for sure. There are a couple of others. But...

Essentially, it means that the US wouldn't have any AI regulation, because there's no way that the national level, the federal government, is going to do anything that's actually meaningful. So it's left up to the states, but now it's saying: actually, states, you don't have that choice. And, I mean, this is essentially what I do for a job, trying to establish and promote the idea that we need regulations on these systems, and it would make it so that you can't really do anything in the US.

Now, this sort of stipulation, I guess, doesn't pass the Byrd rule, as far as I can tell. The Byrd rule is something that says budget reconciliation bills, which is what this bill is, aren't supposed to have extraneous stipulations attached to them. And if they do, then it requires more votes in the Senate to get passed.

So I believe that this bill, as is, doesn't get through the Byrd rule, meaning they'd need a 60 out of 100 majority, as opposed to the 51 out of 100 majority they'd need for a budget reconciliation bill that does pass the Byrd rule and all the other requirements. So it might not even be allowed, right? But it's still being presented and pushed through.

Yeah, I guess I get a little bit heated or passionate when it comes to this, but Igor, do you have anything else you wanted to add that I probably forgot because I was distracted?

Igor Krawczuk
Yeah, I was just going to bring up a nitpick: technically this doesn't bar the federal government from making any AI regulations, which is what, like, Sam has been calling for, federal regulation; the ban is at the state level. But my response to such a nitpick would be: what the fuck, no, get real. This is not what's going to happen...

Jacob Haimes
There's a reason why all of the tech people are calling for national regulation. It's because it's infeasible.

Igor Krawczuk
In the US.

Jacob Haimes
Sorry, in the US, yes. But that's the thing: the people calling for national-level regulation don't actually want it. They say the US needs to regulate nationally because then they get to appear like they are being safe or whatever, but there's no actual threat, because there's no way that something that actually had teeth, that actually regulated in a meaningful way, would get passed at the national level without something working in place at the state level first. Maybe eventually that's possible, if we get something working at the state level that can then be expanded. But that is, in my opinion, not possible unless there are some major changes in how Congress, the House and the Senate, are set up.

And anyone who is saying, well, the federal government can regulate it... Yeah, I would love to have your worldview.

Igor Krawczuk
For me, I'm sympathetic to some of the arguments against, you know, over-regulating innovation, because I've come to realize that I'm indeed a closeted classical liberal with just some weird, more lefty leanings. So, you know, freedom good, actually. But in the US context, this kind of goes against the whole states' rights thing, no?

Jacob Haimes
No, yeah, it just does.

Igor Krawczuk
So that's what you get.

Jacob Haimes
Which is something that's supposed to be really big for typically conservative people: the states are supposed to be able to decide. And this literally just says: actually, you can't. On this one thing, which you have already decided you want to regulate, we're saying you can't do that now. This hypocrisy is absurd to me.

Igor Krawczuk
And one thing that's maybe important to preempt on this topic is how this will be justified. I have no doubt it will be: this is a national security issue, AI is a strategic resource, we need to make sure we move fast. The states can't outlaw nuclear weapons either, even though they would like to, because that's a national security concern.

In which case: the whole national-security-concern thing, where everything becomes a national security concern, is again one of the playbooks of how the Nazis went about it. There's the idea of having national concerns for everything and the state running everything. And this is not an ad hominem, in the sense of: the bad person does it, therefore you shouldn't do it. This is a thing of: if you do all of the other stuff that is authoritarian, centralizes power, and can lead in a very bad direction, and then you also, like, escalate tensions at a protest that was relatively peaceful, and then you just start sending in the military, for example, if that ever happened...

Then in that context, it should be unsurprising if you also complete the rest of the puzzle. Because, you know, if we asked an LLM to complete this sequence after some in-context learning priming, I can tell you what it would extrapolate to. So people should actually be concerned about their values.

Jacob Haimes
This decision just baffles me; the idea that anyone could be behind it makes no sense. And actually, I think that holds a good bit of water, because one of the things that has been said by Republicans in the House who voted for this bill, and not just one, multiple of them, including Marjorie Taylor Greene, you know, the worst one or whatever, and others, is essentially: well, I didn't know about that part.

And to those people I say, fuck all the way off. I do not give a single shit that you did not know.

It is literally your job.

If I know more about this bill, when I am not being paid by citizens, and you are, that's a serious problem and you can go fuck yourself. Like I just, I don't even understand.

Igor Krawczuk
I don't even live in this country and I seem to know more about the lawmaking in the country.

Jacob Haimes
Yeah. I guess the last thing that I wanted to end on: the major news of last week, at least as we're recording this, I guess by the time this comes out it'll probably be two weeks ago, was Trump and Musk having their little couple's spat. Which, you know, I get it, it's interesting and it's fun to follow. But it's not what we should be worrying about.

What we should be worrying about are things like this bill, which are just massive breaches of the Constitution and of what America says it's supposed to be, or at least has said for a long time. And by America, I mean the United States, because, you know, Canada and Mexico are probably doing okay-ish, and they aren't the ones saying this. Well, at least, yeah.

Essentially, I'm ignoring those. I'm talking about the United States.

Igor Krawczuk
I think that's all the muck Jacob can handle this week without having a stroke out of anger at his, technically, representatives.

Jacob Haimes
Well, they aren't mine. Colorado is actually doing pretty well. As far as the states go individually, I think Colorado is one of the best-governed states by a pretty significant margin, which is nice. But yeah, that's not the whole picture.

Igor Krawczuk
Yeah, I'm too Europe-brained, where, you know, if you vote for a party, it's a mixed thing; we don't have regional representation as strong as you guys. But anyway, thanks for listening, guys, and call your representatives if you are concerned about this stuff, and get informed. Sorry, you can continue.

Jacob Haimes
Yeah, no, I mean, I was just gonna say: if you found the things we discussed in this episode surprising and you oppose the bill, please write an email to your senator, or call (though writing an email is relatively easy), and tell them why you don't like it and why they should oppose it. Or at least make a social media post sharing the episode.

Typically I would start with that and say, you know, share the episode or leave a review or whatever, but this time I care a lot more about getting as many people as possible to write their representatives and say: this is bullshit, we shouldn't be doing this. So, yeah, it's really important that all of us know about this stuff, stay informed, and act now, before it's been implemented.
