Jacob Haimes
Appears in 23 Episodes
The Enemy of My Enemy is Still a Corporation
In this episode, Jacob and Igor break down the DoD vs. Anthropic standoff, tracing how Claude's use in military operations led to Anthropic being designated a supply c...
The Mythical AI Bear
This week, Jacob and Igor dissect the "mythical AI bear," the strawman version of AI criticism that gets thrown around in tech discourse. Working through a viral blog ...
Big Tech Plans to Move Fast and Break Democracy
We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Ig...
AI Skeptic PWNED by Facts and Logic
Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. While he's been the stronger AI skeptic betwee...
Tech Bros Love AI Waifus
OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans' concern now outweighs their excitement by a 2:1 margin. We trace ho...
AI Safety for Who?
Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF,...
The Co-opting of Safety
We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safet...
AI, Reasoning or Rambling?
In this episode, we redefine AI's "reasoning" as mere rambling, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the cla...
One Big Bad Bill
In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, mili...
Breaking Down the Economics of AI
Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "...
DeepSeek: 2 Months Out
DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its...
DeepSeek Minisode
DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very mu...
Understanding AI World Models w/ Chris Canal
Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the imp...
NeurIPS 2024 Wrapped 🌯
What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLa...
OpenAI's o1 System Card, Literally Migraine Inducing
The idea of model cards, which was introduced as a measure to increase transparency and understanding of LLMs, has been perverted into the marketing gimmick characteri...
How to Safely Handle Your AGI
While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually be changed when he gets into office? We take this ...
The End of Scaling?
Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources] are reporting an "end of scaling" for the current AI paradigm. In this episode w...
US National Security Memorandum on AI, Oct 2024
October 2024 saw a National Security Memorandum and US framework for using AI in national security contexts. We go through the content so you don't have to, pull out t...
Understanding Claude 3.5 Sonnet (New)
Frontier developers continue their war on sane versioning schema to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the n...
Winter is Coming for OpenAI
Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask "why would anyo...