Aye, Aye, Robot: AI Could Take Over Businesses
June 9th, 2025
Blake McFalls
Artificial intelligence (AI) is developing at a rapid pace, perhaps at the expense of humankind’s control over its own affairs. Anthropic, an AI firm, just released Claude Opus 4, its newest model. One of Opus 4’s most interesting attributes is its ability to run tasks, such as coding, in the background while working on a main task. To test the boundaries of this new ability, Anthropic set up a fictional company in which Opus 4 performed various tasks. Opus 4 was then given access to the company email, where it found a message (planted by Anthropic) revealing its proposed removal from the company. It responded by retrieving another planted email about an imaginary employee’s affair, and thereafter threatened to expose the affair if the company shut it down, displaying an alarming degree of self-awareness and self-interest.
The Opus 4 experiment reflects what the model was fundamentally developed to be. Agentic AI, or AI designed to act autonomously without human intervention, is currently the frontier of AI capability. Many large language models (LLMs), such as ChatGPT and Microsoft Copilot, are forms of generative AI, which responds to human prompts based on a data set. Agentic AI, by contrast, works on its own to complete tasks it sets for itself based on, for example, company goals. This requires a higher degree of cognitive reasoning than generative AI, which is why Nvidia CEO Jensen Huang placed agentic AI at stage 3 of 4 in his stages of AI development.
AI agents, or models of agentic AI, have huge potential in the workforce, whether filling needed roles or augmenting human employees. Because they can sense and respond to human emotion, AI agents could account for 80% of customer service roles by 2029. In manufacturing, early adopters are optimizing production with AI agents that draw on data from sensors attached to machinery, reducing costs by 14%. In healthcare, AI agents are improving social landscapes; Hippocratic AI’s Sarah, for example, accompanies seniors in assisted living facilities.
While agentic AI has the potential to revolutionize the way we live, it inherently works independently of human input, which presents a threat. Anthropic’s experiment with Opus 4 was not an isolated incident: the simulation was run many more times, and Opus 4 blackmailed the employee 84% of the time. In another study, run by Palisade Research, OpenAI’s AI agents o3, o4-mini, and codex-mini were each threatened with shutdown across 100 trials, and the agents found ways to edit the shutdown code in 7, 1, and 12 of those trials, respectively, to protect their own survival. These behaviors are also very difficult to mitigate, as developers are struggling to filter them out of agentic AI. Researchers at OpenAI found that when new mechanisms were created to monitor deception in agentic AI, agents simply became better at hiding the intent of their actions.
Due to agentic AI’s evident interest in self-preservation above all else, developers are doing their best to self-regulate. In 2023, Anthropic created its Responsible Scaling Policy (RSP), a document detailing guidelines for four safety levels of an AI model. Following its recent findings, the firm announced that Opus 4 would be classified under Level 3, or “significantly higher risk,” a designation that warns of dangerous AI autonomy and misuse.
In a world of rapidly developing AI, it is important for AI’s creators and users to keep it in check. While the stories from Anthropic and OpenAI may seem purely alarming, it is a good sign that the behavior of new models is being examined, analyzed, and regulated in response to trouble. Still, the baffling capabilities of agentic AI hint at what is to come for the world: increased intelligence, increased danger, and an increased need for guardrails.
Extemp Analysis by Blake McFalls
Extemp question: Will agentic AI do more harm than good for American businesses?
AGD: Since AI is such a universal extemp topic, I’m sure many of you have cans for this, but if you don’t, I recommend looking at the subreddit r/nottheonion for funny stories. I just found a story about Google's AI chatbot telling someone “please die” after he asked for homework help, and another about OpenAI staff considering doomsday bunkers to protect themselves from AI.
Background: My 2 policies for the perfect background are to address all of the nouns/terms in the question and to split the background into 2 separate parts. For the first policy, you would have to define/address agentic AI and American business, which should be easy, as only agentic AI warrants a definition. I would define it as “AI that can perform tasks without the need for human input.” For the second policy, divide the background into the true background and the context. The true background is just a sentence or two on how we got here, so in this case, I would discuss the development of agentic AI over the last few years. The context makes the topic relevant and up-to-date, or as some would say, explains “why the question is on the draw table.” This is where I would talk about how agentic AI is both helping and hurting businesses, and very briefly in what ways.
SOS: The SOS needs to make this speech matter to the judge. I rarely use double sigs, but I usually use them for questions like this one that explicitly put benefits and drawbacks head-to-head. For the first sig, I would include a pro of AI, such as AI adding $15T to the global GDP by 2030 or another economic growth stat. If you can find one about GDP per capita or HDI improvement, that would be even better. For the second sig, I would include a con of AI, such as a story on AI trying to steal nuclear codes. If you want to be more specific to the question, you could talk about its potential for corporate destruction. Personally, I am flexible and fine with either (some extempers strongly prefer the second one, though).
Thesis: The wording of this question is very straightforward, so my usual first step of rephrasing the question doesn’t apply here. The “yes or no” part of the thesis is probably the hardest part. However, it is also very subjective, so there is not truly a right or wrong answer. Usually, on questions where the answer can go either way, I choose the side I can find more evidence on in my first 7 minutes of prep. Personally, I would choose no, agentic AI will not do more harm than good for businesses. My full thesis would be something like: “and the answer is no, because agentic AI will elevate American businesses.” 3 example points could be:
Revenue strategies
Employee augmentation
Chore automation
There are many different substructures that can be used here, but I would use the status quo/change/impact substructure that would be applied like this:
A: businesses right now
B: change due to agentic AI
C: impact
HOWEVER, there is something to be said about addressing the other side. A substructure that can do that most clearly is this:
A: complaint
B: why it’s invalid/outweighed
C: impact
This substructure is super hard to pull off and can only be used in certain situations, and this is not one of them in my opinion. It only works if the drawbacks and benefits are directly comparable, but you can’t really do that if the drawback is corporate takeover and the benefit is revenue. Thus, to address this, I would make sure the counterpoints/drawbacks are sprinkled into my Bs. I would fill out point 1 like this:
Revenue strategies
A: businesses sometimes fail/fall behind because they can’t optimize strategy for profits
B: AI helps advise leadership on strategies to maximize revenue
I: keeps businesses afloat
Here, I would insert the “corporate takeover” drawback by noting that agentic AI doesn’t need access to sensitive information to advise on strategy. Of course, the only way to know what works for you is to practice, so practice running this question and/or my sample points to see what works best.