
Hi {{first_name}}!
We just witnessed a week that signaled a fundamental re-tooling of the tech economy.
On one side, OpenAI closed a $122 billion funding round, the largest private raise in history. For context, the previous record for a private round (before the current wave of AI-fueled rounds) was Ant Financial's $14 billion in 2018. OpenAI just raised nearly nine times that amount.
On the other side, Oracle cut up to 30,000 jobs this week. These weren't performance-based cuts. They were a strategic move to free up capital for AI infrastructure.
When the giants start moving tens of billions of dollars and tens of thousands of people in seven days, it is a signal we need to look at closely. AI is becoming the new baseline for how businesses operate.
But for most SMB owners, these headlines feel a world away. You aren’t raising billions or managing data centers. You are trying to figure out if your team is actually getting more productive or just playing with new tools.
Today, we are cutting through the noise of the "Big Tech" wars to look at the developments you can actually use. That includes a new assessment I built to help you measure your team’s real progress.
Here’s what’s on the radar:
The Great Re-Tooling: Why $122B raises and 30,000 layoffs matter for your business.
The "Harness" vs. The Engine: What NVIDIA's latest release tells us about the future of work.
The Ampra AI Readiness Audit: A free tool I created to find the gaps in your team’s AI strategy.
Ok, let’s dive in.
New and Noteworthy

Slack Turns Slackbot Into an Agent Orchestration Layer: Slack announced a major overhaul of Slackbot on March 31, positioning it as a single interface for managing AI agents, apps, and business data from one conversation. The updates include intelligent meeting notes that automatically update your CRM and assign tasks during calls, voice commands that execute multi-step workflows, and the ability to route requests to the right agent behind the scenes without you needing to know which tool handles what. For teams already in Slack, this moves it from a messaging platform to something closer to a command center.
OpenAI Closes a $122 Billion Funding Round: OpenAI just completed the largest private fundraise in history, pulling in $122 billion at an $852 billion valuation. Amazon led with $50 billion, followed by Nvidia and SoftBank at $30 billion each. The company now generates $2 billion in monthly revenue and serves over 900 million weekly ChatGPT users. The bigger signal here: OpenAI is building toward a unified "AI superapp" combining ChatGPT, Codex, search, and agentic tools ahead of an expected IPO.
Anthropic Leak Reveals Unreleased "Claude Mythos" Model: Anthropic accidentally shipped over 500,000 lines of its own source code to a public registry this week. The leak revealed an unreleased, next-generation model codenamed "Claude Mythos." Internal files describe Mythos as a "step change" in capability that features recursive self-fixing and autonomous reasoning. While the leak raises questions about operational discipline at the world's leading safety-focused lab, it also confirms that a significantly more powerful version of Claude is imminent.
Oracle Lays Off Up to 30,000 Workers to Fund AI Infrastructure: Oracle executed what analysts believe is its largest layoff ever, cutting an estimated 20,000 to 30,000 employees, roughly 18% of its workforce. Employees received 6 a.m. termination emails with no warning. The cuts are tied directly to Oracle's AI data center buildout, which requires an estimated $156 billion in capital spending.
Alibaba Drops a Frontier Multimodal Model: Alibaba's Qwen team released Qwen3.5-Omni, a fully multimodal model that natively handles text, images, audio, and video in a single architecture with a 256K token context window. It supports speech recognition in 113 languages and outperformed Google's Gemini 3.1 Pro on audio and audio-visual benchmarks. Notably, this is Alibaba's first closed-source Qwen release, a departure from their open-source track record. If you’re tracking the multimodal race, this one moved the needle.
Apple Plans to Open Siri to Third-Party AI in iOS 27: Apple confirmed that the upcoming "Extensions" architecture in iOS 27 will allow users to swap out Siri's brain for Google Gemini or Anthropic's Claude. This ends Apple's exclusive partnership with OpenAI. For business owners on Apple devices, it means you can run your preferred AI assistant natively on the hardware you already carry.
Google Releases Veo 3.1 Lite for Developers: Just as OpenAI pivoted away from its Sora video app, Google released Veo 3.1 Lite, a high-speed, low-cost video generation model designed specifically for developers and businesses. By dropping the price of video generation to as little as $0.05 per second, Google is making it practical to embed AI-generated video into marketing tools, training materials, and product demos. Google isn't chasing the flashiest video model; it's building the most accessible one.
Microsoft Introduces Critique: Multi-Model Deep Research Inside Copilot: Microsoft introduced a new feature called Critique inside the Copilot Researcher agent. It uses a multi-model approach where one AI model drafts a response and a second model (often Claude) reviews it for accuracy and citation quality before you see it. This addresses the "hallucination" problem head-on. By having models check each other's work, Microsoft is making AI-generated research reliable enough for actual business decision-making.
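The draft-then-review pattern behind Critique is easy to sketch in a few lines. The functions below are hypothetical stand-ins (not Microsoft's actual API): one "model" produces a draft with its claimed citations, a second reviews the citations before anything reaches the user.

```python
# A minimal sketch of the draft-then-critique pattern. draft_response() and
# critique() are hypothetical stand-ins, not real Copilot calls.

def draft_response(prompt: str) -> dict:
    # Stand-in for the drafting model: returns text plus its claimed citations.
    return {
        "text": f"Draft answer to: {prompt}",
        "citations": ["https://example.com/source-a", "not-a-real-url"],
    }

def critique(draft: dict) -> dict:
    # Stand-in for the reviewing model: here it only validates citation
    # format, flagging anything that is not a URL. A real reviewer would
    # also check claims against the cited sources.
    flagged = [c for c in draft["citations"] if not c.startswith("http")]
    return {"approved": not flagged, "flagged_citations": flagged}

def research(prompt: str) -> dict:
    d = draft_response(prompt)
    review = critique(d)
    if not review["approved"]:
        # Drop unverifiable citations before the user ever sees the draft.
        d["citations"] = [c for c in d["citations"] if c.startswith("http")]
    return {"answer": d, "review": review}

result = research("Q3 churn drivers")
print(result["review"]["flagged_citations"])  # ['not-a-real-url']
```

The design choice worth noting: the reviewer runs before delivery, not after, so the user only ever sees output that survived a second pass.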
A Free 2-Minute Assessment That Tells You Whether Your Team Is Experimenting with AI or Building Capability

I started building something new this week that I am excited to share.
Over the past year, I have been having multiple conversations every day with business owners about AI. The most common thing I hear is not "we aren't using AI." It is "we are using it, but I honestly don't know if we are doing it well."
There is no baseline. There is no way to measure where the team actually is versus where they think they are. Without that clarity, it is impossible to know what to prioritize next.
So I created the Ampra AI Readiness Audit.
It is a short self-assessment you can take in about two minutes. You answer a few questions about how your team uses AI today, how leadership is approaching it, and whether there is any structure around the tools. These are the factors that separate organizations that are just experimenting from the ones that are actually building capability.
Once you complete it, you get a score and some insights. We then send you a full custom report that breaks down exactly what your score means. It outlines where the gaps typically are at your stage and the specific focus areas that will create the most traction right now.
I designed this because the biggest mistake I see businesses make with AI is not choosing the wrong tool. It is skipping the readiness step entirely. They jump into software purchases without understanding if their team has the foundation to make it stick. Tools without skills, strategy without culture, and enthusiasm without structure all lead to the same place.
This gives you the honest picture first. Then you can decide what to do with it.
Take the audit here: www.ampra.ai/ai-readiness-audit
If you want to talk through your results afterward, just hit reply. I am happy to spend 30 minutes walking through what it means and identifying the highest leverage next step for your team.
One more note: this is a new tool that I am still developing. If you see anything that needs to change, or if you have feedback on how it can improve, I am all ears!
From Chatbot to Operating System: Why the "Harness" Matters More Than the Engine
I have talked about NemoClaw and OpenClaw in recent editions. If you have been following along, you already know the direction NVIDIA is pushing. They want AI agents that run securely, act on your behalf, and live on your own hardware.
The release of Nemotron 3 Super this week is the model that makes that vision practical for a business. Here is why it matters.
Nemotron 3 Super is a 120 billion parameter model. In plain terms, it has a massive amount of built-in knowledge. But it uses a "Latent Mixture of Experts" architecture. Think of it like a company with 120 specialists on staff. For any given task, the system only activates the 12 specialists who are relevant.
The result is a model that is roughly 3x faster than traditional models of this size. It is smart enough to handle complex reasoning but efficient enough to run on much smaller hardware.
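The "120 specialists, 12 activated" idea can be sketched in a few lines. This is a toy illustration of Mixture-of-Experts routing, not NVIDIA's implementation; the router here just scores experts deterministically per task and keeps the top 12, so the other 108 never do any work.

```python
# Toy sketch of Mixture-of-Experts routing (illustrative only): a router
# scores every expert for a task, but only the top-k actually run.

import random

NUM_EXPERTS = 120   # "specialists on staff"
TOP_K = 12          # specialists activated per task

def route(task: str, k: int = TOP_K) -> list[int]:
    # Stand-in router: score each expert for this task, keep the best k.
    rng = random.Random(task)  # deterministic per task, for the demo
    scores = [rng.random() for _ in range(NUM_EXPERTS)]
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

def run(task: str) -> str:
    active = route(task)
    # Only the selected experts do any work; the rest stay idle, which is
    # where the speed and hardware savings come from.
    return f"{task}: combined output of {len(active)} of {NUM_EXPERTS} experts"

print(run("summarize Q3 sales report"))
```

The efficiency claim falls out directly: per task you pay for 12 experts' worth of compute while keeping 120 experts' worth of knowledge on call.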
The Shift from Engine to Harness: Kari Briski, NVIDIA's VP of Generative AI Software, made a point at GTC 2026 that I think every business owner needs to hear. She said we have moved past the phase of judging an AI model in isolation.
The real question now is how the "harness" around the model performs.
The model is just the engine. The harness is the system that manages memory, routes tasks between different tools, and maintains context over long interactions. If you have an engine but no steering wheel or brakes, you do not have a car. You have a hazard.
For a business, the "harness" is what makes AI reliable. It is the difference between a chatbot that gives you a generic answer and an agent that can actually monitor your business data and take action when something looks wrong.
The Reality of Agents: Briski shared a story at GTC that illustrates this perfectly. An NVIDIA developer connected an OpenClaw agent to his home sensors. While he was away, the system noticed his water usage was climbing. It flagged a possible leak, asked if it should contact a plumber, drafted the email, and sent it after receiving approval.
That is not a "chatbot" interaction. It is an autonomous workflow.
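The shape of that workflow, monitor, flag, draft, and only act after approval, fits in a few lines. The names below are hypothetical (OpenClaw's real interface isn't shown here); the point is the structure, especially the human approval gate before any action is taken.

```python
# Minimal sketch of an approval-gated agent loop (hypothetical names):
# monitor a metric, flag an anomaly, draft an action, execute only on approval.

def check_water_usage(readings: list[float], threshold: float = 1.5) -> bool:
    # Flag a possible leak if the latest reading jumps well above baseline.
    baseline = sum(readings[:-1]) / (len(readings) - 1)
    return readings[-1] > baseline * threshold

def draft_email() -> str:
    return "Subject: Possible leak\n\nWater usage is climbing; please inspect."

def agent_step(readings: list[float], approve) -> str:
    if not check_water_usage(readings):
        return "no action"
    email = draft_email()
    # The critical design choice: the agent drafts, but a human gates the send.
    if approve(email):
        return "email sent"
    return "held for review"

# A usage spike triggers the draft; the approval callback gates the send.
print(agent_step([10, 11, 10, 25], approve=lambda e: True))   # email sent
print(agent_step([10, 11, 10, 10], approve=lambda e: True))   # no action
```

That approval gate is the "harness" in miniature: the model supplies the judgment, the surrounding system supplies the brakes.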
The Strategic Takeaway: There are two signals here that I think land differently than the usual AI hype:
Open models are winning the production race. NVIDIA reported that token generation from open models has grown 35x over the past year. When businesses move from "testing" to "production," they want models they can control and run privately.
AI is becoming the new operating system. Jensen Huang, NVIDIA's CEO, argued that we are no longer just "using" AI tools; we are building AI factories, managing memory, file systems, and tool access.
The businesses that start thinking about AI as infrastructure they build on, rather than a tab they occasionally visit, are the ones that will pull ahead.
The barrier to running a private, capable AI system is lower than it has ever been. You do not need a data center. You just need the right harness.
That is it for this week. If anything in this edition sparked a question or an idea, I would love to hear it. Hit reply and let me know.
And if you are curious about where your team stands with AI, take the readiness audit. It takes a few minutes, and the custom report we send back should give you a clear picture of what to focus on next: www.ampra.ai/ai-readiness-audit
Julien
PS: Know someone who would appreciate this weekly newsletter? Please forward this their way and let them know they can subscribe at www.ampra.ai/join-our-newsletter.