
Hi {{first_name}}!
A lot happened in AI this week.
OpenAI made a major strategic hire that signals where agents are heading.
Google officially launched Gemini 3.1 and upgraded its reasoning engine.
Several product updates quietly pushed AI further into everyday workflows.
In this issue, I’ll break down the key updates that matter for business operators, and then go deeper on two stories:
What OpenClaw’s creator joining OpenAI tells us about the future of AI agents
A simple, practical way to move your ChatGPT context into Claude without starting from scratch
Let’s get into it.
New and Noteworthy

The new Google “Deep Think” Raises the Bar on Reasoning: Google launched a major upgrade to Gemini 3 Deep Think, a specialized reasoning mode built to solve the toughest challenges. This mode (which you turn on by selecting “Thinking” in the Gemini app) uses extra computing time to triple-check its logic before responding, allowing it to achieve a record-breaking 84.6% on the ARC-AGI-2 reasoning benchmark. This level of logical rigor makes it ideal for high-stakes business tasks like identifying hidden risks in 100-page contracts or running "pre-mortem" logic audits that check several years' worth of P&Ls against current strategic growth plans. Use this mode for problems where accuracy matters more than speed. Note that this is separate from “Deep Research,” which scours the web and compiles large amounts of data into a report.
Gemini 3.1 Pro (the agentic workhorse): Released yesterday, Gemini 3.1 Pro is the new core model for everyday business operations. It was built on the same advanced reasoning engine as Deep Think mode, but it is optimized for speed and "agentic" workflows where the AI executes a multi-step plan without constant human guidance. This version more than doubles the reasoning performance of its predecessor, making it significantly more reliable at synthesizing messy client data, evaluating standard legal documents, and accurately executing business automations like lead research or CRM updates. The update features a 1-million-token context window, letting you upload up to 1,500 pages of business documents at once for analysis, and is rolling out now across the Gemini app, NotebookLM, and Google Workspace.
OpenClaw creator joins OpenAI: Peter Steinberger, the developer behind the viral open-source AI agent OpenClaw, is now at OpenAI to lead their next generation of personal agents. OpenClaw went from a side project to over 200,000 GitHub stars and 2 million visitors in a single week. The project will continue as an open-source foundation backed by OpenAI. This is a clear signal that the big AI labs are absorbing the open agent ecosystem rather than competing with it. I’ll talk more about this below!
Spotify CEO reveals top devs haven't written a line of code since December: On a recent earnings call, Spotify leadership shared that engineers are increasingly using internal AI systems to generate and deploy code directly from Slack. This does not mean developers stopped thinking. It means they are orchestrating AI to accelerate execution. Over 50 features were reportedly shipped using this workflow in 2025. Whether or not you agree with the approach, it shows what scaled AI adoption looks like: humans directing strategy and reviewing output, AI handling structured execution.
Claude Cowork comes to Windows: Claude’s full desktop experience is now available on Windows, not just Mac. The desktop app enables multi-step task execution, local file access, and support for MCP connectors. If you’ve only been using Claude in the browser, this version expands what’s possible, particularly for workflow-heavy users who want deeper file and system interaction. You can read more about Claude Cowork in my last issue.
Google Docs adds AI audio summaries: Google Docs now allows you to generate and listen to AI-powered summaries of documents. Instead of skimming a 20-page report, you can get the core takeaways read to you. It’s a small feature, but these are the kinds of incremental improvements that save real time across a week when you process a high volume of documents.
GitHub adds MCP support to Copilot: GitHub’s Copilot now supports the Model Context Protocol, allowing it to connect to external tools and systems. This means AI coding agents can interact with databases, manage pull requests, run tests, and integrate more directly into development workflows. For teams with internal dev resources, this pushes AI from suggestion engine to system-level participant.
OpenAI Just Hired the Guy Who Built the Most Viral AI Tool Since ChatGPT

As of this past weekend, Peter Steinberger, founder of OpenClaw (formerly Clawdbot), is joining OpenAI to lead their push into personal agents.
Steinberger built the original prototype in about an hour. He connected WhatsApp to an AI coding tool, sent it a message, and got a useful reply back. That was November. By January, OpenClaw had become one of the fastest-growing repositories in GitHub history, pulling in over 200,000 stars and 2 million visitors in a single week.
Before you think this was just a lucky side project, Peter previously built PSPDFKit, a PDF software framework used by companies like Apple and Dropbox. Nearly a billion people use apps powered by his technology. He bootstrapped that business for 13 years and exited successfully. This is a seasoned operator.
The backstory is one of the more fascinating sequences in the AI space this year. The project originally launched as “Clawdbot.” Anthropic’s legal team asked him to change the name due to its similarity to Claude. Fair enough. He rebranded, and ultimately landed on OpenClaw. Days before the announcement, he shared in a Lex Fridman interview that he was choosing between Meta and OpenAI. Mark Zuckerberg was personally testing the product and texting him feedback. Satya Nadella called him directly. Every major VC firm was reaching out.
Ultimately he chose to go with OpenAI. In his blog post, he explained his decision by saying “It’s always been important to me that OpenClaw stays open source and given the freedom to flourish. Ultimately, I felt OpenAI was the best place to continue pushing on my vision and expand its reach.”
OpenClaw will now move to an independent foundation and remain open-source, with OpenAI sponsoring the project.
So what does this actually mean for you?
This hire signals where AI is heading next. We are moving from AI you type questions into, to AI that executes tasks on your behalf. Sam Altman has said the future is multi-agent.
Think about what that looks like in your day-to-day. Instead of asking ChatGPT to draft a follow-up email, imagine an agent that drafts it, checks your CRM, schedules the send based on your calendar, and automatically follows up if there is no reply in three days. That is the shift.
Steinberger has suggested that agent-style systems could eventually replace a large percentage of the apps on your phone. That might sound aggressive today. But when you consider how many apps exist just to manage tasks and handoffs, it becomes more plausible.
Here is the practical move that puts you ahead of most business owners.
Start identifying the tasks in your business that are repetitive, time-consuming, and require minimal judgment. For example:
Following up with leads who went quiet
Scheduling and rescheduling meetings
Sorting email to surface what matters most
Pulling reports from multiple systems
Sending recurring reminders to clients or team members
These are prime candidates for agent-style automation.
The businesses mapping these workflows now will be ready when agent tools become more seamless and embedded. Based on this week’s moves, that timeline is accelerating.
What to watch next.
If OpenAI integrates OpenClaw-style capabilities directly into ChatGPT, your AI assistant could manage your calendar, handle customer communication, and coordinate across tools from a single conversation. We have already seen AI move from answering questions to generating content. The next step is operational execution.
I will keep you posted as this develops.
How to Transfer Your ChatGPT Memory, History, and Context Into Claude

Claude has been making some serious moves lately, but the biggest hesitation I keep hearing is: “I’ve spent months building all that context in ChatGPT. I don’t want to start over.”
A few days ago I posted in my Circle community about why ChatGPT is still my daily driver, even though I also love Claude. A lot of you felt the same way. It’s not about which model wins a benchmark. It’s about the fact that ChatGPT knows you. Your business. Your style. Your preferences. That context compounds and makes every conversation better.
The good news is you do not have to rebuild that from scratch.
If you want to move your ChatGPT context into Claude like I recently did, you can do it in about 10 minutes:
1. Export your data from ChatGPT.
Go to Settings → Data Controls → Export Data. ChatGPT will email you a download link.
2. Locate the file called chat.html.
Extract the ZIP file and find chat.html. That file contains your full conversation history.
3. Create a new Project in Claude and upload it.
Name it something like “ChatGPT Hub” and upload the file. If the file exceeds Claude’s size limit for your plan, open it in your browser, select all, copy, and paste the contents directly into the Project instead.
4. Ask Claude what it knows about you.
Inside that Project, start a conversation and ask:
“What do you know about me based on my ChatGPT history?”
You should see your business context, writing style, and patterns reflected back.
5. Save the important patterns to Claude’s memory.
This is the step that makes it stick. Ask Claude to extract the most important insights and save them to its memory. Once stored, those patterns can apply beyond just that single Project.
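If your chat.html is too big to upload (step 3), here is one way to shrink it before pasting: strip the HTML markup and keep only the visible text. This is a rough sketch using Python's standard library; the file names and the assumption that the export's useful content is plain visible text are mine, not a documented ChatGPT format.

```python
# Sketch: shrink ChatGPT's chat.html export by stripping HTML markup,
# keeping only the visible conversation text. "chat.html" / "chat.txt"
# are the assumed input/output file names.
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text only, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside script/style

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep non-empty text that is not inside script/style.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def html_to_text(html: str) -> str:
    """Return the visible text of an HTML document, one fragment per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)


# Usage (assumes the export sits in the current directory):
#   text = html_to_text(open("chat.html", encoding="utf-8").read())
#   open("chat.txt", "w", encoding="utf-8").write(text)
```

Upload (or paste) the resulting chat.txt into the Project instead; the text version is typically a fraction of the size of the raw HTML.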
That’s it.
No starting from scratch. No re-explaining your business for the hundredth time. Just your existing context, now portable across tools.
From an operational standpoint, this gives you flexibility. You can test different models without losing the compounding value of your history.
Try it out and hit reply to let me know what Claude surfaces. I’m curious what patterns it picks up for you.
If you want to go deeper on any of this, that's exactly what we do inside the Circle community.
See you next week,
Julien
PS: If you found this useful, forward it to someone on your team or a colleague who could use it. www.ampra.ai/join-our-newsletter.