Hi {{first_name}}!

I’ve been thinking a lot about the admin tax most of us pay. The daily drag of copy-pasting data, triaging email, and babysitting systems that never quite talk to each other. It’s exactly the kind of friction we try to eliminate at Ampra.

This week felt like a real inflection point.

We’re moving past the “clunky” era of AI, where you have to hold its hand constantly, and into an era where tools can proactively handle work for you. Less prompting. More execution.

The big shift: AI is starting to move from assistive to autonomous, and that changes how we should think about tools, prompting, and risk.

This week we're covering:

  • Moltbot 101: Breaking down the viral agent that’s closing the gap between “chatting” and “doing.”

  • High-Level Prompting: Why the best prompt usually isn’t a prompt at all: it’s an interview.

  • New AI Developments: Practical updates from OpenAI, Google, Anthropic, and Yahoo (yes, Yahoo!).

OK, let's get into it!

New and Noteworthy

  • Anthropic publishes Claude’s New Constitution: About a week ago, Anthropic released a foundational 35,000-token "Constitution" that explains the values and reasoning behind how Claude behaves. Unlike older models that followed rigid rules, this narrative document helps Claude navigate complex situations by balancing goals like honesty, safety, and helpfulness. This is a major push for transparency. It helps business owners understand why an AI makes certain decisions, turning a "black box" into a predictable, value-aligned teammate. Think of it as the difference between a new hire who memorized a 1-page FAQ and an experienced manager who read the entire company handbook and understands the "culture."

  • Yahoo launches Scout: Yahoo introduced Scout, an AI answer engine built on Anthropic’s Claude and Bing’s open web APIs. The key difference is that it blends conversational answers with traditional search results while prominently linking back to original sources. It’s clearly designed to be more publisher-friendly and less of a black hole than some AI search experiences.

  • OpenAI debuts Prism: OpenAI has previewed Prism, an AI-native workspace designed to integrate advanced reasoning models directly into technical writing. While it’s positioned toward science and research, the feature that got my attention is “Whiteboard-to-Doc.” You can snap a photo of a messy workflow or logic diagram, and Prism converts it into structured documentation or code. It’s a real bridge between brainstorming and execution.

  • Google upgrades Gemini 3 Flash: The latest version of Gemini Flash includes code-executed image analysis. In plain English, the model can write and run its own code to better analyze images. This improves speed and reliability when pulling useful data from screenshots, charts, or photos.

  • Microsoft unveils its Maia 200 Chip: Designed to power AI models like GPT-5.2 at scale, this custom chip features over 100 billion transistors. While Microsoft is building its own hardware to cut dependence on Nvidia, the industry still relies heavily on Nvidia’s best-in-breed GPUs for infrastructure upgrades. My takeaway: the big players are investing heavily to make AI faster, cheaper, and more available over time.

  • Ads are coming to ChatGPT: As mentioned last week, OpenAI is rolling out ads for free and lower-tier plans. Early reports suggest pricing around a $60 CPM, which is significantly higher than most social platforms. That tells you how valuable attention inside AI tools is becoming.

  • Claude acts more like a workflow hub: Anthropic is rolling out interactive, app-like experiences inside Claude. You can start working with tools like Slack, Figma, and Canva without leaving the chat. It’s early, but the direction is clear: AI is becoming a place where work actually happens, not just where questions get answered.

  • Moltbot (formerly Clawdbot) goes viral: This open-source agent can run 24/7 on a local machine or cloud server and be controlled via Telegram or WhatsApp. It can sort email, book restaurants, manage and edit files, and interact with websites like a human would. It’s impressive, and it’s also something you should approach thoughtfully. I’ll break this down more below.

  • Manus AI supports "Skills": Following a standard introduced by Anthropic, Manus AI now supports reusable “Skills.” These allow you to package specific workflows so the AI can execute them consistently without repetitive explanations. I did a deeper dive on Skills in my Circle community if you want to explore this more.
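Since a couple of items above mention Skills, here’s a rough sketch of what one looks like under Anthropic’s convention: a folder containing a SKILL.md file whose YAML frontmatter names the skill and tells the model when to use it. The skill name and steps below are invented for illustration; check your tool’s official docs for the exact fields it expects.

```markdown
---
name: weekly-report
description: Compile a weekly status report from meeting notes. Use when the
  user asks for a weekly summary or status update.
---

# Weekly Report

1. Ask the user for this week's meeting notes if none are attached.
2. Group items into Wins, Blockers, and Next Steps.
3. Keep the final report under 300 words and use plain language.
```

The point of the format is that the instructions travel with the skill, so you explain the workflow once instead of re-prompting it every time.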

The Golden Rule of High-Level Prompting

There’s an old saying from the early days of AI:

Give someone a prompt and they’ll use AI for a day. Teach someone to prompt and they’ll use it effectively for a lifetime.

Most people struggle with AI because they try to write the “perfect” prompt on the first try. Long. Detailed. Carefully worded. And then they’re surprised when the output still misses the mark.

That’s like giving instructions to a new hire without letting them ask a single question.

Here’s the move: don’t start with a prompt. Start with a goal.

Instead of guessing what details the AI needs, tell it what you’re trying to accomplish and have it interview you. You’ll get deeper, more personalized results in half the time.

Try this exact template:

“I have a specific goal: [insert your goal].

Before you start, act as an expert consultant.

Ask me 10 targeted questions, one by one, to gather all the context, data, and preferences you need.”

Once the interview is done, you can even say:

“Now reverse-engineer this entire conversation into one master prompt I can reuse.”

It’s one of the fastest ways I know to turn a short brainstorming session into a repeatable system.
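If you want to bake the Interview Method into your own tooling, the template is easy to wrap in a small helper you can reuse across projects. This is just a sketch of mine, not part of any official library; the function name and the question-count knob are my own additions.

```python
def build_interview_prompt(goal: str, num_questions: int = 10) -> str:
    """Assemble the Interview Method prompt from the template above.

    The goal is the only required input; the number of questions is a
    knob you can tune per task.
    """
    return (
        f"I have a specific goal: {goal}.\n\n"
        "Before you start, act as an expert consultant.\n\n"
        f"Ask me {num_questions} targeted questions, one by one, "
        "to gather all the context, data, and preferences you need."
    )


# Once the interview is done, send this follow-up to distill the thread
# into a single reusable prompt.
MASTER_PROMPT_REQUEST = (
    "Now reverse-engineer this entire conversation into one master "
    "prompt I can reuse."
)

print(build_interview_prompt("write a weekly client onboarding email"))
```

Paste the result into whatever chat tool you use; the helper just keeps the wording consistent so the interview starts the same way every time.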

Pro Tip: If you want to take it a step further, OpenAI and Anthropic have quietly released official Prompt Optimizers. These tools take your raw ideas and rewrite them into formats the models respond to more reliably, and, best of all, they explain why each change was made so you can learn along the way.

Think of these optimizers as a "translation layer" between how humans talk and how AI thinks. Here are the links to keep in your toolkit:

  • OpenAI Prompt Optimizer: Built directly into the developer platform, this tool helps you refine GPT-4 and GPT-5 prompts for maximum reliability.

  • Claude Prompt Improver: Anthropic’s official tool that turns a simple sentence into a high-quality prompt using their best-practice techniques.

Next time you’re building a repetitive workflow, use the Interview Method to get the content right, then run that result through a Prompt Optimizer to lock it in.

The "Pocket Assistant" Revolution, with Some Important Context: An Overview of Moltbot

Tools like n8n have given us a powerful way to connect systems and automate work once the process is clear. In 2026, we’re starting to see a new layer emerge on top of that: agent-style assistants that can take more initiative.

Moltbot sits squarely in that category.

Moltbot is an open-source, messaging-first AI agent that runs on your own computer or a cloud server. You interact with it through Telegram or WhatsApp, and instead of waiting for a prompt, it behaves more like a background teammate.

What makes it interesting isn’t that it replaces automation, but that it tries to sit on top of it.

This part matters.

Agents like Moltbot do not eliminate the need for solid processes, well-documented SOPs, or pure automation. In fact, they depend on them. Without clear workflows and expectations, an agent just moves faster in the wrong direction.

A simple way to think about it:

  • Pure automation handles known, repeatable steps reliably

  • Tools like n8n orchestrate systems once the process is well understood

  • Agents work best as a coordination or interface layer, not the foundation

Moltbot only becomes useful after the basics are in place.

A few capabilities that stand out:

  • Persistent context and memory: Moltbot can maintain long-lived memory, so it remembers preferences, patterns, and prior decisions. This is helpful once workflows are already defined.

  • Active browser and file execution: It can browse the web, fill out forms, and interact with files and portals like a human would. That opens the door to tasks that are hard to fully automate but still structured enough to follow rules.

  • Proactive operation: Moltbot can run on schedules or triggers, monitoring inboxes or forms and notifying you when something matters. Think of it as an early-warning or coordination layer, not a magic worker.

Where things get more powerful (and potentially more risky) is with Skills, which are reusable task definitions. Skills work well when they’re grounded in clear processes and constraints. Without that, you’re just giving an AI more room to guess.

This is why experienced operators pair agents with:

  • Well-documented SOPs

  • Deterministic automations

  • Clear handoffs between systems

Right now, Moltbot is best thought of as a playground for power users, not a replacement for foundational automation. If you’re already using n8n and have solid workflows in place, it may eventually become a helpful interface or notification layer. Personally, I’d give it a few weeks, let the dust settle, and watch which real, high-value use cases emerge.

One practical note: many people are running Moltbot on a separate machine to reduce risk. You don’t need a fancy setup. A $5 VPS works fine and keeps it isolated from your main computer, which is the safer move. Here’s a simple VPS setup guide if you want to explore that path or see what’s involved.

A few honest caveats:

  • This is early and experimental

  • It’s a bit technical to set up

  • Autonomous agents amplify both good and bad processes

The takeaway isn’t “everyone needs an agent right now.”

It’s that we’re starting to see what the next layer might look like once automation and process maturity are already in place.

That’s still where the real leverage lives.

I'm curious, are you ready to let an AI agent work autonomously on your computer, or does the idea of "full system access" still feel a bit too much like sci-fi?

Hit reply and let me know if you’ve tried any of these new "Skills" yet or if you’re experimenting with Moltbot. I read every response and love hearing what you all are building!

See you next week,

Julien

PS: If you know someone who would value the information I share each week, please forward this to them, or send them a link to subscribe at www.ampra.ai/join-our-newsletter

Keep Reading