
Google’s New Gemini and NotebookLM Updates Are Wild: What Changed and How to Use Them

Google just rolled out a serious batch of Gemini and NotebookLM updates, and some of these changes are much bigger than they first appear. The headline features are easy to spot: NotebookLM is now accessible inside Gemini, Gemini has more built-in automation plus a new Agent Mode, and Google Stitch has become a surprisingly powerful way to design and prototype apps for free.

But the real story is what these tools now let you do together.

You can move from research to automation, from documents to workflows, and from app ideas to interactive prototypes without hopping between a bunch of disconnected platforms. If you use Google Workspace, build with AI, or just want to automate more of your day, these upgrades are worth paying attention to.

This guide breaks down the most important new features, what they actually do, and where they’re most useful.

NotebookLM inside Gemini changes the workflow completely

One of the most useful upgrades is that NotebookLM can now be accessed directly from inside Gemini.

That sounds simple, but it changes the experience in a big way. Instead of treating NotebookLM as a separate destination, you can now pull your notebooks into Gemini and work with them in a more flexible, tool-rich environment.

What you can do from inside Gemini

When you open Gemini, you can now access your notebooks directly. From there, you can:

  • Open existing notebooks
  • View the sources already attached
  • Add more files
  • Pull in content from Google Drive
  • Add websites
  • Paste text manually
  • Rename notebooks
  • Pin notebooks for quick access

That alone makes Gemini feel much more like a central workspace instead of just a chatbot.

It gets even more interesting when you combine a notebook with Gemini’s built-in tools. From within that notebook context, you can use Gemini for things like:

  • Image generation
  • Canvas or website creation
  • Research workflows
  • Video creation
  • Music generation
  • Guided learning
  • Deep Think
  • Code import
  • Photo uploads
  • Drive actions

In practical terms, that means your research materials are no longer stuck in a static notebook. You can now use them as the source layer for a broader set of AI actions.

Notebook settings now matter more

Inside Gemini, notebook settings also become more important. You can enable notebook memory and customize how Gemini responds inside that notebook.

That gives you more control over tone, context, and behavior. If you use notebooks for different purposes, like client research, content creation, or technical notes, this can help Gemini act more appropriately in each one.

What still works better in NotebookLM itself

This integration is powerful, but it does not mean NotebookLM has disappeared as a standalone tool.

There are still some features that are more native to NotebookLM, including things like:

  • Data tables
  • Interactive quizzes
  • Flashcards
  • Infographics
  • Slide decks
  • Mind maps
  • Audio overviews that act like podcast-style summaries
  • Video overviews

So the best way to think about it is this: Gemini now gives you access to NotebookLM-style knowledge work in a more action-oriented environment, while NotebookLM still keeps some of its own specialized outputs and interactive study features.

Gemini’s new “Do tasks for me” feature is the beginning of practical AI agents

Another major update is the new Do tasks for me option in Gemini.

This turns on Agent Mode, and it points to where Google is clearly headed: less prompting, more delegation.

How Agent Mode works

Gemini now offers at least two operating modes for task execution:

  • Fast, for quicker results
  • Pro, for maximum capability on harder tasks

The idea is straightforward. Instead of asking Gemini a one-off question, you give it a goal, and it breaks the job into steps.

For example, a task like "find electrician jobs in Port St. Lucie, Florida" gets turned into a mini workflow. Gemini can:

  1. Define the scope of the task
  2. Research relevant job listings
  3. Compile the best options
  4. Return structured recommendations

What makes this compelling is the visible process. You can see the task box, the steps being taken, and the system moving through them in real time.
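Conceptually, that goal-to-steps loop looks something like the sketch below. This is an illustrative toy, not Google's actual implementation; `plan()` and `run_step()` are invented stand-ins for the model calls Gemini makes behind the scenes.

```python
# Hypothetical sketch of an agent-style task loop. The function names and
# the fixed plan are invented for illustration only.

def plan(goal: str) -> list[str]:
    """Break a goal into ordered steps (here, a fixed example plan)."""
    return [
        f"Define the scope of: {goal}",
        "Research relevant listings",
        "Compile the best options",
        "Return structured recommendations",
    ]

def run_step(step: str) -> str:
    """Execute one step and report its result (stubbed for illustration)."""
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    """Run every planned step in order, keeping a visible trail of results."""
    results = []
    for step in plan(goal):  # this loop is the visible, step-by-step process
        results.append(run_step(step))
    return results

for line in run_agent("electrician jobs in Port St. Lucie, Florida"):
    print(line)
```

The point of the sketch is the shape, not the details: one goal in, a plan out, and each step executed and surfaced so you can watch the work happen.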

What kinds of tasks Gemini can now handle

Google is surfacing examples like:

  • Booking a car rental
  • Booking activities and experiences
  • Researching job listings live
  • Booking local services

And the job-search example makes the use case very clear. Gemini can collect:

  • Who the role is for
  • Location
  • Role details
  • Pay information

That is already useful. But the real leap comes when you combine research with action.

If you upload a resume and ask Gemini to respond to relevant jobs, it can potentially continue the process instead of stopping at the research stage. That is a big shift from “AI that helps” to “AI that gets things done.”

You can trigger this from Chrome too

If you are running an up-to-date version of Google Chrome, Gemini can also be accessed directly from the browser interface. That means this kind of task execution is not limited to the main Gemini app.

There’s also a live mode where Gemini can see what is on your screen. Combined with task automation, that creates two very different automation paths:

  • Task-based agent automation inside Gemini
  • Live screen-aware assistance through Gemini

That combination is especially interesting for research, admin work, and step-by-step assistance while working across tabs and documents.

Google added one-click automations across Drive, Gmail, and Workspace

This is one of those updates that could easily go unnoticed, but it might be one of the most practical.

Google is now sprinkling automation setup options throughout Gemini-connected Workspace tools.

Automations in Google Drive

Inside Google Drive, files can now surface a Set up automation option.

Once selected, you can trigger one-click automations such as:

  • Get notified when a file is added
  • Automatically create tasks when files are added

The key point here is speed. You no longer need to build everything from scratch just to set up simple process automation.

Google is moving toward a model where common automations are suggested in context. If you are already working in Drive, the system can present relevant actions right there.

Workspace Studio is becoming an automation hub

Another new entry point appears in the top right corner of Google Drive and Gmail: Studio.

When you open Studio and choose Do More in Studio, you enter Workspace Studio, where Gemini helps automate work across Google apps.

This is where things start to feel more like no-code workflow building.

What you can automate in Workspace Studio

Inside the Discover section, Google surfaces templates and categories including:

  • Emails
  • Meetings
  • Tasks and action items
  • Customer connections
  • Everyday essentials

One example is speeding up replies to incoming emails.

Here’s how that kind of flow can work:

  1. An email arrives
  2. The automation extracts the questions from the email
  3. You provide a source document, like an FAQ or support doc
  4. Gemini drafts a reply using that reference material

That is a strong use case for teams dealing with repeat questions or structured support requests.
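The email-reply flow above maps to a simple extract-then-answer pipeline. Here is a minimal sketch of that idea; the `FAQ` source, the regex-based question extraction, and the function names are all invented for illustration and have nothing to do with Workspace Studio's internals.

```python
import re

# Hypothetical FAQ source document, keyed by topic keyword.
FAQ = {
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship within 48 hours.",
}

def extract_questions(email_body: str) -> list[str]:
    """Grab each sentence that ends with a question mark."""
    return re.findall(r"[^.?!]*\?", email_body)

def draft_reply(email_body: str) -> str:
    """Answer each extracted question from the FAQ reference material."""
    lines = []
    for q in extract_questions(email_body):
        topic = next((t for t in FAQ if t in q.lower()), None)
        lines.append(FAQ[topic] if topic else f"(needs a human answer: {q.strip()})")
    return "\n".join(lines)

print(draft_reply("When will my refund arrive? Also, what about shipping?"))
```

In the real product, Gemini's drafting replaces the keyword lookup, but the flow is the same: incoming email in, questions out, reference material consulted, reply drafted.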

You can also build custom flows from scratch

If templates are too limiting, New Flow lets you define your own trigger and logic.

You can trigger automations when:

  • An email arrives
  • A chat comes in
  • A Google Sheet changes
  • A scheduled time is reached

And then you can tell Gemini what to do next. Available actions include things like:

  • Ask Gemini to decide something
  • Summarize content
  • Extract information
  • Recap unread emails
  • Add filters
  • Use if-then logic
  • Draft replies
  • Draft new emails
  • Send notifications
  • Mark emails in red
  • Send chat notifications
  • Manipulate Sheets, Drive, Docs, or Tasks

That is a pretty broad automation canvas, especially for anyone already living inside Google Workspace.
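A custom flow boils down to three parts: a trigger, optional if-then logic, and an action. The sketch below models that structure as a toy; every name in it is hypothetical and it is not Workspace Studio's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Flow:
    """Toy trigger -> condition -> action flow (illustrative only)."""
    trigger: str                          # e.g. "email_arrives", "sheet_changes"
    condition: Callable[[dict], bool]     # the if-then logic
    action: Callable[[dict], str]         # what happens next
    log: list = field(default_factory=list)

    def handle(self, event: dict) -> Optional[str]:
        """Run the action only if the event matches the trigger and condition."""
        if event.get("type") != self.trigger or not self.condition(event):
            return None
        result = self.action(event)
        self.log.append(result)
        return result

# "When an email arrives from a client, draft a reply and notify me."
flow = Flow(
    trigger="email_arrives",
    condition=lambda e: "client" in e.get("labels", []),
    action=lambda e: f"drafted reply to {e['sender']}; notification sent",
)

print(flow.handle({"type": "email_arrives", "labels": ["client"],
                   "sender": "ana@example.com"}))
```

Swap the lambdas for "summarize", "extract", or "send to chat" and you have the same building blocks the Studio canvas exposes visually.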

You can describe the workflow in plain English

One of the best parts is that you do not always have to manually assemble the workflow yourself.

If the interface feels too technical, you can just describe what you want. A prompt like "catch me up on yesterday" can generate the automation for you, and then you can fine-tune the timing and conditions.

This is exactly the kind of feature that makes AI automation accessible to more people. Instead of understanding every trigger and action in advance, you can start with intent and refine from there.

Google Stitch just got a massive upgrade for AI design and prototyping

Then there’s Google Stitch, which may be the most surprising update of the bunch.

If you have not paid attention to Stitch yet, now is probably the time.

Google is turning it into a free AI-assisted product design environment where you can go from idea to redesign to prototype with very little friction.

What’s new in Stitch

Stitch now includes features like:

  • Idea-to-solution generation
  • App redesign assistance
  • URL-based redesign input
  • Image-based design input
  • Variation generation
  • Agent logs to show the design process
  • Live collaboration through screen sharing

One especially useful option is redesigning an app by simply uploading either screenshots or the website URL. That lowers the barrier dramatically for redesign work.

Turning an idea into a UI concept

A great example is prompting Stitch to build something like a CRM for real estate agents with a modern, fairly complex interface.

From there, Stitch can generate a plan, expose an agent log showing what it is doing, and present a draft design that you can either approve or adjust.

This matters because it is not just generating a random UI mockup. It is behaving more like a design assistant that proposes structure before continuing.

Live design feedback is where things get really interesting

Stitch can also go live and react to screen-shared feedback.

So if you are looking at a design and say you hate a certain section, Stitch can see what you are referring to and help redesign it. That makes the workflow feel less like static prompting and more like talking to a designer while pointing at the screen.

That is a genuinely different kind of interface. Not just “generate me a screen,” but “work with me on the screen.”

Built-in editing and design system controls

On the right side, Stitch includes editing tools that let you:

  • Select elements
  • Mark areas
  • Make direct edits
  • Pan around the canvas
  • Upload files to the canvas
  • Create a design system

You can also customize design system variables such as:

  • Fonts
  • Primary colors
  • Secondary colors
  • Tertiary colors
  • Neutral colors

That is important because it means Stitch is not only about generation. It also supports consistency and refinement, which is what real product design work actually needs.
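Those variables are essentially design tokens: a small set of named values that every screen draws from. As a rough illustration (the token names, values, and CSS output here are all made up, not what Stitch emits), a token set could be rendered into CSS custom properties like this:

```python
# Hypothetical design-token sketch: the kinds of variables a design
# system exposes (font, primary/secondary/tertiary/neutral colors),
# rendered as CSS custom properties. All names and values are invented.

TOKENS = {
    "font-family": "Inter, sans-serif",
    "color-primary": "#1a73e8",
    "color-secondary": "#188038",
    "color-tertiary": "#f9ab00",
    "color-neutral": "#5f6368",
}

def to_css_variables(tokens: dict) -> str:
    """Emit a :root block defining one CSS variable per token."""
    lines = [f"  --{name}: {value};" for name, value in tokens.items()]
    return ":root {\n" + "\n".join(lines) + "\n}"

print(to_css_variables(TOKENS))
```

Because every screen references the same variables, changing one token restyles the whole design consistently, which is exactly why these controls matter for real product work.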

It can generate prototypes and export to other tools

Once a design is generated, you can do more than just stare at it.

Stitch can:

  • Generate an instant prototype
  • Create variations
  • Make a mobile app version
  • Generate a predictive heat map
  • Open previews in a new tab
  • Show a QR code for quick previewing
  • Preview on mobile, tablet, or desktop layouts

Export options also include:

  • Copy to Figma
  • Export as Figma
  • Export as a zip file
  • Copy code to clipboard
  • Export a project brief
  • Send to AI Studio

And the wild part is the prototype generation. Stitch can actually code the prototype for you so you can interact with it, test flows, and imagine additional screens.

At that point, it stops feeling like a mockup generator and starts feeling like a lightweight product builder.

Why these updates matter together

Each update is useful on its own, but the bigger story is how Google is connecting them.

You can now imagine a workflow like this:

  1. Research a topic in NotebookLM
  2. Pull that notebook into Gemini
  3. Use Gemini tools to generate assets, content, or plans
  4. Set up automations in Workspace Studio to act on incoming documents or emails
  5. Use Agent Mode to execute multi-step tasks
  6. Prototype a related product idea in Google Stitch

That is a much more connected Google AI ecosystem than what existed before.

And it shows Google’s direction pretty clearly:

  • Research is becoming actionable
  • Documents are becoming workflows
  • Prompts are becoming agents
  • Ideas are becoming prototypes faster

Best use cases for these new Gemini and NotebookLM updates

For researchers and knowledge workers

  • Organize source material in NotebookLM
  • Access it from Gemini for broader creation and analysis
  • Generate learning aids like flashcards, quizzes, and overviews when needed

For job seekers

  • Use Agent Mode to find relevant openings
  • Upload a resume
  • Potentially automate application support tasks

For support teams and operations

  • Use Workspace Studio to draft email responses
  • Reference FAQ or documentation files
  • Route, summarize, and flag incoming communication

For founders, designers, and builders

  • Use Stitch to redesign an app from screenshots or a URL
  • Generate UI concepts from plain English prompts
  • Turn those concepts into interactive prototypes
  • Export the results into design or development workflows

Practical limitations to keep in mind

Even with all these upgrades, it is worth being realistic.

There are still some NotebookLM-specific features that remain stronger in NotebookLM itself. Not every Gemini interaction will be perfect. And automations that draft replies or perform actions should still be reviewed before being trusted for high-stakes work.

But none of that changes the core point: Google’s AI products are becoming much more useful because they are becoming much more connected.

FAQ

Can you use NotebookLM directly inside Gemini now?

Yes. Gemini now lets you access notebooks directly, view and add sources, rename and pin notebooks, and use Gemini tools in the context of notebook content.

Is everything from NotebookLM available inside Gemini?

Not completely. Some NotebookLM features still appear to be more specific to NotebookLM itself, including things like data tables, mind maps, audio overviews, video overviews, and certain interactive study outputs.

What is Gemini’s “Do tasks for me” feature?

It is a new entry point for Agent Mode in Gemini. You give Gemini a goal, and it breaks the task into steps, researches information, and compiles results. It is designed to help complete multi-step tasks rather than just answer single prompts.

Where can Gemini automations be created now?

Automations can now be triggered from several places, including Google Drive, Gmail, Chrome, and Workspace Studio. Google is also adding context-aware one-click automation suggestions in some places.

What can Workspace Studio automate?

Workspace Studio can automate actions related to emails, meetings, tasks, sheets, chats, Drive files, Docs, and more. It supports triggers, filters, if-then logic, Gemini summarization and extraction, notifications, and drafted responses.

What is Google Stitch used for?

Google Stitch is an AI-assisted design and prototyping tool. It can help generate app or website designs, redesign existing interfaces from screenshots or URLs, create design systems, generate variations, and turn concepts into interactive prototypes.

Can Google Stitch export to Figma or code?

Yes. Stitch includes options to copy designs to Figma, export files, copy code, generate prototypes, and preview designs across different device layouts.

Final thoughts

These updates are not just random feature drops. They point to a much bigger shift in how Google wants AI to fit into everyday work.

Gemini is becoming more agentic. NotebookLM is becoming more integrated. Workspace is becoming more automatable. And Stitch is becoming a legitimate entry point for AI-powered product design.

If you use Google’s tools regularly, now is a good time to explore what changed. A lot of the value is not in any single feature, but in how quickly you can move between research, execution, and creation.

If you want to get more out of these tools, try building one small real workflow first. Pull a notebook into Gemini. Set up one email automation. Prototype one app idea in Stitch. That is usually where the “this is cool” moment turns into “okay, this is actually useful.”

