Illustration of an AI workflow combining a glowing notebook, a mind map concept, an AI studio editing workspace, and document creation—no text present.

NotebookLM and Google Gemini’s New Features Are Crazy: Mind Map Customization, AI Studio Upgrades, and Smarter Document Creation

Google just rolled out a batch of updates across NotebookLM, Gemini, and AI Studio, and honestly, some of these changes are much bigger than they look at first glance. What used to be separate little AI conveniences are starting to turn into full workflows. That is the real story here.

The biggest shift is not just that there are new buttons to click. It is that these tools are becoming more customizable, more connected, and way more useful for real work. Whether you are researching, building marketing assets, generating websites, or drafting polished documents, Google’s latest updates make the process faster and far more hands-off.

Here are the upgrades that matter most, what they actually do, and the new use cases they unlock.

1. NotebookLM finally adds customization to Mind Maps

One of the most useful additions is inside NotebookLM: Mind Maps now support customization.

If you have used NotebookLM before, you probably know that several outputs already allowed some control over what the AI generated. Audio overviews, slide decks, video overviews, flashcards, infographics, quizzes, and data table reports all had some form of guided prompting or customization. Mind Maps were the odd one out.

That has now changed.

Instead of clicking “Mind Map” and getting whatever generic structure the system decides to produce, you can now guide the output with a more specific instruction. That sounds small, but it completely changes the usefulness of the feature.

What this means in practice

You can now tell NotebookLM things like:

  • Restrict the mind map to a specific source
  • Focus only on one key concept
  • Organize the map around a goal or question
  • Build the map in a specific framing, such as timeline, causes, or consequences

A strong example is asking it to create a mind map that explains the timeline of the Great Depression, what led to it, and what the consequences were. Instead of a flat, broad, generic concept map, NotebookLM generates something much more structured and useful.

That matters because research tools are only as valuable as their ability to reduce complexity. A default mind map often gives you a bird’s-eye view, but a customized one can help you actually learn, teach, or present the material.

Why the new version is better than the old one

The basic version of the feature produced mind maps that were often too simple. They gave you a rough outline, but not necessarily the most meaningful one.

With customization turned on, the generated mind map becomes:

  • More developed, because it follows a clearer objective
  • More relevant, because it filters information based on your prompt
  • Easier to understand, because the organization is intentional
  • More practical, because it can summarize large source collections into just the important parts

In the example Rob demonstrates, the mind map pulls from 70 different sources and turns that into a concise structure focused on causes, chronology, and consequences. That is exactly the kind of thing AI should be doing: compressing a large pile of information into a format that helps you think.

One limitation to know before exporting

There is one practical detail worth remembering. If you download a NotebookLM mind map as an image, the export only includes the sections that are currently expanded on screen.

So if you plan to use the map in a presentation, report, or handout, make sure you open the branches you want captured first. Otherwise, you may need to export multiple images.

Best use cases for customized mind maps

  • Studying complicated historical or scientific topics
  • Summarizing large source sets
  • Preparing lessons or workshops
  • Structuring research before writing
  • Creating visuals for slides or educational material

Suggested image: Screenshot of a NotebookLM customized mind map with branches expanded.
Suggested alt text: NotebookLM customized mind map showing timeline, causes, and consequences of a topic.

2. Google’s new catalog feature in PMax-style creative tools is a massive marketing shortcut

Another update Rob highlights is a new catalog feature available through Google Labs, tied to a business DNA workflow. This one is especially interesting for marketers, ecommerce brands, and software companies.

The idea is simple: you upload or connect your business information, and Google generates a product catalog directly from your website so it can be used inside campaigns.

That sounds useful on its own, but what makes it powerful is what happens next.

How the business DNA setup works

When you connect a website, the system analyzes the brand and starts extracting core elements like:

  • Logo
  • Fonts
  • Brand colors
  • Tagline
  • Brand values
  • Visual aesthetic
  • Tone of voice
  • Business overview
  • Images and product details from the site

It is basically trying to understand the full identity of the business before generating assets from it.

Rob tests this with both a software business and an ecommerce store, and in both cases the output becomes a foundation for rapid campaign creation.

What the catalog can do

Once the catalog is built, individual products or offers can be turned into campaign assets. For a software company, that might mean entries for tools like a keyword finder, hook generator, caption generator, hashtag generator, or scriptwriter. For an ecommerce company, it can map actual physical products or product lines.

From there, the system can generate:

  • Campaign concepts
  • Social media creatives
  • Visual marketing assets
  • Different aspect ratio variations
  • Brand-consistent promotional content

And because the system pulls directly from the source website, the assets stay far closer to the brand than generic AI image generation usually manages.

Why this is such a big deal for marketers

This is one of those features that sounds niche until you realize how much time it can save.

Normally, creating campaign assets means bouncing between:

  • A website or product page
  • A design tool
  • A copywriting tool
  • An asset resizer
  • A planner or campaign manager

This update starts collapsing those steps into a single flow. You feed in your business DNA, let the system understand the brand, and then create campaign assets almost immediately.

That is not just a convenience feature. It changes the speed at which small teams can launch creative tests.

Best use cases for the catalog feature

  • Ecommerce product launches
  • SaaS promotion campaigns
  • Fast social creative production
  • Brand-consistent ad concept generation
  • Rapid testing of multiple product angles

Suggested image: Workflow graphic showing website to business DNA to catalog to campaign assets.
Suggested alt text: Google AI catalog creation workflow for turning website content into campaign assets.

3. AI Studio is becoming a one-stop shop for building apps and websites

The next upgrade is inside AI Studio, and it is one of the clearest examples of Google merging separate AI capabilities into one working environment.

Here is the scenario: you prompt AI Studio to create a website for a pizza restaurant in Bozeman, Montana. Instead of only generating a rough concept, it now goes much further. It can build the website, let you choose among design variations, and then edit the result directly inside the tool.

The big change: in-preview editing

The major update is that you can now customize apps by drawing directly on the preview and fine-tuning elements without leaving AI Studio.

That includes editing:

  • UI elements
  • UX components
  • Images and visual assets
  • Menus and layouts
  • Forms, search, and social links

Rob also points out that Nano Banana is now inside AI Studio, which matters because it means image and design-related changes can happen in the same place as the app or site generation. Previously, you might have needed a separate design tool or another workflow to get that done.

Now the process is more direct:

  1. Prompt the site or app idea
  2. Choose a generated design
  3. Customize the layout and features
  4. Add images or assets
  5. Adjust menus, forms, and functionality

Why this feels important

The feature itself is cool. The bigger story is what it signals.

Google is clearly trying to make AI Studio a central workspace where generation, editing, design, and enhancement all happen together. That is a big shift from the fragmented AI experience most people are used to.

Instead of asking an AI model for code here, an image tool for visuals there, and a website builder somewhere else, AI Studio is moving toward a single environment where the pieces work together.

For people prototyping ideas, that is huge.

What you can add inside AI Studio now

  • Menu sections
  • Images
  • AI features
  • Order forms
  • Search functionality
  • Social links

And the speed is part of what makes this feel wild. Rob shows a complete restaurant website generated in roughly a minute or two, with editable structure and visual controls immediately available afterward.

Best use cases for AI Studio’s new editing tools

  • Rapid website prototyping
  • Mocking up landing pages
  • Testing app interface ideas
  • Creating proof-of-concept projects
  • Building functional drafts without switching tools

Suggested image: Screenshot of AI Studio website editor with menu and customization options open.
Suggested alt text: Google AI Studio editing a generated restaurant website with direct customization tools.

4. Gemini can now create fully formatted files, not just text

This may be the most practical upgrade of the bunch.

Gemini can now create outputs in different file formats, including:

  • Google Docs
  • Slides
  • PDFs
  • LaTeX files
  • Other structured formats based on the task

This is a major leap beyond simple chat responses. Instead of generating text that you then have to copy into another tool and format manually, Gemini can now create the actual document in the format you need.
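To make "LaTeX files" concrete, here is the kind of minimal structured file this feature targets. This is an illustrative sketch written for this article, not actual Gemini output:

```latex
\documentclass{article}
\title{How Short-Form Video Algorithms Work}
\author{Drafted with Gemini}
\date{}

\begin{document}
\maketitle

\section{Interest-Based Distribution}
Each platform estimates how well a video matches a viewer's
interests and distributes it accordingly.

\section{What Creators Can Control}
Hooks, watch time, and rewatches are the signals most worth
optimizing first.
\end{document}
```

The point is not the LaTeX itself. It is that Gemini hands you a compilable document rather than a wall of text you still have to structure yourself.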

A practical example

Rob uses Gemini to create a Google Doc for a friend explaining how the TikTok, Instagram, Facebook, and YouTube Shorts algorithms work, specifically around interest-based distribution.

Gemini does more than write the explanation. It:

  • Creates the document
  • Formats the content properly
  • Adjusts text styling
  • Adds visual structure
  • Even inserts a chart

That is the difference between “AI wrote a paragraph” and “AI completed a deliverable.”

Editing inside the created document

It gets better. Once the document exists, you can ask Gemini to revise or extend it while matching the formatting already in place.

For example, you can prompt it to add another section explaining how to identify a good hook and tell it to keep the same formatting as the rest of the document. Gemini then updates the file accordingly.

You can accept or reject the changes, which gives the process a useful editorial layer instead of blindly overwriting your work.

Why this unlocks bigger workflows

The real power here is not just formatting. It is contextual document generation.

Gemini can work with information you give it access to, including:

  • Email
  • Google Drive files
  • Existing documents
  • Notes
  • Meeting notes
  • Uploaded files such as CSVs

That means you can combine multiple information sources, ask Gemini to research or synthesize them, and then have it produce something usable right away, like a report, slide deck, document, or structured file.

This dramatically reduces the friction between “I have information” and “I have a finished asset.”
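If you have ever scripted that last step by hand, you know the shape of the work Gemini is absorbing here. As a rough illustration (standard library only, with made-up sample data and a hypothetical report title), this is the kind of CSV-to-formatted-report glue code it replaces:

```python
import csv
import io

def csv_to_markdown_report(csv_text: str, title: str) -> str:
    """Turn raw CSV text into a small formatted Markdown report."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    lines = [f"# {title}", ""]
    # Markdown table: header row, separator row, then one row per record.
    lines.append("| " + " | ".join(header) + " |")
    lines.append("| " + " | ".join("---" for _ in header) + " |")
    for row in data:
        lines.append("| " + " | ".join(row) + " |")
    lines.append("")
    lines.append(f"*{len(data)} rows summarized from the uploaded file.*")
    return "\n".join(lines)

# Hypothetical sample data standing in for an uploaded CSV.
sample = "platform,avg_watch_time\nTikTok,34s\nYouTube Shorts,41s"
print(csv_to_markdown_report(sample, "Watch Time Snapshot"))
```

That is maybe twenty lines for one narrow case. Gemini's version of this handles the formatting, styling, and charting across formats without you writing any of it.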

Best use cases for Gemini’s file creation tools

  • Creating client-ready documents
  • Drafting presentations from notes
  • Turning research into formatted reports
  • Building explainers or educational docs
  • Creating structured outputs from CSV or uploaded data

What all of these Google AI updates point to

Each feature is useful on its own, but together they tell a much bigger story.

Google is moving away from isolated AI tricks and toward integrated production workflows.

You can see the pattern clearly:

  • NotebookLM is becoming better at turning source-heavy research into usable study and thinking tools
  • Business DNA and catalog generation are becoming marketing pipelines, not just analysis features
  • AI Studio is becoming a unified place to build and edit apps, websites, and visuals
  • Gemini is becoming a deliverable engine, not just a chatbot

That is why these updates feel bigger than a normal feature release. They are reducing handoff work. And handoff work is where a lot of productivity gets lost.

If you spend your time researching, writing, designing, or building campaigns, the biggest win is not that AI can generate more stuff. It is that you need fewer tools, fewer exports, fewer copy-paste steps, and fewer manual formatting passes to get to a final result.

Recommended additions for publishing this article

To improve engagement and search performance, consider adding:

  • A comparison image showing old vs new NotebookLM mind maps
  • A short embedded demo of AI Studio’s website generation workflow
  • An infographic summarizing the four major Google AI updates

For readers who want more hands-on AI workflow ideas, it also makes sense to link to related resources on prompt engineering, AI design tools, or content automation. For external references, linking to official Google product pages such as Google Labs or Gemini can help support trust and context.

You may also want to internally link to related posts on AI productivity, AI content tools, or design automation if those exist on your site.

FAQ

What is the new NotebookLM mind map feature?

NotebookLM now lets you customize mind maps with prompts. Instead of generating a generic map, you can tell it to focus on a specific source, concept, timeline, or goal.

Can NotebookLM export full mind maps as images?

It can export mind maps as images, but only the parts currently expanded on screen will appear in the downloaded file. If you need more detail, you may have to export multiple versions.

What does Google’s catalog feature do?

It uses your website and business DNA to create a product catalog that can be used for campaigns. It can pull branding elements, product information, and visuals to help generate marketing assets quickly.

What changed in AI Studio?

AI Studio now allows more direct editing of generated apps and websites. You can customize UI, UX, menus, images, forms, and other elements inside the same workspace instead of relying on external tools.

Can Gemini create actual documents and slides now?

Yes. Gemini can create content in formats like Google Docs, Google Slides, PDFs, and LaTeX files. It can also format those outputs and update them while keeping the same style and structure.

Why are these Google AI updates important?

Because they reduce friction between idea, creation, and delivery. Instead of generating raw material that still needs lots of cleanup, these tools are starting to produce usable, polished outputs much faster.

Final thought

The most exciting part of these updates is not any single feature. It is the direction. Google’s AI tools are becoming more practical, more connected, and much closer to handling complete workflows from research to final asset.

If you are using AI for learning, content creation, marketing, or rapid prototyping, these changes are worth testing immediately. The difference between a generic AI output and a customizable, context-aware, formatted deliverable is massive.

If you want to keep building smarter with tools like NotebookLM, Gemini, and AI Studio, explore more AI workflow content, share this article with someone experimenting with Google AI, and test at least one of these features in a real project this week. That is where the value shows up fast.
