Google is rolling out a Gemini update that lets the app generate custom images using Google Photos and users’ personal preferences, a move aimed at making image creation more personal without requiring manual uploads or lengthy prompts. The feature is rolling out to U.S. subscribers over the next few days.
Users will be able to ask for scenes featuring themselves or loved ones, and Gemini can pull relevant context from connected accounts to shape the result. Google said people can also connect Google Photos so that actual images of themselves and family members guide generation, and users retain the option to refine results or swap reference photos.
The company introduced the new tools today alongside Nano Banana 2 and Google Photos integration, building on what it describes as one of Gemini’s most popular uses: image generation. The update is tied to Personal Intelligence and connected Google apps, part of a broader push to make the assistant work with information people already keep inside Google’s ecosystem.
Google also said users can organize and label groups of people and pets in Google Photos, and those labels give Gemini more context when creating personal images. The company said it does not train models on private photo libraries, a detail likely to matter as users decide how much of their camera roll to connect.
The rollout lands first for U.S. subscribers to Google AI Plus, Pro, or Ultra, leaving access uneven for now. The practical question is whether enough people will hand Gemini access to their personal photo libraries to make the feature feel indispensable rather than merely novel.