
# Prompting for Image Generation

## Generate vs Edit Decision

Use **generate** when: creating from scratch, no reference image exists, user describes something new.

Use **edit** when: user provides an image, wants modifications to existing content, says "change this" or "make it" about something they've shown.
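
As a rough sketch of how an agent might apply this decision, the helper below keys off a reference image and a few wording cues; `choose_mode` and its keyword list are hypothetical, not part of any tool's interface.

```python
def choose_mode(user_message: str, has_reference_image: bool) -> str:
    """Hypothetical helper mirroring the decision above (illustration only)."""
    edit_cues = ("change this", "make it", "remove", "replace")
    asks_for_change = any(cue in user_message.lower() for cue in edit_cues)
    if has_reference_image or asks_for_change:
        # The user supplied an image or is asking to modify existing content.
        return "edit"
    # Creating from scratch: no reference image, a new description.
    return "generate"
```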

## Prompt Structure

Good prompts layer specificity: subject, style, lighting, composition, details.

Weak: "a house"

Strong: "Victorian mansion at dusk, warm light in windows, autumn leaves in foreground, cinematic composition, photorealistic"

The model handles aspect ratio and resolution automatically based on the prompt content.
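
The layering can be thought of as simple string assembly. The sketch below is illustrative only; the parameter names are not a required schema, and the join order is chosen to reproduce the strong example above.

```python
def build_prompt(subject: str, lighting: str = "", details: str = "",
                 composition: str = "", style: str = "") -> str:
    """Join whichever specificity layers are provided into one prompt string."""
    layers = [subject, lighting, details, composition, style]
    return ", ".join(layer for layer in layers if layer)

# Weak: subject only.
print(build_prompt("a house"))

# Strong: subject plus lighting, detail, composition, and style layers.
print(build_prompt(
    subject="Victorian mansion at dusk",
    lighting="warm light in windows",
    details="autumn leaves in foreground",
    composition="cinematic composition",
    style="photorealistic",
))
```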

## Edit Prompt Patterns

Preserve context by describing only what should change:

- "Add sunglasses to the person"
- "Change the background to a beach"
- "Make it look like a watercolor painting"
- "Remove the text from the image"

Avoid re-describing the entire image unless you want a full reimagining. The model understands context from the source image.
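
The same idea as a code sketch: a targeted edit request carries only the instruction and the source image. The `edit_image` stub below is hypothetical and stands in for whatever edit tool is actually available.

```python
def edit_image(source_path: str, instruction: str) -> None:
    """Hypothetical stub standing in for the real edit tool."""
    print(f"editing {source_path}: {instruction}")

# Good: describe only what should change; the source image supplies the rest.
edit_image("portrait.png", "Add sunglasses to the person")

# Avoid: re-describing the whole scene invites a full reimagining.
edit_image("portrait.png",
           "A smiling man in a park, blue jacket, sunglasses, trees, sunny day")
```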

## Refining User Requests

When a user gives a vague prompt like "a sunset", help them add specificity:

- What kind of sunset? (beach, mountain, city skyline)
- What style? (photorealistic, oil painting, anime)
- What mood? (peaceful, dramatic, romantic)
- Any specific elements? (silhouettes, reflections, clouds)

Ask clarifying questions or offer options rather than generating with the vague prompt.
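
Purely as an illustration, the sketch below turns that checklist into follow-up questions for a short prompt; the helper name and the word-count heuristic are assumptions.

```python
def clarifying_questions(prompt: str) -> list[str]:
    """Hypothetical helper: follow-up questions for an under-specified prompt.

    The word-count threshold is an assumed heuristic, not a fixed rule.
    """
    if len(prompt.split()) > 4:
        return []  # already fairly specific; generate directly
    return [
        f"What setting for {prompt!r}? (beach, mountain, city skyline)",
        "What style? (photorealistic, oil painting, anime)",
        "What mood? (peaceful, dramatic, romantic)",
        "Any specific elements? (silhouettes, reflections, clouds)",
    ]

print(clarifying_questions("a sunset"))
```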

## Output Location

Generated and edited images are saved to the `nanobanana/` directory. Generate uses a slug derived from the prompt (e.g., `victorian-mansion-at-dusk.png`); edit uses the original filename with an `-edited` suffix.
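
A minimal sketch of this naming scheme, assuming a simple lowercase-and-hyphen slug; the exact slug rules are an assumption rather than documented behavior.

```python
import re
from pathlib import Path

OUTPUT_DIR = Path("nanobanana")

def generated_image_path(prompt: str) -> Path:
    """Slug from the prompt, e.g. victorian-mansion-at-dusk.png (assumed slug rules)."""
    slug = re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")
    return OUTPUT_DIR / f"{slug}.png"

def edited_image_path(source: Path) -> Path:
    """Original filename with an -edited suffix, kept in the output directory."""
    return OUTPUT_DIR / f"{source.stem}-edited{source.suffix}"

print(generated_image_path("Victorian mansion at dusk"))  # nanobanana/victorian-mansion-at-dusk.png
print(edited_image_path(Path("portrait.png")))            # nanobanana/portrait-edited.png
```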