I build guitars for a living.
That's my day job. I'm a luthier at PRS Guitars, and most of my mornings start with wood, fretwire, and the smell of fresh rosewood shavings. The rest of the time I'm an AI artist. I make photosurreal images; "photosurreal" is my own word for photographic-looking pictures of things that can't actually exist. I also produce a series of paper cut-out storybook animations called Stor-AI Time. In February I joined the Adobe Firefly Ambassador program.
Adobe Firefly is open on my screen most days. Probably more than my email is.
What follows is what that actually looks like. Not a polished demo reel. Four surfaces I genuinely live in, the daily creative practice they enable, and one specific puzzle workflow I've been chasing for three months that I finally cracked. The full formula is at the bottom. Take it. Run it. Make weird things.
The four Firefly surfaces I live in
1. Firefly Image 5, for photosurrealism

Image 5 is where my main aesthetic lives. A tiger with iridescent macaw feathers growing out of its fur. A loft apartment with two competing gravitational portals pulling objects into spirals where their fields collide. The model has to deliver photographic realism without slipping into the plastic, oversaturated, "AI art" look that most people picture when they hear the term.
Image 5 holds that line for me. It understands camera language. When I tell it "85mm portrait lens, deep focus, atmospheric haze revealing competing light rays," it responds the way a real photographer would.
This is my hero tab. Most days start here.
2. Firefly Boards, for thinking out loud

Boards is where ideation happens. I treat it like a moodboard, a sketchpad, and a model comparison rig at the same time. The thing that sets it apart from any other surface I've used: I can run the same prompt through Image 5, Nano Banana 2, Nano Banana Pro, Imagen, Flux, and GPT Image 2 side by side in a single Board. When I'm trying to figure out which model handles a specific concept best, I don't bounce between tabs. The comparison is the workspace.
For the hidden pictures workflow you're about to read about, Boards is where most of the iteration happened.
3. Nano Banana 2 inside Firefly, for the puzzles

Nano Banana 2 is my hidden pictures workhorse, and it does heavy lifting in my Stor-AI Time keyframe pipeline too. It's good at dense scenes packed with detail, and dense scenes are exactly what hidden pictures puzzles need. The model has different strengths than Image 5: less photographic, more illustrative range. Being able to access it inside Firefly without changing tools matters more than it sounds like it should.
4. Firefly Soundtrack Generator, for narrative work

The Soundtrack Generator is the newest addition to my daily rotation. For my Stor-AI Time storybook animations I need a single continuous music track that matches a story's emotional arc. Celtic folk for the Welsh tale. Something warmer for the Inca one. The Soundtrack Generator gets me there fast, and the output drops straight into Premiere for final assembly.

Hidden pictures puzzles are the kind of thing I grew up doing in the back of Highlights magazine at the dentist's office. A busy illustrated scene with a strip at the bottom showing five outline drawings of objects to find. You stare at the scene until the umbrella you've been looking past resolves into the curve of a tree branch you'd missed.
I started trying to make these in AI in February. I assumed it would take an afternoon.
It took three months.
The reason it took three months is that AI image models are bad at hiding things. They're trained to make objects clearly visible. When you ask one to put a hidden seahorse in a Victorian curiosity cabinet, it puts a seahorse on the desk. Not hidden. Just there. Tell it to hide the seahorse and it either renders nothing at all or draws a literal silhouette of a seahorse with a "hidden" label slapped on top.
The puzzle isn't drawing the scene. The puzzle is teaching the model to make objects that share their shape with something else, to actually camouflage. That's the problem I was trying to solve.
The three months
I'll compress the journey because the failed approaches teach more than the wins.
The transformation trap. My first instinct was to describe each hidden object becoming a scene element. Something like "the trumpet IS an exhaust manifold flaring outward from the combustion chamber." This doesn't hide the trumpet. It deletes it and replaces it with an exhaust manifold. The model takes you literally.
The "outline" trap. I knew from prior testing that the working language was "each object shares its outline with a scene element." I rewrote my prompts around that phrase. Firefly drew literal visible borders around the hidden objects, like a coloring book before you fill it in. The word "outline" was the problem. Same word, different meaning, model takes you literally again.
The negative-instruction trap. I tried to fix the visibility issue by adding "hidden objects should NOT be distinctly visible, they only emerge when you know what to look for." The model stopped generating the objects entirely. Negative hiding instructions don't make objects subtle. They make objects vanish.
The lineup problem. Once I got objects generating with positive language, they appeared in a tidy horizontal row across the middle of the image. Every single time. The model defaults to a lineup unless you explicitly tell it not to.
The breakthrough. The working version replaced "outline" with "shape," removed every negative instruction, and added scatter language. "Each object shares its shape with a scene element, scattered randomly throughout the full image at different sizes and angles, seamlessly woven into [scene context]." That single sentence is the load-bearing piece of the entire formula.
The formula
Here's what I use now. Take it, modify it, run it.
For Nano Banana 2 inside Firefly
```json
{
  "image_type": "Hidden Pictures puzzle in [style] style",
  "subject": "[extremely dense scene description, this does the hiding work]",
  "art_style": "[specific medium, lighting, palette, quality reference]",
  "hidden_objects": {
    "method": "each object shares its shape with a scene element, scattered randomly throughout the full image at different sizes and angles, seamlessly woven into [scene-specific context]",
    "items": ["item1", "item2", "item3", "item4", "item5"] in random order
  },
  "negative": "NO [style-specific negatives], NO placing objects in open empty space",
  "bottom_strip": "white strip showing outline drawings of the 5 hidden objects to find"
}
```

For GPT Image 2, also available in Firefly
Same JSON structure, but with the method field doing more work:
```json
{
  "method": "each object shares its shape with a scene element, scattered randomly throughout the full image at different sizes and angles, seamlessly woven into [scene-specific context]. The resemblance should be accidental, like seeing shapes in clouds. Each hidden shape must match the colors and textures of surroundings with no color contrast at edges. Objects at unexpected angles, tilted or rotated."
}
```

Why each field is there
subject does most of the actual hiding. The denser the scene description, the harder the items are to find. A "Victorian apothecary cabinet" gives the model nothing to hide objects in. Compare it to "Victorian apothecary cabinet packed with apothecary jars, leather-bound books, brass scales, hanging dried herbs, glass beakers, taxidermy specimens, candle stubs, ink bottles, ledgers stacked unevenly, oil lamp casting warm side-light, mahogany shelving with peeling labels." That gives the model dozens of shapes to weave items into.
method tells the model how to embed. The "shares its shape" phrase is what makes camouflage actually happen. The "scattered randomly at different sizes and angles" phrase is what prevents the lineup. Both pieces are required. Drop either one and you lose the effect.
items should not belong in the scene. A lantern in a lantern market is invisible. The model will absorb it into the scene. A magnifying glass on a scholar's desk is too obvious. The model renders it as expected furniture. The best items have no business being where they are. A carrot in a Victorian apothecary. A trumpet in a circuit board. A penguin in an ancient manuscript pile. The mismatch is what makes the puzzle a puzzle.
negative blocks default behaviors. "NO placing objects in open empty space" stops the model from putting items on blank walls or sky, which is one of its favorite cheats. Style-specific negatives depend on what you're trying to avoid for that aesthetic.
bottom_strip builds the puzzle into the image itself. The model generates the answer key as part of the same output. No separate compositing step.
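If you generate a lot of these, it helps to script the formula instead of hand-editing JSON for every scene. Here's a minimal Python sketch of that idea. This is my own illustration, not any Firefly API: the function name and arguments are made up, and the `random.shuffle` call stands in for the "in random order" instruction after the items list.

```python
import json
import random

def build_puzzle_prompt(style, scene, art_style, items, negatives, scene_context):
    """Assemble the hidden pictures JSON prompt.

    Field names mirror the formula above. Shuffling the items list
    emulates the "in random order" instruction, which helps prevent
    predictable placement patterns.
    """
    items = items[:]       # copy so the caller's list is untouched
    random.shuffle(items)  # randomize order before serializing
    prompt = {
        "image_type": f"Hidden Pictures puzzle in {style} style",
        "subject": scene,
        "art_style": art_style,
        "hidden_objects": {
            # "shares its shape" does the camouflage; the scatter
            # language prevents the horizontal-lineup default.
            "method": (
                "each object shares its shape with a scene element, "
                "scattered randomly throughout the full image at "
                "different sizes and angles, seamlessly woven into "
                f"{scene_context}"
            ),
            "items": items,
        },
        "negative": f"NO {negatives}, NO placing objects in open empty space",
        "bottom_strip": (
            "white strip showing outline drawings of the "
            f"{len(items)} hidden objects to find"
        ),
    }
    return json.dumps(prompt, indent=2)

print(build_puzzle_prompt(
    style="vintage storybook illustration",
    scene="Victorian apothecary cabinet packed with jars, ledgers, brass scales",
    art_style="warm gouache, soft side-light",
    items=["carrot", "trumpet", "penguin", "seahorse", "umbrella"],
    negatives="oversaturated colors",
    scene_context="the cluttered apothecary shelving",
))
```

Paste the printed JSON straight into Boards and fill in or swap any bracketed pieces by hand.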
Run it yourself in Firefly

Open Firefly Boards.
Set the model to Nano Banana 2.
Paste the JSON formula above. Fill in your scene, style, items, and negatives.
Generate. The first output won't be perfect. Iterate.

If you want to compare models, run the same prompt through GPT Image 2 in the same Board. You'll see different strengths. Nano Banana 2 tends to hide objects better through dense scene weaving. GPT Image 2 needs the longer method field with the pareidolia and color-matching language to get there, but when it works, the integration feels more organic.
I use 16:9 with the bottom strip rendered as part of the image. If your model places the items strip on top instead of the bottom, add "answer key strip at the bottom of the image, below the puzzle scene" to clarify.
Ten things I learned
Dense scene description does the hiding. The more visual stuff packed into your scene, the harder the hidden items are to find. The method field is secondary to scene density.
Avoid the word "outline" anywhere in the prompt. Models read it literally and draw visible borders around hidden objects.
Negative hiding instructions kill generation. "Should NOT be distinctly visible" makes objects stop generating entirely. Use positive language only.
Items must not belong in the scene. No lantern in a lantern market. No magnifying glass on a scholar's desk. The mismatch is the game.
Scatter language is mandatory. Without "scattered randomly throughout the full image at different sizes and angles," the model places everything in a horizontal line.
"In random order" after the items list prevents predictable placement patterns.
Pareidolia framing helps GPT Image 2. "The resemblance should be accidental, like seeing shapes in clouds" produces more organic integration.
Color matching is the strongest hiding lever for GPT Image 2. "Each hidden shape must match the colors and textures of surroundings with no color contrast at edges" breaks the silhouette boundary that makes objects pop.
JSON works for both models. I expected GPT Image 2 to need natural language. It doesn't. The same JSON structure works.
Each model has a hiding ceiling. Nano Banana 2 can achieve true pareidolia-level hiding. GPT Image 2 peaks at "objects sculpted from scene materials with somewhat recognizable silhouettes." Both produce fun puzzles. Pick the model that fits the look you want.
Your turn

Five hidden objects in this scene. Find them.
If you want to make your own, head to adobe.com/firefly and try the formula above. Tag me when you post yours. I want to see what scenes you build.
The thing I keep coming back to about Firefly is that it's a suite, not a single tool. Image 5 for the photosurreal stuff. Boards for thinking and comparing. Nano Banana 2 for the dense puzzle work. Soundtrack for the Stor-AI Time pieces. They're all in the same place and they all talk to each other. That's what makes it the surface I open every day.
I hope you have a productive and creative day.
Made in Adobe Firefly.
— Glenn (@GlennHasABeard)
#AdobeFireflyAmbassadors #Ad #HowToAdobeFirefly

