Adobe Embraces Generative AI: Exclusive Interview with Zeke Koch Reveals Strategy

May 21, 2025


Will Adobe’s generative AI strategy pay off? With Firefly and its family of models, AI agents are spreading across the company’s software, while the web application is evolving into an all-in-one creative platform.

Through a deliberate, carefully polished strategy, Adobe is gradually building the features of its generative AI, Firefly, into its flagship creative software. Having showcased the capabilities of its AI models on the Firefly web application, the company is simultaneously turning that application into a full-fledged creative platform. Zeke Koch, VP of Product Management for generative AI applications and services, whom BDM met at Adobe MAX, explains Adobe’s strategy and vision for AI.

Zeke Koch, VP of Product Management GenAI & Firefly at Adobe

After a decade at Microsoft, Zeke Koch joined Adobe in 2005 when Macromedia was acquired. Since then, he has held positions as VP of Product, Content, and Moderation for Adobe Stock and as Senior Director of Product Management for digital publishing products, as well as leadership roles in platform strategy and the management of mobile and desktop product teams. Today, he is the Vice President of Product Management for Adobe’s generative AI applications and services, including Firefly.

For BDM readers, can you quickly introduce yourself and describe your current role at Adobe?

My name is Zeke Koch and I am the Vice President in charge of product management at Adobe, which means I oversee what we build and why we build it, specifically focusing on generative AI. This includes Firefly, which encompasses a family of generative AI models (image, video, audio, vectors, translation…) and a web interface to access them.

Firefly has entered a landscape already crowded with tools like Midjourney, DALL·E, or Runway. How does Firefly stand out in this generative AI space?

The first thing is that our models are commercially safe. This means they are trained only on content that we have legal rights to use, without infringing on anyone else’s intellectual property.

We aim to give users extensive creative control.

Next, we aim to give users extensive creative control. A professional always sees ways to improve a creation, so we want our models to adapt precisely to those aesthetic demands.


Lastly, Firefly is integrated throughout the entire Adobe ecosystem. We offer more than just a web interface: Firefly is designed to work directly within Photoshop, Express, Premiere Pro, etc.

Although Firefly has its own web interface, Adobe quickly integrated its features into various applications. Can this be seen as an early form of AI agents? Was this decision part of the initial strategy?

Yes, this was part of the strategy from the start. From the beginning, our vision was to help creators tell their stories more effectively by integrating these tools directly into the software they use every day.

For example, for features like Generative Fill, we first needed to build a foundation model that understands the world visually. Only then could we integrate it into Photoshop.
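To make the idea concrete, here is a minimal sketch of what a generative-fill request looks like conceptually: an image, a mask marking the region to regenerate, and a text prompt. The endpoint and field names below are hypothetical placeholders for illustration, not Adobe’s actual Firefly API.

```python
import requests  # pip install requests

# Hypothetical endpoint and field names, for illustration only --
# this is NOT Adobe's actual Firefly API.
FILL_URL = "https://api.example.com/v1/images/generative-fill"

def generative_fill(image_path: str, mask_path: str, prompt: str, api_key: str) -> bytes:
    """Send an image, a mask (white = region to regenerate) and a prompt;
    return the bytes of the filled image."""
    with open(image_path, "rb") as img, open(mask_path, "rb") as mask:
        response = requests.post(
            FILL_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": img, "mask": mask},
            data={"prompt": prompt},
            timeout=60,
        )
    response.raise_for_status()
    return response.content

# Example: regenerate a masked sky as a dramatic sunset.
# result = generative_fill("photo.png", "sky_mask.png", "dramatic sunset sky", API_KEY)
```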

The goal: to keep creators in a “state of flow.”

I remember my sister, a video editor, found it absurd to have to export a file from Premiere Pro to test a feature on the Firefly web app. She wanted everything directly in Premiere. That’s the goal: to keep creators in a “state of flow,” without constantly switching platforms.

The web version of Firefly was originally a technological showcase. Today, it has evolved into a standalone working tool. Does this reflect a change in strategy?

Originally, the Firefly website was designed as a technological showcase. But it quickly exceeded our expectations in terms of growth. We saw people who had never used our desktop software preferring the simplicity of the web. These users start with Firefly, then move on to Photoshop or Premiere Pro to refine their work.

Today, the strategy has definitely changed: it’s no longer just a showcase, it’s a full-fledged tool that keeps evolving with new models and new features, such as the collaborative Boards.

Speaking of Boards, one of the key new features of Firefly’s web version: where did the idea come from?

The development of Boards started well before the era of generative AI. In all our discussions with creators – whether in film, product design, or elsewhere – the mood board is an essential starting point: a place to throw together images, materials, and colors to express an idea without words.

What’s different with Firefly is the ability to generate images and collaborate, with tools and features that I particularly like. For example, you can now select specific elements (color, texture, light…) with an eyedropper and ask the model to recompose images from those inspirations.
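As a rough illustration of that idea, the sketch below folds sampled elements into a single generation prompt. The Inspiration structure and build_prompt helper are invented for this example; they are not part of Firefly.

```python
from dataclasses import dataclass

# Invented structure for eyedropper picks -- not part of Firefly, illustration only.
@dataclass
class Inspiration:
    color: str | None = None      # e.g. a hex value sampled from a reference image
    texture: str | None = None    # e.g. "brushed copper"
    lighting: str | None = None   # e.g. "soft morning light"

def build_prompt(base: str, picks: Inspiration) -> str:
    """Fold the sampled elements into a single generation prompt."""
    parts = [base]
    if picks.color:
        parts.append(f"dominant color {picks.color}")
    if picks.texture:
        parts.append(f"{picks.texture} texture")
    if picks.lighting:
        parts.append(picks.lighting)
    return ", ".join(parts)

print(build_prompt("a desk lamp", Inspiration(color="#b87333", texture="brushed copper")))
# -> "a desk lamp, dominant color #b87333, brushed copper texture"
```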


Boards is hosted on the web because it’s simpler to develop that way, but it’s clearly designed for professional creators.

Adobe recently introduced its first AI agents for its creative software, capable of automating certain tasks. Do these first examples outline a broader strategy? How far does your vision of an orchestrated ecosystem of creative AIs go?

Yes, we have a broad vision, but it all depends on how you define “agent.” I worked in the 90s on the famous Office paperclip… That was already a form of agent.

Today, the idea is that you can talk to Adobe tools in natural language, and they understand increasingly complex requests. Asking to “make this image more steampunk” implies understanding that style, identifying the right aesthetic parameters, and then applying them. So we need several specialized agents (color, text, video, etc.) and a way to orchestrate them. Eventually, there will probably be more agents than users.
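To illustrate the pattern Koch describes, here is a minimal sketch of such an orchestration, assuming a coordinator that parses a request and routes it through specialized agents in turn. Every name here is hypothetical; it shows the shape of the idea, not Adobe’s implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical specialized agents -- each one handles a single aesthetic dimension.
def color_agent(image: dict, style: str) -> dict:
    """Shift the palette toward the requested style (e.g. sepia and brass for steampunk)."""
    image["palette"] = f"{style}-toned"
    return image

def texture_agent(image: dict, style: str) -> dict:
    """Overlay style-appropriate materials (e.g. riveted metal, aged leather)."""
    image["texture"] = f"{style} materials"
    return image

@dataclass
class Orchestrator:
    """Routes a natural-language request through specialized agents, in order."""
    agents: list[Callable[[dict, str], dict]]

    def apply(self, image: dict, request: str) -> dict:
        # A real system would use an LLM to extract the intent; we hard-code the parse.
        style = request.removeprefix("make this image more ").strip()
        for agent in self.agents:
            image = agent(image, style)
        return image

pipeline = Orchestrator(agents=[color_agent, texture_agent])
print(pipeline.apply({"palette": "neutral", "texture": "flat"},
                     "make this image more steampunk"))
# -> {'palette': 'steampunk-toned', 'texture': 'steampunk materials'}
```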

One of the major challenges is ensuring that these agents do not degrade visual quality across successive adjustments. In some systems, the more you modify, the more you lose the original intent or aesthetic quality. At Adobe, we want the user to keep creative control at every step. That’s easier in Express, where objects remain separate, than in Photoshop, where everything is more merged together. But all of this is progressing rapidly.

You mentioned GPT Image and other models: what do you think about the rapid innovations in the sector, and how is Adobe reacting to them?

It’s mostly surprise and excitement. We love seeing the new capabilities that emerge; they feed us lots of ideas. Many of these innovations are either open source or described in scientific publications. Even though we need to retrain our models to ensure commercial safety, we can draw inspiration from these ideas.

For example, GPT Image has been integrated into the Firefly web app, with clear transparency about its limits, especially regarding commercial use. And the fact that these models work well confirms that certain approaches are viable, which lets us build our next models on solid foundations.


What feedback have you received from professionals using Firefly? Is adoption smooth? And how do you respond to creators who fear AI might replace them?

The main feedback I receive is a request for Firefly to better understand creative intent: improving photorealism, handling skin tones well, or rendering the right number of fingers.

On this point, we’ve focused on the Image 4 model, which aims for very realistic rendering, with aesthetics close to a professional’s.

If you have a vision, if you want to express an idea, AI tools help you achieve it better.

Regarding fears, I understand. But to me, it’s like when I had my first child: I wanted to take beautiful photos and bought a professional camera. My brother, a photographer, took better photos than me… with an iPhone. It’s not the technology that makes the quality, it’s the creator’s eye and intention.

AI is just another tool. If you have a vision, if you want to express an idea, these tools help you achieve it better. I don’t think creatives should be afraid. On the contrary.

Any final advice for curious creators who are still hesitant to get started with Firefly or generative AI in general?

Be curious, try everything. Every new medium has given birth to works that are still talked about years later. All technologies that seem normal today – cinema, color, sound… – were once innovative and scary. So explore, play, find what inspires you. Even a 20-year-old tool can produce magical results if used with passion.
