With Firefly Enterprise, Adobe aims to merge the power of generative AI with the demands of major brands: security, transparency, customization, and quality.
As generative AI becomes a staple in creative processes, large corporations seek to adopt it without compromising security, quality, or brand consistency. With Firefly Enterprise, Adobe aims to provide a “commercially safe” AI, designed for professional needs. Rajan Vashisht, Senior Director of Machine Learning & Firefly Product at Adobe, explains to BDM the philosophy and technical choices behind this approach, including personalization, data governance, and the scaling of creative applications.
To start, can you describe your role in Firefly and the mission of the Machine Learning team you lead?
I have been the Senior Director of Machine Learning and Firefly Product for just under two years. My team develops Adobe’s generative AI technology, both as APIs and through products that span multiple modalities (image, video, design, vectors, 3D…). We also build the integrated workflows on firefly.adobe.com.
I also work with large companies on custom models, like those recently showcased for individual and team use. My role involves industrializing the technology: turning a trained model into a product ready for inference and deployment, with guarantees of security, reliability, and control.
We design complete workflows that support users from start to finish: generation, media import, editing, sharing, signing… The team covers the entire creation cycle, closely linked with machine learning engineering.
It’s an exciting venture. The technology is evolving at a remarkable speed. We started by integrating generative AI into our flagship products, like Photoshop with the Generative Fill workflow, and then expanded our efforts to video with Generative Extend, which allows for extending a clip or a b-roll sequence. Our mission is really to cover the entire creative ecosystem.
In terms of generative AI, what are the most common expectations you observe among the brands and large organizations you work with?
Commercial security is their primary requirement. It’s the foundational principle of our approach. Companies want to know which data the models are trained on, whether they can safely use them in their own workflows, and whether the generated content is legally safe to use.
But each organization also has its specific needs. A retail player, a consumer goods brand, and an industrial company will not have the same priorities. Some focus on the quality of logos, others on the physics of motion in videos, the consistency of human faces, or the accuracy of shapes.
Our job is to develop technologies that can respond to this diversity through customization. Over the past 18 months, this has proven very successful. We have helped numerous companies to generate, edit, and produce content more quickly, thus reducing their time-to-market and enhancing the performance of their content production chain.
Firefly Enterprise is distinguished by its approach to data transparency, security, and content ownership. How do you design the machine learning architecture to meet these requirements?
The issue of data arises at several levels. First, during its integration into our training ecosystem: we apply strict controls from the outset and at every subsequent stage. Our internal policies precisely define how we qualify data, whether for a model created from scratch or a fine-tuned one. These mechanisms allow us to confidently state that the data sets used are commercially safe.
Companies also want to know where their data is transmitted and where inference takes place. Therefore, we have designed an enterprise-grade storage infrastructure that allows each client to upload their own content for training or fine-tuning in a dedicated space, hosted in their country. Any customization, any weight adjustment, is kept in the same region.
This approach is essential to respect our data residency policies and reassure our clients. Their content remains completely isolated and is never re-injected into our training pipelines. For their part, those pipelines use exclusively safe, royalty-free sources, such as Adobe Stock. The two worlds, client data and training data, remain strictly separate.
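The isolation described here could be enforced in code with region-pinned storage routing. The sketch below is purely illustrative: the bucket names, regions, and `ResidencyError` type are assumptions for the example, not Adobe's actual implementation.

```python
# Hypothetical sketch: route each client's uploads and fine-tuned weights
# to storage in their home region, and refuse any cross-region move.
# All names here are invented for illustration.

REGION_BUCKETS = {
    "eu-west-1": "client-assets-eu-west-1",
    "us-east-1": "client-assets-us-east-1",
}

class ResidencyError(Exception):
    """Raised when an operation would move data out of its home region."""

def bucket_for(client_region: str) -> str:
    # Every client is pinned to storage provisioned in their own region.
    try:
        return REGION_BUCKETS[client_region]
    except KeyError:
        raise ResidencyError(f"no storage provisioned in {client_region}")

def store_fine_tune_weights(client_region: str, artifact_region: str) -> str:
    # Customized weights must stay in the same region as the client's data.
    if artifact_region != client_region:
        raise ResidencyError(
            f"weights in {artifact_region} cannot leave {client_region}"
        )
    return bucket_for(client_region)
```

The key design choice is that residency is checked at write time rather than audited after the fact, so a misconfigured pipeline fails loudly instead of silently copying data across regions.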
Many brands face challenges in transitioning from prototype to industrialization. What insights have you gained from integrations already implemented with your clients?
Quality is the central focus. Brands want to ensure that the generated content can be used in their production chain, at a professional level.
For the past year and a half, we have been working closely with our clients within the framework of our customization offers. Early adopters have given us a great deal of feedback, which we have used to refine our models by use case: image types, resolution levels, the rendering of shapes or faces…
Some clients are satisfied with the overall quality, while others demand greater precision for logos and vector art, or more realistic hands and eyes. We also work with telecommunications companies and brands that use our tools for large-scale campaigns, such as during the Super Bowl. Some are looking for massive scale-up more than quality improvements, and we work with cloud providers to meet those needs.
Generative AI innovation is evolving extremely quickly. How do you ensure the scalability and longevity of Firefly models when they are integrated into complex creative workflows?
This is one of our daily challenges. Interest in these technologies is immense, and the demand for computing power is skyrocketing everywhere.
For this, we have forged strong partnerships with major cloud providers (hyperscalers) and always plan ahead for our needs to ensure the availability of resources necessary for our clients. Adobe also invests heavily in computing for training, which gives us great flexibility.
This allows us to reallocate our computing capacity to meet demand in just a few minutes. It is what enables us to scale the platform with a level of responsiveness and robustness that few players can currently match.
In areas like design, user experience, or personalization, how does the machine learning team collaborate with designers and brand managers at Adobe and your clients?
Within Adobe, we have several highly specialized teams that work hand in hand with users and clients. A dedicated research team studies their uses and provides precise feedback weekly to guide our products and roadmaps.
Our design teams observe interactions, identify improvement points, and continuously evolve the workflows. We also have a Customer Advisory Board that regularly exchanges with companies to identify successful use cases and new needs that emerge with the evolution of technologies.
All this information feeds our product strategy and planning. We sometimes adjust our priorities quickly, which is positive: it means we are learning fast. Firefly, and more broadly the Creative Product Group led by Ely Greenfield, demonstrate great agility while remaining responsible. We do not react hastily. We test, we analyze, we validate with users.
From the outside, it may seem that we move very quickly, but in reality, there is a lot of work behind the scenes to offer solid solutions that are tested and tailored to the real needs of clients.
What indicators do you use to measure the success of generative AI adoption in a company? Productivity, brand consistency, time-to-market?
Time-to-market is one of the most telling indicators today. Many companies tell us that they have never been able to produce as quickly, which is a significant evolution.
But we also measure the quality of the product and the efficiency of our user feedback loops. For example, anyone can now test the workflows on firefly.adobe.com and send feedback directly from the interface. These responses are analyzed and re-injected into our models to continuously improve them.
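A feedback loop of this kind can be sketched in a few lines: collect in-product ratings, aggregate the negative ones by issue, and queue the most-reported issues for the next model review. The categories, rating scale, and threshold below are invented for illustration; this is not Adobe's actual pipeline.

```python
# Illustrative sketch of an in-product feedback loop.
# Workflow names, issue labels, and the 1-5 rating scale are assumptions.

from collections import Counter

feedback_log: list[dict] = []

def submit_feedback(workflow: str, issue: str, rating: int) -> None:
    # Record one piece of user feedback sent from the interface.
    feedback_log.append({"workflow": workflow, "issue": issue, "rating": rating})

def issues_to_triage(min_reports: int = 2) -> list[str]:
    # Count only negative reports (rating <= 2 on a 1-5 scale) and return
    # issues reported often enough to feed into the next training cycle.
    counts = Counter(f["issue"] for f in feedback_log if f["rating"] <= 2)
    return [issue for issue, n in counts.most_common() if n >= min_reports]
```

For example, two low ratings tagged "hands" would surface that issue for triage, while a single five-star report on another workflow would not.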
Submitting feedback requires a few extra clicks, but we take every comment very seriously. Several users have even told us that their comments were addressed quickly. These direct exchanges are extremely valuable: they strengthen our way of working and improve the quality of the product.
Finally, the speed with which we integrate feedback and correct issues is another key indicator. And so far, our major clients are very satisfied. This is a sign that our processes are holding up well.
What has been the main challenge for you and your team in recent months?
I wouldn’t really call it a challenge. We work a lot, certainly, but without excessive pressure, because the entire company shares the same motivation: to understand creativity more deeply and push its limits with new technologies.
We are living through an exciting period. What matters to us is staying close to our users and clients, learning through their feedback, and avoiding building solutions that do not meet their real needs.
Our goal is to educate and support creators, to show them that these tools are designed to amplify their creativity, not to replace it. This conviction guides us daily, and that’s what makes this adventure so stimulating.
Rajan Vashisht, Senior Director of Machine Learning and Firefly Product
Rajan Vashisht leads the teams responsible for machine learning and product development of Firefly, Adobe’s generative AI. He oversees the integration of models into the group’s creative tools and supports companies in implementing safe, customizable, and tailored AI solutions.

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.