Zendesk is partnering with OpenAI to strengthen its generative AI capabilities, with the goal of reaching an 80% automated-resolution rate. Matthias Göhler, CTO EMEA at Zendesk, elaborates on this partnership.
Artificial intelligence is transforming customer relations, and Zendesk is keen to remain at the forefront of this revolution. Matthias Göhler, CTO EMEA for Zendesk, discusses the strategic partnership with OpenAI, the performance stakes, and the ambitions surrounding the automation of interactions.
How did the collaboration between Zendesk and OpenAI start?
We have been working with OpenAI since April 2023. At Zendesk, there isn’t just one AI model powering everything; we use AI agents, a copilot, intelligent routing, forecasting, an intent model, a knowledge base generator, analytics… AI is already deeply integrated into our solutions. OpenAI is a key partner, especially for generative AI where we utilize their large language models, particularly for drafting content.
This is a real partnership, not just a provision of models. We work continuously, share feedback that is taken into account, and the integration extends to our R&D functions.
We constantly benchmark different providers to deliver the best results for our clients. We look at three criteria: response time, accuracy, and the cost of deployment and operation. To date, OpenAI has performed the best, which is why we have a strong partnership with them.
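The three criteria could be combined into a single ranking, for example with a weighted score. This is a hypothetical sketch, not Zendesk's actual benchmark: the weights, provider names, and numbers are all illustrative.

```python
# Illustrative provider ranking on the three criteria named above:
# response time, accuracy, and cost. Weights and figures are made up.

def score(provider, w_latency=0.4, w_accuracy=0.4, w_cost=0.2):
    """Higher is better: accuracy is rewarded, latency and cost penalized."""
    return (w_accuracy * provider["accuracy"]
            - w_latency * provider["latency_s"]
            - w_cost * provider["cost_per_1k"])

providers = [
    {"name": "provider_a", "latency_s": 0.8, "accuracy": 0.92, "cost_per_1k": 0.03},
    {"name": "provider_b", "latency_s": 1.4, "accuracy": 0.95, "cost_per_1k": 0.06},
]

# Pick the provider with the best combined score.
best = max(providers, key=score)
```

In this toy example the slightly less accurate but much faster and cheaper provider wins, which matches the point that accuracy alone does not decide the choice.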
We were also among the first to offer OpenAI in European data centers. Practically, this allows our clients in Europe to ensure that their data does not leave the European Union, which is a major asset for some of them.
You mention regular benchmarks. Does this mean you could easily switch from OpenAI to another provider?
Technically, yes. We ensure that our technology remains open, which would allow us to switch if necessary, or even use multiple providers for different use cases. Some clients already have preferences, and we must remain flexible. However, a partnership is not just technical; it also has a commercial and relational dimension.
Our partnership with OpenAI is very solid, as is our partnership with AWS, on which Zendesk is built. But we will continue to monitor the market and benchmark, as our guide is always our clients’ interests: achieving the best results as quickly as possible and at an optimal cost.
How often do you adjust the model configurations and based on what criteria?
It’s not just about tweaking the prompts, although that is a discipline in itself. What’s more important is evaluating each new model that comes out: does it offer significant benefits? This can relate to accuracy, depth of knowledge, or simultaneous understanding of text and voice.
For example, historically, a voice agent worked by transcribing voice to text, sending this text to a model, and then re-converting the response to voice. This created noticeable latency in the conversation. The new models capable of directly processing voice and text significantly reduce this latency, making interactions much more natural.
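The two architectures can be contrasted in a conceptual sketch. Every function below is a hypothetical stand-in, not a real speech or LLM API; the point is the number of sequential hops, each of which adds latency.

```python
# Conceptual sketch of the two voice-agent architectures described above.
# All functions are hypothetical stand-ins for real speech/LLM services.

def transcribe(audio: str) -> str:       # hop 1: speech-to-text
    return audio.removeprefix("audio:")

def generate(text: str) -> str:          # hop 2: text LLM
    return f"reply to '{text}'"

def synthesize(text: str) -> str:        # hop 3: text-to-speech
    return f"audio:{text}"

def legacy_voice_agent(audio: str) -> str:
    """Three sequential hops: each network round-trip adds latency."""
    return synthesize(generate(transcribe(audio)))

def native_voice_agent(audio: str) -> str:
    """One speech-native model: a single hop, so far lower latency."""
    return f"audio:reply to '{audio.removeprefix('audio:')}'"
```

Both agents produce the same answer; the difference the caller perceives is purely the accumulated round-trip time.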
We are also seeing the emergence of more specialized models. For instance, some are trained exclusively on medical studies. They perform much better on these topics than generalist models trained on the entirety of the Internet. I wouldn’t be surprised to see more specialized models appear, by industry or domain, and we will likely need to consider integrations with them.
So, latency is crucial. How do you manage it, especially during peak traffic times?
This is a major challenge. We have designed our systems by incorporating latency as a central criterion. The hosting location in data centers is also important to minimize communication delays. On the client side, it’s also necessary to set limits to avoid spikes in requests that would degrade the experience for everyone.
We also plan for periods of very high activity, such as Black Friday, or major sporting events that cause massive spikes in demand. Thanks to AWS, we can scale up quickly. Latency actually depends on several factors: network architecture, traffic management, how we’ve designed our solutions, exchanges with other systems, and the speed of the systems themselves. All of this combined creates the latency experience that the client perceives.
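The per-client limits mentioned above are commonly implemented as a token bucket: a sustained request rate with some allowance for bursts. This is a minimal generic sketch with illustrative parameters, not Zendesk's traffic-management code.

```python
# A minimal token-bucket rate limiter, sketching the kind of per-client
# request cap described above. Rates and capacities are illustrative.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second (sustained rate)
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s sustained, bursts of 10
results = [bucket.allow() for _ in range(12)]
```

A burst of twelve instantaneous requests lets the first ten through and rejects the rest, so one client's spike cannot degrade the experience for everyone else.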
Do OpenAI models ever make mistakes?
Of course, that can happen. But if you compare the first versions of GPT to GPT-5, the accuracy has improved significantly. However, clients rarely ask general questions. They generally ask specific questions about a business, its products, or services. That’s why we train our models specifically on customer service.
We handle about 5 billion interactions per year: this provides a huge base to refine AI while respecting anonymization.
We also use an approach called RAG (retrieval augmented generation). Essentially, the model retrieves the answer from a reliable knowledge base and does not modify it: it can adapt the tone, be more or less empathetic, but the fundamental information remains accurate. If a customer wants to return a product, for example, the AI identifies the request, checks the orders, verifies the products, and automatically triggers the appropriate process. This combination allows us to maintain a high level of accuracy and resolve requests.
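The principle of RAG as described here can be shown in a few lines: the factual answer is retrieved verbatim from a trusted knowledge base, and only the surrounding tone is generated. The articles and the word-overlap scoring below are deliberately simplistic placeholders, not a production retriever.

```python
# Minimal sketch of retrieval-augmented generation (RAG): the model
# retrieves the answer from a reliable knowledge base and does not
# modify it. Knowledge-base content and scoring are illustrative.

import re

KNOWLEDGE_BASE = {
    "returns":  "To return a product, start a return from your orders page within 30 days.",
    "invoices": "To get a copy of an invoice, use the resend button on your account page.",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Return the knowledge-base article with the most word overlap."""
    q = tokens(question)
    return max(KNOWLEDGE_BASE.values(), key=lambda doc: len(q & tokens(doc)))

def answer(question: str) -> str:
    # Tone is generated; the retrieved fact passes through unchanged.
    return f"Happy to help! {retrieve(question)}"
```

Because the factual sentence is copied from the knowledge base rather than generated, the fundamental information stays accurate even when the wrapping changes.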
Have you noticed any unexpected positive effects for your clients?
I would say the effects are positive, but expected. The main indicator for us is the automated resolution rate. We aim for 80% of requests to be fully resolved by AI, across all channels. A resolution means that the customer does not need to revisit the issue after the ticket is closed. Currently, we see many clients progressing toward this goal. Some are already at 40 or 50%, others at 20 or 30%, but the momentum is strong.
We expect to reach about 20,000 clients using Zendesk AI by the end of the year, making us one of the most significant players in the market in terms of adoption. AI is expected to generate nearly $200 million in revenue this year, positioning us among the fastest-growing companies in the customer service sector in this area.
Reaching an 80% automated-resolution rate has a dual impact: better customer satisfaction, as the responses are quick and accurate, and more time for agents, who can focus on the remaining 20% of cases (complex, emotional, or high-value situations). Today, many agents are overwhelmed by repetitive requests such as "I haven't received my invoice, can you resend it?" AI can handle these cases autonomously.
Do you still encounter clients who are hesitant about AI?
Very rarely. In Europe, there is hardly any customer conversation or conference where AI is not mentioned. There are still a few skeptical companies, often for security reasons, but they are becoming increasingly rare. With 20,000 clients expected by the end of the year, it’s clear that our base is actively adopting AI.
Matthias Göhler, CTO EMEA
After spending notably five years at Gemini Consulting, then many years at SAP AG and Hybris, Matthias Göhler has been the Chief Technology Officer at Zendesk for the EMEA (Europe, Middle East, and Africa) region since October 2021.

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.