Product release
Jul 8
34 min

Most agents fail after launch. This fixes it.

Watch Tray’s biggest release yet and see how to build agents your team will actually use.


Overview

Most AI agents never make it past the prototype. Teams build them, then no one uses them.

This session shows you how to address the adoption gap with new features in Tray Merlin Agent Builder.

You’ll see a live demo of the latest release and learn how to connect your agent to your data, give it memory, choose the right model, and launch it where your team already works.

What you'll learn

  • How to give agents memory that sticks: Let your agent hold onto what users said before. Across sessions. Across tools.

  • How to connect to Slack, Google Drive, and more: Use Tray’s new Knowledge Engine to bring in documents, messages, and context with just a few clicks.

  • How to choose the right LLM for every agent: Select the best model for each task or bring your own. Change models fast without rebuilding anything.

  • How to launch agents where people actually work: Use Slack. Use Teams. Use the web. Use the API. Interact with users where they are. 

Want a deeper look at how teams are building agents?

We shared this during the session. Get the AI Agent Strategy Playbook to compare the common approaches and see why more teams are choosing AI-ready iPaaS to get agents into production.

Session Chapters

  • Four ways teams build agents (and what fails)

  • Why agents don’t get used

  • Demo: Connect data sources in seconds

  • Demo: Use Tray Knowledge Engine to power your agent

  • Demo: Built-in memory across sessions and users

  • Demo: Choose the right LLM for every agent

  • Demo: Launch agents in Slack, Teams, and more

Transcript

Hi. Good morning, good afternoon, good evening, wherever you might be. And thank you so much for joining us on "Unblocking Agent Adoption" with Tray.ai. My name is Michael Douglas. I'm a senior product marketing manager here at Tray, and I'd like to thank you all for taking some time out of your day to go through this exciting webinar with us. I'm joined by my colleague, Tom Walne. Tom, are you there?

I am here, Michael. Thank you very much for the introduction. Hi, everyone. My name is Tom Walne, as Michael just said, director of product management at Tray. I've been working on the agent builder since its inception, so I'm very keen to show off this update today.

Great. Thanks, Tom.

So what we're going to do today, folks, is give you a high-level overview of some of the challenges that organizations face in getting agents used and adopted by users. And then we're going to quickly get into a demo of some of the groundbreaking new capabilities that are being seen for the first time today. So this is a first of its kind.

You guys are on the ground floor here to see these new capabilities, so we're incredibly excited for that. So without further ado, let's just jump in.

So we all know that there are significant expectations around AI agents. But with these expectations, there are also significant challenges.

And we have seen a lot of challenges out in the marketplace, so what we decided to do at the end of last year was put out a survey to just over 1,000 tech leaders to really understand the challenges they're dealing with in order to get agents adopted. And moving from left to right, the first one we saw was speed. So 55% of organizations are really trying to move from prototype to production in a matter of weeks.

And this is really driven by a lot of pressure from business unit leaders to deliver value and deliver on their promise of AI agents. So they want to get them out into the organization and start to use AI agents as quickly as possible. But what IT leaders really understand is that just building an agent and then putting it out in the wild leads to a lot of risk.

So you're really exposing the agent from a governance and management perspective, and that's why we see nearly 60% of IT leaders mention that as their second top challenge.

You know, so when you think about it, when you put an agent out there, how do you put guardrails in place? How do you make sure that the right data is going to the right folks within the right use case and that users aren't getting access to sensitive information?

How do you mask personally identifiable information as well? These are some of the core challenges that organizations are faced with when it comes to adopting agents.

But as you can see here, one of the standout challenges from an agent perspective is data. Data is really the lifeblood of an agent, and that's why we see over 80% of IT leaders listing it as their number one challenge.

And it's not just about structured data, because that's the easy part. It's about how you take both structured and unstructured data from all of these siloed repositories and systems across the organization.

How do you make sense of it for the agent and give it the right context for the right use case to deliver business results? So those are really the top three challenges that we have seen coming from that survey. But even those are just scratching the surface.

It's just the tip of the iceberg when it comes to actually building and deploying an agent. There's a lot of underlying production challenges when it comes to building an agent. For instance, how do you swap out an LLM? How do you bring in a new LLM?

How do you point that LLM at the right use case for the right outcome?

How do you let the agent take complex actions across multiple systems throughout the organization?

How do you test and debug that agent? Or, if you're seeing hallucinations from the LLM and the agent, how do you do a regression analysis to see which prompts are causing the hallucination?

So there's a number of challenges that lie below the surface, which organizations start to understand very, very quickly once they get into building an agent.

So when you go to try to address these challenges, what we have seen is there's four core ways to build agents.

And I'm going to talk about these at a fairly high level, but we have built what we call the agent playbook that deals with all four critical options in much more detail, and we're going to include that link at the end of the presentation so you can dive a little deeper into it. But moving from left to right, the internal custom-built agent is probably one of the least popular that we see out in the marketplace, and there's a couple of reasons why.

First of all, organizations realize that building an agent is fairly complex. They don't want to invest in specialized AI developer capabilities and put the resources behind that, and then have to manage a single code base with a really long development time. And it typically gets abandoned as well.

One of the most popular approaches we do see is an off-the-shelf agent. And this goes back to what I said about business leaders putting pressure on IT leaders to deliver agents. For instance, a business leader might say, look, I really need a sales agent or I really need an ITSM agent, and the IT organization says, okay, we'll get this off-the-shelf agent that gives a quick-fix approach. So it delivers some business value, but with that short-term gain comes long-term pain, because what they quickly understand is that it's only a single use case. It doesn't really scale, and it can't be repurposed across the organization.

So, it was great for that initial business value, but you really run into a lot of roadblocks from a long term perspective.

Next, we look at siloed SaaS agents. Think about this: if you're a heavy Microsoft shop or a heavy Salesforce shop, it can be very attractive for an organization to turn to their incumbent technology vendor and say, hey, what are you guys doing from an AI agent strategy, and how can you help me out? But there come a lot of challenges with adopting AI agents in this fashion, and mainly, it's that you become locked into that ecosystem.

Think about it: if you're a Microsoft shop, it can become very complicated and complex, and it becomes sort of a Frankenstein technology stack. So with Microsoft, you've got Microsoft Copilot. You've got Copilot Studio. You've got Power Automate to automate the workflows. And so then you're having to navigate building AI agents with all of this complexity, without the ability to integrate across multiple systems that might live outside of that Microsoft stack.

And, lastly, if you go to move off of Microsoft and you want to move to a new infrastructure, you basically lose all of that value when you move platforms.

But what we have seen since launching Merlin Agent Builder last year is a real realization in the market that data and integration are the lifeblood of agents. I've said that before, and I'll say it again: without data, your agent is essentially useless. So being able to integrate data from multiple systems across the organization is really critical to agent adoption. It's critical that the agent is useful and is going to make a real business impact. Having an AI-ready iPaaS with all of those integration capabilities and the AI built in gives you the flexibility, but also the rapid build time, being able to deliver AI agents in a matter of weeks, not months.

With that said, we've talked about the four different options that you have for building agents, but there's still a user adoption gap: even if you have the right technology in place, it doesn't matter if people don't adopt it. So let's look at some of the elements that are going on behind that.

And I think the title of this slide sums it up really, really well: agents are getting built but not used. If we go back to the report that we looked at earlier in the slide deck, about 70% of organizations are investing roughly half a million dollars or more into their AI agent strategy. That's not an insignificant amount of money. But, and this is coming from Boston Consulting Group, which did a big research study in 2024 around AI adoption, nearly 75% of organizations are really not seeing the value or the scalability from their AI agent strategy. So that really speaks to the gap between the investment and the adoption.

So let's take a minute and look at why that's happening.

And first of all, I just want to say it's obviously not a silver bullet. There's not one thing that's blocking AI agent adoption.

For a start, incomplete knowledge. From what we've seen, going back to the mantra that data is key, one of the most time-consuming challenges for organizations is empowering the agent with the right data. Having to build custom ingestion pipelines and manually reprocess data is really burning developer time and burning a lot of money. And what's happening is, because the agent doesn't have the right data, there are a lot of limitations around it. There are a lot of missed opportunities from a user engagement perspective, and that's really hurting the underlying success of the agent. It also limits the value of conversations.

Moving on, subpar memory. Think about it: there are solutions out in the marketplace that, because of their lack of memory capabilities, can only keep about 10 or 11 previous prompts. Anything before that is completely wiped.

So imagine going and trying to have a conversation with that agent, and it's not able to remember those previous interactions. I like to think of an agent as a coworker. Imagine going to your coworker and they can't remember the conversations you had yesterday or the day before. Is that really going to make you want to engage with that coworker and work on projects with them? No.

So long-term memory is requiring a lot of custom engineering and a lot of workarounds in order to give agents those past interactions and provide a great user experience. And doing that is really costing the organization a lot of money and a lot of time in developer resources.

Thirdly, misaligned models. As we all know, each LLM is really designed for specific use cases. Whether we're talking deep-research LLMs or quick-response LLMs that give you surface-level information, you really need to ensure that the LLM is designed for the right use case.

Having your LLM pointed at the right use case is really critical, because if the user is not getting the right information, if they're trying to do a pretty complex action with the agent and getting very short responses that don't really fulfill what the task is meant for, abandonment of that agent is going to be quite high. So being able to orchestrate those LLMs for the best experience is really, really critical. And then lastly, you've got your agent built, but how do you deploy it where the user is at?

You really want to meet users where they're looking to interface, whether that be Slack, Microsoft Teams, API, or web apps and so on. How you manage the context, how you authenticate, and how you route each channel separately is really critical to having a great user experience. Even if you have all of those previous three elements in play and you fall short on the user interaction, you can still end up with user abandonment of your agents.

So with that said, we're now getting into the really exciting part. We've developed a number of new capabilities that are really groundbreaking for Merlin Agent Builder, in order to completely take out these blockers for AI agent adoption. And with that, I'm really excited to introduce some of those options.

So it's all about, again, bridging the agent gap. And as I've outlined before, we've built new capabilities for this. So we're adding new data sources with the click of a button.

We're going to have built-in memory, long term and short term, that handles session management much better. We're giving you the flexibility to point the right LLM at the right use case, or bring in your own LLMs. And then lastly, there are the tailored interaction channels as well, which I'm really excited about too. So with that, I'm going to now bring in my colleague Tom, who's going to take you through the demo. Hey, Tom.

You there?

I am, Michael. Thank you very much.

No problem, Tom. So, Tom, I'll stop sharing and let you show us the magic behind all of these great new features.

Brilliant. Thank you, Michael. Well, shall we start off with data sources?

So, yeah, Michael touched on the importance of grounding your agent in data. And one of the key things we really want to do is make that process as quick and as streamlined as possible, and take away some of the challenges and the time consumption that Michael talked about earlier. So let's get started and add our first data source. I'm going to start with Google Drive.

I've already got an authentication here, so I just need to select that. But if I didn't have an authentication, I could use this flow here to provide one. I'm also going to select a folder.

I could ingest the entire drive, or I could pick out a specific folder. Let's just pick out a specific folder for now. And that's it. So three or four clicks.

We'll save that Google Drive source, and there it is. Essentially, now the data source will go and pull files from the folder that I shared, and begin ingesting them and making them available to the agent. While it's doing that, shall we drop another data source in? Let's put Slack in, shall we?

And, again, I'll use this existing auth I've got here.

The whole idea behind this is presenting a really simple but flexible, configurable experience, giving you the chance to have some control over what's ingested and where it's ingested from. But there are also nice little additions, like being able to ingest images, for example. So I tick this box here, and now, when I add my Slack data source, any images in any of the channels that I ingest will also be ingested. The text will be extracted, and that will be made available to the agent as well. So I select a channel here. Let's go for this channel. As you can see, I can actually select multiple channels through this interface as well. So I could put a couple of channels in here, maybe put this one in as well, and click save.

And now that will add this Slack data source to my kind of list of data sources and begin the ingest process. As you can see just there, Google Drive has completed, ingested 33 documents from that folder, and now Slack is in progress, and we'll start pulling the data from those channels.

So, yeah, thank you. Blink and you missed it there a bit, Michael. Again, it's quite a lot to show off in just a few clicks.

Yeah. Great. Thanks, Tom. So, during the presentation, we talked about how organizations are having to build their own data pipelines and pour a lot of developer resources in behind them. What you showed was very simple: with a couple of clicks of a button, we were able to get a significant amount of data ingested into the agent.

Would you give us a bit of background on that, in terms of how complicated it is to do? Everything looks simple, but the proof is always in the pudding.

Yeah. It certainly is. So let me switch over here and, off the back of that, talk about some of those limitations.

And you touched on this earlier when you talked about traditional integration, traditional data pipelines, but also traditional connectivity.

With traditional connectors and integrations you need to know what you're asking for. You make a request for a particular resource, and you get it back. And that's largely true with any API or any query you need to write against a particular data source.

So there's a significant gap in the middle, which right now people are having to fill, as you described it, with manual data pipelines, complex transformations, all these kinds of things that people are having to think about again. And now they're having to think about it with unstructured data as well as structured data. So, what we did, and what you saw before: each data source is powered by the Tray Knowledge Engine. And this basically takes away all of those challenges that we've just talked about: accessing the data; partitioning it, deciding how to break it down; chunking it, breaking those partitions down into readable chunks for the agent to actually use; then embedding that data to make it accessible to natural language, making it semantically searchable, so that agents can actually access the information they need.

This is really important. We're showing structured, unstructured, and semi-structured data sources here, different types of data source. The idea with the top one is that you can essentially import the metadata of a large database, data warehouse, that kind of thing.

And then the LLM can actually interpret the tables it has access to in order to build queries and access that data. That's incredibly powerful, and that's how a structured data source would work. But, of course, it also works with unstructured data. We've got Notion as an example here, but you also saw Google Drive and Slack previously: the ability to ingest data from those sources, full datasets, and make it available to search for the agent. And what's really, really important with the Knowledge Engine is that, obviously, each of these different data sources has data stored in different formats. What we take away with our knowledge data sources is all that complexity around partitioning, chunking, and embedding. We handle all of that, and we've optimized it for each of our data sources.
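To make the pipeline Tom describes concrete, here is a toy sketch of the partition, chunk, embed, and semantic-search steps. Everything in it is our illustration, including the bag-of-words "embedding" (a production system would use a model-based embedding); none of the names come from Tray's implementation.

```python
# Toy ingest pipeline: chunk -> embed -> semantically search.
import math
import re
from collections import Counter

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Partition a document into word-bounded chunks an agent can retrieve."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeStore:
    """Holds embedded chunks and answers similarity queries."""
    def __init__(self):
        self.chunks: list[tuple[str, Counter]] = []

    def ingest(self, document: str):
        for c in chunk(document):
            self.chunks.append((c, embed(c)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda cv: cosine(q, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]
```

A usage pass would ingest a few documents and call `search("expense policy")` to retrieve the most relevant chunk for the agent's context.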

Yeah. That's awesome. Thanks so much for showing us that, Tom. It goes to show the level of complexity that we have taken out for the organization, being able to get all of these complex data sources into the agent with, as I said, the click of a couple of buttons. I think it's a tremendous step forward.

Everything that looks easy is usually pretty complex on the back end. So thanks so much for walking us through that. When I was talking before, we were discussing memory and the ability to handle memory within sessions, both long term and short term.

Would you mind just giving us some examples of that in the platform and how we're handling that?

Yeah. You mentioned something about having to explain the same thing to a colleague every single day. Can't possibly imagine what that feels like.

Yeah.

But it's a significant problem. Anyone that interacts regularly with LLMs, and agents in particular, has that consistent frustration: you're really getting somewhere, you're getting to the point where you've got some value, and then you start another tangential thread or another piece of work, as you described earlier. And you essentially have this feeling of starting from scratch, teaching the agent what it needs to know.

So, what we're introducing: firstly, in terms of session context, we've got that covered. Tray can handle massive context in a session, so the limitation there will really be the context window of the LLM you're using; it won't be the medium-term memory. And I'll show you how it works with, hopefully, this rather straightforward question here, asking the agent what my name is. You can see this is a brand new session with this agent. This isn't based on a previous chain of conversation. This is just a request I've sent directly to this agent.

What this agent will actually do is go and look through the conversations we've had with it previously and retrieve information that I've discussed with it before. And here we go.

Look, it's told me what my name is, so we got that right. That's always a good start. Where I work, in the product team; that I've been here since 2021; and my boss, Ali Russell. So, yeah, a lot of great information, and there it is on the screen.

This is incredibly powerful. A fresh session. What's also really powerful about this as well, this is specific to me. So it has memories based on my interactions with the agents. This isn't just listing every single interaction that someone has had with an agent.

It's specific to me, so it won't have context of conversations that other people have had. So it really allows you to deploy agents at scale across the organization, because they can remember a wide corporate context specific to the individuals interacting with the agent.
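The per-user scoping Tom describes can be pictured with a minimal sketch. The `AgentMemory` class and its keyword-match recall are hypothetical illustrations of the behavior, not the Merlin Agent Builder API:

```python
# Per-user long-term memory: facts are recalled only for the user who
# created them, so one agent can serve many people without leaking context.
from collections import defaultdict

class AgentMemory:
    def __init__(self):
        # user_id -> list of remembered facts, isolated per user
        self._memories = defaultdict(list)

    def remember(self, user_id: str, fact: str):
        self._memories[user_id].append(fact)

    def recall(self, user_id: str, query: str) -> list[str]:
        """Return only this user's memories that mention a word from the query."""
        terms = set(query.lower().split())
        return [f for f in self._memories[user_id]
                if terms & set(f.lower().split())]
```

Asking "what is my name" as one user retrieves that user's stored facts; the same question from another user retrieves nothing, which is the isolation the demo highlights.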

Yeah. That's great, Tom.

If I were a glass-half-empty type of guy, and we've all interfaced with ChatGPT and other AI models, I would say: so what?

I would say why is this really important for agents?

It's a good question, Michael. And this example here, being reminded what my name is, is great for demo purposes. But why this is really powerful: if you remember what we've shown in the agent builder before, it's the incredibly wide range of tools you've got access to. What we really see is this bringing it all together.

We've already had the arms and legs of the robot that allow it to take action. Now we're really bringing in this brain concept, this ability to remember. So what can happen now is you can deploy agents that remember context, so they can get to the value point quickly, and then they can take the action they need to in various different systems.

So you've got the best of both worlds, essentially.

Yeah. That makes sense. And I think we've talked about orchestrating agents of agents and having that internal memory. I think that's another key component as it relates to giving agents memory and being able to understand that.

When we look at LLMs and LLM flexibility, how are we making it easier for the organization to bring in whatever LLM they would like for whatever use case?

Yeah. It's another great question, Michael. And I think we can show that off as well. Again, you touched on it earlier: the need for different models for different use cases.

And the way we're thinking about agents being deployed, and you'll see this when we talk about how we expose our agents, is that the idea of one mega agent that does absolutely everything is unrealistic. Agents, a lot like human beings, are going to be assigned specific tasks, which essentially become their job. Those jobs require different levels of capability. So as you can see here, I'm currently using Claude Sonnet via AWS Bedrock with this particular agent. But I can change that right here, straight away. So I can go in.

This agent isn't doing anything particularly heavy, so I could potentially switch to a smaller model. With something like Haiku 3.5, I'd get faster results, which is ideal. Or perhaps I want to pick an entirely different provider. Much like the data source process, the idea is that this is really easy to configure, clicking through.

If you haven't provided an authentication already, you can go through the authentication process here, and then you can select a model. I might prefer a smaller model if I've got some kind of orchestration engine that's distributing tasks to other agents, which have a more capable model. So that's what we really wanted to bring in here: that simple click-through process to select your model.

But it really aligns with the way we're thinking about agents as this kind of distributed architecture, agents of agents. And we're seeing lots of emerging frameworks which are supporting that.

Particularly, Google's A2A is getting quite a lot of attention right now for this inter-agent communication. And being able to easily change and set the model for each agent is a critical part of that process.

Yeah. So, your last sentence there: being able to easily change the LLM for the right use case. Again, we've made it incredibly easy to do this, but we know there's a lot of complexity on the back end in order to deliver this incredibly easy user interface.

Would you mind walking us through, you know, how we got here? What were the hurdles? How did we come up with such an easy-to-use interface? Show us how we bake the cake, if you will.

Yeah. Well, much like data sources, the key was extracting complexity. Virtually all model providers offer some kind of tooling for agentic use cases, where you can provide actions that your agents can take. What we've built is an agnostic layer that sits above each of these underlying services.

So when we introduce new models and providers, we build them in as part of our underlying process. So, essentially, you can swap them out directly. This is all you need to do. I don't need to change the prompt.

I don't need to change the format of any of my tools. I don't need to adjust anything. It automatically works no matter what I select here. So that's really why, again, it's simple on the outside, but there's a ton of complexity under the hood to make sure we're making the right calls to the LLM when you invoke the agent.
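The agnostic layer Tom describes can be sketched as an interface that keeps the agent's prompt and tool definitions fixed while providers are swapped underneath. The classes below are stand-ins that echo strings instead of calling real provider SDKs; the names and signatures are our assumptions, not Tray's internals:

```python
# Provider-agnostic model layer: the agent keeps one prompt and one tool
# format, and each provider adapter handles its own invocation details.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    @abstractmethod
    def invoke(self, prompt: str, tools: list[dict]) -> str: ...

class BedrockClaude(ModelProvider):
    def invoke(self, prompt, tools):
        # A real adapter would call the Bedrock API; we echo for illustration.
        return f"[claude] {prompt} (tools: {len(tools)})"

class OpenAIGPT(ModelProvider):
    def invoke(self, prompt, tools):
        return f"[gpt] {prompt} (tools: {len(tools)})"

class Agent:
    def __init__(self, provider: ModelProvider, tools: list[dict]):
        self.provider = provider
        self.tools = tools  # one shared tool format for every provider

    def ask(self, prompt: str) -> str:
        return self.provider.invoke(prompt, self.tools)

    def swap_model(self, provider: ModelProvider):
        # Swapping changes nothing else: same prompt, same tools.
        self.provider = provider
```

The point of the design is that `swap_model` touches only the adapter, which is why the prompt and tool definitions survive a model change untouched.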

Yeah. No doubt. And I see a number of questions coming through, so I think LLMs definitely sparked a lot of curiosity there. That's good to see. So, Tom, we've talked about giving the agent the right data, giving it both short-term and long-term memory for better recall, and then being able to point it at the right model based on the right use case.

And now, as we talk, the most important thing is being able to deploy that agent, meeting users where they are for much better agent adoption, which is what we're all here about. So would you walk us through how we're approaching that, and some examples of what we have done to make that easier for organizations?

Yeah. Of course. So, building on what I mentioned earlier around agentic frameworks and agentic orchestration, every agent built on Tray is automatically exposed as an API. So it can immediately be called by other agents, which can consume it as a tool, or it can sit in between other kinds of processes as well. But we also obviously want to expose it in the place where people are working. So I'll show you quickly.

You might remember MemTray from Slack. This is MemTray in Teams. This is actually the exact same agent, and it has access to the same memory, as you can see here.

So I can send it a message, and this is the same agent exposed in Teams. As I said, it's the exact same agent. I've got it linked with my user, so it knows that it's me. And now I'm able to tell it things and have conversations with it based on knowledge that it already has.

Well, there it is. So, yeah, it looks like it's come through. There it is.

Yeah. Michael Douglas, you're working out great too. It's going okay.

And I think it even might be telling you some lies, but it's okay. Keep going.

But yeah, so that's it. You've seen it deployed in Slack. You've seen it deployed in Teams. This is, again, as I say, the exact same agent that I'm interacting with here.

Obviously, these are also exposed as an API, which allows you to deploy anywhere. But you could also deploy between a range of other channels as well, for interacting with sort of asynchronous processes, ticketing systems, that kind of thing. So you can essentially deploy your agents anywhere.
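As a rough illustration of calling an agent over its API from another process, here is a sketch using Python's standard library. The endpoint path, payload fields, and header scheme are assumptions for illustration only; the real API contract isn't shown in this session:

```python
# Build an HTTP request to invoke an agent endpoint (hypothetical contract:
# the path, JSON fields, and bearer-token auth are our assumptions).
import json
import urllib.request

def build_agent_request(base_url: str, agent_id: str, user_id: str,
                        message: str, token: str) -> urllib.request.Request:
    """Assemble a POST request carrying the user's message and identity."""
    payload = json.dumps({"user_id": user_id, "message": message}).encode()
    return urllib.request.Request(
        url=f"{base_url}/agents/{agent_id}/messages",  # assumed path
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending it would be one call, omitted so the sketch stays offline:
# with urllib.request.urlopen(build_agent_request(...)) as resp:
#     reply = json.load(resp)
```

This is the shape of integration that lets other agents, ticketing systems, or asynchronous processes consume an agent as just another service.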

Terrific. Terrific. And I know that the agent can take full advantage of all of the native Slack and Teams features.

So it's not just a conversational interface; it's giving you a true native experience wherever you deploy it.

Yeah. If you look back at some of the previous sessions that my boss, Ali Russell, as you saw earlier, has done, he showed a very interactive ITSM agent which is capable of taking a range of actions and actually makes use of the interactive components of a tool like Slack. So it's even faster time to value, where you can use the action buttons, that kind of thing. And in his case, the agent itself is generating those action buttons.

It's not a laborious Block Kit building exercise. They are dynamically generated based on the interaction the agent is having with the user.

Yeah, that's great. So, Tom, thank you so much for walking everyone through this great new feature list. And thank you to everybody for joining us.

Featuring

Tom Walne
Director, Product
Tray.ai

Michael Douglas
Sr. Product Marketing Manager
Tray.ai
