Megacast
Feb 26
46 min

The path to the autonomous enterprise

Learn how flexible orchestration, composability, and AI-driven automation are reshaping enterprise operations — and enabling autonomous business processes.


Overview

Today’s enterprises need more than automation—they need orchestration, composability, and AI to drive performance at scale. In this special Megacast, Tray’s leadership team unveils how the Universal Automation Cloud helps organizations architect AI-powered operations in a fast, safe, and flexible way. See what’s new and what’s next for building the autonomous enterprise.

What you’ll learn 

  • Why composable architectures are critical for AI-driven business operations

  • How orchestration supports flexible, scalable enterprise automation

  • Where the Tray Universal Automation Cloud fits into modern AI strategies

  • New Tray capabilities to help you accelerate the move toward autonomy

Session chapters 

1. Why AI needs orchestration
2. Scenario: the autonomous enterprise in action
3. Rethinking iPaaS for AI
4. Infusing AI into business processes
5. Building the autonomous enterprise foundation
6. Building a knowledge agent
7. Advancing toward enterprise autonomy
8. Q&A

Transcript

Welcome to today's Megacast, "The path to the autonomous enterprise."

At the conclusion of today's event, please stick around for our speaker Q&A after-party, where we will answer the questions you submit throughout the event. Without further ado, please welcome Tray cofounder and CEO, Rich Waldron.

Hello, and welcome. Thank you for joining us today. We have a ton of exciting content to cover.

We're gonna show you how to get on the path to becoming the autonomous enterprise, a grand vision that enables AI agents to proactively and reactively carry out tasks on our behalf and allow growth never before thought possible.

To do that, we're gonna unpack this vision. We're gonna show you what it takes to get there. And using some of the amazing product functionality that we announced yesterday, we'll show you how to get started right now. Our starting point today is how do we harness the power of AI? It's a question that I've debated with so many of you on the road, spent so much time talking to analysts and industry experts. We're all looking around trying to figure out what's the quickest way for us to take advantage of this amazing technology.

See, we have a number of headaches to get over as we think about how to proactively bring this to the workplace right now. There's the governance angle. How do we safely and securely enable this across our organization, especially in a world where every single application that we have is bringing its own variation of AI? How do we stop ourselves from being in a position where we become AI referees?

Secondly, how do we actually get these solutions into production? You know, what is the engineering bandwidth required? How do we actually deploy them? How do we bring these things to life and put them in the hands of our team so that they can start getting the benefit?

And then finally, where do we get started?

What are the first easy wins for us to bring to our organizations?

Where do we get that inspiration from?

Today, we wanna cover all of those topics. We'll be starting with a concept video that gives you a picture of the art of the possible, and then we'll show you how you can build, manage, and deploy these solutions into your organization today.

You see, the centerpiece when it comes to thinking about deploying AI within your organization is actually your iPaaS vendor.

They're already connected into your applications. They have the capability to create these solutions because of the building frameworks that they have in place, and they have the governance model and control that allows you to deploy this seamlessly, securely, and effectively.

By doing so, that puts you back in control. So when you're thinking about your AI strategy, it shouldn't start before you've considered how you'll deploy it using your iPaaS because, really, that's the centerpiece, or the console, that allows you to get started.

To get a sense of what can be done, we'd love to show you what the autonomous enterprise looks like in reality.

Roll the tape.

Morning, everybody. Morning. Hi. Hi.

Did y'all see the memos from Barry?

Mhmm.

Okay.

So more efficient growth, where do we start? What's our biggest revenue bottleneck at the minute?

Well, the most obvious thing is that renewals just take way too long.

Okay. We can definitely use Tray to figure out why that's happening and maybe also use it to advance our AI strategy. Hey. Good idea. Merlin, why are renewals taking so long?

Let me take a look.

I can see in Salesforce that renewals is a manual process taking thirty seven days. I would suggest automating this process. Would you like me to create a Jira ticket for the integrations team?

Yeah. Thank you.

Ticket created.

What's this?

Merlin, do you have any templates for processing Salesforce account renewal dates?

Yeah. This is a good start.

Merlin, fetch the Salesforce accounts usage from Snowflake.

Right. I need to fetch entitlement plan data from Salesforce CPQ.

Merlin, do you have a Salesforce CPQ connector? Mhmm.

Ah, great. Alright.

So let's just configure this to use the Snowflake usage data.

Merlin, send an email to the account owner informing them of the upcoming renewal and provide the renewal calculations from CPQ.

Right.

Let's also make this available as a connector as well for others to pull our renewal quotes on demand.

Okay. So we need to detect when customers are asking about renewals.

Merlin, how can I determine if a support ticket is from a user inquiring about renewals?

Great. Thanks.

Let's use the new connector the integration team built to retrieve a renewal quote.

Then I just respond to the ticket with the quote details. And done.

Interesting. Self serving renewal quotes and integration issues are top of mind for customers.

Right. Let's use the renewal service that was built by the integration team and the product itself to enable users to self-serve.

If I just take this process and publish it as an API, then we can use that in the product as well.

Merlin, create a ticket for the engineering team to use this API within the product for a new self-service billing page. Oh, and make sure you include the API docs and the client credentials.

Merlin, can you support our users' integration needs?

Great. Let's do that.

This looks great.

How can I integrate my Acme data with Salesforce?

Wow. That's great.

Merlin, how was our q one performance?

On time renewals have improved by one thousand percent. We saw one hundred percent of renewals completed on time in q one.

We have also seen an increase in NRR of ten percent because of support for integration to other SaaS services.

That's fantastic. Can you provide me with a weekly summary to track this?

Sure. I will send an email and a team chat message.

Great. Oh, can you include the leadership team as well?

I have added the leadership team to the weekly summary.

Wow. How exciting was that? And to think we only showed a handful of examples.

You see, a typical enterprise has tens of thousands of these processes.

Imagine the impact you could have by harnessing this ten times or a hundred times over and over, and think through the speed at which your business would be able to operate and the competitive advantage that you gain if you are able to do this.

But also imagine for a second the nightmare you'd have if you didn't have one strategic platform to deploy, manage, and govern it all. That's what we're gonna cover today.

You see, when Dom, Ali, and I started Tray, we started with the mission that we could live in a world where anyone could solve problems without the constraints of technology.

That led us to build the platform that you know and love that allows you to construct these powerful workflows visually and harness the power of APIs and computing without needing to write code.

If we think about what AI brings us, it brings us a shared language that allows us to converse with computers in a way that we've never been able to before.

I'd love to introduce my cofounder, Alistair Russell, who's gonna unpack that for us a little bit more. Ali, over to you.

Absolutely. Thanks, Rich. So previously, trained machine learning models were always very kind of small in scale, quite specific and targeted at certain use cases and specific datasets.

But the rise of large language models has become the foundation of a huge swathe of machine learning and AI innovation in recent years. You know, while at its core, text completion, such as, you know, GPT and things like that, may seem limited, the use of neural networks and the ability to train models on, you know, vast amounts of existing human knowledge is game changing, really. You know, language is really what separates humans from other species. And, you know, large language models not only enable a shared language with computers, something we've been, you know, working towards for decades, but they also embed a significant corpus of human knowledge into that model.

It really is a paradigm shift, you know, driving us towards an artificial general intelligence, and we're only really just getting started. It's also, you know, key to how iPaaS is gonna change the application of AI in the future.

Integration automation is all about computers performing tasks faster and more efficiently than humans can. And this shared understanding knowledge will make this even more powerful as all the existing knowledge about a company's SaaS stack and how to better integrate it will be possible with, you know, a higher degree of accuracy and success and with less human intervention. You know, think about dealing with data mapping and, data hygiene and, you know, API responses and errors. All of this sort of stuff is, is, you know, contained in the existing knowledge base of an LLM. And, you know, it really empowers the iPaaS to be able to sort of solve those issues and sort of help automate, faster and and better.

So, yeah, pretty, really sort of excited about that.

So with all this talk of shared languages and LLMs and an explosion of a new and exciting technology, we know we're in the midst of a platform shift.

In fact, we've seen these shifts in the past, and I'll save you from the history lesson because we all know what they are. But with each one of these shifts, new companies exist. They are born. They are created to solve the problems that exist by the change in the technology at that point in time. And so when we look at this from an iPaaS perspective, a number of organizations were created to solve the problems that existed at that time, that on premise to premise connectivity or that on premise to cloud connectivity.

But the fact is those organizations are not built for the rigors and the demands of what is to come in this AI era.

You see, whilst companies are born through these eras and through these phases, they also die.

And Gartner predicted that by twenty twenty three, two thirds of iPaaS vendors would have disappeared. You see the fragmented code bases, the inelastic architectures, and the poor connectivity mean that even AI with its vast knowledge and reasoning engine can't retrofit these old solutions to meet the demands for the outcomes of today.

You see, we're shifting from a place where all automations were built manually to one where they are augmented by AI and supported through their delivery.

In fact, there's an irony that integration is built to remove silos, yet in the archaic way, it creates them because it limits who can actually solve these problems in the first place.

We're shifting to a time where AI-facilitated composability and collaboration change this game.

You see, this is gonna be the end of those static processes, those ones that get built once and they need replacing all the time. We're getting to a place where AI will be able to continuously deliver and continually improve.

In fact, it's the end of passive integration and passive iPaaS. We're heading toward a time where we need to be proactive, and that underpins the requirement to be able to deliver the autonomous enterprise.

And so with that...

...What's gonna be the first change? What is the area that we're all looking at?

Well, it's that every business process is being reimagined for AI. Everything that we do today, all the processes that we've known and built up over time can now be changed and augmented and improved upon by this amazing technology.

And so we'd like to start by talking through how you can infuse AI into your processes right now. And I'm gonna hand it back to Ali to walk us through that. Ali, over to you.

Yeah. So, you know, we have seen the, you know, high-level adoption of natural language features over the last eighteen months in many services, something that the generative AI boom has made very easy, you know, parsing natural language, generating natural language, and lots of other sorts of, you know, use cases. Yeah, as we've shown ourselves using Merlin Build and Tray Chat. But the real value is when you start to infuse AI into new and existing business processes, both, you know, internal and external.

You know, for example, you know, using AI to classify incoming support tickets, you know, doing sentiment analysis, you know, customer health analysis, things like that, you know, the sort of things that we've just shown in our glimpse video.

But one of the biggest concerns with this AI infusion is the data governance and scalability.

While a lot of the platforms out there are starting to introduce AI, you know, infusion features, much like the integration space generally, one size doesn't fit all. There's always gonna be, you know, a very specific way of doing things for your company, the way that your data sort of, you know, fits, and, you know, how you do things, like the human aspect.

With the intelligence features that services are currently building, you don't really have control over that data, and, you know, how it's being used to train and sort of improve the existing models being used. Whereas the Tray platform really allows you to bring your own large language model. You know, you can get more specific on the data that is used, so it can be more accurate. You have more control over the governance of that data.

And iPaaS also allows you to bring in data from multiple sources. It's not just about, you know, the data that exists in Zendesk or, you know, whatever tool. It's bringing it all together into the right place at the right time. And, of course, the big thing is the ability to take action. It's not just about being able to reason over data. It's actually being able to do something with it: making decisions and taking actions.

But really, the future lies in the idea of intelligent and autonomous agents.

Firstly, intelligent knowledge agents, you know, have access to a huge amount of existing knowledge, through the large language model itself and, you know, all the advancements that have happened in recent years. But also the development of tools that are the foundation of this AI kind of, boom at the moment. Things like embeddings, you know, vector databases, etcetera. They're all allowing companies to unlock the knowledge that exists within their company, something we'll be showing you later as well.
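The embeddings and vector databases Ali mentions can be illustrated with a minimal Python sketch of similarity search over a toy knowledge base. The hard-coded three-number "embeddings" and the `nearest_document` helper here are purely illustrative stand-ins for a real embedding model and a real vector store:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": a real system would call an embedding model instead.
knowledge_base = {
    "How to create an API on Tray": [0.9, 0.1, 0.0],
    "Configuring Salesforce connectors": [0.1, 0.9, 0.1],
    "Setting workflow rate limits": [0.0, 0.2, 0.9],
}

def nearest_document(query_vector):
    """Return the stored document whose embedding is closest to the query."""
    return max(knowledge_base,
               key=lambda doc: cosine_similarity(query_vector, knowledge_base[doc]))

# A query vector close to the API document retrieves that document.
print(nearest_document([0.8, 0.2, 0.1]))
```

A production setup replaces the dict with a vector database and the toy vectors with model-generated embeddings, but the retrieval step is this same nearest-neighbor lookup.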

But what I'm really excited about, however, is the future of the autonomous enterprise, which is the proliferation of autonomous agents across a business.

See, these agents can reactively or proactively monitor and optimize every process across your business.

This really is the future of process mining and the AI era. You know, these agents have a massive amount of foundational knowledge, you know, and they have the ability to iteratively learn and react to new knowledge, as they go. You know, you can evaluate all of your business data and processes, you know, to help drive efficiencies across your business.

And the key differentiator and the reason why the Tray platform is perfectly positioned to power this agent revolution is the ability, as I mentioned earlier, to to take action, to use the context that already exists throughout the business in all of the the the services that you use, all of the existing business processes that you you use or you run through Tray.

And it also includes the human aspect as well. There's a lot of context there that, just having everything sort of in a single large kind of model is gonna miss.

It's not just about reasoning over large datasets in an intelligent way. It's also about, you know, like I said, making decisions and acting on those decisions. You know, we fully believe that the future companies will supercharge every role with these agents. You know, imagine every person in your company has an AI sidekick to supercharge their productivity.

And it can not only just provide knowledge and answers and sort of, you know, it can actually make decisions and act on things as well as we've, you know, seen in our kind of future glimpse video.

Now if the autonomous enterprise, as we've been describing it today, is something that you believe is right for your business someday, then the great news is you don't have to wait to get started. You know, the Universal Automation Cloud that we deliver today is an AI-powered, multi-experience iPaaS that gets your teams building integrations and automations faster and better because of, you know, Merlin, the intelligence layer. You know, we're taking the next big step with AI-elevated composability, which you'll see more of today. Composability is a huge, sort of, foundational part of this AI revolution because, you know, the artificial intelligence layers need building blocks to be able to sort of compose larger, you know, functionality and features on top of.

And that composability aspect is a big big sort of foundation for us and something we're pushing for the next step to the Tray platform.

And, of course, once you're on board, you know, you will grow your AI capabilities with us as we help you, you know, infuse every process, with AI, you know, harness intelligent process mining to supercharge your company efficiency. And, of course, the utopian vision, is of that intelligent agent powered autonomous enterprise, you know, supercharging your whole company and being incredibly efficient and on growth using these agents, basically.

Now the Tray platform has been designed to empower the autonomous enterprise at every layer, you know, from the foundational enterprise core. You know, this gives you all the security and governance control that you need to unleash AI across your enterprise and really start sort of reaping the benefits.

You know, the composable platform capabilities that are key to giving AIs, as I mentioned, the building blocks to succeed.

And, of course, the multiple experiences.

This really allows different roles to coexist and collaborate with AI in a single platform. You know, we're really focused on delivering the future of AI and iPaaS together through the Tray platform. And I'm gonna say it again, but I'm really, really excited about this. I don't know. How about you, Rich?

Yeah. I mean, I'm incredibly excited because I think for many of us, this represents a once in a career opportunity to unleash a technology that is gonna change the trajectory of our businesses.

That change doesn't happen overnight, and we firmly believe that the Tray Universal Automation Cloud is the place to start.

I know you've been waiting for it, so let me hand it over to Alex Kohlhofer, our head of product, and he can show you how you can get this going right now. Alex, over to you.

Thank you, Rich.

So how do we get started?

AI is the next big shift, and it will be disruptive for all.

Every single business process has to be adapted, and entirely new ones will have to be invented.

And there's incredible urgency to start this transformation, and yet there is so little clarity as to how.

So let's see how you can start infusing AI into your organization today.

And we'll do it by looking at a very specific challenge and a real solution.

But know that what we'll show you as part of the solution is applicable to virtually all your processes.

You can use it across your entire stack, not just for the specific use case that we are choosing as an example here.

So what is that challenge?

Sharing knowledge is one of the biggest burdens.

You get tons of questions from your customers. Your teams scramble to understand and to provide answers.

Your teams themselves spend time hunting down and rediscovering information.

Meanwhile, you know that information actually exists, but it is just never readily available.

And even worse, all this knowledge grows stale across the many silos we exist in.

So what if you could create an intelligent knowledge agent? One that understands what your customers are asking, that understands what your teams need, that can provide comprehensive expertise in real time.

Think of it as a constant watchtower that hears what the questions are and has the answers.

An agent that always tells you where these answers actually come from so you don't have to blindly trust it. You can check and verify.

And once you can see what these answers are based on, your teams can improve the underlying knowledge base, the system of record.

So today, we'll show you exactly that.

And spoiler, it was built in days, not months or quarters, and it was not built by the engineering team.

And you can do that too, and therefore act incredibly fast against your strategic priorities.

So let's have a look.

So here is such an agent. It does everything I just described.

And before we look at how it was created and the new features that enable it, let's see what this expert can do.

Let's ask a question.

What version of the SFDC API does our Salesforce connector use?

And while it's looking that up for us, please note that a chat interface is simply the easiest way to show this, but it is very different from some other chat interfaces you've seen, because this is a reusable service that can manifest in many ways and across all your systems.

So here's the answer.

It doesn't really matter for our purposes here, but I can tell you that this answer is correct. But more importantly, you can see here that it actually tells you where this information is coming from, and you can visit that. This is from our own internal documentation.

Right? Okay. Let's try something a little bit more complex.

How can I create an API?

Because this isn't a generic model, I do not have to specify that I'm looking to create an API on Tray. It knows, of course, because it's a Tray expert, so all its answers will pertain to our domain.

And here it comes.

To create an API on Tray, you can follow these steps.

Turn any project into an API that tells you how to do this. Create required endpoints and operations.

Manage API security and access, including roles and policies, authentications, set rate limits, everything you need to do to create an industry standard high performing API.

That includes, of course, who gets access and it even tells you how to make calls against the new API.

But, again, most importantly, at the end, it tells you exactly where this information is coming from, where you can read more, and how you can do it yourself.

But here's an interesting thing. This is actually an entirely new feature. I'm quite happy to announce today that we're launching API management. And so, this being an entirely new capability, about which you will hear a bit more in a moment, it was really critical for the team who built it to check their own readiness by looking at this response, because the quality of this response tells us how ready we are. And if this response by this expert is lacking, they can actually update the underlying knowledge systems until this answer is solid.

So entirely new, tightly integrated in the actual development process used and fueled by the teams and now providing answers to you, me, and anybody else about this exciting new capability.

One last test. You know these generic chat interfaces often have a reputation: they hallucinate. They make things up because they often don't know, or you can't tell what they know or don't know, because they don't tell you where the information is coming from. So let's try what our agent here does.

What is Ali's favorite workflow, and what does it, oops, does it do?

Let's see what it says to that.

I'm sorry, but I'm unable to answer the question. This is the correct answer because it has no business knowing this. And even if it knew, it has no business disclosing this to us.

So extremely transparent, and a true expert on the things that we want it to be, and on nothing else.
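The refusal Alex just demonstrated can be sketched as a simple retrieval-confidence check: if nothing in the knowledge base scores above a relevance threshold, the agent declines to answer rather than letting the model guess, and it always carries its sources through to the reply. The threshold value, the field layout, and the `answer_or_refuse` helper below are illustrative assumptions, not the actual implementation:

```python
RELEVANCE_THRESHOLD = 0.75  # hypothetical cutoff; tuned per deployment

def answer_or_refuse(retrieved):
    """retrieved: list of (source_url, relevance_score, text) from the vector store.

    Only answer when grounding material clears the threshold; always cite sources.
    """
    grounded = [(url, text) for url, score, text in retrieved
                if score >= RELEVANCE_THRESHOLD]
    if not grounded:
        # Nothing relevant in the knowledge base: refuse instead of hallucinating.
        return {"answer": "I'm sorry, but I'm unable to answer the question.",
                "sources": []}
    sources = [url for url, _ in grounded]
    context = "\n".join(text for _, text in grounded)
    # A real implementation would pass `context` to the LLM here.
    return {"answer": f"Based on {len(grounded)} document(s): ...",
            "sources": sources}

# A question with no relevant documents is refused rather than invented.
print(answer_or_refuse([("docs/internal", 0.2, "unrelated text")])["answer"])
```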

So with that, I'll hand it over to Tom to show us how and why this works and how it was possible to create this in mere days.

Over to you, Tom.

Thanks, Alex. It's great to see the agent in action.

Like many vendors, we have extensive information available, and the knowledge agent is perfect for ensuring that our users can access the information they need when they need it. As Alex pointed out, this is more than just a simple LLM integration.

To power this interface, we need to implement a retrieval augmented generation or RAG AI framework that uses our platform as a knowledge base to ground any large language model that we use to power the agent interface that Alex just demonstrated.

This is to ensure we return accurate and up to date information to every query.

So how is this built on Tray? Well, I'm gonna show you.

Firstly, I wanna point out that you're looking at the brand new composable UI, within the Tray platform.

We've got a tiny tiny little sidebar over here, which gives you more space. You can switch your logs on and off, and you can also jump between workflows really easily. And these are the four workflows that are powering the ingestion pipeline, which is ultimately providing data to the agent that Alex just demonstrated.

It's quite simple. This workflow is probably key, as this is receiving all the updates that we make to our platform and is essentially distributing that information throughout the rest of the workflows and ultimately storing it in a database here. We're using a vector database called Pinecone, which makes the data available to the model that is powering the agent. And when I move on to the query pipeline in a sec, you'll see how the different models we're using are accessing the data they need to provide up-to-date and accurate answers to the questions that are asked of the agent.
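A rough Python sketch of that ingestion idea, receiving a document, chunking it, embedding each chunk, and upserting into a vector index, might look like the following. Every name here is a hypothetical stand-in: the in-memory dict plays the role of Pinecone, and `fake_embed` plays the role of a real embedding model:

```python
# In-memory stand-in for a vector database such as Pinecone.
vector_index = {}  # chunk_id -> (embedding, metadata)

def fake_embed(text):
    """Stand-in embedding: character-frequency vector (a real model returns dense floats)."""
    return [text.count(c) for c in "abcdefghij"]

def chunk(document, size=200):
    """Split a document into fixed-size chunks for embedding."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def ingest(doc_id, document, source_url):
    """Receive a platform update, chunk it, embed each chunk, and upsert it."""
    for n, piece in enumerate(chunk(document)):
        vector_index[f"{doc_id}-{n}"] = (
            fake_embed(piece),
            {"source": source_url, "text": piece},  # kept so answers can cite sources
        )

ingest("api-mgmt", "Turn any project into an API..." * 20, "https://tray.io/docs/api")
print(len(vector_index))  # number of chunks upserted
```

Storing the source URL alongside each chunk is what lets the agent tell you where every answer comes from.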

So let's jump over to that query pipeline right now. Now this is simple. This is only a single workflow, and it really demonstrates the level of configuration available and how easy it is to build composable interfaces on the Tray platform.

As you can see, we've implemented the ability to use different models with this pipeline. We've got OpenAI's GPT-4. That is what you just saw powering the agent. We've got Anthropic's Claude, and we've got Google's Gemini.

This enables you to use different models for different use cases, which you will also see in this demo.

Each of these models is powered by an ingestion pipeline like the one I just showed you earlier. That was the one for GPT-4. So there's a full set of data available to each model, which means you can route the query to whichever model you want to answer the question.

If speed is a key requirement, you might choose one model over another. In our testing, GPT-4 was fastest, so we use this model to power the agent you just saw. But if quality of response is the most important factor, we found that Anthropic's Claude was slower but generally returned better results. You could also consider elements like cost and potentially swap out GPT-4 for GPT-3.5.

So the query is passed into the workflow with a parameter that indicates which model you wanna use, and you can see that here.

And this branch connector just simply routes it accordingly, depending on which model is in the query parameter passed to the workflow.
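That branch step amounts to a dispatch on the model parameter. A minimal sketch, with stand-in handlers in place of the real OpenAI, Anthropic, and Google calls:

```python
def ask_gpt4(query):    return f"[gpt-4] answer to: {query}"    # stand-in for an OpenAI call
def ask_claude(query):  return f"[claude] answer to: {query}"   # stand-in for an Anthropic call
def ask_gemini(query):  return f"[gemini] answer to: {query}"   # stand-in for a Google call

# Mirrors the branch connector: the query parameter names the route.
MODEL_ROUTES = {"gpt4": ask_gpt4, "claude": ask_claude, "gemini": ask_gemini}

def route_query(query, model="gpt4"):
    """Route the incoming query to the handler named by the `model` parameter."""
    handler = MODEL_ROUTES.get(model)
    if handler is None:
        raise ValueError(f"Unknown model: {model}")
    return handler(query)

print(route_query("How do I schedule a workflow?", model="claude"))
```

Because each route is just an entry in a table, swapping in a new model vendor is a one-line change, which is the composability point being made here.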

So I've mentioned a few times throughout the demo, and Alex touched on it as well, that Tray is perfect for building these composable AI interfaces.

This is partly due to the new, composable interface, which gives you access to everything all in one view.

But it's also that you've seen how easy it was to build the pipeline we showed earlier, which allows us to ensure that the agent always has access to the latest information.

But it's more than that, and I'm gonna show you a brand new feature now, the first time we're really talking about it, and that is Tray API management.

So this basically allows me to expose this workflow here as a fully operational API, and I've already done just that.

So here is my chat completion endpoint. I've just selected this workflow, and I've decided to expose it as an endpoint that accepts a post method with the following path.

I can set up access control for this. And as you can see, I've already done that, and this is relevant for later as some of these clients are already using the API in a different setting. I can also define policies that allow me to restrict access to the API, or add the rate limits and roles that allow me to link the clients to the policies.

Now let's just have a look at that API in action.

Here it is in Postman.

As you can see, the parameter I'm passing in is GPT-4. This will be used by that branch that I showed you earlier in the workflow, which will route this request to the GPT-4 model. And I'm gonna ask it how I can create a workflow that runs at eleven thirty every day.

And there it is, exactly the same content as if I do ask the agent with links to the documentation as Alex demonstrated earlier.
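For readers wanting to reproduce that Postman call, the request reduces to a POST with the model name in the body and the client credentials in a header. The URL, field names, and header scheme below are assumptions for illustration, not the documented Tray API:

```python
def build_completion_request(query, model="gpt4", token="YOUR_CLIENT_TOKEN"):
    """Assemble the pieces of the chat-completion call shown in Postman.

    The endpoint URL, body fields, and bearer-token scheme are illustrative
    assumptions about how such an API-management endpoint could be shaped.
    """
    return {
        "method": "POST",
        "url": "https://example-tray-instance.com/chat/completion",  # hypothetical path
        "headers": {
            "Authorization": f"Bearer {token}",  # client credentials from API management
            "Content-Type": "application/json",
        },
        # `model` is the parameter the workflow's branch connector routes on.
        "json": {"query": query, "model": model},
    }

request = build_completion_request(
    "How can I create a workflow that runs at eleven thirty every day?")
print(request["json"]["model"])
```

An HTTP client such as `requests` would then send `request["json"]` to `request["url"]` with those headers.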

Okay. Great. So I've got my API, but how can I use this API to infuse AI within my organization and build out the autonomous enterprise?

Well, as you've just seen, I've actually got this up and running already because I've built a couple of clients that are exposing the agent functionality via API.

And some of you might have noticed that one of those clients was a Zendesk client, and that's exactly what I've built right here. Amazingly, I was able to kinda get this together in literally about half an hour. And all it's doing is taking Zendesk tickets as they come in, enriching them with some additional information.

It's then making a call to the knowledge agent using the API that I've just shown you. You can see here with the token, and it's passing in the query the customer raised, and it's getting a response based on the question. It's storing this response, and then it's actually making a request to Slack, an agent in Slack to essentially check the response and send it through. Now let's just see this in action, shall we? So this is how you'd submit a ticket through the Tray platform.

As you can see, I'm asking this query again, and I put all the information in. I'm gonna submit that ticket.

As you can see, any comments that appear would appear here based on the response from the agent. Now let's jump over to Slack and see what happens.

Right. I've got my response through. So here's the question I asked, and here's the suggested response from the agent. You've seen this before, and it's got the link to the details in.

I'm pretty happy with this, so let's send it out.

As you can see, it's told me my ticket's been updated now as the customer. If I jump back over to this interface and refresh, I can see the response through from the agent answering the question that I just asked. So, yeah, you've seen quite a lot. I showed you the new Tray UI. I showed you the workflows that were powering the ingestion process that ultimately powers the agent that Alex showed you earlier and that I've exposed through the API to use throughout our processes.
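The Zendesk client Tom walked through reduces to a small pipeline: enrich the incoming ticket, query the knowledge agent over the API, then hand the draft reply to Slack for human review before it reaches the customer. The function names and stubbed integrations below are hypothetical stand-ins for the real Zendesk, agent, and Slack calls:

```python
def handle_ticket(ticket, enrich, ask_agent, post_for_review):
    """Zendesk-style flow: enrich the ticket, query the knowledge agent,
    then post the draft reply for human approval in Slack.

    The three callables are stand-ins for the real integrations.
    """
    enriched = enrich(ticket)                 # add context before querying the agent
    draft = ask_agent(enriched["query"])      # call the knowledge-agent API
    return post_for_review(ticket["id"], draft)  # human-in-the-loop step

# Stubbed integrations to show the shape of the flow.
result = handle_ticket(
    {"id": 42, "query": "How do I create an API?"},
    enrich=lambda t: {**t, "customer_tier": "enterprise"},
    ask_agent=lambda q: f"Draft answer (with doc links) for: {q}",
    post_for_review=lambda ticket_id, draft: {"ticket": ticket_id,
                                              "pending_review": draft},
)
print(result["ticket"])
```

Keeping the integrations as injected callables is what makes the flow composable: the same pipeline can front Zendesk, Slack, or an in-product form.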

And, yeah, this is it. This is really how you're building the autonomous enterprise with Tray. Now I'm gonna hand you back over to Alex.

Thank you so much, Tom.

So the AI landscape is constantly evolving, and so can you because there is simply no end in sight.

But now you can swap out any part, anytime, and adopt new technologies and players as they emerge. Some of these systems and companies we hadn't even heard of a year ago, and this will continue.

And so you simply cannot afford to make the wrong bet.

And like you heard from Tom, we were able to swap out and experiment with different vendors instantly and constantly, and this continues on, because that's what it takes at this point to stay nimble and to keep up.

One thing we've come to understand in this new era of AI is that AI is actually really easy to play with. Literally, everybody can do it. But we've also learned that it's a lot harder to make it real.

Now our platform is inherently composable, so you get to use and reuse all the building blocks. The result, real software that can be configured and deployed anywhere.

For example, exposing it via Slack took mere minutes thanks to our dedicated templates. Integrating into Zendesk, like Tom showed, same thing.

So AI is easy to play with, and now it's also easy to make real.

So what we're announcing today is a new user experience woven into the Tray build experience, centered on composability and collaboration.

You saw Tom use and show it.

AI ecosystem access that includes new connectors, templates, etcetera. And it is extendable on your terms because I'm also happy to announce today our new modern connector development kit, an SDK that allows you to connect to any system without compromising performance or quality.

Your connectors are just as good as ours, and you can build them fast and to the highest quality.

Then last but not least, API management, which enables you to instantly expose any Tray workflow, existing or new, as a highly scalable API that you can use virtually anywhere.
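To make the API management idea concrete, here is a minimal sketch of what calling a workflow exposed as an API might look like from client code: an authenticated POST carrying the query. The URL, token, and payload shape are assumptions for illustration, not the real Tray endpoint format.

```python
# Hypothetical sketch of calling a Tray workflow exposed via API management.
# The URL, token, and payload shape are illustrative assumptions.
import json
import urllib.request

API_URL = "https://api.example.com/v1/workflows/knowledge-agent"  # hypothetical
API_TOKEN = "tray_api_token_placeholder"                          # hypothetical

def build_request(query: str) -> urllib.request.Request:
    """Construct an authenticated POST request to the exposed workflow."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",  # token issued by API management
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("How do I rotate my credentials?")
# In production you would send it: urllib.request.urlopen(req)
# (not executed in this sketch).
```

Because the workflow sits behind a plain HTTP endpoint, the same call works from a product backend, an embedded integration, or an internal tool, which is what "use it virtually anywhere" means in practice.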

All these new capabilities, combined with our existing capabilities, enable your AI journey. And they're all backed by the Tray Universal Automation Cloud, which takes care of governance, security, and scalability, leaving you firmly in control from start to finish.

All so that you can orchestrate your own AI journey today. And with that, I'll hand back to Rich.

Thank you, Alex and Tom. So exciting to see those new features that are available today in the Universal Automation Cloud that is Tray.io.

Thank you for joining us and hearing about how to get to the autonomous enterprise. This is a vision that we're so excited to bring to you today. This is the future for how you can enable your iPaaS solution to get you on the path to deploying AI successfully. That concludes our presentation for today. Please pop your questions in the Q&A box, and we look forward to answering them there.

Hello. Thanks for sitting through our presentation so far. We are now at the Q&A after-party, and we have a few questions to get stuck into today. So, between myself, Tom, Alex, and Ali, we'll be happy to answer those for you.

So to kick us off, how does the new API management capability compare to choices already in the market?

Tom, I'd love to hear your perspective on that one, please.

Yeah. Absolutely. Thanks, Rich.

Well, firstly, Tray API management will be readily available to all Tray customers as part of the Universal Automation Cloud. So you have full access to API management as part of your existing packages, and any API calls you make using API management are covered as part of your package.

But in terms of real functionality, there are two elements that really stand out to me. When we looked at the market, we saw a choice between power and performance: on one side, powerful but large, clunky, outdated UIs with desktop applications and hybrid browser-based applications; on the other, really smooth experiences that are easy to access and quick to get an API built with, but that lack the performance of those traditional vendors. So we wanted to build something that gives you the best of both worlds. We've got a very powerful rules engine that lets you deliver the API gateway-style functionality you want to control access to your endpoints, through a simple, easy-to-use interface, but backed by the full power of the Tray platform.

So you get access to everything you get with Tray, and you can expose APIs to operate at massive scale, throughout your organization and to your customers.

Thank you, Tom. Onto our next question.

Let's just see here.

What steps does Tray recommend to ensure a smooth AI adoption journey?

Alex, this feels like a question that is right up your street.

Thank you. Yeah. So I think there are a few things to consider here. Right? So number one, you absolutely have to have the right infrastructure.

And then not only do you have to have it, you also have to treat it as that infrastructure.

Right? Because that's the thing. iPaaS is obviously uniquely positioned for this because it is already connecting to every single one of your systems and has the capability to connect to anything you might throw at it. And you need that kind of integration and connection in order to put AI to work throughout your organization.

It's not something you can just sprinkle on top. And that's actually the big disadvantage that we see in the market, with lots of individual vendors trying to add AI value to their offering, because that's exactly it. It's just a little vertical slice of value added on top of a very specific use case that you have. This allows you to really infuse it across the org.

So you have to treat it as infrastructure. And the other thing that is really critical is that there's urgency. Right? This is not a transformation that's gonna happen at some point soon.

You need to start today because you can already start doing this one process at a time across your organization. So for it to be smooth, you have to start now, because it's gonna take some time. That's the truth. So treat it as infrastructure, start today, and then treat it with urgency across your teams.

It's not a long-term mandate. It's a now mandate. But if you give them the right tools and the right infrastructure to operate on, they will be able to deliver against this, and they will succeed. So I think that's what it takes.

Thank you, Alex.

We have another question here.

When and how will the new UI be rolled out? Can I preview or test it first?

I'm gonna open this up to see whoever would like to jump in because I feel like all three of you will be hot on this one.

Maybe I go, because it's the thing that gives me personally a lot of pleasure. It really is transformative, this new experience. But we also know how UI changes normally go.

They always happen at the wrong time. They get sprung on you.

So we went out of our way.

Number one, there's early access for testing that anybody can request and opt in to. But then even when we roll it out at wide scale for everybody, it's gonna be done in a way that lets you, in your own time, decide to explore it, and you can switch back at any time to the old experience. So it doesn't happen right at the moment when you are doing something critical on our platform, trying to deliver something, and suddenly everything changes for you. So on your own terms is really how this is gonna go down. But, eventually, it will be for everybody, because our entire future is built on this experience. New features will be based on it, and we really believe it's critical that this brings all the functions and tools in your organization together to collaboratively work on the same assets in one place, not here, there, and everywhere. So it's critical.

It's instrumental, but you'll get to experience it in your own time, on your own terms when and if you're ready. Eventually, it's gonna be ready for everybody.

Thank you, Alex.

Potentially a follow-up because I'm not entirely sure which piece this one relates to, but I'm gonna take a guess. Is this usable by embedded users?

I think it would be related to API management.

Tom, this feels like one that could be up your street.

Yeah, it will. Well, it pretty much covers everything here. API management will be available to all users, so you can build APIs that can be used in embedded solutions.

But also, what you've just seen, what Alex demonstrated and what I then showed you how to build, could also be exposed as an embedded solution. I've actually started experimenting with just that. I'm using my own credentials there to power the Slack authentication and the Zendesk authentication.

But given how easy it is to convert a workflow into an embedded solution, I could do just that, and you could expose this as templated functionality, which your end users could set up by providing Zendesk or another ticketing tool, for example, and Slack or some other messaging tool to route the queries and the information. So, essentially, everything you saw today is available to embedded users.

Yeah. May I add something really quick? Because I think that's really key. Right? We talk about composability all day long and everywhere because it is so critical, but it's baked into what we're doing here.

Everything you see can be used anywhere and reused and used again for all the purposes that you may have. So we don't distinguish really between are you trying to embed something, are you trying to automate something, are you trying to integrate something? These are all valid use cases using the same infrastructure, the same platform. You do not have to actually do different provision processes.

The question of can I use it here, can I use it there? It all applies.

Just wanted to add something quickly, Rich.

Sorry to interrupt. Like you said, we've all got strong opinions around this particular topic.

The glimpse video actually went into the idea of embedding via the APIs that the product manager built into the product itself, into that Acme demo product, into the renewals billing page. And as Alex and Tom just said, that's something you can do today, entirely. You can build an API, you can use that within your product, and it gives you the ability to have policies, clients, etcetera, and really control how that API is rolled out. So it's certainly something that's available. In terms of the other embedded aspects of the things we've shown today, certainly all of the AI features, in terms of connectors, etcetera, can be used within an embedded context.

We do have some more plans for how you can take advantage of the AI capabilities of the platform from an embedded perspective in the future.

Thanks, Ali. Yeah. As I was gonna say, that's the beauty of the Universal Automation Cloud: it brings the whole thing together so that all the functionality we're releasing can be used across all of these different use cases. And that kinda sets us up nicely for the last question that we have here, which is: do you have a fourteen-day trial?

The answer is yes. You can go sign up on our website and get started with our team. That will give you the full functionality of the Tray Universal Automation Cloud, and you'll be able to get started on your journey to the autonomous enterprise.

And with that, I'd love to close out the session today. I wanna say thanks to our speakers for the content we've put together, and to all of those who attended today to learn more about the journey to the autonomous enterprise.

So with that, we'll be following up very shortly. Our team will send out a copy of the recording relatively imminently.

And until then, we look forward to speaking with you very, very soon. Thank you all.

Featuring

Alistair Russell, CTO, Tray.ai (presenter)

Tom Walne, Director, Product, Tray.ai (presenter)

Rich Waldron, CEO, Tray.ai (host)
