Webinar
Dec 4
38 min

How to quickly build LLM-powered apps through an AI-ready architecture

Go from prototype to production with LLM apps that integrate, scale, and adapt as your AI and data stacks evolve.


Overview

LLMs are powerful—but building usable, adaptable apps with them is still a challenge. This session shows how teams are applying modern integration architecture to bring LLM-powered apps to production faster, with less risk and more flexibility.

What you’ll learn 

  • How to turn LLM prototypes into production-ready apps without rewriting your stack

  • What makes an integration platform “AI-ready”—and how it helps you move faster

  • How to design for change with composable architecture and fast-evolving models

  • How to manage vector storage, orchestration, and governance across evolving LLM workloads

Session chapters

  1. Why LLM apps stall

  2. Common blockers: Integration, data, and risk

  3. Modern architecture for AI-infused apps

  4. How composable integration accelerates development

  5. What makes Tray an AI-ready iPaaS

  6. Real-world use cases and design patterns

  7. Next steps: Where to go from here

Transcript

Welcome to the Tray.ai session where we'll learn how AI and LLM technologies are changing the landscape for composable apps and serverless architecture.

And even better, we'll learn how an AI-focused multifunction iPaaS is helping developers speed up and simplify their LLM and AI app development.

Our guide for this great session this morning is Paul Turner, product strategist at Tray.ai. Paul, welcome.

Great to be here, Vance.

We're really glad to have Paul with us this morning. He advises enterprise customers on app development, AI, and integration strategies. He's got more than twenty years of experience in SaaS, app integration, and now AI and LLMs, and that'll be on great display this morning in his session, how to fast-track your LLM app development using a modern iPaaS.

This session Paul has put together is specifically tailored to address the data challenges that companies face as they deploy LLMs and look to integrate those into their apps and business processes for AI-enabled outcomes. And Paul's gonna tackle a few items that are really top of mind as we've seen them: how a composable new set of AI integration technologies can help build enterprise-class AI-powered apps, and how you can easily scale, update, and maintain your AI-enabled apps and processes, often with automation.

And for those software architects out there, Paul's gonna share how AI and LLMs are changing composable development, serverless architecture, and data management.

And on a really great note, toward the end Paul has a great shareable list of ten practical tips for successful LLM implementation. So be sure to be on the lookout for that. Just a quick note before we begin: you can download the slides. Just click the big red button in the view area.

You'll also see some other valuable links right there in blue. Those are great supplemental resources, including a free trial, which is just one click away. And we'd love to have you communicate with Paul. So to ask a question or make a comment, just type it into the question box.

So, Paul, great session. Again, glad to have you here. Let me turn to you and tell us how to fast-track our LLM app development with an iPaaS.

Awesome. Thanks so much, Vance. Great to be here. We're really excited to take you through how to quickly build LLM-powered apps using AI-ready integration and an iPaaS.

So what we're gonna cover today: you're gonna learn some of the emerging challenges facing IT and development teams around deploying AI-powered apps, how to get past what I call the "build it and they will come" problem that many teams are facing, and how to really get everything from large language models and GenAI all the way through traditional machine learning-based and predictive models directly into the business, so they're actually providing business benefits. So turning those LLMs into apps across everything from finance, marketing, and sales. And as Vance mentioned, at the end I'm gonna share my top ten practical tips with you.

So just a little background. It was fun to put this together; I spent a lot of time delving deep into the AI side of things. Obviously, as everyone knows, vector databases are a big part of RAG and deploying AI.

They're heavily dealing with dimensional data around embeddings and the metadata around them. And I started my career actually deploying multidimensional databases for financial institutions a long time ago. Definitely pre-GenAI, but it's fun to see how some of this stuff comes back around. I spent a long time in my career in ETL, building data warehouses, creating roadmaps around financial and sales, descriptive and predictive analytics, and other areas.

Worked at many companies in data and apps that I'm sure everyone here is familiar with. My background is as a software engineer in computer science before I moved into strategy. Fun to be here with you and jump into this. Here's the thing.

You know, we're now in, as I mentioned, the kind of "show me" phase of GenAI. We're moving beyond experimentation to production. You know, Gartner predicts that the majority of companies won't only have consumed models through APIs, but they'll have deployed GenAI-enabled apps by 2026. So just over a year away.

So the word here is deployed. Right? It means weaving GenAI into their business. So I'm not sure where you are in your LLM delivery or whether you're beginning to operationalize, whether you're consuming public models or maybe spinning your own models.

But now is the time to get to production and usage. We've kind of moved beyond the prototyping phase, and now it's about picking winners and getting them out to the business. And so if we don't think about deployment, if we don't think about adoption, if we stay in the prototyping phase, we can end up with what I call the next big data. You remember Hadoop way back?

Right? There's a lot of investment, a lot of prototyping, but not much in terms of adoption. And we really wanna avoid making that same mistake, and so I'm gonna show you how in this session.

So when I speak with data teams and AI teams and AppDev teams, there are really three recurrent issues that come up again and again. The first is the volume and variety of projects that teams are facing. I know every business head that we deal with wants a GenAI or a traditional machine learning project, whether they are looking for agents, autonomous or semi-autonomous, NLP-based chatbots, or GenAI around text generation and content generation.

You know, a lot of these projects will not be viable or make it to production. But picking models, prompt engineering, fine-tuning, and understanding the value of it all takes a lot of time. So you have all of these prototypes you're looking to build, and it's all about figuring out which ones win and taking those into production. But all of that takes a lot of effort at the prototyping stage. And all of this is stacked on top of our existing integration, business process management, and automation requirements as well.

And it requires getting to the last mile of AI as well: getting it integrated into the business. Going beyond the model and actually dropping it into the business. The second issue is that every AI project, whether it's GenAI or traditional ML, is also predicated on data. So whether you're using RAG or spinning up your own model, it's all predicated on the data you have, and much of it needs to be vectorized for GenAI.

So you can only move as fast as you can feed your data, connect with your apps and your data warehouse, or maybe ingest new data or unstructured data. And so it's really important to get a handle on how you're flowing that data, how you're connecting it to your business. Many of the AI agents you'll be deploying are also dealing with heavy unstructured data, things like invoices, orders, images, and all those kinds of things too. So the data landscape that we have has grown.

There's more data to deal with.

Finally, if you're running fast, doing those prototypes, getting to production, and also dealing with data, that does create risk. Right? When you're prototyping and training models, you're dealing with public models. If you're doing that in an unmanaged way, it's really easy to create business exposure, compliance, and privacy issues. You definitely wanna move fast, but it's really important not to break things. I'm gonna cover that as well, to really stay focused on ensuring that you're handling data responsibly throughout these projects.

So what do you need to think about when it comes to going to production with a large language model and creating an LLM-powered app? Once the model is trained and tuned, there are really three patterns your AI models are gonna play a part in. The first is business processes themselves. So this left-hand one.

Right? Adding intelligence to business processes. So your finance team, for example, might wanna use a large language model, maybe with dynamic prompts, to follow up on invoices using text generation. Right?

So they wanna infuse AI, for example, into their invoicing process. So that's the left-hand side of the view: upgrading existing business processes, whether it's marketing or finance or services, with AI.

The second is within data integration and reverse ETL. So if you're using, for example, Snowflake or Databricks, you can use large language models to enrich data flowing into the data warehouses. And if you're using those data processes for reverse ETL, you can marry that data with prompts for your LLM as well. So you can use your data warehouse within your AI projects as well. And then finally, if you're creating apps, agents, and, you know, often using RAG, you wanna create composite microservices as well. So you can create APIs and publish those out to your front-end teams to consume in the UX as well. So everything from infusing your processes on the left, through adding more intelligence to your data integrations, releveraging your data warehouse, or using an LLM to enrich data flowing into your data warehouse, all the way to deploying composite microservices that your front-end team can consume but in turn may be calling out to AI services.
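To make that enrichment pattern concrete, here's a minimal sketch in Python of an LLM adding a derived field to rows on their way into the warehouse. It assumes the OpenAI Python SDK, an assumed model name, and a hypothetical load_to_warehouse() helper; it shows the shape of the pattern, not Tray's implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Ask the LLM for a one-line summary to store alongside the raw row."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you use
        messages=[
            {"role": "system", "content": "Summarize the support ticket in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

def enrich_rows(rows: list[dict]) -> list[dict]:
    """Add an LLM-generated 'summary' column to each row before loading."""
    for row in rows:
        row["summary"] = summarize_ticket(row["ticket_text"])
    return rows

# enriched = enrich_rows(extracted_rows)
# load_to_warehouse(enriched)  # hypothetical loader into Snowflake/Databricks
```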

So infusing LLMs into these patterns plays into a pretty vast array of initiatives, across everything from finance, sales, and logistics to customer and employee experience. There's a huge opportunity to roll out everything from predictive and prescriptive to generative AI into those processes, as well as deploying new apps, like customer experience, employee experience, agents, self-service, and chatbots. There's a lot of opportunity to take AI and infuse it into the requirements of the business, but the key is getting your large language models adopted, intersected with those business needs.

So, you know, remember when I talked about data integration being a bottleneck? So, you know, here's why. So, you know, on the left, it's the traditional tech stack. Your apps, your databases, your HCM, ERP, your enterprise data warehouse, your CRM, your supply chain management system.

So a lot of different data silos, and here's where your data is. Then there's a whole new tech stack: the AI stack, as McKinsey lays it out right here, with your vector databases, your unstructured data, embeddings, large language models, additional machine learning models and tools, prompt engineering, A/B testing.

It all requires you to flow that data across it. And the data needs are often very different for each LLM project and each AI project.

And building AI-powered applications really requires you to bring these stacks together fast and responsibly. Your LLM apps are only as good, and as fast, as the data you deliver. So it's bringing the new tech stack together with your traditional one, where your data is. So the good news here is that the data you're dealing with in your old-world stack isn't changing that fast.

You know, if you're leveraging data from your ERP, your HCM, your CRM, it's not changed significantly over the years. That stack's not changing very fast. But the stack you're intersecting it with on the right-hand side, the new AI stack, is changing incredibly fast. GPT, Gemini, Llama, Claude, Command, Mistral, DBRX, Grok, Pinecone.

These are components that didn't exist some twelve, eighteen, twenty-four months ago. You're bringing a very fast-changing stack together with your traditional stack. Right? You need to think about that.

So it really means that as you spin up your large language models, you need to know the AI stack you integrate today will not necessarily be the best stack even in three or six months' time. The large language model you use and the database you use are undergoing a significant amount of change. Technical debt used to take years to accrue; now it takes months. And so you need to ensure that as you integrate AI into your apps, you're not pouring what I would call "silicon concrete," where you're stuck with AI technologies that are suddenly obsolete. As you build your applications, you really have to think about change.

Yeah. So what does this all mean? The key takeaway here is this, and I think Shari Lava of IDC Research says it really well.

Integration developers are absolutely critical in getting business value from GenAI. The best results come from the fusion of models with organizational data. So the key point here is that building LLMs into your applications that you deliver to the business requires integration. Yes. You can think about the stack, but you also have to think about integrating it and integrating data.

So in actual fact, it's more than that. It's actually orchestration as well, and this is how Gartner sees it. The LLMs you use are gonna be orchestrated together with apps and databases, APIs, microservices, document processing.

Once you've built the left-hand side here, which is training the model and designing it, orchestration is then really important to get that last mile: taking it from the team and directing it into the business, connecting it with documents, connecting it with your applications, with your APIs, with human tasks, taking it out to the business. So you have to consider how the LLM you're deploying will play within orchestration, getting it into production and getting the value out of it, and how you intersect it with all of these other areas across your organization.

So before I shift into the how, let's recap. Right? We've got a lot of GenAI projects, a lot of project demands. We need to go really fast in how we experiment and iterate, and not be held back by integration constraints.

Right? We're spinning a lot of different plates at once, and we need to be very efficient in how we build those prototypes because, ultimately, only a tenth of your experiments and prototypes are gonna make it into production. They'll be part of those apps and orchestrations that I mentioned. So we're gonna run rapidly, move to production, and have a lot of flexibility around that.

And, also, finally, one wrong move with the data can land you and your company in hot water. Right? So you also have to be extremely careful around data, ensuring that you're not actually flowing it into the wrong models as you're going through this rapid building, moving to production, and infusing AI into all those different apps I mentioned across the business.

So now I'm gonna introduce the concept of composable AI integration technology. And it's really a way to overcome data roadblocks and integration roadblocks, and also to infuse AI into your application development initiatives no matter what form they take. And it's really got four areas. First of all, it's about creating any AI or non-AI orchestration, adding AI into those patterns I shared earlier. So if you have app integrations or processes or data integrations, being able to really pull a large language model or a machine learning model directly into those traditional orchestrations.

The second is building AI-infused composite microservices as well. Right? Often your front-end teams, as they're building applications, are gonna wanna start to leverage some of the back-end logic that you're developing, which in turn might be calling out to a large language model. Right? And so it's a way to really easily deploy those composite microservices so your front end can consume them within their UXs, whether they're using React or Angular or Vue or some other framework.

The third is intelligent processes. Right? So adding things like GenAI to, you know, invoicing or maybe to prospect responses or your customer support processes. Right?

So building brand-new business processes that use LLMs or predictive models. And also looking to operationalize your data warehouse as well. You know, if you have Snowflake or Databricks or BigQuery or Redshift, it's not just about getting data into them. It's also about releveraging them within the business.

For example, in your lead lifecycle process, you can call out to Snowflake to understand, for example, customer risk, or use it as your CDP.
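As a rough illustration of that warehouse callout, here's a minimal Python sketch using the snowflake-connector-python package; the table and column names are hypothetical, and a real lead-lifecycle step would run this inside the orchestration rather than as standalone code.

```python
import os

import snowflake.connector

def lookup_customer_risk(account_id: str):
    """Fetch a precomputed risk score from the warehouse for this lead."""
    conn = snowflake.connector.connect(
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        account=os.environ["SNOWFLAKE_ACCOUNT"],
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT risk_score FROM customer_scores WHERE account_id = %s",  # hypothetical table
            (account_id,),
        )
        row = cur.fetchone()
        return row[0] if row else None  # None when the lead has no score yet
    finally:
        conn.close()
```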

The final area is agent development as well, building out agents. So whether you're building an internal employee self-service chatbot that maybe connects to your HR app, or you're creating agents that are more autonomous and event-based or trigger-based, easily deploying those kinds of agents as well. And, ideally, you wanna try to do this in one platform, because it's really easy to end up with a lot of platform fragmentation when you think of all these projects. You wanna try and get to one single reusable platform, and that cuts your effort, cuts your maintenance, cuts your development costs, and those other things. So that's really where a composable AI integration platform kicks into play.

So let's start at a high level.

So first, when you think about all those projects, those orchestrations, those integrations, you wanna be able to give your development team a range of ways to build, using low code or code. So you're using drag and drop to build these projects visually, or going code first. Composable development is really important. Right? And what it is, is reuse: all of the objects you build, ensuring that you can reuse them, whether you're building an automation or a data integration, or vice versa. So it's less one-offs, much more Lego building blocks, and that's about being much more efficient with your development.

Then a broad range of iPaaS capabilities, integration platform as a service: things like process automation, data integration, API management, quick connectivity to your AI and your non-AI stack, and self-service for your business teams to connect large language models to their stack as well. So when you think about the set of capabilities you can dip into as you're deploying AI in your applications, you're gonna be connecting everything from your data to your processes and into the APIs that you develop as well. An intelligence layer enables you to build orchestrations faster, using AI to actually help you build, plus AI-infused process development, so you can quickly add LLMs or other services into your processes.

And then finally, ensuring that when you put your LLM into production and into your apps and it starts playing within the business, you have all of the instrumentation, scale, and management around those data flows as well. So you have all the observability to scale those applications in the business, with strong controls around them.

So there are some key takeaways here when you think about that one-platform approach. The first is that one IDE for everything is a huge productivity benefit. So no fragmented tooling. Whether you're building an LLM-powered lead generation process, for example using RAG, flowing data into your LLM, or turning what you build into an API, one IDE means much more productivity and fewer IDEs for your development teams to learn, given the variety of projects they're facing. When you're building, being able to easily select the large language model you use, whether it's a foundation model or a private model, has to be easy. It also means that when you're developing, you can easily adapt to use different LLMs. So as emerging LLMs change over time, you can quickly remap your project onto a new model.

And then finally, if you're building orchestrations and integrations that incorporate AI, the right platform can also provide a copilot to help you build as well, so augmented development. Things like GenAI-powered documentation, performance suggestions, and whatnot. So you can use AI to help you build integrations, build processes, build automations, build APIs.

So drilling into those iPaaS capabilities: one platform for all the use cases. Process automation, so workflows that mirror the exact process, high-fidelity workflows, code-grade logic. There's no daylight between the large language model within your business process and how it's visually represented. Data integration, with data helpers for data transformation and parallel processing to help with bulk data loading, to help build the DI flows and handle the data transformations within them.

API management makes it easy to turn any of the applications you develop into APIs for your front end to consume. You can build a microservice in a few clicks.

And also connectivity as well. So you can connect to any of your apps, your databases, to your models, any API.

And, also, you can roll your own connectors as well using a development kit, and connect on-premise apps. So the whole idea is to overcome those data obstacles as you intersect your AI projects with your stack.

And on the intelligence layer, there are those two areas I covered. The first is augmented development, so your integration developers and builders can deliver faster and also manage post-development. Things like NLP-based development, using copilots to help frame out those integrations and automations using natural language.

They also provide documentation on the workflows you're building and suggest optimizations. So this is just pure developer productivity.

And the second here, on the right side, is AI-infused process development. So this is really about taking the public models, your own privately trained models, vector databases, and ML services, and making it really easy to connect to your AI stack.

And on the right hand side here is also native AI and machine learning, things like sentiment, classification, text extraction, summarization.

So you can quickly build a customer escalation process, for example, that incorporates sentiment, or an automated process that extracts text from a PDF using machine learning. For technologists, it enables you to add AI into the apps you're building really easily, right at your fingertips.

So with integration being so important for taking your LLMs out into the business, it's important that your integration technology natively understands AI data. Most integration tools are geared around traditional row-and-column-based data. They provide you traditional lookup tables and your classic kind of VLOOKUPs. But this isn't good for AI data, where you're dealing with things like embeddings, text and images, and the metadata around embeddings.

So you don't wanna be in the business of having to spin up a vector database every time. If your integration platform provides built-in vector storage, it makes it simpler to handle vector embeddings and unstructured data fast within all the orchestrations and integrations you build.
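For intuition, here's what built-in vector storage is doing for you, in miniature: storing embeddings alongside metadata and retrieving by cosine similarity. A real platform adds persistence, indexing, and scale; this toy version just shows the idea.

```python
import numpy as np

class TinyVectorStore:
    """A toy vector store: normalized embeddings plus metadata, cosine retrieval."""

    def __init__(self):
        self.vectors: list[np.ndarray] = []
        self.metadata: list[dict] = []

    def add(self, embedding: list[float], meta: dict) -> None:
        v = np.asarray(embedding, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))  # normalize once on insert
        self.metadata.append(meta)

    def query(self, embedding: list[float], top_k: int = 3) -> list[dict]:
        q = np.asarray(embedding, dtype=float)
        q = q / np.linalg.norm(q)
        scores = [float(v @ q) for v in self.vectors]  # cosine similarity
        best = np.argsort(scores)[::-1][:top_k]
        return [self.metadata[i] for i in best]
```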

When you're thinking about infusing AI into your business processes, you'll of course want to include a large language model, whether using Cohere or OpenAI or Llama or some other model. So out-of-the-box and extensible connectivity is really important.

But it's also about tapping into predefined, native, common AI operations, things like text summarization, text generation, and classification, so you can immediately start knocking out business requirements quickly as well. Right? So look for a platform that incorporates core AI operations in it, so when you're rolling out that project, your team can quickly tap into them within the platform.
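The "predefined AI operations" idea can be sketched as a tiny helper: canned instructions for common operations so a builder calls classify/summarize/sentiment as one step instead of hand-rolling prompts each time. The prompts, labels, and model name below are illustrative, not a platform's actual operation set.

```python
from openai import OpenAI

client = OpenAI()

# Canned instructions standing in for a platform's predefined operations
OPERATIONS = {
    "sentiment": "Label the sentiment of the text as positive, negative, or neutral.",
    "classify": "Classify the text into one of: billing, support, sales, other.",
    "summarize": "Summarize the text in one sentence.",
}

def run_ai_operation(op: str, text: str) -> str:
    """One call per operation: pair the canned instruction with the user text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": OPERATIONS[op]},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# run_ai_operation("sentiment", "The invoice was late and support never replied.")
```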

So building out an LLM-powered app or orchestration requires scale and trust. Some of this is the same as for the traditional projects you've always run, like governance and security. But other pieces are really important, like elasticity, execution control, and securing data either in stores or in training models, so you can easily continue from where you left off. And, also, ensuring you have dedicated support for data flows in and out of AI models, to ensure data is all tokenized and anonymized.

But whether you're experimenting or going into production, running integrations, automations, or microservices that are calling into your large language model, you need to think about scale. Traditionally, that requires deploying your worker nodes and cores, which require sizing and provisioning.

The problem is that takes a lot of time. Serverless processing enables you to deploy orchestrations, agents, integrations, and APIs without having to install infrastructure. You don't have to deploy containers or install runtimes. Instead, execution is scaled up or down as serverless functions, which means way less ops work for your team. So, we've been talking about flowing data across your AI and non-AI data landscape.

And so it's really important that you also have controls in place. Composable AI integration can provide things like tokenization to obfuscate data flowing to large language models and then detokenize it on the way out as needed. So you can run AI-infused process flows without compromising your data. And if you're building composably, your lower-level building blocks that interface with the large language model can put in controls that are automatically inherited by your higher-level services as well.
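Here's a bare-bones version of that tokenize-before, detokenize-after control: sensitive values are swapped for opaque tokens before the text reaches a public model, then restored on the way back. A production layer would cover far more identifier types than the email-only example assumed here.

```python
import re

def tokenize(text: str):
    """Replace email addresses with opaque tokens; remember the mapping."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    masked = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _swap, text)
    return masked, mapping

def detokenize(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values after the model responds."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = tokenize("Follow up with jane.doe@example.com about invoice 4417.")
# `masked` is what the LLM sees; the raw address never leaves your boundary
# reply = call_llm(masked)           # hypothetical model call
# print(detokenize(reply, mapping))  # restore values on the way out
```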

And, also, you want detailed logs and audit trails so you can see exactly what data is flowing, and have that granularity of visibility.

So all of this composability, low code, integration, and AI-infused development can really help you supercharge delivery. You can go from an idea, for example using GenAI to deploy an employee self-service agent, to a prototype really quickly. And then, in one platform, you can switch to production when ready. So it's really about enabling you to develop faster, but also get out to production in a scalable and trusted way.

So to close this out, I promised I'd share key takeaways. Here's the first. LLMs and LLM-based apps really require integration. If you have a strong, sustainable integration strategy in place, that's really a foundation for success.

The second area is that you need to consider the ways large language models will play in the business, business processes, data integrations, and APIs.

So if your integration platform supports these patterns, you can do everything from adding AI to your lead management to deploying a RAG-based chatbot, for example.

The third here is when you build, you have to assume that the large language model you're using may not be the one you're using in six months' time. So you have to ensure that what you build is adaptable to change, and you're not stuck with the future tech debt.

Composable development, where everything is reusable and component-based, preferably using one IDE across all these projects, really helps speed development and ease maintenance.

And, also, you don't want friction when you're adding AI to processes. If your integration platform makes it easy, you don't have to futz around with APIs or external services for core AI operations like sentiment, classification, or text generation. You want that kind of stuff baked into the platform, so it's at your fingertips when you build. On the scale side of things, when you deploy your LLM-powered apps and any integrations and automations, you've gotta scale. Serverless processing helps you scale elastically, and that cuts a lot of headaches in prototyping and going to production.

Managing data flows is more important than ever, especially when using public models. So things like obfuscation, tokenization, and data governance layers are really important. Composability can also ensure everyone is routed through a few points of integration into the models, for even more control. You can build low-level integrations, then reuse them.

So it ensures everyone's going through a common kind of gateway, with data going through a common obfuscation layer into those models. Less is more. You don't want to be in the business of doing all of this with Python when you're experimenting, creating in all different tools, and then having to rebuild everything when you go into production.

So you wanna use one platform for both prototyping and production. It's also possible to use AI to accelerate your AI project. You can use integration copilots to help with everything from developing faster, to documentation, to suggestions on ways to improve performance. So it's not just about building products with AI and building AI projects.

It's about using AI to speed your delivery as well. Then finally, AI delivery is a team sport. Low-code tools can really help you partner with the business on delivery. So if you're in application development, you can huddle with other business teams around what you're building, because it's all visual, and it's not all buried in code.

So it's a collaborative process with the business. If you wanna learn more about Tray.ai and composable AI integration, you're welcome to jump onto Tray.ai and take an interactive tour. And we also offer a free trial so you can get hands-on and start building quickly using our low-code tools. You can jump on the website right there.

And then with that, I'm gonna hand it back to you, Vance, for questions.

Wow, Paul. Thank you for a whirlwind tour, not only of the tactical side of the whole new world of composable AI integration and how it helps with models and deployment, but also the big picture of what the new architecture and pipeline look like for AI and LLM apps. Really great session.

Thanks, Vance.

Our pleasure.

And luckily for us and our attendees, you've left us a little bit of time for questions. So with your permission, let's get into a couple of questions.

Great.

Paul, you mentioned this theme a couple of times in your session. I think you even had a slide or a chart about it, this composable AI integration.

Can you help define how that works, what it is, and how it might be a little different from some of the palette of low-code drag-and-drop integration that a lot of folks who might be using an iPaaS are familiar with today?

Yeah.

So it is a number of things. The first one is handling all the various patterns that you need to tap into, everything from business processes to data integration to publishing APIs.

The second is it's reusable. Right? You're building all in one single environment. So if you build a component that integrates your data into OpenAI and pulls it back out again, you can use that in all of the various patterns you're deploying for, whether it's a business process, a data integration, or a microservice that you're publishing. And then finally, it's being able to very quickly tap into all of the AI models you need to access, to roll out new connectivity quickly to new AI services, and to access your existing app and database stack. So the keys are: it's flexible across all the different integration patterns that you need, everything from orchestrations to APIs, data integrations, infusing AI into processes, and rolling out agents.

And it provides ease of connectivity into your new stack and your existing stack. And also, you can very easily infuse AI, as it supports those native functions as well, like, say, classification and sentiment and those kinds of areas. But the biggest bang for the buck is that single IDE and maximum reusability, rather than ending up with your development team fragmenting the effort across multiple tools and multiple environments.

Awesome. Awesome. Really a lot going on there. So let me ask this follow-up question. It's probably the elephant in the room, and you probably get asked this a lot.

In this new vision that Tray.ai is articulating for the iPaaS, what is the role of the API? Is it just a smarter API? Or in some cases, for integration, does the API completely disappear and we're using another methodology?

Yeah. When you think about applications you're building, especially if you're deploying, let's say, a chatbot, for example, your front-end team is ultimately gonna be consuming the APIs that you publish. What we're seeing a lot of is that the AI services are changing very quickly around the back end. What's often happening is development teams now are looking to build the microservice themselves, a composite microservice that might be calling out to Llama or Gemini behind the scenes, or calling out to a complex orchestration.

And the front-end team is abstracted away from that; they're just calling out to the abstracted API. This is important because it ensures that the front ends you're building aren't tethered directly into AI services that are gonna be changing very quickly. Over time, you're abstracting your development team away from all of that. And so in Tray, what you basically end up doing is you build the microservice within Tray.

The logic within Tray might be calling out to Pinecone and OpenAI or out into your CRM or ERP deployment, and all the business logic is defined within the microservice. You publish that as a REST API, and your front-end team consumes that composite microservice. One of the benefits is that it simplifies life for your front-end team. The second is that as your AI stack changes over time, your front-end team is completely insulated from that, and you have strong controls on the back end. And, also, it's important for data governance as well.
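That composite-microservice pattern can be sketched in a few lines, using Flask as a stand-in for the platform's API publishing. The front end only ever sees POST /answer; the retrieval stub and the assumed model name behind it can be swapped without the front end noticing.

```python
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

def retrieve_context(question: str) -> str:
    # Stand-in for a Pinecone or other vector-store similarity lookup
    return "relevant passages would be retrieved here"

@app.post("/answer")
def answer():
    question = request.get_json()["question"]
    context = retrieve_context(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; replaceable behind this API
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    # The front end consumes this stable contract, insulated from the AI stack
    return jsonify({"answer": response.choices[0].message.content})
```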

Let's move over to the implementation elements, Paul. You talked a lot about the power to bring integration to my composable AI as well as my LLMs. What are some outcomes or some kind of sample use cases you're seeing among companies?

Yeah. I think some of the most common departmental use cases we're seeing right now, I'd say within marketing, within customer service, and also within finance as well.

Things like knowledge agents.

So customers can self-serve on things like documentation, or enter service requests with the natural-language responses and escalations handled automatically, in those kind of customer self-service chatbot areas. On the finance side of things, we're seeing customers looking to use AI infusion for things like invoicing and billing, handling things like extracting text from unstructured data such as invoices and sales orders. And they can even provide text generation to respond back to customers as well. And on the marketing side, things like customer-facing and prospect-facing chatbots, as well as infusing AI into marketing processes. For example, taking data from Snowflake and using it as your CDP to help inform the demand process. So it's a range of things, everything from smarter processes, to handling unstructured content, to deploying chatbots and customer self-service agents, a whole range of areas.

What's cool about that list is that I would assume that many of our attendees have traditional workflow apps that do some of these things, but they're not smart. They're not AI-infused.

Do you have methods or templates that help people update their legacy workflows, let's say, and add more intelligence with AI thanks to Tray?

Actually, with our product, we have a component called Merlin AI Palette, and you can check it out on the trial.

And from there, you can take your existing process and quickly insert an AI function into it. For example, text classification, text generation, or gauging the sentiment of text.

So with a whole set of functions, you can quickly add AI elements into your existing process, or call out to a large language model as well and incorporate that. So we try and make it as easy as possible to add AI in.

The second area is that we provide a pretty rich template library as well. So if you get hands-on with Tray, when you create that process, you can actually pick from a whole range of AI templates. So you can start with a predefined process, whether it's a RAG-based process or a text classification process, for example, and then take it over the finish line. So everything from making it easy to add AI into an existing process to starting with a template is available in the platform.

Time is speeding by, but let's see if we can squeeze one or two more in. This is really great, Paul. Let's turn to your top ten or one of the top ten in particular. We have a lot of cloud native and hybrid cloud application developers here. And in one of your top ten, you mentioned that serverless architectures can cut their operational effort for AI workloads. Give us a few extra words on how that works and how developers could benefit from that.

Yeah.

With traditional iPaaS, traditional integration, you have to understand what your workload is gonna look like, kind of a priori. You have to deploy a runtime or a worker or a vCore, and if you've got three or four projects coming up, you have to deploy those runtimes to meet those requirements. But if you think about where we are with AI, we're doing lots of prototyping and spinning up lots of different projects. As I mentioned, only a few of those are gonna make it to production.

So what you really need is elasticity. You don't wanna be in the business of provisioning environments and going through sizing. You just wanna operate more of a task-based model where you pay for what you use, without having to go through all the operational effort of installation, provisioning environments, those kinds of things. So that's really what a serverless architecture enables you to do.

Basically, the platform itself is gonna scale up elastically, pay for use on tasks, and you don't have to go through all of the operational overhead of, say, deploying containers and those kinds of things.

What that means is that you can focus on the projects and the prototypes without getting buried on the ops side.
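The serverless model Paul describes reduces, in code terms, to shipping a function and letting the platform fan out one execution per task, with no worker, vCore, or container to size. An AWS-Lambda-style handler is shown purely to illustrate the shape; it isn't how Tray packages work.

```python
import json

def handler(event, context):
    """One invocation per task; scale-out and provisioning are the platform's problem."""
    payload = json.loads(event["body"])
    # ...do the per-task work here (call a model, update a record, etc.)...
    result = {"ticket_id": payload["ticket_id"], "status": "processed"}
    return {"statusCode": 200, "body": json.dumps(result)}
```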

Yeah. It's really great. And, of course, the other element to that is visibility into serverless architectures is not always the best, and knowing exactly where to throttle and where you can do a swap out is pretty difficult. So to some extent, there's some element in Tray.ai that actually helps pinpoint where to focus that effort.

Yeah. So that's a great point. I mean, observability is really important. Within our platform, in our case, we provide what's called our Insights Hub, and that provides all of the visibility in terms of what orchestrations and workflows are consuming what resources, and also trends as well. So you get all of the visibility and observability around usage, but you also get the elasticity to grow.

One last thought here, Paul, and you mentioned it a couple of times, this idea that integration and app development, especially within the AI era, is a team sport.

Given all the new functionality you've got in this modern composable AI integration iPaaS, do you have suggestions or trends you're seeing for how folks can build an AI team that can work on AI-infused apps?

What would they look like? Who should be in my group?

I think the thing to think about here is that you wanna ultimately get to one environment. It's very easy to end up with a huge amount of fragmentation: one app for data integration, another for process automation, another for API publishing, another for agent development, and suddenly you end up with six, seven, eight, nine different tools and applications, and that becomes much harder from a collaboration standpoint. So if you can get down to fewer tools, ideally one tool for all of your orchestrations, integrations, and agent development, then it becomes much easier to move towards a fusion-team-based model at that point.

Really good point. And as you covered there, it also helps put all these different disciplines, these different personas, on the same page. They can all know what they're doing and understand what each other is saying. Paul, this has been great.

I see time has just about expired. But before you go, one last favor. Can you give us a couple of suggestions of how you would recommend people go the next step and learn more about Tray.ai? Maybe go hands on, get a deeper demo.

What could people do as a next step to learn more?

Yeah. There are a number of things you can do, but if you wanted to just have a quick look, you can take our interactive tour, and you can access that from our Tray.ai homepage.

That will guide you on a tour through the platform.

It also provides video resources as well.

As I say, we do provide a full hands-on trial so you can experience the entire platform. So everything from video tours to the trial, and obviously, we're also available for one-on-one chats as well.

Wow. Wonderful. We appreciate all the options. Paul Turner, product strategist at Tray.ai. Thanks very much for a great session.

Really great overview of how the worlds of integration are coming together to enable AI-infused apps and make it easier to work with LLMs. Really terrific session. Thank you.

Enjoyed being part of the session, Vance.

Absolutely. Yeah. And audience, thank you for some really great questions. You really parsed the whole capability of how Tray has put together a new multifunction iPaaS with an AI LLM focus to help folks get started in this hot space and even migrate some of the apps that they've already got into an AI environment.

So really great session overall. Quick notes before we close. Paul mentioned a few assets that you can explore to learn more. Many of them are right here in the breakout room, including that tour and free trial.

And also, as you can tell, there is a ton of innovation going on quarterly, monthly, weekly at Tray.ai. Download Paul's slides and you'll get this slide. It's a page where you can get all sorts of more detailed information at Tray.ai. All these things will be live.

And so thanks again to our speaker, Paul Turner, and audience. Really, really great questions. Thank you.

Featuring

Paul Turner

Automation Expert

tray.ai

Let's explore what's possible, together.

Contact us