Unblocking the AI data supply chain bottleneck
Learn how to modernize AI data pipelines with in-flight SQL transformation. See Data Engineering and SQL Transformer in action in the Tray platform.
Your AI and agent projects are only as good as your data. Transforming, reshaping, joining, and enriching data is critical whether you're flowing data into knowledge bases or into data warehouses like Snowflake, Redshift, Databricks, or BigQuery. But most integration platforms are disconnected from AI agent projects, slowing your team down at exactly the wrong moment.
In this Tray First Look session, see how Tray Data Engineering solves the AI data supply chain bottleneck that causes 60% of AI projects to fail.
What you’ll learn
- The hidden data bottlenecks slowing AI project velocity and how to identify them
- Turning raw data into AI-ready intelligence in a single flow
- Using Tray Data Engineering and the SQL Transformer in your stack, with a live walkthrough showing exactly how
Learn more about what’s included in Tray Data Engineering in the launch blog.
Session chapters
- The AI data bottleneck: why projects stall
- Tray’s approach: from data to intelligence
- Tray Data Engineering demo
- Q&A
Featuring
Paul Turner, Tray
Niels Fogt, Senior Director of Automation Solutions, Tray
Transcript
Hi, everyone. Welcome to today's webinar, Unblocking the AI data supply chain bottleneck. My name is Paul Turner, your host for today's event.
I'm also joined by Niels Fogt, Senior Director of Automation Solutions here at Tray, my partner in crime.
Thank you for taking the time out for us. We've got a packed half hour for you here.
Just to give you a quick run-through: I'm gonna take you through the why. Why are we focused on data for AI, and what are the common themes we're seeing when we speak with customers every day? And then I'm gonna hand it over to Niels.
Niels is gonna take you through a deep dive into the data engineering solution we launched. Gosh, it feels like longer now in AI time, but it was, I guess, a couple of weeks ago. Right?
And we'll take you through a demo as well. We've got a little bit of time for questions and follow-up too. So once again, thanks for your time, and let's jump right into it.
So, at this point, we've implemented hundreds of AI projects. And if there's one truth we see every single time, it's that, interestingly, building agents is sometimes easier than the data aspects. Whether you're fronting up an MCP server and building the tools around it, whether you're building an agent for sales, marketing, or service, whether you're flowing data into PostgreSQL or MongoDB or Redshift to build agents from there, it's all dependent on data.
And one of the things we wanted to share here is that sixty percent of AI projects fail to deliver due to data issues.
So, yes, everyone's building agents, and yes, business teams are asking for them quickly.
But unless you've figured out what we call, at Tray, your data-to-intelligence pipeline, the flow from your source systems through to your lakehouse, your warehouse, maybe some smaller databases you front up for your agent tools and MCP tools, unless you've cracked the code on that, you're gonna hit a snag. That's the choke point for your agent success.
So when we look at the existing landscapes of the companies we speak to, we see three things.
The first is the AI stack: the agent development stacks, the agent gateways, and the MCP servers they're building.
Maybe they're also building out data warehouses and lakehouses, those kinds of things, to support their AI initiatives.
But the data stack's disconnected. They have things like Python scripts, and they have older platforms like Informatica and Boomi.
So they have one set of tools for building their agents and something else entirely for the data side. And what we often find is that the data pipelines and tooling they have just aren't agile enough to support their business needs. If you look at the last ten or twenty years, data pipelines really haven't changed a lot. You have the same feeds rolling into the data warehouse.
Maybe it's your order flows or your customer 360 flows. But now everyone needs new data pipelines: for your marketing, sales, service, and finance agents, and for the MCP servers you're building. You have to be able to front up data pipelines really, really quickly. And the challenge is that the older tooling isn't designed for building out pipelines that fast.
It puts data teams under a lot of pressure to wrangle those tools. And often what we find is that business teams, who want those agents fronted up, are getting frustrated, because it all depends on the data. The other side of this is that when the data pipeline is the same year in, year out, your costs are predictable.
But when data pipelines are changing, when you're building out new pipelines, new flows and integrations, and prepping the data, costs become much more unpredictable as well, whether you're metering in vCores, Atoms, or containers.
It can really quickly get out of control in terms of predictability. So what we've really focused on with this release is bringing together the AI stack and the data stack.
We'll dig into that today: how to move fast on pipelines, and how to be much more predictable on costs as well.
Now, I know we may be joined by customers here as well as folks who are researching Tray, so just a quick view of our perspective. Tray provides an AI orchestration platform to thousands of customers.
We've built a no-code agent builder and an agent gateway. But just as importantly, we also bring iPaaS, which gives you batteries-included data integration. What that means is that, yes, you can build multi-agent orchestrations, and yes, you can build MCP servers with Tray.
You can build end-to-end agents, whether that's a customer or HR self-service agent, a customer 360 agent, or an ITSM agent; whether you're infusing AI into your business processes, maybe adding AI document processing to your order-to-cash flow; or whether you're just fronting up your data warehouse, loading data into Snowflake, Databricks, BigQuery, or Redshift.
But the key is that our approach brings those two sides together: your agentic development, the no-code building and the MCP, and also the data aspects, along with the governance and control around it all. So the idea with Tray is that we deploy one platform, from data to intelligence.
So where does data engineering fit? You can think of Tray as three layers underneath the agents we deploy. We have our enterprise core, which is all the governance, control, audit trails, and logging.
And that's really important for data. You wanna make sure you have an audit trail around your data transformations.
And that traceability as well. Then we have our elastic execution layer.
All of the Tray architecture is serverless, which is a huge benefit: you can scale on demand, throttle up and throttle down, and you don't need to provision capacity a priori.
And then there's our intelligent iPaaS layer. Once again, that's the data-to-intelligence piece.
That's data integration, unstructured data integration, built-in vector tables, lookup tables, data mapping, data transformation, data helpers. And now we've added data engineering too. And the key thing here is that it's all available to you as you build out agents.
So whether you're building tools for your service desk agent or tools for your MCP server, maybe loading up MongoDB or MySQL or PostgreSQL to feed some of those MCP tools, you can build all of that: the integration layer, the gateway, and the agents, with the data integration flowing underneath, all with Tray.
So what does data engineering enable? It includes our SQL Transformer; Niels is gonna take a deep dive into the architecture and a demo.
But, basically, it provides storage and mapping. You can load data into the Tray file system, everything from CSV, JSON, and Excel to common formats like Parquet, and you can pull data out of your external systems as well.
It all rolls into our SQL Transformer, which is built around DuckDB, and you can use ANSI SQL. So you don't have to build out those proprietary steps: you write SQL, and we handle all the transformations in one step, right there. It gives you a lot of flexibility for building out your complex joins, aggregations, and filtering.
You run the SQL, we crunch it, and you can output the results to files, maybe to a JSON format to feed your APIs or a dev database, but also out to your data warehouse, your operational databases, and API endpoints.
And so our idea with data engineering is that it gives you a lot of flexibility, but also a lot of processing power. We've based this around a new enterprise data engine running beneath the covers.
So the key benefits. First, it dramatically reduces workflow steps versus traditional tools. If you think about how you typically build today, you're assembling a lot of proprietary, Lego-style building-block steps, and they can get pretty onerous.
Sometimes transformations balloon into dozens of steps. The focus here is that you just write the SQL, and it dramatically reduces those workflow steps compared to those other tools.
Second, bulk data operations in single steps. Rather than the traditional looping and iterative step-by-step operations, with data engineering you can execute one SQL statement and perform an entire aggregation, remapping, or data transformation in a single step. And that gives you much more processing predictability as well.
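To make that concrete, here's a minimal sketch of the kind of operation that traditionally takes a loop of per-record steps, collapsed into one statement. The source name and columns are illustrative assumptions, not taken from the session:

```sql
-- Deduplicate a customer feed in one statement, keeping the most
-- recent record per email address; no per-record looping required.
SELECT customer_id, name, email, region, signup_date
FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY email
           ORDER BY signup_date DESC
         ) AS row_rank
  FROM customers
) AS ranked
WHERE row_rank = 1;
```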
Third, everyone loves SQL. I'm a former software engineer myself.
And building out SQL is even easier now with vibe coding and AI assistance, and it all works against that built-in database. We've also really focused on modern formats: not only the traditional data formats, but columnar formats like Parquet that you might be getting out of various systems too.
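As a hedged aside on what "built around DuckDB" implies: DuckDB can scan these file formats directly with plain SQL, along these lines (the file names here are hypothetical; in Tray you configure the sources on the step rather than referencing paths yourself):

```sql
-- DuckDB's built-in file readers let SQL scan files directly.
SELECT COUNT(*) AS order_rows FROM read_parquet('orders.parquet');
SELECT * FROM read_csv_auto('usage.csv') LIMIT 5;
```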
And so with that, I'm gonna hand it over to Niels, who's gonna take you through a deep dive into data engineering. So over to you, Niels.
Awesome. Thank you, Paul. And, yeah, everyone does love SQL. In fact, I'm a former customer of Tray's, and SQL was sort of my gateway drug to Tray.
One of our first use cases back in the day, when I was working at New Relic, was a cloud data warehouse we had built. We had all this data in the warehouse, and we needed to operationalize it. And at that time, the only way to do that was to put everything in the warehouse.
Then we had to create views on top of it and learn how to use that data operationally inside the organization. I was in a growth marketing role there, which is a very tech-meets-marketing sort of role.
And in order to make use of all of our customer data, we had to teach ourselves SQL. That brought me to Tray, where I needed to reverse-ETL that data out of our warehouse and into our lifecycle marketing systems and so on. So it's sort of coming full circle with this connector: it's a very simple way, as you said, to work with your data inside Tray in a way that's native to technical people like myself, but, to your point, even to AI agents.
So let me pull up a little demo here, and what demo would be complete without doing it through the lens of an AI agent? You said it, right: agents can write SQL.
Well, agents can take advantage of our data integration capabilities, including the SQL Transformer. I'll show you under the hood in a minute, but what we have here is an agent hosted on Tray, built with a product we offer called Merlin Agent Builder, and it's a SQL data analyst.
Under the hood, this SQL data analyst is gonna use our SQL Transformer to answer questions such as, which regions have the most customers?
And to your point, Paul, that data could come in a multitude of formats. For us, we're using the Tray file system here, and we've got a few different types of files in it that the agent will interface with through our SQL Transformer.
We have a customers JSON file. You can think of it like the JSON any API gives you back, and you might wanna be able to use SQL to query that JSON. For example, you might have Salesforce and need to get your customer data out of it.
We have product data in an Excel spreadsheet, order data in a Parquet file, and product usage data in a CSV. What's great about the SQL Transformer is that it works with any of these files, whether they're in the Tray file system, in an external file system, or coming through from an API.
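For illustration, the question the agent was just asked ("which regions have the most customers?") maps to a query roughly like this; a hedged sketch, with column names assumed from the demo's description:

```sql
-- Count customers per region over the customers JSON source.
SELECT region, COUNT(*) AS customer_count
FROM customers
GROUP BY region
ORDER BY customer_count DESC;
```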
And so back in my chat client here...
Hey, Niels, just to let you know, are you sharing? I don't see your screen share here.
Am I not sharing?
Oh, you're good. Yeah, you're good.
Am I good?
Yeah, you're good.
Can we get a thumbs up? Okay.
Wouldn't be the first time I've gone ten minutes into a demo without sharing my screen.
Alright. I'll stop sharing.
Okay. So what we have here is an agent that was able to pull our customer data by region. And what's great is that it's not only pulling that data using our SQL Transformer; we also gave it another tool that takes the output of the SQL Transformer and turns it into a chart. So we're giving these agents tools, like the SQL Transformer and this charting ability, to give us really interesting visualizations in the agentic experience.
Okay, cool demo, Niels. But how does that actually work?
I'm gonna start with a basic view of the underlying technology that I've exposed to this agent through a tool. So we're not looking at the agent tool yet; I'll show you that later, but first I wanna show you the fundamentals of the SQL Transformer and why we think it's very powerful, both for integration use cases and for agentic use cases.
This first one is just a basic introduction. As I mentioned before, we have a customers JSON file here in the Tray file system, and we have our SQL Transformer step. As Paul mentioned, we can pull data from multiple sources: the Tray file system, an external file system, a plain text source, or anywhere else. Now, we've named our data source "customers".
We've set the data type (we know it's JSON), and we've walked into the Tray file system and chosen that file. We could just as easily have chosen our products data, our usage data, any file format here.
And down below, we have a really simple query, the simplest of queries. This is your hello world of SQL: select star from customers.
So if I run this workflow, we query that dataset, open the most recent log, and get a simple output. We have Sarah Johnson here, customer number one, with an email, a region, and a sign-up date. That's the data in our JSON, essentially, that we've just queried.
But what if we wanna prepare this data for some other system? What if we wanna do some transformations? So now I have that same customer JSON, but a much more sophisticated, and probably more likely, query for a data integration use case where you need to do some normalization.
Paul mentioned this earlier: with other connectors, you might have to write a script, or string together multiple steps with all sorts of logic. But now we can write this SQL really easily.
In fact, I had AI write this SQL. I just said, hey, I need to do a transformation.
I wanna turn the region values from a lowercase single text item into this capitalized "Region - Subregion" format. And I can run that connector, and on that same dataset, we now have transformed data.
The name is standardized, the email is standardized, the region is standardized.
So we've enriched, or transformed, this data using a fundamental SQL query.
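A hedged sketch of what a normalization query like that could look like; the specific region values and column names are illustrative assumptions:

```sql
-- Standardize casing and reshape lowercase region codes into a
-- capitalized "Region - Subregion" label.
SELECT
  trim(name)         AS name,
  lower(trim(email)) AS email,
  CASE region
    WHEN 'us-west' THEN 'US - West'
    WHEN 'us-east' THEN 'US - East'
    WHEN 'emea'    THEN 'EMEA'
    ELSE upper(region)
  END                AS region
FROM customers;
```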
Now, SQL is a very powerful language, so we can do things like aggregates. In this case, we wanna look at our Parquet file that has all of our orders data.
And we wanna do some aggregates: some bucketing with case statements, for example.
We're rounding and summing up segment revenue; in other words, a more complex query. And we'll go ahead and run that.
And we get our aggregates out of it.
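Again as a hedged sketch (the orders columns and the segment boundaries are assumed, not shown in the session), that bucketing-plus-rounding pattern looks roughly like this:

```sql
-- Bucket orders into revenue segments with a CASE statement,
-- then round and sum revenue per segment.
SELECT
  CASE
    WHEN amount < 100  THEN 'Small'
    WHEN amount < 1000 THEN 'Medium'
    ELSE 'Large'
  END                   AS segment,
  COUNT(*)              AS order_count,
  ROUND(SUM(amount), 2) AS segment_revenue
FROM orders
GROUP BY 1
ORDER BY segment_revenue DESC;
```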
If you know SQL at all, none of this should be unfamiliar. But it becomes very powerful when you couple it with the data pipelines you have to create, where you're dealing with lots of different formats, or, again, with the agentic use case, where you wanna expose your data to an agent. The last little thing I'll show you that's kinda cool is joins.
We don't have to work on just one file; let's say we wanna work on multiple files. In this case, I've got both my customers file and my orders file.
And my goal is to understand, by customer, how many total orders they have
and what the total amount they've spent is. So again, we can join data from one dataset to the other with the SQL Transformer, to Paul's point earlier, all in one step.
And we get that data out: things like their total orders and the total amount they spent. It's a very powerful thing. So if we come back to our agentic use case, we can ask that agent, okay,
who are our top ten customers by spending? A similar type of thing, but now, rather than me writing that query, we let the agent write it.
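For illustration, the query the agent generates here would be something along these lines; a hedged sketch with assumed column names, not the agent's literal output:

```sql
-- Join customers to orders, total up orders and spend per customer,
-- and take the top ten spenders.
SELECT
  c.customer_id,
  c.name,
  COUNT(o.order_id)       AS total_orders,
  ROUND(SUM(o.amount), 2) AS total_spent
FROM customers AS c
JOIN orders    AS o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.name
ORDER BY total_spent DESC
LIMIT 10;
```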
And the only thing we actually need to do at that point is provide that agent with a tool.
So if we come over to our agentic experience, we have some tools available to that agent based on this particular connector. For example, we have this "query data source files" tool.
And in that workflow, we have an agent running queries against it in order to come back and provide our front-end experience with a result. So once again, if we look at what our agent is actually doing here, it's writing SQL for our front-end application. I didn't write any of this.
I didn't spend any time writing SQL. I just said, hey, agent: you have a tool you can write SQL against, and you have another tool that lets you explore all the data in our system. Now go use those tools to figure out how to answer the questions of business users throughout the business.
So that's the core of what I wanted to show you today in terms of how the SQL Transformer works. There's a lot you can do with it, both from an agentic perspective and from a core integration and data pipelines perspective.
And we're really excited to release this as the first leg of the data engineering release, adding to all the other tools we have available to make it really easy for you to take advantage of your production data.
Cool.
Thanks, Niels. One of the things that really pops there is that the agent Niels shared was powered by Tray, with Merlin Agent Builder, along with all the data prep behind it. It's that one platform for everything: from the data prep, through building the tools, through Agent Builder, where you're also handling the integration with Slack or whatever your collaboration platform of choice is.
Niels, do you wanna share the deck, or should I? Oh, go ahead.
Sorry, Paul.
Yeah. So just a quick review of the architecture. Basically, we provide the formats: CSV, JSON, Parquet, Excel, and plain text, taking data from external systems as well.
We have our Tray file system, where you can place all the files, and that enables you to handle joins across file formats. So whether you wanna join your CSV, Excel, plain text, or Parquet data, you just write SQL against it. You have the full range of ANSI SQL at your disposal, all built around DuckDB as the underlying SQL engine.
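To ground that "joins across file formats" point, here's a hedged sketch in raw DuckDB terms, with hypothetical file names and columns; in Tray, the sources are configured on the step rather than referenced by path:

```sql
-- One query joining a JSON source to a Parquet source.
SELECT
  c.region,
  ROUND(SUM(o.amount), 2) AS revenue
FROM read_json_auto('customers.json') AS c
JOIN read_parquet('orders.parquet')   AS o
  ON o.customer_id = c.customer_id
GROUP BY c.region;
```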
And then you can output to a data warehouse, to your operational databases, maybe Postgres or MongoDB, or output JSON results for AI pipelines and your knowledge base or vector store. It gives you a lot of flexibility there. But the key thing, when you zoom out, is that you can flow all of that through to Tray Agent Gateway for MCP and through to Merlin Agent Builder for no-code agent development as well.
On use cases, it runs the full spectrum. On the left-hand side, think traditional integrations with no AI at all: marketing ops, handling some of your audience segmentation, or maybe monthly revenue reporting loaded into BigQuery.
Or your data warehouse might act as the data layer for agents: for example, you might load Snowflake for your AI pipeline and have that consumed by AI models sitting above Snowflake. And on the right-hand side, we see customers using data engineering for the use case Niels shared: Tray agents. That means building out reference tables for agent tools, or, if you're using Agent Gateway to front up MCP servers, using data engineering there as well. For example, you might set up Postgres, MongoDB, or MySQL tables, use those to build some of the tables your MCP tools reference, and then publish that using Tray. So, yeah, it's the full spectrum, from the data side all the way through to MCP, agents, and the warehouse.
Alright. With that, I think we've got a little bit of time left. We've packed quite a lot into the half hour here.
So, on to the Q&A. As always, Niels, all the hard questions go to you. Right?
We've got a couple of questions here.
First, a question from Tray customers: how do you access data engineering? Contact your customer success manager if you're interested, and we can work that out for you.
So, yeah, just follow up with your success manager.
On the data side, there's a question around what kind of data this is for: structured or unstructured? With data engineering, we've focused on structured and semi-structured data.
We already handle unstructured data within the Tray platform: we provide built-in vectorization and storage within the platform. So it's worth thinking about it this way: if you have one tool for structured data today and you're handling unstructured data another way, here you can handle all your structured and semi-structured data with data engineering, and your vectorization, partitioning, and chunking with our built-in vector tables, all within one platform with Tray. So, all your styles of data.
Got a question here, Niels, about error handling. How does that work with data engineering, and with our built-in error handling?
Right.
Yeah. So there are several levels of error handling in Tray.
Probably the most straightforward is our connector-level error handling, and we have a couple of things in place there. One: if any connector in a given process fails, we have standard alerting that'll let you know the process has failed and where it failed, including if, in this case, it was on the SQL Transformer step.
In addition to that, we have what we call manual error handling, which effectively turns the connector into a branch: if the step succeeds, go this way; if it fails, go that way. In which case, you can do more dedicated error handling on the failure path.
So that's the default, and it works like any other connector. You also have the ability to just continue the workflow if the step fails, that sort of thing.
Does that answer the question, Paul?
Yeah, I think that covers it. We've got a related question, actually, on logs and audit trails.
How does that work in terms of traceability and those kinds of things? Is it separate logs, or...? How's that work, Niels?
Everything is logged in the same fashion as any other connector on Tray, meaning any step-level failures are recorded exactly as you saw when we were exploring logs inside the interface during the demo. And, if folks don't know, you have the ability to stream any of our logs to an external system, any logging-type platform like Splunk or Datadog, which has some interesting use cases of its own. But no, you'd approach logs in the standard way, just like anything else.
Cool. And then I guess another question here is on the learning curve.
It's kind of interesting, right? Say you've got a data background.
You know SQL, Python, those kinds of things. But the question is really about folks who've not used Tray before.
What's the learning curve there? Niels, maybe you can speak to what the learning curve was like when you first started using Tray?
Yeah. In my opinion, it's always best to start with a clear use case in mind. So you have an objective: for example, you need to ETL some data between a couple of systems and put those things together.
It's always best when you have a clear endpoint in mind. From there, it's just about becoming familiar with the syntax of the platform you're using. For that, we have a low-code, drag-and-drop visual interface; we do webinars like this to introduce people to the concepts; and then you can dive deeper over at Tray Academy and work through the fundamentals.
As far as the learning curve goes, particularly for something like this where there's a coding language involved, so to speak, it's only getting easier with AI. If you have that end in mind and can describe the data you have, you can work with AI to come up with things like the queries you need, which makes you that much faster and the learning curve that much lower.
So we think it's powerful: when you combine AI with our easy-to-use interface, the number of people who can do things like this gets wider, and the speed at which they can do it gets much faster.
Got it. We've got a couple more questions here on extraction.
On the SQL Transformer: could you use it to extract data from MySQL, and everyone's favorite on-premise Oracle database, and load it into Snowflake?
My take is the answer is yes. Niels?
Yeah. There would be, like, two connectors there. A MySQL connector is something we have available, so essentially you'd pull that data from the MySQL database and put it into the SQL Transformer.
You can pull in other data sources there as well, and join or transform that data, whatever it is. And then, of course, we have our Snowflake connector, which would be connector number three inside the workflow. We're happy to work with customers, pair them up with a solution architect, think through the use case a bit more, do some office hours, whatever it takes, to explore the connector for that specific use case.
Looking at the questions here, I guess we have time for one final one. On moving data within Snowflake: can you use this to move data from a staging schema to a target reporting schema, with the required transformations? Yeah.
You can extract data from the staging side with Tray and then flow it into a target schema with the transformations applied. We have customers doing that today, whether it's Snowflake, Redshift, BigQuery, or any of the databases. So if you wanna go from one schema to another, that's all good too.
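As a hedged illustration of that staging-to-reporting pattern (the schema, table, and column names here are hypothetical, and in practice the extract and load would run through the warehouse connector):

```sql
-- Read from the staging schema, apply transformations, and land
-- the result in the reporting schema.
INSERT INTO reporting.customer_summary (customer_id, region, total_spent)
SELECT
  customer_id,
  upper(region)         AS region,
  ROUND(SUM(amount), 2) AS total_spent
FROM staging.raw_orders
GROUP BY customer_id, upper(region);
```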
Anything to add, Niels?
I don't think so.
I know there's a question on the file size.
Yeah. Yeah.
I don't wanna get that wrong; it's very large. I don't have the specifics on hand right now, but I know some connectors have, you know, between-step file size limits. But because we're working with file objects this time, we're not passing raw data into the step, so you don't have the same size limits. You could work with very large files. But we'll make sure your account team follows up with you on the specific file size limits.
Yeah.
We just don't wanna speak out of hand.
Yeah. And we also have concurrency, so you can handle all of this in parallel as well. It's not just about single file size; if you wanna run things concurrently, we can do that too.
So I know we're up against time. Hopefully everyone found that valuable; I think we packed a lot into the half hour there.
A couple of things. If we didn't get to your question, we have it, and we'll get back to you after the event, number one.
And if you wanna get a demo, we have an interactive tour, or you can request a demo, and you might get lucky: you might get Niels providing the demo. So go ahead and head to tray.ai/demo.
There you can request a demo or take an interactive tour, and, obviously, we'll follow up and get your questions answered too. So with that, I wanna thank everyone for taking the time out, half an hour with us today; I know your time is valuable.
And also, thank you, Niels, for the demonstration and the overview of data engineering. Have a good day, everyone. Okay, bye now.
