Demo
5 min

Build trustworthy AI with Vector Tables

Ground AI responses in your own data using Tray’s native Vector Tables—no third-party vector storage, retraining, or fine-tuning required.


Why it matters

AI-powered apps are only as good as the data they use. Vector Tables let you bring your own knowledge into LLM workflows—without retraining or risking data exposure. Because Vector Tables are native to the Tray platform, there’s no need to manage a third-party vector database or additional infrastructure. Whether you're building a chatbot, RAG pipeline, or internal agent, you can store, query, and cite information in a way that's fast to implement and safe to govern.

What you’ll see   

  • How Vector Tables store documentation and unstructured data for use in AI workflows

  • How users ingest data from sources like websites, Slack, or Jira

  • How models are prompted only with verified knowledge—enabling safe, grounded responses

  • How to manage and monitor table contents directly in Tray

Transcript

Today, I am going to be talking about something that almost every single customer and prospect needs help with right now, which is making AI trustworthy, whether it's chatbots, AI assistants, RAG pipelines, agents, or anything else in that category of AI services and applications.

So how do we do this? We need to find a way to give the model access to our data and our knowledge, and make sure that we add guardrails so that it only answers using the knowledge we provide.
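
To make that guardrail concrete, here's a minimal sketch of the kind of grounding prompt a setup like this relies on. This is an illustration only, not Tray's actual implementation; the function name and the prompt wording are hypothetical.

```python
# Minimal sketch of a grounding prompt (illustrative, not Tray's implementation):
# the model is instructed to answer ONLY from the context we retrieved for it.

def build_grounded_prompt(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, say that your knowledge "
        "doesn't include that information and ask for another question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```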

And the data we're passing in is only used to generate answers, so no models get trained on your data using this method.

So cue up Vector Tables, which is available natively in the Tray platform.

You can use this as a core component when making your AI apps and services trustworthy.

It's a fantastic tool for storing data and knowledge that AI services and applications use.

So let's talk about the use case a little bit.

If we go to our own documentation site, we have an "Ask Merlin AI" feature which searches across our entire documentation.

This runs on Vector Tables. So, it's really important for us to get these answers based on our actual documentation and provide citations.
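
As a rough sketch of how citations can work: if each stored chunk keeps its source URL as metadata, the answer can list the documents it drew from. The chunk structure and helper below are hypothetical, not Merlin AI's actual code.

```python
# Sketch: carry source URLs through retrieval so answers can cite them.
# Each chunk dict is assumed to hold the text plus its source "url".

def attach_citations(answer: str, chunks: list[dict]) -> str:
    sources = sorted({c["url"] for c in chunks})  # dedupe source documents
    return answer + "\n\nSources:\n" + "\n".join(f"- {u}" for u in sources)
```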

We also have customers indexing their unstructured data and knowledge from apps like Jira and Slack so that they can also use that data in AI agents on Tray.

So let me show you an example quickly. If I go to scrape a website using this Tray form, put my email in, and use the default settings, I can hit submit.

And we're kicking off a process of ingesting that website into vector storage. So if I come back into my Gmail, I can see a notification that the pipeline is now running. And in a moment, we'll get another notification that the pipeline has finished running, which means that website has been scraped and put into vector storage. And there it is: the pipeline is complete.

The notification also provides, if you want it, a link to quickly open the inference side of things. Inference means asking questions against an AI model, but in this case the model is going to use knowledge from our vector storage. So: how can I build a RAG pipeline on Tray?

Well, I won't even include "on Tray" in the question. So if I hit that link, I can select which model I want to run with. Now, on this side, we're asking that question, and we'll get an answer in a second. And I'll ask something completely irrelevant.

How do I make bread?

That will show you a little bit about the guardrails.

If I come back to my demo email inbox, I'll see a couple of different notifications come in for those two runs against the model. So the first one's here.

So the first one, actually the other one, came in first: "How do I make bread?" And the answer: "My knowledge doesn't include that information. Ask another question."

Perfect. And the second one: "How can I build a RAG pipeline?" We get an answer, but it's not just a random answer. It's actually grounded in the information that we ingested and now have access to in vector storage.
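
Under the hood, a flow like this is classic retrieval-augmented generation: embed the question, find the nearest stored chunks, and prompt the model with only those chunks. Here's a sketch that reuses the `build_grounded_prompt` helper from earlier; `embed`, `search`, and `complete` are hypothetical stand-ins for whatever embedding model, vector store, and LLM you wire up, not Tray's actual API.

```python
from typing import Callable

# Sketch of the RAG query flow. All three callables are hypothetical:
#   embed    - text -> embedding vector
#   search   - (vector, k) -> the k nearest stored text chunks
#   complete - prompt -> model answer

def answer_with_rag(
    question: str,
    embed: Callable[[str], list[float]],
    search: Callable[[list[float], int], list[str]],
    complete: Callable[[str], str],
    k: int = 5,
) -> str:
    query_vector = embed(question)     # embed the question
    chunks = search(query_vector, k)   # nearest-neighbor lookup in the table
    prompt = build_grounded_prompt(question, chunks)  # guardrail prompt from earlier
    return complete(prompt)            # the model only sees the retrieved chunks
```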

So let's look at how this actually works a little bit in the application.

The first thing we did was submit the form. We crawled the web to get the information from that website, and then we sent it to some downstream processing.
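
Conceptually, the crawl step boils down to fetching pages and extracting their text. The template wires this up for you; this standalone sketch just shows the idea, using the requests and beautifulsoup4 libraries.

```python
import requests
from bs4 import BeautifulSoup

# Conceptual version of the crawl step: fetch a page and strip it down to
# plain text, ready for downstream chunking and embedding.

def scrape_page(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    return {"url": url, "text": text}
```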

The beauty of Tray is that with this template, it's all set up for you, and you can go as advanced or as surface-level as you want. One of the next processes is to upsert this into vector storage. You can see it's grabbing some information about the run, and it's embedding the content. Again, if you don't know what that means, that's okay: it's all preset for you, and it stores the vectors in a vector storage database.
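
For context, an "upsert" inserts a vector, or updates it if one with the same id already exists, so re-ingesting a page doesn't create duplicates. A rough sketch of the chunk-embed-store step, with a hypothetical `embed` helper and `table.upsert` call standing in for the template's nodes:

```python
from typing import Callable

# Rough sketch of the upsert step: split each page into chunks, embed each
# chunk, and store the vector alongside its text and source URL.
# `embed` and `table.upsert` are hypothetical stand-ins, not Tray's API.

def upsert_pages(pages: list[dict], table,
                 embed: Callable[[str], list[float]],
                 chunk_size: int = 500) -> None:
    for page in pages:
        text = page["text"]
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        for n, chunk in enumerate(chunks):
            table.upsert(
                id=f'{page["url"]}#{n}',  # stable id, so re-runs update in place
                vector=embed(chunk),      # embedding of this chunk
                metadata={"url": page["url"], "text": chunk},
            )
```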

Then the last thing: we submitted the form and ran inference. This step comes through, uses Vector Tables again, chooses which model to call (a simple branch here), and then sends that email back.

And the process of standing up a new Vector Table is extremely easy. You can come to the left-hand menu here, click plus, name your table, and select the number of dimensions. If you don't know anything about dimensions, that's okay; we've got a quick guide here for you on setting that up.

Then, when you have items in your Vector Table, they show up here, and you can see the count of vectors and so on, to quickly get a sense of what just got added and have some visuals there.
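
If you do want the short version on dimensions: the number you pick when creating a table has to match the output size of the embedding model you use, or vectors can't be stored and compared. For example, OpenAI's text-embedding-3-small produces 1536-dimensional vectors by default. A tiny illustrative check:

```python
# The table's dimension count must match the embedding model's output size.
# 1536 here matches text-embedding-3-small's default output; adjust to your model.

TABLE_DIMENSIONS = 1536

def check_vector(vector: list[float]) -> None:
    # A mismatched vector can't be stored or compared meaningfully.
    assert len(vector) == TABLE_DIMENSIONS, (
        f"expected {TABLE_DIMENSIONS} dimensions, got {len(vector)}"
    )
```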

So if you're interested in this and want to learn more, and hear how IT departments and other teams across different organizations are building out their applications and services on Tray, we'd love to see you at one of our upcoming workshops.

Also, we'd love to hear from you in the community. Thank you. Have a great day.

Let's explore what's possible, together.

Contact us