This short demo shows how to use Merlin Guardian to tokenize and safely reinsert PII when working with third-party AI—so your data stays private, end to end.
When teams use large language models (LLMs) with sensitive data, even a single exposed name, email, or ID can create risk. Whether it's customer messages or internal records, most tools can’t protect that data across steps. This demo shows how Merlin Guardian helps teams safely redact and restore personal data inside AI workflows—so you can move fast without compromising privacy.
How Merlin Guardian tokenizes PII in incoming messages
How redacted data is routed through an external AI model
How personal data is safely reinserted before sending downstream
Merlin Guardian is one of the native AI capabilities built into the Tray platform.
In this use case, we're receiving a message from a third-party system such as Gmail, Outlook, Zendesk, or Intercom.
In the first step, we're tokenizing the PII contained in the message received at the top of this workflow.
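Conceptually, the tokenization step replaces each detected piece of PII with a prefixed placeholder token and records a token-to-value mapping for later restoration. Here is a minimal sketch in Python using toy regex patterns purely for illustration; Merlin Guardian itself uses a machine learning model, and its actual token format and API are not shown here:

```python
import re

# Hypothetical patterns -- Merlin Guardian uses an ML model, not regexes;
# these are simplified stand-ins to illustrate the tokenization step.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def tokenize(message):
    """Replace each PII match with a prefixed token and record the mapping."""
    mapping = {}
    counters = {}

    def make_replacer(kind):
        def _sub(match):
            counters[kind] = counters.get(kind, 0) + 1
            token = f"[{kind}_{counters[kind]}]"
            mapping[token] = match.group(0)  # remember the original value
            return token
        return _sub

    for kind, pattern in PII_PATTERNS.items():
        message = pattern.sub(make_replacer(kind), message)
    return message, mapping
```

Calling `tokenize("Contact jane@example.com or 555-123-4567")` returns the redacted text `"Contact [EMAIL_1] or [PHONE_1]"` along with the mapping needed to reverse the substitution later.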
The next step sends that now-tokenized, privacy-safe data to a third-party AI vendor to generate a response we can use as a draft message in another downstream system.
Before we can use that as a draft message, we need the Merlin Guardian connector to detokenize the PII tokens in the generated response from our third-party vendor.
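Detokenization is the inverse operation: given the vendor's response and the mapping from the earlier tokenization step, each token is swapped back for the original value. A minimal sketch, where the function name and mapping shape are illustrative, not Merlin Guardian's actual schema:

```python
def detokenize(response: str, mapping: dict) -> str:
    """Restore original PII values by replacing each token with the value
    it stood in for. `mapping` is the token-to-value dict produced when
    the incoming message was tokenized."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response
```

For example, `detokenize("Hi [NAME_1], we'll reply to [EMAIL_1].", {"[NAME_1]": "Jane", "[EMAIL_1]": "jane@example.com"})` returns `"Hi Jane, we'll reply to jane@example.com."`, which is now safe to use as the draft message downstream.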
We're going to take a look at exactly how this works through the logs and an example spreadsheet.
In this example spreadsheet, the incoming messages are in column A. Column B populates live with the output of Merlin Guardian's first step: the same message with its PII tokenized.
Column C is the third-party AI vendor's response; the tokens land in the correct places because each token carries a type prefix.
Then in column D, we're using Merlin Guardian to safely detokenize that information so it can actually be used as a draft message in a downstream system. This is populated for all the various messages, and you can see the workflow consistently tokenizes and detokenizes the information, keeping PII processing safe throughout.
From the logs perspective, we can see that in Tray all of our executions run in real time, and they continue to populate as messages come in. The tokenization happens within a context of composite AI, so we're using the best tool for the job: an industry-leading machine learning algorithm that identifies exactly which pieces of information are PII, with an associated confidence score. It then creates the mapping, which lets you see the tokens while the PII stays obscured.
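One way to picture the detection output behind that mapping is a list of entity spans, each with a confidence score, where only spans above some threshold get tokenized. This is a hypothetical shape for illustration only, not Guardian's actual log format:

```python
# Hypothetical detection output -- illustrative only, not Guardian's real schema.
detections = [
    {"entity": "PERSON", "text": "Jane Doe",         "confidence": 0.97},
    {"entity": "EMAIL",  "text": "jane@example.com", "confidence": 0.99},
    {"entity": "DATE",   "text": "next Tuesday",     "confidence": 0.41},
]

# Keep only spans the model is confident are PII; everything else passes
# through to the AI vendor untouched.
THRESHOLD = 0.90
to_tokenize = [d for d in detections if d["confidence"] >= THRESHOLD]
```

Under this sketch, the person and email spans would be tokenized while the low-confidence date span is left in the message as plain text.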
We have the response from OpenAI and the other executions coming in in real time. For the detokenization, all we had to do was pass in the mapping provided by the previous Guardian step, and the output is the fully reintegrated response, ready to send to a third-party system downstream.