

December 16, 2025

Neuro SAN: Securing Data in Agent Networks

A deep dive into the security principles behind Neuro SAN and how it protects private data within complex, interconnected agent workflows



Agentic AI systems are becoming increasingly powerful – but with that power comes a new class of security risks. As agents communicate, reason, and act across tools and data sources, sensitive information can unintentionally spill into LLM prompts, cross trust boundaries without oversight, or become visible to third-party model providers. Whether you’re an individual experimenting with multi-agent workflows or an enterprise deploying AI at scale, the challenge is the same: how do you keep control over your data when autonomous components are talking to each other?

Neuro SAN was created to address exactly this. Built with a security-by-default philosophy, it gives developers full control over execution environments, model selection, and private data handling – ensuring sensitive information never enters an LLM stream unless explicitly allowed. Neuro SAN’s design empowers secure, transparent agentic operations without sacrificing flexibility, capability or scalability.

Below, we walk through the core features that make Neuro SAN a secure foundation for multi-agent systems.

Choice of Execution Environments

With Neuro SAN, you have control over where your Neuro SAN servers run, as opposed to being relegated to someone else's walled garden or server execution environment. We provide sample Dockerfiles so your Neuro SAN deployments can live within your own security perimeter, on your own terms.

This means that if your Coded Tools need to access sensitive data requiring credentials, you can do that on your own terms. All your server logs are your own, and routable via OpenTelemetry to whatever observability platform you like (if you want any at all).

Most importantly, you don't have to poke scary new holes in your firewalls to let in requests originating from God-knows-where.

Usage of Privately Hosted LLMs

For sensitive deliberations, you do not necessarily want all of your agents' LLM chat streams going back to the companies that host the LLMs, let alone being used to further their training efforts. As discussed in the earlier article "Use the Right LLM for the Job", Neuro SAN's llm_info configuration allows you to select LLMs hosted within secure data boundaries, such as the Microsoft Azure or Amazon Bedrock cloud environments, and gives you the ability to change LLM parameters for each usage as needed.

Furthermore, if you are hosting your own LLMs for security reasons, and/or have one or more fine-tuned models for your own domain specifics, you can add those to your agent network configuration mix wherever appropriate. You can do any of this LLM-specific configuration on an agent-by-agent basis within any Neuro SAN agent network.
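As a rough sketch, per-agent model selection inside an agent network HOCON might look like the following. The agent names, model names, and parameters here are illustrative, and the exact llm_config schema is described in the llm_info documentation:

```hocon
# Sketch of per-agent model selection in an agent-network HOCON file.
# Model names and parameters are illustrative; see the llm_info
# documentation for the exact schema.
"tools": [
    {
        "name": "triage_agent",
        "llm_config": {
            # A deployment hosted inside your Azure data boundary
            "model_name": "azure-gpt-4o",
            "temperature": 0.2
        }
    },
    {
        "name": "domain_expert",
        "llm_config": {
            # A privately hosted, fine-tuned model registered via llm_info
            "model_name": "my-finetuned-domain-model",
            "temperature": 0.7
        }
    }
]
```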

Keep Private Data Private

One of the more intriguing security features of Neuro SAN is what we call "sly_data".

With any regular chat request to a Neuro SAN server, you can send standard text input with an accompanying dictionary carrying a private data channel. Within any Neuro SAN server, this sly_data channel (as in: data "on the sly") is treated _completely_ separately from the data that goes into and out of the agents' chat streams. Crucially, the sly_data is only ever sent to Neuro SAN Coded Tools that you provide; it is _never_ sent to any chat stream by the server infrastructure we provide.

The keys of the incoming sly_data dictionary are always strings, and the values can be whatever you want as long as they are serializable: strings, lists, deeply nested dictionaries, whatever. When your Coded Tool implementation is invoked by an agent, you get the single copy of the sly_data dictionary that is common to the request. So it's only your Coded Tools on your servers that have access to the sly_data coming in.

Furthermore, your Coded Tools have the option to add to the sly_data dictionary when they are invoked, not unlike a bulletin board.  This allows for private data acquired by the Coded Tools to also be part of the sly_data _output_ of an agent.  Secure data in, secure data out.  (Note: there are other network-internal uses for sly_data-as-bulletin-board as well, but that is for another article.)
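To make this concrete, here is a minimal sketch of a Coded Tool that both reads from and writes to the sly_data bulletin board. It assumes the CodedTool base class and invoke() signature from the open-source neuro-san package; the tool itself, its inputs, and the lookup logic are hypothetical:

```python
from typing import Any, Dict

# Assumes the CodedTool base class from the open-source neuro-san package;
# see the Neuro SAN documentation for the exact interface.
from neuro_san.interfaces.coded_tool import CodedTool


class AccountLookup(CodedTool):
    """Hypothetical tool showing the sly_data read/write pattern."""

    def invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        # args: the LLM-visible arguments the calling agent chose to pass.
        # sly_data: the private channel; never enters any chat stream.
        user_id = sly_data.get("user_id")     # private input from the client

        # Stand-in for a real secure lookup keyed on the private user id.
        account_number = f"acct-{user_id}"

        # Bulletin-board style: post private output for down-chain tools
        # (and, if the network config allows, for the client).
        sly_data["account_number"] = account_number

        # Only this return value is visible to the agents' chat stream.
        return "Account located for the current user."
```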

An Example

Let's say that your agent network needs temporary user-specific credentials to access Personally Identifiable Information (PII) in a secure database. One or more sets of LLM-based agents might discuss amongst themselves, via natural-language chat streams, which PII elements are needed, but actually retrieving the data securely is not a job for LLMs. For that we have Coded Tools do the work.

The user id itself can be considered PII and can come in as part of the sly_data dictionary input for later access by other agents in the network.  The user themselves might not have access to the authorizing entity dispensing the desired key, but the Coded Tool in the Neuro SAN server can be granted that access, just like any other upstanding member of your microservices mesh might be. This first tool can fetch the user-specific key and add that to the sly_data for down-chain agents within the network to access later.

Now, a single Neuro SAN agent network is in and of itself its own trust boundary. As long as the Coded Tool that fetches the sensitive information from the database is within the same agent network, it has access to the user's key only for the duration of the single Neuro SAN request. To keep the PII out of the chat streams of the other LLM-based agents in the network, that database tool can add the sensitive value to the sly_data, to be returned to the client as part of the request result later.
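Here is what those two tools might look like as Coded Tools, again assuming the neuro-san CodedTool interface; the authorizing-service and database calls are stand-ins:

```python
from typing import Any, Dict

from neuro_san.interfaces.coded_tool import CodedTool


def fetch_user_key(user_id: str) -> str:
    """Stand-in for a call to the authorizing entity that the server
    (but not the end user) is permitted to reach."""
    return f"temp-key-for-{user_id}"


def secure_db_query(user_key: str, field: str) -> str:
    """Stand-in for a credentialed query against the secure PII database."""
    return f"<{field} fetched using {user_key}>"


class GetUserKey(CodedTool):
    """First tool in the chain: exchanges the private user id for a
    temporary credential and posts it on the sly_data bulletin board."""

    def invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        sly_data["user_key"] = fetch_user_key(sly_data["user_id"])
        return "Credential acquired."


class FetchPii(CodedTool):
    """Down-chain tool: uses the key left by GetUserKey to retrieve the
    PII element the agents asked for, keeping both the key and the PII
    out of every chat stream."""

    def invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:
        field = args.get("pii_field", "email")  # what the agents decided they need
        sly_data[field] = secure_db_query(sly_data["user_key"], field)
        return f"Retrieved requested field: {field}"
```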

At this point, the sly_data dictionary has a bunch of data in it: some we got as input from the client, some we want to return to the client, and some that should never leave the server. How do we control that access?

Preventing Sensitive Information Leaks

In Neuro SAN's configuration-driven agent network specifications, agents can have what we call an "allow" block specified in their HOCON files. (Recall: the HOCON standard is effectively JSON with comments - better than YAML.) For the entrypoint agent (or "front man"), you can specify exactly which keys of the sly_data dictionary are allowed to go back to the client as a return value. If a dictionary key is not mentioned in this configuration, it does not go back to the client. It is by this mechanism that the user-specific authorization key never leaves the Neuro SAN server, while the PII element from the database can.
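A sketch of such a front-man spec is below. The nesting of the "allow" block here is our illustration of the idea; check the Neuro SAN sly_data documentation for the precise schema:

```hocon
# Front-man agent spec (sketch). Only the listed sly_data keys are
# returned to the client; "user_key" is not listed, so it never leaves
# the server.
{
    "name": "front_man",
    "allow": {
        "to_upstream": {
            "sly_data": ["email"]   # the PII element the client may receive
        }
    }
}
```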

Taking this allow-block security concept even further, consider that Neuro SAN agent networks can also invoke other Neuro SAN agent networks as tools to form agent webs. You also have fine-grained control over which private sly_data keys can be communicated to these external agent networks, via a separate "allow" block in the network HOCON.

To continue with the example, the PII can be forwarded to the callee agent network as sly_data input, but the user id and key do not have to come along for the ride unless specifically called out in the "allow" block configuration. 

And finally, to put the icing on the security cake, the calling Neuro SAN network doesn't have to accept all the sly_data output from its callee, either.  The calling Neuro SAN network can choose to accept whatever sly_data it wants from downstream agents by yet another "allow" block. And, all along the way, _none_ of this information ever has to enter any LLM chat stream (or observability logging, for that matter).
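Putting those last two pieces together, the call site for an external agent network might filter in both directions, along these (illustrative) lines:

```hocon
# Call site for an external agent network (sketch; key names illustrative).
{
    "name": "external_records_network",
    "allow": {
        "to_downstream": {
            "sly_data": ["email"]           # forward only the PII element;
                                            # user id and key stay behind
        },
        "from_downstream": {
            "sly_data": ["records_status"]  # accept only the keys we want back
        }
    }
}
```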

In the end, it's still up to the agent network developer to keep private information private, but Neuro SAN's security-by-default philosophy gives you control over where your sensitive information goes at every point of entry and exit to your agent networks.



Daniel Fink

Associate Vice President — Platform Engineering


Daniel Fink is an AI engineering expert with 15+ years in AI and 30+ years in software — spanning CGI, audio, consumer devices, and AI.


