

February 11, 2026

Behind the Build with Daniel Fink

An inside look at the work and expertise of Daniel Fink, our AVP of Platform Engineering.


Animated image of Daniel Fink on a gradient red background

Behind the Build is a series featuring developers and engineers from Cognizant AI Lab. The series offers an inside look at how AI systems are designed, built, and taken to production, spotlighting the people behind the code, the technical decisions they make, and the lessons learned along the way, as well as how their work is shaping AI technologies that are becoming part of everyday enterprise systems.

What first pulled you into AI development or systems engineering?

  • I actually started in the consumer electronics space. I was working on navigation systems when a friend introduced me to a company called Genetic Finance. That’s where everything shifted.

    Genetic Finance was using evolutionary algorithms to simulate financial markets — essentially running millions of autonomous bots as day traders. Watching algorithms compete, adapt, and evolve in that environment was eye-opening. It wasn’t just automation — it was emergent behavior. That’s when I really got pulled into AI development. I saw how systems could be designed to learn, adapt, and operate at scale in complex environments.

When designing an AI system like a Neuro-AI Multi-Agent Accelerator, how do you decide what’s genuinely worth building versus what’s just a trendy agent pattern?

  • At the engine level, I stay very attuned to what people are actually asking for. If multiple teams independently converge on the same need, that’s a strong signal. When you see duplication across use cases — the same patterns appearing in different contexts — that’s worth formalizing and building properly.

  • There’s also the inverse: things people consistently express they need, but that no one has solved cleanly yet.

  • Another major filter is scalability. Everything has to scale — not just to one server, but across many. If a pattern can’t scale “ridiculously,” it’s probably not foundational.

  • Vendor lock-in is another big consideration. Early assistant systems — even before NeuroSAN — were heavily tied to specific ecosystems like OpenAI. That kind of lock-in limits trust and flexibility. With NeuroSAN, we wanted true deployment freedom: choose where it runs, protect against data leakage, avoid ecosystem constraints, and maintain architectural independence.

  • If a trend locks you into someone else’s infrastructure or limits flexibility, it’s probably hype. If it increases scalability, configurability, and trust, it’s worth building.

Is there an agentic AI system or feature you’re especially proud of deploying? What broke along the way, and what did that teach you about building robust agents?

  • I’m especially proud of the ultra-configurability we achieved in NeuroSAN. It started as experimental code from Babak — very hard-coded, very specific. Over time, I kept refactoring it. The goal was to reduce the essential behavior of the system into data specifications rather than code. Eventually, the system became almost entirely text-driven. The logic moved from rigid code paths into configuration.

  • That shift was transformative. Instead of programming agents in a fixed way, agents could arrive with data specifications. Agents could create other agents. The system became composable and extensible rather than brittle.

  • A lot broke along the way. There were calculated risks — especially when modifying frameworks like LangChain to enable LLMs to talk to other LLMs in more structured ways. Some of that involved tremendous hacks that later had to be stabilized and cemented properly.

  • Refactoring was the key skill. Moving the “essence” of functionality into cleaner, more modular, more testable structures. When you structure something well, it doesn’t break easily. I’ve been told I’m a “master refactorer,” and I think that discipline — constantly distilling systems down to their essence — is what makes NeuroSAN robust.
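The shift from code paths into configuration can be illustrated with a small sketch. Everything below is hypothetical — none of these names or structures are actual NeuroSAN syntax — but it shows the core idea: agents are defined as plain data, the runtime merely interprets that data, and "agents creating agents" reduces to adding a new spec.

```python
# Hypothetical sketch of a configuration-driven agent runtime.
# These names do not come from NeuroSAN; they illustrate the idea of
# moving behavior from rigid code paths into data specifications.

AGENT_SPECS = {
    "summarizer": {
        "instructions": "Summarize the input in one sentence.",
        "tools": [],
    },
    "researcher": {
        "instructions": "Gather facts, then delegate to the summarizer.",
        "tools": ["summarizer"],
    },
}

def run_agent(name, message, specs=AGENT_SPECS):
    """Interpret an agent purely from its spec: the 'logic' lives in data."""
    spec = specs[name]
    # A real system would call an LLM here; we just record a trace.
    trace = [f"{name}: {spec['instructions']}"]
    for tool in spec["tools"]:  # delegation is data-driven too
        trace.extend(run_agent(tool, message, specs))
    return trace

def spawn_agent(name, instructions, specs=AGENT_SPECS):
    """Agents creating agents: adding a spec *is* adding an agent."""
    specs[name] = {"instructions": instructions, "tools": []}

spawn_agent("critic", "Point out weaknesses in the summary.")
print(run_agent("researcher", "What is emergent behavior?"))
```

Because the whole network is data, it can be serialized, diffed, and generated by another agent — which is what makes the system composable rather than brittle.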

What are you currently building or improving in the agentic space, and what core technical breakthrough or design philosophy is guiding that work right now?

  • Right now, I’m excited about agents creating other agents. Because NeuroSAN is configuration-driven, you can tell an agent in plain text to create a form, define a task, or assemble a workflow. You’re not coding — you’re specifying intent. I call it “vibe coding” for a network of agents.

  • We’re building agent templates and hierarchical systems — especially in deep agentic RAG architectures. In those systems, individual agents are responsible for specific segments of a large document store. For example, in a legal context, one agent might specialize in contracts, another in regulatory filings, another in case law. They assemble their outputs hierarchically into a cohesive result.

Looking back, what skills mattered more than you expected in becoming effective at building agentic or multi-agent systems, and what advice would you give developers who want to work seriously on agents?

  • Ironically, management skills. The best agent prompters — Babak Hodjat is a great example — know what not to say. That’s as important as what to say.

  • Building agentic systems is like managing a team:

    • Focus on outcomes, not micromanaging the process.

    • Don’t over-specify every step.

    • Break large goals into smaller objectives.

    • Understand cognitive load — machines, like people, don’t have infinite bandwidth.

  • There’s also something I call information empathy. You have to ask:

    • What does this agent know right now?

    • What have I already told it?

    • Does it have enough context to succeed?

  • If you overload it or assume knowledge it doesn’t have, performance degrades.

  • Another big lesson: manage expectations. Agents are far more complex than search engines, but sometimes we expect perfection. Set time and quality expectations realistically. Don’t overthink. Don’t over-engineer. And don’t micromanage.

Looking ahead, what domains or workflows will agents transform next, and what emerging agentic trend or capability makes that transformation possible?

  • Language-heavy domains are next: medical, legal, policy, regulatory environments. These spaces are deeply language-based rather than spatial. That’s where LLM-driven agents shine.

  • The breakthrough isn’t just bigger models — it’s specialization. Taking base LLMs and training or fine-tuning them toward specific domains and jargon. Pair that with agentic RAG — where agents represent different slices of a massive document corpus — and you get something like a highly specialized research assistant.

  • For example, in legal systems:

    • Separate agents represent different document types.

    • Each agent specializes deeply.

    • A coordinator agent assembles outputs.

    • The system increases certainty — even if it costs more.

  • Certainty is worth the cost in high-stakes environments.

  • Long term, I see something analogous to the early web: publicly available agent servers. Agents acting on behalf of users. Agents evaluating other agents. Agents “raiding” agents — discovering, testing, and invoking capabilities across networks. That ecosystem shift will be profound.

If you could deploy personal AI agents for yourself today with no constraints, what silly things would you want them to do?

  • First — fill out my timesheets.

  • Second — a joke rater. Something that tells me whether a joke is actually funny before I say it.

  • Third — a backseat driver agent that tells everyone when they are being bad drivers.

  • And while we’re at it — AI-enabled traffic lights that actually coordinate so I’m not sitting at red lights for no reason.



Daniel Fink

Associate Vice President — Platform Engineering


Daniel Fink is an AI engineering expert with 15+ years in AI and 30+ years in software — spanning CGI, audio, consumer devices, and AI.


