Building Intelligence for Real-World Complexity
Jyotirmoy Sundi
CTO & Co-Founder
Votal AI Inc
Building Intelligence for Real-World Complexity
Jyotirmoy Sundi
CTO & Co-Founder
Votal AI Inc
AI is no longer a distant concept, it’s becoming part of everyday work, and organizations everywhere are asking the same question: how do you trust it in the moments that matter most? The leaders making real progress aren’t driven by trends. They understand how unpredictable real-world environments can be, and why technology must hold up even when conditions are far from ideal. This new era belongs to leaders like Jyotirmoy Sundi, who know that intelligence is meaningless without reliability and safety. His journey through Motorola Solutions, Lotame, Intuit, Walmart Labs, IPSY, DropYacht, and now Votal AI has exposed him to fast-moving systems, unruly data, and the relentless pressure of operating at scale. Those experiences forged a simple conviction: AI must withstand real-world conditions and deliver real-world value. Jyotirmoy carries this philosophy into his role as CTO and Co-Founder of Votal AI, where he leads the development of practical, trustworthy and secure AI that helps people work safer and smarter. His team builds voice assistants for frontline and industrial workers, visual AI systems that empower engineers, and responsible AI platforms that help businesses learn, adapt, and operate with greater clarity. With a strong focus on AI security, trust, transparency and real-world usefulness guide every decision he makes. During an exclusive conversation with TradeFlock, he discussed his journey, the philosophies that guide his work and the path forward as AI enters its most consequential decade.
What guided your shift from engineer to building enterprise-ready AI products?
Early experiences tend to leave a deeper mark than we realize in the moment. In my years at Motorola Solutions, I wrote software knowing it might be used during an actual emergency. That awareness quietly rewired how I saw engineering. It impressed on me a principle that has held true across Walmart, IPSY, Cribl, and now Votal.ai: if it does not work reliably in the real world, it does not count. Anything that only works in a neat development environment or a flawless demo simply does not survive the realities of production. This mindset has kept me from getting carried away by prototypes that look impressive but crumble under real pressure. I have seen that happen often enough to know how costly the gap can be. So I quickly push ideas from presentation decks into small, production-grade pilots, where real users, real data, and real constraints provide the only feedback that matters. Metrics grounded in behavior guide decisions more strongly than anyone’s opinions, including my own. Another habit that became part of my leadership: thinking about how something might fail long before discussing how it will succeed. Teams question misuse, corner cases, long-term drift and operational weaknesses right from the beginning. That practice makes room for both ambition and discipline. Holding those two together is what turns an interesting AI idea into something an enterprise feels confident running at the very center of its operations.
How do you guide teams to turn deep research into solutions that deliver real value?
The work often takes place between two very different realities. One is the research world, full of elegant ideas and technical breakthroughs. The other is the world of customers, operations, time pressure and unpredictable constraints. Bridging those two consistently shapes how I lead teams. We never begin with a model. We begin with a specific outcome that matters: reducing call handle time, preventing mis-picks, or catching unsafe behavior before it reaches production. Once that outcome is clear, everything flows backward from it. Techniques, data choices and architectures get selected not for novelty but for their ability to move that specific number within that specific environment. Experimentation still matters, but it lives inside clear guardrails. We run small production pilots, we track explicit metrics together and we listen to the people who actually use the system. On the technical side, I prefer designs that my younger self at Motorola or Walmart could have operated confidently: systems that are simple to reason about, easy to monitor and resilient under load, even if they are no longer fashionable. There is also a human part to this. Researchers join customer calls, post-mortems, and field sessions to understand the constraints behind the problems they are solving. Over time, this builds a culture where research is not an academic pursuit but a tool that earns its way into the product only when it demonstrates real value.
How do you foster a culture that holds trust, privacy and reliability as strongly as innovation?
The connection between trust and adoption became clear early in my career. If people do not trust a system, they simply will not rely on it, no matter how innovative it seems. That realization eliminated any idea of a trade-off. Trust and innovation operate side by side from the very beginning. Privacy, security, and reliability are addressed in the first whiteboard sketch, not at the end of development. They influence how we handle data minimization, how we enforce tenant isolation and how rigorously we test our own systems. My team regularly red-teams our large language model workflows because waiting for someone else to challenge them would be irresponsible. Culture plays its own quiet but powerful role. When something goes wrong, the post-mortems are direct but free of blame. People know that raising risks or acknowledging uncertainty early is valued. Each model clearly states where it performs strongly, where it is fragile, and which assumptions it depends on. Customers see this plainly. Over time, engineers have come to recognize and appreciate the secure, privacy-conscious path. That trust within the team eventually becomes the foundation that enables us to meaningfully push into new AI territory without sacrificing reliability.
"If AI can give every person a patient, high-quality tutor, it will reshape who gets to become an engineer, a founder, or a policymaker."
Which qualities help you decide whether an AI company can scale in the next decade?
Several traits repeat themselves across companies that endure. The first is the depth of understanding of the problem. Founders who have lived through a painful problem can describe it in vivid detail, often with stories of specific customers and specific moments. After working across retail, data infrastructure and security, I have seen thin AI layers fade quickly. Teams that last are the ones anchored to a problem that truly matters. Another quality is real differentiation. This can appear in different ways: a unique data asset, a technical wedge such as low-latency inference infrastructure, deep safety and compliance expertise, or a product so integrated into daily workflows that removing it would cause real disruption. I ask pointed questions about deployment, scale and what prevents a well-funded competitor from catching up. Discipline is the third signal I look for. Strong founders speak comfortably about unit economics, model lifecycle management, data governance and regulatory risk. Excitement about benchmarks alone does not carry a company forward. The companies that stay relevant over a decade combine meaningful technical innovation with consistent execution and a serious, grounded approach to risk.
Which trends will redefine how companies train and build their workforce?
Training is transforming from static content to something embedded into the flow of work. Instead of videos and quizzes, platforms now provide scenario generation, skills assessment and feedback APIs that integrate directly into CRMs, IDEs and operational dashboards. Enterprise skills graphs are emerging as one of the biggest shifts. Built from code, tickets, transcripts and documents, these graphs allow models to personalize learning paths based on real work patterns rather than generic personas. Agentic training workflows are advancing as well. Multi-step AI agents create realistic roleplays, generate synthetic edge cases and continuously retest employees as policies evolve. This evolution demands a strong infrastructure. Low-latency inference, vector search and streaming pipelines are becoming baseline expectations so people receive feedback in-session, not hours later. Enterprises also want formal evaluations, red-teaming and policy enforcement built directly into training systems to ensure fairness, privacy and compliance. Privacy-preserving personalization through tenant-isolated embeddings, fine-grained access control and even on-device models is gaining traction. Together, these trends turn AI-driven training into an adaptable system that continuously measures, improves and operationalizes workforce skills.
What was the biggest hurdle in moving from engineering leadership to being a startup CTO?
The shift required letting go of the idea that a startup CTO role is just a larger version of engineering leadership. At Walmart, IPSY and Cribl, I owned major systems and teams while other functions—sales, legal, finance, brand—were already established. Stepping into Votal AI meant stepping into a space where everything required attention at once. Days often moved from fundraising meetings to security architecture reviews to customer demos, with hiring and prioritization woven throughout. Strong technical decisions remained essential, but they were no longer enough. If customer development, runway planning or clear storytelling faltered, even the most carefully designed architecture would not matter. Another major adjustment was prioritization. Larger companies allow room to refine and explore multiple research threads. A startup demands a much sharper filter. Ideas I personally liked sometimes had to be set aside when they did not push us toward product-market fit or a critical customer milestone. I also grew closer to customers, translating complex AI and security ideas into language that matched their needs and learning from the moments where their expectations challenged my assumptions. Living with that mix of uncertainty, responsibility and high technical standards became the most demanding part of the transition, and also the most rewarding.
Which emerging technologies excite you but remain underexplored?
Several areas at the intersection of AI, systems and safety continue to feel both promising and underdeveloped. One of them is agentic infrastructure. Today’s early orchestration libraries are helpful, but many essential tools are still missing: strong agent debuggers, traffic shaping and safety mechanisms for agents that operate over long horizons. Privacy-preserving personalization is another area with significant potential. Running meaningful parts of recommendation, decision-making or coaching logic closer to the user—on device or in tightly isolated environments—opens the door for personalization without compromising trust. Techniques like tenant-isolated embeddings, stronger key management and efficient near-edge models are early but encouraging steps. Simulation and evaluation stacks also feel early. There are many leaderboards but far fewer robust, programmable environments where AI systems can be stress-tested against realistic scenarios, long-horizon tasks or adversarial behavior. As AI moves deeper into workflows where failures matter, these environments will become critical infrastructure on par with the models themselves.
"Innovation means very little without the reliability for people to depend on it. The two grow together or not at all."
If AI could solve one major global problem in the next decade, which should it tackle?
One challenge has always felt deeply important: the inequality of opportunity that begins with unequal access to learning. Childhood circumstances—geography, language, the presence or absence of one good teacher—shape paths in profound ways. Reliable access to a patient, a high-quality tutor, and a coach in one’s own language and at one’s own pace would reshape more than just education metrics. It would influence who becomes an engineer, a founder, a nurse, or a policymaker. The vision is not a chatbot that completes homework but a system that can coach someone through years of real skill-building, from basic literacy to operating complex machinery or running a business, all while respecting privacy, culture and safety. If that becomes real, it expands the number of people capable of tackling the world’s hardest problems—climate resilience, public health, economic stability and even safer AI systems. Closing that opportunity gap would create ripple effects far beyond education itself.
How do you stay ahead in an AI landscape that evolves so quickly?
Staying ahead requires staying close to both research and reality. Each week includes time for reading papers, following trusted researchers and building small prototypes. A private repository holds dozens of experiments that almost worked, tested on real datasets rather than toy benchmarks. Writing and running code reveal insights that passive reading never does. Conversations with customers, founders and younger engineers highlight gaps that matter in practice. Those conversations often reveal more than any leaderboard ranking. I also place myself in situations that stretch my understanding—live coding in front of the team, experimenting with new frameworks or red-teaming my own systems. These moments quickly and honestly uncover weaknesses. Teaching plays a role as well. Explaining concepts to others reveals the areas of my understanding that are incomplete. And since it is impossible to chase everything happening in AI, I stay disciplined about the themes that matter most: safety, latency and real-world reliability. That combination of hands-on work, real-world exposure and focused curiosity helps keep perspective steady even as the field moves rapidly.









