His recent work focuses on what he calls "Ambient Intelligence"—AI that doesn’t demand attention but provides context exactly when needed. While many of his peers chase the glitter of Generative AI and autonomous agents, Dintakurthi focuses on the hard problem of control.
Currently, he is working on a stealth project involving "Inverse Reinforcement Learning"—teaching AI to understand human values by watching what humans actually do, rather than what they say they do. It is a subtle distinction, but one that could finally bridge the gap between cold logic and human intent.
“He taught us that ‘can’ doesn’t mean ‘should,’” says Priya V., a former mentee. “Sumanth treats ethics like a performance metric. If you don’t test for it, you haven’t finished the build.” Looking forward, Dintakurthi is wary of the current "AI gold rush." He worries that in the rush to implement chatbots and generative text, the industry is forgetting the lessons of user-centric design from the early web days.
“Just because a Large Language Model can write an email doesn't mean I want it to,” he warns. “Does it sound like me? Does it capture my irony? If not, you’re just adding noise.”
During the pandemic, as burnout swept through the tech sector, Dintakurthi started a weekly virtual clinic called "The Human Loop." It was a no-judgment space for junior developers struggling with the ethics of AI—how to kill a project that worked technically but would hurt a vulnerable population, or how to tell a product manager that an AI feature was technically possible but morally ambiguous.