This is the last part in my series of “learnings from building an AI coding agent”. Here are the previous two posts (in case you haven’t read them yet):
People don’t always understand a new technology until they experience it, so it’s on us to help them get there. - Anon
Imagine how a driverless car “builds trust” in a very high-risk, high-stakes scenario. The AI is driving the car instead of the human! As a PM, how do you build trust in such a scenario?
AI is magical and, importantly, non-deterministic, which inherently carries risk for customers: it may work or fail unexpectedly. Add to this a market flooded with “prototypes and half-baked products”, and you have a massive user trust deficit in AI products!
We built a feature in the AI coding agent at Freshworks that would take the outdated code of an app and “magically” migrate it to the latest supported developer platform version. It was important for our marketplace customers and business that developers adopt the latest platform version, which was not only more secure, keeping apps safe, but also richer in features for building diverse types of app use cases.
Here is how the feature worked:
https://www.youtube.com/watch?v=fCjdLPdzEnw
Of course the risk was high! We were changing tens of files in an existing, working app codebase that developers had written and understood very well. As a developer myself, I would hardly trust an AI (which is non-deterministic, duh!) to make changes to my codebase, since I am the one responsible for the code and a working app!