to write more code (as you said). The current methods of "AI" don't let it actually make "judgment calls" or "decisions" in the way we think about them. What I mean is that the observed result ends up the same either way. I'm not worried about sentience. And if you treat it as sentient, you are going to be very disappointed when it hallucinates on you or gives you bad answers on edge cases with little training data. Every successful AI group I know (including my own company) puts heavy backstops in place, with both procedural code and human review.
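To make the "procedural backstop" idea concrete, here's a minimal sketch of what I mean. It assumes a hypothetical `model_call` that asks an LLM to extract an invoice total; all names and thresholds are illustrative, not a real API, and a production version would obviously have stricter checks:

```python
# Minimal sketch: wrap a model call in procedural checks, and route anything
# suspicious to a human review queue instead of acting on it automatically.
from dataclasses import dataclass

@dataclass
class Result:
    value: float | None
    needs_human_review: bool
    reason: str = ""

def backstopped_extract(document: str, model_call) -> Result:
    """Run the model, then apply hard procedural checks before trusting it."""
    raw = model_call(document)  # hypothetical LLM call returning a string

    # Backstop 1: the model's answer must actually parse. Hallucinated prose
    # or a refusal fails here rather than flowing downstream.
    try:
        value = float(str(raw).strip().lstrip("$"))
    except ValueError:
        return Result(None, True, f"unparseable model output: {raw!r}")

    # Backstop 2: sanity bounds (illustrative). Out-of-range answers go to a
    # human instead of being executed on; this is the human backstop.
    if not (0 < value < 1_000_000):
        return Result(value, True, f"value {value} outside expected range")

    return Result(value, False)
```

The point isn't the specific checks, it's that the model's output never reaches an irreversible action without passing deterministic code, and failures degrade to a human instead of to silence.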
Anyway, I really don't think we view it differently.