When you use GenAI for coding, you’re now a Code Director.
With GenAI in the production seat, you have to step back from hands-on work and start directing the vision.

I used Gemini Canvas to build, debug, enhance, and deploy an API service, and to implement a JavaScript function on my blog, all within a couple of hours. That may be child's play to a professional engineer, but I was personally blown away.
I'm not a developer; I've been a UX Product Designer for 29 years.
A couple of things in play here:
- 👉 Extremely simple features and simple apps are the sweet spot for GenAI Coding
- 👉 The hands-on engineering/coding role gets abstracted into both UX Design and Director roles
I knew the basics of Git, Netlify, and code injection, but I had zero ability to wire it all up line by line, let alone troubleshoot the defects (including running terminal commands on the Mac).
GenAI walked me through configuring each step, auto-generated fixes whenever I raised a problem (whether a specific error or behavior that didn't match expectations), and I was able to direct it to extend the functionality without disturbing the existing features.
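To make that concrete, here's a minimal sketch of the pattern in play, assuming a Netlify-hosted blog: a serverless function that exposes a tiny API, plus a client-side snippet injected into the page that calls it. The file name, endpoint, and element id are hypothetical stand-ins for illustration, not the actual code from my project.

```js
// netlify/functions/greeting.js — a hypothetical Netlify Function.
// Netlify serves it at /.netlify/functions/greeting by default.
exports.handler = async (event) => {
  // Read an optional ?name= query parameter from the request.
  const name = (event.queryStringParameters || {}).name || "reader";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

And the kind of snippet you'd paste into the blog platform's code-injection slot to call it:

```js
// Injected client-side script: fetches the function above and renders
// the result into a placeholder element (#greeting is a made-up id).
document.addEventListener("DOMContentLoaded", async () => {
  const el = document.getElementById("greeting");
  if (!el) return;
  try {
    const res = await fetch("/.netlify/functions/greeting?name=visitor");
    const data = await res.json();
    el.textContent = data.message;
  } catch (err) {
    el.textContent = "Could not load greeting.";
  }
});
```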
It wasn’t just the speed, but the quality and intricacy of the whole endeavor.
The takeaway is that while I knew what I wanted the end-user experience to be (UX Design), there was no need for me to touch any lines of code (although I did tweak some CSS).
In order to effect change, I had to know how to direct the AI and then let it figure out what code to create or change.
It’s like Art Direction, but with code—Code Direction! ✨
Not too different from how I’ve been operating as a Designer for years 😀
As an Art Director, you may dip down into production-level graphic design for a moment, but only to demonstrate your vision. You're more interested in giving direction to the junior designers, critiquing their work, updating your direction, and homing in on where the final polish needs to go. You're a strategist with a powerhouse of creatives.
In the same way, for GenAI coding, my mindset was entirely purpose-driven and outcome-based. I only dipped into a few lines of code to make super-specific adjustments. All in all, the feature was the result of whatever was generated and refactored.
As far as the usefulness of GenAI for coding entire apps?
It currently seems plausible for these simple, self-contained functions (although I'm experimenting with building a watchOS app in Cursor in the same vein).
I can't imagine the code is as efficient, as well-encapsulated, or as aligned with best practices as engineers are used to.
In fact, if this were a serious feature, I wouldn't trust it to be secure, or at least not up to the level of security a given project requires. However, that's both the beauty and the danger.
There’s more needed in order to scale this concept beyond its current state, given all the dependencies in our everyday software and environments. However, when it does scale, it will be another abstraction of what it means to design and engineer software.
Will we all become “Full-stack AI-Designers”?
No doubt the Design-Engineering gap is closing. Then again, when those two points meet, something else causes more divergence.
I won't say what I'll never do, but there's so much thinking, strategizing, and exploring to be had in the Design space that I'm more than happy to leave the engineering to engineers.
One thing is certain: you will always need to know how to design on purpose if you want to achieve an outcome, regardless of the tools you use to get there.