As part of my involvement at LeadDev NYC I had the opportunity to record a short video message that would be part of a montage played for folks between the live talks. I decided to speak about the way engineers are enabling the future of products (you can watch it here).
It seems to me that questions like “how can engineers affect the future of (whatever)” sometimes come from a place of anxiety. And these days there’s no greater source of that anxiety than the advances – and the impacts we imagine coming from those advances – in Large Language Models (LLMs), more broadly billed as Artificial Intelligence (AI).
But LLMs and AI are techniques. Nobody in tech ever lost their job because of a new technique, but plenty of folks become anxious when techniques grow into full-on implementations that take the world by storm. I’m speaking, of course, of tools like Bard, DALL-E, and ChatGPT.
It’s inarguable that the things these tools can accomplish are both impressive and diverse. But we – both as tech practitioners and humans moving through the world – encounter a lot of tools that are both impressive and versatile. My argument about AI-driven tools isn’t that they’re worthless. My frustration is with the statements of how they’ll change the world; replacing the work done by entire swaths of professionals.
This point of view is entirely unfounded in experience, if not fact.
Nathan Hamiel recently dug into this on ModernCISO.com and honestly I can’t provide better examples than he does. What he drills down to time and time again is that LLM-based tools can accomplish wondrous things, but only when given highly specific, tightly bounded directions that require the operator to have intimate knowledge of both the subject and the desired outcome, and that often take multiple attempts to perfect before the result “just works like magic.” In fact, reading his essay reminded me of a scene in the movie “Sully”.
Every time I read a blog breathlessly proclaiming “I used ChatGPT to accomplish this really hard thing” I’m left wondering, like Sully, “How many times? How many practice attempts did they make before successfully pulling it off?”
I’m not saying these tools are invalid or the claims entirely overblown. I simply want to put their achievements into a realistic context. And that context should be familiar to us, because we’ve already seen a similar thing happen – on both a larger and yet also smaller scale – in our lifetimes.
The first example I encountered was as simple and unassuming as it was revolutionary: the calculator.
The first pocket calculator came onto the market in 1971. Advances in what that tiny yet powerful device could do were both rapid and impressive. It wasn’t just the improvements in speed, or size, or interface. There were massive leaps in the types of operations that could be performed.
“This is going to change the world!” proclaimed a chorus of voices from supporters. Ironically, the detractors were shouting the same thing, although with a decidedly different intonation. In what should sound eerily familiar to us in 2023, this tool was banned from schools, job interviews, and other settings for fear it would diminish children’s ability to learn; that it would make identifying truly skilled individuals impossible; that it would upset the balance of merit in the school, in the workplace, and in society at large.
With the benefit of over 50 years’ perspective, we can see how foolish such fears were. Calculators proved themselves to be useful and versatile tools, but they were limited by the math skills of the operator.
To put it plainly, no matter how powerful the calculator, if I am using the square root function while balancing my checkbook, something has gone horribly wrong, and it’s probably not my finances which are to blame.
In the hands of a novice, a scientific calculator is far more likely to be used to spell out “fart” than it is to find the proof to Fermat’s theorem. (Use hex mode, put in “46415254”. Don’t ask me why I know this.)
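For the curious, the trick works because each pair of hex digits is one ASCII byte, so a calculator’s hex display doubles as a crude text encoder. A quick Python sketch of the same decoding:

```python
# Decode the hex digits "46415254" one byte (two hex digits) at a time,
# interpreting each byte as an ASCII character -- the same thing the
# calculator's hex display lets you do by eye.
hex_digits = "46415254"
word = bytes.fromhex(hex_digits).decode("ascii")
print(word)  # -> FART  (0x46='F', 0x41='A', 0x52='R', 0x54='T')
```

Going the other direction (`"FART".encode("ascii").hex().upper()`) tells you which digits to punch in.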
Like the calculator, the spreadsheet, and the internet, AI-driven LLM tools are likely to change HOW we do our work, but not the fact that humans will still be the ones doing the work in the first place.
The shift will come as people learn how to use the tool to its best advantage: translating from our context into the context we know it needs to be in.
Which brings me back to the beginning. What is it, then, that we as engineers, developers, designers, and creative people can do to affect the future of products? I think I’ve laid out an answer both satisfying and filled with hope:
We should endeavor to be fully present as thinking, feeling, context-seeking humans in all of our work. To embrace new tools and use them to their best advantage, while also being clear about their limitations. We affect the future of products when the bulk of the work we do is not with our brains or our hands, but with our hearts.
1 thought on “What ChatGPT Needs is Context”
My greatest success with ChatGPT so far: it took me about half a day to get a working Roslyn analyzer – some C# code that creates a custom compiler warning if code is written a certain way (yes, I’m THAT guy who has very strong opinions about code formatting and linting).
It made the code the first time. It took me a few hours to build the Roslyn project from an old template, and then I had to fix the wrong parts of the code. The only issue left is that the template code doesn’t follow all my OTHER code standards, and I need to fix one or two warnings it generated so I can have my perfect little continuous-integration DMV employee that will reject anything not perfectly formatted – so that I, the human, can reject it for higher-order things, like the code not actually doing what it was supposed to.
Anyway, the real lesson learned here is that ChatGPT let me start something hard (I had never done a Roslyn analyzer before). I will probably get good enough at Roslyn analyzers to just write them myself. It didn’t take my job away, and in the end it caused me to do more work, because I never would have written a Roslyn analyzer otherwise.