The essence of engineering
How the ChatGPT revolution makes the tech interview look even more absurd
Talk of flawed interviewing practices in software engineering is nothing new: it is a classic, yet seemingly futile, discussion in the community. It feels like, despite repeated calls from engineers to be treated more like humans in interviews, companies (at least larger ones) have instead doubled down and attempted to automate the selection process even further. A growing number of platforms such as Toptal, Turing and the like have made the expression ‘cog in the machine’ take on way too literal a meaning: they are essentially automatons which take people as input and produce code for their clients as output. They market themselves as making working conditions better for developers (and they may very well, in some respects, succeed in that - helping developers bypass middle managers comes to mind), but their business model is based on eliminating as much of the human factor as possible. They are championing the use of automated coding tests for candidates, cleverly marketing them as sophisticated ‘vetting engines’.
While these companies have been figuring out the most efficient possible way for a program to determine whether a human can write a good program, ChatGPT came along and showed the world that AI can now write programs too - oftentimes better than humans can. After the initial shock of ‘can ChatGPT actually take our jobs?’, the software engineering community quite quickly reached a consensus: no, it ultimately cannot take our jobs, because there are parts of software engineering you can’t automate (at least not without automating consciousness itself) - and those are the most important parts! This is precisely the value and impact of the ChatGPT revolution - it will force programmers to distil engineering down to its essence by perpetually asking themselves: ‘If I can use ChatGPT for all these tasks, what are the tasks I still have to do myself, and how can I become even better at them?’
Putting this transcendental ‘essence of engineering’ into words will surely always seem somewhat reductive, but for argument’s sake, let’s attempt just that: ChatGPT does not have the context to understand how a real-life problem translates into code, with all the nuances, implications and tradeoffs that come with that. In order for ChatGPT to meaningfully help us with a real-life problem, a human must first chop the problem up into smaller, AI-comprehensible pieces and then synthesise the AI’s answers into a human-usable solution. This translation between a real-life problem and a technical solution is the essence of engineering - it’s where the value is generated, where the difference between good and bad engineers is felt, and also where the most fun is usually had by the former. It is neither automatable nor meaningfully evaluable by automated exams (or their, somehow even more humiliating, in-person whiteboard counterparts).
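To make that division of labour concrete, here is a minimal sketch of the ‘chop up, then synthesise’ loop, assuming the official OpenAI Python client (v1+); the subtasks, model name and print-out are purely illustrative, and the two commented steps are exactly the parts that stay human:

```python
# A minimal sketch, assuming the `openai` Python client (v1+) and an
# OPENAI_API_KEY in the environment. Subtasks and model name are
# illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

# Human step 1: translate a fuzzy, real-life problem ("dormant customers
# keep slipping through the cracks") into small, AI-comprehensible pieces.
subtasks = [
    "Write a SQL query that finds customers with no orders in the last 90 days.",
    "Write a Python function that renders such a result set as a CSV file.",
]

answers = []
for task in subtasks:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": task}],
    )
    answers.append(resp.choices[0].message.content)

# Human step 2: synthesis. Whether these pieces fit the actual schema,
# the deployment constraints and the business need is a judgement call
# the model cannot make for you.
for task, answer in zip(subtasks, answers):
    print(f"--- {task}\n{answer}\n")
```

Notice that everything inside the loop is delegable; the lines above and below it are not.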
AI is not the software engineer’s competition; it’s a tool in their toolbox. And it has irreversibly and radically made less relevant what engineers have been asking tech leadership to please, consider less relevant all along: the need to recall information on command and the need for repetitive puzzle-solving. It has made it painfully obvious that companies demanding that interviewees excel at stuff ChatGPT can and will do instead are misguided, actively depriving themselves not only of better candidates but, more fundamentally, of ways to do better work. If your selection process amounts to algorithms evaluating the correctness of algorithms a candidate wrote, and both the writing and the evaluating can be simply and quickly automated using AI, what are you even evaluating? At best, maybe something related to experience or memory; at worst, merely compliance. You’re setting up pointless hoops and evaluating only people’s willingness and ability to jump through them. You are deliberately and shamelessly not even attempting to operate in the realm of the essence of engineering.