How ChatGPT Is Changing Engineering, and Why This Is Just the Beginning

Our relationship with AI is complicated. We humans have a long history of creating science-fiction narratives in which AI comes to life, turns evil, and threatens to destroy life as we know it. We’ve seen this story in so many forms we can’t help but be cynical about AI. 

Plot twist: Now we have AI that can write its own science fiction, a scary thought for many.

ChatGPT, the world’s trendiest chatbot, is one of several new AI-powered tools that can answer seemingly any question, create personalized emails and blog posts, generate photos and videos, and much, much more.  

These bots and tools can even write code. And we believe every developer — if they aren’t already — will eventually be using tools like ChatGPT to help solve some of the world’s most challenging technical problems. More on that in a moment. 

However, for some, the emergence of “generative AI” tools raises concerns and a plethora of ethical questions. Many fear AI will lead to job losses. It certainly promises to change how many people work. And any change, of course, can be scary.

What is generative AI, exactly? 


Rather than create our own definition of generative AI, we figured we’d start by asking ChatGPT to define it for us. Here’s what it said when prompted to explain generative AI in 50 words or less: 

“Generative AI refers to AI models that generate new outputs, such as images, music, or texts, based on learned patterns in existing data. The goal is to create diverse, yet coherent, outputs that resemble the original data.”

It’s worth noting that Sam Altman, the CEO of OpenAI (the company that created ChatGPT), doesn’t like the term generative AI, despite the mainstream media’s adoption of it.


While ChatGPT offers information and text-based content (including code) — and internet giants including Microsoft and Google are working on their own similar offerings — there are many emerging generative AI tools that are helping content producers create lifelike still images, stunning (and fake) videos, audio, and even 3D simulations.

As Harvard Business Review recently put it, “We’re hitting a tipping point for artificial intelligence.”

While some AI skeptics may be calling for a pause on the use of generative AI, there’s little doubt that even today it offers real benefits across many kinds of work. And it’s going to keep improving. These capabilities, applied in the right ways, can make us better at our jobs. It’s already helping our team work smarter.

Why our CTO uses ChatGPT

Our CTO Oliver Weng has started incorporating ChatGPT into his workflow, using it to get ideas for how to build new features and to quickly generate code as a starting point for a solution. It’s also pretty good at finding bugs in other people’s code (in other words, debugging). ChatGPT has quickly become a go-to among his favorite technical research hubs, including Stack Overflow, GitHub, DuckDuckGo, and others.
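To illustrate that “starting point” workflow, here’s a hedged example (invented for this post, not an actual ChatGPT transcript). Asked for “a Python function that retries an HTTP request with exponential backoff,” a tool like ChatGPT might return starter code along these lines, which a developer would then review and adapt:

    import time

    import requests  # assumes the third-party requests package is installed


    def fetch_with_retry(url, max_attempts=3, base_delay=1.0):
        """Fetch a URL, retrying failed requests with exponential backoff."""
        for attempt in range(max_attempts):
            try:
                response = requests.get(url, timeout=10)
                response.raise_for_status()  # treat HTTP errors as failures
                return response
            except requests.RequestException:
                if attempt == max_attempts - 1:
                    raise  # out of attempts; surface the last error
                time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...

The point isn’t that this exact code comes back; it’s that a reviewable first draft arrives in seconds instead of minutes.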

Oliver estimates that ChatGPT, and other generative AI tools like it, can save him several hours over the course of a long project.

However, there are several misconceptions about ChatGPT that developers and their management teams need to understand.

Limitations of using generative AI as a developer


Though it’s very useful, ChatGPT is far from a perfect tool. And as long as generative AI is trained on data that ultimately comes from people, it will be limited. 

So, for developers to successfully use a tool like ChatGPT on the job, they (and their teams) should consider: 

  • ChatGPT makes predictions, but not always good ones. It synthesizes a massive amount of information, then generates an original answer to the prompt it receives. As one writer put it, “if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.” And whether you’re a bot or a human, generating a new answer to any question is prone to error.
  • Code samples may look great but can be flawed. At first glance, a ChatGPT-generated code snippet may look perfect, which can lure developers into copying and pasting it straight into a project. However, the code is often flawed, because the large language model was ultimately trained on human-written code, bugs and all. To use ChatGPT for code generation, developers should keep prompts very specific, and limited in scope, to maximize accuracy (the sketch after this list shows how a plausible-looking snippet can hide a bug).
  • Generative AI won’t solve your unique problems. Oliver and other engineers like him are hired to solve problems by architecting complete solutions. Solutions aren’t simply a string of code, but a working system of technical components expertly assembled to do something unique. While ChatGPT, prompted by engineers, may be able to generate code that’s part of a solution, generative AI alone isn’t likely to solve complex problems. 
  • You won’t know the source. ChatGPT generates content based on the sum of a massive amount of data on the public internet. It doesn’t, however, cite where that data comes from. So, for developers, it’s different from downloading code samples from open-source repositories, where they can see (and presumably trust) the source.
  • Portions of any content may be copyrighted. With all the human inputs and feedback, there are undoubtedly bits of confidential and copyrighted information that have found their way into ChatGPT’s AI engine. Amazon, among others, has asked employees not to share confidential information with ChatGPT because of these concerns. Developers, and their legal/management teams, need to be wary of any ChatGPT outputs that might be considered someone else’s IP.
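To make the “looks great but flawed” point concrete, here’s an invented illustration (ours, not actual ChatGPT output). The first function is the kind of plausible-looking helper a generative tool might produce for “split a list into chunks of size n”; the second is what a careful reviewer would turn it into:

    # Plausible-looking generated code: it passes a quick visual inspection,
    # but silently drops the final partial chunk.
    # chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4]]  (the 5 is lost)
    def chunk(items, n):
        return [items[i:i + n] for i in range(0, len(items) - n + 1, n)]


    # The reviewed version keeps the trailing partial chunk.
    # chunk_fixed([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]]
    def chunk_fixed(items, n):
        return [items[i:i + n] for i in range(0, len(items), n)]

Nothing about the first version looks wrong at a glance, which is exactly why copy/paste without review is risky.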


These factors don’t mean developers should avoid using ChatGPT. They just need to be prepared to truly own the consequences of the code or information they use from it. That should ring familiar: developers commonly leverage frameworks and libraries from other sources, then modify them to fit the requirements of their project. However, developers always own responsibility for the final solution, including any pieces from external sources.

This is just the beginning. And it’s a good thing


OpenAI already has a purpose-built model that promises to be even more powerful for software developers. As the team at OpenAI writes, code-davinci-002 “is particularly good at translating natural language to code.” Meanwhile, GitHub (owned by Microsoft), Amazon, Meta, and others are working on their own developer tools.
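For the curious, here’s a minimal sketch of what calling that model looks like, assuming the OpenAI Python SDK as it exists at the time of writing; the prompt, key, and parameter values are illustrative placeholders, not a recommended configuration:

    import openai  # the OpenAI Python SDK (pip install openai)

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    # Ask the code-focused model to translate natural language into code.
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt='"""Write a Python function that reverses a string."""\n',
        max_tokens=128,
        temperature=0,  # low temperature keeps code output more predictable
    )
    print(response.choices[0].text)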

While ChatGPT is the latest shiny thing to capture everybody’s attention, what’s coming promises to further change how our technology builders build. This is just the beginning. And it’s a good thing for the future of technology builders and innovators. Even if it’s a little scary.