How are software developers using AI tools?

Written by John Wark | Jul 18, 2023

Survey results on the experience of NSS alumni using (or not) AI tools on the job

This is the first in a series of blog posts on the topic of AI tools/tooling and their impact on software development - both from a learning perspective and from a job demand perspective. 

If you’ve been awake for the past few months, the topic of AI has exploded in the media: ChatGPT! Millions of jobs will be affected!! Millions of jobs will be lost!!! AGI is just around the corner!!!! AGI will kill us all so stop the research!!!!! The hype cycle is in overdrive and out of control and unfortunately mainly creating good old-fashioned FUD (fear, uncertainty, and doubt). 

Yet despite the obvious overheated hype, this is a very important topic. AI - more specifically generative AI using large language model (LLM) technology - has clearly reached a milestone in terms of usability and value in many different problem domains and for many types of jobs. AI based on LLMs is being integrated into the work of millions of people. 

Speaking more narrowly in terms of NSS, AI (we’ll use AI as shorthand for LLM / generative AI in the rest of this post) is going to have an impact on software development and software developers (and data scientists and data analysts and others building and delivering software-based solutions). Is it going to change the nature of the work, and the skills needed to do the work, such that the training for these professions needs to be adjusted? Is it going to reduce the demand for any of these jobs - either in the short term or the longer term? There are far, far more questions at this point than definitive answers, but we thought it appropriate to share our thoughts with our community based on what is known today and what we can reasonably expect in the near-term future.

We have been exploring the new LLM-based tools over the past few months, first informally and then in a more focused manner. The goals for our research included questions like: can these tools really generate working code? What are the limitations of the new technologies? Beyond simply generating code, what else can LLMs be used for in software development? How might these tools be used in learning to code? What do we think the impact of these tools will be on the types of jobs that we train people to perform? Based on everything we learn, what should our policy be regarding student use of these tools in class? How should we be integrating these tools, if at all, into some or all of our classes? We’re going to explore our views on most of these questions (and more) in a series of blog posts, starting with this one.

One type of research that we decided was important, independent of the hype in the media and blogs and Twitter, was to find out what real working software developers and data analysts/scientists are doing with these tools on the job. We wanted perspective beyond individual anecdotes to see what patterns exist in real-world usage. We know this area is still nascent and evolving very rapidly, but we thought we should try to get a snapshot by talking to working professionals. And what better group of working professionals to survey than our alumni, who also give us a decent cross section of different types of employers, with a bias toward organizations here in Middle Tennessee.

We sent a survey request to approximately 1500 NSS graduates who we knew to be working professionally in tech roles. It was a fairly long survey but we managed to get a 4% response rate (~60 respondents). Some other facts about the respondents:

  • Approximately 75% are full-stack web development program graduates, the other 25% are data analytics or data science program graduates
  • Graduates were spread across 38 different cohorts. A small number came from cohorts prior to 2015 (i.e., more than 8 years of experience); the rest were pretty evenly spread from 1 year to 7 years of professional experience.
  • The survey respondents were well distributed across types of employers. There was a good spread of employer sizes and good representation of enterprise IT shops vs. tech product/SaaS shops. Consultancies/agencies were the only type of employer that seemed underrepresented relative to the number of NSS grads working in that domain.

On most topics, we asked both a quantitative question and a related open-ended qualitative question. I present the results of the quantitative questions followed by selected examples of the qualitative comments related to each topic. In many cases I found the comments to be more informative, or more indicative of areas for additional research, than the quantifiable responses. I selected comments intended to give a representative sample, and then asked members of our Learning Leaders team to check my selections for bias.

This blog post shares the responses to the survey questions about whether people are using AI tools and, if so, how they feel about the value they are getting from the tools. The next blog post in this series will present responses to the second group of questions, which relate more to learning and curriculum.

The Survey Says...

The first question in the survey asked whether employers had implemented any form of policy or formal practice regarding the use of AI tools. We were looking for insight into questions such as: were employers starting to support the use of AI tools in general, had they banned the use of AI tools in any manner, had they selected specific tools for use by their staff, etc.

The answer to the employer policy question was:

  • No policy - 85%
  • Yes policy - 15%

To me, that answer seems surprising, at least on the surface. Given all the hype on this topic, and given some of the open risk factors that have been identified with the use of these tools in areas such as security, intellectual property rights, and quality, I thought we might find more definitive policies discouraging or banning use of these tools for at least certain uses, and/or more specific policies outlining acceptable uses. I suspect the answer reflects the fact that, despite all of the superficial hype, these AI tools are so new that many (most?) organizations just haven’t had time to study the implications of using them sufficiently to formalize policy. Possibly this is another reminder of how slowly adoption of new technologies actually proceeds compared to the speed of the hype cycle, which tends to be driven by early adopters and those who stand to benefit from the adoption of the new technology.

Some of the responses to the policy question included comments such as these:

  • “AI tools such as ChatGPT are blocked by security / networking, so that we don't share company info.”
  • “I don't know that the policy is documented. We have started a pilot of using Github Copilot for AI-assisted code completion.”
  • “They are still in the process of finalizing some policies but a company wide email and meeting have occurred where they informed us that we could start to use it, but until official policy is sent out we should be cautious.”
  • “I'm not aware of documentation regarding if or how we're permitted to use AI, but I have begun using it a lot recently to solve basic but frustrating questions that I'd normally turn to a senior dev for support on. I typically just use ChatGPT and ask it things like ‘I have this block of code and it's producing the following runtime error, what could be the cause?’”
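
That last comment describes a workflow several respondents mentioned: paste a failing snippet and its runtime error into ChatGPT and ask what could cause it. As a purely hypothetical illustration (the code, error, and fix below are ours, not a respondent’s), the exchange often looks something like this:

    // A hypothetical example of the kind of code-plus-error a developer
    // might paste into ChatGPT. The data arrives as untyped JSON, so
    // TypeScript can't catch the bug at compile time.
    const order = JSON.parse('{"items": null}');

    // This line throws at runtime:
    //   TypeError: Cannot read properties of null (reading 'reduce')
    // const total = order.items.reduce((sum, item) => sum + item.price, 0);

    // Asking "I have this block of code and it's producing the following
    // runtime error, what could be the cause?" typically gets back both the
    // explanation (items is null) and a guarded version like this:
    const total = (order.items ?? []).reduce(
      (sum: number, item: { price: number }) => sum + item.price,
      0
    );
    console.log(total); // 0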


The second set of questions asked people whether they were using AI tools on the job and, if so, which tools. Responses to “are you using AI tools on the job” were:

  • Yes, using such tools - 66.7%
  • No, not using such tools - 33.3%

Responses regarding which tools they were using were:

  • ChatGPT only - 21
  • GitHub Copilot only - 3
  • GitHub Copilot and ChatGPT - 10
  • Nothing else was mentioned by more than a single user

When we asked how often people were using the AI tools on the job, we heard:

  • Daily - 30%
  • Weekly - 28.3%
  • Monthly - 6.7%
  • Quarterly - 6.7%
  • Never - 28.3%

I found it very interesting to see two-thirds of respondents using AI tools on the job while only 15% of employers have formulated any formal policy on the use of the tools. Clearly, developers have started to experiment with the tools on their own, independent of guidance or sanction from management. In some sense, this is very similar to how developers might start to experiment with any other new tool, such as a new editor or a new testing tool. However, the AI tools seem to have implications and considerations well beyond the scope of normal developer tooling. I’m not sure there’s a conclusion to be drawn from the limited information in our survey, other than that developers are gonna be developers and try out new tools that seem interesting or potentially helpful.


We next asked whether people found the AI tools to be valuable on the job:

  • Yes - 70%
  • No - 30%

The comments on how the AI tools were valuable were interesting. We heard things like:

  • “I find it useful to have a resource for quick, adhoc refactoring and code review that doesn't require me to hop into a meeting with a busy superior.”
  • “Generally I find Copilot to be really helpful with writing test cases. I tend to consider it a good auto-complete tool, and I haven't found it very useful for new code from scratch.”
  • “So far, GitHub Copilot is helpful when I have to write a lot of repeated code as it will autofill nicely and save time. It sometimes offers suggestions that I can use or that will help me think through a solution.”
  • “Personally I use Github Copilot way more than ChatGPT because ChatGPT will hallucinate buggy code and reference dated documentation in it's answers. Also, I think it takes me longer to think about how to prompt ChatGPT to spit out a result sometimes than just googling it.” 
  • “It's a great way to have a discussion about new concepts and patterns to give a jumping off point. I am the only higher level React developer and oftentimes there is no one else on staff to discuss new concepts with. Github copilot saves time once it knows the patterns of your application.”
  • “I use it for figuring out errors mostly. It’s usually not right but it can give me enough information to approach a problem from a different angle.”
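
A pattern in those comments is that the tools shine on repetitive, pattern-heavy code such as test cases and boilerplate, and struggle more with novel code written from scratch. As a hypothetical sketch (the function and test cases below are our own invention, not a respondent’s), this is the kind of table-driven test where a tool like Copilot, having seen the first entry, will often autofill the rest:

    // A hypothetical utility and table-driven test - the sort of repeated
    // structure respondents describe Copilot autofilling nicely.
    function slugify(title: string): string {
      return title
        .trim()
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-")
        .replace(/^-|-$/g, "");
    }

    // After the first case is written, each additional case is the same
    // shape with new values - exactly what completion tools predict well.
    const cases: [string, string][] = [
      ["Hello World", "hello-world"],
      ["  Leading and trailing  ", "leading-and-trailing"],
      ["Symbols & Stuff!", "symbols-stuff"],
      ["already-a-slug", "already-a-slug"],
    ];

    for (const [input, expected] of cases) {
      console.assert(
        slugify(input) === expected,
        `slugify(${JSON.stringify(input)}) should be ${expected}`
      );
    }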


Another question asked whether the use of the AI tool(s) enhanced their work experience:

  • Yes - 63.3%
  • No - 36.7%

The comments on how the tools enhanced the work experience feel similar to the comments about how the tools were valuable, with responses such as:

  • “It allows me to focus on the hard stuff and not googling syntax for the lower hanging fruit as I switch between front end and back end code.”
  • “I think of ChatGPT the same way that I do Google: a tool that helps me to find the answer that I'm looking for. You still have to know how to ask the right questions.”
  • “Although it comes with some annoyances (the cognitive cost of reading through a lot of completions I don't end up using), on balance, I think it makes some tasks a little faster.”
  • “If I really don't know where to start but have a good idea of what I need to do, I'll plug the existing code in and ask ‘How do I do x?’ I've never gotten a completely correct answer but it usually gives me a good idea of how I can do it myself and where to start.”
  • “My workflow has improved a lot because I basically always have a ‘coworker’ to check my work and point out what might be going wrong. Often ChatGPT will solve my problem within 2 or 3 prompts, although I have struck out with it a handful of times.”
  • “Yes’ish. It’s usually wrong.”
  • “When I’ve used it personally, I got value from it because it can show approaches that are different from mine. I wouldn’t cut and paste the answer but using it as a rubber duck helps me consider all aspects and hone my thinking.”


The final quantitative question that I will share in this blog post asked whether these tools supported the respondent’s problem-solving and debugging processes. The response breakdown matched the prior question:

  • Yes - 63.3%
  • No - 36.7%

And here are the comments on whether these tools supported problem-solving and debugging:

  • “They don’t solve my problems, but the tools can often serve as a suitable rubber duck to jumpstart the problem solving mindset.”
  • “Translating syntaxes from one language to other.”
  • “Figuring out unfamiliar errors.”
  • “I can put a bit of code in and ask why it’s doing X. It’s helped me learn more about the inner workings of programming.”
  • “My experience debugging with large language models has been that they cannot identify very obvious error code outputs. It would be better to look up the error code in the docs and understand that explanation. When it does produce a valid response about a bug, it is basically word for word from the docs anyways.”
  • “I can have it write example code to solve an issue and I can tweak it. Or I can paste code to have it explain it to see if it says that the code does something different than I think it does.”
  • “I answered ‘No’, but just want to jump in here and say that using Copilot has often given me more to debug, not less.”
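
One workflow in those comments is pasting code and asking the model to explain it, to check whether it “does something different than I think it does.” A classic (hypothetical) example where that check pays off is JavaScript’s default array sort:

    // A hypothetical snippet one might paste with "what does this code do?"
    const scores = [5, 100, 25, 9];
    scores.sort();
    console.log(scores); // [100, 25, 5, 9] - not what most people expect

    // The explanation: with no comparator, sort() compares elements as
    // strings, so "100" sorts before "25". A numeric comparator fixes it:
    scores.sort((a, b) => a - b);
    console.log(scores); // [5, 9, 25, 100]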


Overall, it feels like a mixed bag. There is definitely some feedback that there are challenges in using these tools effectively and productively, and also a lot of feedback suggesting real benefits. There is much more that could be said in analyzing the results of our survey, but I’m going to forgo additional analysis and opinion - both personal and what we’ve decided as an organization - until we get to the third and fourth posts in this series. For now, we’ll leave it as an exercise for the reader to extract meaning from the survey results. But we will also get to the third blog post as fast as we can so that we can start to share our thoughts!

The next post presents survey results on three questions regarding the use of AI tools in learning and understanding software development, data analytics, etc.