How Are NSS Alumni Using Generative AI Tools on the Job?

Jun 26, 2024
John Wark

Updating our alumni survey after another year of genAI experience

Last year at about this time we started a series of blog posts about what was, at that time, the newest and most hyped software technology (possibly of all time) - generative AI (genAI) / large language models (LLMs). Generative AI was then less than a year old in terms of availability and awareness in the overall tech world, much less the general public, having exploded into visibility with OpenAI's release of ChatGPT (powered by GPT-3.5) in November 2022. It was widely touted that generative AI could and would revolutionize many industries, automate many jobs (throwing millions of people out of work), and otherwise have a massively disruptive impact in a very short period of time. Among the jobs this technology was supposedly going to automate was software development/coding, thus massively changing or even eliminating all of the career paths that we at NSS were preparing students for.

That all struck us as a bit much for such a new technology, one for which only very limited use cases had reached production deployment. Did we think generative AI was promising? Oh yeah, absolutely. Did we think it was going to put software developers out of work in the near to mid term? Nope, no we didn't. We didn't see a lot of evidence to support that view. Over the much longer term, maybe - but where was the evidence? Speaking as someone who had bitten hard on a previous AI hype cycle in the 1980s (expert systems), I was personally very much from the "Show Me" state on this hype cycle.

But rather than embrace the hype as gospel or dismiss generative AI out of hand as yet another overselling of a technology emerging from years of AI research, we started our own research on the topic - research that continues to this day. Out of our first round of research, we published a series of blog posts. We reported on a survey of NSS graduates that assessed how they were or were not using LLMs on the job, we talked about our views on what to do about training new software developers or data analysts on using generative AI tools, and we discussed our analysis and projections regarding the impact of generative AI on the job market for tech talent. We followed up with additional blog posts and then multiple podcasts. 

Most recently, we surveyed our graduates again regarding their use of generative AI tools on the job in their work as software developers, data scientists, data analysts, and adjacent careers. We kept the questions in the survey the same as last year, or nearly the same, so that we could make meaningful comparisons from one year to the next. Last year's survey was taken around six months after ChatGPT was released and early in the life of other genAI tools from Microsoft, Google, Meta, & Anthropic.

This year’s survey gives last year’s respondents another year of experience with genAI and gives us another year’s worth of graduates to survey. We hope you’ll find the results interesting, as there has been clear evolution over the past ~12 months in attitudes about genAI tooling, in the range and depth of experience in using these tools, and in greater clarity around use cases (both good ones and bad ones) for these tools.

Note on the format - we’ve placed the charts showing results from this year’s survey in the left-hand column and the charts from last year in the right-hand column. As we did last year, we are also sharing comments from survey respondents under the charts - except as noted, all comments appear under the results of the question they are associated with. All comments are from this year’s survey (if you’d like to see the selected comments from last year, you can find last year’s blog post here). We got far, far too many comments to be able to share them all, so we’ve tried to select comments that show the range of sentiment and examples from the full set of survey responses.

We received 83 responses to the survey this year, as compared to 60 last year. Responses this year were more heavily weighted toward graduates of the Web Development and Software Engineering programs than last year, with proportionally less representation from graduates of the Data Analytics program. We don’t know how to interpret that fact, but I’d like to make a more focused effort in the near future to do a deeper dive into usage of genAI tools by data analysts, just to make sure we aren’t missing out on some valuable insights.

Does your company have a policy regarding using - or not using - generative AI in your work?

[Charts: AI use in your work - 2024 (left) | 2023 (right)]

A significantly higher percentage of organizations have policies in place (both formal and informal) than last year, and we received many more comments on this topic than last time. Based on the comments, most of the policies address bans or restrictions on use, e.g. don’t use company data, don’t paste company code into public LLMs (e.g. ChatGPT). There are still lots of explicit bans on the use of genAI, but there are also more policies that permit use within some set of constraints (e.g. use the company license for ChatGPT, GitHub Copilot, etc.).

Selected comments: 

"It's a blanket ban on all use on company devices or using company IP. We are not allowed to input any sensitive data into these models."

"We are not allowed to input any sensitive data into these models."

"Our policy isn't documented (yet), but it has been stated multiple times via meetings and emails that we are *not* to use any AI tools outside of what is provided [by company]. These are Microsoft AI (Teams Copilot I think it's called?) and Github Copilot."

"It was sent out in a memo to not use publicly accessible LLM's. AI is seen as a risk to security and intellectual property. Companies have started housing their own internal AI to circumvent this. Copying and pasting code from the companies code base to an AI is a terminable offense similar to pasting code on a public forum like Stack Overflow."

"We are expected to find ways to incorporate generative AI into our workflows. We also are not allowed to use company data to train 3rd party models in any way. We have our own internal ChatGPT service for anything that might require proprietary data."

"Don’t copy and paste code and push it to someone else’s server that we don’t manage, ya goofball. (I actually don’t know if we have an explicit policy)."

Do you use any generative AI tools such as GitHub Copilot, ChatGPT, Claude, Amazon CodeWhisperer, or others on the job?

[Charts: AI tools on the job - 2024 (left) | 2023 (right)]

There was an approximately 30% increase in the number of our grads who use genAI tools. Comments indicate that more of our grads have used multiple genAI tools than last year. The most mentioned tools, by a large margin, were ChatGPT and GitHub Copilot. After those two, other Microsoft Copilot tools were mentioned fairly frequently, Claude and JetBrains each came up several times, and there were a couple of mentions each for various Azure and AWS AI tools/platforms and for Phind.

How frequently do you use generative AI tools as a part of your work?

[Charts: How frequently do you use AI tools - 2024 (left) | 2023 (right)]

There was a significant increase in the number of grads who are using genAI tools every day on the job. In one year we’ve seen an increase from 30% daily users to 48.3% daily users - roughly a 60% relative increase (48.3 / 30 ≈ 1.6) in the number of daily genAI users. There was a small decrease in the number of those who use genAI only weekly, and a much larger decrease in those who say they never use genAI on the job.

Do you find generative AI tools helpful on the job?

[Charts: Do you find generative AI tools helpful on the job? - 2024 (left) | 2023 (right)]

The most notable thing here is that the number of respondents who said that genAI tools were not useful on the job dropped by half in the past year, from 30% to just under 15%. My overall sense of the many comments we got on this question, compared to last year’s, is that there was more specificity this year - more crispness around specific examples and specific use cases, and a general sense of greater comfort around what the genAI tools are good for - and not good for. That makes a lot of sense: everyone has had another year to learn prompting techniques, experiment with the tools, and try them on a wider range of tasks.

It also struck me that there was more consistency across comments regarding the most valuable use cases for genAI, more sense that we are discovering the most useful patterns for using these tools. Even those who expressed awareness of shortcomings of the tools often also cited positive use cases. 

Here are responses showing some of the most common use cases and also a couple of interesting examples of how individuals have found different use cases for multiple tools:

"This answer has changed a lot for me over the last 12-24 months. I used it with my Boss yesterday to write 5 different automated tasks that we had on the backlog to help us with some support type issues and it was tremendously helpful."

"To me it's a quick reference on syntax and a tool for learning. Sometimes I'll use it for debugging and give an example of the issue. It often helps me understand the error in more detail."

"I use Copilot mainly for auto completion of repetitive patterns like in tests. I use ChatGPT almost exclusively now for programming search topics that are outside of the scope of docs."

"Doing repetitive tasks, spotting errors, refactoring functions, generating types, writing scripts."

"I use Claude AI for proofreading, and ChatGPT for coding (Python)."

"As a person who is neurodivergent and still very much learning, AI tools really help me understand the concepts I'm working with or stay on track when I'm just not able to focus. Sometimes it’s what helps me to that "Ahah!" moment."

"Automates the simple stuff so you can spend more time on the complicated."

"It depends. I think you have to have a coding basis or background because sometimes it makes up answers or it makes your coding even more complicated than it has to be and you'll waste time trying to debug. If you have some idea of what you're doing, then it's helpful."

"I use the tools as a "coach" or "mentor". I ask questions and use the AI as a thought partner. I ask it to explain a piece of code. I rarely use it to "generate" code. If I do, it's something simple, like how to verify that a string is a valid date. I will also converse with GPT about programming concepts."

"It is very useful in generating code snippets for specific use cases, and being able to ask it questions about the snippets it produces. Using it as a learning tool when trying to debug or modify code that is unfamiliar or in a language I do not know. Asking lots of questions about unfamiliar coding languages is kind of like having a senior engineer in the chair with me. I also use it as an idea generating tool when trying to craft professional emails and messages."

Does the use of generative AI tools enhance your work experience and/or productivity?

[Charts: Do AI tools enhance productivity - 2024 (left) | 2023 (right)]

Another question with a big swing to the positive - almost a ⅔ reduction in No responses. Comments on this question were overall quite similar to the comments on the preceding question. It looks like in the next survey we can probably strike one of the two questions since they are so similar. 

However, the responses to this question did identify additional examples of good use cases for genAI - with lots of stress on its value as a tool for learning. We also got more comments that mentioned one or more negatives, often combined with one or more positives, such as:

"While it does not have all the answers, and sometimes spits out complete nonsense, in general AI is a helpful assistant to get things done more quickly. Senior devs are not always available to pair and AI can help get unstuck. It still requires the user to know what they are doing and be smart about it. It can also be a useful tool for learning or exploring alternative code solutions."

"I've found that generative AI tools are helpful in reducing the amount of time I spend doing monotonous or tedious code-related things, allowing me to focus on the actual problem-solving parts of my work."

"You have to know how to prompt well to get the answer you're looking for. If you can do that it can help with your work. Debugging, writing scripts, etc."

"AI has been able to steer me in the right direction when I am learning something new and unfamiliar and cut down on the amount of research time needed to get a new concept/project started."

"Because my senior is quite busy, I'm able to ask questions and get [an] explanation of things like having a personal tutor."

"While it does not have all the answers, and sometimes spits out complete nonsense, in general AI is a helpful assistant to get things done more quickly. Senior devs are not always available to pair and AI can help get unstuck. It still requires the user to know what they are doing and be smart about it. It can also be a useful tool for learning or exploring alternative code solutions"

"They speed up development by giving you more specific answers than general ones you can find from regular web searching. Also, AI is really good at describing things with examples."

Does the use of generative AI tools enhance your understanding of coding or other concepts?

[Charts: Do AI tools enhance your understanding of coding - 2024 (left) | 2023 (right)]

Another question where feelings are more positive than last year - Yes responses were up by approximately 50%. The comments on this question seemed to contain more negative or cautionary feedback about genAI than the previous questions, but also quite a few comments very similar to the ones for the two questions above. Since we’ve included lots of those above, we’ll hit a few more of the cautionary comments here:

"Often it gives me incorrect syntax or straight up hallucinations, it seems more helpful to me for getting ideas but there's usually a lot of incorrect stuff that needs to be fixed also. But fixing its mistakes can be helpful for learning. At the same time it could be a dangerous thing for someone with less experience if they don't fully understand concepts and are using AI as a helper. I think the main thing is to make sure you as a human understand whatever it is that you end up taking from the AI output and ensuring it is correct."

"It's all too often wrong/I need to spend more time reading through the code it's spit out that I'd probably be better off just writing things myself."

"Generative AI tools can be a tool, but it's easy to just insert code without understanding what's happening or if there's a better way of doing something. I do sometimes like having ChatGPT explain a code block if it's a little unclear where something is coming from, but again, if your fundamentals are not strong, it seems like it could be easy to rely too heavily on the tools."

"It highly depends on how you use it. I have found that it seems to do the exact opposite in many cases. I've seen a pattern amongst fellow software engineers where they will take the results at face value. Because it works or does what they want it to when they run it, they don't take the time to figure out what it actually does."

"Yes but with a caveat. When I’m trying to figure out complex code and solutions, the copilot responses are often inadequate. However, with additional prompting and my own knowledge I do figure it out. Eventually."

"As long as you keep in mind that it's a tool to assist your problem solving/debugging, and not a tool to replace your problem solving/debugging, it can be really informative. Especially for newer developers trying to learn."

"It'll only help if you take the time to understand the code yourself and not just copy paste."

But it’s not all negative/cautionary, for example:

"When stepping into a new project with which I may not be familiar, I often will pull out sections of the code and have ChatGPT explain it to me in detail. Or, if dealing with older styles of writing, I'll often ask to have it refactor the code so to better understand what is going on."

And there were many more comments offering examples of using genAI tools as learning aids / explainers that can be prompted and interacted with to be much more on point than something like Stack Overflow.

Do generative AI tools support your problem-solving and debugging processes?

[Charts: Do AI tools support debugging processes? - 2024 (left) | 2023 (right)]

A somewhat smaller increase in positive responses to this question than to most of the previous questions - only about a 23% increase in Yes responses. It seems this question doesn’t evoke as many of the positive use cases mentioned in the comments to earlier questions, such as value as a learning tool. Responses are nonetheless ⅔ positive, with several of the positive responses mentioning the value of a genAI tool as a form of rubber ducky, and several specific mentions of how genAI helps address CSS issues. As you can see below, there is a wide range of opinions regarding its value as a debugging aid.

"I don't use it for this purpose. I still follow my previous debugging practices."

"Mostly around researching solutions to problems. I don't usually use it for debugging."

"It is not my immediate go-to for debugging. I live in the debugger whether it's in vscode or dev tools and I figure out the problem the majority of the time. I'd say only about 10% of my usage involves debugging. And even then, I create an example or post an error message directly to understand it better."

"It helps me see things I just can't. It's like rubber ducking but with a duck that speaks back."

"I have to solve its problems more than it solves mine 😅"

"I've used the VS Code Github Copilot extension to explain blocks of code to me. It will usually provide a possible solution, or at least explain why I'm running into the bug."

"It will give me some ideas to fix the questions or issues. I don't feel it will give me correct answers or debugging if the question is a little complex."

"Debugging: yes, sometimes. It can be helpful in getting answers for obtuse errors that I'm not sure where to even start with, but more often than not, my knowledge of the codebase and product supersedes that of the language model (for now)."

"AI is genuinely a game changer for debugging, small little details like a missing semicolon or just a slightly wrong syntax it catches in seconds where sometimes it can take a person hours. It's also super helpful for me when it comes to problem solving because it's just quicker. It won't always be right, but a lot of times when it shows me a solution (even wrong) I can piece the missing parts together."

6 Observations That I Took Away

So - what can we take away from this year’s survey? Here are 6 observations that I took away, your mileage may vary:

1) Valuable Use Cases for Tech Professionals

There are many use cases where genAI tools are valuable to software developers, data analysts, data scientists, and other tech professionals.

2) Effective Tooling for Productivity

Generative AI is becoming part of the tooling that every professional software developer, data analyst, etc. needs to be aware of, comfortable with, and equipped with. It seems pretty clear that genAI can improve the productivity and effectiveness of tech professionals.

3) Awareness of Tool Limitations

Just as much, we need to develop awareness of the shortcomings, blind spots, and time-wasters of these tools (which is true of ALL of the tooling that we use as professional problem-solvers).

4) Human-in-the-Loop Needed for Effective Code Generation

One cannot use these tools to generate code - valid code, working code, code that meets requirements, secure code - if one is not trained as a developer. The tools cannot be trusted by themselves to generate working, valid, correct code. In the hands of untrained developers, they will generate more problems than solutions.
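To make that point concrete, here's a hypothetical illustration (ours, not from the survey) of the kind of plausible-looking code an LLM can produce: it runs, but it quietly mishandles edge cases that a trained developer would catch in review.

```python
def average(values, weights=None):
    """Weighted average - plausible genAI-style output with hidden flaws."""
    if weights is None:
        weights = [1] * len(values)
    # Flaw 1: zip() silently truncates if weights is shorter than values,
    # so mismatched inputs produce a wrong answer instead of an error.
    # Flaw 2: an empty values list raises ZeroDivisionError.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# A trained developer would add the guards the generated code omits:
def average_checked(values, weights=None):
    if not values:
        raise ValueError("values must be non-empty")
    if weights is None:
        weights = [1] * len(values)
    if len(weights) != len(values):
        raise ValueError("values and weights must be the same length")
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

The generated version passes a casual test; only someone who knows what to look for spots the failure modes.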

5) Learning Curve for Generative AI Proficiency

There is definitely a learning curve to becoming adept at using these tools across the range of use cases identified by our graduates. We can help future graduates up this learning curve by introducing them to genAI at the appropriate place in their training.

6) Employer Restrictions on Generative AI Usage

There are still employers - and not just a few of them - that do not permit the use of these tools for many purposes, for a range of reasons: protection of proprietary company information, protection of the PII of patients and clients, and lack of trust in the security of the tools.

What’s really striking is that despite all the hype, genAI certainly doesn’t appear to be taking away anyone’s job. Nobody is talking about examples of these tools doing large scale coding, independent creation of new applications, etc. On the other hand, there is lots of discussion of “human in the loop” use cases; of complementing the human developer; of taking away tedious, repetitive tasks; of useful support for learning; and other use cases that allow the software developer or data analyst to focus on higher value problem-solving tasks. 

I also found the comment from the neurodivergent graduate regarding how they found value in genAI tools to be very interesting. Wouldn’t it be amazing if these genAI tools, originally touted by some as a way to replace human developers, actually turn out to open up application development and/or data analytics careers to individuals who are most in need of, and can most benefit from, personalized, non-judgmental assistance that “meets them where they are”? That might be the best news of all from our survey.

Topics: Alumni, Technology Insights