Developers are embracing AI-enabled tools. Here's how that's changing the way they code – SiliconANGLE
SPECIAL REPORT: GENERATIVE AI TRANSFORMS EVERY INDUSTRY
Ever since generative artificial intelligence tools such as OpenAI LP’s ChatGPT and Google LLC’s Bard jumped into the limelight, the world has become enamored with AI’s ability to hold conversations and do impressive work, and one of the most prominent early uses is helping software developers write code.
Since ChatGPT shot to popularity with its ability to hold conversations and even write essays and stories, a number of code-generating chatbots and developer assistants have flocked to the market. These include GPT-4, the model that powers the premium ChatGPT Plus service; Microsoft Corp.’s GitHub Copilot coding assistant; Amazon Web Services Inc.’s CodeWhisperer; and many more.
A survey of more than 90,000 developers from Stack Overflow, released in June, showed that almost 44% used AI tools in their work. That number is most likely even higher by now, since an additional 26% at the time were open to using them soon. More than 82% of the respondents said they used the tools to produce code, 48% for debugging and getting help, and 30% for learning about a particular codebase.
To get a better understanding of how developers are using AI tools and how it’s affecting their day-to-day work, SiliconANGLE spoke to a few developers to get a view of their experience. (This and other trends in generative AI will be explored Tuesday and Wednesday, Oct. 24-25, at SiliconANGLE and theCUBE’s free live virtual editorial event, Supercloud 4, featuring a big lineup of prominent executives, analysts and other experts.)
There are two major modes for AI coding tools in software development. Chatbots such as ChatGPT and Meta Platforms Inc.’s Code Llama can be prompted to generate snippets, entire functions or large blocks of code in well-known programming languages, or to check code for bugs and suggest improvements. Then there are coding assistants such as GitHub Copilot and Tabnine Ltd.’s code assistant, which suggest code as developers write, filling out swaths of code by predicting what comes next based on what has already been written.
Oliver Keane, an associate web developer at the digital transformation company Adaptavist Group Ltd., said he uses ChatGPT as an AI coding companion regularly to get code foundations out of the way quickly. For example, he uses it to prepare a project such as a website for a client to match specific needs, such as a particular administration panel.
“So, an example of using it is that I wanted to create like a content management system, where they could edit and update FAQs,” said Keane. “What I asked GPT to do was please create me a method to allow admins to update the FAQ with questions and answers, and because it’s been used in a long chat thread before, it has all the context behind it. It gives me something that I can just drop in and it works.”
Keane said he could have coded it himself, but it might have taken 15 to 20 minutes to get working, and then he’d probably have had to modify it to fit the admin panel and the client’s needs. With GPT, he said, he gets a working solution on the first try about 80% of the time.
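The kind of admin-only FAQ-update method Keane describes might look something like the following minimal sketch. The class, method names and in-memory store here are illustrative assumptions, not his actual code or the chatbot's real output.

```python
# Hypothetical sketch of an admin FAQ-update method like the one
# described above. The FaqStore class and its in-memory dict are
# illustrative assumptions, not actual project code.
class FaqStore:
    def __init__(self):
        self._faqs = {}  # maps question text -> answer text

    def update_faq(self, is_admin, question, answer):
        """Allow admins to add or update an FAQ entry."""
        if not is_admin:
            raise PermissionError("Only admins may update FAQs")
        self._faqs[question] = answer
        return self._faqs[question]

store = FaqStore()
store.update_faq(True, "How do I reset my password?", "Use the account page.")
```

A snippet at this level is exactly the sort of boilerplate that is quick to verify but tedious to type, which is why handing it off to a chatbot can save those 15 to 20 minutes.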
However, it does take a little up-front work. ChatGPT and other chatbots require some history or context to work well, he explained, so a conversational thread about a specific project needs to be built up. Essentially, developers have to build a rapport with the tool by prompting it with the code they’re working on until it understands the project; after a while it begins to provide better responses.
“You need efficient prompts and you need to build up a history with the AI in the chat thread that you use, and then it becomes really effective,” Keane said.
Generally, other tools that attach a chatbot experience to an editor environment don’t always require this sort of warm-up, because they can be trained on the company’s codebase or fed a project. When a developer is working with a standalone chatbot, however, Keane explained, building up that context is the most efficient way to get the best results.
Once that context is built up, developers can use these models as part of code reviews, to automate the discovery of bugs, and to dive back into old work, using the model as a “pair programmer” to talk through how to make old code more consistent. The interface makes the tool feel a lot more like talking to another person, one who is also capable of delivering code on demand and making modifications when asked. Keane said that 20% of the time, the AI tool doesn’t give him what he needs out of the box, but he can follow up with a refined prompt to get closer to what he wants.
Chatbots can also be extremely useful for learning new frameworks and coding languages. As long as the LLM is well-trained with documentation and coding information, it can be used as a tutor to bring a developer up to speed very quickly. In fact, coding chatbots are often sought out more than developer community websites such as Stack Overflow thanks to the conversational capability and back-and-forth about code, especially when it comes to understanding why a piece of code works in a particular way.
As for code completion tools, Joseph Reeve, software engineering manager at digital analytics platform company Amplitude Inc., said tools such as GitHub Copilot provide a totally different experience from chatbots, but they can also make people better coders.
“I’ve noticed myself and other people writing code with tools like Copilot change in an interesting way: It forces you to write better code, even though you’re not writing most of the code,” Reeve said. “It means that the small bits of code that you do write to sort of encourage your copilot to do what you want it to do, become very descriptive, and very precise, much more so than you might expect initially.”
The reason coders must write better, more precise code, he explained, is that the AI will go haywire or respond strangely if the code isn’t written precisely or descriptively the first time. The assistant can be seen veering off in suggested directions that have nothing to do with what the developer wants as they type. But it’s possible to get it back on track the further into the desired function they type, until it finally settles on what they want. The more precision, the faster that happens.
“It’s quite useful to think of the AI as humans that are just not very good at following instructions,” said Reeve. “That means in so many different parts of product development, software engineering, whether that’s trying to build a feature on top of them, or trying to get them to help you write some code, you actually end up sort of being able to get very quick feedback from them about the ways in which your instructions are bad.”
This behavior has led him to watch for that feedback and prepare for it. At first, the tool aims to predict short stints of code, perhaps five lines, then 10, then eventually 20 lines into the future. In many cases this is code he probably would have written anyway, saving time and providing momentum.
“In that sense, it feels pretty magical, it feels a little bit like time travel, like I’ve just skipped 30 seconds of having to write out, copy, paste this line three times and edit,” said Reeve. He explained that it has given him a lot more time to put into working on other parts of his job that don’t involve coding, such as generating marketing assets, building websites or working on things that might be needed on the business side.
Some 37% of the professional coders responding to the Stack Overflow survey cited improved productivity as the main benefit of the tools, with greater efficiency (27%) and speed of learning (27%) as secondary benefits. Although developers just learning to code prioritized learning, productivity was the top benefit cited across all types of developers.
These tools may seem extremely useful and perhaps quite beneficial to developer workflows, but they can also be a source of problems. For example, large language models can suffer from producing misinformation or “hallucinating,” which means that they can produce problematic code from time to time.
When it comes to using a tool such as a chatbot, most of the bugs produced might be caught by the compiler or by a veteran coder reviewing the work. Code-suggesting AIs can produce similar issues, which can mean a bit of time spent figuring out what went wrong after the fact.
For example, Reeve said, sometimes there can be some anxiety when Copilot produces 20 or more lines of code at once. “It does also sometimes get things quite wrong,” he said.
In one case, it happened while he was working on a graphical editing tool similar to Figma or Canvas, and the AI tool gave him a large chunk of code. He was overjoyed, since it did all at once a bunch of math he had done before, compressing perhaps five minutes of work into mere seconds. Hours later, however, a bug cropped up that made the mouse move in strange ways, and it eventually turned out to be a “greater than” sign where a “less than” sign belonged in the AI-generated code.
“It looked right at the time, but it turned out to be wrong,” Reeve said. “So, there’s a certain amount of anxiety when it starts building that far into the future.”
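A flipped comparison like the one Reeve describes is easy to miss precisely because the code "looks right." The clamping logic below is a hypothetical reconstruction of that class of bug, not the actual Amplitude code: a single reversed sign makes a cursor snap to the canvas edge instead of staying inside it.

```python
# Hypothetical reconstruction of a flipped-comparison bug like the
# one described above (not the actual project code).
def clamp_buggy(x, low, high):
    """Keep a coordinate inside [low, high] -- with the AI's flipped sign."""
    if x > low:          # bug: should be 'if x < low'
        return low       # almost every position snaps to the low edge
    return min(x, high)

def clamp_fixed(x, low, high):
    """Keep a coordinate inside [low, high]."""
    if x < low:
        return low
    return min(x, high)

clamp_fixed(150, 0, 100)   # 100: cursor stays at the canvas edge
clamp_buggy(150, 0, 100)   # 0: one flipped sign sends it to the wrong edge
```

Both versions compile and both return values in range, which is why this kind of bug sails past the compiler and only surfaces hours later as "the mouse moving in strange ways."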
From the Stack Overflow survey, only about 42% of the developers asked said they trust the AI models, with 31% having more doubts and the rest showing more serious concerns about the outputs. Some AI models can also be more problematic than others, with higher rates of hallucinations.
The other side of this is that these tools let people produce code rapidly in ways they haven’t before, mostly by writing it for them. More senior developers have raised the concern that this might divorce newer developers from fully understanding the fundamentals of the programming languages they’re using. It may sound curmudgeonly, but having too much of the busywork done for a developer also means it might be that much harder to track down an annoying bug later.
Jodie Burchell, developer advocate for data science at the software development tools company JetBrains, said that in the end, AI coding tools and assistants should be treated as exactly that: tools and assistants. At the end of the day the coder is still the responsible party and needs to learn the craft and spend the time with the code.
“If you have a model that is giving you bad information, it’s still incumbent upon you, as a developer, to make sure that that piece of code still does exactly what you want it to do — and it’s not necessarily always straightforward,” Burchell said. “I would say there’s no free lunch, you can’t get away with just letting these models develop code for you and pass it into your codebase without any sort of critical thinking.”
The 2023 Accelerate State of DevOps Report from Google LLC’s cloud division, released this month, found that as AI tools are incorporated into developers’ toolsets, they appear to improve measures of individual well-being such as burnout and job satisfaction only slightly, and had a neutral or even negative effect on group-level outcomes such as overall team performance and software delivery metrics.
“We speculate that the early stage of AI-tool adoption among enterprises might help explain this mixed evidence,” the researchers said. “Likely, some large enterprises are testing different AI-powered tools on a trial basis before making a decision about whether to use them broadly.”
By and large, in spite of the potential issues that AI tools might cause, growing numbers of developers have shown an interest in using them in their workflows, according to the Stack Overflow survey. That’s true of developers learning to code more so than professionals, at 54% versus 44%. This sentiment was echoed by Burchell, who said that veteran coders tended to dislike the new AI tools and were less likely to adopt them.
Overall, it’s still early days when it comes to generative AI developer tools, although the adoption pace is fairly rapid. Companies such as OpenAI and Meta continually upgrade their models, such as GPT-4, Codex and Code Llama, so that they can be integrated into more tools, and the latest iteration of GitHub Copilot is partially powered by GPT-4.
As these models and tools continue to improve and are integrated with more parts of the development process, developers and engineers may see a future where much of their coding time is spent working with an AI tool. Learning how to write proper prompts, writing more precise code to guide predictive completion, and understanding the limitations of the models will probably prepare them for that future.