90% of coders use AI, but judge it unreliable

According to Google Cloud's DORA 2025 report, 90% of developers now integrate AI into their daily work. Yet fewer than a quarter of them genuinely trust its results. Between productivity gains and persistent skepticism, the industry is navigating a paradox.


In short

  • 90% of tech professionals now use AI in their daily work, up 14 points in one year.
  • Only 24% of developers genuinely trust the results these tools produce, according to Google.
  • 46% of developers actively distrust the accuracy of AI output, versus only 33% who say they trust it.
  • This massive adoption conceals cautious usage in which every generated line of code must be verified.

AI use surges among developers despite growing mistrust

Google Cloud's DORA 2025 report, published on Wednesday, delivers a clear-cut finding: nearly 90% of developers now use AI in their daily work, up from 76% in 2024.

These uses span the entire development cycle, from code generation to debugging, including the writing of technical documentation.

But behind this massive enthusiasm lies a more nuanced reality: only 24% of respondents consider the results produced by AI genuinely reliable.

The study, conducted with around 5,000 engineers worldwide, reveals that developers spend an average of two hours a day with AI assistants.

Yet most of them treat its suggestions as content to be systematically checked, much like spam. This paradox echoes recent findings from Stack Overflow, which show that distrust of AI jumped from 31% to 46% in one year, despite ever-broader adoption.

Even at Google, caution remains the rule. Ryan Salva, head of Gemini Code Assist, concedes that using AI in daily work has become unavoidable, but engineers know they must validate every line.

For his part, CEO Sundar Pichai nonetheless highlights a measurable gain, citing a 10% increase in team productivity thanks to these tools.


Between dependence and systemic risks

Dependence on artificial intelligence is no longer in doubt: 65% of developers acknowledge that they rely heavily on these tools even without trusting them fully. This paradox reflects an industry that can no longer do without AI, while remaining aware of its potential pitfalls.

Recent security flaws, such as the "CopyPasta" attacks highlighted by HiddenLayer, show how coding assistants can be weaponized for malicious ends.

Google is trying to frame this dependence with its DORA capabilities model, which emphasizes clear protocols, user-centered design, and supervised workflows to prevent AI from operating with uncontrolled autonomy. The objective is simple: maximize productivity gains without turning these tools into ticking time bombs.

Beyond engineering, this debate feeds into a broader concern. As other surveys have shown, Americans themselves see AI as a useful but dehumanizing tool, fearing it will erode creativity and social bonds. The software industry is no exception: developers regard AI as a brilliant but unpredictable colleague.

In short, AI has become a must for developers. Yet widespread distrust highlights a major challenge: building a digital future in which these technologies are not only used, but also deemed trustworthy. For now, AI remains an essential ally... under close watch.

