Americans lack trust in AI even as adoption climbs fast


Most Americans are using AI, yet most of them don’t trust it. That contradiction sits at the center of several major surveys published this week, painting a picture of a public that has integrated a technology it remains deeply skeptical about.

Americans lack trust in AI despite record adoption rates

A Quinnipiac University poll of nearly 1,400 Americans found that 76% say they trust AI rarely or only sometimes. Just 21% say they trust it most or almost all of the time, a striking figure given how embedded the technology has become in everyday workflows.
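For readers who want a rough sense of how precise figures like these are, the standard 95% margin of error for a sampled proportion can be sketched as below. This is a simplified back-of-the-envelope calculation assuming simple random sampling; Quinnipiac's actual methodology uses weighting, so its published margin will differ slightly.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The 76% distrust figure from a sample of roughly 1,400 respondents:
moe = margin_of_error(0.76, 1400)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 2.2
```

Even at the worst case (p = 0.5), a 1,400-person sample stays within about ±2.6 points, so the headline gap between 76% and 21% is far larger than sampling noise.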

The data suggests that familiarity is not translating into confidence. The more exposure Americans get, the more reservations they seem to develop, a pattern that puts the industry’s adoption narrative under pressure.

Quinnipiac professor Tamilla Triantoro, who specializes in business analytics and information systems, noted that younger Americans report the highest familiarity with AI tools, but are also the least optimistic about the labor market. That combination of high use and low hope is a warning sign for anyone betting that generational turnover will smooth out public resistance.

Jobs anxiety is concrete, not abstract

Workforce fears are not vague. According to the same Quinnipiac survey, 70% of respondents believe AI advances will lead to fewer job opportunities overall. Among currently employed Americans, 30% say they are very or somewhat concerned that AI will make their specific job obsolete.

Only 15% of Americans say they would be willing to work under an AI supervisor, one that assigns tasks and sets schedules. The vast majority reject that arrangement outright, even as companies quietly move toward more automated management layers.

The fear is not evenly distributed. Workers in roles with higher AI exposure are more likely to report anxiety, while those further from direct AI contact still worry about systemic effects on hiring and wages.

Regulation wanted, but the trade-offs are hard to swallow

A separate national survey from AI governance nonprofit Fathom, shared exclusively with Axios, found that nearly two-thirds of Americans now use AI weekly or more. The same group overwhelmingly wants stronger oversight, but balks when forced to confront the trade-offs regulation would require.

Among respondents, 40% say they feel excited about AI, while 23% describe themselves as concerned. That gap is narrower than the industry would prefer, and it comes with a condition: people want policymakers to maintain safety guardrails without ceding U.S. dominance in the global AI race.

That is a difficult set of demands to satisfy simultaneously. Stricter rules slow deployment; looser rules feed the distrust that the polls keep measuring. Washington has not yet found a formula that resolves the tension.

Industry optimism collides with public sentiment

At the Axios AI+DC summit last week, Meta vice president and chair Dina Powell McCormick argued that the U.S. will need an entirely new workforce for AI within a few years to stay competitive. She framed AI as an equalizer: a mostly affordable tool capable of democratizing access to industries and potential jobs.

That framing has not landed with the public. Senator Mark Warner (D-Va.) cited data showing AI is currently more unpopular with Americans than ICE. He argued that AI companies can be a positive force, but need to genuinely reckon with how the public experiences the technology encroaching on their daily lives.

White House science and technology adviser Michael Kratsios maintained that the Trump administration can pursue aggressive AI development while also addressing public concern. He also noted that President Trump convened major tech companies to agree that each new data center they build must be paired with its own dedicated power source, a response to growing anxiety about AI’s effect on consumer electricity bills.

Adobe’s chief legal officer Louise Pentland pushed back on displacement narratives, stating flatly that AI is not replacing human creativity. Anthropic’s head of public policy Sarah Heck added that AI is increasingly being deployed in ways that augment rather than eliminate roles. Both positions reflect what the industry wants the public to believe, but the polling suggests the message is not getting through.

What to watch as the trust gap widens

The central question is whether adoption without trust is sustainable. National Review’s analysis of the survey landscape describes AI as one of the most disliked forces in American life right now, a characterization that should concern both developers and enterprise buyers who need employee buy-in to deploy effectively.

For business decision-makers, the implication is direct. Rolling out AI tools into organizations where 76% of the workforce is skeptical and 30% fears job loss is not a neutral act. Resistance, workarounds, and reduced output quality are likely outcomes without deliberate trust-building strategies.

The policy window is also narrowing. State legislatures and federal agencies are actively debating regulatory frameworks, and public distrust is the wind in the sails of more restrictive proposals. Companies that ignore sentiment data do so at their own risk.

Frequently Asked Questions

  • How can individuals prepare for an AI-driven job market?
    To prepare, individuals can focus on developing skills that are complementary to AI, such as critical thinking, creativity, and emotional intelligence. Online courses and training programs in AI, data science, and analytics can also be beneficial. Additionally, building a professional network and staying adaptable can help individuals navigate the changing job landscape.
  • What are some potential consequences of stricter AI regulations on innovation?
    Stricter AI regulations could lead to increased costs and complexity for companies developing AI technologies, potentially slowing the pace of innovation. However, regulations could also drive the development of more transparent and explainable AI systems, ultimately leading to more trustworthy and widely adopted technologies. Some experts argue that regulations could also create new opportunities for companies that specialize in AI safety and compliance.
  • Are there any industries that are likely to be less affected by AI-driven automation?
    Industries that require human empathy, creativity, and complex problem-solving, such as healthcare, education, and social work, may be less affected by AI-driven automation. Additionally, industries that involve highly variable or unpredictable tasks, such as skilled trades and construction, may also be less susceptible to automation. However, even in these industries, AI is likely to have some impact, and workers will need to adapt to new technologies and workflows.

Laszlo Szabo / NowadAIs

Laszlo Szabo is an AI technology analyst with more than six years covering artificial intelligence developments. He specializes in large language models, ML benchmarking, and AI industry analysis.
