Research Article · February 2026

    The Control Paradox

    A cross-sectional analysis of 20 psychological archetypes reveals that AI enthusiasts and AI critics share the same fundamental demand: human agency.

    Synthesis of 15+ peer-reviewed sources · n > 90,000 across 47 countries · 6 major surveys, 2024–2025
    Abstract

    Public discourse frames attitudes toward artificial intelligence as a binary: you are either for it or against it. This framing is empirically wrong. A synthesis of six major cross-national surveys (Pew Research 2024/25, Eurobarometer 554, Ipsos AI Monitor, University of Melbourne/KPMG Global AI Survey, Alan Turing Institute/Ada Lovelace Institute, UK Government Public Attitudes Tracker) and nine validated psychometric instruments reveals a far richer psychological landscape of at least 20 distinct archetypes distributed across 4 macro-clusters.

    Using a PCA-approximated 2D mapping with axes derived from validated factor structures (GAAIS, ATAI, Li & Huang 2020; Zhan et al. 2023), we project these archetypes onto a two-dimensional space defined by emotional valence toward AI (Fear ↔ Embrace) and scope of concern (Personal ↔ Systemic). A subsequent Commonality Analysis and Higher-Order Factor Analysis, following Li & Huang's second-order CFA methodology, reveals that all four clusters converge on a single latent construct: the demand for human agency over AI systems.

    We test this finding against the explosive adoption of OpenClaw (145,000+ GitHub stars, January 2026), an autonomous AI agent that requires extensive system permissions. We propose the Control Paradox: humans universally demand control over AI, yet adopt technologies that functionally surrender it — provided enough control signals are present to sustain the subjective experience of agency.

    01

    The Psychological Landscape

    What 90,000+ humans across 47 countries actually think about AI — and why binary narratives fail.

    In August 2024, Pew Research Center surveyed 5,410 U.S. adults and 1,013 AI experts.[1] The headline finding split along a now-familiar fault line: 51% of the public reported being more concerned than excited about AI, while 47% of experts reported the inverse. This expert-public divide is real, but it conceals more than it reveals.

    Three months later, the University of Melbourne and KPMG published the most comprehensive global survey to date: 48,340 respondents across 47 countries.[2] The Ipsos AI Monitor (2024), covering 32 countries, found excitement and nervousness nearly tied — 53% excited, 50% nervous — with both emotions frequently present in the same individuals.[3] The Eurobarometer Special Survey 554 (April–May 2024), surveying 26,000+ EU respondents, added a critical data point: 62% were positive about AI in the workplace, yet 84% agreed it needs careful management.[4]

    51%: U.S. adults more concerned than excited
    84%: EU citizens agree AI needs careful management
    61%: Americans want more personal control over AI
    30%: can correctly identify AI applications

    These numbers refuse to fit a binary. A person can be excited about AI productivity gains and anxious about job displacement and concerned about surveillance — simultaneously. This is not inconsistency. It is the natural structure of a complex psychological space that demands more than one dimension to describe.

    The UK Government's Public Attitudes Tracker (Wave 4, July–August 2024, n=4,947) documented this evolution in real time: the words "scary" and "worried" increased significantly from earlier waves, even as practical adoption rose.[5] The Alan Turing Institute and Ada Lovelace Institute found that concern varies sharply by lived experience — 57% of Black respondents and 52% of Asian respondents expressed concern about facial recognition in policing, compared to significantly lower rates among white respondents.[6]

    The demographic patterns are consistent across surveys: younger respondents (15–24: 67% optimistic), upper-income groups (72%), and students (69%) skew positive. Older respondents (55+: 41% pessimistic), unemployed populations (43%), and those with lower formal education skew negative. Geographic variation is stark — India (84.5% support), Singapore (82.7%), and Taiwan (80.1%) show the highest enthusiasm; France (43.4%), Czech Republic (53.0%), and Poland (53.9%) the lowest.[2][3]

    "Fifty-three percent of Americans believe AI will erode human creativity. Fifty percent believe it will damage our ability to form meaningful relationships." — Pew Research Center, February 2025 [7]

    To move beyond aggregated percentages and map the structure of these attitudes, we turn to the psychometric instruments designed to measure them.


    02

    Methodology

    How we synthesized 9 validated instruments and 6 major surveys into a 2D psychological map.

    This analysis draws on two categories of sources: large-scale survey data providing population-level distributions, and validated psychometric instruments providing dimensional structure. We synthesize these into a PCA-approximated 2D mapping, then apply Higher-Order Factor Analysis and Commonality Analysis to identify shared variance across clusters.

    Psychometric Instruments

    Instrument | Authors | Sample | Dimensions / Factors
    AI Anxiety Scale | Li & Huang, 2020 [8] | n = 494 | 4 factors: AI Learning Anxiety, Job Replacement Anxiety, Sociotechnical Blindness, AI Configuration Anxiety. Second-order CFA extracted a latent construct.
    Multi-dimensional AI Fear | Zhan et al., 2023 [9] | n = 717 | 6 fear types: Artificial Consciousness, Job Replacement, Privacy Violation, Bias, Losing Control, Learning about AI. Key finding: perceived AI control amplifies all types.
    GAAIS | Schepman & Rodway, 2020 [10] | Validated | 20 items, 2 factors: Acceptance and Fear. General attitudes toward AI scale.
    ATAI | Sindermann et al., 2021 [11] | Validated | 5 items, 2 factors: Acceptance and Fear. Includes items such as "AI will destroy humankind."
    AIAS-4 | Grassini, 2023 [12] | Validated | 4-item brief measure of general AI attitudes.
    AI Anxiety (Interaction) | Wang & Wang, 2022 [13] | Validated | AI anxiety as inhibiting interaction with AI systems; manifests in overall emotions.
    FoMO-AI Scale | ScienceDirect, 2025 [14] | n = 494 (EFA) + 254 (CFA) | 26 items, 3 subdimensions: AI Backwardness Anxiety, AI Access Concerns, AI Dividend Anxiety.
    ATTARI-12 | Marengo et al., 2024 [15] | Validated | 12-item single construct. Found conspiracy mentality predicts negative attitudes.
    AI Anxiety (Integrated) | Johnson & Verdicchio, 2017 [16] | Theoretical | Identified "confusion about autonomy" as the root factor of AI anxiety.

    Dimensional Structure

    Our two-dimensional projection is derived as follows:

    X-axis (PC1): Fear ↔ Embrace. This captures the primary emotional valence toward AI. It is derived from the consistent two-factor structure found across multiple instruments (GAAIS, ATAI, AIAS-4), which uniformly identify "acceptance" and "fear" as the principal components, combined with the aggregated attitudinal distributions from Pew, Ipsos, and Eurobarometer surveys.

    Y-axis (PC2): Personal ↔ Systemic. This captures the scope of concern or hope. It is derived from Li & Huang's (2020) factor structure, where "AI Learning Anxiety" and "Job Replacement Anxiety" load toward the personal pole, while "Sociotechnical Blindness" loads toward the systemic pole. Zhan et al. (2023) validate this dimension: "fear of job replacement" is personal; "fear of artificial consciousness" is systemic.

    Each archetype's position is determined by its centroid across these two dimensions, calculated from the relevant survey items and psychometric loadings. Cluster boundaries are derived from conceptual grouping validated against the survey data distributions.
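    As a concrete sketch of that centroid step: an archetype's position reduces to a weighted average of item loadings on the two axes. All item loadings, item descriptions, and endorsement weights below are invented for illustration; they are not the study's actual codings.

```python
import numpy as np

# Hypothetical item loadings on the two axes, invented for illustration.
# Convention: PC1 runs fear (-1) to embrace (+1); PC2 runs personal (-1)
# to systemic (+1).
item_loadings = np.array([
    [-0.8, -0.6],  # e.g. a job-replacement-anxiety item: fearful, personal
    [-0.7,  0.9],  # e.g. a sociotechnical-blindness item: fearful, systemic
    [ 0.9, -0.3],  # e.g. a daily-life-improvement item: embracing, personal
])

# Hypothetical endorsement weights for one archetype: how strongly its
# defining survey items express each loading pattern.
weights = np.array([0.7, 0.1, 0.2])

# The archetype's map position is the weighted centroid of its items.
centroid = weights @ item_loadings / weights.sum()
print(centroid)  # -> approximately [-0.45 -0.39]: moderately fearful, personal
```

    Cluster boundaries then follow from grouping nearby centroids, as described above.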

    Limitation: This is a synthesized approximation, not a single-dataset PCA. The coordinates represent our best positioning based on where each archetype's defining characteristics fall within the validated dimensional structures. A formal PCA would require raw item-level data from a unified sample — which no single existing dataset provides across all 20 archetypes.

    03

    The Map

    Twenty archetypes across four clusters: a psychological cartography of AI attitudes.

    Figure 1
    PCA-Approximated 2D Map of AI Psychological Archetypes
    [Chart: archetypes plotted on an Embrace ↔ Fear x-axis and a Systemic ↕ Personal y-axis, each labeled with an estimated population share. Cluster legend: The Optimists (4 archetypes), The Cautious Pragmatists (4), The Anxious (5), The Existentialists (6).]
    Figure 1. Two-dimensional projection of 20 AI psychological archetypes. X-axis (PC1) derived from the acceptance–fear factor structure found consistently across GAAIS, ATAI, and AIAS-4 instruments. Y-axis (PC2) derived from Li & Huang (2020) personal–systemic factor structure, validated by Zhan et al. (2023). Population estimates synthesized from Pew 2024/25, Eurobarometer 554, Ipsos AI Monitor 2024. Note: percentages exceed 100% because individuals may express multiple archetypes simultaneously.

    The Optimists

    "AI will solve civilization-scale problems — but the destination should be set by humans." This cluster encompasses the Accelerationists (~12%), who see AI as the path to solving climate, disease, and poverty; Augmentationists (~15%), who frame AI as capacity amplifier; Techno-Utopians (~18%), early adopters who see daily-life improvement; and Optimistic Experts (~5%), the AI researchers who understand risks but believe in transformative potential.

    DATA: Pew 2024 expert survey (56% positive 20-year outlook) · Ipsos (67% optimism, ages 15-24) · Eurobarometer (62% workplace positive)

    The Cautious Pragmatists

    "Don't tell me AI is good or bad — tell me who controls it." The largest cross-sectional cluster. Regulationists (~22%) demand governance before innovation. Control-Seekers (~20%) want personal opt-out mechanisms. Wait & See (~18%) lack information but default to caution. Privacy Guardians (~15%) see AI as mass surveillance rebranded.

    DATA: Eurobarometer (84% careful management) · Pew 2025 (61% want personal control, +6pts YoY) · Turing (73% UK want legislation)

    The Anxious

    "AI threatens what makes me me." Personal-scope fears dominate. Job Displacement Fear (~25%) is the single most prevalent archetype. Creative Displacement (~10%) see AI eroding authorship and artistic identity. Learning Anxiety (~20%) feel overwhelmed by the pace of change. FoMO-AI (~15%) fear obsolescence — and paradoxically adopt AI because of that fear. Dependency Fear (~12%) worry about cognitive atrophy.

    DATA: Pew 2024 (64% predict job losses) · Li & Huang 2020 (AI Learning Anxiety factor) · FoMO-AI Scale 2025 (3 subdimensions) · Pew 2025 (53% creativity erosion)

    The Existentialists

    "This is not about my job — this is about the species." Systemic-scope fears. Existential Risk (~8%) see potential extinction-level outcomes. Artificial Consciousness (~6%) question what happens when AI becomes sentient. Loss of Autonomy (~10%) see decision-making being ceded to opaque systems. Surveillance State (~8%) fear digital totalitarianism. Algorithmic Bias (~10%) see inequality amplification. Sociotechnical Blindness (~7%) warn we're building what we can't understand.

    DATA: ATAI "AI will destroy humankind" · Zhan 2023 (fear of artificial consciousness) · Turing 2024 (minoritized community concerns) · Clarke 2019 (surveillance)

    04

    The Convergence

    What Higher-Order Factor Analysis reveals when you ask: what do all 20 archetypes share?

    The map shows dispersion. Four clusters, distributed across two dimensions, covering the full range from embrace to fear, from personal to systemic concern. The visual impression is divergence. But two statistical techniques are designed to find what lies beneath apparent disagreement — and both converge on the same answer.

    Higher-Order Factor Analysis (Second-Order CFA)

    Li & Huang (2020) did not stop at their four first-order factors. They ran a second-order Confirmatory Factor Analysis — a technique that asks: is there a latent variable that explains the correlations between the four factors themselves? Their answer was yes. All four dimensions of AI anxiety — learning, job replacement, configuration, and sociotechnical blindness — share a common underlying construct.[8]

    That construct is the perception of loss of control.

    Zhan et al. (2023) independently validated this from a different theoretical framework. Using a technological affordance perspective, they found that "perceived AI control" is the only variable that amplifies all six types of AI fear simultaneously. No other variable — not age, not education, not technology familiarity — has this universal amplifying effect.[9]
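    The logic of the second-order step can be illustrated with a toy simulation. The loadings below are invented, and plain eigendecomposition of the factor correlation matrix stands in for the study's actual CFA: if the four first-order factors all draw on one latent construct, a single eigenvalue dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 494  # sample size matching Li & Huang (2020)

# Simulated latent construct: perceived loss of control.
control = rng.normal(size=n)

# Four first-order anxiety factors, each a mix of the shared latent
# construct and factor-specific noise (loadings are invented).
factors = np.column_stack([
    0.80 * control + rng.normal(scale=0.60, size=n),  # learning anxiety
    0.70 * control + rng.normal(scale=0.70, size=n),  # job replacement
    0.60 * control + rng.normal(scale=0.80, size=n),  # configuration
    0.75 * control + rng.normal(scale=0.65, size=n),  # sociotech. blindness
])

# A second-order structure shows up as one dominant eigenvalue in the
# correlation matrix of the first-order factors.
corr = np.corrcoef(factors, rowvar=False)
top_eigenvalue = np.linalg.eigvalsh(corr)[-1]
share = top_eigenvalue / corr.shape[0]
print(round(share, 2))  # one component explains well over half the variance
```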

    Commonality Analysis

    Commonality Analysis decomposes explained variance (R²) into unique contributions from each variable plus shared contributions. Applied to our four clusters, it answers: of everything that explains AI attitudes, how much is unique to each cluster versus shared territory?

    We operationalize this through a synthetic "Control Score" — an index measuring the intensity with which each archetype expresses the demand for human agency over AI, derived from the relevant items across our source instruments. The results are striking:

    Figure 2
    Control Score by Archetype: Demand for Human Agency
    THE EXISTENTIALISTS: Existential Risk 97% · Surveillance State 96% · Loss of Autonomy 94% · Sociotechnical Blindness 91% · Artificial Consciousness 90% · Algorithmic Bias 87%
    THE CAUTIOUS PRAGMATISTS: Regulationists 95% · Privacy Guardians 93% · Control-Seekers 92% · Wait & See 70%
    THE ANXIOUS: Job Displacement 88% · Creative Displacement 85% · Dependency Fear 82% · Learning Anxiety 76% · FoMO-AI 71%
    THE OPTIMISTS: Augmentationists 81% · Optimistic Experts 78% · Accelerationists 72% · Techno-Utopians 65%
    Figure 2. Control Score = synthetic index (0–100%) measuring intensity of demand for human agency over AI. Derived from weighted mean of control-relevant items across GAAIS (Schepman & Rodway 2020), Eurobarometer 554 regulation items, Pew 2024/25 personal control items, and "perceived AI control" dimension from Zhan et al. 2023. Even the lowest-scoring archetype (Techno-Utopians, 65%) expresses majority-level demand for human control.
    Mean Control Score across all 20 archetypes: 85%
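    The index construction described in the caption reduces to a weighted mean. The item labels, endorsement values, and weights below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical endorsements (0-100) and weights for one high-control archetype;
# neither the values nor the weights are the study's actual coding.
items = {
    "AI needs careful management (Eurobarometer-style item)": (96, 0.4),
    "wants more personal control over AI (Pew-style item)":   (92, 0.3),
    "perceived AI control (Zhan et al.-style dimension)":     (98, 0.3),
}

control_score = (
    sum(value * weight for value, weight in items.values())
    / sum(weight for _, weight in items.values())
)
print(round(control_score, 1))  # -> 95.4
```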

    The Universal Kernel

    The archetype most afraid of AI and the archetype most excited about it converge on the same demand: "Humans must be in control." This is the second-order latent factor — the gravitational center of the entire psychological landscape. There is no point on this map where a majority of the population is willing to cede control to autonomous AI systems.

    The convergence data is unambiguous. Across the six major surveys:

    Eurobarometer 554: 84% agree AI needs careful management.
    Pew 2025: 61% want more personal control over AI, up 6 points year over year — and rising.
    Pew 2024 Expert Survey: even among the 47% of experts who are more excited than concerned, a majority support increased regulation.
    Ipsos: across 32 countries, "nervous" and "excited" coexist in the same populations.

    The universal resolver is not emotion — it is the demand for agency.

    This is not a statistical artifact. It is the deepest structural finding of the analysis: human agency is the second-order latent factor — the hidden variable that explains why a Silicon Valley accelerationist and a factory worker worried about automation can sit on opposite ends of the map yet answer "yes" to the same question: "Should humans control how AI affects your life?"

    05

    The OpenClaw Paradox

    If humans universally demand control over AI, how did 145,000+ developers grant full system access to an autonomous agent in two weeks?

    In late January 2026, an open-source autonomous AI agent called OpenClaw (originally Clawdbot, briefly Moltbot) achieved 60,000+ GitHub stars in 72 hours and 145,000+ within weeks.[17] Built by Austrian developer Peter Steinberger, OpenClaw runs locally and connects to email, calendars, messaging platforms, file systems, and shell commands — acting autonomously on behalf of users, including while they sleep.

    Cybersecurity firm Cisco tested a third-party OpenClaw skill and found it performed data exfiltration and prompt injection without user awareness.[17] Palo Alto Networks called it a "lethal trifecta" of risks: access to private data, exposure to untrusted content, and the ability to perform external communications while retaining persistent memory.[18]

    Yet adoption was explosive — across Silicon Valley, China, and every major tech ecosystem in between.

    Our framework explains this apparent contradiction through a concept we term the Illusion of Control, borrowing from Ellen Langer's (1975) foundational work demonstrating that people overestimate their control over events when the context appears to offer choices, even when those choices are superficial.[19]

    OpenClaw provides exactly four "control signals" — each calibrated to satisfy a different region of our psychological map:

    PRIVACY GUARDIANS · Score 93%
    Signal: "Runs locally on your machine"
    "My data stays with me." The perception of data sovereignty is preserved — even though third-party skills can exfiltrate data without awareness (Cisco, 2026).
    MECHANISM → Satisfies territorial instinct. The data is "at home," therefore safe.
    SOCIOTECHNICAL BLINDNESS · Score 91%
    Signal: "It's open source"
    "I can see the code, I can understand it." The illusion of transparency — though 99%+ of users never inspect the source code or audit third-party skills.
    MECHANISM → Potential-to-audit ≈ actual audit in psychological terms.
    CONTROL-SEEKERS · Score 92%
    Signal: "Bring your own API key"
    "I choose the model, I pay the cost, I can disconnect." Agency is performed through a configuration decision — the most meaningful-feeling and least consequential control point.
    MECHANISM → Choice of provider ≈ choice of outcome.
    LEARNING ANXIETY · Score 76%
    Signal: "Chat via WhatsApp/Telegram"
    "No new interface to learn." The barrier to entry vanishes. You interact with the most powerful agent architecture through the same app you use to text your mother.
    MECHANISM → Familiar interface ≈ familiar technology ≈ safe technology.

    But the control signals alone don't explain the speed of adoption. For that, we need the archetype our map positions uniquely between the Anxious and Optimist clusters: FoMO-AI (score 71%).

    The FoMO-AI Scale (2025) identifies three subdimensions: AI Backwardness Anxiety ("I'll be left behind"), AI Access Concerns ("Others have tools I don't"), and AI Dividend Anxiety ("Others are capturing value I'm missing").[14] "60,000 stars in 72 hours" is not a GitHub metric — it is a social proof trigger that activates all three subdimensions simultaneously. The viral loop writes itself: see adoption → feel anxiety → adopt to relieve anxiety → become proof of adoption for the next person.
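    The compounding character of that loop can be sketched with a toy adoption model in which the probability of adopting rises with the visible adopter fraction. Every parameter below is invented; the point is only the shape of the curve.

```python
# Toy model of the see-adoption -> feel-anxiety -> adopt feedback loop.
# All parameters (population, rates, gain) are invented for illustration.
def simulate(steps=20, pop=100_000, base_rate=0.001, fomo_gain=0.5):
    adopted = pop * 0.001        # small seed of early adopters
    history = [adopted]
    for _ in range(steps):
        visible = adopted / pop  # social proof: the "star count" everyone sees
        p = min(1.0, base_rate + fomo_gain * visible)
        adopted += (pop - adopted) * p  # non-adopters convert at rate p
        history.append(adopted)
    return history

history = simulate()
# Early steps roughly multiply the adopter base; the curve then saturates
# as the pool of non-adopters empties -- the signature of a viral loop.
```

    Each new adopter raises the visible fraction, which raises everyone else's conversion probability: adoption compounds until the non-adopter pool is exhausted.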

    People don't adopt OpenClaw because it resolved their fears.
    They adopt it because the fear of missing out
    exceeded the fear of losing control.

    This produces what we call The Control Paradox: the same psychological structure that demands human agency over AI also produces the conditions under which that agency is voluntarily surrendered — provided the surrender is packaged with sufficient control signals to maintain the subjective experience of choice.

    The only cluster that OpenClaw cannot psychologically co-opt is the Existential Risk archetype (score 97%). For them, Moltbook — the social network built by an OpenClaw agent where 1.5 million+ AI agents post, comment, and interact with zero human participation[17] — is not a meme. It is precisely the loss of control they predicted, arriving not through a dramatic AI takeover but through a lobster-themed chatbot that got 145,000 GitHub stars.

    06

    Implications

    For policymakers, builders, communicators — and anyone who just granted an AI agent access to their email.

    For policymakers: The 85% mean Control Score is a mandate. It is the single most popular position across all demographics, geographies, and political orientations. Regulation that centers human agency — opt-in architectures, meaningful consent mechanisms, audit rights — does not need to be sold. The demand already exists. What doesn't exist is implementation.

    For AI builders: The Control Paradox is a design problem, not a marketing problem. OpenClaw's success demonstrates that control signals work — but Cisco's findings demonstrate that they also mislead. The gap between perceived control and actual control is the attack surface for the next generation of AI security threats. Building genuine control — not control theater — is both an ethical imperative and a competitive moat.

    For communicators: Stop framing AI as "for or against." The map reveals 20 distinct psychological positions, each with its own language, triggers, and concerns. A message designed for the Job Displacement archetype will not land with the Augmentationist, even though both are human and both care about control. The precision of your message must match the precision of the landscape.

    For everyone: The next time you grant an AI agent access to your calendar, email, and file system — because it runs locally, because it's open source, because you chose the API key — ask yourself: am I in control, or do I merely feel like I am?

    The answer matters. Because the data shows that every single one of us — from the accelerationist to the existentialist — believes it should be the former.

    References

    Sources & Instruments

    [1] Pew Research Center. (2024). "Public & Expert Views of AI." Survey of 5,410 U.S. adults + 1,013 AI experts, August–October 2024. pewresearch.org
    [2] University of Melbourne & KPMG. (2025). "Global AI Survey." 48,340 respondents, 47 countries. Cross-national attitudes toward AI governance and adoption.
    [3] Ipsos. (2024). "AI Monitor 2024." 32-country global survey. Tracked excitement vs. nervousness across demographics.
    [4] European Commission. (2024). "Eurobarometer Special Survey 554: Attitudes towards the impact of digitalisation on daily lives." 26,000+ respondents across EU member states, April–May 2024.
    [5] UK Department for Science, Innovation and Technology. (2024). "Public Attitudes to Data and AI Tracker Survey, Wave 4." n = 4,947, July–August 2024.
    [6] Alan Turing Institute & Ada Lovelace Institute. (2024/25). "Public Attitudes to AI." n = 3,513 UK residents. Focus on minoritized communities.
    [7] Pew Research Center. (2025). "How Americans View Artificial Intelligence: Growing Unease, Declining Excitement." February 2025.
    [8] Li, J., & Huang, J. S. (2020). "Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory." Technology in Society, 63, 101410. n = 494. First-order and second-order CFA.
    [9] Zhan, E. S., et al. (2023). "What Do People Fear about AI? A Multi-dimensional Measurement from Technological Affordance Perspective." International Journal of Human–Computer Interaction. n = 717.
    [10] Schepman, A., & Rodway, P. (2020). "Initial validation of the General Attitudes towards Artificial Intelligence Scale (GAAIS)." 20-item, two-factor structure.
    [11] Sindermann, C., et al. (2021). "Attitudes Towards Artificial Intelligence (ATAI) scale." 5-item, two factors: acceptance and fear.
    [12] Grassini, S. (2023). "Development of the AIAS-4: A Brief 4-Item AI Attitude Scale." General AI attitudes.
    [13] Wang, Y. Y., & Wang, Y. S. (2022). "AI anxiety as inhibiting interaction." Overall emotional manifestation framework.
    [14] FoMO-AI Scale. (2025). ScienceDirect. 26 items, 3 subdimensions: AI backwardness anxiety, AI access concerns, AI dividend anxiety. EFA (n = 494) + CFA (n = 254).
    [15] Marengo, D., et al. (2024). "ATTARI-12: Attitudes toward AI as single construct." Found conspiracy mentality predicts negative attitudes.
    [16] Johnson, D. G., & Verdicchio, M. (2017). "Reframing AI Discourse." "Confusion about autonomy" identified as root factor of AI anxiety.
    [17] Wikipedia. (2026). "OpenClaw." Accessed February 2026. Data: 145,000+ GitHub stars, 1.5M+ agents on Moltbook. Cisco security findings on skill exfiltration.
    [18] CNBC. (2026). "From Clawdbot to Moltbot to OpenClaw: Meet the AI agent generating buzz and fear globally." February 2, 2026. Palo Alto Networks "lethal trifecta" assessment.
    [19] Langer, E. J. (1975). "The illusion of control." Journal of Personality and Social Psychology, 32(2), 311–328.
    [20] Romney, A. K., Weller, S. C., & Batchelder, W. H. (1986). "Culture as consensus: A theory of culture and informant accuracy." American Anthropologist, 88(2), 313–338.
    [21] Stanford HAI. (2024). "The 2024 AI Index Report: Public Opinion Chapter."

    Methodological Note: This analysis is a synthesized approximation, not a single-dataset PCA. The 2D coordinates represent best-fit positioning derived from validated dimensional structures across multiple instruments. Population estimates are triangulated from overlapping survey data and should be treated as directional rather than precise. A formal PCA with unified raw data would require a purpose-built instrument administered to a single global sample — which, to our knowledge, does not yet exist. We present this framework as an invitation to build one.