Creativity in the Age of Generative AI
An Illustrative Review of Divergent Thought
Genesis
The Primordial Soup of Thought
Nascent in the dark matter of one's mind, a solitary thought lies impotent. The atomic units of thought are types of information — sensations, sounds, images, words — and, when multiple units begin to associate, a creative thought is born. When the atomic units are words (or "tokens" in machine speak), their emergence is shaped in part by the background, where invisible gravitational forces, like personality, set the likelihood of creativity itself. The survival of a creative thought in one's mind follows a Darwinian logic of variation and retention: new ideas are generated through the attraction of distant concepts, and then selected through evaluation. Further afield, the survival of a creative thought outside of one's mind, in the form of, say, a published or produced piece of work, depends in large part on the level of pressure in one's environment. Too little, and an idea may fail to coalesce; too much, and innovation will be suffocated. The conditions that allow this process to flourish, and the conditions that quietly erode it, are the subject of this essay.
The Measure of a Thought
Divergent Associations
If we learn to observe our thoughts, we might notice that they have a form: sometimes they appear as visions or sounds, and at other times as symbols or words. The linguistic shaping of thought affords scientists tools to measure creativity (surveys, interviews, journaling), and a salient concept unifying semantic creativity has emerged: our ability to produce sets of different words provides a measure of divergent thinking. The Divergent Association Task demonstrates that naming words as far apart from each other as possible is a strong indicator of creative thought. In fact, this characteristic of creativity spans domains (visual and musical artists have been found to produce greater semantic divergence) and is associated not only with thought, but with creative achievement as well. The essence of semantic divergence lies in calculating the "difference" between words. While this calculation was previously a highly subjective task, the recent advent of embedding models (compressed statistical representations of large text corpora) has provided an efficient, objective alternative. A test of divergence is a slight misnomer, for arriving at a set of highly diverse words requires converging. Convergent thinking involves evaluating an existing set of constraints and identifying common patterns amongst them, and it happens both before and after searching for differences. This dual-process rhythm of expansion and evaluation, far from passive free association, depends on executive function, an actively managed cognitive capacity that strengthens with use and weakens with neglect. Constraints play an important role not only in bounding our search space but also in stimulating us to search harder.
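The embedding-based scoring idea can be sketched in a few lines. Everything below is illustrative: the three-dimensional vectors stand in for a real embedding model (real models such as GloVe use hundreds of dimensions), and `divergence_score` is a toy analogue of DAT-style scoring, not the published scoring procedure.

```python
import itertools
import math

# Toy 3-dimensional "embeddings". The vectors are invented for
# illustration; a real scorer would look words up in a trained model.
EMBEDDINGS = {
    "cat":    [0.9, 0.1, 0.0],
    "dog":    [0.8, 0.2, 0.1],
    "galaxy": [0.0, 0.9, 0.4],
    "sonnet": [0.1, 0.3, 0.9],
}

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def divergence_score(words):
    """Mean pairwise cosine distance across all word pairs: the scoring
    idea behind embedding-based divergence measures."""
    pairs = list(itertools.combinations(words, 2))
    return sum(cosine_distance(EMBEDDINGS[a], EMBEDDINGS[b])
               for a, b in pairs) / len(pairs)

# A semantically tight word set scores lower than a semantically spread one.
tight = divergence_score(["cat", "dog"])
spread = divergence_score(["cat", "galaxy", "sonnet"])
```

The objectivity claimed in the text comes from exactly this move: "difference" between words becomes a fixed geometric quantity rather than a rater's judgment.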
The strength of divergent thinking as a predictor of real-world creative capacity is not an artifact of measurement convenience. Divergent thinking is defined as the process of generating a variety of solutions where responses are considered creative if they are both novel and appropriate (Gerwig et al., 2021); creative thinking is widely assessed with tests of divergent thinking, particularly the Alternative Uses Task, which has shown consistent evidence of validity with moderate to large correlations between performance and real-world creative achievement in the arts and sciences (Beaty et al., 2021). The Divergent Association Task also has strong validity correlations with the Alternative Uses Task and the Bridge-the-Associative-Gap Task (Olson et al., 2021). Originality — assessed either by subjective ratings or computational measures — is the strongest predictor of real-world creative achievements, including scientific innovation, artistic production, and entrepreneurial success, outperforming fluency, flexibility, and elaboration as predictive dimensions (Skurnik et al., 2025).
Creative thinking involves two components — generation of novelty via divergent thinking, and evaluation of that novelty via convergent thinking. Without the convergent phase, divergent production yields output that may be novel but not useful: surprising yet not meaningful (Cropley, 2006). Standard creativity measures inherently require both divergent and convergent thinking, and individual differences in creativity reflect variation in both capacities, not divergence alone (Cortes et al., 2019). The cycle is iterative: diverge to generate candidates, converge to evaluate the set and identify patterns, then diverge again from the new position — a rhythm that is not passive free association but an actively managed cognitive operation.
The engine driving this dual-process rhythm is executive function. Executive shifting — one of the three core executive functions alongside inhibition and updating — predicts successful performance on the Alternative Uses Task, establishing that divergent thinking requires strategic category switching rather than undirected semantic wandering (Nusbaum & Silvia, 2011). A network science approach extends this finding, showing that both semantic memory structure and executive control contribute to creative thought as complementary rather than redundant pathways; broad retrieval ability and fluid intelligence each independently predict divergent thinking, with the former reflecting the richness of the associative substrate and the latter reflecting the efficiency of the search process that traverses it (Benedek et al., 2017). The implication for what follows in this essay is direct: if the convergent-evaluation phase depends on executive function, and executive function is use-dependent — strengthened by exercise, weakened by neglect — then the habitual offloading of this phase to an external agent may quietly erode the very cognitive machinery that makes creative search productive.
- Olson et al. (2021). “Naming unrelated words predicts creativity.” Proceedings of the National Academy of Sciences.
- Beaty et al. (2021). “Automating creativity assessment with SemDis: An open platform for computing semantic distance.” Behavior Research Methods.
- Gerwig et al. (2021). “The Relationship between Intelligence and Divergent Thinking—A Meta-Analytic Update.” Journal of Intelligence.
- Skurnik et al. (2025). “Semantic memory and creative evaluation.” BMC Psychology.
- Cropley (2006). “In Praise of Convergent Thinking.” Creativity Research Journal.
- Cortes et al. (2019). “Re-examining prominent measures of divergent and convergent creativity.” Current Opinion in Behavioral Sciences.
- Nusbaum & Silvia (2011). “Are intelligence and creativity really so different? Fluid intelligence, executive processes, and strategy use in divergent thinking.” Intelligence.
- Benedek et al. (2017). “How semantic memory structure and intelligence contribute to creative thought: a network science approach.” Thinking & Reasoning.
The Dark Matter
Intrinsic Factors
The dual-process rhythm of divergence and convergence is shaped by intrinsic forces — the dark matter of one's mind — that predispose creative capacity. Five prerequisites feed the generative process: personality, motivation, sufficient material (breadth), domain expertise (depth), and unexpected connections ("creative skills"). Among the intrinsic factors that predispose divergent association, openness to experience is the personality trait most strongly correlated with creative capacity, yet the same loose associative processing that allows creative individuals to discover novel patterns also predisposes us to finding meaning in unrelated experiences. This may explain, in part, why humans' metacognitive estimation of their own work is unreliable: more creative individuals tend to underestimate the novelty of their work, while less creative individuals overestimate it. It is hypothesized that more creative individuals underestimate their creativity because it required relatively less effort for them to develop the solution, indicating that we associate our effort, not our output, with feeling creative.
The prerequisites synthesized in this essay — motivation, sufficient material, depth of domain understanding, and unexpected connections — represent a reorganization of constructs from Amabile, Csikszentmihalyi (1988), and Simonton into a sequence ordered by their role in the generative process: intrinsic motivation and creative skills (Amabile), the field and domain as resource environments (Csikszentmihalyi), and depth of expertise as the substrate for both generation and evaluation (Simonton).
While personality was not systematically measured in many of the field's frameworks, the evidence is overwhelming that personality plays a predisposing role in creative thought and creative achievement. The personality trait most consistently implicated in creative capacity is Openness to Experience, a Big Five dimension whose most central markers include "imaginative," "creative," and "original" (DeYoung, 2015). The neurocognitive evidence suggests the trait reflects a structural feature of associative processing: highly creative persons exhibit defocused attention and reduced latent inhibition, attending to stimuli that domain expertise would typically filter as irrelevant, because surprising ideas emerge precisely when something deemed irrelevant turns out to be highly relevant (Simonton, 2012). The neural substrate for this process involves the default mode network — medial prefrontal cortex, posterior cingulate, and lateral temporal regions — which collectively activates during memory retrieval, imagination, and mind wandering (Beaty et al., 2023). The evidence strongly suggests that one's Openness to Experience, a part of our innate personality, underlies many of the features observed in creative individuals.
The same loose associative processing that generates creative connections also generates false ones: magical ideation, schizotypy, apophenia (finding meaning in coincidences) and paranormal beliefs are all more prevalent in individuals high in Openness to Experience (Rominger et al., 2022), yet these individuals also produce a greater number of unusual words and show less inhibited spreading activation in semantic priming tasks (Mohr et al., 2001). Thus, the evidence suggests that creative and delusional pattern detection may share a common cognitive mechanism, with the balance between the two poles mediated by the brain's executive functioning.
Our evaluative filter is itself unreliable. Our self-evaluation influences our creative output, moderated by our perceived capacity for improvement (Silvia & Phillips, 2004). People tend to underestimate the creativity of their ideas, with the most creative individuals showing the most pronounced underestimation (Kaufman & Beghetto, 2013). The mechanism appears to involve what has been called the fluency heuristic — in richly connected semantic networks, novel associations are retrieved with relative ease, and this ease of retrieval is mistaken for ordinariness; the creator judges the idea as unremarkable precisely because it arrived without the subjective experience of effort. Conversely, less creative individuals, whose sparser networks require more laborious traversal to produce even moderately novel associations, experience the process as difficult and interpret that difficulty as evidence of originality. The accuracy of our confidence judgments varies systematically with task characteristics and individual differences: metacognitive calibration errors are the norm rather than the exception (Steyvers et al., 2025).
The introduction of generative AI into this already-miscalibrated system produces a compounding effect. While AI can improve task performance, it simultaneously degrades the accuracy of self-assessment; higher AI literacy is paradoxically associated with less accurate self-evaluation, suggesting that fluency with the tool breeds a false sense of evaluative competence (Fernandes et al., 2026). Combined with the baseline creative self-evaluation asymmetry — creative individuals undervaluing their own output, less creative individuals overvaluing theirs — the risks compound: creative individuals who already undervalue their ideas tend to accept AI output's superficially superior appearance at face value, and the deferral is experienced not as capitulation but as good judgment (Skurnik et al., 2025; Vicente & Matute, 2023).
- DeYoung (2015). “Openness/intellect: A dimension of personality reflecting cognitive exploration.” APA Handbook of Personality and Social Psychology.
- Simonton (2012). “Taking the U.S. Patent Office Criteria Seriously: A Quantitative Three-Criterion Creativity Definition and Its Implications.” Creativity Research Journal.
- Beaty et al. (2023). “Associative thinking at the core of creativity.” Trends in Cognitive Sciences.
- Mohr et al. (2001). “Loose but Normal: A Semantic Association Study.” Journal of Psycholinguistic Research.
- Rominger et al. (2022). “Creative, yet not unique? Paranormal belief, but not self-rated creative ideation behavior is associated with a higher propensity to perceive unique meanings in randomness.” Heliyon.
- Silvia & Phillips (2004). “Self-Awareness, Self-Evaluation, and Creativity.” Personality and Social Psychology Bulletin.
- Kaufman & Beghetto (2013). “Creative metacognition and self-ratings of creative performance: A 4-C perspective.” Learning and Individual Differences.
- Steyvers et al. (2025). “Metacognition and Uncertainty Communication in Humans and Large Language Models.” Current Directions in Psychological Science.
- Fernandes et al. (2026). “AI makes you smarter but none the wiser: The disconnect between performance and metacognition.” Computers in Human Behavior.
- Skurnik et al. (2025). “Semantic memory and creative evaluation.” BMC Psychology.
- Vicente & Matute (2023). “Humans inherit artificial intelligence biases.” Scientific Reports.
The Constraints That Create
Extrinsic Factors
Rarely is creative thought free from constraint, and the most divergent thoughts, paradoxically, arise under imposed constraints — specifically semantic constraints of knowledge, of what exists and what is known, within which the creative mind can discover and define the unknown. Environmental constraints, like time pressure or budgetary pressure, can have both positive and negative effects on creativity, with too little pressure leaving the search space unbounded and too much pressure culling good ideas too soon. Generative AI finds its foothold here, as it is a tool in our external environment deployed under production conditions — producing work faster, aligned with the constraints of the task, yet measurably less divergent than human responses. This homogenization is detectable in short-form output, yet increasingly difficult to notice as documents grow longer, where prior paragraphs, established tone, and structural logic recursively impose additional constraints that require engaging more memory and executive functioning to understand.
The paradox of constraint has deep empirical roots. From Picasso's self-imposed color restrictions to Stravinsky's tonal constraints, creative breakthroughs emerge not from the removal of boundaries but from their deliberate imposition (Stokes, 2005). The mechanism is subtractive: constraints eliminate large regions of the solution space, forcing the creator to explore unfamiliar territory within what remains. Convergent constraints in creative tasks channel search processes productively — the walls of the maze are what make navigation generative rather than random (Cortes et al., 2019). The double edge appears when the constraints shift from semantic to environmental: an analysis of over 9,000 daily diary entries from knowledge workers finds that high time pressure without focused protection consistently suppresses creative thinking, a finding that held even when workers themselves believed they were being more creative under pressure (Amabile et al., 2002). The relationship is curvilinear: moderate time pressure can enhance creativity when individuals possess high openness to experience and receive supportive supervision, yet the relationship inverts beyond a threshold; the boundary conditions are precise and context-dependent, not amenable to the blanket intensification that AI-augmented production workflows tend to impose (Baer & Oldham, 2006).
The Darwinian logic of variation and retention — the framework positing creativity as the generation of novel associations followed by the selective retention of the most promising ones — has been formalized across independent research traditions. The dynamic componential model decomposes the creative process into stages with distinct cognitive dependencies: idea generation draws upon creativity-relevant processes and intrinsic motivation, while idea validation requires domain-specific skills for checking nascent associations against established criteria, and the number and novelty of ideas generated increases with stronger intrinsic motivation and more developed creative thinking skills (Amabile et al., 2016). The evaluative dimension finds an unexpected institutional parallel in the U.S. Patent Office's three requirements — novelty, usefulness, and non-obviousness — where non-obviousness corresponds to the surprise criterion in psychological definitions of creativity, a correspondence that emerged through entirely independent institutional reasoning and that underscores the robustness of the variation-retention framework across domains (Simonton, 2012).
Generative AI finds its foothold in the gap between these two constraint types. AI is overwhelmingly deployed under production conditions — in one field experiment, consultants completed tasks roughly 25% faster at measurably higher rated quality — and this deployment pattern determines whether AI enhances or degrades output quality (Dell'Acqua et al., 2023). The downstream consequence is measurable: among AI-adopting artists, creative productivity and artwork value significantly increased while novelty in both conceptual and visual features decreased over time, establishing that production-oriented deployment of AI systematically favors constraint satisfaction over divergent exploration (Zhou et al., 2024). The pattern is consistent: when AI is used to satisfy existing constraints more efficiently, it succeeds; when the task requires generating genuinely novel associations that violate or transcend those constraints, the architecture's bias toward statistical central tendency becomes a liability. Empirical comparisons of LLM-generated and human-generated free association networks confirm this: LLM semantic networks cluster more tightly around high-frequency connections, exhibiting less variance and fewer remote associations than those derived from human participants (Abramski et al., 2025) — a structural deficit that the user, operating under the very time pressure that AI was deployed to relieve, is poorly positioned to detect.
The detectability varies inversely with the scale of the output. The transformer architecture of LLMs — predicting a held-out token based on preceding tokens, with each generated token conditioning the probability distribution of subsequent tokens — progressively narrows the viable output space by design (Mahowald et al., 2024). Textual cohesion — the overlap of lexical and semantic content within a text — operates at multiple levels: lexical repetition, reference chains, and thematic continuity, to name a few (Skalicky et al., 2017). As documents grow longer, these legitimate cohesion constraints compound with the architectural constraints of next-token prediction, making AI's convergent bias increasingly indistinguishable from the coherence demands of the document itself. In a haiku, the constraint space is small and divergence is immediately visible. In a ten-page report, each paragraph constrains the next through prior framing, established argument, and evidence selection: when insufficient attention is dedicated to the task, we tend to accept the output as "correct" because it satisfies every "local" constraint, and our memory fails to detect the "global" omissions that would signal genuine novelty (Cai et al., 2024).
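The compounding of cohesion constraints can be made concrete with a crude lexical measure. This is a sketch under simplifying assumptions: real cohesion indices involve reference chains and semantic overlap, far richer than the content-word Jaccard overlap used here, and every name below is invented for illustration.

```python
def content_words(text):
    """Lowercased alphabetic tokens minus a tiny stopword list; a crude
    stand-in for the lexical-overlap indices used in cohesion research."""
    stopwords = {"the", "a", "of", "and", "is", "to", "in", "by"}
    tokens = "".join(c if c.isalpha() else " " for c in text.lower()).split()
    return {w for w in tokens if w not in stopwords}

def adjacent_overlap(paragraphs):
    """Jaccard overlap of content words between each pair of adjacent
    paragraphs; higher values mean tighter lexical cohesion."""
    scores = []
    for a, b in zip(paragraphs, paragraphs[1:]):
        wa, wb = content_words(a), content_words(b)
        scores.append(len(wa & wb) / len(wa | wb))
    return scores

doc = [
    "The model predicts the next token from preceding tokens.",
    "Each predicted token conditions the distribution of later tokens.",
    "Migratory birds navigate by starlight and magnetic fields.",
]
scores = adjacent_overlap(doc)
```

The first pair of paragraphs scores well above the abrupt topic shift in the second pair; in a long document, every paragraph the model emits raises this local-overlap pressure on the next one.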
- Stokes (2005). “Creativity from Constraints: The Psychology of Breakthrough.” Springer.
- Cortes et al. (2019). “Re-examining prominent measures of divergent and convergent creativity.” Current Opinion in Behavioral Sciences.
- Amabile et al. (2002). “Creativity under the gun.” Harvard Business Review, 80(8), 52–61.
- Baer & Oldham (2006). “The curvilinear relation between experienced creative time pressure and creativity.” Journal of Applied Psychology, 91(4), 963–970.
- Amabile et al. (2016). “The dynamic componential model of creativity and innovation in organizations: Making progress, making meaning.” Research in Organizational Behavior.
- Simonton (2012). “Taking the U.S. Patent Office Criteria Seriously: A Quantitative Three-Criterion Creativity Definition and Its Implications.” Creativity Research Journal.
- Dell'Acqua et al. (2023). “Navigating the Jagged Technological Frontier.” HBS Working Paper No. 24-013.
- Zhou et al. (2024). “Generative artificial intelligence, human creativity, and art.” PNAS Nexus.
- Abramski et al. (2025). “The LLM World of Words: English free association norms generated by large language models.” Scientific Data.
- Mahowald et al. (2024). “Dissociating language and thought in large language models.” Trends in Cognitive Sciences.
- Skalicky et al. (2017). “Identifying Creativity During Problem Solving Using Linguistic Features.” Creativity Research Journal.
- Cai et al. (2024). “Do large language models resemble humans in language use?” Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics.
The Fading Paths
What Fades and Why You Won't Notice
AI's errors have shifted from commission to omission: from hallucinations and false positives in 2024, to missing nuance, suppressed outliers, and false negatives in 2026. Human metacognitive monitoring is better calibrated to detect fabrication than absence, which means omissions accumulate insidiously. When AI handles the divergent search on the user's behalf, our evaluative skills may remain intact — we can still judge whether an output is "correct" — but our ability to produce divergent alternatives may atrophy from disuse. Furthermore, people reach for generative AI precisely in the domains where they know the least. With insufficient expertise to critically evaluate the output, many of us accept AI outputs by default, outputs that the evidence shows to be measurably less diverse.
In recent studies in healthcare, omissions are the most frequent error type in AI-generated clinical summaries, appearing in 25% of cases, compared with inaccuracies in 20% and outright hallucinations in only 2% (Grolleau et al., 2026). The cognitive basis for why this goes unnoticed comes from semantic illusion experiments, which find that people routinely fail to detect conspicuous errors when the erroneous word is semantically close to the correct one, suggesting that human comprehension monitoring relies on local semantic plausibility rather than global completeness (Cai et al., 2024). The evidence strongly indicates that AI has been optimized away from the error type humans are best equipped to catch — hallucination, which triggers the mismatch detectors of factual monitoring — and toward the error type humans are worst equipped to catch — omission, which requires the perceiver to notice the absence of something they may never have known to expect.
When AI handles the divergent search on the user's behalf, the dual-process cycle identified in Section II falters at its first step: the convergent evaluation may remain intact — the person can still judge whether an output is adequate — while the generative capacity to produce alternatives atrophies from disuse. Habitual AI cognitive offloading is inversely correlated with independent problem-solving capacity (Gerlich, 2025), and AI assistance can improve surface-level task performance while simultaneously accelerating the decay of the underlying skills — a dissociation between apparent competence and actual capability that the user does not perceive because the performance metrics look favorable (Macnamara et al., 2024).
Generative AI disproportionately benefits below-average performers — less-skilled workers saw the largest productivity gains, precisely because the AI could substitute for expertise they lacked (Brynjolfsson et al., 2023). While AI lowers the barrier to entry, it may come at a cost: in another study, consultants who lacked the domain expertise to recognize which tasks fell outside the frontier suffered the worst outcomes — not because they used AI recklessly, but because they could not distinguish between the terrain where AI excels and the terrain where it misleads (Dell'Acqua et al., 2023). Expert-level evaluation requires thousands of hours of deliberate, domain-specific practice, a finding that holds across chess, music, medicine, and other domains (Ericsson et al., 1993); a user without this accumulated expertise cannot distinguish between an AI response that is genuinely novel and one that is merely fluent. The combination creates a feedback loop: users gravitate toward AI in domains where they lack competence, which is exactly where they lack the evaluative capacity to detect AI's convergent tendencies, and the more proficient they become at prompting and integrating AI output, the less aware they become that their independent evaluative and generative capacities are quietly eroding.
- Grolleau et al. (2026). “Safety and Utility of an Agentic Large Language Model-Based Hospital Course Summarizer.” medRxiv.
- Cai et al. (2024). “Do large language models resemble humans in language use?” Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics.
- Gerlich (2025). “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies.
- Macnamara et al. (2024). “Does using AI assistance accelerate skill decay?” Cognitive Research, 9, 40.
- Brynjolfsson et al. (2023). “Generative AI at Work.” NBER Working Paper No. 31161.
- Dell'Acqua et al. (2023). “Navigating the Jagged Technological Frontier.” HBS Working Paper No. 24-013.
- Ericsson et al. (1993). “The role of deliberate practice in the acquisition of expert performance.” Psychological Review, 100(3), 363–406.
The Homogenization Engine
How LLMs Reshape the Soup
As our language faculties and fluency shape the structure and diversity of our thoughts, an increasingly mindless dependence on AI appears to be having the unintended side effect of eroding our syntactical abilities. The semantic network is not static but use-dependent — restructured by what one reads, hears, and processes — and when the dominant input shifts from the varied, idiosyncratic output of diverse human minds to the polished, statistically averaged output of generative AI, our network's associative pathways are pruned toward the center while our divergent periphery quietly atrophies. Passive exposure to AI-generated ideas narrows subsequent human ideation, and when AI systems are trained on AI-generated data, distributional tails are progressively lost in each generation, producing collapse toward the statistical mean. The process is self-reinforcing: as the network narrows, the individual's search trajectories become less divergent, producing output that is itself more convergent, which, when fed back into organizational knowledge bases or training corpora, further narrows the input environment from which the next generation of associations will be drawn.
Cognitive offloading — the externalization of cognitive processes into external aids — has been a concern since Socrates, and the dominant worry has been about storage: that external memory aids, like written text, substitute for internal memory, producing forgetfulness (Risko & Gilbert, 2016). The creativity-relevant mechanism is not storage, but search. GPS-dependent drivers develop weaker spatial representations and reduced hippocampal gray matter volume not because they lose knowledge of the city's layout but because they stop actively navigating it (Dahmani & Bohbot, 2020). Spatial memory depends on both egocentric (i.e. mapping the world relative to one's bodily location) and allocentric (i.e. mapping the world relative to objects around you) reference frames, and both processes involve structures in the medial temporal lobe, like the hippocampus, that are engaged by active navigation and disengaged by passive following (Abraham et al., 2015). The internet already functions as a primary transactive memory source, priming people to think about where to find information rather than encoding the information itself (Sparrow et al., 2011). While offloading saves internal cognitive resources, it reduces the internal demands that would otherwise strengthen those capacities through use (Grinschgl et al., 2020). Since it is the search — the active, effortful traversal of associative pathways — and not the storage of their contents that underlies divergent association, offloading search in the early stages of creative work may atrophy those associative pathways in the long run.
The structural prerequisites for divergent thought are not merely cognitive dispositions but measurable features of an individual's semantic network. Network science methods reveal that the semantic networks of highly creative individuals differ structurally from those of less creative individuals: they exhibit shorter path lengths between nodes, lower clustering coefficients, and stronger small-world properties — an architecture that facilitates the traversal of remote associations by ensuring that any given concept can reach any other through fewer intermediate steps (Kenett et al., 2014). Percolation analysis extends this finding, showing that creative individuals' networks remain connected under greater degradation — their associative pathways are more robust and redundant, capable of sustaining creative traversal even when individual concepts are removed in simulation (Kenett et al., 2018). This network structure contributes to creative thought independently of intelligence — a critical finding for what follows, because it means that changes to the network's topology alter creative capacity even when executive function remains intact, establishing that the substrate through which search operates is itself a variable, not merely a passive medium through which cognitive ability expresses itself (Benedek et al., 2017).
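The path-length claim can be illustrated with a toy graph experiment using only the standard library: a ring lattice versus the same ring with a few long-range edges added, the qualitative signature of a small-world network. The graphs and function names here are invented for illustration, not drawn from the cited studies.

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean breadth-first distance over all ordered node pairs of a
    connected, undirected graph given as {node: set(neighbours)}."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(adj) - 1
    return total / pairs

def ring(n):
    """Ring lattice: each concept connects only to its two neighbours."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

lattice = ring(12)

# Add a few long-range "remote associations": the small-world signature.
small_world = ring(12)
for a, b in [(0, 6), (2, 9), (4, 11)]:
    small_world[a].add(b)
    small_world[b].add(a)
```

Three extra edges are enough to cut the average distance between concepts, which is exactly why a flexibly wired semantic network makes remote associations cheaper to reach.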
The dynamics of search within this network follow a pattern borrowed from behavioral ecology. Memory search follows foraging dynamics: individuals exploit local associative patches — clusters of semantically related concepts — until the local yield is depleted, then make costly transitions to more distant patches, where the process repeats (Hills et al., 2012). The structure of the semantic network determines where patches are, how rich they are, and how far transitions must travel; a network pruned toward the statistical center would offer fewer distant patches and shorter transition distances, producing less divergent search trajectories even if the search process itself remains intact. Direct evidence suggests the pruning is already underway: passive exposure to AI-generated ideas narrows subsequent human ideation, documenting the causal direction — less diverse input produces less diverse output, not because the individual's generative capacity is permanently impaired but because the search environment has been restructured to favor shorter, more central foraging paths (Ashkinaze et al., 2024).
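A minimal sketch of patch-based memory foraging, assuming a marginal-value-style leave rule and identical patches; `forage` and its parameters are invented for illustration, not taken from Hills et al.'s model.

```python
import random

def forage(patches, leave_threshold, budget, seed=0):
    """Toy memory search: exploit a patch until its yield decays below
    the leave threshold, then move on to the next patch."""
    rng = random.Random(seed)
    order = rng.sample(range(len(patches)), len(patches))
    retrieved = []
    for i in order:
        current_yield = patches[i]
        while current_yield >= leave_threshold and len(retrieved) < budget:
            retrieved.append(i)   # retrieve one item from patch i
            current_yield *= 0.5  # each retrieval depletes the local patch
        if len(retrieved) >= budget:
            break
    return retrieved

rich = [1.0] * 8    # many distinct associative patches to visit
pruned = [1.0] * 2  # a network pruned toward the statistical centre

# Same retrieval budget, same leave rule: the richer network yields
# visits to more distinct patches, i.e. more divergent trajectories.
diverse = len(set(forage(rich, 0.3, 12)))
narrow = len(set(forage(pruned, 0.3, 12)))
```

The point of the sketch is that the search policy never changes; only the patch landscape does, which is the sense in which a pruned network produces less divergent output even with intact search capacity.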
The evidence of this restructuring converges from multiple domains of cognitive offloading. Uncritical calculator use is associated with reduced mental computation ability — students who habitually reach for calculators lose the arithmetic fluency that would otherwise strengthen through practice (LaCour et al., 2019). The neuroanatomical consequence is visible in spatial cognition: GPS-dependent drivers develop smaller hippocampal gray matter volumes and reduced spatial memory performance, even after controlling for age and total navigation experience (Dahmani & Bohbot, 2020). When AI-generated content replaces human-generated content in medical datasets, the variability that carries diagnostic information is systematically suppressed — a finding whose implications extend directly to the creative domain, where variability is not noise to be smoothed but the raw material from which novel associations are drawn (He et al., 2026). A synthesis of recent evidence on AI's impact on academic writing documents that heavy reliance on AI writing tools erodes foundational writing skills, with users showing reduced syntactic independence and weakened metacognitive monitoring of their own prose (Frontiers in Education, 2025). The semantic network is not a fixed architecture but a use-dependent structure, continuously restructured by what one reads, hears, and processes; when the dominant input shifts from the varied, idiosyncratic output of diverse human minds to the polished, statistically central output of generative AI, the network's associative pathways are pruned toward the center and its divergent periphery quietly atrophies.
The process compounds through recursive feedback at multiple levels. Algorithmic monoculture — the widespread adoption of common foundation models — leads to correlated failures and outcome homogenization across systems, reducing the diversity of the information ecosystem from which both humans and future models draw (Bommasani et al., 2022). At the model level, the mechanism is mathematical: when AI systems are trained on AI-generated data, distributional tails are progressively lost in each generation, producing model collapse toward the statistical mean — a finding that establishes the inevitability of diversity loss under recursive self-training (Shumailov et al., 2024). The paradox at the human level is equally stark: while AI assistance makes individual outputs more creative by elevating below-average performers toward the mean, it simultaneously decreases collective diversity by pulling all performers toward the same center — a gain in individual quality purchased at the cost of population-level variance (Doshi & Hauser, 2024). The recursive loop operates at individual, organizational, and technological levels simultaneously, and the process is self-reinforcing: as the individual's network narrows, their search trajectories become less divergent, producing output that is itself more convergent, which, when fed back into organizational knowledge bases or AI training data, further narrows the input environment from which the next generation of associations will be drawn.
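The tail-loss mechanism behind model collapse can be seen in miniature without any neural network: repeatedly fit a distribution to samples drawn from the previous fit, and the estimated spread decays toward zero, because finite samples under-represent the tails and the maximum-likelihood variance estimate is biased low. A toy Gaussian version, with illustrative parameters:

```python
import random
import statistics

def recursive_fit(samples_per_gen, generations, seed=0):
    """Each generation fits a Gaussian to data sampled from the previous
    generation's fit. Finite sampling loses the tails, so the estimated
    spread shrinks across generations (a toy analogue of model collapse)."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    spreads = [sigma]
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(samples_per_gen)]
        mu = statistics.fmean(data)      # refit the "model" to its own output...
        sigma = statistics.pstdev(data)  # ...with the MLE spread, biased low
        spreads.append(sigma)
    return spreads

spreads = recursive_fit(samples_per_gen=20, generations=500)
assert spreads[-1] < spreads[0]  # the diversity of the synthetic data has collapsed
```

Nothing in the loop targets the mean; the contraction falls out of estimation alone, which is why Shumailov et al. describe diversity loss under recursive self-training as structural rather than incidental.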
- (2016). “Cognitive Offloading.” Trends in Cognitive Sciences.
- (2020). “Habitual use of GPS negatively impacts spatial memory.” Scientific Reports, 10, 6310.
- (2015). “Semantic memory as the root of imagination.” Frontiers in Psychology.
- (2011). “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science.
- (2020). “Interface and interaction design: How mobile touch devices foster cognitive offloading.” Computers in Human Behavior.
- (2014). “Investigating the structure of semantic networks in low and high creative persons.” Frontiers in Human Neuroscience.
- (2018). “Flexibility of thought in high creative individuals represented by percolation analysis.” Proceedings of the National Academy of Sciences.
- (2017). “How semantic memory structure and intelligence contribute to creative thought: a network science approach.” Thinking & Reasoning.
- (2012). “Optimal foraging in semantic memory.” Psychological Review.
- (2024). “How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas.” arXiv:2401.13481.
- (2019). “When calculators lie: A demonstration of uncritical calculator usage among college students.” PLOS ONE, 14(10), e0223736.
- (2026). “AI-generated data contamination erodes pathological variability and diagnostic reliability.” medRxiv.
- (2025). “The impact of generative AI on academic reading and writing: a synthesis of recent evidence (2023-2025).” Frontiers in Education.
- (2022). “On the Opportunities and Risks of Foundation Models.” arXiv:2108.07258.
- (2024). “AI models collapse when trained on recursively generated data.” Nature, 631, 755–759.
- (2024). “Generative AI enhances individual creativity but reduces the collective diversity of novel content.” Science Advances, 10(28), eadn5290.
The Rising Floor
Organizations and the Attention Economy
We are an innately creative species, and many of us (myself included) find generative AI liberating: it opens a world of creative possibilities that had previously been technically elusive. Yet, particularly among creatives, the amplified volume and velocity of thought-provoking, and often incendiary, content become an internalized pressure to be "more productive." Generative AI, always willing and available, makes this nagging desire harder to ignore, and recent evidence indicates that the resulting intensification is detrimental to wellbeing. The pressure to produce is contributing to the workslop accumulating in organizations and to the homogenization of content on creative platforms: a shift toward higher volume and lower novelty whose mechanisms the following paragraphs trace.
The individual-level mechanisms described in previous sections are compounded by a structural intensification fanned by AI itself. An eight-month ethnographic study documents that AI tools do not reduce employee workloads but, instead, create consistent work intensification through voluntary adoption — task expansion across role boundaries, blurred work-life boundaries as workers integrate tasks into previously protected breaks, and increased multitasking. This was found to produce a self-reinforcing cycle in which accelerated tasks raised speed expectations, wider task scope intensified work density, and the resulting cognitive fatigue, burnout, and decision-making impairment degraded the reflective conditions that creative thought requires (Ranganathan & Ye, 2026). The content-level manifestation is equally striking: major social media platforms, whose monetization programs actively reward volume over value, are flooded with AI-generated content at a scale that exceeds earlier forms of low-quality material, saturating the information environment from which both creators and evaluators draw their reference standards (Madsen et al., 2025). March (1991) provides the theoretical framework for the organizational consequence: organizations face a fundamental tension between exploration — the search for new possibilities — and exploitation — the refinement of existing competencies — and organizations that overweight exploitation achieve short-term productivity gains at the cost of long-term adaptive failure, a dynamic that AI's efficiency orientation systematically exacerbates.
The empirical signature of this dynamic is now visible at population scale. Among 53,000 artists, creative productivity and artwork value significantly increased with AI adoption, while adopters' artworks exhibited decreasing novelty over time in both conceptual and visual features — higher surface quality coexisting with lower divergence, the production-quality floor rising as the novelty ceiling compresses (Zhou et al., 2024). The five upstream mechanisms traced through this essay — AI's architectural bias toward statistical central tendency, self-selection of AI use into low-expertise domains, search offloading weakening associative pathways, evaluative erosion through signal compression, and organizational homogenization of creative substrate — each independently predict precisely this outcome: a shift toward higher volume and lower novelty, with the volume increase masking the novelty decrease because the metrics most organizations track — throughput, consistency, stakeholder satisfaction — register the former while remaining blind to the latter.
Our evaluative standards tend to shift toward the mean of recently encountered stimuli, a dynamic formalized in adaptation-level theory (Helson, 1964). When AI raises the quality floor uniformly, every draft polished, every report structurally sound, every proposal superficially plausible, our adaptation level rises correspondingly, and output that would once have registered as impressive now registers as merely adequate. Viewed another way, uniform surface quality compresses the signal-to-noise ratio on which evaluators depend, reducing the discriminability between genuinely novel work and competently average work (Steyvers et al., 2025). For managers, editors, and publishers, the task has shifted away from detecting flaws toward detecting absence, which requires the perceiver to notice what is not there against a uniformly polished surface. Yet this is precisely the faculty that the metacognitive asymmetry identified in Sections III and V threatens to erode.
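The discriminability claim can be phrased in signal-detection terms: as AI assistance raises the mean quality of average work toward that of genuinely novel work, the separation between the two distributions (d′) shrinks even if their spread is unchanged. The numbers below are hypothetical, chosen only to illustrate the direction of the effect.

```python
def d_prime(mu_novel, mu_average, sigma):
    """Signal-detection discriminability between two quality distributions
    with a shared standard deviation."""
    return (mu_novel - mu_average) / sigma

# Before uniform polishing: novel work stands clearly apart from average work.
raw = d_prime(mu_novel=8.0, mu_average=5.0, sigma=1.5)

# AI raises the floor: average output climbs toward the ceiling, so the means
# converge even though the spread of each distribution is unchanged.
polished = d_prime(mu_novel=8.5, mu_average=7.5, sigma=1.5)

assert polished < raw  # evaluators now face a harder discrimination task
```

The point of the sketch is that the evaluator's difficulty rises without any individual piece of work getting worse; only the separation between categories degrades.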
- (2026). “AI Doesn’t Reduce Work—It Intensifies It.” Harvard Business Review.
- (2025). “The 7Vs of AI Slop: A Typology of Generative Waste.” SSRN.
- (1991). “Exploration and Exploitation in Organizational Learning.” Organization Science, 2(1), 71–87.
- (2024). “Generative artificial intelligence, human creativity, and art.” PNAS Nexus.
- (1964). “Current trends and issues in adaptation-level theory.” American Psychologist, 19, 26–38.
- (2025). “Metacognition and Uncertainty Communication in Humans and Large Language Models.” Current Directions in Psychological Science.
Liberation Through Understanding
What To Do
The conditions that allow the creative process to flourish, and those that quietly erode it, have been the subject of this essay — and the answer returns us to the collision that started it. Nascent in the dark matter of one's mind, a solitary thought is impotent until it encounters another; generative AI can supply those encounters at scale, yet its output is, by design, the statistical transformation of all prior human expression: the well-worn path.
For individuals, the evidence points to two approaches. The first is antagonistic use: prompt the AI to reveal the convergent center, then deliberately diverge away from it. The triteness of an AI's response may not be apparent at first blush, however, so antagonistic use requires practicing restraint and training one's judgment. The second follows from the observation that generative AI boosts creativity in the moment yet exhibits less divergence in the long run: create divergent constraints within which AI finds associations. Instead of prompting AI linearly to write an essay, specify the outline yourself, with each section logically diverging around a central thesis, and rely on AI to interpolate between sections. This is the approach I took in writing this essay.
For those organizing human ingenuity at scale, the evidence from organizational psychology suggests that organizations themselves bound the space of ideation. Thus, the optimal implementation of AI would involve creating sufficiently divergent organizational constraints a priori within which AI-human collaboration can bridge the associative gap. This process might resemble the OKR framework, where constraints are percolated top-down, with individual contributors interleaving bottom-up considerations. In addition, organizational metacognition is imperative: leaders must be aware of the pressures under which their employees operate, for those pressures exacerbate counterproductive offloading — and only those who discover the jagged frontier through direct experience, not abstract briefing, adjust their reliance appropriately. The creative process is Darwinian, and its substrate is the varied, effortful, sometimes errant collision of ideas in a mind that is searching; to preserve that substrate in an age of generative AI is to understand that the friction is not the obstacle — it is the mechanism.
The prescriptive implications of the preceding analysis begin with a precise empirical characterization of what generative AI output actually provides. The evidence strongly indicates that AI compresses the tails of variation, producing fewer divergent responses in the long run. Yet, it is these very responses that, by the Darwinian logic described in Section IV, constitute the raw material from which selective retention can operate (Hubert et al., 2024). At the semantic level, AI-generated associations cluster more tightly around the statistical center than human associations, with less variance and fewer remote connections (Abramski et al., 2023). The output of generative AI is, by construction, a reflection of the statistical center of its training distribution — the aggregated, averaged, most-probable continuation of all prior human expression — and this is not a limitation to be engineered away but a structural feature of next-token prediction that can be repurposed as a tool. Antagonistic use treats AI output not as a draft to be refined but as a cartographic instrument that maps the convergent center with precision, marking the well-worn path so that the creator can deliberately diverge from it; the triteness of the response, once recognized, becomes the reference point from which genuine novelty is measured.
The cognitive science of learning provides the theoretical justification for why this reorientation matters. Learning conditions that feel difficult (spacing, interleaving, testing, variability) produce superior long-term retention and transfer compared to conditions that feel fluent, because effortful retrieval strengthens memory traces and builds more flexible knowledge representations (Bjork & Bjork, 2011). The desirable-difficulty principle predicts that the subjective ease of AI-assisted production is itself a warning sign, signaling that the cognitive operations which build independent capacity are being bypassed rather than exercised. Campbell (1960) provides the theoretical foundation from which this essay's entire argument descends: creative thought requires blind variation, the generation of unpredictable, unguided variations, followed by selective retention; eliminating the variation phase, as occurs when AI supplies the divergent candidates and the human merely selects among them, eliminates the substrate on which selection operates. Antagonistic use preserves the desirable difficulty of independent semantic search, the effortful traversal of one's own associative landscape, while leveraging AI's unique ability to characterize the convergent center against which that search can be calibrated. As one paradigm for human-AI collaboration, human creativity may be better served not by accepting the de facto output, but by specifying the outline a priori, establishing the semantic constraints within which AI interpolates: a division of labor that respects the respective strengths of human and machine.
At the organizational level, psychological safety — a shared belief that the team is safe for interpersonal risk-taking — is the precondition for failure-based learning; without it, employees suppress the type of experimentation and error-reporting that all innovation - including pressure-testing the limits of AI's capabilities - requires (Edmondson, 1999). Studies have found that only those individuals who discovered the jagged frontier through direct experience — encountering tasks where AI failed and learning to recognize the terrain — adjusted their reliance appropriately, while those merely briefed about the limitations continued to over-rely (Dell'Acqua et al., 2023). This implies that organizations must allocate deliberate runway for failure rather than relying on training alone, because the calibration of human judgment to AI's actual capabilities is an experiential process that cannot be transmitted through documentation or policy. The OKR-like framework described in this essay — divergent constraints percolated top-down, with individual contributors and management interleaving bottom-up considerations and goals — may create the structural conditions for creative AI use at scale by ensuring that the divergent, boundary-setting work remains human while the convergent, constraint-satisfying work is allocated to AI. Finally, organizational metacognition, the awareness of the pressures under which employees operate and the cognitive consequences of those pressures, is not a management luxury but a structural necessity in an environment where the default mode of AI deployment systematically erodes the exploratory capacity on which long-term innovation depends.
- (2024). “The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks.” Scientific Reports, 14, 3440.
- (2023). “Cognitive Network Science Reveals Bias in GPT-3, GPT-3.5 Turbo, and GPT-4.” Big Data and Cognitive Computing, 7(3), 124.
- (2011). “Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning.” Psychology and the Real World.
- (1960). “Blind variation and selective retention in creative thought as in other knowledge processes.” Psychological Review, 67(6), 380–400.
- (1999). “Psychological Safety and Learning Behavior in Work Teams.” Administrative Science Quarterly.
- (2023). “Navigating the Jagged Technological Frontier.” HBS Working Paper No. 24-013.
The Resolution of the Map
Limitations
Every map distorts the territory it represents, and this essay is a map. It attempts to extrapolate from one mechanism of creative cognition, divergent association, to derive lessons for human lifestyles and workplaces. Deriving general lessons from mechanistic explanations of human biology faces two fundamental problems: the loss of a nuanced understanding of how other variables interact with the mechanism to produce an outcome, and the assumption that the same mechanism operates in the same way in all people. An additional challenge specific to the cognitive science and psychology literature is the narrowness of the experimental conditions in which these cognitive mechanisms are inferred: typically undergraduate students enrolled in college psychology classes, who tend to be predominantly Caucasian and female. As a consequence, there is a known risk that the fundamental mechanism of action, divergent association as a predictor of creative thought and creative achievement, does not generalize across every culture and every population. The measurement instruments themselves, the Divergent Association Task, the Alternative Uses Task, and semantic distance scoring, are imperfect proxies for creativity, each capturing novelty more reliably than the full construct of creativity, and each carrying its own scoring biases: sample-dependence, cultural variability, and an upper limit to how well computational semantic distance tracks human judgments of originality. In attempting to generalize from the first principle of divergent association as a basis for creative thought, many levels were not acknowledged; for instance, the literature examining the effect of domain expertise or creative skills on creative thinking is vast, and, as it was tangential to the main argument of this review, it was not parsed thoroughly.
Several of this essay's central inferences — the GPS-to-semantic-search analogy, the shared neural wiring of creativity and apophenia, the aggregation of individual convergent bias into organizational homogenization — involve inferential leaps that extend beyond what any single source individually establishes. The volume-novelty tradeoff prediction on which the essay's prescriptive arguments rest assumes that current usage patterns persist; the essay's own prescription for antagonistic use, if adopted, could interrupt the predicted cycle — a reflexive limitation that is itself a form of optimism.
The first category of limitation concerns the instruments through which divergent association is measured. The Divergent Association Task — the primary measure linking semantic distance to creative capacity — has several acknowledged shortcomings: it measures originality with better face validity than appropriateness, and DAT scores may partly reflect constructs more related to divergence than creativity, such as overinclusive thinking or schizotypy (Olson et al., 2021). Participants may artificially modulate their scores by intentionally choosing rare words, drawing on environmental cues, or employing letter-based strategies, though the short time limit reduces this likelihood. The broader class of divergent thinking assessments faces the same structural concern: manual scoring is laborious, sample-dependent, and culturally variable — uses of objects vary in commonality across cultures and over time, making cross-cultural or longitudinal comparison unreliable (Olson et al., 2021). Semantic distance, the computational approach that underpins much of this essay's evidence on AI's convergent tendency, is a measure of novelty rather than a direct measure of creativity — it captures conceptual remoteness but lacks the usefulness criterion, meaning it is slightly more sensitive to novelty than to creativity as humans understand the term (Beaty & Johnson, 2021). A counterintuitive finding compounds this concern: among LLM-based embedding models, larger models did not consistently produce better semantic distance scores, and as a model's language understanding improved, the relationship between its semantic distance outputs and human-rated originality weakened — suggesting an upper limit to this proxy's effectiveness that has yet to be fully understood (Organisciak et al., 2023). 
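The semantic-distance proxy discussed above is simple to state computationally: embed each word, then average the pairwise cosine distances, reading a higher score as greater divergence. The sketch below uses tiny hand-made vectors (hypothetical; real scoring uses embedding models trained on large corpora), which also makes the limitation visible: the score rewards remoteness alone and encodes no notion of usefulness.

```python
from itertools import combinations
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1.0 - dot / norms

def dat_style_score(vectors):
    """Mean pairwise cosine distance over a word set: higher reads as
    more semantically divergent, in the spirit of DAT scoring."""
    pairs = list(combinations(vectors, 2))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

# Hand-made 3-d "embeddings" for illustration only.
related = [(0.9, 0.1, 0.0), (0.8, 0.2, 0.1), (0.85, 0.15, 0.05)]  # cat, dog, pet
remote = [(0.9, 0.1, 0.0), (0.0, 0.9, 0.2), (0.1, 0.0, 0.95)]     # cat, volcano, algebra

assert dat_style_score(remote) > dat_style_score(related)
```

A set of random rare words would score high on this metric while being creatively useless, which is exactly the gaming strategy and novelty-versus-creativity gap the surrounding paragraph describes.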
Relatively low creativity scores in some study samples may further limit the generalizability of findings linking semantic memory network properties to creative performance, and the potential lack of cross-task generalizability means that findings on laboratory tasks like the Alternative Uses Task may not extend to real-world creativity, which involves goal-directed problem-solving, collaboration, and iterative refinement using different cognitive processes (Skurnik et al., 2025).
The second category concerns the demographic and methodological scope of the evidence base. The cognitive science and psychology studies cited throughout this essay draw predominantly from samples of undergraduate university students — young, predominantly female, Western-educated, and in the case of AI-related studies, technologically literate and already familiar with generative AI tools. Fan et al. (2025) recruited 117 students of whom 70% were female, and their reliance on a single reading-and-writing task limits cross-task inference. Reza et al. (2025) found that requiring prior AI familiarity inadvertently capped their sample at age 34, excluding older adults and producing educational homogeneity. Lee et al. (2025) acknowledge a demographic skew toward younger, tech-savvy participants surveyed exclusively in English, with no multilingual or cross-cultural representation. Rominger et al. (2022) note reduced variance in their young-student sample for both paranormal beliefs and creative ideation, potentially constraining the observed associations between creativity and apophenia. Multiple studies are cross-sectional rather than longitudinal, meaning they capture a snapshot of AI's effects at one moment rather than tracking how those effects evolve — a significant constraint given that AI tools are constantly evolving and usage patterns are likely to shift (Lee et al., 2025; Fan et al., 2025). The controlled experimental conditions in which many effects are observed do not fully capture the complexities of real-world creative processes, where task type, organizational climate, domain expertise, and collaborative dynamics can influence both the direction and magnitude of creativity effects (McGuire et al., 2024). Several studies employ quasi-experimental designs or lack contemporaneous control groups, meaning their findings are descriptive associations rather than established causal relationships (Fernandes et al., 2026; Grolleau et al., 2026). When He et al. (2026) modeled AI-generated data contamination, they did so in a controlled, accelerated environment; real-world data contamination may occur more gradually and with different distributional consequences.
The third category is the essay's own inferential architecture — the scaffolding that holds the argument together across levels of analysis. Several meta-syntheses involve interpretive mappings that extend beyond what any individual source establishes. The GPS-to-semantic-search analogy (Section VI) is suggestive: GPS-dependent drivers develop weaker spatial representations, and by extension, AI-dependent writers may develop weaker associative pathways. Yet spatial navigation and semantic association, while both implicating the hippocampus, involve non-identical neural circuits — semantic association additionally recruits prefrontal and temporal regions — and the critical question of whether AI-assisted writing reduces active semantic search or merely redirects it toward evaluation and selection remains open. The shared-neural-wiring claim linking creativity and apophenia (Section III) asserts the mechanistic version of a correlation that most sources establish at the behavioral level; the dopaminergic evidence from de Manzano et al. (2010) is based on small samples, and the default mode network's broad involvement in many cognitive processes limits its specificity as evidence of a shared mechanism. The claim that individual convergent bias aggregates into organizational homogenization (Section VII) involves a further inferential step: organizations have always exerted convergent pressures — conformity, hierarchy, standardization — and AI may be accelerating an existing tendency rather than introducing a qualitatively new one. The mapping between established creativity models and the essay's prerequisites (Section I) is interpretive rather than axiomatic. The volume-novelty tradeoff prediction assumes current usage patterns persist; if antagonistic use is widely adopted, as recommended by this essay, the self-reinforcing cycle could be interrupted, and the prediction's unspecified timeframe makes it difficult to falsify.
In centering the argument on divergent association as the primary mechanism of creative thought, the essay necessarily underweights other well-established contributors — domain expertise, deliberate practice, motivational states, social network effects, and the interaction between convergent and divergent thinking — each of which has its own extensive literature that, while tangential to this review's central thesis, would complicate and in some cases qualify the conclusions drawn here.
- (2021). “Naming unrelated words predicts creativity.” Proceedings of the National Academy of Sciences.
- (2021). “Automating creativity assessment with SemDis: An open platform for computing semantic distance.” Behavior Research Methods.
- (2023). “Beyond semantic distance: Automated scoring of divergent thinking greatly improves with large language models.” Thinking Skills and Creativity.
- (2025). “Semantic memory and creative evaluation.” BMC Psychology.
- (2025). “Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance.” British Journal of Educational Technology.
- (2025). “Co-Writing with AI, on Human Terms.” ACM Computing Surveys.
- (2025). “The Impact of Generative AI on Critical Thinking.” CHI Conference on Human Factors in Computing Systems.
- (2022). “Creative, yet not unique? Paranormal belief, but not self-rated creative ideation behavior is associated with a higher propensity to perceive unique meanings in randomness.” Heliyon.
- (2026). “AI makes you smarter but none the wiser: The disconnect between performance and metacognition.” Computers in Human Behavior.
- (2024). “Establishing the importance of co-creation and self-efficacy in creative collaboration with artificial intelligence.” Scientific Reports.
- (2026). “Safety and Utility of an Agentic Large Language Model-Based Hospital Course Summarizer.” medRxiv.
- (2026). “AI-generated data contamination erodes pathological variability and diagnostic reliability.” medRxiv.
If this article sparks your questions, concerns, or interest, we'd love to hear from you. Drop us a note at research@phronos.org.