Ghazouani, M. (2026). "Foundational Structure Identity (FSI)." The Ilantic Journal.
The tragedy of our age, suggests Momen Ghazouani, is not that machines think but that humans have begun to doubt whether their thoughts were ever truly their own. This quiet crisis of intellectual confidence sits at the heart of a conceptual framework proposed in a recent position paper: Foundational Structure Identity (FSI), a system designed to distinguish genuine human cognitive contribution from the organizational assistance provided by artificial intelligence.
The question emerges from an increasingly common experience: a researcher develops a novel theoretical framework in her mind, uses an AI system to help articulate it clearly, and then stares at the polished result wondering, "Is this truly mine?" The words are the AI's. The structure owes something to the system's capabilities. But the insight, the conceptual architecture, the foundational idea: these originated in human thought. Yet there exists no mechanism to verify this distinction, no way to demonstrate that the human provided more than a prompt and the AI provided less than the substance. This ambiguity, Ghazouani argues, has consequences that extend far beyond questions of attribution into the psychological foundations of intellectual identity itself.
The Collaboration Paradox
Traditional tools for intellectual work, from the pen to the word processor, function as instruments that externalize what already exists in the user's mind. A spell-checker corrects errors; a citation manager organizes references. The relationship remains clear: the tool extends capability without contributing to foundational ideas. Generative AI systems represent a qualitative departure from this model. These systems don't merely execute instructions; they generate novel text based on statistical patterns learned from vast corpora of human writing. When a user interacts with such a system, the output emerges from both human input and machine generative processes in ways that blur the boundaries of authorship.
Current frameworks treat AI-generated text as a homogeneous category, either attributing authorship entirely to the human user or denying authorship claims altogether on grounds that substantial portions are machine-generated. This binary approach fails to capture the nuanced reality of human-AI collaboration, where the locus of intellectual contribution varies dramatically depending on the interaction's nature. In some cases, the AI generates novel conceptual content based on training data patterns, functioning as co-creator. In others, it serves primarily as an organizational tool, helping users structure ideas that originated entirely within their own cognitive processes.
The inability to distinguish between these modes carries consequences beyond legal and technical domains. Users who rely on AI to articulate genuinely original ideas often experience uncertainty about their own authorship, a phenomenon resembling impostor syndrome. They sense that the work isn't truly "theirs" despite having provided the foundational intellectual contribution. This uncertainty compounds in the absence of any technical mechanism to verify cognitive primacy, undermining intellectual confidence and threatening to erode the very concept of individual authorship.
A Framework for Cognitive Recognition
Foundational Structure Identity proposes a solution: embed verifiable markers within AI-generated text when the system determines that the user provided the foundational intellectual contribution and conceptual architecture, while the AI's role remained limited to organizational or expressive assistance. Unlike traditional watermarking systems designed to identify AI involvement, FSI aims to identify the locus of foundational intellectual contribution within collaborative human-AI interaction.
The framework envisions multiple contribution levels. At the highest tier of human foundational contribution, users provide detailed conceptual architecture, specific substantive content, and clear organizational structure, with AI functioning primarily as a linguistic assistant rendering ideas into polished prose. At lower levels, users provide only vague prompts, and AI generates substantive conceptual content largely from training data patterns. Between these extremes lie gradations reflecting different balances of human and machine contribution to the work's intellectual substance.
The technical implementation would require AI systems to analyze the relationship between user input and generated output, assessing factors such as the specificity and completeness of user-provided conceptual structure, the extent to which user input determines substantive content, the degree to which the AI introduces novel conceptual elements, and the overall balance between user-originated and AI-originated ideas. Based on this analysis, the system would assign a foundational contribution classification and embed this within the generated text through markers imperceptible to readers but verifiable through appropriate mechanisms.
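As a concrete illustration, the assessment described above could be sketched as a scoring function over the listed factors. Everything in this sketch, including the signal names, the weighting, and the tier thresholds, is a hypothetical assumption for illustration only; the paper proposes no implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: names, weights, and thresholds are illustrative
# assumptions, not part of Ghazouani's proposal.

@dataclass
class ContributionSignals:
    """Scores in [0, 1] for the factors the paper lists."""
    structure_specificity: float   # completeness of user-provided conceptual structure
    content_determination: float   # how far user input determines substantive content
    ai_novelty: float              # degree of novel conceptual content introduced by the AI

def classify(signals: ContributionSignals) -> str:
    """Collapse the factor scores into one foundational-contribution tier."""
    # Human-originated signals count for, AI-originated novelty against.
    balance = (signals.structure_specificity
               + signals.content_determination
               - signals.ai_novelty)  # range [-1, 2]
    if balance >= 1.2:
        return "HUMAN_FOUNDATIONAL"
    if balance >= 0.4:
        return "MIXED"
    return "AI_FOUNDATIONAL"

def attach_marker(text: str, tier: str) -> dict:
    """Attach the classification as out-of-band metadata; a deployed system
    would embed it imperceptibly in the text itself."""
    return {"text": text, "fsi_tier": tier}
```

For example, a user who supplies a detailed outline and most of the substantive content (high structure and determination scores, low AI novelty) would land in the highest tier, matching the paper's top tier of human foundational contribution.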
Ghazouani emphasizes that FSI is not designed to replace human judgment about authorship or provide definitive answers to "Who is the author?" Rather, it provides additional information supporting more informed human judgments about attribution. The presence of a high foundational contribution marker doesn't automatically confer authorship; it indicates that, based on the AI system's analysis, the user provided substantial foundational intellectual input.
Philosophical Underpinnings and Psychological Stakes
The framework rests on a particular philosophical position: intellectual authorship fundamentally concerns the origination of ideas and conceptual structures rather than the specific linguistic form expressing those ideas. This prioritization has historical precedent in Platonic and rationalist traditions emphasizing the realm of ideas over material instantiation, and in scientific practices assigning priority based on who first conceived an idea rather than who first articulated it eloquently.
Yet this position isn't philosophically uncontroversial. Alternative traditions, including certain strands of literary theory and philosophy of language, emphasize the inseparability of form and content, arguing that meaning is constituted through specific linguistic choices. From this perspective, AI assistance in organization and expression might represent a more significant intellectual contribution than FSI would acknowledge.
The psychological dimensions warrant particular attention. Impostor syndrome involves three core components: persistent belief that accomplishments are fraudulent or undeserved, fear of exposure as a fraud, and inability to internalize accomplishments or attribute them to one's own ability. AI-assisted work can trigger all three. When individuals use AI to structure or express ideas, they may internalize the AI's contribution as evidence the work isn't fully "theirs," even when foundational ideas originated entirely from their thinking. Without external validation of foundational contribution, they're left with only potentially biased self-assessment to counter these doubts.
Ghazouani describes this as "the silent erosion of intellectual confidence," measured not in what we fail to create but in what we create yet cannot claim. FSI could potentially disrupt this dynamic by providing external validation independent of individual self-assessment. If the AI system itself indicates that the user contributed foundational intellectual content, this validation may help individuals internalize their accomplishment and recognize genuine contribution.
Implications and Unresolved Tensions
The framework's implications extend across educational, professional, and societal domains. In education, FSI could help distinguish between legitimate learning support and problematic outsourcing of cognitive effort. A student using AI to organize genuinely developed ideas engages in fundamentally different activity than one using AI to generate uncomprehended conceptual content. Rather than prohibition-based approaches, FSI could enable nuanced policies ensuring AI use supports rather than replaces learning objectives.
In scholarly and professional contexts, FSI could reshape acknowledgment and attribution norms, enabling more precise indication of AI contribution's nature and extent while affirming human foundational roles. This increased precision could support new professional norms distinguishing between AI assistance modes, moving beyond binary choices of acknowledging AI use or not.
Yet significant limitations and unresolved questions remain. The most fundamental epistemological challenge concerns whether AI systems can reliably assess foundational intellectual contribution. The concept itself resists precise definition and may vary across disciplines and contexts. Moreover, the relationship between user input and AI output isn't straightforwardly causal in ways necessary for clear attribution. The AI interprets, fills gaps, makes inferences about intent, and generates content users didn't explicitly specify, making boundaries between user and AI contribution inherently fuzzy.
Technical challenges include making markers robust against text transformations like paraphrasing or translation while keeping them imperceptible. Verification raises questions about who should access these markers and under what circumstances. If broadly available, verification mechanisms risk enabling surveillance; if restricted, they might prevent legitimate scrutiny. The framework also faces adversarial robustness concerns: if FSI markers become valuable, strong incentives will emerge to manipulate or forge them.
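To make the forgery and access-control concerns concrete, one standard building block for tamper-evident markers is a keyed MAC: only parties holding the key can produce or verify a valid tag, which is one way verification could be restricted. This is a minimal sketch under that assumption, not part of the FSI proposal, and it also illustrates the robustness problem the paper flags: an exact-match tag breaks under any paraphrase or translation of the text.

```python
import hashlib
import hmac

# Illustrative sketch only: the paper specifies no marker format.
# A keyed MAC binds a contribution tier to a specific text; changing
# either invalidates the tag, and only key holders can verify.

def sign_marker(text: str, tier: str, key: bytes) -> str:
    """Produce a tag binding the FSI tier to this exact text."""
    msg = f"{tier}|{text}".encode("utf-8")
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_marker(text: str, tier: str, tag: str, key: bytes) -> bool:
    """Check the tag in constant time to resist timing attacks."""
    expected = sign_marker(text, tier, key)
    return hmac.compare_digest(expected, tag)
```

Note that even a one-character paraphrase yields a different tag, so a deployed scheme would need a marker tied to semantic rather than literal content, which is exactly the open robustness question.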
Ethical considerations include whether FSI might create new inequalities, disadvantaging users skilled at valuable thinking but less capable at articulating detailed conceptual architectures in forms AI systems recognize as "foundational." The framework might also reinforce problematic dichotomies between human and machine contribution, treating AI purely as tools rather than potentially collaborative partners in intellectual work.
An Invitation to Further Inquiry
Ghazouani presents FSI as a conceptual contribution to ongoing navigation of human-AI collaboration's complex terrain: a proposed direction rather than a completed solution. The framework identifies a dimension receiving insufficient attention in current discussions: the need to distinguish foundational from organizational contribution. Whether FSI in its proposed form represents the right approach remains to be determined through continued theoretical analysis, empirical research, and practical experimentation.
The broader significance lies not only in FSI's potential as a practical framework but in what it reveals about the challenges of human-AI collaboration in knowledge production. The ambiguity FSI seeks to address reflects deeper questions about the nature of creativity, the boundaries between individual and collective cognition, and the changing relationship between human and artificial intelligence. As AI systems become increasingly capable and ubiquitous, these questions will only intensify. The challenge, and the invitation, is to develop systems, practices, and norms that preserve what is valuable about human intellectual work while embracing AI's potential to extend and enhance human capability.
