Panel 2: Cognition, Models, and Meaning
-
Eight Arms, Zero Score: Octopus Cognition and the Anthropomorphic Limits of AI Evaluation.
By Claudio Rossi (Alma Mater Europaea)
Evaluation benchmarks for embodied AI are typically treated as neutral measurement instruments. This paper argues they are better understood as normative frameworks that encode contestable assumptions about the nature of intelligence itself. By examining widely used benchmarks in robotic manipulation research, I identify implicit commitments to centralized control architectures, rigid morphological precision, vision-dominant sensing, and efficiency-focused success metrics.
To surface these assumptions, I employ octopus cognition as a comparative analytical lens. The octopus represents an "independent experiment" in embodied intelligence—one that achieves sophisticated sensorimotor competence through distributed neural processing, compliant morphology, and multimodal chemical-tactile sensing. When current benchmarks are examined against this alternative cognitive architecture, their anthropomorphic commitments become visible: they presuppose a body organized like ours and a mind organized as we imagine ours to be.
This analysis draws on 4E cognition frameworks and morphological computation theory to develop a conceptual critique rather than an engineering proposal. The methodology is humanities-based—discourse analysis of benchmark documentation and evaluation criteria—not empirical testing of robotic systems.
The paper contributes to philosophy of mind debates about intelligence and embodiment by demonstrating how technical evaluation infrastructure operationalizes particular answers to contested philosophical questions. Current benchmarks implicitly privilege what classical cognitivism treats as "rational" behavior: explicit planning, centralized control, predictable task execution. Dimensions of embodied intelligence that fall outside this model—adaptive coupling, morphological dynamics, distributed processing—are not measured as alternative competencies but rendered invisible as noise. By surfacing these assumptions, the paper asks what evaluation criteria would look like if we began from a less anthropocentric, less rationalist model of mind.
-
Mitigating Hallucinations and Incoherent Reasoning in Large Language Models Using Neuro-Symbolic Knowledge Frameworks.
By Ibiyinka Temilola Ayorinde (Ibadan) and Oluseyi Ayodeji Oyedeji (Northampton)
In recent years, Large Language Models (LLMs) have driven major breakthroughs in Natural Language Processing, and the grammatical and contextual richness of LLM-generated text is highly impressive. However, a significant gap remains in semantic reliability: close analysis of LLM-generated texts often reveals questionable facts and logical inconsistencies, highlighting the instability of these models and the constant need for their refinement. This paper conceptualises LLMs as syntactically rational but semantically irrational, thereby revealing their limitations in expressing the grounded meaning that human language requires. A lightweight neuro-symbolic framework is used to conceptualise and test the integration of neural language modelling with ontology-based reasoning as a way to address this semantic irrationality. The framework combines a pre-trained LLM, Google's Flan-T5, with WordNet and a small domain ontology; ontology rules, enforced by a reasoner, constrain text generation. The setup is applied in two experimental contexts, question answering and summarisation, and a baseline LLM is compared with its neuro-symbolic variant. The results show that integrating the symbolic component reduces hallucinations, ontological inconsistencies, and contradictions without compromising grammatical richness. Qualitative analysis also indicates clearer reasoning and justification, thereby improving trust and ethical transparency. In conclusion, the paper argues that semantic irrationality is not merely a technical issue to be solved by upgrading LLMs, but one that requires integrating the natural blend of logic, intuition, and bias that characterises human thought. Neuro-symbolic integration therefore serves as a bridge between neural language modelling and natural human cognition.
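The abstract does not report implementation details, but the generate-then-check pattern it describes can be illustrated with a minimal, hypothetical Python sketch. Everything below is an illustrative assumption rather than the authors' code: the model checkpoint (google/flan-t5-small), the helper names (satisfies, constrained_answer), and the use of a WordNet hypernym lookup as a stand-in for the domain ontology and reasoner.

# Minimal sketch of symbolic filtering over neural generation (assumed design,
# not the authors' implementation): sample several Flan-T5 candidates and keep
# only one whose key term satisfies a WordNet-backed ontology constraint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

def satisfies(term, category):
    # Symbolic check: is `term` a hyponym of `category` in WordNet?
    target = wn.synset(category)
    return any(target in syn.closure(lambda s: s.hypernyms())
               for syn in wn.synsets(term, pos=wn.NOUN))

def constrained_answer(question, category, n_candidates=5):
    # Neural step: sample several candidate answers from the LLM.
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(**inputs, do_sample=True,
                             num_return_sequences=n_candidates,
                             max_new_tokens=16)
    # Symbolic step: return the first candidate that passes the ontology rule.
    for answer in tokenizer.batch_decode(outputs, skip_special_tokens=True):
        words = answer.strip(" .").lower().split()
        if words and satisfies(words[-1], category):
            return answer
    return None  # every candidate failed the constraint: abstain rather than hallucinate

print(constrained_answer("Name one animal that lives in the ocean.", "animal.n.01"))

A full system of the kind the abstract describes would presumably replace the WordNet lookup with rules from the domain ontology evaluated by a reasoner, and apply the same accept-or-abstain filter to summarisation outputs as well as question answering.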
-
Stochastic Parrots as World Makers.
By Leonardo Santa Maria (Civic AI)
A line of thought in contemporary philosophy of language holds that, since Large Language Models (LLMs) do not genuinely understand language, it is impossible for us to communicate with them (Bender & Koller, 2020; Mallory, 2023; Titus, 2024; Bottazzi Grifoni & Ferrario, 2025; Hattiangadi & Schoubye, 2025; Browning, forthcoming). The usual response is to argue that LLMs are, in fact, more than stochastic parrots (Grzankowski et al., 2025) and that they exhibit forms of semantic understanding (Lyre, 2024; Beckmann & Queloz, 2025). In this talk, I suggest that this debate distorts our understanding of LLMs and propose a different hermeneutical framing through which to conceptualize these technologies.
Following a suggestion by Fazi (2025), I explore LLMs as “worldmakers” – not because they understand or intend, but because they demonstrably reorganize the symbolic resources through which humans express themselves (Geng & Trotta, 2025; Yakura et al., 2024; Liang et al., 2025). Furthermore, their outputs influence what is taken as objective (An, 2025), stylistically normal (Alvero et al., 2024; Rama & Airoldi, 2025), or informationally salient (Gillespie, 2024), thereby affecting the background conditions of thought expression and social relations (Lepp & Alvero, 2025). This paper argues that such worldmaking effects are more consequential for the future of human expression than the familiar question of whether LLMs genuinely “mean” what they say. It is not sufficient to counter the pervasive anthropomorphizing of LLMs by appealing to their essence as stochastic parrots that understand nothing. Rather, it is pivotal that we recognize these stochastic parrots as worldmakers.