Abstracts
David Rosenthal
Consciousness, Theory, and Mental Appearance
Abstract: I contrast one-factor views of consciousness with two-factor views. Examples of one-factor views are the first-order theory of Fred Dretske, the first-order approach of Thomas Nagel, and Ned Block's conception of phenomenal consciousness. Examples of two-factor views are the higher-order theory of consciousness I have defended elsewhere and the global-workspace theories of Bernard Baars and of Stanislas Dehaene and Lionel Naccache.
I argue that one-factor views allow neither a useful explanation of consciousness nor even an informative description of what it is for a mental state to be conscious, and that because they construe consciousness as an intrinsic property of conscious states they are in effect strongly anti-theoretical. The appeal of one-factor views for some derives from their denial of a coherent contrast between the mental appearance and the mental reality of conscious states. But although some find that denial inviting, it rests on an unwarranted extrapolation from common sense and is theoretically indefensible. That denial is also responsible for the shortcomings of one-factor views.
Keywords: Consciousness; Higher-order theories; First-order theories; Global-workspace theory; Mental appearance; Utility of consciousness.
Susan Schneider
The Global Brain Argument
Abstract: In this talk I argue, based on current technological trends, that many humans will likely be nodes in one or more global superintelligent or “savant” systems (a “global brain”) within the next several decades, and that this intelligence, which outthinks any individual human, will likely not be conscious. I then take up several deep ethical, epistemological, and metaphysical challenges that this raises.
Arthur I. Miller
Machines that Make Art, Improvise Music and Write Film Scripts
Abstract: The boundary between humans and machines (the other) is blurring. Machines have already shown glimmers of creativity: mastering complex games such as Go, producing unpredictable styles of art, generating new forms of music and literature, and engaging in cutting-edge scientific research such as protein folding. The algorithm DeepDream enables machines to see what we cannot, while Generative Adversarial Networks (GANs) enable machines to dream, imagine, and begin to build an inner life. An exciting newcomer is Generative Pre-Trained Transformer 3 (GPT-3), the large language model capable of creating human-like text. Such systems may soon be able to deal with real-world situations by inventing their own algorithms. At present humans and machines collaborate, bootstrapping each other’s creativity. The next step will be when machines become end-to-end creative and become artists, musicians, and writers in their own right. We need to ask why creativity should be an attribute of humans alone, and we may well need to learn to appreciate art that we know has been created by a machine.
In the future this question will become moot when we merge with machines, which may well be the path to survival for the human race. Unlike humans, machines can look into the future, sense problems, and deal with them.
I will discuss these topics and more by presenting an overview of the creative process, together with my own theory of creativity as it applies to generative systems. This will enable us to discuss how machines can have the human characteristics of creativity, be creative like us, and go beyond us in the Age of Artificial Superintelligence. I have begun to explore these topics in The Artist in the Machine: The World of AI-Powered Creativity (MIT Press).
Keywords: Creativity; Algorithm; Large language model; Generative Adversarial Networks; DeepDream.
Lucia Santaella
Is Artificial Intelligence Intelligent?
Abstract: Today's discussion about machine intelligence oscillates between, on the one hand, those who do not hesitate to acknowledge their uncertainties and, on the other, those who deny machines, without hesitation, any kind of intelligence, even comparing AI to mindless mechanical typewriters. Between these two camps, that of the uncertain and that of the denialists, there are researchers in search of non-anthropocentric and therefore renewed definitions of intelligence. This presentation will defend the view that C. S. Peirce's philosophical and logical concept of intelligence can contribute to our notions of both human and non-human intelligence, and thus to rethinking intelligence along a spectrum that allows us to analyze the absence or presence of intelligence in machines and the reasons for either answer.
Gualtiero Piccinini
(co-authored with Stephen Hetherington)
Knowing That as Knowing How: A Neurocognitive Account
Abstract: We argue that knowing that is a form of knowing how. Specifically, knowing that P is knowing how to represent the fact that P, to ground such a representation in the fact that P, and to use such a representation to guide action with respect to P, together with exercising that know-how when needed. More precisely, neurocognitive systems control organisms by building internal models of their environments and using such models to guide action. Such internal models implicitly represent how things are. When agents’ implicit internal models are grounded in the fact that P and are usable for guiding action with respect to P, agents have implicit knowledge that P. When neurocognitive systems acquire the additional capacity to manipulate language, they also acquire the capacity to explicitly represent and express that the world is thus-and-so. When agents’ explicit internal models are appropriately grounded in the fact that P and are usable for guiding action with respect to P, agents have explicit knowledge that P. Thus, both implicit and explicit knowing that P are forms of knowing how to represent that P, to ground such a representation in P, and to use such a representation to guide action with respect to P, together with exercising that know-how when needed.
Keywords: Knowledge; Knowledge-how; Representation; Action; Intellectualism.
Pietro Perconti
Trusting robots
Abstract: Over the past decades, our everyday life has become crowded with an ever-increasing number of robots. It is no longer just a matter of facing the idea of an artificial intelligence, but of dealing with intelligent bodies engaged in routines of social cognition with humans. All this presents challenges that we have not had to address until now: ergonomic, cross-cultural, and moral issues.
In the background, however, there is a trust problem. Prevailing attitudes, at least in Western countries, sway between the desire to enslave robots and the fear that they will turn against us.
Unfortunately, neither attitude is productive from a social point of view. It seems desirable to have a kind of relationship in which cooperation is the basis. To have it, however, we need a proper idea of what it means to have trust or deference toward machines and robots. Basically, deference is a behavior, while trust is an intentional state. However, deference can also be associated with a mental state, which we could call a "deferential attitude." What it means to trust robots is a more investigated topic than the question of deference.
But what is deference? While in individual life deference is an attitude often associated with impotence and lack of enterprise, in social interactions it is an essential piece of living together on a constructive basis. Social deference, indeed, is not a device of submission but an intelligent strategy aimed at maximizing personal utility and building productive social relationships. It is the attitude that leads an individual to take advantage of someone else's greater competence in a certain area of knowledge, whether practical or theoretical.
A smart deferential attitude should be "selective": a type of deference that balances the fear of losing control against the benefits of the social distribution of intellectual work. Selective deference is an epistemic attitude that ranks (implicitly or explicitly) the kinds of social knowledge toward which one should be most deferential (those in which what the subject knows depends more or less largely on others) against those in which the subject instead has a direct commitment to the semantic content of what she is saying (or implying).
Keywords: Human-robot interaction; Trust; Deference; Social cognition.
Marcin Miłkowski
(co-authored with Juraj Hvorecky)
Theoretical virtues of cognitive extension
Abstract: The extended mind hypothesis has received considerable attention from philosophers over the years. However, some have remained fairly critical, stressing that, when compared to the claim that cognition requires the agent to be situated in its environment with its tools and devices, it offers no particular explanatory gain: the extended mind and situated cognition are explanatorily equivalent. On this view, the extended mind hypothesis is merely a metaphysical claim, perhaps with some ethical significance, since cognitive extensions should be treated as parts of one's body.
In my talk, I intend to analyze the equivalence claim by showing that while extended mind explanations are indeed equivalent to situated cognition explanations, general theories are not. By analyzing strategies in theorizing about cognition defended by Herbert A. Simon, I claim that the “unit of analysis” should track invariant generalizations if it is to be theoretically satisfying.
However, while in some cases extended mind theories could retain some explanatory gain, they seem restricted in their scope to individual cognition. Instead, as I will claim, distributed cognition offers a much more flexible theoretical framework, one that can be adapted to the various kinds of cognitive processes that rely on technology, which is best understood in terms of cognitive artifacts.
Keywords: extended mind; situated cognition; distributed cognition; cognitive artifact; theoretical virtues.
Daniel Everett
What kind of mind can have a language?
Abstract: Language is built on a semiotic basis. Communication is also a type of semiotic system. All entities in the universe, from stars to paramecia, communicate: they emit and interpret information from their environments via indexical and iconic signs of different types. But having a language requires not only the addition of symbolic signs (where a symbol has a general interpretation and is usually culturally established); a language also requires inference: abduction, deduction, and induction. Although all creatures can inductively and deductively interpret their environment to a limited degree, only humans regularly and necessarily engage in abduction. In human language, for example, we see induction in what is called "compositionality" and deduction in "parsing." But these are both incomplete. Only a mind capable of productive abduction is capable of symbolic signs and interpretations. I argue here that both abductive inference and symbols made their appearance more than 1.5 million years ago, with the brain and culture of Homo erectus.
Patrícia Gouveia
The Digital Playful River, a River Out of Eden.
How the internet shaped my planetary perception
Abstract: Starting from an autoethnographic perspective, merging the creation of the Digital River installation (Gouveia, Portugal, 1997) and the concept of the Playmode exhibition (Gouveia et al., Portugal and Brazil, 2019-2022), we will tell a personal story to inquire into the role of interaction technologies and the internet in shaping our artistic and cultural playful reality. The goal is to suggest that our perception relates to gaming technologies and that the internet plays an important role in defining us as global and planetary citizens. There was no precedent in human history for such dissemination of information and connectivity before the spread of networked playful technologies. That fact has made us consider the role of interaction in our lives and how it changes our physical and artificial environments. We move from a personal and political journey to a broader context, where the age of integrated arts and technologies merges with play to find possible ways to survive on a damaged planet. Feminist theories, dark ecology, and open possibilities promote dignity and care for future survival. Speculative thinking can encourage integrative views in which arts and sciences are key to generating alternative ways of dealing with fear and anxiety. Speculative feminism avoids grand narratives and certainties, emphasizes vulnerability and coexistence, and for that reason can be a tool to stimulate humility and respect among humans and other species. Play and gaming can integrate women's studies to generate convergent and sustainable futures. Speculative arts-based research deals with processes instead of objects, with the aim of instigating resistance against modern delusions.
Keywords: Ecofeminism; Technofeminism; Internet studies; Play; Gaming.
Alessio Plebe
On meaning in machines, once again
Abstract: A central assumption in cognitive science is that thought is a form of computation. If this is the case, then even a computer, equipped with an appropriate program, could think: it could reason over internal representations bearing meaning about the world. That things could really be like this was already believed by Alan Turing, who even proposed a wonderful method to verify it. The possibility of semantics for machines stimulated a lively debate in the 1980s and 1990s. Prominent among the champions of the front that radically denied this possibility was Hubert Dreyfus, while the contrary argument most admired for its persuasive power was John Searle’s Chinese room. By the beginning of this millennium the debate had faded away: after fifty years, AI had produced nothing that came close to human semantic capabilities, so there was little use or interest in continuing to philosophise about it. In the last decade the situation has rapidly and unexpectedly reversed: AI, powered by deep learning, has sharply improved, to the point of approaching human performance in several complex cognitive tasks. Lately, there has been a resurgence of criticisms of AI centred on the impossibility for machines to have meaning, criticisms whose intensity seems to amplify as AI progresses, especially in the domain of natural language. Some works seem driven by a moral intent to uncover a fraud: the deception that machines are capable of understanding.
A scrutiny of this critical literature reveals two different categories. One collects radical positions, according to which machines cannot understand and cannot have meaningful representations simply because they are machines. The arguments put forward in support are the same as those used in the last century, without any substantial advancement in this respect. A second category grants machines the possibility of meaning, but holds that it is precluded for deep learning. Notable examples in this category are Gary Marcus and Brenden Lake, who argue that for a system to reach intelligence it must make explicit several basic cognitive frameworks, such as intuitive physics, intuitive psychology, and principles of compositionality. Even if it is true that in deep learning models such competences are not as explicit as in symbolic AI systems, the argument seems to commit the fallacy of denying the existence of competences because they are not directly observable in human-readable terms.
Keywords: Semantics; Machine intelligence; Deep learning.
Paulo Alexandre e Castro
What Neurohacking can tell us about the mind.
Contributions to a (new) theory of mind
Abstract: The possibilities opened up by new brain technologies do not concern only medicine and its treatments. Among them, neurocrime, and specifically neurohacking, is one of the darkest. However, from the point of view of philosophical reflection, such a crime provides an understanding not only of the human mind but also of its myths, in particular the myth of mind uploading.
Taking these references into account, we will try to provide a perspective on the concept of neurocrime and its extension, in order to draw out philosophically its consequences for a theory of mind. Such a conception will inevitably have to consider two items: the mind, considered as what is hacked in neurohacking, and the technology (lato sensu) used in our daily lives, such as artificial intelligence (among others). The perspective to be presented (as AEMt) will seek to unify some proposals in the philosophy of mind into a (possible) single theory.
Keywords: Neurocrime; Neurohackers; Mind Uploading; AEMt.
Ania Malinowska
Hypnotic AI. On the Artificial Unconscious
Abstract: This talk probes the idea of the artificial based on AI’s responses to hypnosis. It taps into the problem of robot cognition as a condition distinct from human categories of thinking and feeling. It specifically focuses on the robot’s phenomenological and cognitive idiosyncrasy, following the assumption that robots may show some “sentient”, “mental” and “cognitive” autonomy on levels we have not yet accessed, which studies of consciousness and quantum physics refer to as “latent” or “post-material”. “Hypnotic AI” challenges the anthropomorphizing trends in cognitive and affective computing. It also further explores the robot’s object orientation, an idea that emerged from various strands of speculative realism and new-phenomenological thinking preoccupied with the object’s perspective. What we examine and represent here is the hypnotic potential of a robot, understood both as a manifestation of some different, invariably artificial, mental reality and as a hypnotizing quality in itself (something that may put us in awe).
Maya Kóvskaya
Approaching Other-Than-Human Minds through Ordinary Language Practices,
Multispecies Relational Pragmatics, and Ecosemiotic Embodied Meanings
Abstract: Recent technological advances in AI, robotics, and machine learning have made possible new inroads into the study of the minds of other-than-human beings through language and communicative practices, overturning centuries of misguided, ideologically axiomatic Human Exceptionalist, Anthroposupremacist assumptions that only humans possess minds, agency, culture, and language. These technologies are being developed in a number of noteworthy multidisciplinary research programs, departing from the methods and tools used in early experiments with ape language learning, which focused on teaching various apes American Sign Language and English (using wordboards) in mostly laboratory, clinical, or experimental contexts, including Project Nim, Project Washoe, Project Koko, and Project Kanzi, as well as Irene Pepperberg's work with Alex the African Grey Parrot. Taking these language-teaching attempts out of the lab and doing ethological linguistic research rooted in the lived contexts of everyday life, a citizen science movement, as well as a study at UC San Diego, is underway, documenting how household companion animals, such as cats, dogs, horses, and other critters, can learn dozens of English words to issue commands and requests, and even to create novel sentences and ask questions of their humans, using Augmentative and Alternative Communication speech buttons. But while it is becoming clear that even seemingly ordinary companion animals of average intelligence can master and effectively use a surprising array of English (and other languages) to communicate, we humans lag far behind these non-human animals in our ability to speak to other beings in their own languages, or even to comprehend them at all.
AI and machine learning are offering improved tools for us to start doing research in this new direction. The ongoing Elephant Listening Project uses infrasound technologies to gather previously inaudible bioacoustic data on elephant conversations for its growing Elephant Dictionary. Projects such as Ocean Literacy engage in orca dialect and cultural research. Chirovox: A Public Library of Bat Calls is an open-access, collaborative endeavor to create an extensive database of bat calls from across the world; syntax, as well as some language-use patterns, has already been established. Project CETI is gathering sperm whale bioacoustic data to analyze for grammar and vocabulary, with the goal of talking back to sperm whales in their own languages. But while this rich body of exciting new technologically enabled scientific research is promising, certain assumptions about what constitutes language and how language and meaning work in actual conditions of usage need to be problematized and reframed. This talk argues that a multidisciplinary approach to the study of other-than-human minds through language, speech, performative practices, and semiosis is needed to refine and correct flawed assumptions about meaning and language that belong to an outdated positivist conception of language still dominant within the scientific community.
I advocate for an approach to other-than-human minds, cultures, and languages that draws on insights from the Philosophy of Other Minds in the context of Ordinary Language Philosophy and Speech Act Theory (Wittgenstein and Austin); studies in Cognitive Ethology and Animal Minds, Languages, and Cultures (Andrews, Bekoff, Safina); semiotic approaches to language, meaning-making, niche construction, and multispecies worldmaking (Peirce; Fuentes, Kissel, Petersen; Kohn; Kirksey, Van Dooren, Rose, et al.); Biosemiotics, Ecosemiotics, and Zoosemiotics (von Uexküll; Kull; Maran; Hoffmeyer; Hendlin, et al.); and Performativity and Practice Theory (Austin; Butler; Ahmed; Bourdieu; de Certeau). I formulate a synthesis of these approaches in terms I call "More-Than-Human Speech Acts," "Multispecies Ecoperformativity and Ecosemiosis," and "Multispecies Relational Pragmatics," and use this synthesis to suggest ways to correct, augment, and refine the scientific and AI work being done on other-than-human language and communication, by focusing linguistic data collection and interpretation on the ethnological pragmatic grounding of intersubjectively shared language-usage conventions and embodied semantic and semiotic meanings in contexts of shared, lived experience.