Hessam Akhlaghpour
Laboratory of Integrative Brain Function, Rockefeller University
Is life equipped with a universal computer? Surprising parallels between molecular biology and combinatory logic.
Mathematicians of the 1930s arrived at an insight that became a core foundation of computer science: a system based on very simple rules can be used to compute any calculable function. This insight has revolutionized technology but its full implications for the study of natural organisms remain largely unexplored. I first challenge the notion that our current models of natural computation (including biologically-plausible neural networks) are “universal”, i.e., able to compute any calculable function. I then argue for the existence of an undiscovered universal computation system, given its usefulness in survival and reproduction and given how easy it is to accidentally stumble upon simple rules that are sufficient for universality. Drawing from past work on synthetic RNA/DNA-computers, I argue that life’s universal computation system – if there is one – most likely operates at the level of polynucleotides. I then present an RNA-based model capable of universal computation while being simple enough to have evaded detection. My model highlights remarkable parallels between combinatory logic and what is already known to exist in cells. This work motivates us to search for nature’s universal computer at the molecular level and suggests some guiding details that may aid us in its discovery.
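For readers unfamiliar with combinatory logic, the sketch below illustrates the kind of minimal rewriting system the abstract alludes to: the textbook S and K combinators, reduced by two local rules (K x y → x and S x y z → x z (y z)), which are already sufficient for universal computation. It is only a generic illustration of the SK calculus, not the speaker’s RNA-based model.

```python
# Textbook SK combinatory logic: two symbols, two rewrite rules, universal computation.
# Terms are encoded as nested tuples (f, arg), i.e. left-associated application.
def reduce_once(term):
    """Perform one leftmost reduction step; return None if the term is in normal form."""
    if isinstance(term, tuple):
        f, arg = term
        # K x y -> x
        if isinstance(f, tuple) and f[0] == 'K':
            return f[1]
        # S x y z -> x z (y z)
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y, z = f[0][1], f[1], arg
            return ((x, z), (y, z))
        red_f = reduce_once(f)
        if red_f is not None:
            return (red_f, arg)
        red_arg = reduce_once(arg)
        if red_arg is not None:
            return (f, red_arg)
    return None

def normalize(term, limit=1000):
    """Reduce repeatedly until a normal form is reached (or the step limit is hit)."""
    for _ in range(limit):
        nxt = reduce_once(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form within limit")

# Example: I = S K K behaves as the identity, so (S K K) applied to 'a' reduces to 'a'.
I = (('S', 'K'), 'K')
print(normalize((I, 'a')))   # prints: a
```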
I am a Postdoctoral Associate in Gaby Maimon’s lab at Rockefeller University. My research interests revolve around the computational foundations of cognition and intelligence in biological systems, spanning both theoretical molecular biology and experimental/behavioral neuroscience.
During my Ph.D. in Ilana Witten’s lab at Princeton University, I studied the striatum’s role in spatial working memory using in vivo extracellular neural recordings in behaving rats. I then became interested in studying path integration in Drosophila melanogaster, in which a computationally tractable behavior is combined with a cellularly/molecularly tractable organism. In my postdoctoral work I developed a new head-fixed path-constrained navigation assay for flies. Inspired by molecular engram theories, I developed a set of theoretical models that mimic combinatory logic using RNA, demonstrating that a handful of simple editing rules operating on RNA molecules is sufficient to compute any computable function. In my future work, I plan to use my head-fixed behavioral assay in flies to search for molecular correlates of memory, and to experimentally test some of the predictions from my theoretical models.
Frantisek Baluska
Institute of Cellular and Molecular Botany, University of Bonn
Cellular Basis of Sentience and Cognition
Cellular life on Earth started some four billion years ago, taking nearly two billion years to evolve from protocells to the first prokaryotic cells, including ancient archaea and bacteria that resemble their modern counterparts within those cellular domains. More complex eukaryotic cells evolved via symbiotic interactions of prokaryotic cells, representing multicellular, chimera-like ‘cells within a cell’ consortia. The first eukaryotic cells emerged some 2–1.5 billion years ago, which implies that it took nearly two billion years to get from prokaryotic to eukaryotic cells. For some two billion years, all cells lived as independent organisms. Our cellular basis of consciousness (CBC) model states that all living cells utilize cellular sentience to survive and evolve. We argue that the prolonged timeline to evolve eukaryotic cells from prokaryotic cells was necessitated by the complex level of evolutionary novelty required to assemble unitary consciousness from several formerly independent prokaryotic versions of cellular consciousness, as successive orders of cognition. Cellular cognition and sentience enabled multicellularity and permitted its successful continuous evolution toward the higher level of cohesive cellular complexity exhibited in multicellular organisms, with symbiotic fungal-tree root networks representing one of its most extensively integrated forms.
František Baluška was Group Leader at the IZMB, University of Bonn. He is one of the leading scientists in the fields of cell biology and evolution, the cytoskeleton, cell polarity and plant sensory biology. He has published more than 200 peer-reviewed papers; Web of Science lists 283 entries with an H-index of 63. In order to foster this new sensory and behavioural view of plants and their roots, he founded, and acts as editor-in-chief of, two scientific journals: Plant Signaling & Behavior and Communicative & Integrative Biology. He is editor of the book series ‘Signaling and Communication in Plants’ at Springer Verlag.
Anna Ciaunica
Centre for Philosophy of Science, University of Lisbon
From Cells to Selves: Coupling Neuronal and Immune Cellular Processing in Biological Self-Organising Systems
Significant efforts have been made in the past decades to understand how mental and cognitive processes are underpinned by neural mechanisms in the human brain and biological systems. Here I argue that a promising way forward in understanding the nature of human cognition is to zoom out from the prevailing picture focusing on its neural basis, and to consider instead how neurons work in tandem with other types of cells (e.g. immune cells) to subserve biological self-organisation and adaptive behaviour of the human organism as a whole. I focus specifically on immune cellular processing as a key actor complementing neuronal processing in achieving successful self-organisation and adaptation of the human body in an ever-changing environment. The focus on cellular, rather than neural (brain), processing underscores the idea that adaptive responses to fluctuations in the environment require a carefully crafted orchestration of multiple cellular and bodily systems at multiple organisational levels of the biological organism. Hence cognition can be seen as a multiscale web of dynamic information processing distributed across a vast array of complex cellular (e.g. neuronal, immune, and other) and network systems, operating across the entire body, and not just in the brain. Ultimately, this paper builds up towards two radical claims. First, cognition should not be confined to one system alone, namely the neural system in the brain, no matter how sophisticated the latter may be. Second, I outline the role of co-embodiment – i.e. human bodies and brains developing within another human body, in utero – as a key factor in scaling up cognitive processing across the lifespan. Overall, the aim is to show that, paradoxically, in order to understand what makes cognition uniquely human, we need to focus on what we have in common with other biological systems.
Dr Anna Ciaunica is a Principal Investigator at the Centre for Philosophy of Science, University of Lisbon, Portugal; and Research Associate at the Institute of Cognitive Neuroscience, University College London, the UK. Before that she was Research Associate at the Department of Clinical, Educational and Health Psychology, University College London; and postdoctoral researcher at the Department of Philosophy, University of Fribourg, Switzerland. She obtained her PhD from the University of Burgundy, Dijon, France.
Anna is currently PI on three interdisciplinary projects looking at the relationship between self-awareness, embodiment and social interactions in humans and artificial agents. Her approach is highly interdisciplinary, using methods from philosophy, experimental psychology, cognitive neuroscience, phenomenology and the arts. More recently, Anna has deepened the concept of minimal selfhood in utero, understood as developing through a process of co-embodiment and co-homeostasis. In addition to her numerous scientific papers, Anna is currently working on a book: ‘From Cells to Selves: the Co-Embodied Roots of Human Self-Consciousness’.
She is also coordinator of the Network for Embodied Consciousness, Technology and the Arts (NECTArs) – a collaborative platform bringing together artists, researchers, stakeholders, policy makers and people with lived experiences, aiming at fostering creative solutions to timely questions such as self-consciousness and (dis)embodiment in our hyper-digitalized and hyper-connected world.
David J. Colaço
Department of Philosophy, University of Munich, Germany
Learning from the History of Memory Transfer
If the mind is material, as most scientists accept, then memory has a physical basis. Imagine if we could extract this basis and move it somewhere else. This idea is memory transfer: the transfer of learned information from one organism to another via the transfer of biological material. Memory transfer was an active research target in the 1960s and 70s. While this research program disintegrated, its study has been revived in the past decade. In this talk, I will address several of the complications that researchers faced in the history of memory transfer, and I will pose and answer three questions. Does this phenomenon really happen? Is this phenomenon really memory? Why is the phenomenon so controversial? Together, my answers provide lessons for the revival of memory transfer research.
David Colaço is an Alexander von Humboldt Postdoctoral Research Fellow at LMU Munich, in the Munich Center for Mathematical Philosophy. He completed his PhD in History and Philosophy of Science at the University of Pittsburgh and received graduate certification from the Center for Neural Basis of Cognition. He studies scientific reasoning along with the history and philosophy of memory science.
Chris Fields
23 Rue des Lavandières, Caunes Minervois, France
Pushing the boundaries by dissolving them: A scale-free formalization of physical systems as hierarchical agents
Since its introduction by Karl Friston and colleagues in the mid-2000s, the variational Free Energy Principle (FEP) has provided a new and productive connection between physics and the biosciences. When reformulated in the language of quantum information theory (QIT), the FEP is revealed as a classical limit of the principle of unitarity, or conservation of information; it therefore applies to all physical systems at all scales. The language of QIT provides, moreover, a rich tool set of concepts and methods for representing communication between agents that comprise hierarchies of subsystems. These methods have been used to model basal cognition (10.1093/nc/niab013, 10.3390/e24060819), the structure and behavior of neurons (10.1016/j.biosystems.2022.104714, 10.1088/2634-4386/aca7de), and the control systems that enable organisms to switch between distinct behaviors (10.3390/e24050601, arxiv:2303.01514). The QIT formalism requires all measurements, including those required to identify objects and locate them in space and time, to be represented explicitly. This immediately raises the question of how the representation of objects in memory relates to the representation of space and time. Quantum gravity researchers have developed robust formal models of spacetime as an emergent error-correcting code. In an FEP context, such models recast spacetime as an external, i.e. stigmergic, memory resource accessible to any organism that can measure space and time as independent variables. Following a brief review of the topics mentioned above, I will focus on hierarchical models of inter-component communication processes that enable using spacetime as a data structure that “contains” identified objects. Such models generalize the well-known functions of mammalian hippocampus as a simultaneous processor of object identity and spacetime location.
Chris Fields works at the intersection of quantum information theory, evo-devo biology, and cognitive neuroscience. His primary interest is in how systems at all scales, from macromolecular complexes to communities and ecosystems, identify objects in their environments, represent state changes over time, and develop communicative conventions that enable cooperative and/or competitive activity. Fields has published over 200 refereed papers in nuclear physics, information physics, biophysics, cognitive science, information systems, neuroscience, and molecular, developmental, and evolutionary biology. A research summary and recent publications are available from https://chrisfieldsresearch.com. Dr. Fields has a PhD in Philosophy from the University of Colorado, Boulder, 1985.
Karl Friston
Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London
The Physics of Sentience
This presentation offers a heuristic proof (and simulations of a primordial soup) suggesting that life—or biological self-organization—is an inevitable and emergent property of any (weakly mixing) random dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if a system can be differentiated from its external milieu, heat bath or environment, then the system’s internal and external states must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states. This separation means that internal states will appear to minimize a free energy functional of blanket states – via a variational principle of stationary action. Crucially, this equips internal states with an information geometry, pertaining to probabilistic beliefs about something; namely external states. Interestingly, this free energy is the same quantity that is optimized in Bayesian inference and machine learning (where it is known as an evidence lower bound). In short, internal states (and their Markov blanket) will appear to model—and act on—their world to preserve their functional and structural integrity. This leads to a Bayesian mechanics, which can be neatly summarised as self-evidencing.
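For reference, the free energy referred to here can be written in its standard variational form (a textbook identity from variational inference, not a result specific to this abstract): for blanket/observed states $o$, external states $s$, a generative model $p(o,s)$ and an approximate posterior $q(s)$ encoded by internal states,

\[
F(o, q) \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o),
\]

so minimising $F$ drives $q(s)$ towards the posterior over external states while maximising a lower bound on the log evidence $\ln p(o)$; the negative of $F$ is the evidence lower bound (ELBO) familiar from machine learning.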
Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. Mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization for Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award and was elected a Fellow of the Royal Society in 2006. In 2008 he received a Medal from the Collège de France, and in 2011 an Honorary Doctorate from the University of York. He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology, and was elected a member of EMBO (excellence in the life sciences) in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and the Glass Brain Award, a lifetime achievement award in the field of human brain mapping. He holds Honorary Doctorates from the University of Zurich and Radboud University.
Jordi Garcia-Ojalvo
Department of Experimental and Health Sciences, Universitat Pompeu Fabra
Minimal determinants of cellular cognition
Living organisms must monitor the dynamics of their environment continuously, in order to adapt to present conditions and anticipate future changes. But anticipation requires processing temporal information, which in turn necessitates memory. In this talk I will discuss a possible mechanism through which cells can perform such dynamical information processing by leveraging the recurrent architecture of gene regulatory networks. I will also discuss the size requirements that recurrent networks must fulfill under realistic biological conditions, in order to provide such a dynamic representation of the world. Finally, I will argue that recurrence in biological networks enables not only short-term memory, but also long-term storage of information, through a phenomenon reminiscent of generalized chaos synchronization.
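As a hedged illustration of how recurrence alone can provide short-term memory, the sketch below uses a generic echo-state (reservoir-computing) construction: a fixed random recurrent network holds a fading trace of its input history, and a trained linear readout recovers a delayed copy of the input. This is a standard toy, not the speaker’s gene-regulatory-network model; the network size and parameters are arbitrary choices for the example.

```python
# Generic echo-state sketch: recurrence as short-term memory of an input stream.
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 5                        # reservoir size, time steps, recall delay

W_in = rng.normal(0, 1, size=N)                   # input weights
W = rng.normal(0, 1, size=(N, N)) / np.sqrt(N)    # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1 (fading memory)

u = rng.uniform(-1, 1, size=T)                    # random input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])              # recurrent state update
    states[t] = x

# Train a linear readout (ridge regression) to reproduce the input from 'delay' steps ago.
target = np.roll(u, delay)
X, y = states[delay:], target[delay:]
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
print("recall correlation:", np.corrcoef(X @ w_out, y)[0, 1])
```

The only memory in this system lives in the recurrent state x; the readout is memoryless, so a high recall correlation demonstrates that the recurrent architecture itself carries the temporal information.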
Prof. Garcia-Ojalvo obtained his PhD in statistical physics at the University of Barcelona in 1995. He did postdoctoral work at the Georgia Institute of Technology in Atlanta in 1996, working on laser dynamics, and at the Humboldt University of Berlin in 1998 as an Alexander von Humboldt Fellow, studying noise effects in excitable media and neuronal systems. In 2003 he was an IGERT Visiting Professor at Cornell University in Ithaca, New York, at which time he began working in the field of systems biology. In 2008 he became professor at the Universitat Politècnica de Catalunya, where he had taught applied physics since 1991. He joined the Universitat Pompeu Fabra (UPF) in 2012, where he currently leads the Laboratory of Dynamical Systems Biology at the Department of Medicine and Life Sciences. He has also been Visiting Research Associate in Biology at the California Institute of Technology since 2006.
Jeremy Gunawardena
Department of Systems Biology, Harvard Medical School
Cell learning: a new paradigm for biology?
The “selfish gene” has been the prevailing metaphor of modern biology. I will suggest that it is time to retire this gene-centric view in favour of a more functional perspective, of the “learning cell”. Cognitive science enables us to define learning as a form of information processing and directs our attention to the “internal models” that cells may be constructing of their environments. I will review what we already know about the molecular implementations of such models and how we may be able to systematically decode them in the future.
I am a pure mathematician by training. I received my PhD in algebraic topology from Trinity College, Cambridge. I was subsequently a Dickson Instructor at the University of Chicago and a Research Fellow at Trinity College. I spent several years in industrial research at Hewlett-Packard Research Labs, where I set up HP’s Basic Research Institute in the Mathematical Sciences (BRIMS). I returned to academic life in the Department of Systems Biology at Harvard Medical School, where my lab studies cellular information processing using a combination of experimental, mathematical and computational methods.
Marta Halina
Department of History and Philosophy of Science, University of Cambridge
Resource constraints and the evolution of cognition
The evolutionary history of animal cognition appears to involve a few major transitions: changes that opened up new phylogenetic possibilities for cognition. Here we review and contrast current transitional accounts of cognitive evolution. We discuss how an important feature of an evolutionary transition should be that it changes what is evolvable, so that the possible phenotypic spaces before and after a transition are different. We develop an account of cognitive evolution that focuses on how selection might act on the computational architecture of nervous systems. Selection for operational efficiency or robustness can drive changes in computational architecture that then make new types of cognition evolvable. Transitional accounts have value in that they allow a big-picture perspective of macroevolution by focusing on changes that have had major consequences. For cognitive evolution, however, we argue it is most useful to focus on evolutionary changes to the nervous system that changed what is evolvable, rather than to focus on specific cognitive capacities.
Marta Halina is University Associate Professor in the Department of History and Philosophy of Science at the University of Cambridge. Halina co-founded the Kinds of Intelligence program at the Leverhulme Centre for the Future of Intelligence, which draws on work in psychology, neurobiology, computer science, and philosophy to develop and critically assess notions of intelligence. In addition to her philosophical writings on animal minds, Halina has designed and run studies testing the social cognitive skills of nonhuman primates. Her recent publications include “Direct Human-AI Comparison in the Animal-AI Environment” (Frontiers in Psychology) and “Unlimited Associative Learning as a Null Hypothesis” (Philosophy of Science).
Martin M. Hanczyc
Department of Cellular, Computational, and Integrative Biology (CIBIO), University of Trento
Liquid droplets as simple dynamical systems to explore information and cognition
My work is focused on understanding the fundamental principles of living and evolving systems through experimental science. To this end, I build synthetic systems that behave in part like natural living organisms. I will present experimental models of synthetic biology based on simple active-matter chemical droplets. This system has the ability to sense the environment, leading to an investigation of the embodiment of information and perhaps cognition.
Martin Hanczyc is an Associate Professor in the Department of Computational and Integrative Biology at the University of Trento, Italy, and a Research Professor of Chemical and Biological Engineering at the University of New Mexico, Albuquerque, USA. He was formerly an Honorary Senior Lecturer at the Bartlett School of Architecture, University College London, Chief Chemist at ProtoLife and Associate Professor at the University of Southern Denmark. He received a bachelor’s degree in Biology from Pennsylvania State University, a doctorate in Genetics from Yale University and was a postdoctoral fellow under Jack Szostak at Harvard University. He has published in the area of droplets, complex systems, evolution and the origin of life. Currently he heads the Laboratory for Artificial Biology, developing novel synthetic chemical systems based on the properties of living systems.
Gáspár Jékely
Living Systems Institute, University of Exeter
A ciliary photoreceptor-cell circuit mediates pressure response in marine zooplankton
Luis Bezares-Calderón (2), Réza Shahidi (1), Gáspár Jékely (1, 2)
(1) Centre for Organismal Studies, Heidelberg University
(2) Living Systems Institute, University of Exeter
Hydrostatic pressure is a dominant cue for vertically migrating marine organisms but the physiological mechanisms of responding to pressure changes remain unclear. We uncovered the cellular and circuit bases of a barotactic response in the planktonic larva of the marine annelid Platynereis dumerilii. Increases in pressure induced a rapid, graded and adapting upward swimming response due to faster ciliary beating. By calcium imaging, we found that brain ciliary photoreceptors show a graded response to pressure changes. Larvae mutant for ciliary opsin-1 have smaller and disorganised sensory cilia and show diminished pressure responses. The ciliary photoreceptors synaptically connect to the head multiciliary band via serotonergic motoneurons. Genetic inhibition of the serotonergic cells by expression of a tetanus toxin blocked pressure-dependent increases in ciliary beating. We conclude that ciliary photoreceptors function as pressure sensors and activate ciliary beating through serotonergic signalling during barotaxis.
Gáspár Jékely studied Biology and obtained his PhD in 1999 at Eötvös Loránd University in Budapest. He then worked as a postdoc at the EMBL, Heidelberg, in the laboratory of Pernille Rorth and then Detlev Arendt. Between 2007 and 2017 he was a group leader at the Max Planck Institute for Developmental Biology in Tübingen, Germany. He moved to the Living Systems Institute at the University of Exeter as Professor of Neuroscience in 2017. Since 2023 he has been Professor of Molecular Organismal Biology at the Centre for Organismal Studies at Heidelberg University. His research interests include the structure, function and evolution of neural circuits in marine ciliated larvae and the origin and early evolution of nervous systems.
Aneta Koseska
Max Planck Institute for Neurobiology of Behavior–Caesar, Bonn, Germany
Ghost states, memory and learning in single cells
A fundamental characteristic of living systems is sensing and subsequent robust response to continuously varying environmental signals. Even seemingly “simple” systems, such as single cells or single-cell organisms, reveal higher-order computational capabilities that go beyond simple stimulus-response association. Developing a theoretical framework to characterize semistability in dynamical systems, we describe how “ghost” states can be utilized as a memory to integrate information from time-varying signals. We have identified experimentally that “ghost” states are an emergent feature of cell-surface receptor networks organized at criticality, which they exploit for memory-based navigation in changing environments. Furthermore, by formulating a theory of computation with ghost states, we explore theoretically and experimentally basic paradigms of learning on the level of single cells.
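As a minimal illustration of what a “ghost” is dynamically, the sketch below uses the textbook saddle-node normal form dx/dt = r + x²: just past the bifurcation (small r > 0) the fixed point has disappeared, yet trajectories linger near x = 0 for a time scaling like π/√r, a slow transient that can serve as a memory trace. This is a standard normal-form example only, not the speaker’s receptor-network model.

```python
# Saddle-node "ghost": long dwell time near the location of a vanished fixed point.
import numpy as np

def dwell_time(r, x0=-2.0, x_exit=2.0, dt=1e-4):
    """Time spent between x0 and x_exit for dx/dt = r + x^2 (simple Euler integration)."""
    x, t = x0, 0.0
    while x < x_exit:
        x += (r + x * x) * dt
        t += dt
    return t

for r in (1e-1, 1e-2, 1e-3):
    # Dwell time grows as ~ pi / sqrt(r) as the bifurcation is approached from above.
    print(f"r = {r:g}: dwell time = {dwell_time(r):.1f}  (theory ~ {np.pi / np.sqrt(r):.1f})")
```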
Aneta Koseska received a PhD in Nonlinear Dynamics from the University of Potsdam, Germany (2007), followed by a postdoc at the Center for Dynamics of Complex Systems, also in Potsdam. During this period her work focused on the dynamical basis of symmetry-breaking phenomena. In 2012 she joined the Max Planck Institute for Molecular Physiology in Dortmund, where her group interfaced the theory of biological information processing with experimental understanding of cell signaling systems. Since 2020 she has led the group “Cellular computations and learning” at the Max Planck Institute for Neurobiology of Behavior in Bonn, Germany, focusing on the computational and learning capabilities of single cells and single-cell organisms.
Michael Levin
Allen Discovery Center, Tufts University
Unconventional Selves, Unconventional Memories: the bioelectric memories of the body
All of us made the remarkable journey from being an unfertilized oocyte to a complex being with metacognitive capacities. Slowly, gradually, from chemistry and physics emerges a Self with hopes and dreams that belong to it and none of its individual neurons. Cognition does not wait until the brain appears – evolutionarily, and developmentally, it is preceded by the collective intelligence of your body cells. In this talk, I will describe how the same machinery – ion channels and electrical networks – underlies the ability of cellular collectives to use pattern memories to build and repair bodies. The symmetry between intelligent behavior in 3D space, and problem-solving capabilities of morphogenesis in anatomical space, has many fascinating implications for evolution, philosophy of mind, and regenerative medicine.
Dr. Michael Levin studies diverse intelligence in a wide range of evolved, designed, and hybrid systems. He received dual BS degrees in computer science and biology at Tufts University, and then a PhD in genetics from Harvard. He did a post-doc in cell biology at Harvard Medical School and opened his lab in 2000. His work on the genetic pathway regulating the left-right asymmetry of the body was chosen by the journal Nature as one of the 100 milestones of developmental biology of the 20th century. He holds the Vannevar Bush chair and is a Distinguished Professor in the Biology department at Tufts and an associate faculty at Harvard’s Wyss Institute. He is the founding director of the Allen Discovery Center at Tufts. His group works at the intersection of developmental biophysics, cognitive science, and computer science. They use a variety of models, including slime molds, cells, cultured organs, frog embryos, regenerating flatworms, and in silico simulations to understand how the basal intelligence of molecular networks and cells scales up. They develop computational tools to understand how systems at all scales navigate physiological, transcriptional, morphogenetic, and behavioral problem spaces. One of the main directions of the lab concerns developmental bioelectricity as the medium of the collective intelligence of cells during regulative morphogenesis. This includes appropriating the concepts and tools of neuroscience and applying them to read and write the anatomical memories of non-neural tissues. The applications of their work have extended to repair of birth defects, induction of organ regeneration, cancer reprogramming, and the creation of synthetic living machines.
Isabelle M. Mansuy
Laboratory of Neuroepigenetics, Brain Research Institute, Medical Faculty of the University of Zurich, and Institute for Neuroscience, Department of Health Science and Technology of the Swiss Federal Institute of Technology (ETH)
Neuroepigenetics: How childhood trauma can affect descendants via the germline
Behavior and physiology in mammals are strongly influenced by life experiences, particularly experiences in childhood. While positive factors can favor proper development and good mental and physical health later in life, early life adversity and traumatic events increase the risk for psychiatric, metabolic and autoimmune diseases and cancer. Such disorders can affect directly exposed individuals but, in some cases, also their descendants across several generations. The biological mechanisms underlying this form of inheritance of environmentally-induced (acquired) traits are thought to involve non-DNA-based factors in the germline. We developed a mouse model of postnatal stress that recapitulates trauma symptoms including increased risk-taking, depressive-like behaviors, cognitive and social deficits, as well as metabolic and cardiovascular dysfunctions in adulthood across generations. Symptoms persist throughout life and some affect the offspring of following generations, up to the 5th generation (i.e. risk-taking behaviors). Comparable symptoms affect traumatized children, indicating conserved effects across species. Exposure is associated with molecular and epigenetic changes involving RNA and DNA methylation (DNAme) in germ cells, with sperm RNA being causally linked to symptom transmission. miRNAs are also affected in extracellular vesicles in blood and the reproductive tract. Circulating factors were identified as mediators of some of the alterations in germ cells. Chronic injection of serum from trauma-exposed mouse males into control males recapitulated metabolic phenotypes in the offspring, suggesting information transfer from serum to germ cells. Pathways involving the peroxisome proliferator-activated receptor (PPAR) are causally involved, with pharmacological PPAR activation in vivo affecting the sperm transcriptome and metabolic functions in the offspring and grand-offspring. These results suggest the existence of an ensemble of factors and mechanisms that can carry information about past experiences from the periphery to germ cells for the inheritance of acquired traits.
Isabelle M. Mansuy is professor of neuroepigenetics at the Faculty of Medicine of the University of Zurich and the Department of Health Sciences and Technology of the Swiss Federal Institute of Technology in Zurich. Her lab is a pioneer in the field of epigenetic inheritance and identified molecular factors in reproductive cells responsible for the transmission of acquired traits across generations in mammals. The work focuses on early life trauma and its impact on mental and physical health in mice and humans. Mansuy has authored over 200 research articles, reviews and book chapters in neuroscience and neuroepigenetics and wrote a popular book on epigenetics for the general public. She is a member of the Swiss Academy of Medical Sciences, the European Academy of Sciences (EURASC) and the European Molecular Biology Organization, is a Knight of the National Order of Merit and holds the Legion of Honor in France. She is also a nominator for the Nobel Prize in Physiology or Medicine.
Isabella Sarto-Jackson
Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria
Epistemogenesis: Knowledge accumulation in biological systems
According to “Cognitive Biology” (sensu L. Kovac, 2006; 2015), the accumulation of knowledge (i.e., epistemogenesis) is the hallmark of biological and cultural evolution. Epistemogenesis is a complex, dynamic process that underlies the acquisition, storage, and utilization of knowledge in biological systems. This process involves the integration of novel information with past experiences and intrinsic knowledge. It therefore pertains both to acquired, ontogenetic knowledge and to phylogenetically transmitted knowledge. Epistemogenesis allows organisms to learn from past experiences, utilize and update their knowledge to adjust to a given situation, and predict future conditions.
In order to survive and maintain onticity, living systems must incessantly do ontic work using energy. In addition to ontic work, they also perform epistemic work by drawing on existing knowledge to more effectively compete for energy resources and to (actively) adapt to their environment. Knowledge here is defined as a useful description of some aspects of the world, giving the possessor the competence to behave in ways that contribute to its survival and reproduction (Goodwin, 1978). This view argues for knowledge and life to be co-extensive and closely ties knowledge to evolution. Based on this evolutionary definition, knowledge undergoes variation and selection, is protracted, and transgenerationally passed on. Thus, due to variation, retention, and transmission processes, knowledge represents a product of the biological system’s past that accumulated over the course of its evolutionary history paralleled by an increase in organizational and/or metabolic complexity.
More specifically, species carry and transmit knowledge as molecular, cellular, organismal, inter-organismal, (and cultural) memories by means of inclusive inheritance mechanisms providing the potential that then can be realized into particular phenotypes depending on the environment. Individual organisms or other biological systems exhibit variations of species-specific knowledge thereby functioning as “fumbling fingers” of the overall species to explore different niches.
Consequently, epistemogenesis – when understood as an accumulation of knowledge over generations – bestows a significant advantage on its possessor by facilitating adaptation to changing environments and avoiding potentially harmful situations. Accordingly, epistemogenesis is not limited to the brain, but includes non-neuronal cells, organs, and other biological systems. Moreover, knowledge accumulation in biological systems can follow different trajectories, depending on the context and the domain of knowledge.
For most of evolutionary history, growth of knowledge was an exceptionally slow process. However, with the advent of artifacts manufactured by our hominid ancestors, human evolution sped up by several orders of magnitude. This was due to knowledge no longer being limited to the body of an individual and its somatic memory, but being shared collectively, allowing for cumulative cultural evolution. In other words, knowledge acquisition occurred in a goal-directed manner, knowledge accumulation became strongly dependent on already existing knowledge, and its growth trajectory became linear. Over the last centuries, symbolic culture became pivotal, fostering institutionalized knowledge transmission and thereby further accelerating epistemogenesis. Since the Anthropocene, knowledge has accumulated exponentially in some areas, especially in science and technology, as new discoveries not only build upon previous ones, but can also mutually interact and scaffold each other, leading to a snowball effect. This pattern of compounded knowledge accumulation may explain the rapid climax of human evolution as compared to other species and, at the same time, may prompt us to reflect on the impending future of our species.
Isabella Sarto-Jackson is a neurobiologist and theoretical biologist. She is the Executive Manager of the Konrad Lorenz Institute for Evolution and Cognition Research in Klosterneuburg, Austria and President of the Austrian Neuroscience Association. She holds a Master’s degree in genetics, a PhD in neurobiochemistry, and the venia docendi in neurobiology. She has worked as a Postdoc and Assistant Professor at the Center for Brain Research of the Medical University of Vienna, Austria and as a guest researcher at the Leibniz Institute for Neurobiology in Magdeburg, Germany. In 2011, she joined the Konrad Lorenz Institute for Evolution and Cognition Research, extending her research focus to the philosophy of biology, evolutionary biology, and cognitive science. She is Associate Editor of Biological Theory and teaches at the University of Vienna, Austria and the Comenius University in Bratislava, Slovakia. Her research focuses on interrelations between cognition and evolution, drawing on concepts of cognitive biology and evolutionary epistemology that understand knowledge as a biogenic phenomenon involving all levels of biological organization. She is also interested in the processes that lie at the interface of biological and cultural evolution, particularly in the accelerating factors that contribute to the evolution of the “social brain.” In 2022, she published the book “The Making and Breaking of Minds: How Social Interactions Shape the Human Mind.”
Ricard V. Solé
ICREA-Complex Systems Lab, Universitat Pompeu Fabra
Cognition spaces: boundaries, limits and constraints
Collective computations take place in nature in two major classes of architecture, which we can roughly classify as “solid” (the standard, synaptic connectivity picture) versus those that are performed by “liquid” networks, such as the immune system or ant colonies. Other solutions (or constraints), associated with plant and fungal communication or the potential of unicellular systems such as Physarum, have emerged as relevant actors. Finally, synthetic systems such as robot swarms and engineered communicating cells offer additional avenues for inquiry. We need to address many questions, notably: What are the computational limits associated with the physical state displayed by the collective? Are there limited possibilities (such as those already observed) or many others? What can or cannot be computed? How do we define a proper evolutionary framework to understand the origins of different solutions? What are the trade-offs involved here? Can we evolve other solutions using artificial life models? Can a statistical physics approach to computation, including physical phases, help find universality classes? What is the impact of fluid versus solid architectures on the values and meaning of integrated information theory? Answering these questions will help to define a theoretical framework for the emergence and design of cognitive networks.
I am an ICREA research professor (the Catalan Institution for Research and Advanced Studies), currently working at the Universitat Pompeu Fabra, where I head the Complex Systems Lab at the PRBB. I completed degrees in both Physics and Biology at the University of Barcelona and received my PhD in Physics at the Polytechnic University of Catalonia. I am also an External Professor at the Santa Fe Institute (USA). I have been awarded, among others, James McDonnell Foundation and ERC Advanced grants.
My research focuses on understanding the evolutionary origins of complex systems using mathematical models and experimental approaches based on synthetic biology. I have proposed the concept of Synthetic Major Transitions as a unifying framework to explore the origins of innovation in evolution using a parallel approach, namely our potential for building or simulating synthetic systems that can recreate past evolutionary events. This includes a complex-systems view of the origin of protocells, multicellular systems, cognition and language. I proposed a novel field of research (“liquid brains”) to explore the space of possible cognitive networks, from standard brains to “liquid” ones (ant colonies, slime moulds or the immune system), including plants and artificial swarms. Finally, I am also pushing forward a research initiative on the terraformation of endangered ecosystems as a potential scenario for preventing tipping points on a future warming planet.
Richard Watson
University of Southampton
Natural Induction: Spontaneous problem solving in biological and abiological networks (without selection or design)
Is evolution by natural selection the only possible source of adaptive organisation in the natural world? Physical systems can show some properties relevant to adaptation spontaneously, i.e. without selection or design. First, any dynamical system described by the local minimisation of an energy function can be interpreted as an optimisation process in the limited sense that it finds states that solve the frustrations between its components (but only locally optimal solutions). Second, physical systems can also learn (with or without a teacher) in the sense that their internal structure deforms or accommodates to a pattern of forcing, and thereby approximates a model that can store, recall and generalise past states. We investigate physical systems where both these effects occur at the same time. We show that, in such systems, the (initially limited) optimisation ability of physical systems can improve with experience through an effect we call adaptation by natural induction. The effect is shown in a dynamical system described by a network of viscoelastic connections subject to occasional disturbances. When the internal structure of such a system deforms slowly across many disturbances, it spontaneously learns to preferentially visit solutions of exceptionally high quality. Adaptation by natural induction thus produces network organisations that improve problem-solving competency with experience. We note that the conditions for adaptation by natural induction, and its adaptive competency, are different from those of natural selection. We conclude that natural selection is not unique in its ability to spontaneously produce adaptive organisation and discuss some implications.
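The following toy conveys the two ingredients combined in the abstract, fast local optimisation plus slow structural accommodation, using a generic Hopfield-style network rather than the authors’ viscoelastic model: after each random disturbance the state relaxes to a local energy minimum, and the connection strengths then deform slightly towards the visited state; over many disturbances the minima visited tend to improve on the original problem couplings. All parameter values are arbitrary choices for illustration.

```python
# Toy "optimisation + slow accommodation" loop in the spirit of the abstract (not the authors' model).
import numpy as np

rng = np.random.default_rng(1)
N = 60
J = rng.normal(0, 1, (N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)               # fixed "problem" couplings (symmetric, zero diagonal)
W = np.zeros((N, N))                 # slowly deforming, learned couplings

def relax(s, C, steps=3000):
    """Asynchronous descent on E(s) = -1/2 s^T C s with s in {-1, +1}^N."""
    for _ in range(steps):
        i = rng.integers(N)
        h = C[i] @ s
        s[i] = 1 if h >= 0 else -1
    return s

def energy(s):
    """Solution quality, always measured on the original problem couplings J."""
    return -0.5 * s @ J @ s

eta = 0.002                          # slow deformation rate
for epoch in range(201):
    s = rng.choice([-1, 1], N)       # occasional disturbance: random restart
    s = relax(s, J + W)              # fast dynamics: settle on the combined couplings
    W += eta * np.outer(s, s)        # slow dynamics: accommodate towards the visited state
    np.fill_diagonal(W, 0)
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}: energy of visited minimum = {energy(s):.1f}")
```

The design mirrors the separation of timescales described in the abstract: the state variables settle quickly between disturbances, while the couplings change only slightly per disturbance, so the network gradually builds an associative memory of the solutions it has found.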
Dr Richard Watson studies evolution, learning, cognition and society and their unifying algorithmic principles. He studied Artificial Intelligence and Adaptive Systems at Sussex University, then completed a PhD in Computer Science at Brandeis University in Boston. His current work deepens the unification of evolution and learning – specifically, with connectionist models of learning and cognition, familiar in neural network research – to address topics such as evolvability, ecological memory, evolutionary transitions in individuality (ETIs), phenotypic plasticity, the extended evolutionary synthesis, collective intelligence and ‘design’. He has also developed new computational methods for combinatorial optimisation (deep optimisation), exploiting a unification of deep learning and ‘deep evolution’ (i.e. ETIs). He is the author of “Compositional Evolution” (MIT Press), was featured as “one to watch in AI” in Intelligent Systems magazine, and his paper “How Can Evolution Learn?” in TREE attracted the ISAL award in 2016. He is now a professor at the University of Southampton.