

Online ISSN 2653-4983
JOURNAL of MULTISCALE NEUROSCIENCE
Volume 3 Issue 4 December 2024





ORIGINAL RESEARCH
The Illusion of Intrinsic Meaning: Reassessing Conscious Experience
Thomas W. Loker
This paper argues that the sense of intrinsic meaning is an illusion produced when predictive coding deploys cognitive artifacts to minimize prediction errors, pointing toward a functionalist account of conscious experience. It examines how conscious experience serves functional roles in the brain's predictive coding and symbolic cognition systems. Drawing on recent developments in cognitive neuroscience, philosophy, and artificial intelligence, we argue that conscious experience emerges from the need to construct coherent narratives for survival and decision-making. The paper also explores the implications for artificial intelligence, suggesting that artificial systems could develop analogous cognitive artifacts through predictive models without subjective awareness, contributing to a functionalist understanding of consciousness and further advancing the discussion of conscious experience in biological and artificial systems.
BRIEF REPORT
A new pilot wave reinterpretation for quantum AI systems
Roumen Tsekov
A new nonlinear Schrödinger equation is derived that describes a cluster of bosons interacting via the strong force. Its numerical solution indicates that the cluster is stable only if the number of particles is exactly 131. The particles' kinetic and potential energies are also calculated and interrelated via the virial theorem. The developed concept is then applied to electrostatic forces.
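For context, the virial theorem invoked in this abstract takes a standard form for particles bound by a power-law pair potential; the specific relation used in the paper is not stated here, so the following is only the textbook version:

```latex
% Virial theorem for a power-law pair potential V(r) \propto r^{n}:
% time-averaged kinetic and potential energies satisfy
2\langle T \rangle = n\,\langle V \rangle
% e.g. for a Coulomb-like 1/r potential (n = -1): 2\langle T \rangle = -\langle V \rangle
```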
EDITORIAL
Multiscalar Brain Adaptability in AI Systems
Shantipriya Parida
Artificial General Intelligence (AGI) is at the core of this exploration, which seeks to emulate human cognitive flexibility, reasoning, and learning within computational paradigms. While AGI excels in predefined tasks, it falters in managing uncertainty and unpredictability. By contrast, Strong Artificial Intelligence (SAI) envisions systems capable of mind-like processes—managing uncertainty and anticipating unexpected events. Achieving SAI requires a deeper understanding of the brain’s adaptability, spanning multiple scales from synaptic plasticity to precognitive consciousness...
BRIEF REPORT
Roman R. Poznanski and E. Alemdar
A new approach attempts to express the poststructural dynamics of the entropic brain in terms of the ‘hidden’ structure of uncertainties. It highlights the importance of understanding how consciousness operates within a functional system approach that appropriately considers changeable boundary conditions through functional interactions. Causality is sought in boundary conditions when uncertainty reduction becomes an act of understanding, a course of action that navigates the multiscale landscape of potentialities. Motion through the multiscale landscape continuously changes uncertainties into intentionalities via ‘multiscale redundancy.’ In the multiscale version of the entropic brain, the ‘hidden’ structure of uncertainties unfolds through poststructural dynamics occurring at different locations, levels, and times; these dynamics instantly actualize through intermittent interactions as precognitive experienceabilities and combine into a global resonance before returning to spontaneous potentiality. The entropic brain is thus the ‘hidden’ structure of uncertainties unfolding through poststructural dynamics in the transition from potentialities to intentionalities, giving form to action via quantum potential energy and then motion via quantum kinetic energy through new information pathways.
BRIEF REPORT
Intentionality for better communication in minimally conscious AI design
R.R. Poznanski, L.A. Cacha, V. Sbitnev, N. Iannella, S. Parida, E.J. Brändas and J.Z. Achimowicz
Consciousness is the ability to have intentionality, a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to an understanding of 'meaning' as a noncontextual dynamic prior to language. This suggests replacing the Hard Problem of consciousness as the target for building conscious artificial intelligence (AI). Developing model emulations and exploring the fundamental mechanisms by which machines understand meaning is central to the development of minimally conscious AI. Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] have shown that a framework for advancing artificial systems toward intentionality, grounded in understanding uncertainty derived from negentropic action, entails quantum-thermal fluctuations through informational channels rather than the recognition (cf. introspection) of sensory cues through perceptual channels. Improving communication in conscious AI requires both software and ...