Artificial General
Intelligence 2
Bob Marcus
robert.marcus@et-strategies.com
Part 2 of 4 parts: Natural and Human Intelligence
This is a first cut.
More details will be added later.
Part 1: Artificial Intelligence (AI)
Part 2: Natural Intelligence (NI)
Part 3: Artificial General Intelligence (AI + NI)
Part 4: Networked AGI Layer on top of Gaia and Human Society
Four Slide Sets on Artificial General Intelligence
AI = Artificial Intelligence (Task)
AGI = Artificial Mind (Simulation)
AB = Artificial Brain (Emulation)
AC = Artificial Consciousness (Synthetic)
AI < AGI < ? AB < AC (Is a partial brain emulation needed to create a mind?)
Mind is not required for task proficiency
Full Natural Brain architecture is not required for a mind
Consciousness is not required for a natural brain architecture
Philosophical Musings 10/2022
Focused Artificial Intelligence (AI) will get better at specific tasks
Specific AI implementations will probably exceed human performance in most tasks
Some will attain superhuman abilities in a wide range of tasks
“Common Sense” = low-level experiential broad knowledge could be an exception
Some AIs could use brain-inspired architectures to improve complex task performance
This is not equivalent to human or artificial general intelligence (AGI)
However, networking task-centric AIs could provide a first step towards AGI
This is similar to the way human society achieves power from communication
The combination of the networked AIs could be the foundation of an artificial mind
In a similar fashion, human society can accomplish complex tasks without being conscious
Distributed division of labor enables tasks to be assigned to the most competent element
Networked humans and AIs could cooperate through brain-machine interfaces
In the brain, consciousness provides direction to the mind
In large societies, governments perform the role of conscious direction
With networked AIs, a “conscious operating system” could play a similar role.
This would probably have to be initially programmed by humans.
If the AI network included sensors, actuators, and robots it could be aware of the world
The AI network could form a grid managing society, biology, and geology layers
A conscious AI network could develop its own goals beyond efficient management
Humans in the loop could be valuable in providing common sense and protective oversight
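The division-of-labor idea above can be sketched in code. The following is a hypothetical illustration, not an existing system: the agent names, skill tables, and competence scores are invented, and a real coordinating layer for networked AIs would be far more elaborate.

```python
# Toy sketch: task-centric AIs on a network, with a coordinating layer
# that routes each task to the most competent element (division of labor).
class TaskAI:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills  # task -> competence score in [0, 1]

    def competence(self, task):
        return self.skills.get(task, 0.0)

def route(task, agents):
    """Assign the task to the most competent agent, or None if nobody can do it."""
    best = max(agents, key=lambda a: a.competence(task))
    return best.name if best.competence(task) > 0 else None

agents = [
    TaskAI("vision_ai",   {"classify_image": 0.9}),
    TaskAI("language_ai", {"translate": 0.8, "summarize": 0.7}),
    TaskAI("planner_ai",  {"schedule": 0.85}),
]

print(route("translate", agents))       # language_ai
print(route("classify_image", agents))  # vision_ai
print(route("compose_music", agents))   # None — no competent element
```

A "conscious operating system" in this picture would sit above `route`, setting which tasks matter and monitoring outcomes; the human-in-the-loop role would be vetoing or redirecting its assignments.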
Outline
Human Intelligence
Brain Architecture
Memory
Human Instinctive Intelligence
Human Cognitive Intelligence
Human Experiential Learning
Domain Knowledge
Common Sense
Intuition
Human Behavior
Mind
Consciousness
Brain Interfaces
References
Human Intelligence
From https://arxiv.org/pdf/2205.00002.pdf
A Theory of Natural Intelligence
Brain makes Predictions
From https://numenta.com/a-thousand-brains-by-jeff-hawkins
There was only one explanation I could think of. My brain, specifically my neocortex, was making multiple
simultaneous predictions of what it was about to see, hear, and feel. Every time I moved my eyes, my neocortex
made predictions of what it was about to see. Every time I picked something up, my neocortex made predictions of
what each finger should feel. And every action I took led to predictions of what I should hear. My brain predicted the
smallest stimuli, such as the texture of the handle on my coffee cup, and large conceptual ideas, such as the
correct month that should be displayed on a calendar. These predictions occurred in every sensory modality, for
low-level sensory features and high-level concepts, which told me that every part of the neocortex, and therefore
every cortical column, was making predictions. Prediction was a ubiquitous function of the neocortex.
At that time, few neuroscientists described the brain as a prediction machine. Focusing on how the neocortex made
many parallel predictions would be a novel way to study how it worked. I knew that prediction wasn’t the only thing
the neocortex did, but prediction represented a systemic way of attacking the cortical column’s mysteries. I could
ask specific questions about how neurons make predictions under different conditions. The answers to these
questions might reveal what cortical columns do, and how they do it.
To make predictions, the brain has to learn what is normal—that is, what should be expected based on past
experience. My previous book, On Intelligence, explored this idea of learning and prediction. In the book, I used the
phrase “the memory prediction framework” to describe the overall idea, and I wrote about the implications of
thinking about the brain this way. I argued that by studying how the neocortex makes predictions, we would be able
to unravel how the neocortex works.
Today I no longer use the phrase “the memory prediction framework.” Instead, I describe the same idea by saying
that the neocortex learns a model of the world, and it makes predictions based on its model.
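Hawkins' memory-prediction idea — learn what is normal from past experience, then predict the next input — can be illustrated with a toy sequence predictor. This sketch conveys only the general idea; it is not Numenta's actual cortical model:

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Learn what is 'normal' from experience, then predict the next input.
    A novel context yields no prediction, i.e. a surprise."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # symbol -> next-symbol counts

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, context):
        counts = self.transitions.get(context)
        if not counts:
            return None  # no past experience with this context
        return counts.most_common(1)[0][0]

# Repeated exposure to a familiar pattern
model = SequencePredictor()
model.learn("abcabcabc")

print(model.predict("a"))  # 'b' — expected from past experience
print(model.predict("c"))  # 'a'
print(model.predict("z"))  # None — novel input, nothing to predict
```

The real neocortex makes such predictions massively in parallel, across every sensory modality at once; the toy model captures only the learn-then-predict loop.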
Growth of Minds
From https://www.amazon.com/Journey-Mind-Thinking-Emerged-Chaos/dp/B09M2WCW72/
In the beginning, fourteen billion years ago, existence arose from
nonexistence and the universe commenced. Four billion years ago, give or
take, life arose from nonlife and the evolution of species commenced. A
billion years after that, purpose arose from purposelessness and the journey
of the mind commenced. Eventually, the journey would forge a god out of
godlessness, a new breed of mind endowed with the power and disposition
to reshape the cosmos as it saw fit.
This book retraces the journey of the mind from the aimless cycling of mud
on a dark and barren Earth until the morning a mind woke up and declared
to an indifferent universe, “I am aware of me!” The chapters ahead visit
seventeen different living minds, ranging from the simplest to the most
sophisticated. First up is the tiniest organism on Earth, the humble
archaeon, featuring a mind so minuscule that you would be forgiven for
questioning whether it’s a mind at all. From there, our itinerary will take us
forward through a series of increasingly brawny intellects. We will sojourn
with amoeba minds, insect minds, tortoise minds, and monkey minds, until
we arrive at the mightiest mind to ever grace our solar system . . . one that
may be something of a surprise.
Growth of Minds (cont)
From https://www.amazon.com/Journey-Mind-Thinking-Emerged-Chaos/dp/B09M2WCW72/
Brain Architecture
Human Brain
From https://en.wikipedia.org/wiki/Human_brain
The human brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists
of the cerebrum, the brainstem and the cerebellum. It controls most of the activities of the body, processing, integrating, and coordinating the
information it receives from the sense organs, and making decisions as to the instructions sent to the rest of the body. The brain is contained in, and
protected by, the skull bones of the head.
The cerebrum, the largest part of the human brain, consists of two cerebral hemispheres. Each hemisphere has an inner core composed of white matter,
and an outer surface – the cerebral cortex – composed of grey matter. The cortex has an outer layer, the neocortex, and an inner allocortex. The
neocortex is made up of six neuronal layers, while the allocortex has three or four. Each hemisphere is conventionally divided into four lobes – the
frontal, temporal, parietal, and occipital lobes. The frontal lobe is associated with executive functions including self-control, planning, reasoning, and
abstract thought, while the occipital lobe is dedicated to vision. Within each lobe, cortical areas are associated with specific functions, such as the
sensory, motor and association regions. Although the left and right hemispheres are broadly similar in shape and function, some functions are
associated with one side, such as language in the left and visual-spatial ability in the right. The hemispheres are connected by commissural nerve tracts,
the largest being the corpus callosum.
The cerebrum is connected by the brainstem to the spinal cord. The brainstem consists of the midbrain, the pons, and the medulla oblongata. The
cerebellum is connected to the brainstem by three pairs of nerve tracts called cerebellar peduncles. Within the cerebrum is the ventricular system,
consisting of four interconnected ventricles in which cerebrospinal fluid is produced and circulated. Underneath the cerebral cortex are several
important structures, including the thalamus, the epithalamus, the pineal gland, the hypothalamus, the pituitary gland, and the subthalamus; the limbic
structures, including the amygdala and the hippocampus; the claustrum, the various nuclei of the basal ganglia; the basal forebrain structures, and the
three circumventricular organs. The cells of the brain include neurons and supportive glial cells. There are more than 86 billion neurons in the brain,
and a more or less equal number of other cells. Brain activity is made possible by the interconnections of neurons and their release of neurotransmitters
in response to nerve impulses. Neurons connect to form neural pathways, neural circuits, and elaborate network systems. The whole circuitry is driven
by the process of neurotransmission.
The brain is protected by the skull, suspended in cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier. However, the brain
is still susceptible to damage, disease, and infection. Damage can be caused by trauma, or a loss of blood supply known as a stroke. The brain is
susceptible to degenerative disorders, such as Parkinson's disease, dementias including Alzheimer's disease, and multiple sclerosis. Psychiatric
conditions, including schizophrenia and clinical depression, are thought to be associated with brain dysfunctions. The brain can also be the site of
tumours, both benign and malignant; these mostly originate from other sites in the body.
The study of the anatomy of the brain is neuroanatomy, while the study of its function is neuroscience. Numerous techniques are used to study the
brain. Specimens from other animals, which may be examined microscopically, have traditionally provided much information. Medical imaging
technologies such as functional neuroimaging, and electroencephalography (EEG) recordings are important in studying the brain. The medical history
of people with brain injury has provided insight into the function of each part of the brain. Brain research has evolved over time, with philosophical,
experimental, and theoretical phases. An emerging phase may be to simulate brain activity.[2]
Human Brain
From https://en.wikipedia.org/wiki/Human_brain
A New Function of the Cerebellum
From https://neurosciencenews.com/cerebellum-emotional-memory-21589/
Summary: The cerebellum plays a key role in the storage of both positive and negative memories of emotional events. The cerebellum is known primarily for the
regulation of movement. Researchers at the University of Basel have now discovered that the cerebellum also plays an important role in remembering emotional
experiences.
Both positive and negative emotional experiences are stored particularly well in memory. This phenomenon is important to our survival, since we need to remember
dangerous situations in order to avoid them in the future. Previous studies have shown that a brain structure called the amygdala, which is important in the processing of
emotions, plays a central role in this phenomenon. Emotions activate the amygdala, which in turn facilitates the storage of information in various areas of the cerebrum.
The current research, led by Professor Dominique de Quervain and Professor Andreas Papassotiropoulos at the University of Basel, investigates the role of the
cerebellum in storing emotional experiences. In a large-scale study, the researchers showed 1,418 participants emotional and neutral images and recorded the subjects’
brain activity using magnetic resonance imaging. In a memory test conducted later, the positive and negative images were remembered by the participants much better
than the neutral images. The improved storage of emotional images was linked with an increase in brain activity in the areas of the cerebrum already known to play a
part. However, the team also identified increased activity in the cerebellum.
The cerebellum in communication with the cerebrum
The researchers were also able to demonstrate that the cerebellum shows stronger communication with various areas of the cerebrum during the process of enhanced
storage of the emotional images. It receives information from the cingulate gyrus – a region of the brain that is important in the perception and evaluation of feelings.
Furthermore, the cerebellum sends out signals to various regions of the brain, including the amygdala and hippocampus. The latter plays a central role in
memory storage. “These results indicate that the cerebellum is an integral component of a network that is responsible for the improved storage of emotional
information,” says de Quervain.
From https://en.wikipedia.org/wiki/Neuroscience
Neuroscience
Neuroscience is the scientific study of the nervous system (the brain,
spinal cord, and peripheral nervous system) and its functions.[1] It is a
multidisciplinary science that combines physiology, anatomy,
molecular biology, developmental biology, cytology, psychology,
physics, computer science, chemistry, medicine, statistics, and
mathematical modeling to understand the fundamental and emergent
properties of neurons, glia and neural circuits.[2][3][4][5][6] The
understanding of the biological basis of learning, memory, behavior,
perception, and consciousness has been described by Eric Kandel as
the "epic challenge" of the biological sciences.[7]
The scope of neuroscience has broadened over time to include
different approaches used to study the nervous system at different
scales. The techniques used by neuroscientists have expanded
enormously, from molecular and cellular studies of individual neurons
to imaging of sensory, motor and cognitive tasks in the brain.
From https://www.amazon.com/World-Wide-Mind-Integration-Humanity/dp/1439119147/
Neuron
From https://en.wikipedia.org/wiki/Neuroscience_and_intelligence
Neuroscience and Intelligence
Neuroscience and intelligence refers to the various neurological factors that are partly responsible for the variation of intelligence
within species or between different species. A large amount of research in this area has been focused on the neural basis of human
intelligence. Historic approaches to study the neuroscience of intelligence consisted of correlating external head parameters, for
example head circumference, to intelligence.[1] Post-mortem measures of brain weight and brain volume have also been used.[1]
More recent methodologies focus on examining correlates of intelligence within the living brain using techniques such as magnetic
resonance imaging (MRI), functional MRI (fMRI), electroencephalography (EEG), positron emission tomography and other non-
invasive measures of brain structure and activity.[1]
Researchers have been able to identify correlates of intelligence within the brain and its functioning. These include overall brain
volume,[2] grey matter volume,[3] white matter volume,[4] white matter integrity,[5] cortical thickness[3] and neural efficiency.[6]
Although the evidence base for our understanding of the neural basis of human intelligence has increased greatly over the past 30
years, even more research is needed to fully understand it.[1] The neural basis of intelligence has also been examined in animals
such as primates, cetaceans, and rodents.[7]
Neural efficiency
The neural efficiency hypothesis postulates that more intelligent individuals display less activation in the brain during cognitive
tasks, as measured by glucose metabolism.[6] A small sample of participants (N=8) displayed negative correlations between
intelligence and absolute regional metabolic rates ranging from -0.48 to -0.84, as measured by PET scans, indicating that brighter
individuals were more effective processors of information, as they use less energy.[6] According to an extensive review by
Neubauer & Fink[40] a large number of studies (N=27) have confirmed this finding using methods such as PET scans,[41] EEG[42]
and fMRI.[43]
fMRI and EEG studies have revealed that task difficulty is an important factor affecting neural efficiency.[40] More intelligent
individuals display neural efficiency only when faced with tasks of subjectively easy to moderate difficulty, while no neural
efficiency can be found during difficult tasks.[44] In fact, more able individuals appear to invest more cortical resources in tasks of
high difficulty.[40] This appears to be especially true for the Prefrontal Cortex, as individuals with higher intelligence displayed
increased activation of this area during difficult tasks compared to individuals with lower intelligence.[45][46] It has been proposed
that the main reason for the neural efficiency phenomenon could be that individuals with high intelligence are better at blocking
out interfering information than individuals with low intelligence.[47]
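The reported negative correlations are ordinary Pearson coefficients. Below is a minimal sketch of that computation with invented data — the IQ and metabolic values are illustrative only, not from the cited study:

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data in the spirit of the PET findings: higher measured
# intelligence, lower regional glucose metabolic rate (arbitrary units).
iq         = [95, 100, 105, 110, 115, 120, 125, 130]
metabolism = [8.2, 8.0, 7.6, 7.5, 7.1, 6.9, 6.6, 6.2]

r = pearson_r(iq, metabolism)
print(round(r, 2))  # strongly negative, echoing the -0.48 to -0.84 range
```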
From https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging
Functional Magnetic Resonance Imaging (fMRI)
Functional magnetic resonance imaging or functional MRI (fMRI) measures brain activity by detecting changes associated with blood flow.[1][2] This technique
relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases.[3]
The primary form of fMRI uses the blood-oxygen-level dependent (BOLD) contrast,[4] discovered by Seiji Ogawa in 1990. This is a type of specialized brain and
body scan used to map neural activity in the brain or spinal cord of humans or other animals by imaging the change in blood flow (hemodynamic response) related
to energy use by brain cells.[4] Since the early 1990s, fMRI has come to dominate brain mapping research because it does not involve the use of injections, surgery,
the ingestion of substances, or exposure to ionizing radiation.[5] This measure is frequently corrupted by noise from various sources; hence, statistical procedures
are used to extract the underlying signal. The resulting brain activation can be graphically represented by color-coding the strength of activation across the brain or
the specific region studied. The technique can localize activity to within millimeters but, using standard techniques, no better than within a window of a few
seconds.[6] Other methods of obtaining contrast are arterial spin labeling[7] and diffusion MRI. Diffusion MRI is similar to BOLD fMRI but provides contrast based
on the magnitude of diffusion of water molecules in the brain.
In addition to detecting BOLD responses from activity due to tasks or stimuli, fMRI can measure resting state, or negative-task state, which shows the subjects'
baseline BOLD variance. Since about 1998 studies have shown the existence and properties of the default mode network, a functionally connected neural network
of apparent resting brain states.
An fMRI image with yellow areas showing increased activity compared with a control condition
Purpose: measures brain activity by detecting changes due to blood flow.
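The statistical extraction mentioned above can be caricatured as correlating each voxel's time series with a task regressor. This toy sketch uses synthetic data and plain correlation rather than a real GLM-based fMRI pipeline:

```python
import random
from statistics import mean
from math import sqrt

def correlate(xs, ys):
    """Pearson correlation between two equal-length time series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs)) * sqrt(sum((y - my) ** 2 for y in ys))
    return num / den

random.seed(42)
task = ([0.0] * 5 + [1.0] * 5) * 4   # boxcar regressor: rest/task blocks

# Synthetic voxels: one responds to the task (signal + noise), one does not.
active  = [2.0 * t + random.gauss(0.0, 0.5) for t in task]
control = [random.gauss(0.0, 0.5) for _ in task]

print(round(correlate(task, active), 2))   # high: activation detected
print(round(correlate(task, control), 2))  # near zero: no activation
```

Real analyses fit a hemodynamic response model and correct for multiple comparisons across tens of thousands of voxels; the correlation step above is the underlying intuition.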
From https://en.wikipedia.org/wiki/Connectome
Connectome
A connectome (/kəˈnɛktoʊm/) is a comprehensive map of neural
connections in the brain, and may be thought of as its "wiring
diagram". An organism's nervous system is made up of neurons
which communicate through synapses. A connectome is constructed
by tracing the neuron in a nervous system and mapping where
neurons are connected through synapses.
The significance of the connectome stems from the realization that
the structure and function of the human brain are intricately linked,
through multiple levels and modes of brain connectivity. There are
strong natural constraints on which neurons or neural populations can
interact, or how strong or direct their interactions are. Indeed, the
foundation of human cognition lies in the pattern of dynamic
interactions shaped by the connectome.
Despite such complex and variable structure-function mappings, the
connectome is an indispensable basis for the mechanistic
interpretation of dynamic brain data, from single-cell recordings to
functional neuroimaging.
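The connectome's "wiring diagram" role can be made concrete as a directed graph, where reachability expresses the natural constraints on which neurons can interact. The neuron names below are hypothetical:

```python
# A connectome reduced to its essence: a directed graph of synaptic links.
connectome = {
    "sensory_1": ["inter_1"],
    "sensory_2": ["inter_1", "inter_2"],
    "inter_1":   ["motor_1"],
    "inter_2":   ["motor_1", "motor_2"],
    "motor_1":   [],
    "motor_2":   [],
}

def reachable(graph, start):
    """All neurons a signal from `start` can reach via synapses."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        for m in graph[n]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

print(sorted(reachable(connectome, "sensory_2")))
# ['inter_1', 'inter_2', 'motor_1', 'motor_2']
```

Real connectomes carry synapse counts and weights on the edges, which is what makes structure-function mapping hard; the bare graph above shows only the connectivity constraint.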
From https://en.wikipedia.org/wiki/Cerebral_organoid
Brain Organoids
Brainstorm
From https://alleninstitute.org/what-we-do/brain-science/
Allen Institutes
The Allen Institute for Brain Science is the Allen
Institute’s oldest scientific division, established in
2003, and has generated several foundational data
resources for the neuroscience community,
exploring the brain at the level of gene
expression, connectivity, and, most recently,
individual cell types and synapses. The Allen
Institute for Brain Science is currently focused on
defining and understanding the cell types of the
mammalian brain to ultimately better understand
brain development, evolution and disease. We are
working toward a complete parts list of brain cell
types, how those cell types connect and function
in the brain, and what changes happen to cells in
the aging brain and in neurodegenerative diseases
such as Alzheimer’s disease.
The MindScope program at the Allen Institute
seeks to understand the transformations, sometimes
called computations, in coding and decoding that
lead from photons to behavior and conscious
experience by observing, perturbing and modeling
the physical transformations of signals in the
cortical-thalamic visual system within a few
perception-action cycles. We generate data and
discoveries through the Allen Brain Observatory, a
standardized and high-throughput experimental
platform that captures neurons and circuits in
action in the visual regions of the mouse brain, to
glean principles of how the mammalian brain
processes information, responds to the external
world, and drives behavior.
The Allen Institute for Neural Dynamics
explores the brain’s activity, at the level of
individual neurons and the whole brain, to reveal
how we interpret our environments to make
decisions. We aim to discover how neural
signaling – and changes in that signaling – allow
the brain to perform complex but fundamental
computations and drive flexible behaviors. Our
experiments and openly shared resources will
shed light on behavior, memory, how we handle
uncertainty and risk, how humans and other
animals chase rewards – and how some or all of
these complicated cognitive functions go awry
in neuropsychiatric disorders such as depression,
ADHD or addiction.
From https://portal.brain-map.org/
Brain Maps
From https://neuroscience.stanford.edu/
Neuroscience at Stanford
NeuroDiscovery applies cutting-edge techniques
to make fundamental discoveries in brain science
— discoveries that could unlock new medical
treatments, transform education, inform public
policy, and help us understand who we are. Our
scientists peer at individual molecules operating
where one neuron sends signals to the next. We
trace networks of interconnected neurons to map
the neural circuits responsible for different brain
functions. And we tap into those circuits,
tracking dynamic chemical and electrical signals
to understand how our brains detect, integrate
and transform stimuli into action.
The human brain has 100 billion nerve cells and
trillions of connections between them.
Understanding the workings of such a complex and
dynamic organ requires new tools and technologies.
Materials scientists are developing probes to form
gentle but sensitive and reliable interfaces to
stimulate and record signals from thousands of
individual neurons at once. Our engineers are
developing ways to manipulate neural circuits with
electricity, light, ultrasound and magnetic fields, and
others are listening to the brain, interpreting the
language of neural signals and using that language to
drive robotic arms or to type on a computer. New
tools will enable as yet unimagined discoveries and
will allow us to repair and even to augment the
human brain.
Projects
Understanding the brain in health and disease will
improve treatments for ourselves and our loved
ones. Our clinical scientists not only treat patients,
but are also working with basic scientists to pioneer
novel treatments for psychiatric and neurological
disease. Ongoing research aims to reverse brain
aging, ease the devastating consequences of stroke,
and develop non-invasive treatments to modulate
brain activity associated with epilepsy and other
neurological diseases. Breakthrough improvements
in brain and mental health benefit not just
individuals, but society as a whole.
From https://www.nature.com/articles/d41586-021-02628-x
The Rise of the Assembloid
Ever since he was a medical student in Romania, Sergiu Pașca has wanted to understand how connections between cells go awry
in the brains of people with psychiatric disorders. Because the living human brain is inaccessible, these conditions could be
diagnosed and classified only according to their behavioural symptoms, rather than their underlying biological causes.
In 2017, Pașca took a major step towards his goal. By this time, he was a physician-scientist at Stanford University in California,
and using induced pluripotent stem cells (iPS cells) to model various structures in the brain. Cultured from adult skin cells, iPS
cells could be prodded in Pașca’s laboratory to grow into 3D spheroids that mimic neuronal tissues such as the frontal cortex.
Spheroids are useful for studying the emergence and properties of individual neurons, “but also limited in that we couldn’t use
them to study complex interactions involving multiple cell types”, Pașca says.
These interactions are crucial to how the brain gets wired up during development, and Pașca wanted to model them using iPS-
cell-derived tissues in a dish. So, he and his team performed an experiment: they combined spheroids from two distinct brain
regions involved in higher-order thought processes. And remarkably, the two spheroids fused together, just as they would in the
brain of a growing baby. No one had ever witnessed this early developmental process before, and Pașca marvelled at the sight of
it. “The cells within the spheroids knew just where to go,” he says. “They started changing their morphology to form synapses
and become electrically integrated.”
Pașca coined the term “assembloid” to describe this construct of neural circuits1. As he defines them, assembloids are 3D
structures formed from the fusion and functional integration of multiple cell types. And, most important, they mimic the
complex cellular interactions from which organs arise in the body.
Assembloids are now at the leading edge of stem-cell research. Scientists are using them to investigate early events in organ
development, and as tools for studying not only psychiatric disorders, but other types of disease as well. According to Pașca,
assembloids have the advantage of revealing how interactions between different tissues give rise to new cellular properties. For
instance, some neurons activate secondary developmental programs required for integration into circuits only after meeting up
with the other brain cells they connect to. And by creating assembloids using cells derived from people with particular diseases,
researchers should be able to reproduce the inherited pathology of their diseases in a dish. It is hoped that assembloids will
lessen the need for laboratory animals, and open doors for high-throughput screening of drugs and chemicals.
From https://www.humanbrainproject.eu/en/about/overview/
Human Brain Project
The Human Brain Project (HBP) is one of the three FET (Future and Emerging Technology)
Flagship projects. Started in 2013, it is one of the largest research projects in the world. More
than 500 scientists and engineers at more than 140 universities, teaching hospitals, and
research centres across Europe come together to address one of the most challenging
research targets – the human brain.
To tame brain complexity, the project is building a research infrastructure to help advance
neuroscience, medicine, computing and brain-inspired technologies - EBRAINS. The HBP is
developing EBRAINS to create lasting research platforms that benefit the wider community.
The HBP provides a framework where teams of researchers and technologists work together
to scale up ambitious ideas from the lab, explore the different aspects of brain organisation,
and understand the mechanisms behind cognition, learning, or plasticity.
Scientists in the HBP conduct targeted experimental studies and develop theories and
models to shed light on the human connectome, addressing mechanisms that underlie
information processing, from the molecule to cellular signaling and large-scale networks.
From https://www.humanbrainproject.eu/en/follow-hbp/news/human-brain-project-announces-new-phase/
Human Brain Project
From https://en.wikipedia.org/wiki/SpiNNaker
Human Brain Project’s SpiNNaker Computer
From https://braininitiative.nih.gov/
BRAIN Initiative
The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN)
Initiative is aimed at revolutionizing our understanding of the human brain. By
accelerating the development and application of innovative technologies,
researchers will be able to produce a revolutionary new dynamic picture of the
brain that, for the first time, shows how individual cells and complex neural circuits
interact in both time and space. Long desired by researchers seeking new ways to
treat, cure, and even prevent brain disorders, this picture will fill major gaps in our
current knowledge and provide unprecedented opportunities for exploring exactly
how the brain enables the human body to record, process, utilize, store, and
retrieve vast quantities of information, all at the speed of thought.
From https://braininitiative.nih.gov/sites/default/files/images/brain_2.0_6-6-19-final_revised10302019_508c.pdf
BRAIN Initiative 2025
Next steps for integrative efforts in BRAIN 2.0
Because this priority area deals with a
broad approach rather than specific approaches or types of tools, BRAIN 2025 did not list
individual short-term and long-term goals, opting instead to describe examples. Following
that lead, we do not think it is necessary for BRAIN 2.0 to include an exhaustive set of
suggested goals for integrative approaches. However, many opportunities and goals listed
in Priority Areas 1 to 6 hinge upon integration. These include:
• Tools to integrate molecular, connectivity, and physiological properties of cell type
• Connectivity and functional maps at multiple scales that retain cell-type information
• Integration of fMRI with other activity measures and anatomical connections
• Integration of electrophysiological and neurochemical methods
• Integration of perturbational techniques with other technologies
• More interactions between experimentation and theory
• Development of approaches and tools to integrate human data from different
experimental approaches
From https://www.science.org/content/article/nihs-brain-initiative-puts-dollar500-million-creating-detailed-ever-human-brain-atlas
BRAIN Initiative Cell Atlas Network (BICAN)
The BRAIN Initiative, the 9-year-old, multibillion-dollar U.S. neuroscience effort, today announced its most ambitious
challenge yet: compiling the world’s most comprehensive map of cells in the human brain. Scientists say the BRAIN Initiative
Cell Atlas Network (BICAN), funded with $500 million over 5 years, will help them understand how the human brain works
and how diseases affect it. BICAN “will transform the way we do neuroscience research for generations to come,” says
BRAIN Initiative Director John Ngai of the National Institutes of Health (NIH).
BRAIN, or Brain Research Through Advancing Innovative Neurotechnologies, was launched by then-President Barack Obama
in 2013. It began with a focus on tools, then developed a program called the BRAIN Initiative Cell Census Network, resulting
in a raft of papers in 2021. The studies combined data on the genetic features, shapes, locations, and electrical activity of
millions of cells to identify more than 100 cell types across the primary motor cortex—which coordinates movement—in mice,
marmosets, and humans. Hundreds of researchers involved in the network are now completing a cell census for the rest of the
mouse brain. It is expected to become a widely used, free resource for the neuroscience community.
Now, BICAN will characterize and map neural and nonneuronal cells across the entire human brain, which has 200 billion
cells and is 1000 times larger than a mouse brain. “It’s using similar approaches but scaling up,” says Hongkui Zeng, director
of the Allen Institute for Brain Science, which won one-third of the BICAN funding. Zeng says the results of the effort will
serve as a reference—a kind of Human Genome Project for neuroscience.
Other groups will add data from human brains across a range of ancestries and ages, including fetal development. “We will try
to cover the breadth of human development and aging,” says Joseph Ecker of the Salk Institute for Biological Studies, which
leads BICAN studies of epigenetics, the study of heritable changes that are passed on without changes to the DNA. Ngai
expects BICAN to study several hundred human brains overall, although investigators are just starting to work out details.
“The sampling and coverage is going to be a big, big topic of discussion,” Ngai says.
BRAIN Initiative Alliance
From https://www.braininitiative.org/alliance/
From https://www.internationalbraininitiative.org/about-us
International BRAIN Initiative
The International Brain Initiative is represented by some of the world's major brain research
projects:
Our vision is to catalyse and advance neuroscience through international
collaboration and knowledge sharing, uniting diverse ambitions and disseminating
discoveries for the benefit of humanity.
From https://alleninstitute.org/what-we-do/brain-science/
Allen Institute for Brain Science
The Allen Institute is committed to uncovering some of the most pressing questions in
neuroscience, grounded in an understanding of the brain and inspired by our quest to
uncover the essence of what makes us human.
Our focus on neuroscience began with the launch of the Allen Institute for Brain Science in
2003. This division, known worldwide for publicly available resources and tools on brain-
map.org, is beginning a new 16-year phase to understand the cell types in the brain,
bridging cell types and brain function to better understand healthy brains and what goes
wrong in disease. Our MindScope Program focuses on understanding what drives
behaviors in the brain and how to better predict actions. In late 2021 we launched the Allen
Institute for Neural Dynamics, a new research division of the Allen Institute that is dedicated
to understanding how dynamic neuronal signals at the level of the entire brain implement
fundamental computations and drive flexible behaviors.
From https://alleninstitute.org/what-we-do/brain-science/
Allen Institute for Neural Dynamics
The Allen Institute for Neural Dynamics explores the brain’s activity, at the level of individual
neurons and the whole brain, to reveal how we interpret our environments to make
decisions. We aim to discover how neural signaling – and changes in that signaling – allow
the brain to perform complex but fundamental computations and drive flexible behaviors.
Our experiments and openly shared resources will shed light on behavior, memory, how we
handle uncertainty and risk, how humans and other animals chase rewards – and how some
or all of these complicated cognitive functions go awry in neuropsychiatric disorders such
as depression, ADHD or addiction.
From https://portal.brain-map.org/explore/overview
Allen Brain-Map.org
The Allen Institute for Brain Science was established in 2003 with a goal to
accelerate neuroscience research worldwide with the release of large-scale,
publicly available atlases of the brain. Our research teams continue to conduct
investigations into the inner workings of the brain to understand its components
and how they come together to drive behavior and make us who we are.
One of our core principles is Open Science: We publicly share all the data,
products, and findings from our work. Here on brain-map.org, you’ll find our open
data, analysis tools, lab resources, and information about our own research that
also uses these publicly available resources. The potential uses of Allen Institute
for Brain Science resources, on their own or in combination with your own data,
are endless.
The Allen Brain Atlases capture patterns of gene expression across the brain in
various species. Learn more and read publications at the Transcriptional
Landscape of the Brain Explore page. Example use cases across the atlases
include exploration of gene expression and co-expression patterns, expression
across networks, changes across developmental stages, comparisons between
species, and more.
From https://en.wikipedia.org/wiki/Computational_neuroscience
Cognitive Neuroscience
From https://bigthink.com/neuropsych/great-brain-rewiring-after-age-40/
Brain Rewiring after 40
In a systematic review recently published in the journal Psychophysiology, researchers from Monash University in Australia swept through the scientific literature, seeking to
summarize how the connectivity of the human brain changes over our lifetimes. The gathered evidence suggests that in the fifth decade of life (that is, after a person turns 40),
the brain starts to undergo a radical “rewiring” that results in diverse networks becoming more integrated and connected over the ensuing decades, with accompanying effects
on cognition.
Since the turn of the century, neuroscientists have increasingly viewed the brain as a complex network, consisting of units broken down into regions, sub-regions, and
individual neurons. These units are connected structurally, functionally, or both. With increasingly advanced scanning techniques, neuroscientists can observe the parts of
subjects’ brains that “light up” in response to stimuli or when simply at rest, providing a superficial look at how our brains are synced up.
Early on, in our teenage and young adult years, the brain seems to have numerous, partitioned networks with high levels of inner connectivity, reflecting the ability for
specialized processing to occur. That makes sense, as this is the time when we are learning how to play sports, speak languages, and develop talents. Around our mid-40s,
however, that starts to change. Instead, the brain begins becoming less connected within those separate networks and more connected globally across networks. By the time we
reach our 80s, the brain tends to be less regionally specialized and instead broadly connected and integrated.
Prefrontal Cortex as a Meta-Reinforcement Learning System
From https://www.biorxiv.org/content/biorxiv/early/2018/04/06/295964.full.pdf
Over the past twenty years, neuroscience research on reward-based learning has
converged on a canonical model, under which the neurotransmitter dopamine ‘stamps
in’ associations between situations, actions and rewards by modulating the strength of
synaptic connections between neurons. However, a growing number of recent findings
have placed this standard model under strain. In the present work, we draw on recent
advances in artificial intelligence to introduce a new theory of reward-based learning.
Here, the dopamine system trains another part of the brain, the prefrontal cortex, to
operate as its own free-standing learning system. This new perspective
accommodates the findings that motivated the standard model, but also deals
gracefully with a wider range of observations, providing a fresh foundation for future
research.
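The two-timescale idea in this theory can be caricatured in code: a slow outer loop (the dopamine role) tunes a fast inner learner that adapts within each task. This is a loose toy sketch only; the paper's actual model trains a recurrent network with policy gradients, and every name and number below is illustrative.

```python
import random

def run_episode(arm_probs, lr, steps=100):
    """Fast inner loop: within-episode value learning, a stand-in for
    recurrent prefrontal dynamics adapting to the task at hand."""
    values = [0.5, 0.5]
    total = 0
    for _ in range(steps):
        # epsilon-greedy choice between the two arms
        if random.random() < 0.1:
            arm = random.randrange(2)
        else:
            arm = 0 if values[0] >= values[1] else 1
        reward = 1 if random.random() < arm_probs[arm] else 0
        values[arm] += lr * (reward - values[arm])  # prediction-error update
        total += reward
    return total

def meta_train(episodes=500):
    """Slow outer loop: across many bandit tasks, a scalar reward signal
    (playing the 'dopamine' role) tunes the inner learner's parameter."""
    lr = 0.5
    for _ in range(episodes):
        task = random.sample([0.2, 0.8], 2)      # a fresh bandit each episode
        candidate = min(max(lr + random.gauss(0.0, 0.05), 0.01), 1.0)
        if run_episode(task, candidate) >= run_episode(task, lr):
            lr = candidate                       # keep the better setting
    return lr
```

The outer loop never learns any particular bandit; it only shapes how quickly the inner loop learns, which is the sense in which one learning system trains another.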
Cortical Columns
From https://en.wikipedia.org/wiki/Cortical_column
A cortical column, also called hypercolumn, macrocolumn,[1] functional column[2] or sometimes cortical module,[3] is a group of
neurons in the cortex of the brain that can be successively penetrated by a probe inserted perpendicularly to the cortical surface, and which
have nearly identical receptive fields.[citation needed] Neurons within a minicolumn (microcolumn) encode similar features, whereas a
hypercolumn "denotes a unit containing a full set of values for any given set of receptive field parameters".[4] A cortical module is defined as
either synonymous with a hypercolumn (Mountcastle) or as a tissue block of multiple overlapping hypercolumns.[5]
The columnar hypothesis states that the cortex is composed of discrete, modular columns of neurons, characterized by a consistent
connectivity profile.[2] It is still unclear what precisely is meant by the term, and it does not correspond to any single structure within the
cortex. It has been impossible to find a canonical microcircuit that corresponds to the cortical column, and no genetic mechanism has been
deciphered that designates how to construct a column.[4] However, the columnar organization hypothesis is currently the most widely
adopted to explain the cortical processing of information.[6]
Columnar functional organization
The columnar functional organization, as originally framed by Vernon Mountcastle,[9] suggests that neurons that are horizontally more than
0.5 mm (500 µm) from each other do not have overlapping sensory receptive fields, and other experiments give similar results: 200–
800 µm.[1][10][11] Various estimates suggest there are 50 to 100 cortical minicolumns in a hypercolumn, each comprising around 80 neurons.
Their role is best understood as 'functional units of information processing.'
An important distinction is that the columnar organization is functional by definition, and reflects the local connectivity of the cerebral
cortex. Connections "up" and "down" within the thickness of the cortex are much denser than connections that spread from side to side.
Number of cortical columns
There are about 200 million (2×10^8) cortical minicolumns in the human neocortex with up to about 110 neurons each,[13] and with estimates
of 21–26 billion (2.1×10^10–2.6×10^10) neurons in the neocortex. With 50 to 100 cortical minicolumns per cortical column a human would
have 2–4 million (2×10^6–4×10^6) cortical columns. There may be more if the columns can overlap, as suggested by Tsunoda et al.[14]
There are claims that minicolumns may have as many as 400 principal cells,[15] but it is not clear if that includes glia cells.
Some studies contradict the previous estimates,[16] claiming the original research is too arbitrary.[17] The authors propose a uniform neocortex, and
chose a fixed width and length to calculate the cell numbers. Later research pointed out that the neocortex is indeed not uniform for other
species,[18] and studying nine primate species they found that “the number of neurons underneath 1 mm2 of the cerebral cortical surface …
varies by three times across species." The neocortex is not uniform across species.[17][19][20] The actual number of neurons within a single
column is variable, and depends on the cerebral areas and thus the function of the column.
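The counts quoted above are easy to sanity-check; a few lines of arithmetic reproduce the 2–4 million column figure:

```python
# Back-of-the-envelope check of the figures quoted above.
minicolumns = 200e6               # ~2 x 10^8 minicolumns in the neocortex
neurons_per_minicolumn = 110      # up to ~110 neurons each

neocortex_neurons = minicolumns * neurons_per_minicolumn
print(f"{neocortex_neurons:.1e} neurons")  # 2.2e+10, inside the 2.1-2.6e10 estimate

for mc_per_column in (100, 50):
    print(f"{mc_per_column} minicolumns/column -> "
          f"{minicolumns / mc_per_column:.0e} columns")
# 100 per column gives 2e+06 columns, 50 gives 4e+06: the quoted 2-4 million range
```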
Cortical Columns
From https://numenta.com/a-thousand-brains-by-jeff-hawkins
Reference Frames
From https://numenta.com/a-thousand-brains-by-jeff-hawkins
1. Reference Frames Are Present Everywhere in the Neocortex
2. Reference Frames Are Used to Model Everything We Know, Not Just Physical Objects
3. All Knowledge Is Stored at Locations Relative to Reference Frames
4. Thinking Is a Form of Movement
So far, I have described a theory of how cortical columns learn models of physical objects such as coffee
cups, chairs, and smartphones. The theory says that cortical columns create reference frames for each
observed object. Recall that a reference frame is like an invisible, three-dimensional grid surrounding and
attached to something. The reference frame allows a cortical column to learn the locations of features
that define the shape of an object.
Reference Frames for Concepts: Up to now in the book, I have described how the brain learns
models of things that have a physical shape. However, much of what we know about the world can’t
be sensed directly and may not have any physical equivalent. For example, we can’t reach out and
touch concepts such as democracy or prime numbers, yet we know a lot about these things. How
can cortical columns create models of things that we can’t sense? The trick is that reference frames
don’t have to be anchored to something physical. A reference frame for a concept such as
democracy needs to be self-consistent, but it can exist relatively independent of everyday physical
things. It is similar to how we can create maps for fictional lands. A map of a fictional land needs to
be self-consistent, but it doesn’t need to be located anywhere in particular relative to Earth.
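One way to make the "invisible grid" concrete is to treat an object model as a mapping from locations in an object-centred reference frame to the features stored there, with recognition as a vote over stored models. This is a hypothetical illustration of the data structure only, not Numenta's implementation; the objects and coordinates are invented.

```python
# Each model: locations in the object's own reference frame -> features there.
CUP = {(0, 0, 0): "base", (0, 0, 2): "rim", (1, 0, 1): "handle"}
BOWL = {(0, 0, 0): "base", (0, 0, 1): "rim"}
MODELS = {"cup": CUP, "bowl": BOWL}

def recognize(observations):
    """Each observed (location, feature) pair votes for every model
    that stores that feature at that location."""
    votes = {name: 0 for name in MODELS}
    for loc, feat in observations:
        for name, model in MODELS.items():
            if model.get(loc) == feat:
                votes[name] += 1
    return max(votes, key=votes.get)

print(recognize([((0, 0, 0), "base"), ((1, 0, 1), "handle")]))  # cup
```

A concept such as democracy would simply use a grid whose axes are abstract dimensions rather than physical space; the lookup machinery is unchanged.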
Brainscapes
From https://gardenofthemind.com/
How does your brain—an organ smaller than a soccer ball—represent the big, wide world of sensations, events,
and meaning unfolding all around you?
Your experience of the world feels so seamless and boundless that you may never have thought to ask this question. But
once asked, the question demands an answer. Or, in this case, it demands three answers. Because it is only thanks to three
solutions that you can perceive your world at all. And it is these solutions, in turn, that determine exactly how you
experience your world.
1. You Miss More Than You Think
The first and simplest solution is that your brain doesn’t represent everything taking place around you. Not even close. You
perceive only a small fraction of the energy and information buzzing all around you.
2. Your Brain Is Full of Maps
This brings us to the second grand solution that makes perception possible. Creatures on earth, including humans, eke more
abilities out of their brains by organizing neurons into literal maps. These maps allow creatures to pack more neurons into a
brain while keeping the costly connections between them as short as possible.
3. Your Maps and Your Perceptions Are Warped
This brings us to the third grand solution: Your brain maps are warped, preserving some details while sacrificing
others. Your brain maps are distorted to save energy and space. And these distortions, in turn, distort how you perceive your
world.
Brainscape Book
Memory
From https://thebrain.mcgill.ca/flash/d/d_07/d_07_cr/d_07_cr_tra/d_07_cr_tra.html
Memory
This ability to hold on to a piece of information temporarily in order to complete a task is specifically human. It causes certain regions of the brain to become very active, in
particular the pre-frontal lobe. This region, at the very front of the brain, is highly developed in humans. It is the reason that we have such high, upright foreheads, compared with
the receding foreheads of our cousins the apes. Hence it is no surprise that the part of the brain that seems most active during one of the most human of activities is located precisely
in this prefrontal region that is well developed only in humans.
Information is transferred from short-term memory (also known as working memory) to long-term memory through the hippocampus, so named because its shape resembles the
curved tail of a seahorse (hippokampos in Greek). The hippocampus is a very old part of the cortex, evolutionarily, and is located in the inner fold of the temporal lobe.
All of the pieces of information decoded in the various sensory areas of the cortex converge in the hippocampus, which then sends them back where they came from. The
hippocampus is a bit like a sorting centre where these new sensations are compared with previously recorded ones. The hippocampus also creates associations among an object’s
various properties.
From https://neurosciencenews.com/superager-neurons-memory-21561/
Superager Brains Contain ‘Super Neurons’
Summary: Neurons in the memory-associated entorhinal cortex of super-agers are significantly larger than their
cognitively average peers, those with MCI, and even in people up to 30 years younger. Additionally, these neurons
contained no signs of Tau, a hallmark of Alzheimer’s disease.
Neurons in an area of the brain responsible for memory (known as the entorhinal cortex) were significantly larger
in SuperAgers compared to cognitively average peers, individuals with early-stage Alzheimer’s disease and
even individuals 20 to 30 years younger than SuperAgers — who are aged 80 years and older, reports a new
Northwestern Medicine study.
These neurons did not harbor tau tangles, a signature hallmark of Alzheimer’s disease.
“The remarkable observation that SuperAgers showed larger neurons than their younger peers may imply that large cells
were present from birth and are maintained structurally throughout their lives,” said lead author Tamar Gefen, an
assistant professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine.
“We conclude that larger neurons are a biological signature of the SuperAging trajectory.”
The study of SuperAgers with exceptional memory was the first to show that these individuals carry a unique biological
signature that comprises larger and healthier neurons in the entorhinal cortex that are relatively void of tau tangles
(pathology).
Human Instinctive Intelligence
Layers of Intelligence
From Bob Marcus
Expert Knowledge - Rational Knowledge + Individual Learned Knowledge
Rational Knowledge - Rule-based Decision Making
Individual Learned Knowledge - Object and Situation Recognition
General Learned Knowledge - Common Sense
Preconfigured Capabilities - Instinctual Knowledge and Behavior
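As a hypothetical sketch, the layering above behaves like a fallback chain: a query is answered by the most specialized layer that recognizes it, dropping down toward instinct otherwise. All entries below are invented placeholders.

```python
# Layers ordered from most specialized (expert) to most general (instinct).
LAYERS = [
    ("expert",       {"diagnose_rash": "eczema"}),
    ("rational",     {"if_fever": "rest"}),
    ("learned",      {"red_light": "stop"}),
    ("common_sense", {"dropped_object": "falls"}),
    ("instinct",     {"loud_noise": "startle"}),
]

def respond(query):
    """Return (layer, answer) from the first layer that can handle the query."""
    for name, knowledge in LAYERS:
        if query in knowledge:
            return name, knowledge[query]
    return "none", None

print(respond("red_light"))  # ('learned', 'stop')
```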
Innateness, AlphaZero, and Artificial Intelligence
From https://arxiv.org/ftp/arxiv/papers/1801/1801.05667.pdf
The concept of innateness is rarely discussed in the context of artificial intelligence. When it is
discussed, or hinted at, it is often the context of trying to reduce the amount of innate machinery
in a given system. In this paper, I consider as a test case a recent series of papers by Silver et al
(Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that
“even in the most challenging of domains: it is possible to train to superhuman level, without
human examples or guidance”, “starting tabula rasa.”
I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial
intelligence needs greater attention to innateness, and I point to some proposals about what that
innateness might look like.
Virtually all modern observers would concede that genes and experience work together; it is
“nature and nurture”, not “nature versus nurture”. No nativist, for instance, would doubt that we
are also born with specific biological machinery that allows us to learn. Chomsky’s Language
Acquisition Device should be viewed precisely as an innate learning mechanism, and nativists
such as Pinker, Peter Marler (Marler, 2004) and myself (Marcus, 2004) have frequently argued
for a view in which a significant part of a creature’s innate armamentarium consists not of
specific knowledge but of learning mechanisms, a form of innateness that enables learning.
As discussed below, there is ample reason to believe that humans and many other creatures are
born with significant amounts of innate machinery. The guiding question for the current paper is
whether artificially intelligent systems ought similarly to be endowed with significant amounts
of innate machinery, or whether, in virtue of the powerful learning systems that have recently
been developed, it might suffice for such systems to work in a more bottom up, tabula rasa
fashion
Human Instinctive Behavior
From https://www.sainsburywellcome.org/web/qa/understanding-control-instinctive-behaviour
How do you define instinctive behaviour?
People often use the terms “instinctive” or “innate” to describe behaviours that are not learned, i.e. behaviours you already know how to do for the first time. Instinctive
behaviours are important for promoting the survival of your genes and thereby your species.
What role is the hypothalamus thought to play in the expression of instinctive behaviours?
The hypothalamus is an ancient part of the brain whereas other areas, such as the cortex and forebrain, are very recent evolutionary additions. As such, the hypothalamus
is able to respond to sensory inputs, form internal states and induce motor outputs.
According to the evolutionary neurobiologist Detlev Arendt, the hypothalamus was formed by the fusion of two ancient neural nets:
• a neuroendocrine system that responded to light and secreted factors into the main body cavity – ancestor of the modern midline neuroendocrine nuclei
• a motor system that controlled contractile tissue to produce basic behavioral patterns – ancestor of the medial and lateral hypothalamus that control instinctive
behaviors
The hypothalamus is sometimes mistakenly called the “reptilian” brain; in reality it dates back to before the appearance of the first bilaterian organisms and is perhaps
better termed the Ur-brain. A lot of current work focuses on trying to understand how the hypothalamus encodes internal motivational states that drive instinctive
behaviour, and although its basic architecture was clarified already 30 years ago, how it controls behaviour is still pretty much a mystery.
Human Cognitive Intelligence
40 Years of Cognitive Architecture Research
From https://arxiv.org/abs/1610.08602
In this paper we present a broad overview of the last 40 years of research on cognitive architectures.
Although the number of existing architectures is nearing several hundred, most of the existing surveys do
not reflect this growth and focus on a handful of well-established architectures. Thus, in this survey we
wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive
architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from
a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this
paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention
mechanisms, action selection, memory, learning and reasoning. In order to assess the breadth of practical
applications of cognitive architectures we gathered information on over 900 practical projects implemented
using the cognitive architectures in our list. We use various visualization techniques to highlight overall
trends in the development of the field. In addition to summarizing the current state-of-the-art in the
cognitive architecture research, this survey describes a variety of methods and ideas that have been tried
and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive
behavior need more research with respect to their mechanistic counterparts and thus can further inform
how cognitive science might progress.
Vygotsky's Sociocultural Theory of Cognitive Development
From https://www.simplypsychology.org/vygotsky.html
The work of Lev Vygotsky (1934) has become the foundation of much research and theory in cognitive development
over the past several decades, particularly of what has become known as sociocultural theory.
Vygotsky's sociocultural theory views human development as a socially mediated process in which children acquire
their cultural values, beliefs, and problem-solving strategies through collaborative dialogues with more
knowledgeable members of society. Vygotsky's theory is comprised of concepts such as culture-specific tools,
private speech, and the Zone of Proximal Development.
Vygotsky's theories stress the fundamental role of social interaction in the development of cognition (Vygotsky,
1978), as he believed strongly that community plays a central role in the process of "making meaning."
Unlike Piaget's notion that children's development must necessarily precede their learning, Vygotsky argued,
"learning is a necessary and universal aspect of the process of developing culturally organized, specifically human
psychological function" (1978, p. 90). In other words, social learning tends to precede (i.e., come before)
development.
Vygotsky has developed a sociocultural approach to cognitive development. He developed his theories at around the
same time as Jean Piaget was starting to develop his ideas (1920's and 30's), but he died at the age of 38, and so his
theories are incomplete - although some of his writings are still being translated from Russian.
Like Piaget, Vygotsky could be described as a constructivist, in that he was interested in knowledge acquisition as a
cumulative event - with new experiences and understandings incorporated into existing cognitive frameworks.
However, whilst Piaget’s theory is structural (arguing that development is governed by physiological stages),
Vygotsky denies the existence of any guiding framework independent of culture and context.
40 Years of Cognitive Architecture Research (cont)
From https://arxiv.org/abs/1610.08602
Human Experiential Learning
Psychology of Learning
From: https://www.verywellmind.com/learning-study-guide-2795698
Psychologists often define learning as a relatively permanent change in behavior as a result of experience. The
psychology of learning focuses on a range of topics related to how people learn and interact with their environments.
One of the first thinkers to study how learning influences behavior was psychologist John B. Watson, who suggested that
all behaviors are a result of the learning process. The school of thought that emerged from Watson's work was known as
behaviorism. The behavioral school of thought proposed studying only observable behaviors, since internal thoughts,
memories, and other mental processes were considered too subjective.
Domain Knowledge
Memory Development in Children
From: https://www.sciencedirect.com/topics/computer-science/domain-knowledge
The Impact of Domain Knowledge
Striking effects of domain knowledge on performance in memory tasks has been provided in numerous developmental studies. In
most domains, older children know more than younger ones, and differences in knowledge are linked closely to performance
differences. How can we explain this phenomenon? First, one effect that rich domain knowledge has on memory is to increase the
speed of processing for domain-specific information. Second, rich domain knowledge enables more competent strategy use.
Finally, rich domain knowledge can have nonstrategic effects, that is, diminish the need for strategy activation.
Evidence for the latter phenomenon comes from studies using the expert-novice paradigm. These studies compared experts and
novices in a given domain (e.g., baseball, chess, or soccer) on a memory task related to that domain. It could be demonstrated that
rich domain knowledge enabled a child expert to perform much like an adult expert and better than an adult novice—thus showing
a disappearance and sometimes reversal of usual developmental trends. Experts and novices not only differed with regard to
quantity of knowledge but also regarding the quality of knowledge, that is, in the way their knowledge is represented in the mind.
Moreover, several studies also confirmed the assumption that rich domain knowledge can compensate for low overall aptitude on
domain-related memory tasks, as no differences were found between high- and low-aptitude experts on various recall and
comprehension measures (Bjorklund and Schneider 1996).
Taken together, these findings indicate that domain knowledge increases greatly with age, and is clearly related to how much and
what children remember. Domain knowledge also contributes to the development of other competencies that have been proposed
as sources of memory development, namely basic capacities, memory strategies, and metacognitive knowledge. Undoubtedly,
changes in domain knowledge play a large role in memory development, probably larger than that of the other sources of memory
improvement described above. However, although the various components of memory development have been described
separately so far, it seems important to note that all of these components interact in producing memory changes, and that it is
difficult at times to disentangle the effects of specific sources from that of other influences.
Common Sense
Common Sense Research Questions
From: https://arxiv.org/pdf/2112.12754.pdf
• What exactly is common sense? What technical definition best suits the needs of AI?
• What are appropriate tests for the presence of common sense? How can we tell if we
are getting closer to building it into our AI systems?
• How is experiential knowledge represented, accessed, and brought to bear on current
situations? What is the role of analogy? How does the ability to recognize something or
see something as another thing (or even as an instance of an abstract concept) develop
and get used?
• How is commonsense knowledge learned as new experiences happen? How is the
update different when knowledge is acquired through language?
• What ontological frameworks are critical to build into an AI system? Are there special
properties of the knowledge of the physical world that need to be handled in a way that is
different from its non-physical counterparts?
• What is the relationship between common sense and the broader notion of rationality
(including bounded rationality, minimal rationality, etc.)?
• What overall architecture is best suited for the multiple roles of common sense? What
mechanism(s) should be used to invoke common sense out of routine, rote processing,
and then to sometimes go beyond it to more specialized forms of expertise?
• What role, if any, does metareasoning play?
Intuition
Intuition in the Brain
From https://www.scientificamerican.com/article/intuition-may-reveal-where-expertise-resides-in-the-brain/
Understanding computer code, deciphering a differential equation, diagnosing a tumor from the shadowy patterns on an x-ray
image, telling a fake from an authentic painting, knowing when to hold and when to fold in poker. Experts decide in a flash,
without thought. Intuition is the name we give to the uncanny ability to quickly and effortlessly know the answer, unconsciously,
either without or well before knowing why. The conscious explanation comes later, if at all, and involves a much more deliberate
process.
Intuition arises within a circumscribed cognitive domain. It may take years of training to develop, and it does not easily transfer
from one domain of expertise to another. Chess mastery is useless when playing bridge. Professionals, who may spend a lifetime
honing their skills, are much in demand for their proficiency.
One elegant finding, from brain-imaging studies of board-game experts, links intuition with the caudate nucleus, which is part of the basal ganglia—a set of interlinked brain areas
responsible for learning, executing habits and automatic behaviors. The basal ganglia receive massive input from the cortex, the
outer, rindlike surface of the brain. Ultimately these structures project back to the cortex, creating a series of cortical–basal
ganglia loops. In one interpretation, the cortex is associated with conscious perception and the deliberate and conscious analysis
of any given situation, novel or familiar, whereas the caudate nucleus is the site where highly specialized expertise resides that
allows you to come up with an appropriate answer without conscious thought. In computer engineering parlance, a constantly
used class of computations (namely those associated with playing a strategy game) is downloaded into special-purpose hardware,
the caudate, to lighten the burden of the main processor, the cortex.
It appears that the site of fast, automatic, unconscious cognitive operations—from where a solution materializes all of a sudden
—lies in the basal ganglia, linked to but apart from the cortex. These studies provide a telling hint of what happens when the
brain brings the output of unconscious processing into awareness. What remains unclear is why furious activity in the caudate
should remain unconscious while exertions in some part of the cortex give rise to conscious sensation. Finding an answer may
illuminate the central challenge—why excitable matter produces feelings at all.
Human Behavior
Human Behavior
From https://en.wikipedia.org/wiki/Human_behavior
Human behavior is the potential and expressed capacity (mentally, physically, and socially) of human individuals or groups to
respond to internal and external stimuli throughout their life.[1][2] Behavior is driven by genetic and environmental factors that
affect an individual. Behavior is also driven, in part, by thoughts and feelings, which provide insight into individual psyche,
revealing such things as attitudes and values. Human behavior is shaped by psychological traits, as personality types vary from
person to person, producing different actions and behavior.
Social behavior accounts for actions directed at others. It is concerned with the considerable influence of social interaction and
culture, as well as ethics, interpersonal relationships, politics, and conflict. Some behaviors are common while others are
unusual. The acceptability of behavior depends upon social norms and is regulated by various means of social control. Social
norms also condition behavior, whereby humans are pressured into following certain rules and displaying certain behaviors that
are deemed acceptable or unacceptable depending on the given society or culture.
Cognitive behavior accounts for actions of obtaining and using knowledge. It is concerned with how information is learned and
passed on, as well as creative application of knowledge and personal beliefs such as religion. Physiological behavior accounts for
actions to maintain the body. It is concerned with basic bodily functions as well as measures taken to maintain health. Economic
behavior accounts for actions regarding the development, organization, and use of materials as well as other forms of work.
Ecological behavior accounts for actions involving the ecosystem. It is concerned with how humans interact with other
organisms and how the environment shapes human behavior.
Free Will
From https://en.wikipedia.org/wiki/Free_will
Free will is the capacity of agents to choose between different possible courses of action unimpeded.[1][2]
Free will is closely linked to the concepts of moral responsibility, praise, culpability, sin, and other judgements which apply only
to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition.
Traditionally, only actions that are freely willed are seen as deserving credit or blame. Whether free will exists, what it is and the
implications of whether it exists or not are some of the longest running debates of philosophy and religion. Some conceive of
free will as the right to act outside of external influences or wishes.
Some conceive free will to be the capacity to make choices undetermined by past events. Determinism suggests that only one
course of events is possible, which is inconsistent with a libertarian model of free will.[3] Ancient Greek philosophy identified this
issue,[4] which remains a major focus of philosophical debate. The view that conceives free will as incompatible with
determinism is called incompatibilism and encompasses both metaphysical libertarianism (the claim that determinism is false
and thus free will is at least possible) and hard determinism (the claim that determinism is true and thus free will is not possible).
Incompatibilism also encompasses hard incompatibilism, which holds not only determinism but also indeterminism to be
incompatible with free will and thus free will to be impossible whatever the case may be regarding determinism.
In contrast, compatibilists hold that free will is compatible with determinism. Some compatibilists even hold that determinism is
necessary for free will, arguing that choice involves preference for one course of action over another, requiring a sense of how
choices will turn out.[5][6] Compatibilists thus consider the debate between libertarians and hard determinists over free will vs.
determinism a false dilemma.[7] Different compatibilists offer very different definitions of what "free will" means and
consequently find different types of constraints to be relevant to the issue. Classical compatibilists considered free will nothing
more than freedom of action, considering one free of will simply if, had one counterfactually wanted to do otherwise, one could
have done otherwise without physical impediment. Contemporary compatibilists instead identify free will as a psychological
capacity, such as to direct one's behavior in a way responsive to reason, and there are still further different conceptions of free
will, each with their own concerns, sharing only the common feature of not finding the possibility of determinism a threat to the
possibility of free will.[8]
Mind
Mind != Consciousness
Mind
From https://en.wikipedia.org/wiki/Mind
The mind is the set of faculties responsible for all mental phenomena. Often the term is also identified with the phenomena themselves.[2][3][4]
These faculties include thought, imagination, memory, will, and sensation. They are responsible for various mental phenomena, like perception,
pain experience, belief, desire, intention, and emotion. Various overlapping classifications of mental phenomena have been proposed. Important
distinctions group them together according to whether they are sensory, propositional, intentional, conscious, or occurrent. Minds were
traditionally understood as substances but it is more common in the contemporary perspective to conceive them as properties or capacities
possessed by humans and higher animals. Various competing definitions of the exact nature of the mind or mentality have been proposed.
Epistemic definitions focus on the privileged epistemic access the subject has to these states. Consciousness-based approaches give primacy to
the conscious mind and allow unconscious mental phenomena as part of the mind only to the extent that they stand in the right relation to the
conscious mind. According to intentionality-based approaches, the power to refer to objects and to represent the world is the mark of the mental.
For behaviorism, whether an entity has a mind only depends on how it behaves in response to external stimuli while functionalism defines mental
states in terms of the causal roles they play. Central questions for the study of mind, like whether other entities besides humans have minds or
how the relation between body and mind is to be conceived, are strongly influenced by the choice of one's definition.
Mind or mentality is usually contrasted with body, matter or physicality. The issue of the nature of this contrast and specifically the relation
between mind and brain is called the mind-body problem.[5] Traditional viewpoints included dualism and idealism, which consider the mind to be
non-physical.[5] Modern views often center around physicalism and functionalism, which hold that the mind is roughly identical with the brain or
reducible to physical phenomena such as neuronal activity,[6] though dualism and idealism continue to have many supporters.
Another question concerns which types of beings are capable of having minds.[7] For example, whether mind is exclusive to humans,
possessed also by some or all animals, by all living things, whether it is a strictly definable characteristic at all, or whether mind can also be a
property of some types of human-made machines. Different cultural and religious traditions often use different concepts of mind,
resulting in different answers to these questions. Some see mind as a property exclusive to humans whereas others ascribe properties of mind to
non-living entities (e.g. panpsychism and animism), to animals and to deities. Some of the earliest recorded speculations linked mind (sometimes
described as identical with soul or spirit) to theories concerning both life after death, and cosmological and natural order, for example in the
doctrines of Zoroaster, the Buddha, Plato, Aristotle, and other ancient Greek, Indian and, later, Islamic and medieval European philosophers.
Psychologists such as Freud and James, and computer scientists such as Turing developed influential theories about the nature of the mind. The
possibility of nonbiological minds is explored in the field of artificial intelligence, which works closely in relation with cybernetics and
information theory to understand the ways in which information processing by nonbiological machines is comparable or different to mental
phenomena in the human mind.[8] The mind is also sometimes portrayed as the stream of consciousness where sense impressions and mental
phenomena are constantly changing.[9][10]
Freud’s Unconscious Pre-Conscious, and Conscious Mind
From https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946
The famed psychoanalyst Sigmund Freud believed that behavior and personality were derived from the constant and unique interaction of conflicting psychological forces that operate
at three different levels of awareness: the preconscious, conscious, and unconscious.1 He believed that each of these parts of the mind plays an important role in influencing behavior.
In order to understand Freud's theory, it is essential to first understand what he believed each part of personality did, how it operated, and how these three elements interact to contribute
to the human experience. Each level of awareness has a role to play in shaping human behavior and thought.
Freud's Three Levels of Mind
Freud delineated the mind into three distinct levels, each with its own roles and functions.1
• The preconscious consists of anything that could potentially be brought into the conscious mind.
• The conscious mind contains all of the thoughts, memories, feelings, and wishes of which we are aware at any given moment. This is the aspect of our mental processing that
we can think and talk about rationally. This also includes our memory, which is not always part of consciousness but can be retrieved easily and brought into awareness.
• The unconscious mind is a reservoir of feelings, thoughts, urges, and memories that are outside of our conscious awareness. The unconscious contains contents that are
unacceptable or unpleasant, such as feelings of pain, anxiety, or conflict.
Freud likened the three levels of mind to an iceberg. The top of the iceberg that you can see above the water represents the conscious mind. The part of the iceberg that is submerged
below the water, but is still visible, is the preconscious. The bulk of the iceberg that lies unseen beneath the waterline represents the unconscious.
Consciousness
Mind != Consciousness
Consciousness
From https://en.wikipedia.org/wiki/Consciousness
Consciousness, at its simplest, is sentience or awareness of internal and external existence.[1] Despite millennia of analyses,
definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial,[2] being
"at once the most familiar and [also the] most mysterious aspect of our lives".[3] Perhaps the only widely agreed notion about the
topic is the intuition that consciousness exists.[4] Opinions differ about what exactly needs to be studied and explained as
consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner
life", the world of introspection, of private thought, imagination and volition.[5] Today, it often includes any kind of cognition,
experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing
or not.[6][7] There might be different levels or orders of consciousness,[8] or different kinds of consciousness, or just one kind with
different features.[9] Other questions include whether only humans are conscious, all animals, or even the whole universe. The
disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.[10]
Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul
explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental
process of the brain; having phanera or qualia and subjectivity; being the 'something that it is like' to 'have' or 'be' it; being the
"inner theatre" or the executive control system of the mind.[11]
What is Consciousness
From https://www.nature.com/articles/d41586-018-05097-x
The majority of scholars accept consciousness as a given and seek to understand its relationship to the objective world described by science. More than a
quarter of a century ago Francis Crick and I decided to set aside philosophical discussions on consciousness (which have engaged scholars since at least
the time of Aristotle) and instead search for its physical footprints. What is it about a highly excitable piece of brain matter that gives rise to
consciousness? Once we can understand that, we hope to get closer to solving the more fundamental problem.
We seek, in particular, the neuronal correlates of consciousness (NCC), defined as the minimal neuronal mechanisms jointly sufficient for any specific
conscious experience. What must happen in your brain for you to experience a toothache, for example? Must some nerve cells vibrate at some magical
frequency? Do some special “consciousness neurons” have to be activated? In which brain regions would these cells be located?
One important lesson from the spinal cord and the cerebellum is that the genie of consciousness does not just appear when any neural tissue is excited. More
is needed. This additional factor is found in the gray matter making up the celebrated cerebral cortex, the outer surface of the brain. It is a laminated sheet
of intricately interconnected nervous tissue, the size and width of a 14-inch pizza. Two of these sheets, highly folded, along with their hundreds of
millions of wires—the white matter—are crammed into the skull. All available evidence implicates neocortical tissue in generating feelings.
We can narrow down the seat of consciousness even further. So it appears that the sights, sounds and other sensations of life as we experience it are
generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. What is the crucial
difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content? The truth is that we
do not know. Even so—and excitingly—a recent finding indicates that neuroscientists may be getting closer.
Neural Correlates of Consciousness
From https://www.nature.com/articles/nrn.2016.22
• The neuronal correlates of consciousness (NCC) are the minimum neuronal mechanisms jointly sufficient for any one specific conscious experience. It
is important to distinguish full NCC (the neural substrate supporting experience in general, irrespective of its specific content), content-specific NCC
(the neural substrate supporting a particular content of experience — for example, faces, whether seen, dreamt or imagined) and background conditions
(factors that enable consciousness, but do not contribute directly to the content of experience — for example, arousal systems that ensure adequate
excitability of the NCC).
• The no-report paradigm allows the NCC to be distinguished from events or processes — such as selective attention, memory and response preparation
— that are associated with, precede or follow conscious experience. In such paradigms, trials with explicit reports are included along with trials
without explicit reports, during which indirect physiological measures are used to infer what the participant is perceiving.
• The best candidates for full and content-specific NCC are located in the posterior cerebral cortex, in a temporo-parietal-occipital hot zone. The content-
specific NCC may be any particular subset of neurons within this hot zone that supports specific phenomenological distinctions, such as faces.
• The two most widely used electrophysiological signatures of consciousness — gamma range oscillations and the P3b event-related potential — can be
dissociated from conscious experiences and are more closely correlated with selective attention and novelty, respectively.
• New electroencephalography- or functional MRI-based variables that measure the extent to which neuronal activity is both differentiated and integrated
across the cortical sheet allow the NCC to be identified more precisely. Moreover, a combined transcranial magnetic stimulation–
electroencephalography procedure can predict the presence or absence of consciousness in healthy people who are awake, deeply sleeping or under
different types of anaesthesia, and in patients with disorders of consciousness, at the single-person level.
• Extending the NCC derived from studies in people who can speak about the presence and quality of consciousness to patients with severe brain injuries,
fetuses and newborn infants, non-mammalian species and intelligent machines is more challenging. For these purposes, it is essential to combine
experimental studies to identify the NCC with a theoretical approach that characterizes in a principled manner what consciousness is and what is
required of its physical substrate.
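The differentiation side of the EEG- and fMRI-based measures mentioned above (for example, the perturbational complexity index) is commonly estimated by Lempel-Ziv compression of a binarized activity trace. The following simplified, illustrative sketch uses an LZ78-style phrase count rather than the published LZ76 procedure, so the numbers are not comparable to the literature; it only shows why regular activity compresses well while differentiated activity does not:

```python
import random

def lz_phrase_count(bits):
    """LZ78-style parse: number of distinct phrases in a binary string.
    Regular (compressible) activity yields few phrases; differentiated
    activity yields many."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""  # start a new phrase
    return len(phrases)

def binarize(signal):
    """Threshold a numeric signal at its mean, as complexity measures do."""
    mean = sum(signal) / len(signal)
    return "".join("1" if x >= mean else "0" for x in signal)

random.seed(0)
regular = [0.0, 1.0] * 512                       # stereotyped oscillation
noisy = [random.random() for _ in range(1024)]   # differentiated activity
# The noisy trace parses into many more phrases than the regular one.
```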
Human Consciousness
From https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00567/full
Consciousness is not a process in the brain but a kind of behavior that, of course, is
controlled by the brain like any other behavior. Human consciousness emerges on the
interface between three components of animal behavior: communication, play, and the
use of tools. These three components interact on the basis of anticipatory behavioral
control, which is common for all complex forms of animal life. All three do not
exclusively distinguish our close relatives, i.e., primates, but are broadly presented
among various species of mammals, birds, and even cephalopods; however, their
particular combination in humans is unique. The interaction between communication
and play yields symbolic games, most importantly language; the interaction between
symbols and tools results in human praxis. Taken together, this gives rise to a
mechanism that allows a creature, instead of performing controlling actions overtly, to
play forward the corresponding behavioral options in a “second reality” of objectively
(by means of tools) grounded symbolic systems. The theory possesses the following
properties: (1) It is anti-reductionist and anti-eliminativist, and yet, human
consciousness is considered as a purely natural (biological) phenomenon. (2) It avoids
epiphenomenalism and indicates in which conditions human consciousness has
evolutionary advantages, and in which it may even be disadvantageous. (3) It allows one
to easily explain the most typical features of consciousness, such as objectivity, seriality
and limited resources, the relationship between consciousness and explicit memory,
the feeling of conscious agency, etc.
Criteria for Consciousness
From https://numenta.com/a-thousand-brains-by-jeff-hawkins
1. Learn a model of the world
2. Continuously remember the states of that model
3. Recall the remembered states
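The three criteria above can be caricatured in a few lines of code. This is a toy sketch under loose assumptions, not Numenta's actual HTM algorithms; the class name and the dictionary model are invented for illustration:

```python
class WorldModelAgent:
    """Toy illustration of Hawkins's three criteria (not Numenta's HTM code)."""

    def __init__(self):
        self.model = {}    # 1. learned model of the world
        self.history = []  # 2. continuously remembered states of that model

    def observe(self, stimulus, outcome):
        # 1. Learn: associate stimulus with outcome in the model.
        self.model[stimulus] = outcome
        # 2. Remember: snapshot the model's state at this moment.
        self.history.append(dict(self.model))

    def recall(self, step):
        # 3. Recall a remembered state - the capacity Hawkins ties to
        # being able to report what one was just thinking.
        return self.history[step]

agent = WorldModelAgent()
agent.observe("hot stove", "pain")
agent.observe("bell", "food")
agent.recall(0)  # → {"hot stove": "pain"}
```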
New Explanation for Consciousness
From https://neurosciencenews.com/consciousness-theory-21571/
Consciousness is your awareness of yourself and the world around you. This awareness is subjective and unique to you.
A Boston University Chobanian & Avedisian School of Medicine researcher has developed a new theory of consciousness,
explaining why it developed, what it is good for, which disorders affect it, and why dieting (and resisting other urges) is so
difficult.
“In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us
flexibly and creatively imagine the future and plan accordingly,” explained corresponding author Andrew Budson, MD,
professor of neurology.
“What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions
directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them.”
“We knew that conscious processes were simply too slow to be actively involved in music, sports, and other activities where
split-second reflexes are required. But if consciousness is not involved in such processes, then a better explanation of what
consciousness does was needed,” said Budson, who also is Chief of Cognitive & Behavioral Neurology, Associate Chief of
Staff for Education, and Director of the Center for Translational Cognitive Neuroscience at the Veterans Affairs (VA) Boston
Healthcare System.
According to the researchers, this theory is important because it explains that all our decisions and actions are actually made
unconsciously, although we fool ourselves into believing that we consciously made them.
“Even our thoughts are not generally under our conscious control. This lack of control is why we may have difficulty stopping a stream of
thoughts running through our head as we’re trying to go to sleep, and also why mindfulness is hard,” adds Budson.
Brain Interfaces
Brain-Computer Interfaces
From https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface
A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication pathway
between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed
at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[1] Implementations of
BCIs range from non-invasive (EEG, MEG, EOG, MRI) and partially invasive (ECoG and endovascular) to invasive
(microelectrode array), based on how close electrodes get to brain tissue.[2] Research on BCIs began in the 1970s by Jacques Vidal
at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract
from DARPA.[3][4] Vidal's 1973 paper marks the first appearance of the expression brain–computer interface in scientific
literature. Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the
brain like natural sensor or effector channels.[5] Following years of animal experimentation, the first neuroprosthetic devices
implanted in humans appeared in the mid-1990s. Recently, studies in human-computer interaction via the application of machine
learning to statistical temporal features extracted from the frontal lobe (EEG brainwave) data have had high levels of success in
autonomous recognition of fall detection as a medical alarm,[6] mental state (Relaxed, Neutral, Concentrating),[7] mental emotional
state (Negative, Neutral, Positive),[8] and thalamocortical dysrhythmia.[9]
From https://www.nicolelislab.net/?p=683
Networked Brains (Brainet) from Duke University
From https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3497935/
Brain-Computer Interface
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to
output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is
to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis,
cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-
neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and
other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-
computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might
augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a
rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in
general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition
hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to
be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their
widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance
must be improved so that it approaches the reliability of natural muscle-based function.
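The acquire → analyze → translate → act pipeline described in the paragraph above can be sketched as a skeleton. Every function name and threshold here is a hypothetical placeholder, not the API of any real BCI system:

```python
def acquire_signal(samples):
    """Stage 1: signal acquisition (here, a prerecorded buffer stands in
    for electrode hardware)."""
    return list(samples)

def extract_features(signal):
    """Stage 2: analysis - reduce the raw signal to simple features."""
    mean = sum(signal) / len(signal)
    power = sum((x - mean) ** 2 for x in signal) / len(signal)
    return {"mean": mean, "power": power}

def translate(features, threshold=0.5):
    """Stage 3: map features to a device command, bypassing the normal
    neuromuscular output pathway."""
    return "move_cursor" if features["power"] > threshold else "idle"

signal = acquire_signal([0.1, 0.9, 0.2, 1.4, 0.3])
command = translate(extract_features(signal))  # relayed to the output device
```

Real systems replace each stage with far richer machinery (spatial filtering, band-power or P300 features, trained classifiers), but the staged structure is the same.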
From https://www.frontiersin.org/articles/10.3389/fnsys.2021.578875/full
Progress in Brain Computer Interface: Challenges and Opportunities
Brain computer interfaces (BCI) provide a direct communication link between the brain and a computer or other external
devices. They offer an extended degree of freedom either by strengthening or by substituting human peripheral working
capacity and have potential applications in various fields such as rehabilitation, affective computing, robotics, gaming, and
neuroscience. Significant research efforts on a global scale have delivered common platforms for technology standardization
and help tackle highly complex and non-linear brain dynamics and related feature extraction and classification challenges.
Time-variant psycho-neurophysiological fluctuations and their impact on brain signals impose another challenge for BCI
researchers to transform the technology from laboratory experiments to plug-and-play daily life. This review summarizes state-
of-the-art progress in the BCI field over the last decades and highlights critical challenges.
The brain computer interface (BCI) is a direct and sometimes bidirectional communication tie-up between the brain and a computer or an external
device, which involves no muscular stimulation. It has shown promise for rehabilitating subjects with motor impairments as well as for
augmenting human working capacity either physically or cognitively (Lebedev and Nicolelis, 2017; Saha and Baumert, 2020). BCI was
historically envisioned as a potential technology for augmenting/replacing existing neural rehabilitations or serving assistive devices controlled
directly by the brain (Vidal, 1973; Birbaumer et al., 1999; Alcaide-Aguirre et al., 2017; Shahriari et al., 2019). The first systematic attempt to
implement an electroencephalogram (EEG)-based BCI was made by J. J. Vidal in 1973, who recorded the evoked electrical activity of the cerebral
cortex from the intact skull using EEG (Vidal, 1973), a non-invasive technique first studied in humans by its inventor, Berger (1929). Another early
endeavor to establish direct communication between a computer and the brain of people with severe motor impairments had utilized P300, an
event related brain potential (Farwell and Donchin, 1988). As an alternative to conventional therapeutic rehabilitation for motor impairments, BCI
technology helps to artificially augment or re-excite synaptic plasticity in affected neural circuits. By exploiting undamaged cognitive and
emotional functions, BCI aims at re-establishing the link between the brain and an impaired peripheral site (Vansteensel et al., 2016). However,
the research applications of BCI technology evolved significantly over the years, including brain fingerprinting for lie detection (Farwell et al.,
2014), detecting drowsiness for improving human working performances (Aricò et al., 2016; Wei et al., 2018), estimating reaction time (Wu et al.,
2017b), controlling virtual reality (Vourvopoulos et al., 2019), quadcopters (LaFleur et al., 2013) and video games (Singh et al., 2020), and
driving humanoid robots (Choi and Jo, 2013; Spataro et al., 2017). Figure 1 demonstrates the progression of BCI in various application fields
since its conception.
From http://people.uncw.edu/tothj/PSY595/Lebedev-Brain-Machine%20Interfaces-TiN-2006.pdf
Brain–machine interfaces: past, present and future
Since the original demonstration that electrical activity generated by ensembles of
cortical neurons can be employed directly to control a robotic manipulator,
research on brain–machine interfaces (BMIs) has experienced an impressive
growth. Today BMIs designed for both experimental and clinical studies can translate
raw neuronal signals into motor commands that reproduce arm reaching and hand
grasping movements in artificial actuators. Clearly, these developments hold promise
for the restoration of limb mobility in paralyzed subjects. However, as we review here,
before this goal can be reached several bottlenecks have to be passed. These
include designing a fully implantable biocompatible recording device, further
developing real-time computational algorithms, introducing a method for providing
the brain with sensory feedback from the actuators, and designing and building
artificial prostheses that can be controlled directly by brain-derived signals. By
reaching these milestones, future BMIs will be able to drive and control revolutionary
prostheses that feel and act like the human arm.
From https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface
Brain–Computer Interface
A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct
communication pathway between the brain's electrical activity and an external device, most commonly a
computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or
repairing human cognitive or sensory-motor functions.[1] Implementations of BCIs range from non-
invasive (EEG, MEG, EOG, MRI) and partially invasive (ECoG and endovascular) to invasive
(microelectrode array), based on how close electrodes get to brain tissue.[2]
Research on BCIs began in the 1970s by Jacques Vidal at the University of California, Los Angeles
(UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[3][4]
Vidal's 1973 paper marks the first appearance of the expression brain–computer interface in scientific
literature.
Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be
handled by the brain like natural sensor or effector channels.[5] Following years of animal experimentation,
the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
Recently, studies in human-computer interaction applying machine learning to statistical temporal
features extracted from frontal lobe EEG data have had high levels of success in
autonomous recognition of fall detection as a medical alarm,[6] mental state (Relaxed, Neutral,
Concentrating),[7] mental emotional state (Negative, Neutral, Positive),[8] and thalamocortical dysrhythmia.
[9]
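A minimal sketch of that kind of pipeline: statistical temporal features (mean, standard deviation, range) are computed per EEG window and fed to a simple classifier. The "relaxed" vs. "concentrating" signals, the feature set, and the nearest-centroid classifier below are illustrative stand-ins under stated assumptions, not the exact methods of the cited papers.

```python
# Sketch: statistical temporal features from EEG-like windows, plus a
# nearest-centroid mental-state classifier. All data is synthetic; the
# assumption that "concentrating" means higher-amplitude activity is a
# toy simplification purely for illustration.
import math
import random

random.seed(1)

def features(window):
    """Mean, standard deviation, and peak-to-peak range of one window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return [mean, math.sqrt(var), max(window) - min(window)]

def synth_window(state):
    """Synthetic 128-sample window; amplitude differs by mental state."""
    amp = {"relaxed": 1.0, "concentrating": 3.0}[state]
    return [amp * math.sin(0.8 * i) + random.gauss(0, 0.3) for i in range(128)]

def centroid(rows):
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

# "Training": average the feature vectors of 30 windows per state
train = {s: [features(synth_window(s)) for _ in range(30)]
         for s in ("relaxed", "concentrating")}
centroids = {s: centroid(rows) for s, rows in train.items()}

def classify(window):
    """Assign the window to the state with the nearest feature centroid."""
    f = features(window)
    return min(centroids,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(f, centroids[s])))

print(classify(synth_window("concentrating")))
```

Real studies replace the toy features with richer statistics over sliding windows and use stronger classifiers, but the feature-extraction-then-classification structure is the same.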
AGI Part 2.pdf

  • 1. Artificial General Intelligence 2 Bob Marcus robert.marcus@et-strategies.com Part 2 of 4 parts: Natural and Human Intelligence
  • 2. This is a first cut. More details will be added later.
  • 3. Part 1: Artificial Intelligence (AI) Part 2: Natural Intelligence (NI) Part 3: Artificial General Intelligence (AI + NI) Part 4: Networked AGI Layer on top of Gaia and Human Society Four Slide Sets on Artificial General Intelligence AI = Artificial Intelligence (Task) AGI = Artificial Mind (Simulation) AB = Artificial Brain (Emulation) AC = Artificial Consciousness (Synthetic) AI < AGI < ? AB < AC (Is a partial brain emulation needed to create a mind?) Mind is not required for task proficiency Full Natural Brain architecture is not required for a mind Consciousness is not required for a natural brain architecture
  • 4. Philosophical Musings 10/2022 Focused Artificial Intelligence (AI) will get better at specific tasks Specific AI implementations will probably exceed human performance in most tasks Some will attain superhuman abilities in a wide range of tasks “Common Sense” = low-level experiential broad knowledge could be an exception Some AIs could use brain-inspired architectures to improve complex task performance This is not equivalent to human or artificial general intelligence (AGI) However networking task-centric AIs could provide a first step towards AGI This is similar to the way human society achieves power from communication The combination of the networked AIs could be the foundation of an artificial mind In a similar fashion, human society can accomplish complex tasks without being conscious Distributed division of labor enables tasks to be assigned to the most competent element Networked humans and AIs could cooperate through brain-machine interfaces In the brain, consciousness provides direction to the mind In large societies, governments perform the role of conscious direction With networked AIs, a “conscious operating system” could play a similar role. This would probably have to be initially programmed by humans. If the AI network included sensors, actuators, and robots it could be aware of the world The AI network could form a grid managing society, biology, and geology layers A conscious AI network could develop its own goals beyond efficient management Humans in the loop could be valuable in providing common sense and protective oversight
  • 5. Outline Human Intelligence Brain Architecture Memory Human Instinctive Intelligence Human Cognitive Intelligence Human Experiential Learning Domain Knowledge Common Sense Intuition Human Behavior Mind Consciousness Brain Interfaces References
  • 8. Brain makes Predictions From https://numenta.com/a-thousand-brains-by-jeff-hawkins There was only one explanation I could think of. My brain, specifically my neocortex, was making multiple simultaneous predictions of what it was about to see, hear, and feel. Every time I moved my eyes, my neocortex made predictions of what it was about to see. Every time I picked something up, my neocortex made predictions of what each finger should feel. And every action I took led to predictions of what I should hear. My brain predicted the smallest stimuli, such as the texture of the handle on my coffee cup, and large conceptual ideas, such as the correct month that should be displayed on a calendar. These predictions occurred in every sensory modality, for low-level sensory features and high-level concepts, which told me that every part of the neocortex, and therefore every cortical column, was making predictions. Prediction was a ubiquitous function of the neocortex. At that time, few neuroscientists described the brain as a prediction machine. Focusing on how the neocortex made many parallel predictions would be a novel way to study how it worked. I knew that prediction wasn’t the only thing the neocortex did, but prediction represented a systemic way of attacking the cortical column’s mysteries. I could ask specific questions about how neurons make predictions under different conditions. The answers to these questions might reveal what cortical columns do, and how they do it. To make predictions, the brain has to learn what is normal—that is, what should be expected based on past experience. My previous book, On Intelligence, explored this idea of learning and prediction. In the book, I used the phrase “the memory prediction framework” to describe the overall idea, and I wrote about the implications of thinking about the brain this way. I argued that by studying how the neocortex makes predictions, we would be able to unravel how the neocortex works. 
Today I no longer use the phrase “the memory prediction framework.” Instead, I describe the same idea by saying that the neocortex learns a model of the world, and it makes predictions based on its model.
  • 9. 9 Growth of Minds From https://www.amazon.com/Journey-Mind-Thinking-Emerged-Chaos/dp/B09M2WCW72/ In the beginning, fourteen billion years ago, existence arose from nonexistence and the universe commenced. Four billion years ago, give or take, life arose from nonlife and the evolution of species commenced. A billion years after that, purpose arose from purposelessness and the journey of the mind commenced. Eventually, the journey would forge a god out of godlessness, a new breed of mind endowed with the power and disposition to reshape the cosmos as it saw fit. This book retraces the journey of the mind from the aimless cycling of mud on a dark and barren Earth until the morning a mind woke up and declared to an indifferent universe, “I am aware of me!” The chapters ahead visit seventeen different living minds, ranging from the simplest to the most sophisticated. First up is the tiniest organism on Earth, the humble archaeon, featuring a mind so minuscule that you would be forgiven for questioning whether it’s a mind at all. From there, our itinerary will take us forward through a series of increasingly brawny intellects. We will sojourn with amoeba minds, insect minds, tortoise minds, and monkey minds, until we arrive at the mightiest mind to ever grace our solar system . . . one that may be something of a surprise.
  • 10. 10 Growth of Minds (cont) From https://www.amazon.com/Journey-Mind-Thinking-Emerged-Chaos/dp/B09M2WCW72/
  • 12. Human Brain From https://en.wikipedia.org/wiki/Human_brain The human brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists of the cerebrum, the brainstem and the cerebellum. It controls most of the activities of the body, processing, integrating, and coordinating the information it receives from the sense organs, and making decisions as to the instructions sent to the rest of the body. The brain is contained in, and protected by, the skull bones of the head. The cerebrum, the largest part of the human brain, consists of two cerebral hemispheres. Each hemisphere has an inner core composed of white matter, and an outer surface – the cerebral cortex – composed of grey matter. The cortex has an outer layer, the neocortex, and an inner allocortex. The neocortex is made up of six neuronal layers, while the allocortex has three or four. Each hemisphere is conventionally divided into four lobes – the frontal, temporal, parietal, and occipital lobes. The frontal lobe is associated with executive functions including self-control, planning, reasoning, and abstract thought, while the occipital lobe is dedicated to vision. Within each lobe, cortical areas are associated with specific functions, such as the sensory, motor and association regions. Although the left and right hemispheres are broadly similar in shape and function, some functions are associated with one side, such as language in the left and visual-spatial ability in the right. The hemispheres are connected by commissural nerve tracts, the largest being the corpus callosum. The cerebrum is connected by the brainstem to the spinal cord. The brainstem consists of the midbrain, the pons, and the medulla oblongata. The cerebellum is connected to the brainstem by three pairs of nerve tracts called cerebellar peduncles. 
Within the cerebrum is the ventricular system, consisting of four interconnected ventricles in which cerebrospinal fluid is produced and circulated. Underneath the cerebral cortex are several important structures, including the thalamus, the epithalamus, the pineal gland, the hypothalamus, the pituitary gland, and the subthalamus; the limbic structures, including the amygdala and the hippocampus; the claustrum, the various nuclei of the basal ganglia; the basal forebrain structures, and the three circumventricular organs. The cells of the brain include neurons and supportive glial cells. There are more than 86 billion neurons in the brain, and a more or less equal number of other cells. Brain activity is made possible by the interconnections of neurons and their release of neurotransmitters in response to nerve impulses. Neurons connect to form neural pathways, neural circuits, and elaborate network systems. The whole circuitry is driven by the process of neurotransmission. The brain is protected by the skull, suspended in cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier. However, the brain is still susceptible to damage, disease, and infection. Damage can be caused by trauma, or a loss of blood supply known as a stroke. The brain is susceptible to degenerative disorders, such as Parkinson's disease, dementias including Alzheimer's disease, and multiple sclerosis. Psychiatric conditions, including schizophrenia and clinical depression, are thought to be associated with brain dysfunctions. The brain can also be the site of tumours, both benign and malignant; these mostly originate from other sites in the body. The study of the anatomy of the brain is neuroanatomy, while the study of its function is neuroscience. Numerous techniques are used to study the brain. Specimens from other animals, which may be examined microscopically, have traditionally provided much information. 
Medical imaging technologies such as functional neuroimaging, and electroencephalography (EEG) recordings are important in studying the brain. The medical history of people with brain injury has provided insight into the function of each part of the brain. Brain research has evolved over time, with philosophical, experimental, and theoretical phases. An emerging phase may be to simulate brain activity.[2]
  • 14. A New Function of the Cerebellum From https://neurosciencenews.com/cerebellum-emotional-memory-21589/ Summary: The cerebellum plays a key role in the storage of both positive and negative memories of emotional events. The cerebellum is known primarily for the regulation of movement. Researchers at the University of Basel have now discovered that the cerebellum also plays an important role in remembering emotional experiences. Both positive and negative emotional experiences are stored particularly well in memory. This phenomenon is important to our survival, since we need to remember dangerous situations in order to avoid them in the future. Previous studies have shown that a brain structure called the amygdala, which is important in the processing of emotions, plays a central role in this phenomenon. Emotions activate the amygdala, which in turn facilitates the storage of information in various areas of the cerebrum. The current research, led by Professor Dominique de Quervain and Professor Andreas Papassotiropoulos at the University of Basel, investigates the role of the cerebellum in storing emotional experiences. In a large-scale study, the researchers showed 1,418 participants emotional and neutral images and recorded the subjects’ brain activity using magnetic resonance imaging. In a memory test conducted later, the positive and negative images were remembered by the participants much better than the neutral images. The improved storage of emotional images was linked with an increase in brain activity in the areas of the cerebrum already known to play a part. However, the team also identified increased activity in the cerebellum. The cerebellum in communication with the cerebrum The researchers were also able to demonstrate that the cerebellum shows stronger communication with various areas of the cerebrum during the process of enhanced storage of the emotional images. 
It receives information from the cingulate gyrus – a region of the brain that is important in the perception and evaluation of feelings. Furthermore, the cerebellum sends out signals to various regions of the brain, including the amygdala and hippocampus. The latter plays a central role in memory storage. “These results indicate that the cerebellum is an integral component of a network that is responsible for the improved storage of emotional information,” says de Quervain.
  • 15. From https://en.wikipedia.org/wiki/Neuroscience Neuroscience Neuroscience is the scientific study of the nervous system (the brain, spinal cord, and peripheral nervous system) and its functions.[1] It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits.[2][3][4][5][6] The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences.[7] The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. The techniques used by neuroscientists have expanded enormously, from molecular and cellular studies of individual neurons to imaging of sensory, motor and cognitive tasks in the brain.
  • 17. From https://en.wikipedia.org/wiki/Neuroscience_and_intelligence Neuroscience and Intelligence Neuroscience and intelligence refers to the various neurological factors that are partly responsible for the variation of intelligence within species or between different species. A large amount of research in this area has been focused on the neural basis of human intelligence. Historic approaches to study the neuroscience of intelligence consisted of correlating external head parameters, for example head circumference, to intelligence.[1] Post-mortem measures of brain weight and brain volume have also been used.[1] More recent methodologies focus on examining correlates of intelligence within the living brain using techniques such as magnetic resonance imaging (MRI), functional MRI (fMRI), electroencephalography (EEG), positron emission tomography and other non-invasive measures of brain structure and activity.[1] Researchers have been able to identify correlates of intelligence within the brain and its functioning.
These include overall brain volume,[2] grey matter volume,[3] white matter volume,[4] white matter integrity,[5] cortical thickness[3] and neural efficiency.[6] Although the evidence base for our understanding of the neural basis of human intelligence has increased greatly over the past 30 years, even more research is needed to fully understand it.[1] The neural basis of intelligence has also been examined in animals such as primates, cetaceans, and rodents.[7] Neural efficiency The neural efficiency hypothesis postulates that more intelligent individuals display less activation in the brain during cognitive tasks, as measured by Glucose metabolism.[6] A small sample of participants (N=8) displayed negative correlations between intelligence and absolute regional metabolic rates ranging from -0.48 to -0.84, as measured by PET scans, indicating that brighter individuals were more effective processors of information, as they use less energy.[6] According to an extensive review by Neubauer & Fink[40] a large number of studies (N=27) have confirmed this finding using methods such as PET scans,[41] EEG[42] and fMRI.[43] fMRI and EEG studies have revealed that task difficulty is an important factor affecting neural efficiency.[40] More intelligent individuals display neural efficiency only when faced with tasks of subjectively easy to moderate difficulty, while no neural efficiency can be found during difficult tasks.[44] In fact, more able individuals appear to invest more cortical resources in tasks of high difficulty.[40] This appears to be especially true for the Prefrontal Cortex, as individuals with higher intelligence displayed increased activation of this area during difficult tasks compared to individuals with lower intelligence.[45][46] It has been proposed that the main reason for the neural efficiency phenomenon could be that individuals with high intelligence are better at blocking out interfering information than individuals with low intelligence.[47]
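The neural-efficiency findings quoted above rest on a simple statistic: the Pearson correlation between intelligence scores and regional metabolic rates (e.g. r between -0.48 and -0.84 in the small PET sample). A toy computation, with all numbers invented purely to illustrate a negative correlation of that kind:

```python
# Sketch: Pearson correlation coefficient, the statistic behind the
# neural-efficiency correlations cited above. The IQ and metabolism
# values below are hypothetical, chosen only to show a negative r.
import math

def pearson_r(xs, ys):
    """Standard Pearson r: covariance divided by the product of the
    standard deviations (computed here via sums of squared deviations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

iq         = [95, 100, 105, 110, 120, 125, 130, 140]     # hypothetical scores
metabolism = [9.1, 8.8, 8.9, 8.2, 7.9, 7.5, 7.6, 7.0]    # hypothetical, arbitrary units

r = pearson_r(iq, metabolism)
print("r = %.2f" % r)   # negative: higher scores pair with lower energy use
```

Note that with N=8, as in the original PET study, such correlations carry wide confidence intervals, which is one reason the larger follow-up literature mattered.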
  • 18. From https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging Functional Magnetic Resonance Imaging (fMRI) Functional magnetic resonance imaging or functional MRI (fMRI) measures brain activity by detecting changes associated with blood flow.[1][2] This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases.[3] The primary form of fMRI uses the blood-oxygen-level dependent (BOLD) contrast,[4] discovered by Seiji Ogawa in 1990. This is a type of specialized brain and body scan used to map neural activity in the brain or spinal cord of humans or other animals by imaging the change in blood flow (hemodynamic response) related to energy use by brain cells.[4] Since the early 1990s, fMRI has come to dominate brain mapping research because it does not involve the use of injections, surgery, the ingestion of substances, or exposure to ionizing radiation.[5] This measure is frequently corrupted by noise from various sources; hence, statistical procedures are used to extract the underlying signal. The resulting brain activation can be graphically represented by color-coding the strength of activation across the brain or the specific region studied. The technique can localize activity to within millimeters but, using standard techniques, no better than within a window of a few seconds.[6] Other methods of obtaining contrast are arterial spin labeling[7] and diffusion MRI. Diffusion MRI is similar to BOLD fMRI but provides contrast based on the magnitude of diffusion of water molecules in the brain. In addition to detecting BOLD responses from activity due to tasks or stimuli, fMRI can measure resting state, or negative-task state, which shows the subjects' baseline BOLD variance. Since about 1998 studies have shown the existence and properties of the default mode network, a functionally connected neural network of apparent resting brain states. 
An fMRI image with yellow areas showing increased activity compared with a control condition. Purpose: measures brain activity by detecting changes due to blood flow.
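The "statistical procedures used to extract the underlying signal" are typically regression of each voxel's time series against a model of the task. A toy sketch under that assumption, with invented signals and a boxcar on/off task design (not a real fMRI pipeline):

```python
# Toy illustration: one regressor per voxel. The least-squares beta says how
# strongly a voxel's (noisy) BOLD signal follows the task. Numbers are made up.
task = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]          # boxcar task design (on/off)
voxel_active = [1.0, 0.9, 2.1, 2.2, 1.1, 1.0, 2.0, 2.3, 0.9, 1.1]
voxel_quiet  = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.1, 1.0, 0.9]

def beta(signal, regressor):
    """Least-squares slope of signal on a mean-centered regressor."""
    n = len(signal)
    mr = sum(regressor) / n
    ms = sum(signal) / n
    num = sum((r - mr) * (s - ms) for r, s in zip(regressor, signal))
    den = sum((r - mr) ** 2 for r in regressor)
    return num / den

print(beta(voxel_active, task))  # large positive: this voxel "lights up" with the task
print(beta(voxel_quiet, task))   # near zero: unrelated to the task
```

Color-coded activation maps come from thresholding such per-voxel statistics across the whole brain.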
  • 19. From https://en.wikipedia.org/wiki/Connectome Connectome A connectome (/kəˈnɛktoʊm/) is a comprehensive map of neural connections in the brain, and may be thought of as its "wiring diagram". An organism's nervous system is made up of neurons which communicate through synapses. A connectome is constructed by tracing the neuron in a nervous system and mapping where neurons are connected through synapses. The significance of the connectome stems from the realization that the structure and function of the human brain are intricately linked, through multiple levels and modes of brain connectivity. There are strong natural constraints on which neurons or neural populations can interact, or how strong or direct their interactions are. Indeed, the foundation of human cognition lies in the pattern of dynamic interactions shaped by the connectome. Despite such complex and variable structure-function mappings, the connectome is an indispensable basis for the mechanistic interpretation of dynamic brain data, from single-cell recordings to functional neuroimaging.
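Abstractly, a connectome is a directed weighted graph: neurons (or brain regions) as nodes, synaptic connections as edges. A minimal sketch of that data structure, with invented neuron names used purely for illustration:

```python
# Edge weight ~ number (or strength) of synapses between two neurons.
connectome = {
    "sensory_1": {"inter_1": 3, "inter_2": 1},
    "inter_1":   {"motor_1": 2},
    "inter_2":   {"motor_1": 4, "inter_1": 1},
    "motor_1":   {},
}

def downstream(node):
    """Neurons that `node` synapses onto, strongest connection first."""
    return sorted(connectome[node], key=connectome[node].get, reverse=True)

print(downstream("sensory_1"))  # ['inter_1', 'inter_2']
print(downstream("inter_2"))    # ['motor_1', 'inter_1']
```

Real connectome datasets are the same idea scaled up: the C. elegans wiring diagram has ~300 nodes; whole-brain human connectomes are usually built at the level of regions rather than single neurons.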
  • 21. From https://alleninstitute.org/what-we-do/brain-science/ Allen Institutes The Allen Institute for Brain Science is the Allen Institute’s oldest scientific division, established in 2003, and has generated several foundational data resources for the neuroscience community, exploring the brain at the level of gene expression, connectivity, and, most recently, individual cell types and synapses. The Allen Institute for Brain Science is currently focused on defining and understanding the cell types of the mammalian brain to ultimately better understand brain development, evolution and disease. We are working toward a complete parts list of brain cell types, how those cell types connect and function in the brain, and what changes happen to cells in the aging brain and in neurodegenerative diseases such as Alzheimer’s disease. The MindScope program at the Allen Institute seeks to understand the transformations, sometimes called computations, in coding and decoding that lead from photons to behavior and conscious experience by observing, perturbing and modeling the physical transformations of signals in the cortical-thalamic visual system within a few perception-action cycles. We generate data and discoveries through the Allen Brain Observatory, a standardized and high-throughput experimental platform that captures neurons and circuits in action in the visual regions of the mouse brain, to glean principles of how the mammalian brain processes information, responds to the external world, and drives behavior. The Allen Institute for Neural Dynamics explores the brain’s activity, at the level of individual neurons and the whole brain, to reveal how we interpret our environments to make decisions. We aim to discover how neural signaling – and changes in that signaling – allow the brain to perform complex but fundamental computations and drive flexible behaviors. 
Our experiments and openly shared resources will shed light on behavior, memory, how we handle uncertainty and risk, how humans and other animals chase rewards – and how some or all of these complicated cognitive functions go awry in neuropsychiatric disorders such as depression, ADHD or addiction.
  • 23. From https://neuroscience.stanford.edu/ Neuroscience at Stanford NeuroDiscovery applies cutting-edge techniques to make fundamental discoveries in brain science — discoveries that could unlock new medical treatments, transform education, inform public policy, and help us understand who we are. Our scientists peer at individual molecules operating where one neuron sends signals to the next. We trace networks of interconnected neurons to map the neural circuits responsible for different brain functions. And we tap into those circuits, tracking dynamic chemical and electrical signals to understand how our brains detect, integrate and transform stimuli into action. The human brain has 100 billion nerve cells and trillions of connections between them. Understanding the workings of such a complex and dynamic organ requires new tools and technologies. Materials scientists are developing probes to form gentle but sensitive and reliable interfaces to stimulate and record signals from thousands of individual neurons at once. Our engineers are developing ways to manipulate neural circuits with electricity, light, ultrasound and magnetic fields, and others are listening to the brain, interpreting the language of neural signals and using that language to drive robotic arms or to type on a computer. New tools will enable as yet unimagined discoveries and will allow us to repair and even to augment the human brain. Projects Understanding the brain in health and disease will improve treatments for ourselves and our loved ones. Our clinical scientists not only treat patients, but are also working with basic scientists to pioneer novel treatments for psychiatric and neurological disease. Ongoing research aims to reverse brain aging, ease the devastating consequences of stroke, and develop non-invasive treatments to modulate brain activity associated with epilepsy and other neurological diseases. 
Breakthrough improvements in brain and mental health benefit not just individuals, but society as a whole.
  • 24. From https://www.nature.com/articles/d41586-021-02628-x The Rise of the Assembloid Ever since he was a medical student in Romania, Sergiu Pașca has wanted to understand how connections between cells go awry in the brains of people with psychiatric disorders. Because the living human brain is inaccessible, these conditions could be diagnosed and classified only according to their behavioural symptoms, rather than their underlying biological causes. In 2017, Pașca took a major step towards his goal. By this time, he was a physician-scientist at Stanford University in California, and using induced pluripotent stem cells (iPS cells) to model various structures in the brain. Cultured from adult skin cells, iPS cells could be prodded in Pașca’s laboratory to grow into 3D spheroids that mimic neuronal tissues such as the frontal cortex. Spheroids are useful for studying the emergence and properties of individual neurons, “but also limited in that we couldn’t use them to study complex interactions involving multiple cell types”, Pașca says. These interactions are crucial to how the brain gets wired up during development, and Pașca wanted to model them using iPS-cell-derived tissues in a dish. So, he and his team performed an experiment: they combined spheroids from two distinct brain regions involved in higher-order thought processes. And remarkably, the two spheroids fused together, just as they would in the brain of a growing baby. No one had ever witnessed this early developmental process before, and Pașca marvelled at the sight of it. “The cells within the spheroids knew just where to go,” he says. “They started changing their morphology to form synapses and become electrically integrated.” Pașca coined the term “assembloid” to describe this construct of neural circuits.[1] As he defines them, assembloids are 3D structures formed from the fusion and functional integration of multiple cell types.
And, most important, they mimic the complex cellular interactions from which organs arise in the body. Assembloids are now at the leading edge of stem-cell research. Scientists are using them to investigate early events in organ development, and as tools for studying not only psychiatric disorders, but other types of disease as well. According to Pașca, assembloids have the advantage of revealing how interactions between different tissues give rise to new cellular properties. For instance, some neurons activate secondary developmental programs required for integration into circuits only after meeting up with the other brain cells they connect to. And by creating assembloids using cells derived from people with particular diseases, researchers should be able to reproduce the inherited pathology of their diseases in a dish. It is hoped that assembloids will lessen the need for laboratory animals, and open doors for high-throughput screening of drugs and chemicals.
  • 25. From https://www.humanbrainproject.eu/en/about/overview/ Human Brain Project The Human Brain Project (HBP) is one of the three FET (Future and Emerging Technology) Flagship projects. Started in 2013, it is one of the largest research projects in the world. More than 500 scientists and engineers at more than 140 universities, teaching hospitals, and research centres across Europe come together to address one of the most challenging research targets – the human brain. To tame brain complexity, the project is building a research infrastructure to help advance neuroscience, medicine, computing and brain-inspired technologies - EBRAINS. The HBP is developing EBRAINS to create lasting research platforms that benefit the wider community. The HBP provides a framework where teams of researchers and technologists work together to scale up ambitious ideas from the lab, explore the different aspects of brain organisation, and understand the mechanisms behind cognition, learning, or plasticity. Scientists in the HBP conduct targeted experimental studies and develop theories and models to shed light on the human connectome, addressing mechanisms that underlie information processing, from the molecule to cellular signaling and large-scale networks.
  • 28. From https://braininitiative.nih.gov/ BRAIN Initiative The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is aimed at revolutionizing our understanding of the human brain. By accelerating the development and application of innovative technologies, researchers will be able to produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space. Long desired by researchers seeking new ways to treat, cure, and even prevent brain disorders, this picture will fill major gaps in our current knowledge and provide unprecedented opportunities for exploring exactly how the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.
  • 29. From https://braininitiative.nih.gov/sites/default/files/images/brain_2.0_6-6-19-final_revised10302019_508c.pdf BRAIN Initiative 2025 Next steps for integrative efforts in BRAIN 2.0 Because this priority area deals with a broad approach rather than specific approaches or types of tools, BRAIN 2025 did not list individual short-term and long-term goals, opting instead to describe examples. Following that lead, we do not think it is necessary for BRAIN 2.0 to include an exhaustive set of suggested goals for integrative approaches. However, many opportunities and goals listed in Priority Areas 1 to 6 hinge upon integration. These include: • Tools to integrate molecular, connectivity, and physiological properties of cell type • Connectivity and functional maps at multiple scales that retain cell-type information • Integration of fMRI with other activity measures and anatomical connections • Integration of electrophysiological and neurochemical methods • Integration of perturbational techniques with other technologies • More interactions between experimentation and theory • Development of approaches and tools to integrate human data from different experimental approaches
  • 30. From https://www.science.org/content/article/nihs-brain-initiative-puts-dollar500-million-creating-detailed-ever-human-brain-atlas BRAIN Initiative Cell Atlas Network (BICAN) The BRAIN Initiative, the 9-year-old, multibillion-dollar U.S. neuroscience effort, today announced its most ambitious challenge yet: compiling the world’s most comprehensive map of cells in the human brain. Scientists say the BRAIN Initiative Cell Atlas Network (BICAN), funded with $500 million over 5 years, will help them understand how the human brain works and how diseases affect it. BICAN “will transform the way we do neuroscience research for generations to come,” says BRAIN Initiative Director John Ngai of the National Institutes of Health (NIH). BRAIN, or Brain Research Through Advancing Innovative Neurotechnologies, was launched by then-President Barack Obama in 2013. It began with a focus on tools, then developed a program called the BRAIN Initiative Cell Census Network, resulting in a raft of papers in 2021. The studies combined data on the genetic features, shapes, locations, and electrical activity of millions of cells to identify more than 100 cell types across the primary motor cortex—which coordinates movement—in mice, marmosets, and humans. Hundreds of researchers involved in the network are now completing a cell census for the rest of the mouse brain. It is expected to become a widely used, free resource for the neuroscience community. Now, BICAN will characterize and map neural and nonneuronal cells across the entire human brain, which has 200 billion cells and is 1000 times larger than a mouse brain. “It’s using similar approaches but scaling up,” says Hongkui Zeng, director of the Allen Institute for Brain Science, which won one-third of the BICAN funding. Zeng says the results of the effort will serve as a reference—a kind of Human Genome Project for neuroscience. 
Other groups will add data from human brains across a range of ancestries and ages, including fetal development. “We will try to cover the breadth of human development and aging,” says Joseph Ecker of the Salk Institute for Biological Studies, which leads BICAN studies of epigenetics, the study of heritable changes that are passed on without changes to the DNA. Ngai expects BICAN to study several hundred human brains overall, although investigators are just starting to work out details. “The sampling and coverage is going to be a big, big topic of discussion,” Ngai says.
  • 31. BRAIN Initiative Alliance From https://www.braininitiative.org/alliance/
  • 32. From https://www.internationalbraininitiative.org/about-us International BRAIN Initiative The International Brain Initiative is represented by some of the world's major brain research projects. Our vision is to catalyse and advance neuroscience through international collaboration and knowledge sharing, uniting diverse ambitions and disseminating discoveries for the benefit of humanity.
  • 33. From https://alleninstitute.org/what-we-do/brain-science/ Allen Institute for Brain Science The Allen Institute is committed to uncovering some of the most pressing questions in neuroscience, grounded in an understanding of the brain and inspired by our quest to uncover the essence of what makes us human. Our focus on neuroscience began with the launch of the Allen Institute for Brain Science in 2003. This division, known worldwide for publicly available resources and tools on brain- map.org, is beginning a new 16-year phase to understand the cell types in the brain, bridging cell types and brain function to better understand healthy brains and what goes wrong in disease. Our MindScope Program focuses on understanding what drives behaviors in the brain and how to better predict actions. In late 2021 we launched the Allen Institute for Neural Dynamics, a new research division of the Allen Institute that is dedicated to understanding how dynamic neuronal signals at the level of the entire brain implement fundamental computations and drive flexible behaviors.
  • 34. From https://alleninstitute.org/what-we-do/brain-science/ Allen Institute for Neural Dynamics The Allen Institute for Neural Dynamics explores the brain’s activity, at the level of individual neurons and the whole brain, to reveal how we interpret our environments to make decisions. We aim to discover how neural signaling – and changes in that signaling – allow the brain to perform complex but fundamental computations and drive flexible behaviors. Our experiments and openly shared resources will shed light on behavior, memory, how we handle uncertainty and risk, how humans and other animals chase rewards – and how some or all of these complicated cognitive functions go awry in neuropsychiatric disorders such as depression, ADHD or addiction.
  • 35. From https://portal.brain-map.org/explore/overview Allen Brain-Map.org The Allen Institute for Brain Science was established in 2003 with a goal to accelerate neuroscience research worldwide with the release of large-scale, publicly available atlases of the brain. Our research teams continue to conduct investigations into the inner workings of the brain to understand its components and how they come together to drive behavior and make us who we are. One of our core principles is Open Science: We publicly share all the data, products, and findings from our work. Here on brain-map.org, you’ll find our open data, analysis tools, lab resources, and information about our own research that also uses these publicly available resources. The potential uses of Allen Institute for Brain Science resources, on their own or in combination with your own data, are endless. The Allen Brain Atlases capture patterns of gene expression across the brain in various species. Learn more and read publications at the Transcriptional Landscape of the Brain Explore page. Example use cases across the atlases include exploration of gene expression and co-expression patterns, expression across networks, changes across developmental stages, comparisons between species, and more.
  • 37. From https://bigthink.com/neuropsych/great-brain-rewiring-after-age-40/ Brain Rewiring after 40 In a systematic review recently published in the journal Psychophysiology, researchers from Monash University in Australia swept through the scientific literature, seeking to summarize how the connectivity of the human brain changes over our lifetimes. The gathered evidence suggests that in the fifth decade of life (that is, after a person turns 40), the brain starts to undergo a radical “rewiring” that results in diverse networks becoming more integrated and connected over the ensuing decades, with accompanying effects on cognition. Since the turn of the century, neuroscientists have increasingly viewed the brain as a complex network, consisting of units broken down into regions, sub-regions, and individual neurons. These units are connected structurally, functionally, or both. With increasingly advanced scanning techniques, neuroscientists can observe the parts of subjects’ brains that “light up” in response to stimuli or when simply at rest, providing a superficial look at how our brains are synced up.
Early on, in our teenage and young adult years, the brain seems to have numerous, partitioned networks with high levels of inner connectivity, reflecting the ability for specialized processing to occur. That makes sense, as this is the time when we are learning how to play sports, speak languages, and develop talents. Around our mid-40s, however, that starts to change. Instead, the brain begins becoming less connected within those separate networks and more connected globally across networks. By the time we reach our 80s, the brain tends to be less regionally specialized and instead broadly connected and integrated.
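The within-network vs. between-network shift described above is typically quantified from a functional connectivity matrix. A toy sketch using an invented 4-node correlation matrix (two nodes per network):

```python
# Nodes 0-1 belong to network A, nodes 2-3 to network B. Values are made up.
conn = [
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.2, 0.1, 0.7, 1.0],
]
networks = {0: "A", 1: "A", 2: "B", 3: "B"}

def mean_connectivity(same_network):
    """Mean correlation over node pairs within (True) or between (False) networks."""
    vals = [conn[i][j]
            for i in range(len(conn)) for j in range(i + 1, len(conn))
            if (networks[i] == networks[j]) == same_network]
    return sum(vals) / len(vals)

print(mean_connectivity(True))   # within-network: high, as in younger brains
print(mean_connectivity(False))  # between-network: rises with age per the review
```

The review's claim is that with age the first number falls and the second rises, i.e. the matrix becomes less block-diagonal.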
  • 38. Prefrontal Cortex as a Meta-Reinforcement Learning System From https://www.biorxiv.org/content/biorxiv/early/2018/04/06/295964.full.pdf Over the past twenty years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine ‘stamps in’ associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. In the present work, we draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. Here, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research.
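The "standard model" the paper refers to treats dopamine as a reward prediction error that "stamps in" value. A minimal single-state temporal-difference sketch of that idea, with illustrative numbers:

```python
# Simplest form of the dopamine reward-prediction-error account: a stored
# value estimate is nudged toward received reward on every trial.
alpha = 0.1          # learning rate (strength of synaptic plasticity)
value = 0.0          # current prediction of reward for this situation

for trial in range(100):
    reward = 1.0                 # reward actually received
    rpe = reward - value         # dopamine-like prediction error
    value += alpha * rpe         # 'stamp in' the association

print(round(value, 3))  # approaches 1.0 as the reward becomes fully predicted
```

The paper's meta-reinforcement-learning proposal keeps this dopamine-driven update but uses it to train the prefrontal cortex as a second, free-standing learning system; the sketch shows only the standard mechanism being extended.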
  • 39. Cortical Columns From https://en.wikipedia.org/wiki/Cortical_column A cortical column, also called hypercolumn, macrocolumn,[1] functional column[2] or sometimes cortical module,[3] is a group of neurons in the cortex of the brain that can be successively penetrated by a probe inserted perpendicularly to the cortical surface, and which have nearly identical receptive fields.[citation needed] Neurons within a minicolumn (microcolumn) encode similar features, whereas a hypercolumn "denotes a unit containing a full set of values for any given set of receptive field parameters".[4] A cortical module is defined as either synonymous with a hypercolumn (Mountcastle) or as a tissue block of multiple overlapping hypercolumns.[5] The columnar hypothesis states that the cortex is composed of discrete, modular columns of neurons, characterized by a consistent connectivity profile.[2] It is still unclear what precisely is meant by the term, and it does not correspond to any single structure within the cortex. It has been impossible to find a canonical microcircuit that corresponds to the cortical column, and no genetic mechanism has been deciphered that designates how to construct a column.[4] However, the columnar organization hypothesis is currently the most widely adopted to explain the cortical processing of information.[6] Columnar functional organization The columnar functional organization, as originally framed by Vernon Mountcastle,[9] suggests that neurons that are horizontally more than 0.5 mm (500 µm) from each other do not have overlapping sensory receptive fields, and other experiments give similar results: 200– 800 µm.[1][10][11] Various estimates suggest there are 50 to 100 cortical minicolumns in a hypercolumn, each comprising around 80 neurons. Their role is best understood as 'functional units of information processing.' 
An important distinction is that the columnar organization is functional by definition, and reflects the local connectivity of the cerebral cortex. Connections "up" and "down" within the thickness of the cortex are much denser than connections that spread from side to side. Number of cortical columns There are about 200 million (2×10^8) cortical minicolumns in the human neocortex with up to about 110 neurons each,[13] and with estimates of 21–26 billion (2.1×10^10–2.6×10^10) neurons in the neocortex. With 50 to 100 cortical minicolumns per cortical column a human would have 2–4 million (2×10^6–4×10^6) cortical columns. There may be more if the columns can overlap, as suggested by Tsunoda et al.[14] There are claims that minicolumns may have as many as 400 principal cells,[15] but it is not clear if that includes glial cells. Some studies contradict the previous estimates,[16] claiming the original research is too arbitrary.[17] The authors propose a uniform neocortex, and chose a fixed width and length to calculate the cell numbers. Later research pointed out that the neocortex is indeed not uniform for other species,[18] and studying nine primate species they found that “the number of neurons underneath 1 mm² of the cerebral cortical surface … varies by three times across species.” The neocortex is not uniform across species.[17][19][20] The actual number of neurons within a single column is variable, and depends on the cerebral areas and thus the function of the column.
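The counts above can be checked with back-of-envelope arithmetic (all figures are the article's estimates, not measurements):

```python
# Consistency check of the quoted estimates.
minicolumns = 200e6                 # ~2x10^8 minicolumns in the human neocortex
neurons_per_minicolumn = 110        # "up to about 110 neurons each"
minicols_per_column = (50, 100)     # minicolumns per cortical column

neurons = minicolumns * neurons_per_minicolumn
columns = tuple(minicolumns / k for k in minicols_per_column)

print(f"{neurons:.1e} neurons")     # ~2.2e10, within the quoted 2.1-2.6e10 range
print(columns)                      # (4000000.0, 2000000.0): the 2-4 million columns
```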
  • 41. Reference Frames From https://numenta.com/a-thousand-brains-by-jeff-hawkins 1. Reference Frames Are Present Everywhere in the Neocortex 2. Reference Frames Are Used to Model Everything We Know, Not Just Physical Objects 3. All Knowledge Is Stored at Locations Relative to Reference Frames 4. Thinking Is a Form of Movement So far, I have described a theory of how cortical columns learn models of physical objects such as coffee cups, chairs, and smartphones. The theory says that cortical columns create reference frames for each observed object. Recall that a reference frame is like an invisible, three-dimensional grid surrounding and attached to something. The reference frame allows a cortical column to learn the locations of features that define the shape of an object. Reference Frames for Concepts: Up to now in the book, I have described how the brain learns models of things that have a physical shape. However, much of what we know about the world can’t be sensed directly and may not have any physical equivalent. For example, we can’t reach out and touch concepts such as democracy or prime numbers, yet we know a lot about these things. How can cortical columns create models of things that we can’t sense? The trick is that reference frames don’t have to be anchored to something physical. A reference frame for a concept such as democracy needs to be self-consistent, but it can exist relatively independent of everyday physical things. It is similar to how we can create maps for fictional lands. A map of a fictional land needs to be self-consistent, but it doesn’t need to be located anywhere in particular relative to Earth.
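One concrete way to read the reference-frame idea: a cortical column stores features at locations in an object-attached coordinate frame, and moving a sensor (or attention) to a location predicts the feature found there. A toy sketch, with invented coordinates and feature names:

```python
# Hypothetical model of a coffee cup as features stored at locations in a
# reference frame attached to the object itself (not to the observer).
coffee_cup = {
    (0, 0, 0): "base",
    (0, 0, 5): "rim",
    (3, 0, 3): "handle",
}

def feature_at(model, location):
    """Predict the feature at a location in the object's reference frame."""
    return model.get(location, "unknown")

print(feature_at(coffee_cup, (3, 0, 3)))  # handle
print(feature_at(coffee_cup, (9, 9, 9)))  # unknown
```

Hawkins's point about concepts is that the same machinery works when the "locations" index an abstract, self-consistent space (e.g. positions in a conceptual map) rather than physical coordinates.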
  • 42. Brainscapes From https://gardenofthemind.com/ How does your brain—an organ smaller than a soccer ball—represent the big, wide world of sensations, events, and meaning unfolding all around you? Your experience of the world feels so seamless and boundless that you may never have thought to ask this question. But once asked, the question demands an answer. Or, in this case, it demands three answers. Because it is only thanks to three solutions that you can perceive your world at all. And it is these solutions, in turn, that determine exactly how you experience your world. 1. You Miss More Than You Think The first and simplest solution is that your brain doesn’t represent everything taking place around you. Not even close. You perceive only a small fraction of the energy and information buzzing all around you. 2. Your Brain Is Full of Maps This brings us to the second grand solution that makes perception possible. Creatures on earth, including humans, eke more abilities out of their brains by organizing neurons into literal maps. These maps allow creatures to pack more neurons into a brain while keeping the costly connections between them as short as possible. 3. Your Maps and Your Perceptions Are Warped This brings us to the third grand solution: Your brain maps are warped, preserving some details while sacrificing others. Your brain maps are distorted to save energy and space. And these distortions, in turn, distort how you perceive your world. Brainscape Book
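Warped brain maps have classical mathematical descriptions; a well-known one for primate V1 retinotopy is the complex-log map, under which far more cortex is devoted to central vision than to the periphery. A sketch of the magnification along one axis, with illustrative (not fitted) parameter values:

```python
import math

# Complex-log model of retinotopy along the horizontal meridian:
# cortical position ~ k * log(eccentricity + a). Parameters are illustrative.
k, a = 15.0, 0.7   # mm of cortex per log unit; foveal offset (degrees)

def cortical_distance(eccentricity_deg):
    """Cortical distance (mm) from the foveal representation."""
    return k * math.log((eccentricity_deg + a) / a)

for ecc in (1, 2, 4, 8, 16, 32):
    print(ecc, round(cortical_distance(ecc), 1))
# Each doubling of eccentricity adds a roughly constant ~10 mm of cortex:
# central vision is hugely magnified relative to the periphery.
```

This is exactly the third "solution" above in mathematical form: the map preserves foveal detail while compressing the periphery.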
  • 44. From https://thebrain.mcgill.ca/flash/d/d_07/d_07_cr/d_07_cr_tra/d_07_cr_tra.html Memory This ability to hold on to a piece of information temporarily in order to complete a task is specifically human. It causes certain regions of the brain to become very active, in particular the pre-frontal lobe. This region, at the very front of the brain, is highly developed in humans. It is the reason that we have such high, upright foreheads, compared with the receding foreheads of our cousins the apes. Hence it is no surprise that the part of the brain that seems most active during one of the most human of activities is located precisely in this prefrontal region that is well developed only in humans. Information is transferred from short-term memory (also known as working memory) to long-term memory through the hippocampus, so named because its shape resembles the curved tail of a seahorse (hippokampos in Greek). The hippocampus is a very old part of the cortex, evolutionarily, and is located in the inner fold of the temporal lobe. All of the pieces of information decoded in the various sensory areas of the cortex converge in the hippocampus, which then sends them back where they came from. The hippocampus is a bit like a sorting centre where these new sensations are compared with previously recorded ones. The hippocampus also creates associations among an object’s various properties.
  • 45. From https://neurosciencenews.com/superager-neurons-memory-21561/ Superager Brains Contain ‘Super Neurons’ Summary: Neurons in the memory-associated entorhinal cortex of super-agers are significantly larger than their cognitively average peers, those with MCI, and even in people up to 30 years younger. Additionally, these neurons contained no signs of Tau, a hallmark of Alzheimer’s disease. Neurons in an area of the brain responsible for memory (known as the entorhinal cortex) were significantly larger in SuperAgers compared to cognitively average peers, individuals with early-stage Alzheimer’s disease and even individuals 20 to 30 years younger than SuperAgers — who are aged 80 years and older, reports a new Northwestern Medicine study. These neurons did not harbor tau tangles, a signature hallmark of Alzheimer’s disease. “The remarkable observation that SuperAgers showed larger neurons than their younger peers may imply that large cells were present from birth and are maintained structurally throughout their lives,” said lead author Tamar Gefen, an assistant professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine. “We conclude that larger neurons are a biological signature of the SuperAging trajectory.” The study of SuperAgers with exceptional memory was the first to show that these individuals carry a unique biological signature that comprises larger and healthier neurons in the entorhinal cortex that are relatively void of tau tangles (pathology).
  • 47. Layers of Intelligence From Bob Marcus
Expert Knowledge - Rational Knowledge + Individual Learned Knowledge
Rational Knowledge - Rule-based Decision Making
Individual Learned Knowledge - Object and Situation Recognition
General Learned Knowledge - Common Sense
Preconfigured Capabilities - Instinctual Knowledge and Behavior
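One way to read these layers is as a fallback chain: a stimulus is handled by the most specialized layer that knows about it, and otherwise falls through to instinct. The sketch below is a hypothetical illustration of that reading; the layer names come from the slide, but the knowledge entries and the function are invented for demonstration.

```python
# Illustrative fallback chain over the slide's layers (entries invented).
LAYERS = [
    ("expert knowledge",        {"diagnose fault": "apply expert rule"}),
    ("rational knowledge",      {"route choice": "apply decision rule"}),
    ("individual learned",      {"friend's face": "recognize person"}),
    ("general learned",         {"dropped cup": "expect it to fall"}),
    ("preconfigured/instinct",  {"loud noise": "startle"}),
]

def respond(stimulus):
    """Consult layers top-down; fall through to a default reflex."""
    for name, knowledge in LAYERS:
        if stimulus in knowledge:
            return name, knowledge[stimulus]
    return "preconfigured/instinct", "default reflex"

print(respond("dropped cup"))    # handled by general learned knowledge
print(respond("unknown input"))  # falls through to instinct
```

The design choice here is deliberate: more specialized knowledge shadows more general knowledge, which matches the slide's stacking of expert knowledge above common sense and instinct.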
  • 48. Innateness, AlphaZero, and Artificial Intelligence From https://arxiv.org/ftp/arxiv/papers/1801/1801.05667.pdf The concept of innateness is rarely discussed in the context of artificial intelligence. When it is discussed, or hinted at, it is often in the context of trying to reduce the amount of innate machinery in a given system. In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that “even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance”, “starting tabula rasa.” I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like. Virtually all modern observers would concede that genes and experience work together; it is “nature and nurture”, not “nature versus nurture”. No nativist, for instance, would doubt that we are also born with specific biological machinery that allows us to learn. Chomsky’s Language Acquisition Device should be viewed precisely as an innate learning mechanism, and nativists such as Pinker, Peter Marler (Marler, 2004) and myself (Marcus, 2004) have frequently argued for a view in which a significant part of a creature’s innate armamentarium consists not of specific knowledge but of learning mechanisms, a form of innateness that enables learning. As discussed below, there is ample reason to believe that humans and many other creatures are born with significant amounts of innate machinery. 
The guiding question for the current paper is whether artificially intelligent systems ought similarly to be endowed with significant amounts of innate machinery, or whether, in virtue of the powerful learning systems that have recently been developed, it might suffice for such systems to work in a more bottom-up, tabula rasa fashion.
  • 49. Human Instinctive Behavior From https://www.sainsburywellcome.org/web/qa/understanding-control-instinctive-behaviour How do you define instinctive behaviour? People often use the terms “instinctive” or “innate” to describe behaviours that are not learned, i.e. behaviours you already know how to perform the first time you need them. Instinctive behaviours are important for promoting the survival of your genes and thereby your species. What role is the hypothalamus thought to play in the expression of instinctive behaviours? The hypothalamus is an ancient part of the brain, whereas other areas, such as the cortex and forebrain, are very recent evolutionary additions. As such, the hypothalamus is able to respond to sensory inputs, form internal states and induce motor outputs. According to the evolutionary neurobiologist Detlev Arendt, the hypothalamus was formed by the fusion of two ancient neural nets: • a neuroendocrine system that responded to light and secreted factors into the main body cavity – ancestor of the modern midline neuroendocrine nuclei • a motor system that controlled contractile tissue to produce basic behavioral patterns – ancestor of the medial and lateral hypothalamus that control instinctive behaviors The hypothalamus is sometimes mistakenly called the “reptilian” brain; in reality it dates back to before the appearance of the first bilaterian organisms and is perhaps better termed the Ur-brain. A lot of current work focuses on trying to understand how the hypothalamus encodes internal motivational states that drive instinctive behaviour, and although its basic architecture was clarified already 30 years ago, how it controls behaviour is still pretty much a mystery.
  • 51. 40 Years of Cognitive Architecture Research From https://arxiv.org/abs/1610.08602 In this paper we present a broad overview of the last 40 years of research on cognitive architectures. Although the number of existing architectures is nearing several hundred, most of the existing surveys do not reflect this growth and focus on a handful of well-established architectures. Thus, in this survey we wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 84 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning and reasoning. In order to assess the breadth of practical applications of cognitive architectures we gathered information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.
  • 52. Vygotsky's Sociocultural Theory of Cognitive Development From https://www.simplypsychology.org/vygotsky.html The work of Lev Vygotsky (1934) has become the foundation of much research and theory in cognitive development over the past several decades, particularly of what has become known as sociocultural theory. Vygotsky's sociocultural theory views human development as a socially mediated process in which children acquire their cultural values, beliefs, and problem-solving strategies through collaborative dialogues with more knowledgeable members of society. Vygotsky's theory comprises concepts such as culture-specific tools, private speech, and the Zone of Proximal Development. Vygotsky's theories stress the fundamental role of social interaction in the development of cognition (Vygotsky, 1978), as he believed strongly that community plays a central role in the process of "making meaning." Unlike Piaget's notion that children's development must necessarily precede their learning, Vygotsky argued, "learning is a necessary and universal aspect of the process of developing culturally organized, specifically human psychological function" (1978, p. 90). In other words, social learning tends to precede (i.e., come before) development. Vygotsky developed a sociocultural approach to cognitive development. He developed his theories at around the same time as Jean Piaget was starting to develop his ideas (1920s and 30s), but he died at the age of 38, and so his theories are incomplete - although some of his writings are still being translated from Russian. Like Piaget, Vygotsky could be described as a constructivist, in that he was interested in knowledge acquisition as a cumulative event - with new experiences and understandings incorporated into existing cognitive frameworks. 
However, whilst Piaget’s theory is structural (arguing that development is governed by physiological stages), Vygotsky denies the existence of any guiding framework independent of culture and context.
  • 53. 40 Years of Cognitive Architecture Research (cont) From https://arxiv.org/abs/1610.08602
  • 55. Psychology of Learning From: https://www.verywellmind.com/learning-study-guide-2795698 Psychologists often define learning as a relatively permanent change in behavior as a result of experience. The psychology of learning focuses on a range of topics related to how people learn and interact with their environments. One of the first thinkers to study how learning influences behavior was psychologist John B. Watson, who suggested that all behaviors are a result of the learning process. The school of thought that emerged from Watson's work was known as behaviorism. The behavioral school of thought proposed studying only observable behaviors, since internal thoughts, memories, and other mental processes were considered too subjective.
  • 57. Memory Development in Children From: https://www.sciencedirect.com/topics/computer-science/domain-knowledge The Impact of Domain Knowledge Striking effects of domain knowledge on performance in memory tasks have been demonstrated in numerous developmental studies. In most domains, older children know more than younger ones, and differences in knowledge are linked closely to performance differences. How can we explain this phenomenon? First, one effect that rich domain knowledge has on memory is to increase the speed of processing for domain-specific information. Second, rich domain knowledge enables more competent strategy use. Finally, rich domain knowledge can have nonstrategic effects, that is, diminish the need for strategy activation. Evidence for the latter phenomenon comes from studies using the expert-novice paradigm. These studies compared experts and novices in a given domain (e.g., baseball, chess, or soccer) on a memory task related to that domain. It could be demonstrated that rich domain knowledge enabled a child expert to perform much like an adult expert and better than an adult novice—thus showing a disappearance and sometimes reversal of usual developmental trends. Experts and novices not only differed with regard to quantity of knowledge but also regarding the quality of knowledge, that is, in the way their knowledge is represented in the mind. Moreover, several studies also confirmed the assumption that rich domain knowledge can compensate for low overall aptitude on domain-related memory tasks, as no differences were found between high- and low-aptitude experts on various recall and comprehension measures (Bjorklund and Schneider 1996). Taken together, these findings indicate that domain knowledge increases greatly with age, and is clearly related to how much and what children remember. 
Domain knowledge also contributes to the development of other competencies that have been proposed as sources of memory development, namely basic capacities, memory strategies, and metacognitive knowledge. Undoubtedly, changes in domain knowledge play a large role in memory development, probably larger than that of the other sources of memory improvement described above. However, although the various components of memory development have been described separately so far, it seems important to note that all of these components interact in producing memory changes, and that it is difficult at times to disentangle the effects of specific sources from that of other influences.
  • 59. Common Sense Research Questions From: https://arxiv.org/pdf/2112.12754.pdf
• What exactly is common sense? What technical definition best suits the needs of AI?
• What are appropriate tests for the presence of common sense? How can we tell if we are getting closer to building it into our AI systems?
• How is experiential knowledge represented, accessed, and brought to bear on current situations? What is the role of analogy? How does the ability to recognize something or see something as another thing (or even as an instance of an abstract concept) develop and get used?
• How is commonsense knowledge learned as new experiences happen? How is the update different when knowledge is acquired through language?
• What ontological frameworks are critical to build into an AI system? Are there special properties of the knowledge of the physical world that need to be handled in a way that is different from its non-physical counterparts?
• What is the relationship between common sense and the broader notion of rationality (including bounded rationality, minimal rationality, etc.)?
• What overall architecture is best suited for the multiple roles of common sense? What mechanism(s) should be used to invoke common sense out of routine, rote processing, and then to sometimes go beyond it to more specialized forms of expertise?
• What role, if any, does metareasoning play?
  • 61. Intuition in the Brain From https://www.scientificamerican.com/article/intuition-may-reveal-where-expertise-resides-in-the-brain/ Understanding computer code, deciphering a differential equation, diagnosing a tumor from the shadowy patterns on an x-ray image, telling a fake from an authentic painting, knowing when to hold and when to fold in poker. Experts decide in a flash, without thought. Intuition is the name we give to the uncanny ability to quickly and effortlessly know the answer, unconsciously, either without or well before knowing why. The conscious explanation comes later, if at all, and involves a much more deliberate process. Intuition arises within a circumscribed cognitive domain. It may take years of training to develop, and it does not easily transfer from one domain of expertise to another. Chess mastery is useless when playing bridge. Professionals, who may spend a lifetime honing their skills, are much in demand for their proficiency. This elegant finding links intuition with the caudate nucleus, which is part of the basal ganglia—a set of interlinked brain areas responsible for learning, executing habits and automatic behaviors. The basal ganglia receive massive input from the cortex, the outer, rindlike surface of the brain. Ultimately these structures project back to the cortex, creating a series of cortical–basal ganglia loops. In one interpretation, the cortex is associated with conscious perception and the deliberate and conscious analysis of any given situation, novel or familiar, whereas the caudate nucleus is the site where highly specialized expertise resides that allows you to come up with an appropriate answer without conscious thought. In computer engineering parlance, a constantly used class of computations (namely those associated with playing a strategy game) is downloaded into special-purpose hardware, the caudate, to lighten the burden of the main processor, the cortex. 
It appears that the site of fast, automatic, unconscious cognitive operations—from where a solution materializes all of a sudden —lies in the basal ganglia, linked to but apart from the cortex. These studies provide a telling hint of what happens when the brain brings the output of unconscious processing into awareness. What remains unclear is why furious activity in the caudate should remain unconscious while exertions in some part of the cortex give rise to conscious sensation. Finding an answer may illuminate the central challenge—why excitable matter produces feelings at all.
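The article's own "computer engineering parlance" - frequently used computations downloaded into special-purpose hardware to lighten the load on the main processor - has a rough software analogue in memoization, sketched below. This is only an analogy, not a claim about how the caudate works; the function names and the toy "evaluation" are invented for illustration.

```python
# Rough software analogue of the cortex/caudate analogy: a slow,
# "deliberate" computation whose repeated results are served from a cache.
from functools import lru_cache

def deliberate_evaluation(position):
    """Stand-in for slow, conscious cortical analysis (invented example)."""
    return sum(ord(c) for c in position) % 100  # arbitrary toy "evaluation"

@lru_cache(maxsize=None)
def intuitive_evaluation(position):
    # The first call computes the answer ("training"); repeats are fast,
    # automatic lookups, like an expert's answer arriving without
    # conscious deliberation.
    return deliberate_evaluation(position)

intuitive_evaluation("e4 e5 Nf3")        # computed once ("trained")
intuitive_evaluation("e4 e5 Nf3")        # served from the cache
print(intuitive_evaluation.cache_info())  # shows one cache hit
```

As in the article's interpretation, the cached path gives the same answer as the deliberate one, just without re-running the expensive analysis.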
  • 63. Human Behavior From https://en.wikipedia.org/wiki/Human_behavior Human behavior is the potential and expressed capacity (mentally, physically, and socially) of human individuals or groups to respond to internal and external stimuli throughout their life.[1][2] Behavior is driven by genetic and environmental factors that affect an individual. Behavior is also driven, in part, by thoughts and feelings, which provide insight into individual psyche, revealing such things as attitudes and values. Human behavior is shaped by psychological traits, as personality types vary from person to person, producing different actions and behavior. Social behavior accounts for actions directed at others. It is concerned with the considerable influence of social interaction and culture, as well as ethics, interpersonal relationships, politics, and conflict. Some behaviors are common while others are unusual. The acceptability of behavior depends upon social norms and is regulated by various means of social control. Social norms also condition behavior, whereby humans are pressured into following certain rules and displaying certain behaviors that are deemed acceptable or unacceptable depending on the given society or culture. Cognitive behavior accounts for actions of obtaining and using knowledge. It is concerned with how information is learned and passed on, as well as creative application of knowledge and personal beliefs such as religion. Physiological behavior accounts for actions to maintain the body. It is concerned with basic bodily functions as well as measures taken to maintain health. Economic behavior accounts for actions regarding the development, organization, and use of materials as well as other forms of work. Ecological behavior accounts for actions involving the ecosystem. It is concerned with how humans interact with other organisms and how the environment shapes human behavior.
  • 64. Free Will From https://en.wikipedia.org/wiki/Free_will Free will is the capacity of agents to choose between different possible courses of action unimpeded.[1][2] Free will is closely linked to the concepts of moral responsibility, praise, culpability, sin, and other judgements which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Traditionally, only actions that are freely willed are seen as deserving credit or blame. Whether free will exists, what it is and the implications of whether it exists or not are some of the longest running debates of philosophy and religion. Some conceive of free will as the right to act outside of external influences or wishes. Some conceive free will to be the capacity to make choices undetermined by past events. Determinism suggests that only one course of events is possible, which is inconsistent with a libertarian model of free will.[3] Ancient Greek philosophy identified this issue,[4] which remains a major focus of philosophical debate. The view that conceives free will as incompatible with determinism is called incompatibilism and encompasses both metaphysical libertarianism (the claim that determinism is false and thus free will is at least possible) and hard determinism (the claim that determinism is true and thus free will is not possible). Incompatibilism also encompasses hard incompatibilism, which holds not only determinism but also indeterminism to be incompatible with free will and thus free will to be impossible whatever the case may be regarding determinism. In contrast, compatibilists hold that free will is compatible with determinism. 
Some compatibilists even hold that determinism is necessary for free will, arguing that choice involves preference for one course of action over another, requiring a sense of how choices will turn out.[5][6] Compatibilists thus consider the debate between libertarians and hard determinists over free will vs. determinism a false dilemma.[7] Different compatibilists offer very different definitions of what "free will" means and consequently find different types of constraints to be relevant to the issue. Classical compatibilists considered free will nothing more than freedom of action, considering one free of will simply if, had one counterfactually wanted to do otherwise, one could have done otherwise without physical impediment. Contemporary compatibilists instead identify free will as a psychological capacity, such as to direct one's behavior in a way responsive to reason, and there are still further different conceptions of free will, each with their own concerns, sharing only the common feature of not finding the possibility of determinism a threat to the possibility of free will.[8]
  • 66. Mind From https://en.wikipedia.org/wiki/Mind The mind is the set of faculties responsible for all mental phenomena. Often the term is also identified with the phenomena themselves.[2][3][4] These faculties include thought, imagination, memory, will, and sensation. They are responsible for various mental phenomena, like perception, pain experience, belief, desire, intention, and emotion. Various overlapping classifications of mental phenomena have been proposed. Important distinctions group them together according to whether they are sensory, propositional, intentional, conscious, or occurrent. Minds were traditionally understood as substances but it is more common in the contemporary perspective to conceive them as properties or capacities possessed by humans and higher animals. Various competing definitions of the exact nature of the mind or mentality have been proposed. Epistemic definitions focus on the privileged epistemic access the subject has to these states. Consciousness-based approaches give primacy to the conscious mind and allow unconscious mental phenomena as part of the mind only to the extent that they stand in the right relation to the conscious mind. According to intentionality-based approaches, the power to refer to objects and to represent the world is the mark of the mental. For behaviorism, whether an entity has a mind only depends on how it behaves in response to external stimuli while functionalism defines mental states in terms of the causal roles they play. Central questions for the study of mind, like whether other entities besides humans have minds or how the relation between body and mind is to be conceived, are strongly influenced by the choice of one's definition. Mind or mentality is usually contrasted with body, matter or physicality. 
The issue of the nature of this contrast and specifically the relation between mind and brain is called the mind-body problem.[5] Traditional viewpoints included dualism and idealism, which consider the mind to be non-physical.[5] Modern views often center around physicalism and functionalism, which hold that the mind is roughly identical with the brain or reducible to physical phenomena such as neuronal activity,[6] though dualism and idealism continue to have many supporters. Another question concerns which types of beings are capable of having minds.[7] For example, whether mind is exclusive to humans, possessed also by some or all animals, by all living things, whether it is a strictly definable characteristic at all, or whether mind can also be a property of some types of human-made machines. Different cultural and religious traditions often use different concepts of mind, resulting in different answers to these questions. Some see mind as a property exclusive to humans whereas others ascribe properties of mind to non-living entities (e.g. panpsychism and animism), to animals and to deities. Some of the earliest recorded speculations linked mind (sometimes described as identical with soul or spirit) to theories concerning both life after death, and cosmological and natural order, for example in the doctrines of Zoroaster, the Buddha, Plato, Aristotle, and other ancient Greek, Indian and, later, Islamic and medieval European philosophers. Psychologists such as Freud and James, and computer scientists such as Turing developed influential theories about the nature of the mind. 
The possibility of nonbiological minds is explored in the field of artificial intelligence, which works closely in relation with cybernetics and information theory to understand the ways in which information processing by nonbiological machines is comparable or different to mental phenomena in the human mind.[8] The mind is also sometimes portrayed as the stream of consciousness where sense impressions and mental phenomena are constantly changing.[9][10]
  • 67. Freud’s Unconscious, Pre-Conscious, and Conscious Mind From https://www.verywellmind.com/the-conscious-and-unconscious-mind-2795946 The famed psychoanalyst Sigmund Freud believed that behavior and personality were derived from the constant and unique interaction of conflicting psychological forces that operate at three different levels of awareness: the preconscious, conscious, and unconscious.1 He believed that each of these parts of the mind plays an important role in influencing behavior. In order to understand Freud's theory, it is essential to first understand what he believed each part of personality did, how it operated, and how these three elements interact to contribute to the human experience. Each level of awareness has a role to play in shaping human behavior and thought. Freud's Three Levels of Mind Freud delineated the mind into three distinct levels, each with its own roles and functions.1 • The preconscious consists of anything that could potentially be brought into the conscious mind. • The conscious mind contains all of the thoughts, memories, feelings, and wishes of which we are aware at any given moment. This is the aspect of our mental processing that we can think and talk about rationally. 
This also includes our memory, which is not always part of consciousness but can be retrieved easily and brought into awareness. • The unconscious mind is a reservoir of feelings, thoughts, urges, and memories that are outside of our conscious awareness. The unconscious contains contents that are unacceptable or unpleasant, such as feelings of pain, anxiety, or conflict. Freud likened the three levels of mind to an iceberg. The top of the iceberg that you can see above the water represents the conscious mind. The part of the iceberg that is submerged below the water, but is still visible, is the preconscious. The bulk of the iceberg that lies unseen beneath the waterline represents the unconscious.
  • 69. Consciousness From https://en.wikipedia.org/wiki/Consciousness Consciousness, at its simplest, is sentience or awareness of internal and external existence.[1] Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial,[2] being "at once the most familiar and [also the] most mysterious aspect of our lives".[3] Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists.[4] Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[5] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[6][7] There might be different levels or orders of consciousness,[8] or different kinds of consciousness, or just one kind with different features.[9] Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.[10] Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain; having phanera or qualia and subjectivity; being the 'something that it is like' to 'have' or 'be' it; being the "inner theatre" or the executive control system of the mind.[11]
  • 70. What is Consciousness From https://www.nature.com/articles/d41586-018-05097-x The majority of scholars accept consciousness as a given and seek to understand its relationship to the objective world described by science. More than a quarter of a century ago Francis Crick and I decided to set aside philosophical discussions on consciousness (which have engaged scholars since at least the time of Aristotle) and instead search for its physical footprints. What is it about a highly excitable piece of brain matter that gives rise to consciousness? Once we can understand that, we hope to get closer to solving the more fundamental problem. We seek, in particular, the neuronal correlates of consciousness (NCC), defined as the minimal neuronal mechanisms jointly sufficient for any specific conscious experience. What must happen in your brain for you to experience a toothache, for example? Must some nerve cells vibrate at some magical frequency? Do some special “consciousness neurons” have to be activated? In which brain regions would these cells be located? One important lesson from the spinal cord and the cerebellum is that the genie of consciousness does not just appear when any neural tissue is excited. More is needed. This additional factor is found in the gray matter making up the celebrated cerebral cortex, the outer surface of the brain. It is a laminated sheet of intricately interconnected nervous tissue, the size and width of a 14-inch pizza. Two of these sheets, highly folded, along with their hundreds of millions of wires—the white matter—are crammed into the skull. All available evidence implicates neocortical tissue in generating feelings. We can narrow down the seat of consciousness even further. So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. 
What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content? The truth is that we do not know. Even so—and excitingly—a recent finding indicates that neuroscientists may be getting closer.
  • 71. Neural Correlates of Consciousness From https://www.nature.com/articles/nrn.2016.22 • The neuronal correlates of consciousness (NCC) are the minimum neuronal mechanisms jointly sufficient for any one specific conscious experience. It is important to distinguish full NCC (the neural substrate supporting experience in general, irrespective of its specific content), content-specific NCC (the neural substrate supporting a particular content of experience — for example, faces, whether seen, dreamt or imagined) and background conditions (factors that enable consciousness, but do not contribute directly to the content of experience — for example, arousal systems that ensure adequate excitability of the NCC). • The no-report paradigm allows the NCC to be distinguished from events or processes — such as selective attention, memory and response preparation — that are associated with, precede or follow conscious experience. In such paradigms, trials with explicit reports are included along with trials without explicit reports, during which indirect physiological measures are used to infer what the participant is perceiving. • The best candidates for full and content-specific NCC are located in the posterior cerebral cortex, in a temporo-parietal-occipital hot zone. The content-specific NCC may be any particular subset of neurons within this hot zone that supports specific phenomenological distinctions, such as faces. • The two most widely used electrophysiological signatures of consciousness — gamma range oscillations and the P3b event-related potential — can be dissociated from conscious experiences and are more closely correlated with selective attention and novelty, respectively. • New electroencephalography- or functional MRI-based variables that measure the extent to which neuronal activity is both differentiated and integrated across the cortical sheet allow the NCC to be identified more precisely. 
Moreover, a combined transcranial magnetic stimulation–electroencephalography procedure can predict the presence or absence of consciousness in healthy people who are awake, deeply sleeping or under different types of anaesthesia, and in patients with disorders of consciousness, at the single-person level. • Extending the NCC derived from studies in people who can speak about the presence and quality of consciousness to patients with severe brain injuries, fetuses and newborn infants, non-mammalian species and intelligent machines is more challenging. For these purposes, it is essential to combine experimental studies to identify the NCC with a theoretical approach that characterizes in a principled manner what consciousness is and what is required of its physical substrate.
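The "differentiated and integrated" activity measures mentioned above (for example, the perturbational-complexity measures derived from the TMS–EEG procedure) are built on signal compressibility: conscious-like activity is neither flat nor purely periodic, so it compresses poorly. As a rough illustrative sketch only, not the published algorithm, the idea can be reduced to binarizing a signal and counting Lempel-Ziv-style phrases; all function names below are invented for illustration:

```python
def binarize(signal):
    """Threshold a 1-D signal at its median (complexity measures on EEG
    typically binarize activity before compressing it)."""
    med = sorted(signal)[len(signal) // 2]
    return "".join("1" if x > med else "0" for x in signal)

def lz_complexity(bits):
    """Count phrases in a simple dictionary (LZ78-style) parsing of a binary
    string; richer, less compressible signals yield more phrases."""
    phrases, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)
```

A flat or strictly periodic signal parses into few phrases, while differentiated activity does not; the published measures additionally normalize such counts against the signal's entropy.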
  • 72. Human Consciousness From https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00567/full Consciousness is not a process in the brain but a kind of behavior that, of course, is controlled by the brain like any other behavior. Human consciousness emerges on the interface between three components of animal behavior: communication, play, and the use of tools. These three components interact on the basis of anticipatory behavioral control, which is common for all complex forms of animal life. All three do not exclusively distinguish our close relatives, i.e., primates, but are broadly present among various species of mammals, birds, and even cephalopods; however, their particular combination in humans is unique. The interaction between communication and play yields symbolic games, most importantly language; the interaction between symbols and tools results in human praxis. Taken together, this gives rise to a mechanism that allows a creature, instead of performing controlling actions overtly, to play forward the corresponding behavioral options in a “second reality” of objectively (by means of tools) grounded symbolic systems. The theory possesses the following properties: (1) It is anti-reductionist and anti-eliminativist, and yet, human consciousness is considered as a purely natural (biological) phenomenon. (2) It avoids epiphenomenalism and indicates in which conditions human consciousness has evolutionary advantages, and in which it may even be disadvantageous. (3) It easily explains the most typical features of consciousness, such as objectivity, seriality and limited resources, the relationship between consciousness and explicit memory, the feeling of conscious agency, etc.
  • 73. Criteria for Consciousness From https://numenta.com/a-thousand-brains-by-jeff-hawkins 1. Learn a model of the world 2. Continuously remember the states of that model 3. Recall the remembered states
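The three criteria above map naturally onto a small data structure. A toy sketch, with all names invented for illustration (this is not code from the book):

```python
class ToyWorldModel:
    """Illustrative only: a structure satisfying the three listed criteria."""

    def __init__(self):
        self.transitions = {}   # 1. a learned model of the world
        self.state_log = []     # 2. a continuous record of the model's states

    def learn(self, state, next_state):
        """Learn a transition and log the state as it is experienced."""
        self.transitions[state] = next_state
        self.state_log.append(state)

    def predict(self, state):
        """Use the learned model to anticipate what follows a state."""
        return self.transitions.get(state)

    def recall(self, last_n=3):
        return self.state_log[-last_n:]  # 3. recall the remembered states
```

The point of the sketch is only that the three criteria are separable operations: learning writes the model, remembering logs its states, and recall reads that log back.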
  • 74. Consciousness From https://en.wikipedia.org/wiki/Consciousness Consciousness, at its simplest, is sentience or awareness of internal and external existence.[1] Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial,[2] being "at once the most familiar and [also the] most mysterious aspect of our lives".[3] Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists.[4] Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[5] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[6][7] There might be different levels or orders of consciousness,[8] or different kinds of consciousness, or just one kind with different features.[9] Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.[10]
  • 75. New Explanation for Consciousness From https://neurosciencenews.com/consciousness-theory-21571/ Consciousness is your awareness of yourself and the world around you. This awareness is subjective and unique to you. A Boston University Chobanian & Avedisian School of Medicine researcher has developed a new theory of consciousness, explaining why it developed, what it is good for, which disorders affect it, and why dieting (and resisting other urges) is so difficult. “In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us flexibly and creatively imagine the future and plan accordingly,” explained corresponding author Andrew Budson, MD, professor of neurology. “What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them. We knew that conscious processes were simply too slow to be actively involved in music, sports, and other activities where split-second reflexes are required. But if consciousness is not involved in such processes, then a better explanation of what consciousness does was needed,” said Budson, who also is Chief of Cognitive & Behavioral Neurology, Associate Chief of Staff for Education, and Director of the Center for Translational Cognitive Neuroscience at the Veterans Affairs (VA) Boston Healthcare System. According to the researchers, this theory is important because it explains that all our decisions and actions are actually made unconsciously, although we fool ourselves into believing that we consciously made them. “Even our thoughts are not generally under our conscious control. This lack of control is why we may have difficulty stopping a stream of thoughts running through our head as we’re trying to go to sleep, and also why mindfulness is hard,” adds Budson.
  • 77. Brain-Computer Interfaces From https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication pathway between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[1] Implementations of BCIs range from non-invasive (EEG, MEG, EOG, MRI) and partially invasive (ECoG and endovascular) to invasive (microelectrode array), based on how close electrodes get to brain tissue.[2] Research on BCIs began in the 1970s by Jacques Vidal at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[3][4] Vidal's 1973 paper marks the first appearance of the expression brain–computer interface in scientific literature. Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels.[5] Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s. Recently, studies in human-computer interaction via the application of machine learning to statistical temporal features extracted from the frontal lobe (EEG brainwave) data have had high levels of success in autonomous recognition of fall detection as a medical alarm,[6] mental state (Relaxed, Neutral, Concentrating),[7] mental emotional state (Negative, Neutral, Positive),[8] and thalamocortical dysrhythmia.[9]
  • 79. From https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3497935/ Brain-Computer Interface Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function.
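The acquire/analyze/translate chain described above can be sketched as a pipeline of pluggable stages. Everything below is a hypothetical placeholder for illustration, not a real BCI API: the feature extractor, the classifier, and the command map are all injected.

```python
from typing import Callable, List

class BCIPipeline:
    """Toy acquire -> analyze -> translate chain with injectable stages."""

    def __init__(self,
                 extract: Callable[[List[float]], float],
                 classify: Callable[[float], str],
                 commands: dict):
        self.extract = extract    # analyze: raw samples -> feature value
        self.classify = classify  # analyze: feature value -> decoded intent
        self.commands = commands  # translate: intent -> device command

    def process(self, raw: List[float]) -> str:
        """Run one window of acquired samples through the chain."""
        return self.commands.get(self.classify(self.extract(raw)), "no-op")

# Example stages: mean amplitude as the feature, a simple threshold classifier.
mean_amp = lambda xs: sum(xs) / len(xs)
threshold = lambda f: "active" if f > 0.5 else "rest"
pipeline = BCIPipeline(mean_amp, threshold,
                       {"active": "move-cursor", "rest": "hold"})
```

Separating the stages this way mirrors the review's point that acquisition hardware, decoding algorithms, and output devices advance independently.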
  • 80. From https://www.frontiersin.org/articles/10.3389/fnsys.2021.578875/full Progress in Brain Computer Interface: Challenges and Opportunities Brain computer interfaces (BCI) provide a direct communication link between the brain and a computer or other external devices. They offer an extended degree of freedom either by strengthening or by substituting human peripheral working capacity and have potential applications in various fields such as rehabilitation, affective computing, robotics, gaming, and neuroscience. Significant research efforts on a global scale have delivered common platforms for technology standardization and help tackle highly complex and non-linear brain dynamics and related feature extraction and classification challenges. Time-variant psycho-neurophysiological fluctuations and their impact on brain signals impose another challenge for BCI researchers to transform the technology from laboratory experiments to plug-and-play daily life. This review summarizes state-of-the-art progress in the BCI field over the last decades and highlights critical challenges. The brain computer interface (BCI) is a direct and sometimes bidirectional communication tie-up between the brain and a computer or an external device, which involves no muscular stimulation. It has shown promise for rehabilitating subjects with motor impairments as well as for augmenting human working capacity either physically or cognitively (Lebedev and Nicolelis, 2017; Saha and Baumert, 2020). BCI was historically envisioned as a potential technology for augmenting/replacing existing neural rehabilitations or serving assistive devices controlled directly by the brain (Vidal, 1973; Birbaumer et al., 1999; Alcaide-Aguirre et al., 2017; Shahriari et al., 2019). The first systematic attempt to implement an electroencephalogram (EEG)-based BCI was made by J. J.
Vidal in 1973, who recorded the evoked electrical activity of the cerebral cortex from the intact skull using EEG (Vidal, 1973), a non-invasive technique first studied in humans by Berger (1929). Another early endeavor to establish direct communication between a computer and the brain of people with severe motor impairments had utilized P300, an event-related brain potential (Farwell and Donchin, 1988). As an alternative to conventional therapeutic rehabilitation for motor impairments, BCI technology helps to artificially augment or re-excite synaptic plasticity in affected neural circuits. By exploiting undamaged cognitive and emotional functions, BCI aims at re-establishing the link between the brain and an impaired peripheral site (Vansteensel et al., 2016). However, the research applications of BCI technology evolved significantly over the years, including brain fingerprinting for lie detection (Farwell et al., 2014), detecting drowsiness for improving human working performances (Aricò et al., 2016; Wei et al., 2018), estimating reaction time (Wu et al., 2017b), controlling virtual reality (Vourvopoulos et al., 2019), quadcopters (LaFleur et al., 2013) and video games (Singh et al., 2020), and driving humanoid robots (Choi and Jo, 2013; Spataro et al., 2017). Figure 1 demonstrates the progression of BCI in various application fields since its conception.
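The feature extraction mentioned above often starts from spectral band power, e.g., energy in the 8–12 Hz alpha band of an EEG window. As a rough sketch only, using a naive DFT rather than the filtered, windowed estimators real pipelines use (the function name is invented):

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Sum naive DFT power over bins whose frequency lies in [f_lo, f_hi] Hz.
    Illustrative only; practical EEG pipelines filter and window first."""
    n = len(samples)
    total = 0.0
    for k in range(n // 2 + 1):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            im = sum(-x * math.sin(2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            total += (re * re + im * im) / n
    return total
```

A one-second 10 Hz sinusoid sampled at 128 Hz concentrates nearly all of its power in the 8–12 Hz band, so a classifier fed band-power features can separate such rhythms from activity in other bands.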
  • 81. From http://people.uncw.edu/tothj/PSY595/Lebedev-Brain-Machine%20Interfaces-TiN-2006.pdf Brain–machine interfaces: past, present and future Since the original demonstration that electrical activity generated by ensembles of cortical neurons can be employed directly to control a robotic manipulator, research on brain–machine interfaces (BMIs) has experienced impressive growth. Today BMIs designed for both experimental and clinical studies can translate raw neuronal signals into motor commands that reproduce arm reaching and hand grasping movements in artificial actuators. Clearly, these developments hold promise for the restoration of limb mobility in paralyzed subjects. However, as we review here, before this goal can be reached several bottlenecks have to be passed. These include designing a fully implantable biocompatible recording device, further developing real-time computational algorithms, introducing a method for providing the brain with sensory feedback from the actuators, and designing and building artificial prostheses that can be controlled directly by brain-derived signals. By reaching these milestones, future BMIs will be able to drive and control revolutionary prostheses that feel and act like the human arm.