
International Journal of Software Science and Computational Intelligence

Volume 10 • Issue 1 • January-March 2018

Cognitive Computing:
Methodologies for Neural Computing and
Semantic Computing in Brain-Inspired Systems
Yingxu Wang, University of Calgary, Calgary, Canada
Victor Raskin, Purdue University, West Lafayette, IN, USA
Julia Rayz, Purdue University, West Lafayette, IN, USA
George Baciu, Hong Kong Polytechnic University, Hung Hom, Hong Kong
Aladdin Ayesh, De Montfort University, Leicester, UK
Fumio Mizoguchi, Tokyo University of Science, Shinjuku, Japan
Shusaku Tsumoto, University of Shimane, Matsue, Japan
Dilip Patel, London South Bank University, London, UK
Newton Howard, University of Oxford, Oxford, UK

ABSTRACT

Cognitive Computing (CC) is a contemporary field of study on intelligent computing methodologies and brain-inspired mechanisms of cognitive systems, cognitive machine learning, and cognitive robotics. The IEEE conference ICCI*CC'17 on Cognitive Informatics and Cognitive Computing focused on the theme of neurocomputation, cognitive machine learning, and brain-inspired systems. This article reports the plenary panel (Part II) of IEEE ICCI*CC'17 at Oxford University. The summary is contributed by distinguished panelists who are among the world's renowned scholars in the transdisciplinary field of cognitive computing.

Keywords
Applications, Artificial Intelligence, Brain-Inspired Systems, Cognitive Computing, Cognitive Engineering,
Cognitive Robotics, Cognitive Systems, Computational Intelligence, Denotational Mathematics

1. INTRODUCTION

Cognitive Computing (CC) is a novel paradigm of intelligent computing platforms and methodologies
for developing cognitive and autonomous systems mimicking the mechanisms of the brain (Wang,
2002, 2003, 2007a, 2009a-c, 2011b, 2012e, 2013c, 2015a, 2016a, 2017a; Wang et al., 2009, 2010,
2016; Howard et al., 2017). CC emerged from transdisciplinary studies in both natural intelligence
in cognitive/brain sciences (Anderson, 1983; Sternberg, 1998; Reisberg, 2001; Wilson & Keil,
2001; Wang, 2002, 2007a; Wang et al., 2002, 2007a, 2009a, 2009b, 2016a, 2017a) and artificial
intelligence in computer science (Bender, 1996; Poole et al., 1997; Zadeh, 1999, 2016; Widrow, 2015;
Widrow et al., 2015; Wang, 2010a, 2016c). Formal models are sought to reveal the principles
and mechanisms of the brain. This leads to the theory of abstract intelligence (αI) (Wang, 2009a,
2012c), which investigates the brain not only through inductive syntheses of theories and principles of
intelligence science via mathematical engineering, but also through deductive analyses of architectural

DOI: 10.4018/IJSSCI.2018010101

Copyright © 2018, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.



and behavioral instances of natural and artificial intelligent systems through cognitive engineering.
The key methodology suitable for dealing with the nature of αI is mathematical engineering, which
is an emerging discipline of contemporary engineering that studies the formal structural models and
functions of complex abstract and mental objects as well as their systematic and rigorous manipulations
(Wang, 2015a; Wang et al., 2017a).
Fundamental theories of CC cover the Layered Reference Model of the Brain (LRMB) (Wang
et al., 2006), the Object-Attribute-Relation (OAR) model of internal information and knowledge
representation (Wang, 2007c), the Cognitive Functional Model of the Brain (CFMB) (Wang &
Wang, 2006), Abstract Intelligence (αI), Neuroinformatics (Wang, 2013a; Wang & Fariello, 2012),
Denotational Mathematics (Wang, 2008, 2009d, 2012a, 2012b), Cognitive Linguistics (Wang and
Berwick, 2012, 2013), the Spike Frequency Modulation (SFM) Theory of neural signaling (Wang,
2016h), the Neural Circuit Theories (Wang and Fariello, 2012), and cognitive systems (Wang et al.,
2017). Recent studies on LRMB reveal an entire set of cognitive functions of the brain and their
cognitive processes, which explain the cognitive mechanisms and processes of the natural intelligence
with 52 cognitive processes at seven layers known as the sensation, action, memory, perception,
cognitive, inference, and intelligence layers (Wang et al., 2006).
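As a rough illustration of the layered structure just described, the seven LRMB layers can be modeled as an ordered hierarchy; the layer names and their bottom-up order follow the paper, while the helper functions are purely hypothetical and not part of the LRMB formalism:

```python
# The seven LRMB layers in bottom-up order, as enumerated in the text;
# the helpers below are illustrative, not part of the LRMB formalism.
LRMB_LAYERS = [
    "sensation", "action", "memory", "perception",
    "cognitive", "inference", "intelligence",
]

def layer_index(name: str) -> int:
    """Return the 1-based bottom-up position of a layer."""
    return LRMB_LAYERS.index(name) + 1

def is_higher(a: str, b: str) -> bool:
    """True if layer `a` sits above layer `b` in the hierarchy."""
    return layer_index(a) > layer_index(b)
```

For example, `is_higher("inference", "memory")` holds, reflecting that inference processes build on memory processes in the bottom-up hierarchy.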
IEEE ICCI*CC'17 on Cognitive Informatics and Cognitive Computing was held at
Oxford University during July 26-28, 2017 (Howard et al., 2017). This paper is a summary of the
position statements of invited panellists presented in the Plenary Panel of IEEE ICCI*CC 2017 on
Neurocomputation, Cognitive Machine Learning and Brain-Inspired System (Part II). It is noteworthy
that the individual statements and opinions included in this paper may not necessarily be shared by
all panellists.

2. THE THEORETICAL FRAMEWORK OF BRAIN AND INTELLIGENCE SCIENCES FOR COGNITIVE COMPUTING

It is recognized that the brain may be explained by a hierarchically reductive structure at the logical,
cognitive, physiological and neurological levels from the bottom up, which form the studies known as
abstract intelligence, cognitive informatics, brain informatics, and neuroinformatics. The theoretical
framework of brain science and intelligence science can be described as shown in Figure 1 according
to cognitive informatics studies (Wang, 2007a, 2008, 2009a, 2011b, 2012c/e, 2015b/d, 2016a, 2017a).
The synergy of multidisciplinary studies at all levels leads to the theory of cognitive computing for
explaining the brain. The fundamental theories underpinning the framework of brain and intelligence
sciences are abstract intelligence (αI) and denotational mathematics (DM), which includes concept
algebra (Wang, 2015e), semantic algebra (Wang, 2013b), behavioral process algebra (Wang, 2007b,
2014b), inference algebra (Wang, 2011a), fuzzy logical algebra (Wang, 2016f), probability algebra
(Wang, 2015c, 2016g), big data algebra (Wang, 2016e), relation algebra (Wang, 2017b), system
algebra (Wang, 2015d) and visual semantic algebra (Wang, 2009e) towards mathematical engineering
(Wang, 2015a; Wang et al., 2017a).
Neuroinformatics (NI) is the fundamental level of brain studies in the hierarchical framework of
brain/intelligence science, which inquires into the primitive forms and mechanisms of natural intelligence
at the neurological level towards those of brain informatics at the physiological level, cognitive
informatics at the functional level, and abstract intelligence at the logical level (Wang, 2013a; Wang
& Fariello, 2012). NI is a transdisciplinary field that studies the neurological models and neural
representations of genetic information via DNA and acquired information via cognitive neurology
and neurocomputation. NI encompasses theories and methodologies for neural information processing
and neural knowledge representations. A set of fundamental issues is studied in NI including neural
models of genetic and acquired information, neural signaling theory, the neural circuit theory, and
neural representation of memory and knowledge.


Figure 1. The theoretical framework of brain and intelligence sciences underpinning CC

Brain Informatics (BI) is the second level of brain studies in the hierarchical framework of brain/
intelligence science, which is built on NI at the neurological level towards cognitive informatics at
the functional level (Wang, 2011b, 2012c). BI is a joint field of brain and information sciences that
studies information processing mechanisms of the brain at the physiological level by computing
and brain imaging technologies. BI explains how the most complicated physiological organ, the
brain, is formed based on the space/time divided nervous systems according to observations in brain
anatomy and neurophysiology (Woolsey et al., 2008; Carter et al., 2009; Marieb 1992; Sternberg,
1998; Dayan and Abbott, 2001; Wilson & Keil, 2001; Wang & Fariello, 2012). It is recognized that
the exploration of the brain is a complicated recursive problem, where contemporary denotational
mathematics is needed in order to deal efficiently with its extreme complexity. Cognitive psychology
and brain science used to explain the brain based on empirical relations between stimuli/tasks and
reactions in the cortices. However, the lack of a precise logical model of the brain at a higher level
has prevented a rigorous explanation of the brain, as if missing the forest for the trees. This challenge has
led to the investigation into upper layers of the problem towards cognitive and abstract intelligence
theories for the brain.
Cognitive Informatics (CI) is the third level of brain studies in the hierarchical framework of
brain/intelligence science, which is built on both NI and BI at the neurological and physiological
levels towards abstract intelligence at the logical level (Wang, 2002, 2003, 2006, 2007a; Wang et
al., 2006, 2009a, 2016a). CI is a term coined by Wang in the first IEEE International Conference on
Cognitive Informatics (ICCI 2002) (Wang, 2002; Wang et al., 2002). Beyond the narrow sense, the
broad sense of CI is an overarching theory and denotational mathematical means for brain/intelligence
science, referring to the definition at the beginning of this paper. CI studies natural intelligence
and the brain from both theoretical and computational approaches, which rigorously explain the
mechanisms of the brain by a fundamental theory known as abstract intelligence. Inversely, CI theories
have also paved a way to the development of the next generation brain-inspired computers known as
cognitive computers (Wang, 2009b, 2012b/e).
Abstract Intelligence (αI) is the most precise level of brain studies in the hierarchical framework
of brain/intelligence science (Wang, 2009a, 2012c). Intelligence is a human or a system ability that
transforms information into behaviors among the cognitive objects of data, information, knowledge
and wisdom. Recent basic studies reveal that novel solutions to fundamental AI problems are deeply
rooted in both the understanding of the natural intelligence (Wang, 2002, 2017c; Wang et al., 2006)
and the maturity of suitable mathematical means for rigorously modeling the brain in machine


understandable forms (Wang, 2015d, 2016d; Wang & Berwick, 2012, 2013). αI is the general
mathematical theory of intelligence as a complex natural mechanism that transforms information
into behaviors and knowledge at the embodied neurological, physiological, cognitive, and logical
levels via bottom-up aggregations or top-down reductions. The αI theory serves as the foundation
for a multidisciplinary and transdisciplinary enquiry of the brain and intelligence sciences. αI led
to a coherent theory based on both denotational mathematical models and cognitive psychology
observations, which enables the brain to be rigorously and precisely explained through the hierarchical
framework of brain/intelligence science as modeled in Figure 1.
The logical model of the brain and the αI theory of natural intelligence will enable the
development of cognitive computers (Wang, 2009b, 2012b/e) that perceive, think, infer, and learn.
The theoretical and functional difference between cognitive computers and classic ones is that the
latter are data processors based on Boolean algebra and its logical counterparts, while the former are
knowledge processors based on contemporary denotational mathematics. A wide range of applications
of cognitive computers has been developed at ICIC (ICIC, 2017), such as cognitive robots (Wang
2010a, 2015a/b, 2016a/c, 2017a), cognitive machine learning systems (Wang, 2016d; Wang et al.,
2017), cognitive search engines (Wang, 2010b) and cognitive translators (Wang & Berwick, 2012).
(This section is contributed by Prof. Yingxu Wang.)

3. NO SUBSTANCE-FREE COGNITION HERE!

In her contribution to this joint paper, Professor Julia Taylor Rayz states the importance of access
to implicit information in our field. What this means is that the computer needs to possess the entire
human knowledge of the world to process information. This sounds hopeless and unattainable and
seems to encourage alternative approaches, for instance, the obvious recourse to statistics when access
to real knowledge is denied. I disagree with Disraeli (or whoever it was), who said that statistics is
the biggest lie: actually, statistics does not contain any answer to the question of how things are,
and it is the failure to account for that limitation which is deceptive. Training a whole generation of young
scholars to ignore this is wasteful, and we have been doing it for almost three decades with machine
learning and now neural networks.
These approaches are based on good and constantly improving statistical methods, and they are possibly
applicable to closed and limited domains; but their use is not limited to those, and it especially
backfires in natural language processing (NLP), which is not a closed domain because of its openness to
human knowledge of the world that is not included in any big data, the latest save-it-all buzzword of
the adepts. In NLP, which is almost completely dominated by these approaches, researchers pride
themselves on not knowing anything about the nature and substance of the domain the methods are
applied to and on the complete universality of their methods. That very universality makes the results
non-transparent and non-explicable, and the resulting cognition nil.
The opposite approach, in NLP, is understanding the substance of human knowledge, representing
it formally, and making it available to the computer incrementally as per need. This is augmented by
understanding the phenomenon of grain size, and methodologically, developing an approach which
utilizes a sizable general ontology in conjunction with a capacity for its rapid and affordable hybrid
human-computer extension as the need arises. It is practiced by a few but rapidly growing number
of research groups in various countries, often related to or directly derivative from our Ontological
Semantic Technology project.
The semantic ontology is a conceptual theory of the world. It is not a limited taxonomy of
objects, not a terminological vocabulary, and not created automatically from text. Instead, it contains all the
objects and all the events in the world, linked by multiple properties. It has no synonyms, antonyms
or homonyms, it has no paraphrases, and it is the same for all languages, natural or constructed, e.g.,
robotic. The language-dependent lexicons, which do have all of that, are anchored in the ontology, and the
anchoring is acquired on the internet in a proprietary resource, where humans play a brief but crucial


role. Every sentence of a natural language is automatically translated into a conceptual formula, and
all the synonyms have the same representation as do all of its translations into another language.
All the formal expressions of the computer language are interpretable both in natural language
and in the conceptual cognitive structure. More importantly, they are related to each other in the way humans
relate them. These relations reflect cognition, and this makes the computing cognitive.
The ontology is never complete and can always be easily extended to accommodate a new domain
or to develop into a more refined grain size. The already implemented parts of ontology have served
existing applications, and the software supports a very valuable “blame assignment” technology
which identifies processing faults and corrects the ontology and other resources as needed, mostly
automatically but referring them to a human expert when necessary. (This section was contributed
by Prof. Victor Raskin.)
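The anchoring of language-dependent lexicons in a language-neutral ontology described above can be sketched in a few lines; the concept names, properties, and lexicon entries below are invented for illustration and do not reflect the actual Ontological Semantic Technology resources:

```python
# Invented sketch: a language-neutral ontology of concepts linked by
# properties, plus per-language lexicons anchored to those concepts.
ONTOLOGY = {
    "INGEST": {"is-a": "EVENT", "agent": "ANIMAL", "theme": "FOOD"},
    "FOOD":   {"is-a": "OBJECT"},
}

# Each lexicon maps words (including synonyms) of one language to concepts.
LEXICONS = {
    "en": {"eat": "INGEST", "consume": "INGEST", "food": "FOOD"},
    "es": {"comer": "INGEST", "comida": "FOOD"},
}

def anchor(word: str, language: str) -> str:
    """Resolve a word to its language-independent ontological concept."""
    return LEXICONS[language][word]
```

Because synonyms and translations anchor to the same concept, `anchor("eat", "en")`, `anchor("consume", "en")`, and `anchor("comer", "es")` all yield the single representation `"INGEST"`, mirroring the claim that the ontology has no synonyms and is the same for all languages.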

4. HUMAN COGNITION IN NEUROCOMPUTATION, MACHINE LEARNING AND BRAIN-INSPIRED SYSTEMS

With the advancement of deep machine learning algorithms and capabilities, the question in many
minds is when a machine will be able to perform at a level comparable to human cognition. This is
an interesting question, especially if we take an optimistic approach. The question can be rephrased
as: is it possible to learn from an enormous amount of data in a single dimension: either looking at
images, or looking at texts, or learning behavior of different car drivers. Intuitively, the answer should
be no: we are aware that people learn from multi-modal interactions with an environment and thus it
seems that multi-modality is a prerequisite for cognitive learning. The question can be asked one level
deeper – why do we need multi-modal learning, what is so special about it that it cannot be replaced?
What kind of knowledge are we getting and using that cannot be inferred from a million texts?
The answer is simple and it is based on the difference between implicit and explicit information.
Moreover, it is the difference between perceived implicit and explicated information. The texts that
we read and write are written for humans and thus, do not outline every single detail, especially
if people are already aware of it. Any new situation that we encounter is described with a starting
point of a familiar situation. Something previous unseen, unknown, or inexperienced can then be
compared to something known. This is what people are good at: build on previous knowledge and
provide additional details only as needed. A machine that is trying to read the same text as a human
does not have access to that background knowledge that a human has. The implicit information that
is at the fingertips of most people is not available to a machine. The same goes for images, driving,
or any other task that has been tackled with deep learning without integration of knowledge from
various modalities.
What is interesting is that various modalities carry very different explicit information about the
same object or event. A picture may be worth a thousand words about an object, but it is the words that
may describe what has happened to that object or what will happen to it. A picture of a menu (text)
or a picture of a dish to be served may tell something about it, but not as much as when information
from other sensors (smell, taste, etc) is added. Each of these dimensions and modalities adds what
others are unlikely to deliver. Thus, a description of a taste of wine outlines salient points, but does
not touch on the commonly accepted—implicit—information that can be derived only by tasting it.
It is the combination of these modalities that transforms implicit information into explicit information.
This leads to the main point of this talk: there is a lot of implicit information in any dimension
of representation. This implicit information has to be resolved for human-like learning and reasoning
to take place. If this is what we mean by cognition, then, on the premise that every dimension can resolve
some of the implicit unknowns of the other dimensions, multiple modalities should be integrated in
order to achieve cognitive machine learning. (This section is contributed by Prof. Julia T. Rayz.)
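The argument that each modality explicates what the others leave implicit can be caricatured in code; the modalities, attributes, and values below are invented for illustration:

```python
# Invented example: each modality observes a different subset of the
# attributes of one object (a dish); fusing them makes explicit
# information that no single modality delivers on its own.
def fuse(*modalities: dict) -> dict:
    """Merge per-modality observations into one explicit description."""
    fused: dict = {}
    for observation in modalities:
        fused.update(observation)
    return fused

vision = {"color": "red", "shape": "round"}        # from an image
text   = {"name": "tomato soup", "served": "hot"}  # from a menu
taste  = {"flavor": "savory"}                      # from tasting

dish = fuse(vision, text, taste)                   # all attributes explicit
```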


5. COGNITIVE CONVERGENCE OF SOCIAL MEDIA AND REMOTE SENSING

Capturing human activity through cognitive traces of salient features from text messaging and imaging
is opening up a new frontier in dynamically adaptive remote sensing. Akin to observing a colony
of ants, human activity analyzed through large datasets of high-frequency messaging can provide
higher-resolution demographics that can lead to the classification of common social behavioral patterns
within a region over time. This leads to a positive feedback loop that often intensifies economic
development and natural clustering of human activities in various regions of urban and rural settings.
There is wisdom in crowd behavior. Crowd-based large data cognitive analytics promises to
better track the use of resources in large urban areas and districts. On traditional maps, regions are
divided by artificial border lines, city limits, countries, or continents. These divisions are relatively
static over long periods of time. However, within these borders, many facilities keep changing. For
example, with the fast changing economy, large areas of land may be released for new residential
communities. Shops selling similar types of products may group together to form a business district.
The district itself may expand or shrink within the city border.
These internal changes are harder to capture by physical remote sensing. Yet, the large data
generated by active messaging consisting of text and images can potentially provide further insight
into the internal reorganization of these regions without explicitly resorting to physical features. This
forms a much more powerful stream of information about the natural cognitive behavior of large
crowds. If analyzed and interpreted correctly, these streams of textual and image data can cast insights
into the dynamic utilization of resources and enhance economic development.
A dynamic map requires detectors and sensors such as those found in remote sensing and geographic
information systems. However, these require special infrastructure deployment and maintenance. They
are often maintained by a centralized government agency and further require official approvals and
policy changes. Social media applications are so prevalent that the frequency of sampling regional
features can easily be increased simply by analyzing tagged photos. From the posting of user photos,
we can often establish the location. From the image description, we can deduce the features of that
location and the changes of the location over time caused by various social activities.
By analyzing the data accumulated over time, we can infer not only the transition of people's
interests, but also the evolution of regional development. Hence, social media can provide a set of
effective tools for cognitive understanding of crowd behavior within urban developments and at the
same time show the cognitive effects of crowds over the utility of resources within regions. This has
the potential to become a new science of localization and remote sensing that includes the dynamics
of the cognitive behavior of large crowds into the economic development of large urban settlements.
(This section is contributed by Prof. George Baciu.)
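A minimal sketch of deriving dynamic region features from tagged posts might look as follows; the grid size, coordinates, and tags are invented, and a real pipeline would add image analysis and temporal windowing:

```python
from collections import Counter, defaultdict

# Invented sketch: bucket geotagged posts into coarse grid cells and
# count activity tags per cell, a crude proxy for a region's function.
def cell(lat: float, lon: float, size: float = 0.01) -> tuple:
    """Snap a coordinate to a grid cell of `size` degrees."""
    return (round(lat / size) * size, round(lon / size) * size)

def region_profile(posts) -> dict:
    """posts: iterable of (lat, lon, tag) -> per-cell tag counts."""
    profile = defaultdict(Counter)
    for lat, lon, tag in posts:
        profile[cell(lat, lon)][tag] += 1
    return profile

posts = [
    (22.302, 114.177, "restaurant"),
    (22.303, 114.178, "restaurant"),
    (22.302, 114.177, "shop"),
]
profile = region_profile(posts)
```

Re-profiling over successive time windows would then expose the expansion or shrinkage of a district without any physical sensing infrastructure.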

6. EMOTIONS-BASED LEARNING BEYOND MOTIVATION TOWARDS A MACHINE PASSION AND EMERGENT PERSONALITY

The first thought one may have when we talk about introducing emotions into the learning process is
perhaps motivation. But motivation is a construct inferred from a subset of emotions related to desires.
Using motivation alone, whilst a valid starting point, neglects the wider scope of emotions and their
influence on cognitive faculties, including learning. To appreciate this point, let us look at it from the
opposite direction. Consider two students, both motivated by the desire to succeed in their exams and
by a love of learning. Both have the same intelligence level, measured by IQ or any acceptable alternative,
and yet when it comes to a given subject, let us assume mathematics, one of them finds it easy whilst the other
struggles to maintain the motivation to study it. How can we explain this phenomenon? If we can
explain it by a general theory of emotions, then how are these emotions encoded in
our motivational/performance model? Finally, how can a computational realization of such a theory
be developed? (This section is contributed by Prof. Aladdin Ayesh.)


7. COGNITIVE MACHINE LEARNING TOWARDS SELF-DRIVING CARS

In this position statement, we have focused on drivers' cognitive behavior using inductive logic
programming, which is regarded as cognitive machine learning. From the view of induction, we have
obtained the cognitive aspects of driving behavior with respect to memory resource allocation under
driver cognitive load (Mizoguchi et al., 2013). As an objective view of driving, we have measured the
eye movement of drivers (Mizoguchi et al., 2014). Eye-movement measurement, however, is not naturalistic
driving. For naturalistic driving, we have used a drive recorder without an eye-movement device. The
study was shown at the previous ICCI*CC 2016, where we presented the cognition of meta-level
inference. We will try to find out the characteristics of driver behavior on highways. At ICCI*CC 2017, we
used a common-sense reasoning method to avoid incidents of pedestrians' sudden crossing
behavior on narrow roads. We have changed the view from drivers on highways to driving on narrow roads
shared with pedestrians. The results suggest a next-generation self-driving system interacting with
human drivers on both highways and narrow roads. In order to do real-world driving experiments,
we adopted a simple driving recorder from the ordinary market. We have collected data on the highway in
Tokyo, starting from Meguro to Nagano on the Jouban highway in Japan, for analyses and simulations of
self-driving cars. (This section is contributed by Prof. Fumio Mizoguchi.)

8. GRANULAR COMPUTING FOR BIG DATA ENGINEERING

Big data has become a hot topic: with the development of sensors and smart devices, the amount of
collected data has exploded. For example, the hospital information system at Shimane University,
with 600 beds and about 1,000 patients visiting the outpatient clinics, stores 1TB of databases and
52TB of medical images. Recently, medical images have been increasing by 3.6TB per year. The records
of some patients now occupy more than 100GB. In the future, patients with long-term follow-up will
generate more than 1TB of data each, which even specialists will find hard to understand.
In this way, the computerization of services generates "big data" with a wide variety of levels of
granularity or hierarchy. The advance of big data analytics now enables us to measure and visualize
services from big data, which may become one of the major components of service science. However, since
services include temporal and sequential information, specific data mining techniques are required.
In dealing with temporal and sequential data, we need to incorporate elements of granular computing
and knowledge about information granularity, such as ontologies.
The notion of "granular computing" was proposed by Lotfi Zadeh (Zadeh, 1997), who pointed
out that humans can interpret data at different granular scales, which is one of the major
parts of human intelligence. Zadeh (Zadeh, 1997) and Lin (Lin, 1999) discuss that topology may
play a central role in granulation. Thus, big data analytics should include computation over the
elements of granulation, called granular computing, which will become one of the major parts of
big data research. We are now starting on these aspects of big data analytics, such as in (Tsumoto et
al., 2014, 2015, 2016; Iwata et al., 2015), which will be reported in the near future. (This section is
contributed by Prof. Shusaku Tsumoto.)
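The multi-granular view of temporal service data can be sketched as re-aggregating one event stream at coarser scales; the events and granule functions below are invented and are not the topological granulation Zadeh and Lin discuss:

```python
from collections import Counter
from datetime import date

# Invented sketch: one event stream interpreted at two granular scales.
def granulate(events, scale):
    """Count events per granule; `scale` maps a date to its granule key."""
    return Counter(scale(d) for d in events)

def by_day(d):
    return d.isoformat()           # fine granules

def by_month(d):
    return (d.year, d.month)       # coarser granules

events = [date(2017, 7, 26), date(2017, 7, 27), date(2017, 8, 1)]
daily = granulate(events, by_day)
monthly = granulate(events, by_month)
```

Choosing the granule function is the modeling decision: the same records support daily, monthly, or per-patient views without re-collecting data.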

9. AUTONOMOUS ONTOLOGY GENERATION BY COGNITIVE MACHINE LEARNING

An ontology is a taxonomic hierarchy of lexical terms and their syntactic and semantic relations
for representing a framework of structured knowledge. Ontologies used to be problem-specific and
manually built due to their extreme complexity. Based on the latest advances in cognitive knowledge
learning and formal semantic analyses, an Algorithm of Formal Ontology Generation (AFOG) is
developed. The methodology of AFOG enables autonomous generation of quantitative ontologies in
knowledge engineering and semantic comprehension via deep machine learning. A set of experiments


demonstrates applications of AFOG in cognitive computing, semantic computing, machine learning, and computational intelligence.
In order to rigorously improve the structure and methodology of ontology theories, a mathematical
model of formal concept (Wang, 2014a, 2015e, 2016b) is created that denotes any concept in human
knowledge as a triple of sets of attributes, objects, and relations. Based on the mathematical model
of concepts, a formal methodology for manipulating knowledge is enabled known as concept algebra
(Wang, 2015e), which provides a rigorous methodology for formal knowledge manipulations by a set
of algebraic operators on abstract concepts. Concept algebra enables the quantification of concepts by
semantic weights and the measurement of knowledge by the basic unit of binary relation (bir) (Wang,
2016b) towards knowledge science. Experimental results produced by the AFOG algorithm have
demonstrated an autonomous, accurate, rigorous and quantitative methodology for building formal
ontology, which outperforms human subjective counterparts. AFOG has paved a novel approach to
freeing humans from complex manual ontology building in knowledge engineering. The breakthrough in
machine knowledge learning and semantic comprehension has led to a wide range of applications in
cognitive machine learning, computational linguistics, cognitive knowledge bases and hybrid man-
machine intelligence. (This section is contributed by Prof. Yingxu Wang.)
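The triple model of a formal concept described above can be sketched as three sets; the similarity operator and the bir count below are deliberate simplifications for illustration, not the actual definitions of concept algebra:

```python
from dataclasses import dataclass

# Simplified sketch: a concept as a triple of object, attribute, and
# relation sets; the operators are illustrative stand-ins only.
@dataclass(frozen=True)
class Concept:
    objects: frozenset
    attributes: frozenset
    relations: frozenset  # (object, attribute) pairs, counted in "bir"

    def knowledge_bir(self) -> int:
        """Knowledge measured in binary relations (bir)."""
        return len(self.relations)

def similarity(a: Concept, b: Concept) -> float:
    """Shared attributes over total attributes (a Jaccard-style weight)."""
    union = a.attributes | b.attributes
    return len(a.attributes & b.attributes) / len(union) if union else 0.0

pen = Concept(frozenset({"pen"}),
              frozenset({"writes", "portable"}),
              frozenset({("pen", "writes"), ("pen", "portable")}))
pencil = Concept(frozenset({"pencil"}),
                 frozenset({"writes", "erasable"}),
                 frozenset({("pencil", "writes")}))
```

An ontology generator in this spirit would link concepts whose attribute overlap is high, weighting edges by such a semantic similarity.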

10. CONCLUSION

This paper has summarized a set of position statements presented in the plenary panel (Part II) of
IEEE ICCI*CC’17 on neurocomputation, cognitive machine learning and brain-inspired systems,
which were contributed by renowned panelists in the contemporary field of cognitive computing. It
has been elaborated that the theoretical foundations underpinning cognitive computing are cognitive
informatics and denotational mathematics. A wide range of theoretical breakthroughs and novel
engineering applications have been reported such as cognitive computing methodologies, cognitive
intelligence, cognitive robots, computing with words, deep learning machines, cognitive learning
engines, cognitive systems, cognitive knowledge bases, cognitive engineering, autonomous ontology
generation and cognitive self-driving cars.


REFERENCES

Anderson, J. R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard Univ. Press.
Baciu, G., Li, C., Wang, Y., & Zhang, X. (2016). Cloudet: A cloud-driven visual cognition of large streaming
data. International Journal of Cognitive Informatics and Natural Intelligence, 10(1), 12–31. doi:10.4018/
IJCINI.2016010102
Bender, E. A. (1996). Mathematical Methods in Artificial Intelligence. Los Alamitos, CA: IEEE CS Press.
Carter, R., Aldridge, S., Page, M., & Parker, S. (2009). Mapping the Mind. Berkeley, USA: Univ. of California
Press.
Dayan, P., & Abbott, L. E. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of
Neural Systems. MA: The MIT Press.
Harada, T., Iwasaki, H., Mori, K., Yoshizawa, A., & Mizoguchi, F. (2014). Evaluation Model of Cognitive
Distraction State Based on Eye Tracking Data Using Neural Networks. International Journal of Software Science
and Computational Intelligence, 6(1), 1–16. doi:10.4018/ijssci.2014010101
Harbluk, J. L., Noy, Y. I., Trbovich, P. L., & Eizenman, M. (2007). An on-road assessment of cognitive distraction: Impacts
on drivers' visual behavior and braking performance. Accident Analysis & Prevention, 39(2), 372–379.
doi:10.1016/j.aap.2006.08.013 PMID:17054894
Howard, N., Wang, Y., Hussain, A., Hamdy, F., Widrow, B., & Zadeh, L. A. (Eds.). (2017).
Proceedings of the 16th IEEE International Conference on Cognitive Informatics and Cognitive Computing
(ICCI*CC'17). IEEE Computer Society Press.
ICIC. (2017). The International Institute of Cognitive Informatics and Cognitive Computing (ICIC). Retrieved
from http://www.ucalgary.ca/icic/
Iwata, H., Hirano, S., & Tsumoto, S. (2015). Maintenance and discovery of domain knowledge for nursing care
using data in hospital information system. Fundamenta Informaticae, 137(2), 237–252.
Lin, T. Y. (1999). Granular Computing: Fuzzy Logic and Rough Sets. In L. A. Zadeh & J. Kacprzyk (Eds.),
Computing with words in information/intelligent systems (pp. 183–200). Springer-Verlag. doi:10.1007/978-3-
7908-1873-4_9
Marieb, E. N. (1992). Human Anatomy and Physiology (2nd ed.). Redwood City, CA: The Benjamin/Cummings
Publishing Co.
Mizoguchi, F., Ohwada, H., Nishiyama, H., Yoshizawa, A., & Iwasaki, H. (2015). Identifying Driver’s Cognitive
Distraction Using Inductive Logic Programming. In Proceedings of the 25th International Conference on
Inductive Logic Programming (ILP 2015).
Mizoguchi, F., Nishiyama, H., & Iwasaki, H. (2014). A New Approach to Detecting Distracted Car Drivers
Using Eye Movement Data. In Proceedings of the 13th IEEE International Conference on Cognitive Informatics
& Cognitive Computing (ICCI*CC’14) (pp. 266-272).
Mizoguchi, F., Yoshizawa, A., & Iwasaki, H. (2017). Common Sense Reasoning Approach to Avoid Near Miss
Incident at the Narrow Road of Sudden Crossing Behavior of Pedestrian. In Proceedings of the 16th IEEE
International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC’17).
Mueller, E. T. (2014). Commonsense Reasoning. Morgan Kaufmann.
Patel, S., Wang, Y., Valipour, M., Zatarain, O. D., Gavrilova, M., Hussain, A., & Howard, N. (2017b). Formal
Ontology Generation by Deep Machine Learning. In Proceedings of the 16th IEEE International Conference
on Cognitive Informatics and Cognitive Computing (ICCI*CC 2017), University of Oxford, UK, July 26-28
(pp. 6-15). IEEE CS Press.
Poole, D., Mackworth, A., & Goebel, R. (1997). Computational Intelligence: A Logical Approach. Oxford, UK:
Oxford University Press.
Reisberg, D. (2001). Cognition: Exploring the Science of the Mind. Norton & Company, Inc.

Sega, S., Iwasaki, H., Hiraishi, H., & Mizoguchi, F. (2011). Qualitative Reasoning Approach to a Driver’s
Cognitive Mental Load. International Journal of Software Science and Computational Intelligence, 3(4), 18–32.
doi:10.4018/jssci.2011100102
Sega, S., Iwasaki, H., Hiraishi, H., & Mizoguchi, F. (2011). Applying qualitative reasoning to a driver’s cognitive
mental load. In Proceedings of the 10th IEEE International Conference on Cognitive Informatics & Cognitive
Computing (ICCI*CC’11).
Sternberg, R. J. (1998). In Search of the Human Mind (2nd ed.). Orlando, FL: Harcourt Brace & Co.
Tsumoto, S., Hirano, S., & Iwata, H. (2016). Mining Process for Improvement of Clinical Process
Quality. In Proceedings of IEEE Big Data ’16 (pp. 1982–1990).
Tsumoto, S., Hirano, S., & Kimura, T. (2016). Mining Text for Disease Diagnosis in Hospital Information
System. In Proceedings of the IEEE Big Data 2016, Boston, MA, November 11-16.
Tsumoto, S., Iwata, H., Hirano, S., & Tsumoto, Y. (2014). Similarity-based behavior and process mining of
medical practices. Future Generation Computer Systems, 33(1), 21–31. doi:10.1016/j.future.2013.10.014
Vapnik, V. (1995). The Nature of Statistical Learning Theory. Springer-Verlag.
Wang, Y. (2002). Keynote: On Cognitive Informatics. In Proc. 1st IEEE International Conference on Cognitive
Informatics (ICCI’02), Calgary, Canada. IEEE CS Press. doi:10.1109/COGINF.2002.1039280
Wang, Y. (2003). On Cognitive Informatics. Brain and Mind: A Transdisciplinary Journal of Neuroscience and
Neurophilosophy, 4(2), 151–167.
Wang, Y. (2006). Keynote: Cognitive Informatics - Towards the Future Generation Computers that Think and
Feel. In Proc. 5th IEEE International Conference on Cognitive Informatics (ICCI’06), Beijing, China. IEEE CS
Press. doi:10.1109/COGINF.2006.365666
Wang, Y. (2006). Cognitive Informatics Models of the Brain. IEEE Transactions on Systems, Man, and Cybernetics
(Part C), 36(2), 203–207.
Wang, Y. (2007a). The Theoretical Framework of Cognitive Informatics. International Journal of Cognitive
Informatics and Natural Intelligence, 1(1), 1–27. doi:10.4018/jcini.2007010101
Wang, Y. (2007b). Software Engineering Foundations: A Software Science Perspective, CRC Series in Software
Engineering. NY: Auerbach Publications.
Wang, Y. (2007c). The OAR Model of Neural Informatics for Internal Knowledge Representation in the
Brain. International Journal of Cognitive Informatics and Natural Intelligence, 1(3), 64–75. doi:10.4018/
jcini.2007070105
Wang, Y. (2008). On Contemporary Denotational Mathematics for Computational Intelligence. Transactions
of Computational Science, 2, 6–29.
Wang, Y. (2009a). On Abstract Intelligence: Toward a Unified Theory of Natural, Artificial, Machinable, and
Computational Intelligence. International Journal of Software Science and Computational Intelligence, 1(1),
1–18. doi:10.4018/jssci.2009010101
Wang, Y. (2009b). On Cognitive Computing. International Journal of Software Science and Computational
Intelligence, 1(3), 1–15. doi:10.4018/jssci.2009070101
Wang, Y. (2009c). Keynote: Cognitive Computing and Machinable Thought. In Proceedings of the 8th IEEE
Int’l Conference on Cognitive Informatics (ICCI’09), Hong Kong.
Wang, Y. (2009d). Paradigms of Denotational Mathematics for Cognitive Informatics and Cognitive Computing.
Fundamenta Informaticae, 90(3), 282–303.
Wang, Y. (2009e). On Visual Semantic Algebra (VSA): A Denotational Mathematical Structure for Modeling
and Manipulating Visual Objects and Patterns. International Journal of Software Science and Computational
Intelligence, 1(4), 1–16. doi:10.4018/jssci.2009062501

Wang, Y. (2010a). Cognitive Robots: A Reference Model towards Intelligent Authentication. IEEE Robotics
and Automation, 17(4), 54–62. doi:10.1109/MRA.2010.938842
Wang, Y. (2010b). Keynote: Cognitive Computing and World Wide Wisdom (WWW+). In Proc. 9th
IEEE Int’l Conf. Cognitive Informatics (ICCI’10), Tsinghua Univ., Beijing. IEEE CS Press. doi:10.1109/
COGINF.2010.5599737
Wang, Y. (2011a). Inference Algebra (IA): A Denotational Mathematics for Cognitive Computing and Machine
Reasoning (I). International Journal of Cognitive Informatics and Natural Intelligence, 5(4), 62–83. doi:10.4018/
jcini.2011100105
Wang, Y. (2011b). Towards the Synergy of Cognitive Informatics, Neural Informatics, Brain Informatics, and
Cognitive Computing. International Journal of Cognitive Informatics and Natural Intelligence, 5(1), 75–93.
doi:10.4018/jcini.2011010105
Wang, Y. (2012a). In Search of Denotational Mathematics: Novel Mathematical Means for Contemporary
Intelligence, Brain, and Knowledge Sciences. Journal of Advanced Mathematics and Applications, 1(1), 1–31.
doi:10.1166/jama.2012.1001
Wang, Y. (2012b). On the Denotational Mathematics Foundations for the Next Generation of Computers:
Cognitive Computers for Knowledge Processing. Journal of Advanced Mathematics and Applications, 1(1),
101–112. doi:10.1166/jama.2012.1009
Wang, Y. (2012c). On Abstract Intelligence and Brain Informatics: Mapping Cognitive Functions of the Brain
onto its Neural Structures. International Journal of Cognitive Informatics and Natural Intelligence, 6(4), 54–80.
doi:10.4018/jcini.2012100103
Wang, Y. (2012d). Contemporary Mathematics as a Metamethodology of Science, Engineering, Society, and
Humanity. Journal of Advanced Mathematics and Applications, 1(1), 1–3. doi:10.1166/jama.2012.1001
Wang, Y. (2012e). Keynote: Towards the Next Generation of Cognitive Computers: Knowledge vs. Data
Computers. In Proceedings of the 12th International Conference on Computational Science and Applications
(ICCSA’12), Salvador, Brazil. Springer.
Wang, Y. (2013a). Neuroinformatics Models of Human Memory: Mapping the Cognitive Functions of Memory
onto Neurophysiological Structures of the Brain. International Journal of Cognitive Informatics and Natural
Intelligence, 7(1), 98–122. doi:10.4018/jcini.2013010105
Wang, Y. (2013b). On Semantic Algebra: A Denotational Mathematics for Cognitive Linguistics, Machine
Learning, and Cognitive Computing. Journal of Advanced Mathematics and Applications, 2(2), 145–161.
doi:10.1166/jama.2013.1039
Wang, Y. (2013c). A Semantic Algebra for Cognitive Linguistics and Cognitive Computing. In Proceedings of
12th IEEE International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC 2013) (pp.
17-25). New York: IEEE CS Press. doi:10.1109/ICCI-CC.2013.6622221
Wang, Y. (2014a). On a Novel Cognitive Knowledge Base (CKB) for Cognitive Robots and Machine Learning.
International Journal of Software Science and Computational Intelligence, 6(2), 42–64. doi:10.4018/
ijssci.2014040103
Wang, Y. (2014b). Software Science: On General Mathematical Models and Formal Properties of Software.
Journal of Advanced Mathematics and Applications, 3(2), 130–147. doi:10.1166/jama.2014.1060
Wang, Y. (2015a). Keynote: Cognitive Robotics and Mathematical Engineering. In Proceedings of the 15th IEEE
Int’l Conf. on Cognitive Informatics & Cognitive Computing (ICCI*CC 2015), Tsinghua Univ. IEEE CS Press.
Wang, Y. (2015b). Cognitive Learning Methodologies for Brain-Inspired Cognitive Robotics. International
Journal of Cognitive Informatics and Natural Intelligence, 9(2), 37–54. doi:10.4018/IJCINI.2015040103
Wang, Y. (2015c). Fuzzy Probability Algebra (FPA): A Theory of Fuzzy Probability for Fuzzy Inference and
Computational Intelligence. Journal of Advanced Mathematics and Applications, 4(1), 38–55. doi:10.1166/
jama.2015.1071

Wang, Y. (2015d). A Denotational Mathematical Theory of System Science: System Algebra for Formal System
Modeling and Manipulations. Journal of Advanced Mathematics and Applications, 4(2), 132–157. doi:10.1166/
jama.2015.1082
Wang, Y. (2015e). Concept Algebra: A Denotational Mathematics for Formal Knowledge Representation
and Cognitive Robot Learning. Journal of Advanced Mathematics & Applications, 4(1), 1–26. doi:10.1166/
jama.2015.1066
Wang, Y. (2016a). Keynote: Deep Reasoning and Thinking beyond Deep Learning by Cognitive Robots and
Brain-Inspired Systems. In Proceedings of the 15th IEEE International Conference on Cognitive Informatics
and Cognitive Computing (ICCI*CC 2016), Stanford University, Stanford, CA. IEEE CS Press.
Wang, Y. (2016b). On Cognitive Foundations and Mathematical Theories of Knowledge Science. International
Journal of Cognitive Informatics and Natural Intelligence, 10(2), 1–24. doi:10.4018/IJCINI.2016040101
Wang, Y. (2016c). Keynote: Soft Computing: Philosophical, Mathematical, and Theoretical Foundations
of Cognitive Robotics and Computational Intelligence. In Proceedings of the 6th World Conference on Soft
Computing (WConSC 2016), UC Berkeley, CA. Springer.
Wang, Y. (2016d). Keynote: Brain-Inspired Deep Machine Learning and Cognitive Learning Systems. In
Proceedings of the 8th International Conference on Brain Inspired Cognitive Systems (BICS’16), Beijing,
November 28-30.
Wang, Y. (2016e). Big Data Algebra: A Denotational Mathematics for Big Data Science and Engineering. Journal
of Advanced Mathematics and Applications, 5(1), 3–25. doi:10.1166/jama.2016.1096
Wang, Y. (2016f). Fuzzy Logical Algebra (FLA): A Denotational Mathematics for Formal Reasoning and
Knowledge Representation. Journal of Advanced Mathematics and Applications, 5(2), 145–158. doi:10.1166/
jama.2016.1104
Wang, Y. (2016g). On Probability Algebra: Classic Theory of Probability Revisited. WSEAS Transactions on
Mathematics, 15, 550–565.
Wang, Y. (2016h). Keynote: Cognitive Neuroscience and the Spike Frequency Modulation (SFM) Theory for
Neural Signaling Systems. In Proceedings of the 9th Global Neuroscience Conference (GNSC’16), Melbourne,
Australia, November 21-22.
Wang, Y. (2017a). Keynote: Cognitive Foundations of Knowledge Science and Deep Knowledge Learning by
Cognitive Robots. In Proceedings of the 16th IEEE International Conference on Cognitive Informatics and
Cognitive Computing (ICCI*CC 2017), University of Oxford, UK. IEEE CS Press.
Wang, Y. (2017b). On Relation Algebra: A Denotational Mathematical Structure of Relation Theory for
Knowledge Representation and Cognitive Computing. Journal of Advanced Mathematics and Applications,
6(1), 43–66. doi:10.1166/jama.2017.1126
Wang, Y. (2017c). Merging Minds and Machines: Cognitive Robots and Brain-Inspired Systems (Panel Statement).
In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC’17),
Banff, Canada, October 5-8.
Wang, Y., & Berwick, R. C. (2012). Towards a Formal Framework of Cognitive Linguistics. Journal of Advanced
Mathematics and Applications, 1(2), 250–263. doi:10.1166/jama.2012.1019
Wang, Y., & Berwick, R. C. (2013). Formal Relational Rules of English Syntax for Cognitive Linguistics,
Machine Learning, and Cognitive Computing. Journal of Advanced Mathematics and Applications, 2(2),
182–195. doi:10.1166/jama.2013.1042
Wang, Y., & Fariello, G. (2012). On Neuroinformatics: Mathematical Models of Neuroscience and Neurocomputing.
Journal of Advanced Mathematics and Applications, 1(2), 206–217. doi:10.1166/jama.2012.1015
Wang, Y., Howard, N., Plataniotis, K., Widrow, B., & Zadeh, L. A. (Eds.). (2016). Proceedings of the 15th
IEEE International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC’16), Stanford
University, CA. Los Alamitos, CA: IEEE Computer Society Press.
Wang, Y., Johnston, R. H., & Smith, M. R. (Eds.). (2002). Proceedings of the 1st IEEE International Conference
on Cognitive Informatics (ICCI’02). Calgary, Canada: IEEE CS Press.

Wang, Y., Kinsner, W., Anderson, J. A., Sheu, P., Tsai, J., Pedrycz, W., Zadeh, L. A., et al. (2009a). A Doctrine
of Cognitive Informatics. Fundamenta Informaticae, 90(3), 203–228.
Wang, Y., Kinsner, W., & Zhang, D. (2009b). Contemporary Cybernetics and its Faces of Cognitive Informatics
and Computational Intelligence. IEEE Transactions on Systems, Man, and Cybernetics (Part B), 39(4).
Wang, Y., & Lotfi, A. (2017a). Abstract Intelligence: Embodying and Enabling Cognitive Systems by
Mathematical Engineering. International Journal of Cognitive Informatics and Natural Intelligence, 11(1),
1–15. doi:10.4018/IJCINI.2017010101
Wang, Y., Wang, Y., Patel, S., & Patel, D. (2006). A Layered Reference Model of the Brain (LRMB). IEEE
Transactions on Systems, Man, and Cybernetics (Part C), 36(2), 124–133. doi:10.1109/TSMCC.2006.871126
PMID:16602600
Wang, Y., Zhang, D., & Kinsner, W. (Eds.). (2010). Advances in Cognitive Informatics and Cognitive Computing.
Springer. doi:10.1007/978-3-642-16083-7
Widrow, B. (2016). Keynote: Hebbian Learning and the LMS Algorithm. In Proceedings of the IEEE 15th
International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC’16), Stanford University,
CA. IEEE Press.
Widrow, B., Kim, Y., & Park, D. (2015). The Hebbian-LMS Learning Algorithm. IEEE Computational
Intelligence Magazine, 10(4), 37–53. doi:10.1109/MCI.2015.2471216
Wilson, R. A., & Keil, F. C. (2001). The MIT Encyclopedia of the Cognitive Sciences. MIT Press.
Woolsey, T. A., Hanaway, J., & Gado, M. H. (2008). The Brain Atlas: A Visual Guide to the Human Central
Nervous System (3rd ed.). NY: Wiley.
Yoshida, Y., Ohwada, H., & Mizoguchi, F. (2014). Temporal Discretization Method and Naïve Bayes Classifier
for Classifying Car Driver’s Cognitive Load. In Proceedings of the 29th International Conference on Computers
and Their Applications (pp. 9–14).
Zadeh, L. A. (1997). Toward a theory of fuzzy information granulation and its centrality in human reasoning
and fuzzy logic. Fuzzy Sets and Systems, 90(2), 111–127. doi:10.1016/S0165-0114(97)00077-8
Zadeh, L. A. (1999). From Computing with Numbers to Computing with Words - From Manipulation of
Measurements to Manipulation of Perceptions. IEEE Transactions on Circuits and Systems, 45(1), 105–119.
doi:10.1109/81.739259
Zadeh, L. A. (2016). Keynote: A Key Issue of Semantics of Information. In Proceedings of the IEEE 15th
International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC’16), Stanford University,
CA. IEEE Press.

Yingxu Wang is professor of cognitive informatics, brain science, software science, and denotational mathematics,
and President of the International Institute of Cognitive Informatics and Cognitive Computing (ICIC). He is a
Fellow of BCS, ICIC, and WIF, and a Senior Member of IEEE and ACM. He has been a visiting professor at Oxford
Univ., Stanford Univ., UC Berkeley, and MIT. He received a PhD from Nottingham Trent Univ. in 1998 and has been
a full professor since 1994. He is the founding steering committee chair of the IEEE International Conferences
on Cognitive Informatics and Cognitive Computing (ICCI*CC) since 2002, and the founding Editor-in-Chief of the
International Journal of Cognitive Informatics and Natural Intelligence, the International Journal of Software
Science and Computational Intelligence, and the Journal of Advanced Mathematics and Applications. He is the
initiator of several cutting-edge research fields, including cognitive informatics, denotational mathematics,
abstract intelligence (aI), mathematical models of the brain, and cognitive computing. He has published 460+
peer-reviewed papers and 30 books, and has presented 38 invited keynote speeches. He has served as general/program
chair for more than 23 international conferences. He is recognized by ResearchGate as a top-2.5% scholar
worldwide and among the top 10 at the Univ. of Calgary, Canada.
