MACHINE LEARNING / AI COGNITION
PART 1, the evolution of drawing and its fusion within modeled environments (TGN), may have a secondary effect on cognitive computing. The purpose of media is to support the thought process, to help us understand things, and media will continue evolving for this purpose. Tangerine’s TGN Spec will play an important role in making media a more energetic and able companion to thought and understanding, and may do so for both human and machine cognition.
What will cognitive systems (like IBM Watson) “see” when they parse multi-media/data modeled fusion environments enhanced by TGN?
Machine (and human) cognitive systems find, reveal, or generate connection, or correlation, between diverse and otherwise disconnected fragments of information. From correlation (one among other formative emergent aspects of cognition)1, understanding grows, meaning is found, and the essence of situations is pinpointed. Consequently, cognitive systems develop appropriate responses to queries, and they develop impetus toward appropriate action.
Action is beyond the scope of our work because “action involves a complex array of factors comprising the sum of intent, purpose, opportunity, consequence, agency, will,” etc. (or, see Modha). Consequently we focus instead on question and response, or DeepQA.
The power of cognitive systems is the power of meaningful discourse, dialog, conversation — between people and machine cognitive systems — conversation that generates meaningful answers to complex questions that don’t have pre-defined answers.
The evolution of digital spatial media described by TGN, already in its earlier form in the hands of designers and builders everywhere (Revizto et al.) as they think, act, and work, can amplify the detection of relevant correlations that span a variety of different, previously disconnected, data and media types, information bits and fragments.
By providing greater correlative connectivity, media fusion makes more fertile ground for cognition: a richer field in which the mind (human and machine) goes to work, the field in which understanding grows.
In a highly complex and always changing data environment — like the real world, full of people, actions, tasks, and myriad data — cross-data-type correlations are normally not easy to detect. And so machine intelligence has difficulty gaining traction in these kinds of environments.
Gaining traction for cognitive systems in complex spatial visual environments is precisely the possibility worth pursuing.
The fusion of spatial media (models), and the articulate act of “taking a closer look” (“drawing”, as it evolves via TGN), will make correlation more discoverable and therefore more accessible to cognitive systems. We can develop the methods that will give cognitive systems adequate traction in spatial data sets where today they mostly spin their wheels.
How IBM Watson Overpromised and Underdelivered on AI Health Care
After its triumph on Jeopardy!, IBM’s AI seemed poised to revolutionize medicine. Doctors are still waiting. https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care 02 Apr 2019
That article documents a story of spectacular failure. But within the story are also successes, along with lessons in what not to do. There is promise in taking the best of what’s available in machine learning and applying it within the multi-media datasets typical of AEC, particularly as those datasets are brought into fusion (adequately aligned) via TGN, enhancing existing AI efforts within AEC asset management applications.
Improved Organizational Discussion
Consider typical questions that infrastructure asset owner/operators ask as a matter of course. Those questions are not answered by AI systems. They’re answered by business organizations themselves, through organizational discourse – among technical experts using asset data as a resource.
TGN Part 2 is about improving organizational discourse, by providing software with the means and methods for bringing cognitive systems like https://www.ibm.com/watson into the discussion space between an organization’s asset data and its human evaluators.
Basically, the possibility of TGN Part 2 is to make an organization’s own asset data more revealing of answers to hard questions. Normally, an organization’s asset data appears as an opaque ocean of information, and yet we intuit that there is a lot of value in it. An organization deals with and evaluates that information. Cognitive systems can assist in doing so, but only if their capabilities extend beyond text into other data types, and that extension must penetrate beyond the superficial.
TGN Part 2 is about building a bridge between Watson or Watson-like cognitive systems, and industrial asset data, so that an asset’s information ecosystem becomes a more accessible source of answers to hard questions, and even a generator, potentially, of better questions.
In other words:
TGN Part 2 is about using media fusions, true fusions, such that possible correlations between features found in diverse data types may be detected and presented to a cognitive system for evaluation. The idea is that the totality of an asset’s data set (all of an asset’s data) becomes a more connective data source.
“Connective” is about being able to parse semantic meaning. For today’s cognitive systems, like Watson, this specifically means the semantic meaning of text: narratives in one selection of text relate to similar narratives in other pieces of text, even when the texts share no words or language in common, and texts of relevant meaning can be found within terabytes of data (Watson has demonstrated these capabilities for a decade now).
Important pieces of information, and even relevant ideas, can be recovered, not lost, and even synthesized.
That’s Watson technology, and similar from Amazon, Google, Apple, etc. I am suggesting that we use this technology better in AEC, by building a bridge from infrastructure asset data, in all its forms, to Watson (and similar cognitive systems).
You can imagine how a modeled element, or a detectable feature in a point cloud or photograph, can be a connecting puzzle piece that links a very large network of related Watson-parsable data. Our bridge will “simply” make infrastructure asset information a much more connective information network: not cluttered with random connections, but a network in which possible correlations in time and space, content and meaning, are surfaced to Watson, which then evaluates them, individually and in the context of the entire network of connections that Watson continuously evaluates.
That is already a primary function of Watson: correlation evaluation, match confidence assessment, an activity that Watson carries on progressively, continuously.
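As a rough illustration of what “surfacing possible correlations” across data types might look like, here is a hypothetical sketch, not anything from the TGN Spec or Watson’s actual API. It assumes asset fragments of different kinds (model element, photo, inspection note) have been aligned into a shared time and spatial frame by media fusion, then scores cross-type pairs by proximity and surfaces those above a confidence threshold. All names, scales, and the scoring formula are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Fragment:
    """One piece of asset data: a model element, a photo feature, a report, etc."""
    frag_id: str
    kind: str      # e.g. "model_element", "photo", "inspection_note"
    t: float       # timestamp in days from an arbitrary epoch
    xyz: tuple     # position in the shared spatial frame established by fusion

def correlation_score(a: Fragment, b: Fragment,
                      time_scale: float = 30.0, dist_scale: float = 5.0) -> float:
    """Crude proximity-in-time-and-space score in [0, 1]; a stand-in for
    whatever confidence model a real cognitive system would apply."""
    dt = abs(a.t - b.t)
    dx = math.dist(a.xyz, b.xyz)
    return math.exp(-dt / time_scale) * math.exp(-dx / dist_scale)

def surface_candidates(frags, threshold=0.5):
    """Return cross-type pairs whose correlation exceeds the threshold,
    strongest first, for downstream evaluation."""
    pairs = []
    for i, a in enumerate(frags):
        for b in frags[i + 1:]:
            if a.kind != b.kind:  # only cross-data-type correlations
                s = correlation_score(a, b)
                if s >= threshold:
                    pairs.append((a.frag_id, b.frag_id, round(s, 3)))
    return sorted(pairs, key=lambda p: -p[2])

# Invented example: a beam element, a photo taken near it two days later,
# and an unrelated note from months later and 50 m away.
frags = [
    Fragment("beam-12", "model_element", 0.0, (0.0, 0.0, 0.0)),
    Fragment("photo-7", "photo", 2.0, (1.0, 0.0, 0.0)),
    Fragment("note-3", "inspection_note", 200.0, (50.0, 0.0, 0.0)),
]
candidates = surface_candidates(frags)
```

Only the beam/photo pair clears the threshold; the distant note does not. The point is the division of labor: fusion makes the proximity computable at all, and the cognitive system then judges whether a surfaced candidate actually means anything.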
The potential here is greater connective richness brought to asset data, such that knowledge and insight are less likely to be lost, and more likely to be surfaceable when needed, in organizational question formation and organizational answer development.
Why it is difficult to do:
Criticism from a machine learning researcher:
“Those are all interesting thoughts. However, Machine Learning (as Watson is but one example of) is not the silver bullet that marketing — not least IBM’s – has been very successful in positioning it: yes, machine learning algorithms can do amazing things, e.g. like Watson with Jeopardy, or facial recognition, or AlphaGo etc. But all those examples required enormous amounts of training/ classification of data/features into crisp, equally well defined set of “game rules”, and well defined categories for the data. So, whether an ML paradigm will be applicable for a specific domain depends heavily on whether the data of that domain lends itself to categorization, and whether the domain obeys under well defined rules. It’s now quite a few years since IBM made news with Watson, but if you think about it: how much do we hear about Watson today…? Maybe I’m trawling the edges of the media buzz, but personally, I do not hear anywhere near as much buzz about Watson as I might have expected after the Jeopardy stunt… Not even in the medical diagnosis field, which was said to be the first “real world” domain for Watson have I heard any significant news…. The reason could very well be that it’s turned out much harder to classify general business data rules and boundaries into crisp, meaningful categories. So, without almost any insight into your domain, I’d be unwilling to have any opinions on how feasible an ML approach could be for the type of data/model fusion you describe – it might work wonders, but then again, it might not work at all (or at least without massive amounts of recurring manual effort).”
The architecture, engineering, construction, and facility operations industry (AEC/FM) is characterized by a diversity of data types with varying degrees of structure, some completely unstructured. Alignment of these diverse data types lends itself to extending ML systems like Watson beyond the natural-language datasets (text) where they are already conditionally performant, into other types of graphical, visual, and spatial data. TGN Part 1 improves user interaction with, and comprehension of, complex modeled data environments, so it is a win already on its own terms. It may also contribute to further wins, improving the assessability of correlation and semantics in unstructured spatial and graphical data, helping to extend the reach of ML systems beyond the one data type they can reliably process: text.
Industrial asset data should be transformed into a useful discussion resource. And collaborative research may show that:
1. What cognition is, neither scientists nor philosophers can say. Yet aspects of its observable dynamic are useful and can be studied. Like gravity, thinking is not understood, but observations of its behavior and dynamics are productive, or can be. Correlation, specifically, is sometimes disparaged in the scientific literature; that is, it is called out as “not enough.” And indeed it is not. But its necessity should not be understated. Perhaps Douglas Hofstadter sets the tone right in Surfaces and Essences: Analogy as the Fuel and Fire of Thinking.