Tangerine Blog

AUTONOMIC AND SOMATIC VISUAL PROCESSING

A LITTLE PREFACE

Martin Ciupa posted this gem:

Douglas Hofstadter’s book “I Am a Strange Loop” is an exploration of the sense of “I” and examines in depth the concept of a strange loop to explain the sense of “I”. The concept of a strange loop was originally developed in his 1979 book Gödel, Escher, Bach. In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference. 

Hofstadter demonstrates how the properties of self-referential systems can be used to describe the unique properties of minds.

*****

From the first linked article…

Think of your eyes as that video camera, but with a significant upgrade: a mechanism, the brain, that not only registers images but abstracts them, arranging and constantly rearranging the data into mental structures–symbols, Hofstadter calls them–that stand as proxies for the exterior world. Along with your models of things and places are symbols for each of your friends, family members and colleagues, some so rich that the people almost live in your head.

https://www.scientificamerican.com/article/a-new-journey-into-hofsta/

https://en.m.wikipedia.org/w/index.php?title=I_Am_a_Strange_Loop

To that I add a few things about the “mechanism, the brain, that not only registers images but abstracts them, arranging and constantly rearranging the data into mental structures”.

The abstracting, arranging, and rearranging of visual input into mental structures (your models of things and places, proxies for the world, that live in your head) happens in two ways. The difference between them is analogous to the distinction between autonomic processes (like breathing), which happen without our conscious effort or control, and somatic processes, which involve voluntary actions that we are aware of.

The analogy isn’t perfect but the gist of it holds up and is useful.

For visual processing, it’s a question of degree. It happens on a gradient, from autonomic (involuntary) visual processing that happens without our awareness, to voluntary (somatic) visual processing that requires effort we’re aware of, and technique (including media) in support of that effort.

Here you can see an example of the former. This is me, engaged in autonomic visual processing of my back yard. Yes, I’m moving through the garden with volition, but the visual processing happens by itself, without conscious effort, like breathing:

It should already be clear that as digital environments become ever more “realistic”, they become stand-ins for the real world and are thus subject to the same fundamental constraints. That is, if we are going to immerse ourselves in them and perceive them, our perception operates along the same gradient from autonomic to somatic visual processing.

In video games, autonomic visual processing is usually adequate. But for more complex tasks, it isn’t. Where it isn’t, effort and equipment are needed.

VISUAL ENGAGEMENT HAS FORM

Human understanding of digital environments, and the real world alike, is dependent on mental model formation. Formation of an adequate mental model in turn depends on human engagement of a particular form, the form subject to constraints inherent in human perception and cognition.

I discuss this here: https://tangerinefocus.com/2023/07/28/real-and-modeled-reality-mental-models-views-tweeners-memory-wobble-and-my-kitchen-sink/

The necessary form of visual engagement with the environment shares common features even at opposite ends of a task-complexity spectrum. Whether one is engaging with the world visually in support of everyday tasks as simple as walking from one room to another, or of complex tasks in technical fields, such as imagining machines, equipment, and physical infrastructure to be built, the cognitive constraints and dynamics are the same, though with variation in intensity.

We can observe, and this can be proven through experiment, that there is a sweet spot for efficient productive development of the mental model, in terms of the frequency of views in which visual attention is invested. The frequency can’t be too low or too high, the number of views neither too few nor too many. Mental model formation fails at both extremes.

For tasks amenable to human visual interpretation (not all tasks fall in this category), the distinction between simple and complex tasks makes a difference:

For simple tasks, the visual attention invested is low, the frequency of views low. The number of views few. But not too few.

For complex tasks, designing a machine or a facility for example, those are reversed: visual attention invested is high, the frequency high, the number of views many. But not too many.
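The sweet-spot idea can be pictured with a toy model. To be clear, the function, the shape of the curve, and the numbers below are my own illustrative assumptions, not empirical claims from this article: mental-model quality as an inverted U over view frequency, peaking at a task-dependent sweet spot that sits low for simple tasks and high for complex ones, and falling off toward both too-few and too-many views.

```python
import math

def model_quality(view_freq, sweet_spot):
    """Toy inverted-U curve: quality peaks at the task's sweet-spot
    view frequency and falls off toward too-few and too-many views.
    (Purely illustrative assumption -- not an empirical model.)"""
    spread = sweet_spot / 2  # assumed width of the sweet spot
    return math.exp(-((view_freq - sweet_spot) ** 2) / (2 * spread ** 2))

# Hypothetical sweet spots: low for a simple task (walking to the door),
# high for a complex one (designing the house).
SIMPLE, COMPLEX = 3, 30

for freq in (1, 3, 10, 30, 100):
    print(f"{freq:>3} views -> simple: {model_quality(freq, SIMPLE):.2f}, "
          f"complex: {model_quality(freq, COMPLEX):.2f}")
```

Running the sketch shows each curve peaking at its own sweet spot and degrading at both extremes, which is all the toy model is meant to convey.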

Where exactly the threshold is crossed between simple and complex is not easy to say, and it varies depending on conditions. But it’s important to recognize that:

a) Simple and complex sit on opposite ends of a spectrum and

b) are differentiated by what is actually crossed at the boundary between them.

What’s crossed, over the threshold between simple and complex tasks that are amenable to visual interpretation, is the boundary separating (borrowing terms for analogy) autonomic and somatic visual processing.

For simple tasks, the necessary visual engagement happens effortlessly, seemingly without willed motivation. The process is autonomic, like the heartbeat. We don’t choose for our heart to beat. It just beats.

“the autonomic nervous system is responsible for many “automatic” functions like control of heart rate, respiration, digestion, sexual arousal, etc. These are things we don’t have to–and sometimes cannot–exert conscious control over.”

autonomic

To walk in a familiar environment, say, from my kitchen through a door to the outside (see here), the necessary view selection and attentive investment required for mental model formation adequate to that task require no consciously motivated effort. For simple tasks like this, mental model formation never fails. Sufficient attentive focus, invested in appropriately selected views, just happens.

It’s always just enough autonomic effort to form a mental model just good enough to make sense of the environment just well enough to get the task done effectively enough.

We know we’ve crossed the threshold from simple tasks to complex when that’s not enough, when autonomic visual processing isn’t getting the job done.

“The counterpart of the autonomic nervous system is the somatic nervous system, which is involved in voluntary actions that we are aware of.”

ibid.

Complex tasks amenable to visual processing require willful concentration (coffee helps) and a structured framework supporting it. While the basic dynamic in play is the same for simple and complex tasks, the required amplitude of invested attention, and its frequency, are higher for complex tasks. At the risk of mixing metaphors, you could say that the required frequency lies outside the visible light spectrum. Out there, autonomic processing no longer works, and we’re outside our cognitive comfort zone. We generally prefer autonomic work; beyond that we need some assistance. Coffee’s not enough. We need technique, with externalization outside the mental model, in some form of media.

Refer again to the views here representative of autonomic mental model formation supporting a simple task (walking from kitchen sink out the back door). For more complex tasks, say, designing and building the house, the visual attention invested autonomically will be of frequency and amplitude too low for formation of a mental model clear enough to arm us with sufficient understanding of the complexity of the task(s).

Autonomic selection of the appropriate views in which to invest visual attention, the ones critically informative to mental model formation of adequate quality, will fail. Attention has to be guided there willfully. A certain amount of domain knowledge, experience, is in play. And so is:

EXTERNALIZED MEDIA

A mistake has typically been made here since the late 1990s. Given the ubiquity of digital modeling, it has been presumed that the digital model takes on the role of this necessary externalization. Making this mistake is understandable. It’s a category error that comes from overlooking things easily overlooked because they’re so familiar.

I said in the first sentence:

Human understanding of digital environments, and the real world alike, is dependent on mental model formation.

We don’t spend much time thinking about mental models. It’s like consciousness: so familiar that thinking about it doesn’t really get us anywhere. For professional philosophers paid to think about mind and consciousness, sure, it’s useful. But for the rest of us, it doesn’t even occur to us to think about it, let alone that we could gain anything useful from doing so.

To put it another way, there are other things we’re apt to think about. Over the last 20+ years we’ve become very much aware of the digital model. The mental model is as active as ever. It’s just that now we have this digital model onscreen devouring all our awareness. Which suggests two points:

1. The digital model is a supplement to the mental model, not its replacement. When you fly around and through a digital model onscreen, you’re autonomically building a mental model of the same, whether you realize it or not.

Very simple logic: if you weren’t developing a mental model, then the digital model would be meaningless to you, apart from bedazzlement, colorful visual entertainment or something. Beyond being bedazzled is meaningful understanding. Meaningful understanding is built into mental model formation.

2. The digital model distracts, in ways not yet adequately understood, and in ways that seem unexpected. Think back, not long ago, to times predating software and computers, before digital modeling. Back then, awareness of the mental model was front of mind, not forgotten. Front of mind means full awareness. It was necessarily so, integral to work process. If you’re too young to remember that, imagine yourself older:

You produce by hand a set of technical drawings, and you look at them. They mean nothing to you, or anyone else, until you visualize them instantiated at true orientation in your mental model.

This is cognitive visualization effort, in which you engage with full awareness that you’re doing it. It’s somatic effort, intentional. In such work it’s impossible to remain unaware of either the mental model, or of the somatic cognitive effort of internalizing (into the mental model) the views of the model that you’ve externalized on pieces of paper. There is just no way to be unaware of this process and its components.

Such were former times.

Today we’re bedazzled (by digital models) which keeps us stuck in a loop generating nonsensical ideas about visual interpretation of complex environments real and digital:

“…the end of drawing, because models.”

We’re stuck, or progressively declining, into stunted visual interpretation of digital models, our understanding trending toward the superficial. What elevates and salvages us is (of course) externalized media (drawing): our vehicle for visual engagement with reality, by way of which, through somatic cognitive effort, we form the mental models necessary for beyond-superficial understanding.

We nevertheless continue disparaging this vehicle.

It’s a counterproductive loop.

There are other, non-visual techniques of high value (like machine analytics) through which we elevate understanding of models too. But these are no counter to arguments for technique helping people more quickly reach adequate understanding of complex tasks amenable to visual interpretation.

WE MUST GRAPPLE WITH SOME FACTS:

In the category of “things amenable to visual interpretation”:

1. To understand anything, simple or complex, we depend on the formation of an adequate mental model.

2. Effective mental model formation for simple tasks is autonomic and never fails.

3. Effective mental model formation for complex tasks is somatic, requires conscious effort, through technique that offloads cognitive burden into a suitable externalization in some medium.

4. The externalizations are un-interpretable (i.e., meaningless) until re-internalized through somatic cognitive effort, in the mental model.

5. VIEWS, TWEENERS, MEMORY, WOBBLE (see here) are aspects of the externalization and its re-internalization in the mental model.

6. The form taken by the externalization is subject to constraints inherent in human perception and cognition. Among the constraints are the nature of view selection, the role of views in mental model formation, and limits on the frequency and amplitude of human cognitive investment of visual attention in those views, governed by the human visual attention sine curve. (see here)

7. In architecture, engineering, construction (AEC) and similar fields, the medium of externalization is technical drawing in the traditional form we’re familiar with.

8. The technical drawing is “hosted” somewhere, e.g.: on paper or some digital equivalent, an electronic sheet, a CAD file, a PDF, etc.

9. The trend of calling for “…the end of drawing (7), because models” ignores the relevant facts above and the potential for useful change as follows:

10. There is, in fact, no reason to keep paper or its digital equivalent as the externalization host. The digital model is a superior host.

11. There is no reason to keep the externalization (external from the mental model) locked in its traditional form (technical drawing as we know it). The form of the externalization’s expression, hosted within the digital model, can evolve to exceed (bigly) the traditional form’s support for adequate mental model formation, therefore helping people more quickly reach adequate understanding of complex tasks amenable to visual interpretation.

12. A proposal for this (11) is linked below.

PROPOSAL FOR (11), EVOLUTION IN DRAWING’S FORM OF EXPRESSION:

A proposal and specification for evolution in drawing’s form of expression is on my 2 page website:

TANGERINE

The proposal, on one page, is here:

The specification is completely supportive of all existing forms of drawing and of modeling processes and applications, but at the same time bumps all of them up to a higher level of expression, in a completely non-disruptive, non-destructive way.

My earlier work was automated fusion of technical drawing in-situ within models. Since 2012 this has been implemented in 9 different softwares but remains largely siloed in each, and limited by flaws (mine) in the original design.

My current work is an unfunded personal mission to address those flaws (of intelligibility and portability) through better design of an evolution in technical drawing’s form of expression, given its instantiation in situ within models both mental and digital. The proposed evolution in form is directly supportive of the existing traditional form of technical drawing, honoring it by amplifying the force of its effect, and at the same time a meaningful evolution in that form, again for greater effect of drawing’s purpose (making things clear in very complex environments).

This is not a proposal for a product.

Instead it’s a redefinition of what drawing is. It’s a proposed evolution in drawing’s form of expression, IN models, designed not to be siloed in any product but instead open so it can be developed in all modeling softwares, all kinds, and made portable between them.

It’s a proposal to transform visual engagement with modeled environments for everyone, but especially motivated for people engaging models for technical purposes, where engagement has to produce beyond-superficial understanding, of complex models, both during model creation and during downstream model use. The proposed evolution in form also establishes an accessible step to stand on and engage from, for humans in the loop, in AI-generated modeling and other forms of computational model generation.

PROSPECTS

Many software companies typically do not have ideas about changing the form of well-known things on their development horizon. Rather, they take things as they are and build apps around them. Some companies, though, do have evolution in form on their radar. They will pioneer this. Others will follow after seeing it.

Contact me to discuss:

LinkedIn: https://www.linkedin.com/in/robsnyder3333/

Email: robsnyder@tangerinefocus.com


About the author

Hi! My name is Rob Snyder. I’m on a mission to elevate digital models in AEC (architecture, engineering, and construction) by developing equipment for visual close study (VCS) within them, so that they provide an adequate assist to the engine of thought we all have running as we develop models during design, and as we interpret those models so they can be put to use in support of necessary action, during construction for example.