What value has focus?

One day we’ll look back…

The idea that we (humans and machines) don’t need better equipment within models for saying “hey look at this” is an idea we’ll one day look back on, wondering why we ever thought such a thing.

Why talk about “drawings” in the conventional sense, in the form they’ve been known for centuries? I moved beyond that years ago, and I propose wide-open evolution in the FORM of drawing’s expression of its function:

“look here at this.”

– drawing’s function
https://youtu.be/rHv59fjX0sU

This function applies to machine action as well: the form of engagement with what a machine needs to see, understand, and do. While the form is different, the concept is the same. G-code, for example, has exactly that role, narrowing from the entirety of a digital and real environment to what a machine is tasked with DOING. The machine has its attention “drawn” to this by some form of engagement (the G-code) with the totality of all the “data”.
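As a concrete illustration of that narrowing, here is a minimal Python sketch. Everything in it is hypothetical and deliberately simplified; it just shows one modeled feature being reduced to the few commands a machine actually executes:

```python
# Minimal sketch: narrowing from "all the data" (a whole model) down to
# the few G-code lines a machine is tasked with executing.
# All names and values here are hypothetical, for illustration only.

def gcode_for_hole(x: float, y: float, depth: float, feed: float = 100.0) -> list[str]:
    """Reduce one modeled feature (a drilled hole) to explicit machine instructions."""
    return [
        f"G0 X{x:.3f} Y{y:.3f}",      # rapid move above the hole
        "G0 Z5.000",                  # drop to approach height
        f"G1 Z{-depth:.3f} F{feed}",  # drill down at the given feed rate
        "G0 Z5.000",                  # retract
    ]

# The "model" might hold thousands of features; the machine's attention
# is drawn to just these:
model = {"holes": [(10.0, 25.0, 6.0), (40.0, 25.0, 6.0)]}
for x, y, depth in model["holes"]:
    print("\n".join(gcode_for_hole(x, y, depth)))
```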

Likewise, when, as some imagine, everything is made by machines, spurred into appropriate action by things like G-code…

  • — again, spurred not by the twins/models/sensor loops/data, but by the interpretive layer (code) between that and the machine —

…humans may be in the loop and need a form of engagement that evolves beyond conventional drawing but expresses the same function:

attentive focused engagement, visual clarity for interpretive and generative power

– drawing’s function

and with control rods embedded there for human input:

https://tangerinefocus.com/2023/03/23/control-rods/

Also, “AI” as we know it right now does in fact “understand” drawings in their conventional form. We now commonly see examples of AIs both interpreting and producing drawings of this kind.

Where it succeeds most in doing those things, AI is assisted by interpreting drawing as the kind of expression I’ve specified in the proposed attention focusing technique TGN OPEN CODE, a package (sketched in code after the list below) that expresses:

1/ in a coordinate system in a model

2/ within a scope/bounding box (or more complex volume)

3/ looking at the designated target face(s) of the scope from the “normal direction”

4/ which is one direction among many relevant directions of view (so, with UI for controlled view variation) – cinematic camera rigging built into the attention focusing rigs within the model

5/ with the model/twin filtered by relevant filtering criteria

6/ with some style of display applied

7/ with some clarifying remarks or graphics added (authored within the TGN rigs or displayed in the rigs via external link)

8/ and with this feature package shareable with adequate fidelity to other modeling apps
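
As promised above, here is a minimal sketch of that eight-part package as a single data structure. This is my illustrative Python, not code from the TGN spec; every name in it is hypothetical:

```python
# A sketch of the eight-part TGN package as one data structure.
# Illustrative only; no field name here comes from the actual spec.
from dataclasses import asdict, dataclass, field

@dataclass
class TGNRig:
    origin: tuple[float, float, float]                # 1/ coordinate system placed in the model
    scope: tuple[float, float, float]                 # 2/ bounding-box extents (the simplest scope volume)
    target_face: str = "front"                        # 3/ designated face, viewed from its normal direction
    view_offsets: list[tuple[float, float]] = field(default_factory=list)  # 4/ controlled view variation (camera rigging)
    filters: list[str] = field(default_factory=list)  # 5/ filtering criteria applied to the model/twin
    display_style: str = "hidden-line"                # 6/ style of display
    annotations: list[str] = field(default_factory=list)  # 7/ clarifying remarks/graphics, or external links

    def to_portable(self) -> dict:
        """8/ serialize the whole package for sharing into other modeling apps."""
        return asdict(self)
```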



I say that as AI continues to get better at this, it will rely more and more on this kind of framework:

https://tangerinefocus.com/2023/03/16/tgn-open-code/

These 8 functions are the same functions engaged by anyone doing technical drawing manually, like this example: https://www.linkedin.com/posts/robsnyder3333_architecturalsketch-architecture-architecturalillustration-activity-7045717651645444096-14xI

In that case the model is mental, not digital, so the form and medium differ from the proposed form (TGN) within digital models, but the function and purpose are the same: to clarify and focus visual attention for interpretive and generative purposes.

That’s all true, too, when drawings in their conventional form are produced by whatever digital means (CAD, model-generated, AI-assisted).

All of which is just to say: the idea that we (humans and machines) don’t need better equipment within models for saying “hey look at this” is an idea we’ll one day look back on, wondering why we ever thought such a thing.

“Hey 👋 Look at this”, says the sketch/drawing. 

That’s what they do, drawings; they draw your attention.

“hey, look 👀 at this”

Everyone needs better equipment for saying this in models.

Not everything you can imagine developing under this concept — which I call AFR (attention focusing rigs) — should exist in every modeling app. But a day will come when we look back and see how silly it was that we didn’t standardize at least a minimum core feature set for this: either TGN as I propose it, or something very much like it.

This form of visual engagement with models, saying “hey look at this”, should run in software for all kinds of spatial visual digital models. Emphasis on all kinds: 

https://tangerinefocus.com/2023/03/25/this-should-run-in-all-model-handling-software/


https://youtu.be/rHv59fjX0sU

We can list many kinds of models, but think of any kind of digital model and I include that kind. A few examples: digital twins, BIMs, photo- or scanner-generated models, models generated computationally (by AI or otherwise), models generated by NI (natural intelligence)…

How could this become a standardized form of engagement accessible in all modeling and model-handling apps and platforms?

Through Open Source:

https://tangerinefocus.com/2023/03/16/tgn-open-code/

TGN is CODE you already have in your modeling app. It’s just not packaged together there into a coherent function for visual engagement. TGN is about making that function easy, accessible, and clarifying. And it’s also about making it portable, so that once you have it, you can share it into other modeling apps and platforms.
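
To make “portable” concrete, here is a small sketch continuing the hypothetical TGNRig structure from earlier in this post. The whole package could round-trip through plain JSON that any receiving app can read back:

```python
import json

# Continuing the hypothetical TGNRig sketch above: portability could be
# as plain as round-tripping the whole package through JSON.
rig = TGNRig(
    origin=(0.0, 0.0, 0.0),
    scope=(12.0, 8.0, 3.5),
    filters=["level=3", "discipline=structural"],
    annotations=["check bearing at grid C-4"],
)
payload = json.dumps(rig.to_portable())   # share this string with another app
restored = TGNRig(**json.loads(payload))  # the receiving app rebuilds the rig
```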

I’m asking software companies and independent developers to join a project to do this development together open source for the benefit of all modeling apps and their users. TGN will improve how people engage with models, making it easier for more people to make better use of them. 

It’s also a new base for further innovation in this direction for anyone who wants to add/extend/differentiate.

The eight features listed above, shown in the video at the top of this post and in the diagram in the TGN OPEN CODE post, are the proposed TGN Core.

Many more features supporting the Attention Focusing Rigs (AFR) concept are possible, to the extent anyone can imagine them. A number of possible additional features are outlined in a TGN Specification document I wrote. Download links here:

TGN Rigs, rigging models for insight, clarity, interpretive power, communication

The TGN developer spec is free to anyone who wants it. Download:

TGN: a digital model INTERACTIONS format standard (Apple Book)

TGN: a digital model INTERACTIONS format standard (ePub)

TGN: a digital model INTERACTIONS format standard (iCloud)

TGN: a digital model INTERACTIONS format standard (PDF)

TGN is proposed fundamentally in support of human engagement with models for interpretive and generative purposes, in recognition of how human engagement with, and perception of, a spatial visual environment actually works.

But this can be extended. TGNs within models, when in use, could be extended specifically to act as gateways for human-in-the-loop input back into AI and other computational, iterative model-generating systems.
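To make the gateway idea concrete, here is a minimal sketch, in the same hypothetical Python as above, of one such input: a human-authored constraint, stored in a rig, that clamps a generative system’s next proposal. All names are illustrative:

```python
# Sketch of a "control rod": a human-authored constraint embedded in a rig,
# read back by a generative system before its next iteration.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ControlRod:
    parameter: str   # what the human is constraining, e.g. "max_height"
    limit: float     # the human-set ceiling for that parameter
    note: str = ""   # the author's reason, in plain words

def apply_control_rods(proposal: dict, rods: list[ControlRod]) -> dict:
    """Clamp a generated proposal to human-set limits before the next iteration."""
    constrained = dict(proposal)
    for rod in rods:
        if rod.parameter in constrained:
            constrained[rod.parameter] = min(constrained[rod.parameter], rod.limit)
    return constrained

rods = [ControlRod("max_height", 42.0, "zoning limit at this frontage")]
next_input = apply_control_rods({"max_height": 55.0, "floors": 14}, rods)
# next_input["max_height"] is now 42.0; the generator iterates from there
```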

“We need human input to confirm that AI is driving us in the right direction…” KIMON ONUMA

If we need to strengthen human engagement with (AI or naturally generated) models (we do), then we should develop better equipment for doing exactly that: better equipment, in models, for human engagement with models.

TGN supplies an optimal access point for such controls. We see the need for this in any kind of intelligence-generated modeling (A or N, artificial or natural intelligence).

In nuclear power plants, control rods provide control over processes that would otherwise tend to run OUT OF CONTROL.

So, it’s an apt analogy for interpretive and generative processes that tend to run out of control and therefore require equipment specifically designed to supply that control.

CONCLUSIONS:

  1. Digital tools are an extension of human perceptual/cognitive equipment for engagement with models (real world or digital) for interpretive and generative purposes.
  2. Digital tools need to further advance in support of this perceptual/cognitive equipment.
  3. This need is evident whether models are generated by natural or artificial intelligence.
  4. The equipment provided to date in modeling software is inadequate/underdeveloped. This inadequacy is the greatest single reason that technical drawing still supplies the majority of revenue for major commercial modeling software developers, even in 2023, after decades of modeling.
  5. TGN is an open source software development proposal intended to raise the adequacy of the relevant equipment within all modeling software (or model-handling software, all kinds).
    • Commercial and independent software developers are invited to join a project to make this happen. Contact me if interested.
  6. TGN is a minimum feature set that
    • a) will make a difference and 
    • b) corresponds to the way human perception works in modeled worlds (real or digital).
  7. But TGN can be extended and added to, to the extent anyone can envision. The extensions and additions need not be open source.
  8. “Control Rods” for human-in-the-loop input back into AI and other computational, iterative model-generating systems, for human guidance, for laying down control parameters, control markers, and control drivers within models as feedback into the model-generating systems — you understand the idea? — are optimally hosted within TGN rigs, within models. 

    You can see why, right? You can imagine the development of a tremendous variety of such controls, all made easily accessible, visible, close at hand, and intelligible within the context of these rigs.

Let your imagination loose. What control rods would you embed there to guide the generative development of the model (by AI or NI)?

https://youtu.be/rHv59fjX0sU

More on this development proposal is coming up. Stay tuned. And don’t wait, either: contact me if you’re interested.

Hey, look here, at this!

– attentive focus

What value has FOCUS?

https://youtu.be/xdIjYBtnvZU
https://tangerinefocus.com/2021/12/09/focus-a-fireplace-of-the-solar-system-and-the-foundation-of-thought/

04:55 … “So the defining property of this curve is that when you draw lines from any point on the curve to these two special thumbtack locations, the sum of the lengths of those lines is a constant, namely the length of the string. Each of these points is called a ‘focus’ of your ellipse, collectively called ‘foci’. Fun fact: the word focus comes from the Latin for ‘fireplace’, since one of the first places ellipses were studied was for orbits around the sun, a sort of fireplace of the solar system, sitting at one of the foci of a planet’s orbit.”

https://youtu.be/xdIjYBtnvZU
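
In symbols (my notation, not the video’s): for foci $F_1$ and $F_2$ and a string of length $2a$, the ellipse is the set of points $P$ satisfying

$$|PF_1| + |PF_2| = 2a \quad \text{(the length of the string).}$$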
Image: “Scout Girl in Concentration” by Randy, CC BY 2.0 (https://creativecommons.org/licenses/by/2.0), via Wikimedia Commons: https://en.wikipedia.org/wiki/Attention#/media/File:Scout_Girl_in_Concentration.jpg

Attention is the behavioral and cognitive process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective, while ignoring other perceivable information. William James (1890) wrote that “Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence.”[1] Attention has also been described as the allocation of limited cognitive processing resources.[2] Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data (at around one megabyte per second) can enter the bottleneck,[3][4] leading to inattentional blindness.[5]

Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. Areas of active investigation involve determining the source of the sensory cues and signals that generate attention, the effects of these sensory cues and signals on the tuning properties of sensory neurons, and the relationship between attention and other behavioral and cognitive processes, which may include working memory and psychological vigilance. A relatively new body of research, which expands upon earlier research within psychopathology, is investigating the diagnostic symptoms associated with traumatic brain injury and its effects on attention. Attention also varies across cultures.[6]

The relationships between attention and consciousness are complex enough that they have warranted perennial philosophical exploration. Such exploration is both ancient and continually relevant, as it can have effects in fields ranging from mental health and the study of disorders of consciousness to artificial intelligence and its domains of research.

https://en.wikipedia.org/wiki/Attention

Rodolfo Llinás

I’ll finish with another YouTube glimpse at background. If you enjoyed the Feynman video, you’ll probably like this one too:

https://youtu.be/6T3ovN7JHPo

Consider what Llinás says about thinking, what it is and the nature of thought’s interpretive engagement with the world through attention/focus.

The sea squirt is a sessile marine creature, permanently attached to a substrate, not free to move about, except during a tadpole stage of its lifecycle during which it swims, searching for where to settle. In its tadpole stage it has a “brain” / cerebral ganglion, which it uses, as brains are used, for environmental evaluation and decision support (where to move). Once settled, and developed into its adult sessile form, anchored, it — apparently having no further use for a brain — digests it.

Multicellular organisms that move have brains for environmental analysis and motricity decision support

– Llinás has argued this

This has something to do with digital modeled environments and equipment for and techniques of interpretive focus within them, during design, engineering, construction and operation of complex constructed assets!

Haha! Let’s leave it there.
