This post combines fundamentals of the “mechanics” of perception with a software development proposal for a “control rods” function within digital models.
PERCEPTION:
Spatial awareness, in other words an effectively good-enough interpretation of visual spatial environments (the real world, digital models…), is built up through a cognitive interplay, back and forth, between perceptions wide (expansive, environmental) and narrow (focused).
Understanding grows in the back and forth between mutually contextualized perceptions of “the model” (wide, environment) and “attention drawn” (narrow, “drawing”).
Cognition is a fusion of these inputs. Where the wide and the narrow, environment and focus, are being put together in fusion, thinking is happening. Thought, arguably, or at least its basic observable dynamic, is this fusion.
Conversely, where the fusion is not happening, understanding is superficial at best, and its growth is stifled.
More on this fusion for use in AEC digital environments at my website:
Such are my thoughts after years of building digital models of designed buildings for construction, and the corresponding focusing lens for looking at and thinking about those models: sets of technical drawings.
Another approach to perception is posted here:
Comment: Optical illusions show that what we see is only partly evidence-based, built bottom-up from sensory information; it is also a cognitive model that cybernetically interacts top-down with the vision centers and biases them. It’s a “controlled hallucination”.
Title: “Reality” is constructed by your brain. Here’s what that means, and why it matters.

Now of course you can engage digital worlds with one arm tied behind your back, or both. You can ignore, and fail to develop, essential equipment for narrowing focus: the lens for looking at models, within the models. That is typical today. But given the complexity of both the environment and our means of perceiving it, and our compelling need for clarity, why not equip yourself better?
The question applies whether digital (or mental) models are AI generated or NI (natural intelligence) generated.
CONTROL RODS:
Let’s bring in the “control rods” idea here.
“We need human input to confirm that AI is driving us in the right direction…”
Kimon Onuma
If we need to strengthen human engagement with models, AI-generated or naturally generated (we do), then we should develop better equipment for doing exactly that: better equipment in models, for human engagement with models.
TGN is a proposed open source development project, and model engagement format, for doing that. This article introduces the proposal:
The diagram below indicates the 8 proposed open source core features of the TGN rigging equipment:

TGN is proposed, fundamentally, in support of human engagement with models for interpretive and generative purposes, in recognition of the way human perception and engagement with a spatial visual environment actually work.
But this can be extended. TGNs within models, when in use, could also act as gateways for human-in-the-loop input back into AI and other computational, iterative model-generating systems. I touch on this idea here:
The TGN proposal is a call for software companies to build this form of engagement within models collaboratively, open source, to elevate the interpretive power of everyone dealing with models in every modeling app and platform. Contact me if interested.
The control rods, however, are an extension, and need not be open source; that is your choice. It’s up to every software developer to decide what kind of unique controls they supply.
TGN supplies an optimal access point for such controls. We see the need for this in modeling generated by any kind of intelligence, artificial or natural.
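To make the gateway idea concrete, here is a minimal sketch in Python of how a TGN rig might host human-authored controls that a generative loop re-reads on every iteration. Every name here is hypothetical; the TGN proposal does not define an API, and a real generator would be far more involved.

```python
# A minimal, hypothetical sketch. None of these names come from the TGN
# proposal; they only illustrate the human-in-the-loop gateway idea.
from dataclasses import dataclass, field

@dataclass
class ControlRod:
    """A human-authored control parameter anchored in the model."""
    name: str                     # e.g. "max_height_m" (invented example)
    value: float                  # the constraint the generator must respect

@dataclass
class TgnRig:
    """A focal point in the model that can host control rods."""
    position: tuple               # (x, y, z) anchor of the rig
    rods: list[ControlRod] = field(default_factory=list)

def generate_step(model: dict, rods: list[ControlRod]) -> dict:
    """Stand-in for one iteration of any generative system (AI or otherwise).
    A real implementation would mutate the model subject to the rods."""
    model["iterations"] = model.get("iterations", 0) + 1
    return model

def human_in_the_loop(model: dict, rigs: list[TgnRig], steps: int) -> dict:
    """Generative loop that re-reads the controls hosted in TGN rigs each
    step, so a person can adjust them between iterations."""
    for _ in range(steps):
        rods = [rod for rig in rigs for rod in rig.rods]
        model = generate_step(model, rods)
    return model
```

The point of the sketch is only that the controls live inside the model, at the rigs where a person is already focusing, rather than in a settings dialog off to the side.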
In nuclear power plants, control rods provide control over processes that otherwise will tend to run OUT OF CONTROL.
So, it’s an apt analogy for interpretive and generative processes that tend to run out of control and therefore require equipment specifically designed to supply that control.
Control rods are inserted into the core of a nuclear reactor and adjusted in order to control the rate of the nuclear chain reaction and, thereby, the thermal power output of the reactor, the rate of steam production, and the electrical power output of the power station.
https://en.wikipedia.org/wiki/Control_rod
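As a toy illustration of the analogy (not reactor physics), consider an iterative process whose output multiplies each step; an inserted “rod” absorbs a share of the growth, the way a control rod absorbs neutrons. The numbers are arbitrary.

```python
# Toy illustration of the control rod analogy, not reactor physics.
def run_process(steps: int, growth: float, rod_insertion: float) -> float:
    """growth > 1.0 means the process runs away on its own; rod_insertion
    in [0, 1] damps the effective rate of growth, steadying the output."""
    output = 1.0
    for _ in range(steps):
        output *= growth * (1.0 - rod_insertion)  # the rod damps each step
    return output

print(run_process(10, growth=1.5, rod_insertion=0.0))  # uncontrolled: ~57.7
print(run_process(10, growth=1.5, rod_insertion=1/3))  # steady state: 1.0
```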

Look here at human- and AI-generated models:
And more such (A or N or A/N)I-generated models presented by Marc Goldman of ESRI:
https://www.linkedin.com/feed/update/urn:li:activity:7044407110280843264
And more here: https://aecmag.com/ai/hypar-text-to-bim-and-beyond/
CONCLUSIONS:
- Digital tools are an extension of human perceptual/cognitive equipment for engagement with models (real world or digital) for interpretive and generative purposes.
- Digital tools need to further advance in support of this perceptual/cognitive equipment.
- This need makes itself evident whether models are generated by natural or artificial intelligence.
- The equipment provided to date in modeling software is inadequate and underdeveloped. This inadequacy is the greatest single reason that technical drawing still supplies the majority of revenue for major commercial modeling software developers in 2023, after decades of modeling.
- TGN is an open source software development proposal intended to raise the adequacy of the relevant equipment within all modeling software (or model-handling software of all kinds).
- Commercial and independent software developers are invited to join a project to make this happen. Contact me if interested.
- TGN is a minimum feature set that (a) will make a difference and (b) corresponds to the way human perception works in modeled worlds (real or digital).
- But TGN can be extended and added to, to the extent anyone can envision. The extensions and additions need not be open source.
- “Control Rods” for human-in-the-loop input back into AI and other computational, iterative model-generating systems are optimally hosted within TGN rigs, within models. They are equipment for human guidance: for laying down control parameters, control markers, and control drivers within models, as feedback into the systems generating those models.
You can see why, right? You can imagine a tremendous variety of such controls being developed, all of them made easily accessible, visible, close at hand, and intelligible within the context of these rigs. A sketch of a few imagined examples follows.
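For instance, here are a few imagined controls a designer might pin to rigs. These are purely hypothetical examples, invented for illustration; the TGN proposal does not define them.

```python
# Purely imagined examples of control rods a designer might pin to TGN rigs.
rods = [
    {"rig": "entry_plaza", "name": "preserve_view_corridor", "value": True},
    {"rig": "tower_core",  "name": "max_height_m",           "value": 120.0},
    {"rig": "site",        "name": "max_floor_area_ratio",   "value": 3.5},
    {"rig": "south_face",  "name": "min_daylight_hours",     "value": 4.0},
]
# A generating system would read these as constraints on each iteration,
# and a person could adjust them in place, inside the model, between runs.
```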
Here’s a demo of some TGN rigs within a model:
Let your imagination loose. What control rods would you embed there to guide generative (AI or NI generated) development of the model?