These few short articles describe a forthcoming evolution in the development of equipment for visual close study (VCS) within digital models of all kinds in the architecture, engineering, and construction (AEC) industry and similar industries:
- The Concept
- TGN: an open source VCS development proposal
- Why VCS evolution? Pragmatic answers
- Why VCS evolution? Fundamental answers and the engine of thought (this article)
- VCS and Generative AI: a co-evolution
- Previous work on VCS evolution
- VCS development roadmap for implementation in every modeling app and platform
- What we know for certain, that’s wrong…
While the TGN open source VCS development proposal is motivated pragmatically, with the pragmatics described in a list in the previous article, the fourth item in that list is the INTERPLAY engaged between an array of visual close study (VCS) expressions and the wider, expansive environment of the whole project model (or the real world). This interplay is a continuous, back-and-forth dynamic.
And there is a good argument that this -is- the basic observable dynamic of thought itself: that the wide<>narrow, or environment<>focus, INTERPLAY is the machine of thought, the engine of thinking.
For those interested in a deeper dive into the fundamentals, let’s jump in.
In the articles below I discuss visual spatial perception, as far as I’m aware of it through direct experience, and the engine of thought thus engaged through perceptual equipment.
Adequate recognition of the basic dynamics of perception (how the mind engages complex visual spatial environments, whether real, imaginary, or digital) drives the TGN VCS development proposal.
That is, there are the pragmatics driving the proposal, and behind those are the fundamental realities of the mind, thought, and vision. The fundamentals and the pragmatics together compel the formative concept that becomes the VCS development proposal.
Two articles, then, about fundamentals:
REAL AND MODELED REALITY, MENTAL MODELS, VIEWS, TWEENERS, MEMORY, WOBBLE, AND MY KITCHEN SINK
AUTONOMIC AND SOMATIC VISUAL PROCESSING
And from fundamentals, we’re back full circle again to pragmatics:
It doesn’t matter how models are generated; the outcome is always the same:
In AEC and similar industries, models are highly complex visual spatial environments. They impress on us fundamental cognitive burdens and pragmatic realities, the sum of which shape our sense-making imperative: our generalized need for adequate interpretive power in support of complex tasks in very complex environments.

No matter the manner of model generation — whether models are computationally generated via directed graphs, made by ‘generative AI’, made by natural intelligence, ‘NI’ (stick-built by human hand), or made by device via photogrammetry, laser scanning, Gaussian splatting, NeRFs, etc. — software development serving AEC remains naïve of the burdens and realities placed on everyone engaged with models, still today, 5+ decades in.
VCS and Generative AI: a co-evolution