VCS and Generative AI: a co-evolution

It doesn’t matter how models are generated; the outcome is always the same:

In AEC and similar industries, models are highly complex visual spatial environments that, since we’re beings within them, impress on us fundamental cognitive burdens and pragmatic realities, the sum of which shapes our sense-making imperative: our generalized need for adequate interpretive power in support of complex tasks in very complex environments.

This article discusses a proposed co-evolution: generative modeling in all its forms, developing in tandem with VCS equipment for visual close study within those models.

Other articles in this set discuss the VCS concept generally, a proposal for its continued development as an open source set of features articulating visual close study (VCS) within models (for use in all modeling and model-handling apps and platforms in AEC and similar industries), previous VCS development work, and the pragmatic and fundamental reasons necessitating VCS evolution:

Co-evolution of VCS equipment with generative modeling in its various forms.

To propose that generative modeling and VCS equipment for visual close study co-evolve in tandem, some review is useful:

  • VCS equipment’s past
  • VCS equipment’s present
  • VCS equipment’s proposed future
  • the state of modeling, the numerous methods by which models are generated

These are mentioned in the other articles, so let’s start with some excerpts.

First of all, technical drawings are VCS equipment:

So-called ‘CAD Drafting’, and its pre-digital form, technical drawing by hand, is well described as the expression and articulation of the act of visual close study (VCS) of, or attentive focus within, mental models.

There’s a nice future ahead, for evolution in VCS equipment’s form of expression within digital models, no matter the manner of generation of those models.

https://tangerinefocus.com/

And a longer excerpt from Previous work on VCS evolution:

One of many models I built at various architecture firms, this one at my own firm. Here’s some more detail on that model, and what followed from it: https://tangerinefocus.com/2023/10/24/model-automation-and-model-quality-on-a-graph/

First-generation VCS innovation, beginning in 2012, automatically displayed CAD drawings (and hand drawings!) in-situ at their true orientation within digital 3D models (models of various formats and types).

The fusion was referred to commercially as “hypermodel” and has been marketed by Bentley in many of their products, including their CAD application, MicroStation. Since Bentley introduced the fusion in 2012, nine software companies have developed their own versions of drawing-model fusion.

https://tangerinefocus.com/previous-work-on-vcs-evolution/

Since drawings are VCS equipment, their fusion into models is VCS<->Model fusion.

Here’s a playlist of 30 demonstration videos I made showing the Bentley drawing-model fusion development we built:

Notice the need for continued development of equipment for visual close study (VCS). Why? Because VCS equipment is equipment for engaging with and making sense of high complexity visual spatial environments.

EVOLUTION in VCS’s form of expression

Now that the idea of drawing-model FUSION has been conceptualized, proposed, specified, developed, commercialized, and proliferated among various software products at various companies over the last 12 years (since 2012), it’s time for phase 2.

Phase 2 is an obvious logical next step. Now that technical drawing resides instantiated not only within mental models, but also in-situ at true orientation within digital models, drawings find themselves rooted in different ground, planted in soil in some ways more fertile, in other ways not. But at a minimum, the ground is different.

In different ground, VCS’s form of expression should evolve.

Why?

Because it can: There are VCS expression possibilities arising from grounding VCS in the digital model that otherwise aren’t readily accessible, or aren’t accessible at all, within the mental model.

And because it must: for reasons pragmatic and fundamental:

What should VCS formal EVOLUTION look like? How should it operate?

I envision and specify what VCS equipment for visual close study within digital models of all kinds and formats should become, what VCS should look like, how it should operate

no matter the manner of model generation: whether models are computationally generated via directed graphs, made by ‘generative AI’, made by natural intelligence, ‘NI’ (stick built by human hand), or made by device via photogrammetry, laser scanning, Gaussian splatting, NeRFs, etc.

Within all of these model types, and in hybrids of any of them, I propose what VCS equipment should look like and how it should operate within models such that it amplifies the utility of our visual engagements. That is, such that VCS equipment elevates the interpretive, perceptual power of our visual engagements with models, moving our understanding of very complex modeled environments, and of the very complex tasks required of us within them, well beyond superficial understanding, toward adequate understanding, and often beyond adequate.

The proposal is here: TGN: an open source VCS development proposal

An open source team began development of this TGN VCS proposal very recently (March 2024). Announcements and open invitations are forthcoming mid-year.

https://tangerinefocus.com/previous-work-on-vcs-evolution/

General Applicability

The VCS open source development proposal sets out a generalizable solution (like technical drawing itself) that’s applicable in a very wide range of usage scenarios and within a very wide range of model types and formats, regardless of the generative method creating them.

Given today’s diversity of model generation methods, with models that are:

  • computationally generated via directed graphs
  • made by ‘generative AI’
  • made by ‘NI’ natural intelligence (stick built by human hand)
  • or made by device via
    • photogrammetry
    • laser scanning
    • Gaussian splatting
    • NeRF
    • etc.

VCS equipment matters as much as ever, if not more; we’ve got more digital models than ever before to engage with and make sense of.

It doesn’t matter how models are generated; the outcome is always the same:

In AEC and similar industries, models are highly complex visual spatial environments that, since we’re beings within them, impress on us fundamental cognitive burdens and pragmatic realities, the sum of which shapes our sense-making imperative: our generalized need for adequate interpretive power in support of complex tasks in very complex environments.

VCS and Generative AI: a co-evolution

Among the many situations in which Generative AI and visual close study (VCS) equipment can be developed, and can evolve, in tandem, I’d like to highlight 4 of particular interest:

1. VCS equipment within models is an optimal host for control methods enabling human-in-the-loop input governing re-iterations of Generative AI models

Generative AI generates digital models of AEC projects (e.g., buildings) in an iterative process in which a model is generated and then re-generated again and again as input prompts and parameters are tuned repeatedly.

That being the case, visual close study (VCS) rigs placed (automatically or manually) within these models for human engagement with the model are optimal devices within which to host and develop human-in-the-loop guidance mechanisms (control methods, parameters, markers, drivers) that allow the user to say, for example (a minimal sketch of such controls follows the list below):

  • Hold “these” part(s) or area(s) of the generative model in “this” position/proportion/orientation/etc.
  • or according to “these” constraints, and
  • let the rest of the model re-iterate freely, unconstrained.
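To make the hosting idea concrete, here is a minimal sketch in Python. Everything in it is illustrative and assumed, not an existing API: the VCSRig and HoldConstraint structures, and the regenerate stand-in for whatever a given Generative AI or computational pipeline actually exposes. It only shows the shape of the flow: constraints authored while studying the model through a rig travel with each regeneration pass.

```python
from dataclasses import dataclass, field

# Hypothetical structure: a "hold" authored inside a VCS rig, naming which
# elements to freeze and which of their properties to lock across regenerations.
@dataclass
class HoldConstraint:
    element_ids: list
    locked_properties: set = field(default_factory=lambda: {"position"})

# Hypothetical structure: a VCS rig placed (automatically or manually) in the
# model. Besides its viewing/sectioning settings, it hosts the human-in-the-loop
# constraints authored while studying the model through it.
@dataclass
class VCSRig:
    name: str
    constraints: list = field(default_factory=list)

    def hold(self, element_ids, properties):
        """User marks parts/areas to keep fixed across re-iterations."""
        self.constraints.append(HoldConstraint(list(element_ids), set(properties)))

def regenerate(model, prompt, constraints):
    """Stand-in for a Generative AI / computational regeneration call. A real
    pipeline would re-solve the model subject to the constraints; here we only
    copy held properties forward unchanged to show the flow of control."""
    new_model = {eid: dict(props) for eid, props in model.items()}  # pretend regeneration
    for c in constraints:
        for eid in c.element_ids:
            for prop in c.locked_properties:
                new_model[eid][prop] = model[eid][prop]  # held parts stay put
    return new_model

# Usage: hold the stair core in place and let the rest re-iterate freely.
model = {"stair_core": {"position": (0, 0), "height": 30},
         "facade":     {"position": (5, 0), "height": 30}}
rig = VCSRig("level-03-core-study")
rig.hold(["stair_core"], {"position", "height"})
model = regenerate(model, "reduce floor-to-floor height", rig.constraints)
```

The point is not these particular data structures but where they live: the rig the person is already using for visual close study is also the place where “hold this, free that” gets expressed.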

The idea of hosting human-in-the-loop controls governing model regeneration within VCS rigs is applicable in generative models of more than one type: today’s Generative AI models and yesterday’s computational design models alike.

As a matter of fact, 20+ years’ worth of computational/generative design should not be forgotten because of today’s ‘AI’. Computational designers have been at it a long time, and their modus operandi is the creation and use of apps (sketched in toy form after this list) that allow them to:

  • author their own dependency chains and algorithms
  • with full transparency into what those are, made visible through:
    • directed graphs
    • code inspection, and
    • the resultant modeled geometry in a graphics window simultaneously
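For readers less familiar with that modus operandi, here is a toy sketch of a dependency chain as a directed graph, in plain Python with no particular computational design tool implied. The node names and formulas are invented for illustration; the point is the transparency: every node, its upstream dependencies, and the code relating them are inspectable, and re-evaluating the graph regenerates everything downstream.

```python
# Toy directed dependency graph: each node is either a user parameter or a
# function of upstream nodes. The whole chain is inspectable (names, edges,
# code), and re-evaluating it regenerates the dependent values.
class Node:
    def __init__(self, name, inputs=(), func=None, value=None):
        self.name, self.inputs, self.func, self.value = name, list(inputs), func, value

    def evaluate(self):
        if self.func is not None:
            self.value = self.func(*[n.evaluate() for n in self.inputs])
        return self.value

# Author the chain: floor count and floor height drive building height,
# which drives a (fictional) count of facade panel rows.
floors       = Node("floors", value=12)
floor_height = Node("floor_height_m", value=3.5)
height       = Node("building_height_m", [floors, floor_height], lambda n, h: n * h)
panels       = Node("facade_panel_rows", [height], lambda h: int(h / 1.4))

# Transparency: walk the graph and show each node with its dependencies.
for node in (floors, floor_height, height, panels):
    deps = ", ".join(n.name for n in node.inputs) or "(user parameter)"
    print(f"{node.name:20} <- {deps:32} = {node.evaluate()}")

# Tuning a parameter and re-evaluating regenerates the dependent geometry values.
floors.value = 15
print("after edit:", height.evaluate(), "m,", panels.evaluate(), "panel rows")
```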

Some of us used to think of their work as mysterious art and the masters of it wizards. Little did we know what was coming. These first generation generative/computational design gurus look like the most forthright people anybody’s ever seen by comparison.

2. VCS articulation of human visual engagement with models could improve LLM engagements

While keeping in mind that LLM hype brings memories of IBM Watson (this article was written in 2023), rhetoric from over 10 years ago may yet ring true: ‘DeepQA’-style question and answer against very complex physical asset (infrastructure) digital datasets may possibly be enhanced, giving better question-and-answer results, i.e., more productive, more useful conversations with AI about difficult-to-answer questions concerning very complex systems and huge volumes of data. Why? Because VCS rigs for human interaction within those datasets, over the long duration of design, construction, and operation of infrastructure assets, MAY reveal data correlations that otherwise would be much harder (or impossible) to discover, and a richer field of correlation may increase data mining and machine learning yield.

More on this concept here: https://tangerinefocus.com/tgn/investigations-in-cognitive-computing/
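As a purely speculative sketch (nothing here reflects an existing product or API), one way to picture the mechanism: each VCS rig engagement could be captured as a structured record tying together what was studied, by which discipline, in which phase, and why. Accumulated over the life of an asset, those records form the richer field of correlation referred to above, which a retrieval or data-mining layer could hand to a question-answering model alongside the raw data. The record fields and helper functions below are assumptions for illustration only.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Assumed record shape: one entry per VCS engagement, correlating the elements
# studied with phase, discipline, and the issue the user was chasing.
@dataclass
class VCSEngagement:
    rig_name: str
    element_ids: list
    phase: str          # design | construction | operation
    discipline: str
    note: str
    timestamp: str

def log_engagement(log, rig_name, element_ids, phase, discipline, note):
    log.append(VCSEngagement(rig_name, list(element_ids), phase, discipline, note,
                             datetime.now(timezone.utc).isoformat()))

def correlated_context(log, element_id):
    """Naive retrieval: gather every past engagement touching an element, as
    extra context a question-answering layer could receive alongside the
    asset's raw model data."""
    hits = [asdict(e) for e in log if element_id in e.element_ids]
    return json.dumps(hits, indent=2)

# Usage: years of close study accumulate; a later question about "pump_07"
# surfaces the design-phase and operations-phase engagements that touched it.
log = []
log_engagement(log, "plant-room-rig", ["pump_07", "valve_12"], "design",
               "mechanical", "clearance check against maintenance access")
log_engagement(log, "plant-room-rig", ["pump_07"], "operation",
               "facilities", "recurring vibration reported near this pump")
print(correlated_context(log, "pump_07"))
```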

3. Automated placement and settings tuning of VCS rigs in models

AI (or algorithms) could be used both for automatic placement of VCS rigs at useful places throughout a digital model, and for automatic tuning of VCS settings within each of those rigs, according to the logic of where they’re placed in the model.

This should include, but not be limited to, automatic upgrade of traditional forms of VCS equipment (conventional technical drawings in their conventional form, whether automated by ‘BIM’ or not) to the proposed evolutionary in-model form of VCS: TGN.

TGN VCS rigs could also auto-populate a digital model without prerequisite VCS equipment (without need for existing drawings in their traditional form of expression). And in fact this pipeline could flow in the other direction, from TGN to traditional drawing as needed.
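Here is a minimal sketch of what automatic placement and tuning “according to the logic of where they’re placed” could look like. The density heuristic, the zone naming, and the settings being tuned are all assumptions for illustration, not a statement of how TGN (or any AI model) would actually decide.

```python
from dataclasses import dataclass

@dataclass
class Element:
    id: str
    zone: str          # e.g. "level-02/plant-room"
    category: str      # e.g. "pipe", "duct", "wall"

@dataclass
class RigPlacement:
    zone: str
    section_spacing_m: float   # tighter sectioning where the model is denser
    show_categories: set

def auto_place_rigs(elements, density_threshold=3):
    """Toy heuristic: drop a VCS rig in every zone whose element count exceeds a
    threshold, and tune the rig's settings from what that zone actually contains."""
    by_zone = {}
    for e in elements:
        by_zone.setdefault(e.zone, []).append(e)

    rigs = []
    for zone, elems in by_zone.items():
        if len(elems) < density_threshold:
            continue                                 # sparse zones get no rig here
        spacing = 1.0 if len(elems) > 10 else 3.0    # denser zone -> tighter study
        rigs.append(RigPlacement(zone, spacing, {e.category for e in elems}))
    return rigs

# Usage with a handful of fake elements: the crowded plant room gets a rig,
# the sparse office level does not.
elements = [Element(f"p{i}", "level-02/plant-room", "pipe") for i in range(12)]
elements += [Element("w1", "level-05/office", "wall")]
for rig in auto_place_rigs(elements):
    print(rig)
```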

4. General Interpretive Functions

Restating again:

Fundamental cognitive burdens and pragmatic realities:

It doesn’t matter how models are generated; the outcome is always the same: 

In AEC and similar industries, models are highly complex visual spatial environments that impress on us fundamental cognitive burdens and pragmatic realities, the sum of which shapes our sense-making imperative: our generalized need for adequate interpretive power in support of complex tasks in very complex environments.

No matter the manner of model generation, whether models are computationally generated via directed graphs, made by ‘generative AI’, made by natural intelligence, ‘NI’ (stick built by human hand), or made by device via photogrammetry, laser scanning, Gaussian splatting, NeRFs, etc., software development serving AEC remains naïve of the burdens and realities placed on everyone engaged with models, still today, 5+ decades in.

See general discussion of interpretive engagement here:

The proposal is here: TGN: an open source VCS development proposal

An open source team began development of this TGN VCS proposal very recently (March 2024). Announcements and open invitations are forthcoming mid-year.

If you’d like to have TGN VCS in your favorite modeling apps, and/or would like to participate in open source TGN development, message me on LinkedIn:  https://www.linkedin.com/in/robsnyder3333/