TGN should be taken up by AOUSD and made a part of Open USD.
Two friends sent me NVIDIA founder Jensen Huang’s presentation from yesterday (the video is linked further below).
Modeling software developers will be better armed with TGN (or something a lot like it) than without TGN. So will everyone else (model users and model creators).
The more digital reality there is, the greater the user need for TGN.
It really should be taken up and made part of Open USD.
TGN is a proposed evolution in the FORM of technical drawing. Those drawings are instantiated in mental models through a process of significant cognitive effort. TGNs are a reflection of this instantiation in digital models. They give a much-needed assist, within digitally modeled environments, to processes innate in human visual processing, both autonomic and somatic.
TGN certainly doesn’t solve all problems. But it helps a lot with a fundamental set of problems in the area of human visual interpretation of complex environments.
TGN provides a more visually tangible means of getting human engagement in the loop. The purpose is to help humans grasp and better understand complex visual environments. Downstream of that human engagement, TGN could also give a boost to machine interpretation of realities both real and digital.
A one-page description of TGN, including a feature outline, a demo, and specification links, is on my website.
Just as in all model-handling environments, the (8) features of TGN already exist in Open USD. They’re just not:
- Packaged, for their specific function (1): clear expression, in somatic (non-autonomic) workflows, of visual attentive focus in the model, and communication of that focus.
- Standardized: TGN’s (8) individual features, and the packaging of those features for the on-target function (1), are not supported through code standardization in most apps. The proposed TGN OPEN CODE feature definitions and packaging, and their transformation to native feature equivalents in any app through a code transformation layer, would solve this (a minimal sketch of what that could look like follows below). AOUSD is obviously the ideal vehicle for that.
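To make the standardization idea concrete, here is a minimal, hypothetical sketch of a TGN feature package expressed in Open USD today, using ordinary custom, namespaced attributes as a stand-in for a real AOUSD-ratified schema. Everything TGN-specific here (the `tgn:` namespace, the attribute names, the `TgnRig` scope) is my own illustration, not an existing Open USD or TGN API; the `pxr` calls themselves are standard OpenUSD Python.

```python
# Hypothetical sketch: one "TGN rig" packaged as custom, namespaced USD
# attributes. A real standard would be an AOUSD-ratified schema; every
# "tgn:" name below is illustrative only.
from pxr import Usd, Sdf, Gf

stage = Usd.Stage.CreateNew("tgn_demo.usda")

# One TGN rig, grouped under a Scope prim inside the model.
rig = stage.DefinePrim("/World/TgnRig_SectionA", "Scope")

# Illustrative feature attributes: where visual attention is focused,
# the cutting direction of the view, and a human-readable label.
rig.CreateAttribute("tgn:focusCenter",
                    Sdf.ValueTypeNames.Point3f).Set(Gf.Vec3f(4.0, 0.0, 2.5))
rig.CreateAttribute("tgn:sectionNormal",
                    Sdf.ValueTypeNames.Vector3f).Set(Gf.Vec3f(0.0, 1.0, 0.0))
rig.CreateAttribute("tgn:label",
                    Sdf.ValueTypeNames.String).Set("Section A-A, level 2 slab edge")

stage.GetRootLayer().Save()
```

Because any Open USD-aware app can already read attributes like these, the proposed code transformation layer would be responsible for mapping them to each app’s native equivalents (section planes, annotations, saved views), which is exactly the kind of cross-app agreement AOUSD exists to coordinate.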
Models, models, models, a continuation of the same logic
It’s a continuation from times long pre-dating software: from mental models to digital models.
More models means more need for TGN.
It’s not a bad post:
https://tangerinefocus.com/an-important-development/
It includes a plain-language description, in outline form, of what the future of technical drawing (TGN) looks like.
It can continue to be automated in new ways or old, or done manually, or a mix of automatic and manual (like Burger King, have it your way).
Either way, what matters more is the form in which “drawing” is expressed.
Drawing will make a prison break out of the form it’s been locked into for centuries. Digital modeling makes this possible and inevitable.
As it transforms it will honor and amplify the function and purpose of its traditional form, and the transformation will be reversible in both directions, automatically.
If what I say is true, then why doesn’t TGN exist yet? Or, why hasn’t Open USD done TGN already?
It’s because there’s a difference between autonomic and somatic visual processing. It’s the latter, along with the techniques supporting it, that’s relied upon in technical domains like mechanical engineering and Architecture, Engineering and Construction (AEC).
Whole fields utilizing computer graphics, and much of the graphics industry itself, have been well occupied for decades with workflows (gaming, for example) that rely mostly on autonomic visual processing. Somatic visual processing, on the other hand, remains to date relegated to stasis and confinement in its antiquated form, externalized from digital models (though instantiated within mental models).
More about that difference, and about the as-yet unrealized potential of a proposed upgrade (TGN) to our technique for somatic visual processing, is in the post linked above. That somatic technique is what we need for getting an adequate grasp of complex spatial environments when we’re engaged in tasks, like design and construction, that exceed what autonomic processing delivers.
Jensen Huang also showed and told about NVIDIA hardware and software driving AI-generated graphics, particularly AI-generated models.
Here also, then, the proposed TGN has a logically sensible reason for being, even more so because of the coming preponderance of AI-generated models and the tools used to create and use them. I explained why in the one-page description of TGN at my website:
WHAT ABOUT AI?
You say,
What about AI, and AI-generated models?
They’re still models, aren’t they?
We still have to perceive, see, engage, think about, develop, interpret, understand, evaluate, improve, use, and exist in the models.
‘TGN’ engagement equipment within digital models (the expression of what drawing would look like if it were invented today, instead of long ago when models were mental and physical but not digital) is an optimal host for ‘control rods’.
CONTROL RODS
“Control rods” are human-in-the-loop inputs back into AI and other computational, iterative model-generating systems: a means of human guidance, of laying down control parameters, control markers, and control drivers within models, and of feeding these back into the model-generating systems. You understand the idea? They are optimally hosted within TGN rigs, within models.
You can see why, right? You can imagine a tremendous variety of such controls being developed, and made easily accessible, visible, close at hand, and intelligible within the context of these rigs.
Let your imagination loose. What control rods would you embed there to guide generative (AI or NI) development of the model?
By the way, notice at 1:26:18, Jensen Huang presents a 2D plan converted instantly into a 3D model of a warehouse (emphasis on [A] 3D model):

https://www.youtube.com/watch?v=3qSQjRaseos&t=5178s

With “control rods” developed for convenient visual access within TGN rigs inside that model, users would be empowered to tune [A] model, over subsequent AI tuning iterations, into [THE] model that suits their needs: one specified all the way to a constructable asset in the real world, AND one that has the rigging (TGN rigging) built into it that helps people understand it.
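As a thought experiment, here is one hedged sketch of what a single control rod could look like in practice: a user-pinned value stored inside a TGN rig, which a generative system reads back before its next pass and treats as a fixed constraint rather than a suggestion. The `tgn:rod:` naming and the constraint-gathering loop are hypothetical illustrations of the idea, not any existing API; it builds on the `tgn_demo.usda` file from the earlier sketch.

```python
# Hypothetical "control rod": a human-pinned constraint stored inside a
# TGN rig prim. A model-generating system would read all pinned rods
# before its next iteration and treat each as a hard parameter.
from pxr import Usd, Sdf

stage = Usd.Stage.Open("tgn_demo.usda")
rig = stage.GetPrimAtPath("/World/TgnRig_SectionA")

# The human, working visually inside the rig, pins a clearance value
# that subsequent AI tuning iterations are not allowed to shrink.
rod = rig.CreateAttribute("tgn:rod:minAisleClearance",
                          Sdf.ValueTypeNames.Float)
rod.Set(3.2)  # meters

# Gather every pinned rod as fixed input to the next generative pass.
constraints = {
    attr.GetName(): attr.Get()
    for attr in rig.GetAttributes()
    if attr.GetName().startswith("tgn:rod:")
}
print(constraints)  # {'tgn:rod:minAisleClearance': 3.2}
```

In the warehouse example above, rods like this are how [A] model becomes [THE] model: each iteration the system regenerates everything except what a human has deliberately pinned in place.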
You can help get TGN supported by AOUSD
I believe it will happen.
If you want this, contact me and let’s find the way to make it happen.
- LinkedIn: https://www.linkedin.com/in/robsnyder3333/
- email: robsnyder@tangerinefocus.com
