AFR rigging TGN Core

This short video explains WHY, WHAT, and HOW we’ll build the open source TGN Core feature set of the broader Attention Focusing Rigs (AFR) concept in all modeling software:

A webinar is coming up. Invitations coming soon. Software companies and users will be there. The webinar is intended to kick off development of TGN collaboratively across the industry.

AEC models are very complex. So why would we stop innovating our lens for looking at them? Why stop, especially given all the graphics capability supplied by 30+ years of radical innovation in digital modeling? Better equipment for investigating these models ONLY MAKES SENSE. Because with better AFR equipment, a better lens, we can see, think, communicate, and understand better. Ask better questions, and get better answers. OR AT LEAST: ask clearer questions and get clearer answers.

https://tangerinefocus.com/

About the video above, a friend said:

That’s a really interesting video. Thanks for sharing, Rob.

Could you explain with an analogy: is TGN-AFR similar to BCF? BCF contains a definition of a predefined model view related to an issue, one that is theoretically transferable from one software to another, so that when you open the BCF issue in the “receiving” software, it shows you, or takes you to, the related model view. But if I understand correctly, TGN-AFR is like BCF on “steroids,” complete with all sorts of camera controls, etc.

Yes, it is like BCF in those ways. It is also like technical drawing. Conceptually, it’s a combination of three things:

  • BCF-like portability, 
  • the functional legacy of technical drawing, and 
  • camera rigging (a camera on a path, built into the rig)

TGN is v2.0 of drawing-model fusion (“hypermodel”). Hypermodel was my proposal, and I led the team that first built it in 2012: https://tangerinefocus.com/tgn/earlier-media-innovations/

TGN moves that fusion further and solves some of its problems and shortfalls: it’s clearer, more communicative, more legible, and potentially far more visually expressive and engaging for both authors and viewers. It’s also designed to be portable, not confined to any single modeling app or even to BIM apps; TGN rigs can be shared into metaverses, across authoring apps, into NeRFs, reality-captured models, and so on.

In more detail, the TGN proposal has 8 core features. These are described in detail in the developer specification downloadable from Tangerine’s homepage. There is further discussion here: AFR/TGN proposed development, https://tangerinefocus.com/2022/10/17/afr-tgn-proposed-development/

Here is a demo video:

Narrated demo here:

https://youtu.be/iUt-if0O59g?t=722 

More videos: https://www.youtube.com/@RobertSnyder3333/videos

My friend asked another question:

I had another thought while watching your video, Rob. Could manufacturers of component products used in construction build TGN-AFR rigs into their “objects,” so that as an object is downloaded and inserted into a project, the TGN-AFR is also available in the project if someone wants to quickly go find and view that component (without the designers having to set up the views)?

Sure! TGN rigs can be created:

  • manually, the way people create any number of specific drawings today.

Rigs can also be created automatically, in the way you mentioned and in other ways:

  • Computational design systems, for example, and AI drawing-automation systems, can create TGN rigs automatically.
  • Simple automatic upgrades are possible too: every BIM view or BIM drawing could be upgraded automatically to TGN, and thereby made more visually engaging, expressive, and communicative, and also portable outside the confines of the originating BIM app (a minimal sketch of this kind of upgrade follows below).
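
To make the “automatic upgrade” idea a bit more concrete, here is a minimal TypeScript sketch. None of this is the actual TGN schema or API; the type names (BimView, TgnRig), the field names, and the upgradeViewToRig helper are hypothetical, chosen only to illustrate how an existing BIM view, or a manufacturer’s product object, might be wrapped into a rig automatically.

```typescript
// Hypothetical shapes -- an illustration, not the published TGN schema.
type Vec3 = [number, number, number];

interface BimView {
  name: string;
  cropBox: { min: Vec3; max: Vec3 };  // the view's existing crop / scope box
  cameraEye: Vec3;
  cameraTarget: Vec3;
}

interface TgnRig {
  label: string;
  volume: { min: Vec3; max: Vec3 };   // feature 2: the rig's scope box
  cameraPath: Vec3[];                 // feature 3: built-in camera path
}

// Upgrade an ordinary BIM view (or a product object's view) into a TGN rig,
// using a short back-and-forth orbit around the view's target as the
// default camera path.
function upgradeViewToRig(view: BimView, steps = 16): TgnRig {
  const [tx, ty] = view.cameraTarget;
  const dx = view.cameraEye[0] - tx;
  const dy = view.cameraEye[1] - ty;
  const path: Vec3[] = [];
  for (let i = 0; i < steps; i++) {
    const a = -Math.PI / 12 + (Math.PI / 6) * (i / (steps - 1)); // sweep ±15°
    path.push([
      tx + dx * Math.cos(a) - dy * Math.sin(a),
      ty + dx * Math.sin(a) + dy * Math.cos(a),
      view.cameraEye[2],
    ]);
  }
  return { label: view.name, volume: view.cropBox, cameraPath: path };
}
```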

This will get more people better engaged with more models, which will lead to more people understanding models far better. That, in turn, will result in the models themselves getting better.

Here’s the text of the video:

Thank you for being here. Some of you have heard me talking about things for a long time. And some just recently. Welcome!  I’ll make this concise. I’ll talk about WHY we need to do something. WHAT we need to do. And HOW we can do it, working together.

I see two things in this photo: the universe (or part of it), and a telescope. And by the way, you use the telescope to LOOK AT the universe.

Do you know what a telescope is, functionally? It NARROWS YOUR FOCUS. IT EXCLUDES A LOT FROM VIEW. IT DRAWS YOUR ATTENTION TO WHAT’S BROUGHT INTO VIEW, AND AMPLIFIES IT. THIS IS what a telescope is, functionally.

A DRAWING IS like a telescope. It’s a LENS FOR LOOKING AT MODELS. It NARROWS YOUR FOCUS, EXCLUDING A LOT FROM VIEW. IT DRAWS YOUR ATTENTION TO WHAT’S BROUGHT INTO VIEW AND AMPLIFIES IT. This is what drawing is, functionally. You look at models THROUGH THE LENS of drawing. Let me introduce a TERM: Drawing is an ATTENTION FOCUSING RIG. Just like a telescope is an ATTENTION FOCUSING RIG. And notice this: with your attention focused, THEN YOU THINK ABOUT WHAT YOU SEE. That’s a coupling. Focus drives THINKING.

The attention focusing rig ELEVATES OUR INTERPRETIVE POWER AND DEVELOPS OUR UNDERSTANDING of models, or the universe if the rig is a telescope.

We don’t talk about this much these days. All the talk is about MODELS, mostly. And rightfully so…

Modeling transformed RADICALLY 30 years ago. A COLOSSAL SHIFT… FROM MENTAL MODELS TO DIGITAL MODELS. The impact of this cannot be overstated.

It spawned a lot of innovation. Here are some milestones. Remember when these came out? I’m just skimming the surface over 30 years of highlights. There’s depth to each of them. So, modeling gets the attention it deserves. 

Attention focusing rigging (AFR), the lens for looking at models, gets less attention, less development. It gets left at the station, no seat on the innovation train. Look. It looks SAD!  Can you see that? AWWW. So sad (haha).

There’s been some work on digital drawing though:

Mostly automation. Starting with BIM in 1987, that long ago, models drive drawing graphics. Very recently, drawing automation has been getting an AI front end (as with SWAPP). Automation matters. No doubt about it.

We’ve also had fusion. That was my earlier work, which I proposed and led at Bentley. We developed automated drawing-model FUSION and released it in 2012. Over the last 11 years this fusion has appeared in software from 9 companies that I know of, including Revizto (shoutout to Revizto).

Fusion puts drawings where they really are within models. It helps with interpreting. Fusion is really basic, fundamental, like the fusion of sound into silent film a hundred years ago.

Are these, automation and fusion, the last stops on the innovation train for drawing? Is that enough evolution of the LENS FOR LOOKING, the rigging that engages thought and builds understanding?

I don’t think so! And NASA doesn’t either. Here’s the James Webb Space Telescope. Hubble wasn’t enough so they built this. Note the evolution in the telescope’s FORM.

Look at the top right of the page. The OPTICAL TELESCOPE ELEMENT (OTE), which includes the PRIMARY MIRROR, comprised of, I quote, 18 hexagonal segments made of beryllium… and coated with gold… to capture faint infrared light:

That’s really something. A better attention focusing rig, a better lens, to see better, to see further.

Should we go further too? I mean, that’s a really dumb question. Isn’t it?

AEC models are very complex. So why would we stop innovating our lens for looking at them? Why stop, especially given all the graphics capability supplied by 30+ years of radical innovation in digital modeling? Better equipment for investigating these models ONLY MAKES SENSE. Because with better AFR equipment, a better lens, we can see, think, communicate, and understand better. Ask better questions, and get better answers. OR AT LEAST: ask clearer questions and get clearer answers.

So what would a better lens, a better AFR rig, look like?

Some requirements:

  • it has to articulate attentive focus in models and make things clear
  • it has to correspond with normal human visual perception
  • it has to be better at making things clear than conventional drawing
  • it has to be EASIER to create and use
  • it has to INCLUDE conventional drawing but surpass it too
  • it has to be portable, shareable, and open source, so it is widely adoptable across the industry

So, AFR could take many potential forms. Imagine them yourself. I described 20 or so AFR features in the developer specification I wrote and put on my website. You can do the same. Imagine whatever you want in the modeling software you develop or use.

I JUST PROPOSE THIS:

AFR development everywhere, in all modeling software, should share 8 open source CORE FEATURES. These 8 features, I believe, are the minimum set that A) makes a significant improvement in human attentive focus within models and B) corresponds with the way human vision actually works. That correspondence comes mainly through features 2 and 3: the rig volume (a rig’s scope box) and the built-in camera path for moving back and forth around the scope box in a controlled way.

This has to do with the way that beings like us, with eyeballs (we’re not worms), actually operate in the world. We engage our visual focus with a bit of back-and-forth movement around the object of focus. The purpose of this is twofold:

  1. Contextualizing the object of focus in space, in relation to other items, AND
  2. Solving for visual ambiguity. We perform checks against ever-present optical illusion. We miss some, but we disambiguate a lot. The motion helps. We look at things from many angles, with continuity.

So feature 3, the built-in camera path, is fundamental. It’s not optional. If it’s not in the software, we’re doing it mentally anyway. Think about how this works in the real world. Think, too, of the development of camera rigging in the history of film; it is significant. We need to bring this into digital modeling in a controlled way. By the way, watch people engage with digital models: they spend a lot of their time spinning them around. They’re doing that motion for a reason. It’s innate to visual perception, the way the mind works, cognition. So we need to bring that in, and develop finer control over that motion as an integral part of our better lens, the better attention focusing rigging that we’re going to build.
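
To illustrate what “finer control over that motion” could look like in code, here is a small sketch of a controller that rolls a camera back and forth along a rig’s built-in path. The class name, parameters, and interpolation are my own assumptions for this example, not part of the TGN specification.

```typescript
// A rig's built-in camera path, with a controller that rolls the camera
// back and forth along it -- the small orbiting motion that contextualizes
// the object of focus and helps resolve visual ambiguity.
type Vec3 = [number, number, number];

class CameraPathController {
  private t = 0;          // 0..1 position along the path
  private direction = 1;  // +1 rolling forward, -1 rolling backward

  constructor(private path: Vec3[], private speed = 0.25) {}

  // Advance by dt seconds and return the interpolated camera position.
  step(dt: number): Vec3 {
    this.t += this.direction * this.speed * dt;
    if (this.t >= 1) { this.t = 1; this.direction = -1; }
    if (this.t <= 0) { this.t = 0; this.direction = 1; }

    const f = this.t * (this.path.length - 1);
    const i = Math.min(Math.floor(f), this.path.length - 2);
    const u = f - i;
    const [a, b] = [this.path[i], this.path[i + 1]];
    return [
      a[0] + (b[0] - a[0]) * u,
      a[1] + (b[1] - a[1]) * u,
      a[2] + (b[2] - a[2]) * u,
    ];
  }
}
```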

So what would it look like, the better attention focusing rig, in the models? I mocked that up and demonstrate it (see the videos at the top of this post). A lot of you have seen the demos before. If you haven’t, please go to https://tangerinefocus.com/ and take a look; the videos are on the homepage. There are also links to the developer specification and some other articles. They cover the 8 core features and many other potential features. This stuff is not written in STONE. It’s just, I think, a very good conceptual framework plus a significant amount of specification, more than enough to get development started.

Let’s go into the developer perspective. Packaged together, the 8 CORE features (here they are in the diagram) can be called TGN. TGN is a proposed standardized schema for model engagement through visual attentive focus. It is the minimum core feature set intended to address the dire need for attentive, focused clarity within models.

TGN targets professional grade visual attentive focus, its articulation and communication within models.

The TGN feature set can be extended, improved, tuned, AND additional features can be added by anyone, but anyone doing AFR work will benefit from the open source TGN core feature set, which they can include, build on and extend.

Let’s be clear about something that should be obvious. NONE OF THESE FEATURES REQUIRE NEW INVENTION. They’re not EVEN UNIQUE in any way. You probably have all of them IN YOUR APPS already. What’s new is the way they’re packaged together to make them actually EFFECTIVE, at addressing this critical industry-wide problem: THE NEAR TOTAL ABSENCE OF VISUAL ATTENTIVE FOCUSED CLARITY WITHIN MODELS.

So for developers, here I highlight, as an example, 3 of the 8 features: features 2, 3, and 4.

TGN Rig VOLUME, TGN Camera PATH, and TGN Engagement UI. Look at 2. The white boxes at the top are what you have already. You can define a BOX. You can use it as a display clipper. You already have that feature. The same is true for 3 and 4. You create a camera. You put it on a path. You transform it between perspective and parallel projection. You have a UI, probably buried somewhere, for rolling the camera back and forth along its path.
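
For example, a neutral, app-agnostic description of features 2, 3, and 4 could look something like the sketch below. The field names are purely illustrative; the real vocabulary would be whatever the industry agrees on in the TGN spec.

```typescript
// A sketch of a neutral, app-agnostic description of features 2-4.
// Field names are illustrative, not an agreed TGN vocabulary.
type Vec3 = [number, number, number];

interface TgnRigVolume {        // feature 2: the rig's scope box
  min: Vec3;
  max: Vec3;
  clipsDisplay: boolean;        // acts like the display clipper apps already have
}

interface TgnCameraPath {       // feature 3: the built-in camera path
  waypoints: Vec3[];
  target: Vec3;
  projection: "perspective" | "parallel";
}

interface TgnEngagementUi {     // feature 4: rolling the camera along its path
  pathPosition: number;         // 0..1 -- the position the user rolls back and forth
  playbackSpeed: number;
}

interface TgnRigCore {
  volume: TgnRigVolume;
  cameraPath: TgnCameraPath;
  engagement: TgnEngagementUi;
}
```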

So, what’s needed for TGN is this: you have to code a transformation layer.

Write new code to take what you have and transform it into a neutral description. That neutral description needs to sink down and be stored in the TGN rig (the rig is a data package). That way, users can share rigs with others using other modeling apps for other purposes. Sharing is feasible among all apps that implement the TGN codebase; there’s no MAGIC. They have to implement the TGN codebase, and then the neutral descriptions can be re-transformed into the language of the target app. So we have to agree on the neutral descriptions, and it’s up to each app developer to make the transformation layer work in their app. It has to work in both directions: from the SOURCE app… TO… the NEUTRAL DESCRIPTION, and back again, from the NEUTRAL DESCRIPTION… TO your idiosyncratic SOURCE app.
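
As a hedged sketch of what that transformation layer might look like in code: the interface and function names below (TgnTransformLayer, exportRig, importRig) are assumptions for illustration, and the rig is shown traveling as JSON simply because it is a convenient neutral data package; the real TGN codebase would define its own contracts.

```typescript
// Each app's job: a bidirectional mapping between its own idiosyncratic
// view/camera objects and the neutral TGN rig description.
// `NeutralRig` stands for whatever neutral schema the industry agrees on.
interface TgnTransformLayer<NativeView, NeutralRig> {
  toNeutral(view: NativeView): NeutralRig;    // SOURCE app -> NEUTRAL description
  fromNeutral(rig: NeutralRig): NativeView;   // NEUTRAL description -> target app
}

// The rig travels between apps as a plain data package, e.g. serialized JSON.
function exportRig<R>(rig: R): string {
  return JSON.stringify(rig);
}

function importRig<V, R>(payload: string, layer: TgnTransformLayer<V, R>): V {
  return layer.fromNeutral(JSON.parse(payload) as R);
}
```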

After the NEUTRAL descriptions AND THE TRANSFORM LAYERS are coded, then the features have to be stitched together in a way that makes them operate together. 

So that’s WHY and WHAT. Now HOW are we gonna do this?  Let’s divide up the feature development among different software companies. We’re asking you: software companies, please volunteer for the features you want to work on. And we all share the feature code with each other. No one has to do it all. Just do a bit, and get the rest.

Some of these can be grouped. If you want to dive into this, the industry will thank you! Together we can collaborate and decide who wants to do what. 

So, the challenge. Code the neutrals and code the transformations. Then, together we’ll stitch the features together. The red thread.

I conclude now. If these things happen, or I should say, when they happen, then we’ll have a CODEBASE for TGN. Which is to say, THE INDUSTRY WILL HAVE IT. And the industry SHOULD have it.

And we’ll also have a developer community for TGN.

With those 2 things, then what? TGN as part of an existing open source software foundation? Which one?

WHO CAN SAY?? 

DEVELOPERS AND USERS, WE NEED YOU. I HOPE YOU ALL WILL PARTICIPATE. Let me know if you want to jump into this… You can write me at robsnyder@tangerinefocus.com. Welcome all… LET’S BUILD THIS!
