The way we engage visually with models is at least as important as the models themselves. Given that, BCF should have developed further along the path it set out on by now. What it does is necessary, but it’s not enough. It needs to develop more. I specify that further development here: https://tangerinefocus.com
To put it another way:
BCF is nice. But it’s underpowered.
Like the engines on the Airbus A340, it’s not quite up to the task:
And look at this:
“Sorry, I had it in 2D.”
The scene is from 2018. But is there any improvement on this problem in the new Mission: Impossible that just came out last week? I’ll get a ticket and go see, but I doubt it. They worked on the motorcycle jumps instead, which is a lot more important for the movie.
But in technical fields, motorcycle jumps are not the higher priority. They’re not more important than the means by which we engage visually with modeled environments in general.
The problem of effective engagement with digital models is so fundamental and so common that, yeah, it gets dramatized by Tom Cruise and Simon Pegg in a Mission: Impossible scene. That says something.
Our techniques of visual engagement with digital models are underpowered, their development inadequate
Let’s say there’s a globe of software development related to spatial digital modeling. And let’s say the globe has two hemispheres, as shown below. The right hemisphere is the models themselves. The left hemisphere is the side of the globe for drawing attention to what should be understood and making ourselves (or things) clear in models: our form(s) of visual engagement with models.

Development investment in the two hemispheres is very imbalanced.
“Saved views”, BCF, and drawing fusion included, what the software industry has done so far on the engagement side of the modeling globe is like constructing a few temporary huts, a few shacks, out in remote wilderness in the middle of an empty continent, while the digital modeling hemisphere is Dubai all over.
The imbalance undermines the modeling endeavor itself. That’s the problem. The thought and effort put into an entire hemisphere of the digital modeling “globe” is inadequate to the task.
TGN is a proposed boost on the underdeveloped side of the globe, a boost to our form of visual engagement with models for interpretive purposes. A boost to making things, and ourselves, clear. TGN is described on my new and concise (2 pages only!) website, both in long form and in outline form in “describe it so a 5-year-old can understand” format, here: https://tangerinefocus.com

TGN has 8 core features, all of which already exist in all modeling software. TGN packages those features together and transforms each of the 8 into an equivalent defined in an open standard, for portability to other modeling apps that implement TGN OPEN CODE.
I added an overall scope picture of the TGN development project to The Form of Engagement page (scroll to the middle of the page) of my new 2-page website:

See the simple idea:
(1) All the software features proposed for TGN functionality already exist in all modeling apps (more or less).
(2) Those existing features get transformed, in a code transformation layer, into TGN OPEN CODE, so TGN rigs created in one modeler can be shared with other modelers.
(3) TGN OPEN CODE is an open-standard definition of each of the 8 features. “We” can control what that definition is. The pioneering developer community can discuss, develop, and define the standard, and push it forward into Apache or some other open-source house.
The three concepts above hold for all existing modeling apps and formats today, and also for future model-format endeavors. There’s no conceptual barrier to continuity over ALL modeling platforms.
The code transformation layer (2) is what app developers will have to tune to their apps individually, matching the TGN OPEN CODE feature definitions to the equivalent features they already have in their own apps.
TGN is just about packaging existing features together into coherent function that’s very useful, and about open code standardization for portability.
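To make the shape of that arrangement concrete, here is a minimal sketch, in TypeScript, of what a TGN OPEN CODE rig and a per-app transformation layer could look like. Everything in it is hypothetical: the type names, fields, and functions are placeholders I’m using for illustration, not the actual standard, which would be whatever the developer community defines.

```typescript
// Hypothetical sketch only: a TGN OPEN CODE interchange document and a
// per-app transformation layer. Names and shapes are illustrative, not
// the actual standard.

// One of the 8 core features, expressed in app-neutral terms.
// (The real feature definitions would come from the open standard.)
interface TgnFeature {
  kind: string;                      // which of the 8 features this entry represents
  settings: Record<string, unknown>; // app-neutral parameters for that feature
}

// A TGN "rig": the 8 features packaged together so they function coherently,
// tied to the model they were authored against.
interface TgnRig {
  schemaVersion: string;  // version of the open standard the rig conforms to
  modelReference: string; // URI or ID of the model the rig belongs to
  features: TgnFeature[]; // the packaged features, in standard form
}

// The transformation layer each modeling app tunes to itself:
// map its native feature objects to the standard form and back.
interface TgnTransformLayer<NativeFeature> {
  toTgnOpenCode(native: NativeFeature[]): TgnRig;
  fromTgnOpenCode(rig: TgnRig): NativeFeature[];
}

// Sharing a rig between two hypothetical apps, A and B:
// A's native features -> TGN OPEN CODE -> B's equivalent native features.
function shareRig<A, B>(
  appA: TgnTransformLayer<A>,
  appB: TgnTransformLayer<B>,
  nativeRigInA: A[]
): B[] {
  const standardRig = appA.toTgnOpenCode(nativeRigInA);
  return appB.fromTgnOpenCode(standardRig);
}
```

The point is only the shape: the open standard sits in the middle, and each modeling app owns a small adapter on either side of it.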
TGN requires inventing no new rocket science, and it really is a well-bounded development endeavor. It will also give a large return on development investment, at least in terms of user satisfaction: it will raise users’ level and depth of engagement with models and increase their understanding and enjoyment of them. It should benefit everyone involved in modeling, both software developers and real-world model users. And…
Tom Cruise can stick to motorcycles and not return to sad jokes about poor engagement with, and misinterpretation of, digital models
The basic nature of the visual engagement with the model that would make Tom Cruise’s problem NOT HAPPEN is built into TGN (features 3 and 4).
Those are not built into BCF.
TGN includes within itself the essential visual behavior that makes the problem dramatized in that Mission: Impossible scene NEVER HAPPEN.
BCF doesn’t.
It’s important to note that BCF also lacks other features essential to the mission.
What mission?
The mission of drawing attention to what needs to be understood, and making ourselves clear, in models.
