Modeled environments are extremely complex oceans of information, and the methods for interpreting them have, to date, evolved inadequately. Machine learning, AR/MR, and other technologies each help in their own way, but a generalized approach to sense-making in complex digital environments remains underdeveloped.
This document is a software development specification that addresses that problem. The purpose of the proposed development is to make modeled digital environments clear through interpretive interaction rigs: TGN rigs.
Improved mechanisms for interactive close study of digital models (including digital twins), through TGN rigs, make user engagement with complex data more effective, more interactive, more clarifying, more communicative and expressive, and more revealing of insight. TGN might even bring the fun back into serious technical work by elevating the level of interpretive engagement in digital media.
TGN specification download
TGN discussion and demonstration video playlist:
01 TGN: rigging for insight https://youtu.be/CGXrk9nGj0Y (2:16)
02 TGN: what is TGN exactly? https://youtu.be/byIW0T8MCsk (5:35)
03 TGN: demonstration https://youtu.be/wTh2AozTHDc (3:40)
04 TGN: portability https://youtu.be/Je859_cNvhQ (5:17)
05 TGN: industry value https://youtu.be/Ka0o1EnGtK4 (9:27)
(the dev platform I mention in the videos is iTwin.js, but TGN can be developed on any platform where TGN is useful and desired)
The industry doesn’t need great new features (or old features packaged together in an effective new way) siloed in yet another new app. What it needs is a framework for attention-focusing rigs (TGN rigs) within modeled environments of all kinds, with portability of rigs from app to app and platform to platform.
There should be a TGN standard core, managed across vendors, that supports cross-platform TGN expression with reliable fidelity. Above the TGN core there can be domain- and app-specific TGN enhancements that support rig functions unique to a domain or a vendor's app constellation. TGN should ride both waves: a standardization wave and a differentiation wave. It’s tricky surf. Easy to end up on the rocks. But I think surfing just one of the waves is even worse, rockier. Gotta do both. If the standardized core happens, it creates opportunity for a lot of new differentiation. Even for new apps. Even, I’d say, new apps founded on TGN functionality. Anyone doing this will benefit from the TGN standard core.
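To make the core-plus-extensions idea concrete, here is a minimal TypeScript sketch of what a portable rig record could look like. Everything here is illustrative assumption, not part of any published TGN specification: the interface names, the fields, and the "viewing arc" shape are placeholders. The point is the structure: a standardized core that any conforming app can render with fidelity, plus namespaced extension slots that a vendor or domain can populate and that other apps can safely ignore.

```typescript
// Hypothetical portable TGN rig record: a standard core plus
// namespaced vendor/domain extensions. All names are illustrative.

interface TgnRigCore {
  id: string;      // stable identifier, so the rig survives app-to-app transfer
  title: string;   // what this rig is directing attention to
  target: string;  // reference to an element in the host model
  viewingArc?: {   // placeholder for the built-in viewing arc
    center: [number, number, number];
    radiusMeters: number;
  };
}

interface TgnRig extends TgnRigCore {
  // Namespaced extensions, e.g. "vendorX.annotations". A core-only
  // viewer renders TgnRigCore and ignores keys it doesn't understand.
  extensions?: Record<string, unknown>;
}

// Strip a rig down to its portable core before exporting to another app.
function toCore(rig: TgnRig): TgnRigCore {
  const { extensions, ...core } = rig;
  return core;
}

const rig: TgnRig = {
  id: "rig-001",
  title: "Pump room clearance check",
  target: "model/elements/4711",
  viewingArc: { center: [0, 0, 1.5], radiusMeters: 6 },
  extensions: { "vendorX.annotations": { style: "callout" } },
};

// The core survives the round trip; vendor extensions are dropped,
// which is exactly the "reliable fidelity" contract of the core.
console.log(toCore(rig).title);
```

The design choice worth noting: putting differentiation behind namespaced keys (rather than subclassing the core) is what lets the standardization wave and the differentiation wave coexist, because an app that knows nothing about "vendorX" still renders the rig correctly at core fidelity.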
I’ve had some contact about TGN with a couple of people at buildingSMART, and some promising conversations with a few software companies. But I haven’t reeled anything in yet. I know from experience these things can take a long time. But with a spark, things can happen. Soon? I’m an optimist.
I keep blogging about it.
https://tangerinefocus.com/2021/10/29/tgn-critique/ – self-critique of the TGN demo video
https://tangerinefocus.com/2021/11/18/the-future-of-technical-drawing-rev-1/ – a short summary of TGN rig features (including the built-in viewing arc plus the rest of what comprises a TGN rig)
https://tangerinefocus.com/2021/11/15/the-future-of-technical-drawing/ – the same summary but including a bunch of personal commentary about the industry and how I got this way (obsessed with attention-focusing rigs)
https://tangerinefocus.com/2021/11/09/tgn-a-framework-for-further-interpretation-by-machine-cognition-and-human-interaction-with-cognitive-systems-applied-against-spatial-digital-models-via-deepqa-apps/ – for readers who want to look further ahead: what can happen AFTER TGN rigs are in use clarifying models
https://tangerinefocus.com/ – the general intro at the top of my infinite-scroll homepage