TGN: A DIGITAL MODEL INTERACTIONS FORMAT STANDARD

Modeled environments are extremely complex oceans of information, and the methods and techniques for interpreting them have, to date, evolved inadequately. Machine learning, AR/MR, and similar technologies address pieces of the problem, but a generalized approach to sense-making in complex digital environments remains underdeveloped.

This document is a software development specification that addresses the problem. Its purpose is to clarify modeled digital environments through ATTENTION-FOCUSING rigs: TGN rigs.

TGN rigs improve the attention-focusing, interactive close study of digital models (including digital twins). They make user engagement with complex data more effective, more interactive, more clarifying, more communicative and expressive, and more revealing of insight. TGN might even bring the fun back into serious technical work by elevating the level of interpretive engagement in digital media.

TGN also provides a framework for further interpretation by machine cognition, and for human interaction with cognitive systems, applied against spatial digital models via deepQA apps.

TGN SPECIFICATION DOWNLOAD

TGN: a digital model INTERACTIONS format standard (Apple Book)

TGN: a digital model INTERACTIONS format standard (ePub)

TGN: a digital model INTERACTIONS format standard (iCloud)

https://books.apple.com/us/book/tgn/id1591434041

TGN DISCUSSION AND DEMONSTRATION VIDEO PLAYLIST:

01 TGN: rigging for insight https://youtu.be/CGXrk9nGj0Y  (2:16)

02 TGN: what is TGN exactly? https://youtu.be/byIW0T8MCsk  (5:35)

03 TGN: demonstration https://youtu.be/wTh2AozTHDc  (3:40)

A self-critique of this demo is here:

04 TGN: portability https://youtu.be/Je859_cNvhQ  (5:17)

05 TGN: industry value https://youtu.be/Ka0o1EnGtK4  (9:27)

(the dev platform I mention in the videos is iTwin.js, but TGN can be developed on any platform where TGN is useful and desired)

The industry doesn’t need great new features (nor old features packaged in a very effective new way) siloed in yet another new app. What it needs is a framework for attention-focusing rigs (TGN rigs) within modeled environments of all kinds, with portability of rigs from app to app, platform to platform.

There should be a TGN standard core, managed across vendors, to support cross-platform TGN expression with reliable fidelity. Above the TGN core there can be domain- and app-unique TGN enhancements that support rig functions specific to a domain or to a vendor's app constellation. TGN should ride both waves: a standardization wave and a differentiation wave. The ever-evolving standardized core creates opportunity for a diversity of new differentiation, in existing apps and platforms, and for new apps, even, I'd say, new apps founded on TGN functionality. Anyone doing this will benefit from the TGN standard core.
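The core-plus-extensions design above can be sketched in code. This is a hypothetical illustration only: the TGN spec itself defines the actual format, and every field name below (`tgn_core`, `viewing_arc`, the vendor namespace) is an assumption invented for the example. The point it shows is the contract that makes portability work: any conforming app can act on the shared core, while vendor extensions it does not understand must survive a round-trip untouched.

```python
import json

# Hypothetical TGN rig interchange document: a standardized core that
# every conforming app understands, plus namespaced vendor/domain
# extensions that apps may ignore but must preserve on round-trip.
# All field names are illustrative, not taken from the TGN spec.
rig = {
    "tgn_core": {                      # the cross-vendor standard core
        "version": "1.0",
        "name": "pump-room-inspection",
        "viewing_arc": {"center": [12.0, 4.5, 3.0], "radius": 8.0},
        "targets": ["element-4711", "element-4712"],
    },
    "extensions": {                    # domain/app-unique enhancements
        "com.example.vendor": {"annotation_style": "callout-v2"},
    },
}

def round_trip(doc: dict) -> dict:
    """Serialize and re-parse a rig, as two apps exchanging rigs would."""
    return json.loads(json.dumps(doc))

def core_targets(doc: dict) -> list:
    """A consuming app reads only the core fields it understands."""
    return doc["tgn_core"]["targets"]

restored = round_trip(rig)
# Unknown extensions survive the exchange (fidelity), while any
# conforming app can act on the shared core.
assert restored["extensions"] == rig["extensions"]
print(core_targets(restored))  # ['element-4711', 'element-4712']
```

The design choice this illustrates is deliberate: differentiation lives in namespaced extensions rather than in forks of the core, so a rig authored in one vendor's app degrades gracefully, rather than breaking, in another's.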

Other articles:

https://tangerinefocus.com/2021/11/18/the-future-of-technical-drawing-rev-1/ – a short summary of TGN rig features (including the built-in viewing arc plus the rest of what comprises a TGN rig)

https://tangerinefocus.com/2021/11/09/tgn-a-framework-for-further-interpretation-by-machine-cognition-and-human-interaction-with-cognitive-systems-applied-against-spatial-digital-models-via-deepqa-apps/  – this is for those who want to look further, at what can happen AFTER attention-focusing TGN rigs are in use clarifying models.


This post collects examples of focused attention and of failures of attention: the difficulty of achieving attention, the significant risk of communication failure even when attention is achieved, the built-in limitations of human attention, and the necessity of attention nonetheless. If you read this post to the end, I think you'll appreciate attention in ways that may not have occurred to you before.

The discussion serves as an introduction to a software development proposal for attention-focusing "TGN rigs" within digital modeled environments in industrial domains like mechanical engineering, GIS, and the AECO (Architecture, Engineering, Construction, and Operations) industry.

Try to imagine a person unable to focus, literally unable to pay attention. Imagine an architect, engineer, builder, or facility operator unequipped with the tools to articulate focused attention within digital modeled media. Imagine being in this condition because software products omit the equipment users need to develop and share their own acts of focused attention within very complex digital worlds.

What would you say about the maturity of a software industry — and the digital media it produces — devoid of the equipment users require for focusing attention?

TGN is a framework for developing the equipment needed to make sense of and clarify complex digital modeled worlds both during their creation (design) and use (construction and operations). 


Tangerine 2021 Model Interpretation Techniques

For more than 20 years in the Architecture, Engineering, and Construction industry, software development has focused on model creation methods. Development of model interpretive techniques has been secondary. There is activity there, but it’s underdeveloped. The most traditional interaction/interpretive technique used for making sense of models is drawing, and development of that technique has stagnated through exclusive focus on automation, i.e., the creation of drawings automatically from models.

Perhaps over the next 20 years, development will shift back to much-needed improvement in techniques of model interaction and interpretation. The downloadable document below is a brief outline proposal of potential development paths. It’s written for anyone interested in advancing digital modeling software through further development of model interaction/interpretation techniques.

Öresund Bridge, frozen sea, February 12, 2021

Earlier Innovations (2012)

The CAD/BIM innovation demonstrated in the videos below has been both commercialized (2012) and patented (2015/16). It realized Rob Snyder's drawing-model fusion ideas while he was employed at the software company Bentley Systems.

Since 2012, seven software companies have implemented automated drawing-in-model fusion (first generation): Bentley, Graphisoft (in BIMx Docs, mobile), Dalux, Revizto, Morpholio and Shapr3D (working together), and now Tekla too.

Chapter 3 of the Tangerine Media Innovation Spec 2018 specifies a framework for a second generation (version 2) of this fusion (that blows the doors off the first-gen work).

Simply stated, the first-gen innovation automatically displayed CAD drawings in-situ at their correct location within digital 3D models (and within other kinds of multi-media spatial visual environments). The fusion was referred to commercially as “hypermodel”, and has been marketed by Bentley in their products including their CAD application, MicroStation. Since the fusion was introduced there in 2012, other software companies have picked up on this and implemented their own versions of drawing-model fusion.
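The core geometry behind displaying a drawing in-situ can be sketched briefly. This is a minimal illustration of the general idea, not Bentley's or anyone's implementation: a 2D drawing lives on a section plane inside the 3D model, and the plane is described here by an origin plus two unit basis vectors (u along the drawing's x-axis, v along its y-axis). All names and numbers are invented for the example.

```python
# Minimal sketch: mapping 2D drawing coordinates onto the drawing's
# section plane inside a 3D model, so the drawing displays in-situ at
# its correct location. Illustrative only; not any vendor's actual API.

def to_world(pt2d, origin, u, v):
    """Map a 2D drawing coordinate (x, y) to a 3D world point:
    world = origin + x*u + y*v."""
    x, y = pt2d
    return tuple(o + x * ui + y * vi for o, ui, vi in zip(origin, u, v))

# A vertical section plane at world x = 5: the drawing's x-axis runs
# along world y, and its y-axis runs along world z.
origin = (5.0, 0.0, 0.0)
u = (0.0, 1.0, 0.0)
v = (0.0, 0.0, 1.0)

print(to_world((2.0, 3.0), origin, u, v))  # (5.0, 2.0, 3.0)
```

With a mapping like this, every line of a 2D sheet can be drawn in the 3D scene at the exact place it documents, which is the essence of the hypermodel fusion described above.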

TangerinePub

The following videos demonstrate the commercialized drawing-model fusion in the MicroStation CAD application. Click the demo playlist link for a longer set of demo videos, or scroll down for videos on this page:

September 2011



More videos here: https://www.youtube.com/playlist?list=PLAiyamA5WoZbdfVlrOFLbrgF8AEyi2Fma

The drawing-model fusion demonstrated in the videos above represents software technology that’s been patented. Rob Snyder is first-named author of two of these patents:

INTEGRATED ASSEMBLAGE OF 3D BUILDING MODELS AND 2D CONSTRUCTION DRAWINGS
Issued Nov 3, 2015, US 9,177,085
See patent: http://1.usa.gov/1Hx33bZ

Multi-dimensional artifact assemblage for infrastructure and other assets with interface node mediators
Issued Jul 5, 2016, US 9,384,308
See patent: http://bit.ly/2ifKkpB

Hypermodel-Based Panorama Augmentation
Issued Oct 4, 2016, US 9,460,561
Description: Integrated Assemblage of 2D Drawings and Panoramic Images
See patent: http://bit.ly/2iFxGRF

This earlier work, above, was an important first step in realizing the idea of drawing-model fusion. It recognizes the unique value of two very different kinds of media, drawing and modeling, and the significance of their fusion.

In 2016, Rob Snyder started a new company, Tangerine, dedicated to two things:

  1. To bring new innovations related to the idea of drawing-model fusion to existing software companies. Tangerine has produced a written spec for this, described here: Tangerine Media Innovation Spec 2018. This new work well and truly surpasses the earlier work and breaks completely free from it, rendering the patents on that earlier work obsolete and irrelevant.
  2. To research and develop what happens at the intersection of cognitive systems, like IBM Watson, and Tangerine Media.

Tangerine will collaborate with any software company interested in building the future of media: Contact Us