TGN: a digital model INTERACTIONS format standard

Modeled environments are extremely complex oceans of information, and the techniques for interpreting them have, to date, evolved inadequately. There are various approaches, such as machine learning and AR/MR, but a generalized approach to sense-making in complex digital environments remains underdeveloped.

This document is a software development specification that addresses that problem. The purpose of the proposed development is to make modeled digital environments clear through interpretive interaction rigs: TGN rigs.

Improved mechanisms for interactive close study of digital models (including digital twins), through TGN rigs, make user engagement with complex data more effective, more interactive, more clarifying, more communicative and expressive, and more revealing of insight. TGN might even bring the fun back into serious technical work by elevating the level of interpretive engagement in digital media.

TGN also provides a framework for further interpretation by machine cognition, and for human interaction with cognitive systems, applied to spatial digital models via DeepQA-style apps.

TGN specification download

TGN: a digital model INTERACTIONS format standard (Apple Book)

TGN: a digital model INTERACTIONS format standard (ePub)

TGN: a digital model INTERACTIONS format standard (iCloud)

TGN discussion and demonstration video playlist:

01 TGN: rigging for insight  (2:16)

02 TGN: what is TGN exactly?  (5:35)

03 TGN: demonstration  (3:40)

04 TGN: portability  (5:17)

05 TGN: industry value  (9:27)

(The dev platform I mention in the videos is iTwin.js, but TGN can be developed on any platform where TGN is useful and desired.)

The industry doesn’t need great new features (nor old features packaged together in a very effective new way) siloed in yet another new app. What it needs is a framework for attention-focusing rigs (TGN rigs) within modeled environments of all kinds, with portability of rigs from app to app, platform to platform.

There should be a TGN standard core, managed across vendors, to support cross-platform TGN expression with reliable fidelity. Above the TGN core there can be domain- and app-specific TGN enhancements that support rig functions unique to a domain or vendor app constellation. TGN should ride both waves: a standardization wave and a differentiation wave. It’s tricky surf. Easy to end up on the rocks. But I think surfing just one of the waves is even worse, rockier. Gotta do both. If the standardized core happens, that creates opportunity for a lot of new differentiation. Even for new apps. Even, I’d say, new apps founded on TGN functionality. Anyone doing this will benefit from the TGN standard core.
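As a purely illustrative sketch of the core-plus-extensions idea above (every name here is invented for illustration; no actual TGN schema is defined in this post), a portable rig could pair a standardized core that every app must understand with namespaced vendor extensions that non-supporting apps can safely ignore:

```typescript
// Hypothetical sketch only: a standardized rig core plus vendor extensions.
// None of these names come from a real TGN schema.

interface TgnRigCore {
  id: string;        // stable identity, so the rig survives app-to-app transfer
  name: string;
  targets: string[]; // references to the model elements the rig focuses attention on
  viewingArc?: {     // the built-in viewing arc mentioned in the rig feature summary
    center: [number, number, number];
    radiusMeters: number;
  };
}

interface TgnRig {
  core: TgnRigCore;                    // the cross-vendor, standardized part
  extensions: Record<string, unknown>; // namespaced extras, e.g. "vendorX.annotations"
}

// An app that only understands the core can still render the rig faithfully,
// keeping recognized extensions and dropping the rest.
function stripUnknownExtensions(rig: TgnRig, known: Set<string>): TgnRig {
  const kept: Record<string, unknown> = {};
  for (const [ns, value] of Object.entries(rig.extensions)) {
    if (known.has(ns)) kept[ns] = value;
  }
  return { core: rig.core, extensions: kept };
}
```

This mirrors how other interchange formats separate a mandatory core from optional vendor data: differentiation rides in the extensions, while fidelity of the core is what the cross-vendor standard would guarantee.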

I’ve had some contact about TGN with a couple of people at buildingSMART, and a few conversations with software companies that seemed promising. But I haven’t reeled anything in yet. I know from experience these things can take a long time. But with a spark, things can happen. Soon? I’m an optimist.

I keep blogging about it:

– a self-critique of the TGN demo video

– a short summary of TGN rig features (including the built-in viewing arc plus the rest of what comprises a TGN rig)

– the same summary, but including a bunch of personal commentary about the industry and how I got this way (obsessed with attention-focusing rigs)

– for people who want to look further: a look at what can happen AFTER TGN rigs are in use clarifying models

– the general intro here at the top of my infinite-scroll homepage

Tangerine 2021 Model Interpretation Techniques

For more than 20 years in the Architecture, Engineering, and Construction industry, software development has focused on model creation methods. Development of model interpretation techniques has been secondary. There is activity there, but it’s underdeveloped. The most traditional interpretive technique for making sense of models is drawing, and development of that technique has stagnated through an exclusive focus on automation, i.e., the automatic generation of drawings from models.

Perhaps over the next 20 years, development will shift back to much-needed improvement in techniques of model interaction and interpretation. The downloadable document below is a brief outline proposal of potential development paths. It’s written for anyone interested in advancing digital modeling software through further development of model interaction/interpretation techniques.

Öresund Bridge, frozen sea, February 12, 2021

Earlier Innovations (2012)

The CAD/BIM innovation demonstrated in the videos below was commercialized in 2012 and patented in 2015/16. It realized Rob Snyder’s drawing-model fusion ideas during his time at the software company Bentley Systems.

Since 2012, seven software companies have implemented automated drawing-in-model fusion (first generation): Bentley, Graphisoft (in BIMx Docs, mobile), Dalux, Revizto, Morpholio and Shapr3D (working together), and now Tekla.

Chapter 3 of the Tangerine Media Innovation Spec 2018 specifies a framework for a second generation (version 2) of this fusion (that blows the doors off the first-gen work).

Simply stated, the first-gen innovation automatically displayed CAD drawings in-situ at their correct location within digital 3D models (and within other kinds of multi-media spatial visual environments). The fusion was referred to commercially as “hypermodel”, and has been marketed by Bentley in their products including their CAD application, MicroStation. Since the fusion was introduced there in 2012, other software companies have picked up on this and implemented their own versions of drawing-model fusion.
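The core of the in-situ idea above can be expressed compactly: a 2D drawing is anchored to a plane in model space, and each drawing coordinate maps to a world-space point. The sketch below is illustrative only; the names are invented, not taken from MicroStation or any hypermodel API:

```typescript
// Illustrative sketch: anchoring a 2D drawing at its correct location in a
// 3D model by an origin plus two basis vectors spanning the drawing plane.

type Vec3 = [number, number, number];

interface DrawingPlacement {
  origin: Vec3; // world-space location of the drawing's (0,0)
  xAxis: Vec3;  // world direction of the drawing's +x, scaled to drawing units
  yAxis: Vec3;  // world direction of the drawing's +y, scaled to drawing units
}

// Map a 2D drawing coordinate (u, v) to its in-situ 3D position.
function toWorld(p: DrawingPlacement, u: number, v: number): Vec3 {
  return [
    p.origin[0] + u * p.xAxis[0] + v * p.yAxis[0],
    p.origin[1] + u * p.xAxis[1] + v * p.yAxis[1],
    p.origin[2] + u * p.xAxis[2] + v * p.yAxis[2],
  ];
}
```

For a section drawing, the plane typically coincides with the model's section cut, which is what lets the drawing read as annotation fused directly onto the modeled environment.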


The following videos demonstrate the commercialized drawing-model fusion in the MicroStation CAD application. See the demo playlist for a longer set of demo videos, or scroll down for the videos on this page:

September 2011


The drawing-model fusion demonstrated in the videos above is patented software technology. Rob Snyder is the first-named inventor on two of these patents:

US 9177085, issued Nov 3, 2015

Multi-dimensional artifact assemblage for infrastructure and other assets with interface node mediators
US 9384308, issued Jul 5, 2016

Hypermodel-Based Panorama Augmentation (Integrated Assemblage of 2D Drawings and Panoramic Images)
US 9460561, issued Oct 4, 2016

This earlier work, above, was an important first step in realizing the idea of drawing-model fusion. It recognized the unique value of two very different kinds of media, drawing and modeling, and the significance of their fusion.

In 2016, Rob Snyder started a new company, Tangerine, dedicated to two things:

  1. To bring new innovations related to drawing-model fusion to existing software companies. Tangerine has produced a written spec for this, described here: Tangerine Media Innovation Spec 2018. This new work well and truly surpasses, and breaks completely free from, the earlier work. It also makes the patents on the earlier work obsolete and irrelevant.
  2. To research and develop what happens at the intersection of cognitive systems, like IBM Watson, and Tangerine Media.

Tangerine will collaborate with any software company interested in building the future of media: Contact Us