This post is a continuation of my ongoing call to software developers (open source and commercial), and users of software. If you want to skip past the background discussion, scroll down to the heading “200 word blurb” and continue from there.
- PROBLEM
- SOLUTION
PROBLEM
Introduction
In our industry (architecture, engineering, and construction), I’ve heard people debate “models versus drawings”, 3D vs. 2D, for about 19 years. For most of that time I’ve tried, like Don Quixote tilting at windmills, to make the argument stop by getting people to see that the argument is stupid. Yes. Really. It’s no different than arguing which is better, dark or light.
Imagine an industry full of people for decades arguing:
“dark is better than light”
vs.
“no, light is better than dark”.
I don’t regret calling it stupid.
You can’t see light without dark, or dark without light.
The argument that one is better than the other is self-defeating and counterproductive.
What are drawings?
Drawings are expressions of narrowed attentive focus.
What are models?
Models are expressions of wider expansive environments.
So, 20 years of arguing which is better, wider expansive environments, or the articulation of narrowed attentive focus.
Arguing that one is better than the other, environment or focus, is an argument that goes nowhere because it cannot go anywhere. It’s pointless.
There are much better arguments to have. This one, finally, should be abandoned entirely and replaced with a better argument about how digital environments should develop, in recognition of this most basic reality.
It’s like arguing that motion pictures are better than recorded sound, and that no one would want both in fusion (known as sound film 100 years ago). Such an idiotic argument was in fact made, by Jack Warner:
https://en.wikipedia.org/wiki/Sound_film#Commerce
In our case now, in our industry, with our forms of media, the FORM of our necessary expressions of attentive focus must evolve once those expressions reside in-situ within models. AFR/TGN is a proposal for that evolution.
I’ve had some success with this in the past.
In 2007 I said what I thought software should do, and then it happened. Working in architecture firms using BIM-ish software before it was called BIM (Brics’ “TriForma”, since 1996), I proposed the automatic fusion of drawings at their true orientation within models, like the fusion of sound into silent film 100 years ago, and then led the team that first developed and commercialized it at Bentley in 2012, ten years ago.
We closed the half circle from model-generated drawings, to embellished drawings visible in-situ in models. Since then, 2012, at least 9 software companies feature automated drawings-in-model fusion (first generation):
Bentley, Graphisoft (in BIMx Docs, mobile), Dalux, Revizto, working together: Morpholio and Shapr3D, Solidworks (since 2015), Tekla, and Autodesk Docs (2022). More info here:
I intend to do this trick again. This time better. Way better in fact.
Exactly how version 2 improves on the 10-year-old, first-generation fusion concept is described on this page.
First, a very brief comment about how this is useful, and for whom.
Engagement with Models
Typically, software users beyond a certain age have difficulty navigating digital models, while younger users fly around effortlessly. But that’s not the whole story, and it doesn’t point toward an adequate evaluation of what’s in play here. Users at different stages of professional experience in technical fields may be looking for different things in their engagement with models. Those looking to develop a deep understanding of what a digital model represents may simply recognize that flying around isn’t enough, that getting a sufficient grasp on things requires more. Many correctly intuit that whatever they need for getting their claws into models, for a firmer grasp, is simply absent from models, so they lose interest, rightly.
Younger users tend to adapt fluently to navigation controls in modeling environments. But fluency of movement through models is not enough for developing adequate understanding of the models. Their understanding often lingers too long near the superficial, and should develop beyond that point.
TGN improves the strength and effectiveness of engagement with models for both types of users. Every user gains better grasp, and greater depth of understanding of their models.
TGN will empower existing users (younger and older) with more productive engagement with models, while also bringing many new kinds of users into modeled environments by making engagement with models easier and more effective.
With so many people well engaged, model quality will improve in tandem with greater user depth of understanding. Models will be easier to use and more useful for everyone. This is true, or will be true when TGN is developed, for both human engagement and machine engagement with models (ML, AI).
Indeed, this is the premise of TGN.
We need our attention drawn
It’s just a basic need that we have, the need to be able to draw attentive focus to specific things or aspects of things within complex visual spatial environments, whether those environments are the real world, or digital environments.
We need the ability (in life and in software) to be both sender and receiver of expressions of, and calls for, attentive focus. We have to be able to draw the attention of others to what we want them to see. And we have to be the beneficiary, as the recipient, of having our attention drawn to things others want us to see.
There are limitless ways of doing this: limitless ways of expressing and calling for focused attention, and limitless ways of doing so in an articulate manner, such that your call is clear, and what you want others to see, they see, and see clearly.
It’s all about making complexity, and extreme complexity, manageable.
So although there are infinite possibilities for such expression, as any artist will tell you, there is also a need for standardized forms of expression: standard methods for expressing calls for, and articulations of, attentive focus within spatial visual environments.
Standardized expression of calls to focused attention remains underdeveloped
In software for digital visual environments, standardized expression of calls to focused attention remains underdeveloped.
While there are some starts in the right direction (BCF among them, and drawing-model fusion), they remain underfunded and under-invested.
* BCF: buildingSMART (like everyone else) is free to use the open source AFR/TGN core to extend BCF if they wish, or use it in other ways. One can see how AFR/TGN may exist independent of BCF, with BCF items logged against/within AFR/TGN rigs, which themselves exist within modeled environments, in exactly the same way that BCF issues are logged against models (and drawings, and non-graphical data) without AFR/TGN.
The resource input (money, developer time) is insufficient. But more important are the insufficiency of intellectual effort and the inadequacy of scope.
The inputs remain inadequate to the task.
The articulation of clarifying focus is of critical importance. But the effort applied so far hasn’t risen to the occasion, hasn’t met the need. The need and the response are imbalanced. It’s an embarrassment, really, to anyone serious about digital modeling who has also stumbled upon the realization of what’s missing (something fundamental).
TGN is a systematized, standardized form of engagement with digital models of all kinds. It’s for sinking one’s claws into the models to get a grasp of them, figuratively, and literally as one grabs onto something to get ahold of it. And in getting hold of a thing in the right way, you generate the understanding needed for putting it to use productively.
This is significant, fundamental, in life, and will be, finally, significant in the development of software used in every industry (and discipline) that uses complex digital visual models for any purpose including:
- science
- engineering
- architecture
- construction
- medicine
- film
- art
- industrial process
- infrastructure
- ships and aircraft
- machine development
- etc., you name it…
SOLUTION
200 word blurb
TGN is a feature set proposed for development in all software apps/platforms that handle spatial visual modeled environments, in any industry. The purpose is to improve user grasp (i.e., understanding and interpretive power) vis-à-vis the models they’re engaged with.
TGN is a subset of features within a larger AFR (Attention Focusing Rigs) concept. AFR is also proposed for all modeled environments. AFR’s design potential is broader with unlimited possibility. TGN is envisioned as a standardizable set of open source features at the core of AFR that can be shared by all AFR developers. TGN facilitates adequate graphics fidelity when users share AFR rigs from one modeling app/platform to others.
AFR must develop beyond its traditional form (the set of drawings), evolving in these and other ways:
AFR (Attention Focusing Rigs):
- occurs WITHIN models, with I/O for externalization
- includes cinematic camera rigging built into AFR rigs
- is portable from one modeling app/platform to others
- has a standard set of open source core features (TGN)
- AFR/TGN embodies technical drawing’s past, present, and future. It’s all of these at once, each present within AFR rigs distinctly, undiluted, and AMPLIFIED. (See “extra graphics”, item 7 below, after the demo videos.)
- is extensible beyond the open source TGN core
Me, narrating the TGN demo, starting at (12:02) in my talk at BIM Coordinator Summit 2022 Dublin:
Un-narrated demo:
TGN: AFR’s Standardized Open Source Core Features
1. Admin Features of TGN rigs
2. TGN rig volume/scope box
3. Cinematic camera rigging, built-in (resolving to parallel projection at center)
4. Rig UI
5. Filter Pegs
6. Style Pegs
7. Facility for extra graphics
8. Rig sharing – TGN rigs are portable from one modeling app to another
- Administrative features of TGN rigs
- TGN rig GUID
- author/owner ID, affiliation, project
- Delivery/certification date, authorizing signature affirming adequacy at issue date
- read-only lock on archive at milestone issue date
- call to model source
- management of the model gateway/access for model download/stream
- Store model coordinate system information, for correct TGN rig positioning when sharing rigs
- AFR/TGN cloud/server management. AFR/TGN rigs are dynamic entities, frequently edited by users. The best implementations of AFR/TGN will manage these rigs such that local user edits (e.g., rig position, orientation, viewing path, filter and style settings, extra graphics edits and additions, etc.) update the rig definitions in central storage (cloud or similar), from which those edits are re-distributed wherever the rigs are expressed/shared, in the various apps/environments where users have shared them. (A sketch of a minimal rig record follows this list.)
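To make item 1 concrete, here is a minimal sketch, in TypeScript, of what a TGN rig’s administrative record might contain. Every name here is an illustrative assumption, not part of any published TGN spec:

```typescript
// A hypothetical sketch of a TGN rig's administrative record.
// All field names are illustrative assumptions; the TGN spec may differ.
interface TgnRigAdmin {
  rigGuid: string;                          // TGN rig GUID
  author: { id: string; affiliation: string };
  project: string;
  issueDate?: string;                       // delivery/certification date (ISO 8601)
  authorizingSignature?: string;            // affirms adequacy at issue date
  archiveLocked: boolean;                   // read-only lock at milestone issue date
  modelSource: {
    gatewayUrl: string;                     // call to model source (download/stream)
    modelIds: string[];                     // models the rig was authored against
  };
  modelCoordinateSystem: {
    origin: [number, number, number];       // for correct rig positioning when sharing
    trueNorthRotation: number;              // radians; an assumed convention
  };
}
```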
- TGN rig volume/scope box
- the volumetric bounding box that the user positions, sizes, and orients within the model. The volume is the AFR rig’s target for users’ focused attention
- the user designates the primary face of the rig volume, the plane toward which user attention is targeted
- Rig volume clipping plane controls (clip/don’t clip, per face)
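A minimal sketch of the rig volume/scope box described in item 2, again with all names assumed for illustration:

```typescript
// A hypothetical sketch of the TGN rig volume/scope box. Names are illustrative.
type FaceName = "top" | "bottom" | "north" | "south" | "east" | "west";

interface TgnRigVolume {
  center: [number, number, number];            // position within the model
  size: [number, number, number];              // width, depth, height
  rotation: [number, number, number, number];  // orientation as a quaternion
  primaryFace: FaceName;                       // the plane toward which attention is targeted
  clip: Record<FaceName, boolean>;             // clip / don't clip, per face
}
```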
- Cinematic camera rigging, built-in
- Each AFR/TGN rig contains a built-in camera path
- A library of pre-made camera paths will be supplied to the user to choose from, including various path types:
- arc, line, tilted arc, s-curve
- and custom/complex user defined
- The type of path that works well, that makes things legible and clear, varies (we have noticed from experiments in TGN mockups) depending on the orientation of the rig.
- Horizontal orientations (primary rig face is horizontal, as in a plan or plan detail) tend to be perceptually effective when the viewing path is an arc that bends toward the center point of the primary rig face, with the arc lying in a plane that is parallel to the primary face of the rig volume.
- This kind of arc should be rotated (by default, by the software) such that the arc midpoint is over the center of the primary rig face, and the arc endpoints sit on the “correct” side of approach and departure from the midpoint of the camera path. You’ll know the “correct” side because the approach to mid-path, and the departure from it, appear visually natural according to normal human visual experience (e.g., we sense our head is above our feet). You’ll know the rotation of the arc is wrong when the camera makes you feel you’re being flipped upside down in the model as you move along the path. This failure most typically happens with inappropriate use of s-curve paths in horizontal rig orientations. Very likely, s-curve paths should never be used in horizontally oriented AFR/TGN rigs.
- Vertical orientations (primary rig face is vertical, as in a section, section detail or elevation) are effective when the arc is in a plane perpendicular to the primary face of rig volume
- Appropriate default path selections can be developer-tested for best probability of clear legibility and avoidance of user disorientation for typical orthogonal and non-orthogonal rig orientations.
- Where the default AFR/TGN path selection logic fails, the user can override and select other paths from the library for a better result, or custom edit her own path.
- Simple proportionality rules seem to apply generally. For example, if a pre-defined camera path looks good at a small plan-detail size and orientation, then it will look good in a complete floor plan orientation as well. When the user resizes the TGN rig volume scope box to cover an entire floor of a building (or span of a bridge, or whatever), the entire rig, including its built-in camera path and the distance of the camera path PLANE from the primary face of the rig volume, scales/resizes proportionally. And the result is effective.
- Non-orthogonal AFR/TGN rig orientations (for example, those useful at locations of interest within spline lattice structures and the like) are interesting: nothing would prevent computational designers (are they still called that?) using Grasshopper and similar tools from generating these rigs at appropriate locations, probably with 2 or 3 lines of generative code. Likewise they could generate appropriate rig viewing path selections, and correct them globally if wrong on the first try.
- At the midpoint of each TGN rig viewing path, the camera rig transforms from perspective camera to parallel projection camera (if it is not already set to parallel on the rest of the viewing path, or on the approach to the midpoint)
- You can see this transformation in the demo videos above.
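As a sketch of the midpoint behavior just described: assume the viewing path is parameterized by t in [0, 1] with the midpoint at t = 0.5, and that the camera blends toward parallel (orthographic) projection as it nears the midpoint. The function names and the blend scheme are assumptions, not the definitive TGN behavior:

```typescript
// A sketch only: viewing path parameterized by t in [0, 1], midpoint at t = 0.5.
// Near the midpoint the camera blends from perspective toward parallel
// (orthographic) projection. Names and the blend scheme are assumptions.
interface CameraState {
  position: [number, number, number];
  target: [number, number, number];        // typically the primary-face center
  orthoBlend: number;                      // 0 = perspective, 1 = parallel projection
}

function sampleRigCamera(
  pathPoint: (t: number) => [number, number, number], // the rig's built-in path
  faceCenter: [number, number, number],               // center of the primary rig face
  t: number,
  blendWindow = 0.1                                   // how near the midpoint blending begins
): CameraState {
  // Normalized distance from the path midpoint; 0 at midpoint, >= 1 outside the window.
  const d = Math.abs(t - 0.5) / blendWindow;
  const orthoBlend = Math.max(0, 1 - d);              // exactly 1 (parallel) at the midpoint
  return { position: pathPoint(t), target: faceCenter, orthoBlend };
}
```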
- Rig UI
- See the AFR/TGN UI elements in the demo videos above in the AFR section.
- A primary UI element is the slider interface that gives the user control over view motion back and forth along the viewing path that’s built into the rig.
- Developers certainly may implement that control in any (sensible) way they wish. The idea of AFR/TGN is not to micro-manage anyone’s preference for things like this. However, the developer should keep in mind that TGN is meant to be a standardizable (and TGN community-managed) form of engagement with models of any kind, and that there is great user benefit in it evolving such that many users across many applications find the UI familiar. Variations in UI should be sensible, and not so divergent that users find themselves lost. And note the obvious: the TGN view slider UI in the demo above is not unlike the extremely familiar play line at the bottom of YouTube videos.
- Other UI elements are described in items 5 through 8 below.
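A sketch of the slider idea described above: the slider simply drives the rig’s path parameter t, much like a video scrubber. `applyCamera` stands in for whatever camera call your renderer actually exposes; it is an assumption, not a real API:

```typescript
// A sketch of the view slider: it drives the rig's path parameter t, much like
// a video scrubber. `applyCamera` stands in for whatever your renderer exposes;
// it is an assumption, not a real API.
function bindRigSlider(
  slider: { oninput: ((value: number) => void) | null }, // your UI toolkit's slider
  applyCamera: (t: number) => void                        // re-render at parameter t
): void {
  slider.oninput = (value) => {
    const t = Math.min(1, Math.max(0, value));            // clamp to the viewing path
    applyCamera(t);
  };
}
```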
- Filter pegs
- See the AFR/TGN filter pegs in the demo videos above in the AFR section.
- What’s visible, and hidden, when viewing a model through an AFR/TGN rig depends on:
- the filtering/search capabilities of your app
- the filter/search settings applied by the user when authoring the rig, and
- the product/result of the filter (a list of model element GUIDs)
- Filter/search capabilities are extremely diverse among the many hundreds of model-handling apps in our industry. But the filter/search RESULT is a list of GUIDs. So no matter how sophisticated or simple the search/filter capabilities in your app/platform are, when you develop TGN features in your app:
- store the product/result of your search/filter (the list of GUIDs) in the TGN rig. The GUID list stored in the rig controls model element show/hide when anyone views the rig.
- you certainly have the option to store your filtering/search criteria in the rig too, and this would be good practice (silly not to, in fact), so users in YOUR app can edit the search/filter criteria later as need arises. Such edits of course would update the GUID list stored in the rig too.
- Notice in the demo videos (above in the AFR section) that a set of two filter pegs can span the entire rig view path (and UI view control), or a set of pegs can cover a shorter span. Multiple peg sets can be dropped onto the rig viewing path. This enables changes to the filtering to be progressively displayed as the user moves forward and back along the viewing path of the rig.
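A sketch of how filter pegs might be stored and evaluated, per the description above: the portable product is a plain GUID list per peg set, with each set spanning a range of the viewing path. Field names are illustrative:

```typescript
// A hypothetical sketch of filter pegs. Each peg set spans a range of the
// viewing path; the portable product of the filter is a plain GUID list,
// however sophisticated or simple the app's own search/filter engine is.
interface FilterPegSet {
  span: [number, number];     // [tStart, tEnd] along the viewing path, 0..1
  criteria?: unknown;         // optional: your app's native filter query, for later editing
  visibleGuids: string[];     // the stored result: model element GUIDs to show
}

// Elements shown at parameter t: the union of all peg sets whose span covers t.
function visibleAt(pegSets: FilterPegSet[], t: number): Set<string> {
  const guids = new Set<string>();
  for (const peg of pegSets) {
    if (t >= peg.span[0] && t <= peg.span[1]) {
      peg.visibleGuids.forEach((g) => guids.add(g));
    }
  }
  return guids;
}
```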
- Style pegs (graphics render styles)
- What’s true for filter pegs (above) is true for style pegs too, so I won’t repeat it here. But there is more detail in the TGN developer specification (scroll down to the specification section below for download links).
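For illustration only, a style peg set might mirror a filter peg set, carrying a render style in place of the GUID list (a hypothetical shape, not from the spec):

```typescript
// Hypothetical: a style peg set mirrors a filter peg set, carrying a render
// style instead of a GUID list. The style identifier is app-defined.
interface StylePegSet {
  span: [number, number];     // [tStart, tEnd] along the viewing path
  styleId: string;            // e.g., "hiddenLine", "shaded", "wireframe"
}
```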
- Facility for extra graphics
- Stating the obvious, when you’re calling for and articulating focused attention at some bounded location within a model, you might want to add/author, or read, various kinds of extra graphics.
- Extra graphics might include labels, text, dimensions, extra lines, curves, shapes, special highlighting of model elements, and so on.
- With regard to the types of graphics you might add to a TGN rig, to clarify and emphasize what you’re communicating through your rig, we should think with an open mind. Make no small plans: there should be no near limit to what app developers could try to do with this.
- Just for example, check out this video https://youtu.be/DhLsC2FpDZk. Not only does Pete Townsend design the graphics in this video with:
- TGN-like (in my opinion) camera rigging
- and even camera rigs in motion (an item in the TGN developer spec), but
- He adds extra graphics of a most compelling kind
- Adding graphics like these using typical CAD or BIM tools would be cumbersome, and the results unsatisfactory. Video editing apps like DaVinci Resolve and Final Cut do this far more effectively. There is much to learn about extra graphics from those apps.
- Of course there are many other sources of great ideas for extra graphics, like from many iPad illustration apps.
- There is vast potential for highly effective extra graphics within TGN rigs. Ambition should not be set low in this area. There are constraints to be addressed though…
- TGN rig extra graphics must “sink down” to SVG format for shareable rig portability from one modeling app to another
- While there is complete freedom to develop whatever tools you wish for extra graphics in TGN rigs (and/or to use graphics tools already present in your app), you should also engage with the open source TGN core feature set. There is a proposed TGN feature that transforms your extra graphics to SVG format, so that your graphics are visible in at least a minimally adequate way in other modeling apps that express TGN rigs but don’t handle extra graphics the way your app does. Your users may be sharing their TGN rigs with users of such apps. SVG provides the foundation of shareable, minimally legible extra graphics, no matter their originating format. (A sketch of this sink-down step appears at the end of this item.)
- Your implementation of TGN in your app may include an SVG previewer to give your users the best of both:
- Total freedom to use best of breed extra graphics tools in your app, for maximal communicative clarity, and
- Confidence that at least the minimum necessary extra graphics will survive sharing to other apps with adequate graphical fidelity
- the caveat being that any target app, of course, must also implement the AFR open source TGN core features, including the SVG support.
- VERY IMPORTANT: providing extra graphics tools in your app, to write extra graphics directly into TGN rigs, IS OPTIONAL. You are free to provide no such tools at all, because there are other methods for providing extra graphics within TGN rigs.
- You may choose to develop an Input/Output (I/O) for extra graphics linked from other graphics apps.
- And particularly keep in mind, from item 3 above (Cinematic camera rigging), that:
- At the midpoint of each TGN rig viewing path, the camera rig transforms from perspective camera to parallel projection camera (if it is not already set to parallel on the rest of the viewing path, or on the approach to the midpoint)
- You can see this transformation in the demo videos above.
- Your I/O for extra graphics should be staged to operate at that rig path midpoint.
- At the path midpoint, with camera in parallel projection, the rig presents what otherwise looks like a conventional drawing. This is a great place to invoke extra graphics input/output links.
- In the AEC industry there are some obvious choices for graphics apps you’re likely to want to support linking to, in both input (to rig) and output (from rig) directions.
- Input (to rig)
- Probably the first rig input source you’ll want to link to is any of the DWG CAD format graphics editors. To do this effectively and easily, it’s probably best to support an I/O loop (input and output).
- And regarding input generally, why not develop TGN creation directly from any of the common BIM apps, Revit for example? Bring not only the extra graphics of each drawing view, but the models too. This idea is in the TGN developer spec, in a section describing “Upgrade BIM drawing views to TGN rigs”.
- Output (from rig)
- At the rig viewing path midpoint, develop an output function to export the model graphics to a format that can be used in a CAD app.
- There are various options. You can convert model graphics to SVG, DWG or other CAD formats. To do this in an adequate way you may wish to implement someone’s SDK (if you don’t already) that generates cut geometry for planar clipping boundaries and also correctly handles visible off-plane geometry.
- If the CAD I/O workflow is the main extra graphics workflow you need to support in your app (as opposed to authoring extra graphics directly into TGN rigs with your tools), then build the data pipeline described above to support it.
- On the other hand, if your app, and your users, are more interested in direct authoring of extra graphics in TGN rigs, using your graphics tools, and new graphics tools you develop to push the boundaries of what’s possible, then consider a few more ideas:
- Extra graphics can be authored progressively along the rig viewing path (timeline). They can be authored at any positions along the viewing path. Then when the rig is viewed by others, the extra graphics are displayed progressively, as they were authored
- In the case of text, the text likely should always face the camera, rotating to face it as the user moves back and forth along the rig viewing path.
- In the case of text with arrow leaders, the arrow head can be anchored to model points while the arrow leader lines rotate toward the camera like the text does
- Non-text extra graphics (lines, curves, shapes, dimensions…) will probably instead be fixed in model space. They do not spin to face the moving camera.
- You may also want a hybrid implementation that supports both in-rig extra graphics authoring, and an I/O facility to and from various other graphics authoring apps, or graphics viewing apps.
- On the I/O output side, if you’re also writing your own in-app, in-rig, extra graphics, then let the user decide at each I/O iteration (update/refresh) whether TGN-native extra graphics should be sent with the output to the target (DWG or whatever)
- And in that case you would confine the TGN native extra graphics output to the graphics as they appear at the midpoint of the rig viewing path ONLY.
- On the input side, you’ll probably see that it’s best (for users) that graphics linking IN to TGN rigs from external graphics apps (like CAD apps) also sink down to SVG within the TGN rig. The rig should automate that transformation. This way, these rigs can be shared to other modeling apps that do not support DWG links, and still the graphics will be visible, because they’re stored in the TGN rig as SVG.
- Of course the SVG graphics would be refreshed by the TGN code automatically, whenever the external links are refreshed.
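Pulling the SVG thread of this item together, here is a minimal sketch of the “sink down to SVG” step: whatever native format extra graphics are authored or linked in, the rig also stores an SVG rendering so any TGN-capable app can display something legible. `toSvgElement` is a hypothetical per-type converter you would supply; it is not a real API:

```typescript
// A minimal sketch of the SVG sink-down. `toSvgElement` is a hypothetical
// converter you would supply per native graphic type; it is not a real API.
interface ExtraGraphic {
  native: unknown;            // your app's own representation (or a linked DWG, etc.)
  tAuthored: number;          // position on the viewing path, for progressive display
  svg?: string;               // the portable fallback, refreshed whenever the source changes
}

function sinkToSvg(
  graphics: ExtraGraphic[],
  toSvgElement: (native: unknown) => string
): string {
  // Wrap each converted graphic in a single shareable SVG document.
  const body = graphics.map((g) => toSvgElement(g.native)).join("\n");
  return `<svg xmlns="http://www.w3.org/2000/svg">\n${body}\n</svg>`;
}
```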
- TGN rig sharing. TGN rigs are meant to be shared across many different modeling environments and apps: at least as many apps as are typically used on AEC projects today, among diverse teams of many disciplines, each of which uses a wide range of apps for:
- model authoring,
- model presentation,
- drawings creation,
- reality capture,
- ML analysis,
- specialized engineering,
- construction planning,
- construction management,
- issue tracking,
- facility handover,
- client engagement,
- whole team collaboration,
- and so on.
- TGN is a standardized form of engagement applicable in ALL OF THOSE (above). TGN gets users’ claws into the models so they can develop better insight and understanding, and more clearly see, think, and communicate. THIS SHOULD NOT BE SILOED. When users develop the necessary rigging for insight (AFR/TGN rigs) within their models, those rigs need to be shareable to other users using other apps, or to the same user using other apps.
- Making this work of course requires some adequate system(s) of data management.
- Whatever you (the developer) are using now to manage user access to, and distribution of, files, databases, and granular objects, whatever change and access management you’re using, you should bring AFR/TGN rigs into the same system, handling TGN change and distribution management the same way you handle it for any other data type.
- When a TGN rig is shared from one modeling app to another, and edits are made by anyone authorized to do so (anyone assigned author/edit or view/markup privileges), those edits need to be distributed to every shared instance of the rig. (A sketch of this follows at the end of this item.)
- You can hear me brainstorming about this in some of the videos of my TGN playlist here: https://youtube.com/playlist?list=PLAiyamA5WoZZOMO8TGQl3UAjpTHOyKruF
- TGN portability across apps and platforms: https://youtu.be/Je859_cNvhQ
- TGN managed environment https://youtu.be/Ka0o1EnGtK4
- And for example, take a look here! This is a very interesting example of the right kind of data management:
- BCF management through KANBAN boards! See the post by Juan Hoyos here and the comments under it: https://www.linkedin.com/posts/juanhoyos4_bcf-management-through-kanban-boards-activity-6983898326727286784-_wSD
- This kind of system, and many other kinds of systems (including conventional drawing set layout, by the way, and other methods both in-app and more open/collaborative, like KANBAN boards), may likewise manage AFR/TGN rigs.
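As a sketch of the rig edit distribution described in this item, assuming a central store (cloud or similar) that re-publishes rig edits to every app where a rig is shared; all names are illustrative assumptions:

```typescript
// A sketch of rig edit distribution, assuming a central store (cloud or
// similar) that re-publishes edits to every app where a rig is shared.
// All names are illustrative assumptions.
interface RigEdit {
  rigGuid: string;
  editedBy: string;           // must hold author/edit or view/markup privileges
  patch: unknown;             // e.g., new path, peg changes, extra graphics edits
  timestamp: number;          // for ordering and conflict handling
}

interface RigStore {
  publish(edit: RigEdit): Promise<void>;                            // push a local edit
  subscribe(rigGuid: string, onEdit: (e: RigEdit) => void): void;   // receive others' edits
}
```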
TGN Developer Specification
I am rewriting the AFR/TGN developer spec right now as a set of documents on GitHub. That’s not ready yet. In the meantime, there is the older version of the AFR/TGN spec in PDF and other formats:
TGN: a digital model INTERACTIONS format standard (Apple Book)
TGN: a digital model INTERACTIONS format standard (ePub)
TGN: a digital model INTERACTIONS format standard (iCloud)
TGN: a digital model INTERACTIONS format standard (PDF)
* Please note that in this older version of the spec I had not yet distinguished between the broader, unlimited AFR concept and the TGN open source standardized core feature set. The spec linked above is an organized set of feature descriptions, but it’s more like a superset that includes many of the features that today I call AFR features (not all possible AFR features, only the ones I’ve thought of so far), plus the features I’ve now designated as TGN open source standardized core features.
TGN Developer Specification screenshot, Table of Contents:

TGN Developer Specification screenshot, sample page:

TGN demo videos
YouTube playlist


The AFR unlimited concept and the TGN open source standardized core
TGN Sunburst diagram:
Attention-Focusing Rig (AFR) possibilities are diverse. Each industrial, creative, technical domain and discipline has its own unique characteristics and needs, and each has tremendous diversity in existing applications and platforms, with a wide range of different graphical capabilities.
That being the case, there is no single correct and definitive AFR specification possible: not one that encompasses what’s needed in each domain, let alone what’s possible springing from each.
And yet, it is conceivable that each developer implementing the AFR concept on each software platform may at once be:
- completely free to innovate all possible AFR features that each developer may discover in evaluating their user requirements. In the diagram below I hint at these; they’re colored blue, and I sample just some of those possibilities and name/describe them in the diagram.
And yet,
- developers may conform to, and take advantage of, a set of community-managed, open source, standard features at the core of AFR. These standardized core features I refer to as TGN. TGN is colored orange in the diagram:
By the way, AtomIFC https://github.com/QonicOpen/AtomIfc looks to me like an important facilitator of a reliable models gateway, which is exactly what developers would be looking for when implementing the TGN rig ID/admin features. When you share a TGN rig with someone using another modeling app, the correct models (within the context of which the TGN rig was authored and makes sense; literally, makes sense of things) must rain down from the cloud, so to speak. They must be supplied to the target app in a format the target app can use. See the segment at 5 o’clock in the diagram below, and item 1 above in the section TGN Core Features:

What you can do
You should develop AFR (attention focusing rig) features, and AFR’s open source standardizable TGN core feature set, IN YOUR APP.
- If you’d like my assistance designing AFR/TGN features in your app/platform contact me here: https://tangerinefocus.com/contact-us/
- If you’re interested in setting up a community-managed open source TGN core developer community, contact me here: https://tangerinefocus.com/contact-us/
