An end of year post.
There are positive signs of a purposeful coming new year. I wish this for everyone. All of us.
I meet people who read this blog sometimes and I’m always amazed. This year, more people read the blog than any previous year. I’m grateful to everyone who reads it. Even better is the strength of the feedback that sometimes comes. I copy below one such email I received.
Wouter Bulens, a rail engineer from Belgium, replied when I asked if I could copy his text here:
Go for it Rob, hope I did not make any spelling errors 😉.
I would also suggest doing a decomposition of a drawing. Show that we are already making TGN data today; we are just not combining and sharing it. Every modeling application uses the idea of a viewport or window on the model, but this connection stays in the drafting or design software. We communicate only the result, not the process, although it is the process that will ultimately convince the client (storytelling).
From Wouter Bulens:
It took some time due to other priorities (not because it was a bad read 😉), but I am happy to say that I am up to date now.
That being said, I would love to share my thoughts. As you know from our conversation, I cannot agree more with your fundamental point. It is simply a truth.
How to promote and give TGN form, that is the question. It is on this point that I think it would be more effective if TGN were first just a new data definition, the new data medium for drawings. Close to what BCF gives for issue communication, TGN should be an intrinsic part of design communication (fused with the model). Within this definition I would absolutely integrate all of your vision on cinematic camera rigging; this can only make the data definition stronger. But should the start not be a data source that is a true evolution of the drawing?
Why not an API as you describe? I think you are 100% correct about all the things that are coming, but I also think that each vendor will want to bring this product themselves. They will want to make their TGN engine the best in the market, and why not let them. The central data standard TGN is there to ensure that the entire market participates, big and small players. The standard is there to show that drawings can take another form that can be shown in the model or apart, and can hold new interaction possibilities. Or maybe old ones; people look at a paper plan from different angles (which is very hard to do with a PDF).
So what if the first step is a defined data structure:
- The volume in the model
- Integrated annotations and dimensions
- Baked in 2D vector geometry
- Viewing arc calculated based on the volume and the designer's choice
Style I would leave separate, or put in another data structure. Why? So that clients can visualize a TGN in the way that they want, based on the fundamental classification system that is part of the TGN data structure.
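To make Wouter's four-point structure concrete, here is one possible sketch in TypeScript. Every type and field name below is my own invention for illustration; none of them come from a published TGN spec.

```typescript
// Hypothetical sketch of a TGN record following the four-point list above.
// All names are illustrative assumptions, not part of any published spec.

interface Vec3 { x: number; y: number; z: number; }

interface TgnView {
  // 1. The volume in the model: here, a simple clip box in model coordinates.
  volume: { min: Vec3; max: Vec3 };

  // 2. Integrated annotations and dimensions, anchored to model points.
  annotations: { anchor: Vec3; text: string }[];

  // 3. Baked-in 2D vector geometry (e.g. an SVG payload of the section cut).
  vectorGeometry2d: string;

  // 4. Viewing arc: the angular range (degrees) over which the view is meant
  //    to be legible, derived from the volume and the designer's choice.
  viewingArcDeg: number;

  // Classification key, so clients can apply their own style separately.
  classification: string;
}

// A minimal example instance: one section view through a small volume.
const sectionView: TgnView = {
  volume: { min: { x: 0, y: 0, z: 0 }, max: { x: 10, y: 10, z: 3 } },
  annotations: [{ anchor: { x: 5, y: 5, z: 1.5 }, text: "Platform edge" }],
  vectorGeometry2d: "<svg><!-- baked section linework --></svg>",
  viewingArcDeg: 60,
  classification: "rail/section/platform",
};
```

Note that style is deliberately absent from the record: a client would look up the `classification` key in its own style table, which is exactly the separation Wouter argues for.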
I recently had 3 experiences that I would like to share:
- A rail summit, where people playing with Autodesk Forge said: we like the interplay of the 2D section view and the 3D model. Selection synchronized between both views, but not yet integrated 😉.
- Validation of a Navisworks model, where viewpoints (volume, camera configuration, slice definition, style, selection) are created for specific model validations. Not making drawings, but making detailed views on the model.
- A Dynamo script to generate text files that define Navisworks viewpoints based on Civil 3D alignment section positions. Every section can be visualized in Navisworks.
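The third experiment's core idea — turning alignment section positions into importable viewpoint definitions — can be sketched in a few lines. This is not the actual Dynamo script or the real Navisworks import format; the station data, camera offsets, and text layout are all invented to show the shape of the transformation.

```typescript
// Hypothetical sketch: map alignment section positions to viewpoint
// definitions that a viewer could import. Format and values are invented.

interface Viewpoint {
  name: string;
  eye: [number, number, number];
  target: [number, number, number];
}

// Sample section positions along an alignment: [station, x, y, z].
const stations: [number, number, number, number][] = [
  [0, 0, 0, 0],
  [100, 100, 5, 0],
  [200, 200, 12, 0],
];

// Place a camera 30 m to the side of each section, slightly elevated,
// looking back at the alignment point.
const viewpoints: Viewpoint[] = stations.map(([s, x, y, z]) => ({
  name: `Section ${s}m`,
  eye: [x, y + 30, z + 5],
  target: [x, y, z],
}));

// Emit one line per viewpoint, as a simple text file would hold them.
const lines = viewpoints.map(
  (v) => `${v.name}; eye=${v.eye.join(",")}; target=${v.target.join(",")}`
);
```

The point is how little data is needed: once a central definition for "where to look" exists, any package that knows the alignment can emit it, and any viewer can consume it.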
These three events gave me the sense that if there were already a central definition of this new “device for looking at models purposefully”, it would be the connection between software packages for communicating the designer’s focus points. The focus points that form the basis of the model, and it is exactly these points that governments today are not able to evaluate without drawings. But the art of drafting is, in my opinion, declining, too focused on the final style and not enough on formalizing the focus points. Style should be adaptable to the viewer these days, but the choice of where to look is, in my opinion, absolutely a communication skill that should be part of any designer’s training and fundamental abilities.
What if AutoCAD, MicroStation, Civil 3D, Open Road, Navisworks, iTwin, Forge, … did not just create or visualize the model, but were able to share among themselves where to look, where the defining details for this model are? What are the notes or fundamental parameters of this detail?
Rob, thanks again for taking the time to write the book. I hope my thoughts do not come across as harsh or corrective. I think that, as you say many times, the technology is there. So why are we not yet doing it? I think the solution is not in the technology (this will follow or is already there); it is in the simple definition.
Redefine what a drawing is: not just lines on paper, not just lines on a digital canvas, but visual information that has a direct relationship to a larger system of which it is an intrinsic part. Show that we already have all the data; we just need to combine and share it.
In closing, I also add some of the quick notes I made while reading, until we next talk or your next blog post:
- TGN = design communication
- drawing/detail views/type plans are the building blocks of the system that is visualised through a model
- do TGNs already cover: long sections, schematic drawings, …?
- viewing arc = the act of looking at the paper from a different angle (not unlike a painting 😉)
- TGNs show you where to look, what the key points/foundations of the design are
- evolution of drawing views to design experience
- integrate annotations, dimensions, model data views
- baked in section geometry could be a true new digital medium for the traditional drawing (when not displayed in the model)
- modern 2D/3D vector visualization
- great line: drawings are “devices for looking at models purposefully”
- design is by definition a growing insight and understanding of a future result that at first was completely unknown
Having done some further googling: 3D registration between IFC model and 2D drawing PDF – Developers – buildingSMART Forums. It might be best to build the bridge between IFC and SVG, so as not to reinvent the wheel but only connect and add to already existing concepts.
I’m in complete agreement with Wouter’s comments. So please allow this old software guy (I’m 55 in a few weeks) some further opinion:
Yes, I knew there was/is an IFC Japan team working on the first-gen drawing fusion stuff. I’ll put a comment there suggesting the TGN definition as a next-gen concept that includes and extends the first gen. TGN can redefine the drawing creation process at time of creation in all modeling apps, and would also support (T2, from the TGN spec) entry points for legacy drawings, inclusion of which may use the methods they’re working on now (IFC Japan), but with an enhanced viewing experience through the built-in camera rig and other controls.
Via TGN development, the TGN specification offers ALL software developers new opportunities to:
- evolve the way users create drawings within models of any kind:
  - within legacy modeling apps
  - within metaverses and digital twins
  - within new apps developed for this purpose…
- evolve the way users view drawings within models, with viewing enhanced by cinematic camera rigs and other controls
- evolve and increase the value of models. TGN next generation user interactions within modeled worlds give users the power to create, clarify, and share their own expressions of articulated focused attention within models. This increases the value of models by increasing their embedded interpretive quality, which in turn increases model utilization and user uptake: embedded support for effectively developing understanding of complex models among users of all kinds.
- evolve the way users SHARE expressions of articulated focused attention within models, across modeling apps, formats, platforms, and metaverses, with TGN rigs created in any app expressible, when shared, with reliable fidelity in other apps
- evolve the way legacy drawing formats are incorporated and upgraded to the modernized in-model TGN experience
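The SHARE point above amounts to a round-trip requirement: a rig serialized in one app must reconstruct with full fidelity in another. A minimal sketch of that round trip, with an invented rig shape (these field names are my assumptions, not the TGN spec's):

```typescript
// Sketch of cross-app sharing: a TGN rig serialized in "app A" and
// reconstructed in "app B" via a neutral payload. Names are illustrative.

interface TgnRig {
  id: string;
  // A simple camera path: parameter t in [0, 1] mapped to a position.
  cameraPath: { t: number; position: [number, number, number] }[];
  viewingArcDeg: number;
}

const rigInAppA: TgnRig = {
  id: "bridge-pier-detail",
  cameraPath: [
    { t: 0, position: [0, -20, 5] },
    { t: 1, position: [10, -20, 5] },
  ],
  viewingArcDeg: 45,
};

// The neutral exchange payload: plain JSON, independent of any one app.
const payload = JSON.stringify(rigInAppA);

// "App B" reconstructs the rig from the shared payload, losing nothing.
const rigInAppB: TgnRig = JSON.parse(payload);
```

The fidelity guarantee lives in the shared definition, not in any vendor's engine — which is exactly why a central standard lets each vendor build their own "best TGN engine" without fragmenting the data.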
I started working on this stuff a long time ago. I had the idea, as a BIM software user in architecture firms since the 90s, that drawings should appear at their true orientation within models automatically. By 2007 I formulated that in a little proposal document and shared it with my favorite software company. A few years later I was working for that company, as part of the development team that designed, developed, and commercialized automated drawing-model fusion. This was released as features within Bentley’s MicroStation in May 2012. I keep some examples on my page here:
Since then, automated in-model drawing fusion has developed independently at six other software companies that I’ve seen.
That wasn’t enough for me, because to me it is obvious that the first-gen drawing fusion work is step 1 of a much more significant evolution. After a few years (2018, ’19, and ’20) working in other areas, this year I’ve been able to come back to this evolution in user expressions of articulated focused attention within models. I completed the TGN spec (in its current form; I invite everyone to mark it up and improve it), which I give away for free to anyone who wants it, along with some supplemental discussion videos and a partial demo. Special thanks to Sam Rice for running free from my micromanagement (I won’t do that) and creatively expressing aspects of the TGN spec in a mock-up simulation he built in Blender. Brilliantly done:
TGN SPECIFICATION DOWNLOAD
TGN DISCUSSION AND DEMONSTRATION VIDEO PLAYLIST:
01 TGN: rigging for insight https://youtu.be/CGXrk9nGj0Y (2:16)
02 TGN: what is TGN exactly? https://youtu.be/byIW0T8MCsk (5:35)
03 TGN: demonstration https://youtu.be/wTh2AozTHDc (3:40)
04 TGN: portability https://youtu.be/Je859_cNvhQ (5:17)
05 TGN: industry value https://youtu.be/Ka0o1EnGtK4 (9:27)
(the dev platform I mention in the videos is iTwins.js, but TGN can be developed on every platform where TGN is useful and desired)
I’ve also continued to write about:
why this matters
Models are like worlds, as was clear long ago. But now, with the advent of digital twins and metaverses, it’s more apparent than ever. And more commonly understood.
But what is a world, without concentrated attention (focus)?
That question is poorly addressed in today’s modeling applications. But good answers ARE known. The history of technical drawing reminds us of the essential nature of well-articulated, concentrated focus within the context of modeled environments, whether those environments are modeled physically, digitally, and/or mentally.
This legacy of drawing has a future, an evolution in form, now to be expressed in-situ within digital modeled environments, and making full use there of today’s graphics capabilities.
Tangerine can help software companies envision, design, and implement TGN: next generation user interactions within modeled worlds that give users the power to create, clarify, and share their own expressions of articulated focused attention within models.
TGN is the triple fusion of:
— models,
— drawing, and
— techniques of cinematic camera rigging that have evolved over the hundred or so years of the history of film.
Wishing everyone clarifying focus in 2022
Kind Regards from Sweden