Once you notice this, it’s hard not to think about it. You see it everywhere. Fundamental things about how vision works. I don’t mean retinal absorption of light, for example. Although yeah, sure, that’s there. That’s part of it. Turn on the TV and no doubt you’ll be presented with some science show getting into the intricacies of the retinal structure of the eyeball, transmission through the nerves into the brain, and so on.
But this explains nothing. It’s not that it’s wrong. It’s just not enough. And if you’ve been around long enough you’ve seen the same shows presenting the same information over and over again for decades. Which would be fine if they didn’t stop too soon. There are more questions to ask and more things to notice about vision. If you stop too soon, and just keep repeating technicalities, it’s a bore.
There’s so much more.
I’ve written on this blog before about some of the fundamentals of vision. Here are a few of those posts. In the first one, pay attention to the five functions of vision listed and described as they pertain to technical drawing vis-à-vis mental models and digital models, and notice this part of the post:
think… about what a technical drawing looks like when we see it in our mind’s eye, in situ within the mental model. We wobble back and forth in an arc around it, in context within the fuzzy mental model when visualizing it there. Maybe you notice your eyes sort of look downward (it’s true for me, anyway) when you visualize this mentally. Try it and notice what you’re doing the next time you’re in that flow state, fully engaged with this kind of visualization.
Here’s the post:
Related aspects of vision continue in other posts on this blog, including this one. And this post, too, is about observations of vision in general, of course. It’s not just about devices for visual engagement with digital models, or about the nature of technical illustration and its action as a device for visual engagement with mental and digital models. Nor is it just about the evolution of that device, which I propose here as V3.0 TGN Development.
No. It’s about the way vision works in general. The next two posts touch on aspects of vision that we all see in ordinary daily life. No one really knows, I would say, how vision works. But there are things worth noticing about it. And we can talk about those. And we can design equipment for visual engagement with those observations in mind.
There is a domain that has developed a century’s worth of careful attention to what can be called:
eye path intentionality
The film industry.
If you want an introduction to the innovations in camera motion developed in the film industry, watch Mark Cousins’ 11-hour documentary, The Story of Film: An Odyssey. I recommend it maximally.
https://www.imdb.com/title/tt2044056/
This was on TV back in 2012 or so here in Sweden. And for years after that it was unavailable online. You had to buy a CD! But you can get it now at some of the usual places including Apple TV.
It’s fascinating to watch, spellbinding for the entire 11 hours. It covers many aspects of film from all over the world since the beginning, and it has no shortage of observations about the care taken, and the rich knowledge base built up, around eye path intentionality, and about understanding its effects.
There are people with this expertise. That is certain. How many of them want to bring that knowledge into fields like architecture, engineering, and construction (AEC), I don’t know; it remains to be seen. I’m trying.
I mention this in my 10-minute video from last year, here. The humor is set up from the beginning. I hope you think it’s funny.
In the AEC industry, this kind of equipment is as vital for models generated by computational design as it is anywhere. We want good equipment for visual inspection built in, for the reasons I wrote about: to stimulate the engine of thought, to deepen understanding, to get our minds better engaged visually, to improve visual QA/QC, to facilitate the necessary affirmations, and to well and truly have eyes on the model, to make sure it’s ready, and right, to be pushed down any fabrication pipeline or any other kind of assembly line.
Whether you’re stick building BIMs or using computational design or GenAI to author models, or whether you’re building modeled environments through video and Gaussian Splatting, or whether you’re building hybrids of both kinds of models, and whether your goal is to author models, or to QA/QC them, or to send them down a fabrication pipeline (model to fab), or whether you need to interpret models to construct them by any other means, and whether you’re doing IDS classification and property set checks or not, and whether you’re doing clash detection or not, and whether you share model screenshots with each other or not…
Add TGN — including ‘eye path intentionality’, features 3 and 4 of the 8 features of TGN proposed here as V3.0 — to any model-handling app that you develop, so users can ENGAGE in articulate visual close study (VCS) within spatial environments as they author, interpret, and QA/QC them.
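To make the idea concrete in code, here is a minimal sketch of one kind of intentional eye path: the “wobble back and forth in an arc” move described earlier, expressed as a scripted sequence of camera positions orbiting a target point. This is purely illustrative; the function name and parameters are my own inventions for this post, not part of the TGN V3.0 feature set or any particular app’s API.

```python
import math

def arc_wobble_path(target, radius, center_angle, arc_span, steps, height=0.0):
    """Generate camera positions that wobble back and forth along a
    horizontal arc around a target point.

    target       -- (x, y, z) point the camera looks at
    radius       -- distance from the target
    center_angle -- middle of the arc, in radians
    arc_span     -- total angular width of the arc, in radians
    steps        -- number of camera positions to generate
    height       -- vertical offset of the camera above the target
    """
    tx, ty, tz = target
    path = []
    for i in range(steps):
        # A cosine drives the wobble: the angle sweeps from one edge
        # of the arc to the other and back again over the sequence.
        phase = math.cos(2 * math.pi * i / steps)
        angle = center_angle + (arc_span / 2) * phase
        x = tx + radius * math.cos(angle)
        y = ty + radius * math.sin(angle)
        path.append((x, y, tz + height))
    return path

# Example: 8 camera positions wobbling through a 60-degree arc,
# orbiting 10 units out from the origin, 2 units above it.
positions = arc_wobble_path(target=(0, 0, 0), radius=10.0,
                            center_angle=0.0,
                            arc_span=math.radians(60),
                            steps=8, height=2.0)
```

A model-handling app would feed positions like these to its camera each frame, with the camera always aimed at the target, giving the viewer that deliberate back-and-forth inspection arc rather than a static screenshot.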
Join our open source project if you want to use TGN in your favorite modeling apps, or help develop it.
Our GitHub project will be announced this month, August 2024.
