A table of models and attention in two industries:
| Industry | Model | Attention technique |
| --- | --- | --- |
| AI | LLMs (large language models) | self-attention |
| AEC | BIM/Reality/TWIN/etc. | drawing and descendants |
Conversations from LinkedIn about both…
AI first, “self-attention”:
Here are two of Miguel Fierro’s posts (the conversations are in the comments under each):
- Overview of ChatGPT: https://www.linkedin.com/posts/miguelgfierro_ai-machinelearning-gpt-activity-7005786941853986817-vKwD/
- Explanation of RLHF: https://www.linkedin.com/posts/miguelgfierro_ai-machinelearning-datascience-activity-7046352678771142656-lVZ4/
Now let’s look at attention in another industry: Architecture, Engineering, and Construction (AEC).
AEC: attentive focus articulated within models (mental or digital)
I’ll summarize using what Miguel Fierro said about AI self-attention within LLMs as a template:
AEC attention (focus) techniques:

| # | Technique | Origin |
| --- | --- | --- |
| 1 | technical drawing (+ fusion in mental model) | tradition |
| 2 | technical drawing (+ fusion in digital model) | digital fusion |
| 3 | BCF in digital model | format standard |
| 4 | cinematic camera rigging | camera in film (history) |
| 5 | TGN in digital model | TanGeriNe format, proposed |
TGN is Tangerine’s proposed open source software development project. TGN is a fusion of the four predecessor techniques listed above it in the table.
The fusion transforms its constituent components as it combines them: something new is made of the parts, as each of the parts is made new in fusion. Here’s a good overview of the proposal. The page includes links to download the developer specification, mockup demo videos, and several articles discussing the TGN proposal in detail:
The attention technique TGN OPEN CODE is a package that expresses:
1/ in a coordinate system in a model
2/ within a scope/bounding box (or more complex volume)
3/ looking at the designated target face(s) of the scope from the “normal direction”
4/ which is one direction among many relevant directions of view (so, with UI for controlled view variation): cinematic camera rigging built into the attention-focusing rigs within the model
5/ with the model/twin filtered by relevant filtering criteria
6/ with some style of display applied
7/ with some clarifying remarks or graphics added (authored within the TGN rigs or displayed in the rigs via external link)
8/ and with this feature package shareable with adequate fidelity to other modeling apps
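To make the eight-part package concrete, here is a minimal sketch of how a TGN rig might be represented as a data structure. This is purely illustrative: every name below (`TGNRig`, `to_package`, the field names, the use of a simple bounding box for the scope) is my assumption, not the actual TGN developer specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the eight-part TGN package described above.
# All names and types here are illustrative assumptions, not the TGN spec.

@dataclass
class TGNRig:
    # 1/ coordinate system within the host model
    coordinate_system: str
    # 2/ scope as an axis-aligned bounding box (min/max corners);
    #    the proposal also allows more complex volumes
    scope_min: tuple[float, float, float]
    scope_max: tuple[float, float, float]
    # 3/ designated target face(s) of the scope, viewed from the normal direction
    target_faces: list[str]
    # 4/ named camera views beyond the default normal view (view variation)
    camera_views: list[str] = field(default_factory=list)
    # 5/ filtering criteria applied to the model/twin
    filters: list[str] = field(default_factory=list)
    # 6/ display style applied to the filtered view
    display_style: str = "default"
    # 7/ clarifying remarks or graphics, authored in the rig or linked externally
    annotations: list[str] = field(default_factory=list)

    def to_package(self) -> dict:
        # 8/ serialize the whole feature package into a plain structure
        #    that could be shared with other modeling apps
        return {
            "coordinate_system": self.coordinate_system,
            "scope": {"min": self.scope_min, "max": self.scope_max},
            "target_faces": self.target_faces,
            "camera_views": self.camera_views,
            "filters": self.filters,
            "display_style": self.display_style,
            "annotations": self.annotations,
        }
```

A rig built this way might scope a single wall section, target its north face, filter the model down to structural elements, and travel to another app as the serialized package.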
TGN (itself) is transforming thought into action
That’s what TGN is: the transformation of modeled and real reality through engagement (articulated attention), which induces thought that transforms the whole stream into action/work/result.
TGN is itself transforming, from just me thinking/writing/demonstrating/talking about it, to software companies collaborating to make it real in software you already use.
This is getting started.
There will be many opportunities to contribute to this and shape it.
Contact me if you want to jump in!
- Digital tools are an extension of human perceptual/cognitive equipment for engagement with models (real world or digital) for interpretive and generative purposes.
- Digital tools need to further advance in support of this perceptual/cognitive equipment.
- This need is evident whether models are generated by natural or artificial intelligence (NI or AI).
- The equipment provided to date in modeling software is inadequate and underdeveloped. This inadequacy is the greatest single reason that technical drawing still supplies the majority of revenue for major commercial modeling software developers, even in 2023, after decades of modeling.
- TGN is an open source software development proposal intended to raise the adequacy of the relevant equipment within model-handling software (all kinds).
- Commercial and independent software developers are invited to join a project to make this happen. Contact me if interested.
- TGN is a minimum feature set that
- a) will make a difference and
- b) corresponds to the way human perception works in modeled worlds (real or digital).
- But TGN can be extended and added to, to the extent anyone can envision. The extensions and additions need not be open source.
- “Control rods” for human-in-the-loop input back into AI and other iterative, computational model-generating systems are optimally hosted within TGN rigs, within models. These are control parameters, markers, and drivers laid down within the model for human guidance of generation, feeding back into the generating system.
You can see why, right? You can imagine the development of a tremendous variety of such controls, and the fact that these would be made very easily accessible, visible, close at hand, and intelligible, within the context of these rigs.
Let your imagination loose. What control rods would you embed there to guide generative (AI or NI generated) development of the model?
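One way to picture control rods hosted in a rig: a set of named, human-set parameters, where the locked ones become hard constraints for the generative system and the unlocked ones remain advisory. This is a sketch of the idea only; `ControlRod`, `RigControls`, and their fields are my hypothetical names, not anything from the TGN proposal.

```python
from dataclasses import dataclass, field

# Illustrative sketch of "control rods" embedded in a TGN rig.
# All names here are assumptions for the sake of the example.

@dataclass
class ControlRod:
    name: str              # e.g. a hypothetical "max_floor_height"
    value: float           # current human-set value
    locked: bool = False   # locked rods must not be overridden by the generator

@dataclass
class RigControls:
    rods: dict[str, ControlRod] = field(default_factory=dict)

    def set_rod(self, name: str, value: float, locked: bool = False) -> None:
        # A human lays down (or updates) a control rod within the rig.
        self.rods[name] = ControlRod(name, value, locked)

    def constraints(self) -> dict[str, float]:
        # Export only the locked rods as hard constraints for a
        # generative system; unlocked rods stay advisory.
        return {n: r.value for n, r in self.rods.items() if r.locked}
```

Because the rods live inside the rig, they stay visible, close at hand, and intelligible right where attention is already focused.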
What’s after that for TGN?
TGN will evolve far beyond its proposed open source core feature set. I have many ideas for the broader attention focusing rigs (AFR) concept, and no doubt many others will have ideas of their own.
Once TGN exists in software, people will expand it. I certainly don’t believe all the additions, extensions, and enhancements to the core TGN features have to be open source. They can be domain and app specific and proprietary.
But everyone doing that will benefit from TGN’s open, standardized core feature set. It gives a great foundation to build from.