BIM and AI each produce a tangible “thing”: a digital model of a building or similar physical infrastructure on the one hand, and a body of text or a set of pixels comprising an image or sequence of images on the other. These things share a common defining characteristic:
They’re correct and complete enough, in other words “good enough” (however you choose to define “good enough”), in some areas and in some respects, but not correct, or not complete enough, in other areas and in other respects.
And within themselves, no distinction is made between those consequentially different areas and respects: those that are correct and good enough, and those that are not.
That’s remarkable, certainly.
Here’s an excerpt from a post I wrote in 2018:
Limits of “Models”
Like yin and yang, scope always goes hand in hand with limits. The major limitation of models arises from the fact that models have better-explored regions and lesser-explored regions. Better-developed regions and lesser-developed regions. Clearer and fuzzier regions. Areas of higher confidence and lower confidence.
The trick is to know where the high-confidence, clearer, more articulate, better-elaborated regions are, and conversely where, elsewhere, things are fuzzier. A model, by itself, gives no indication whatsoever of where we should have higher or lower confidence. This is a fundamental limitation of models, a limit with real and substantial practical (and legal) impact.
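To make that limitation concrete, here is a minimal sketch (in Python; ModelElement, confidence, and low_confidence_regions are hypothetical names of mine, not part of any real BIM schema) of what a model that did announce its own fuzzier regions might look like:

```python
from dataclasses import dataclass

# Hypothetical: a model element tagged with the author's confidence in it.
# Real model formats carry geometry and properties, but no standard
# per-element indication of how well-explored or trustworthy a region is.

@dataclass
class ModelElement:
    name: str
    confidence: float  # 0.0 = pure placeholder, 1.0 = fully verified

def low_confidence_regions(elements, threshold=0.5):
    """Return the elements a reviewer should treat as fuzzy rather than settled."""
    return [e for e in elements if e.confidence < threshold]

model = [
    ModelElement("stair core, level 3", 0.95),     # detailed and coordinated
    ModelElement("mechanical room layout", 0.30),  # roughed in, unresolved
]

for element in low_confidence_regions(model):
    print(f"Take a closer look at: {element.name}")
```

No production model format carries anything like this confidence field, which is exactly the point: absent it, a viewer must guess, region by region, where the model deserves trust.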
Let’s move to drawings now, which, like all things in this universe, have both scope and limits.
Scope of “Drawings”
In this post I’m building the argument that drawings are the ideal accompaniment to models (like sound infused into silent film), not because I want to make that argument, but for a better reason: it’s true.
Models are wide, expansive, environmental, whole things. Drawings are nothing like this at all. On the contrary, as their polar opposite, drawings are narrow and focused; they embody the act of taking a closer look.
Fundamentally, drawings and models, as media, have essentially nothing in common with each other; rather, they are a true pair, not a pair of convenience or accidental proximity, but a pair born of absolute necessity. You can demonstrate this to yourself in a few seconds. Observe the environment in which you currently stand, or sit, or whatever you’re doing. Try to understand the environment effectively, in order to do something, like take a seat, or walk through a doorway, or pick up a glass of water.
The environment is all around you. How do you understand it and act within it? Well, you begin by taking a closer look… here, and here, and here, and there. This is fundamental. Thinking doesn’t happen without it. Understanding will not develop without it. I discuss this more below, in the section on problems.
This act of “taking a closer look” sufficiently describes the scope of drawing as a medium, though it is also worth noting the role that “taking a closer look” plays in articulating the clear, controlled communication on which the architecture and engineering professions rely.
Also note that the scope of drawing is innately tied to its corresponding limits.
Limits of “Drawings”
Fundamentally, the act of taking a closer look is meaningless unless there is a wider environment within which the closer look occurs. This is uncontroversial. If there is no forest, then there is no “taking a closer look in a forest”. If there is no mental model (or ‘mental and digital’ model) of a building, then there is no “taking a closer look within a building model”. This well and truly means exactly what it sounds like it means: without a model (at least an imaginary one, a mental model), drawing is not possible. Sure, some exceptions may be found in certain kinds of experimental abstract art, but in the AEC industry this holds.
I believe it suffices to say that no person, author or viewer, has ever understood a drawing in a meaningful way without engaging in the mental activity of instantiating that drawing in situ, within a mental model of the wider whole of the environment in which the drawing gains its meaning. Interestingly, the relationship is bidirectional: a drawing gains its meaning when it is put into a model, and likewise a nascent drawing, one being authored, comes into being out of a model.
Both while authoring a drawing and afterward, as others view the completed drawing, meaning comes from a mutual interdependency and interplay, in the mind, between drawing and model; each derives its meaning in interplay with the other. To make sense of a drawing, viewers imagine the drawing in its place within the wider whole of an environment, within a model or portion of a model. And the reverse: to make sense of a spatial environment, modeled or real, people engage in the act of taking a closer look within it, the act embodied traditionally in the medium of drawing.
Let’s look more closely at the significant problems of each of these media. The major problem with drawings is that a drawing requires a model for its meaning to develop. The reverse is true for models. This of course suggests the possible value of drawing-model fusion, sketched below; more on that later, after a more detailed look at the problems of the model medium. There are two primary types of problems with models. The first is a hard problem, a major practical problem. The second is worse: a more fundamental problem that goes right to the core of the nature of human cognition.
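As a concrete illustration of the drawing-model fusion just mentioned, here is a minimal sketch (in Python; all names are hypothetical, and this is not any actual BIM API) in which a drawing is not a standalone artifact but a saved closer look: a reference into a model plus the set of elements in focus, so the drawing carries its context with it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a drawing that cannot exist without its model.
# The drawing stores no independent copy of the content; it is defined
# as a view into a model, so its meaning stays anchored in its context.

@dataclass
class Model:
    name: str
    elements: dict = field(default_factory=dict)  # element id -> description

@dataclass
class Drawing:
    title: str
    model: Model  # the wider whole this closer look belongs to
    focus: list   # ids of the elements under the closer look

    def render(self):
        """Resolve the drawing's content from the model itself, not from a copy."""
        return {eid: self.model.elements[eid] for eid in self.focus}

building = Model("office tower", {
    "W-101": "window, curtain wall, level 1",
    "D-205": "door, stair core, level 2",
})

detail = Drawing("Stair core entry detail", building, ["D-205"])
print(detail.render())  # {'D-205': 'door, stair core, level 2'}
```

Delete the model and render() has nothing to resolve against; the drawing becomes meaningless, which is the argument above in structural form.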
Turning from models and drawings to AI: “The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant,” says the subtitle of a recent article. “The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. New, competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work.”
The first piece of evidence that “the pace of improvement in AIs is slowing” is the difficulty of finding more training data: “companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up,” and synthetic data won’t help.
Second, “the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.” This suggests that “AI could become a commodity,” but not a very good one.
Third, “today’s AIs remain ruinously expensive to run,” thus causing problems at startups such as Stability AI, Inflection AI, Anthropic, and even OpenAI.
“The bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it. That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result. For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins.”
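To see the shape of the economics being described, a back-of-envelope sketch may help; every figure below is a round, hypothetical placeholder of mine, not a number from the article:

```python
# Purely illustrative, made-up round numbers: NOT the article's figures.
# The point is only the structure: a per-answer cost, multiplied by huge
# query volume, quickly dwarfs even an enormous one-time training cost.

training_cost = 100_000_000      # one-time training spend, dollars (hypothetical)
cost_per_ai_answer = 0.003       # dollars per generated answer (hypothetical)
cost_per_search_result = 0.0002  # dollars per conventional result (hypothetical)
queries_per_day = 1_000_000_000  # popular-service volume (hypothetical)

daily_serving_cost = queries_per_day * cost_per_ai_answer
days_to_exceed_training = training_cost / daily_serving_cost

print(f"AI answer costs {cost_per_ai_answer / cost_per_search_result:.0f}x a search result")
print(f"Daily serving cost: ${daily_serving_cost:,.0f}")
print(f"Serving exceeds the training budget after {days_to_exceed_training:.0f} days")
```

Under these made-up assumptions, daily serving costs overtake the entire training budget in roughly a month; the meter runs on every single answer.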
These problems suggest “there is a massive gulf between the number of workers who are just playing with AI, and the subset who rely on it and pay for it. Microsoft’s AI Copilot, for example, costs $30 a month.” OpenAI’s annual revenues might be $2 billion but this “is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation.”
Finally, “evidence suggests AI isn’t nearly the productivity booster it has been touted as. While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll,” says a Wharton professor.
“None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. Mounting evidence suggests that won’t be the case.”
This evidently is the BEST our civilization produces.
How far we’ve fallen.
A crap generator, and a corresponding ideology inducing us to abandon distinctions between low quality and high quality, between fog and sunshine.
Evidence of lost capability for coherent, effective thought about anything is everywhere we look now.
We swim in misty fog and it looks like a bright, sunshiny day to us.
We no longer know the difference.
No good can come of this.
Time for some serious self-reflection and rework.
Start again from first principles.
If those can be recovered.
Fundamentals are at stake, of course.
Well stated here:
COUNTERFEIT PEOPLE
As a student of philosophy and human consciousness, I am a big fan of Daniel C. Dennett (who passed away on April 19, 2024), known as the Western philosopher of consciousness.
What a brilliant article and warning to humanity before he died!
Quote
Companies using AI to generate fake people are committing an immoral act of vandalism, and should be held liable.
Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk.
It is a terrible irony that the current infatuation with fooling people into thinking they are interacting with a real person grew out of Alan Turing’s innocent proposal in 1950 to use what he called “the imitation game” (now known as the Turing Test) as the benchmark of real thinking. This has engendered not just a cottage industry but a munificently funded high-tech industry engaged in making products that will trick even the most skeptical of interlocutors. Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.
Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.
Unquote
Thank You Daniel Dennett: Hope and pray you can Rest In Peace!
On the AI side, mass disconnection may be the answer at this point. I don’t know. A problem artificially created, the damage it causes too profound, the solution ruled out from within. No idea what to do about this.
Hi! My name is Rob Snyder. I’m on a mission to elevate digital models in AEC (architecture, engineering, and construction) by developing equipment for visual close study (VCS) within them, so that they adequately assist the engine of thought we all have running as we develop models during design, and as we interpret them so they can be put to use in support of necessary action, during construction for example.