I don’t mind starting 2023 with metaphors, even bathroom humor.
Here we go then.
Standardizing the way we describe, store, and share model elements and properties is necessary (vital) and excellent work, all within “Bin 1” (modeling).
“Bin 2” is ways of looking at models. As in other domains, number 1 and number 2 go together. And too much number 1 and not enough number 2 leads to a lot of pain, discomfort, suffering, and general ineffectiveness.
I’m trying to get number 2 de-constipated in this industry. In the beginning that won’t make for commercial ROI for software dev investors. It will only provide a tremendous amount of relief to all software users. And once they’re functioning normally again, then there are investor opportunities that will flow from this.
USD? Absolute respect to say the least. Shaping the film industry’s USD to serve AEC, what could be a better project at this point? Greg Schleusner’s take on this at https://magnetar.io should have all the support necessary and maximum attention.
To be sure, the “Bin 1” (modeling) / “Bin 2” (ways of looking at models) analogy breaks down when applied to USD, because the USD specification spans both categories. But because modeling efforts in AEC have for decades been heavy on bin 1 (standardization and interoperability) and too light on bin 2 (engagement and interpretive technique), the point holds.
By the way, as an example, USD includes concepts of rigging. As of course is essential in the film industry. It’s essential in AEC too, though the realization of this is suppressed by digital tech itself, so far. See: https://remedy-entertainment.github.io/USDBook/what_is_usd_not.html
TGN rigging (for insight) puts the emphasis on scene rigging (filter, style) and camera rigging (a camera path and transformation across an area of attentive focus within a scene), plus extra graphics, not on character animation. I do see TGN including assembly-sequence rigging later, after the most basic “looking at a thing” aspects are well handled first.
I’m an outsider looking in at the film industry and USD. It seems that camera rigging is absent from USD. Camera rigging is vital to film. Why it’s excluded from USD (if it is) is something that, no doubt, is known among people using USD and making films. I’d like to know.
3. Cinematic camera rigging, built-in (resolving to parallel projection at center)
4. Rig UI
5. Filter Pegs
6. Style Pegs
7. Facility for extra graphics
8. Rig sharing – TGN rigs are portable from one modeling app to another
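Just to make the camera-rigging idea concrete, here is a hypothetical sketch (my own illustration only, not the TGN/AFR specification and not a USD API): a camera interpolated along a path while it stays aimed at an area of attentive focus.

```python
# Hypothetical illustration of "a camera path across an area of attentive focus".
# This is NOT TGN/AFR and NOT USD; just a generic sketch of the idea.
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple   # where the camera is
    look_at: tuple    # the area of attentive focus it keeps framing

def lerp(a, b, t):
    return tuple(a_i + (b_i - a_i) * t for a_i, b_i in zip(a, b))

def camera_path(start, end, focus, steps=5):
    """Move the camera from start to end while it keeps looking at the focus area."""
    return [CameraPose(position=lerp(start, end, i / (steps - 1)), look_at=focus)
            for i in range(steps)]

for pose in camera_path(start=(0, 0, 10), end=(10, 0, 10), focus=(5, 5, 0)):
    print(pose)
```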
TGN excludes bin1 (modeling) entirely in its specification because TGN is intended as a set of features for looking at models, as a feature overlay within any modeling (bin1) environment, USD being one of many.
Bin2 is separate. For good reason.
Here are links to the AFR/TGN specification (a bin 2 proposal) and discussion of its features written for developers and users alike:
In the rest of the post I’d like to share things from a different field that grapples with really the same issues we all do in AEC:
how to engage with complex environments,
how to think about them,
what it means to think about them, make sense of them,
what’s going on in our minds when we do this,
what techniques we use to facilitate this: both mental techniques (the study of mind stuff, Bin 0) and the tangible, externalized symbolic techniques we use to assist the mental grappling that goes on when we act sensibly within complex environments; when we’re effective, doing the right things at the right time and so on; when we understand what we’re doing and need to do; when we have situational awareness. To say nothing of making ourselves clear to others.
People in AI are well aware of the difficulty of these matters and, to a great extent, their nature. Do they devote half (probably more) of their energy to bin 2? That is, to methods of looking at, thinking about, interpreting, and making sense of whatever’s in bin 1 (data), and to developing those methods for doing it artificially (AI).
People in AEC understand this as well. They intuit it and put it into practice using all available means, although the AEC tech space has over-elevated bin1 at the expense of bin2. A mistake, counterproductive. AI developers don’t make that mistake. Imagine if they did. They’d have basically nothing. Put bin1 and bin2 together in a serious way and then you have something. Really. A major reversal of fortune. A sea change.
In AI in 2023 we see change. Sea Change in fact.
Some pics of frozen sea and ice melt, near me on the west coast of Sweden.
Sea Change
It seems things are in flow now. Let’s hope so, that this year flows bringing everyone closer to where they need to be.
Here’s a collection of discussions about AI that came my way on LinkedIn. I followed Martin Ciupa. Lucky me. He’s thoughtful, prolific, and clear. So most of these are his posts. They all grapple with how you develop a machine, an artificial intelligence, to engage with an environment in ways such that the engagement leads to:
making sense of the environment,
understanding it in some way, or giving the impression that this is so.
Reading these, you get some sense of the scale of effort put, in this field, into envisioning and developing methods of engagement with complex environments. It’s not a decades-long effort aimed at category standardization alone. It includes that, but it’s the methods of engagement that are, I think, exercising whatever it is that makes AI, AI, and that deliver its outcome.
And these methods tell us something, many somethings, about how we likewise engage, how we, all of us, engage environments, how we, in other words, think, interpret, understand, act.
Comment: Einstein said, “I believe in intuitions and inspirations… I sometimes FEEL that I am right. I do not KNOW that I am.” (Source: “The Ultimate Quotable Einstein”, p. 435, Princeton University Press). From learned experience we build a large dataset (so to speak) of feelings about what’s right. But it’s by rational reflection that we challenge those feelings in an attempt to support them dianoetically. The use of Bayesian reasoning is an appropriate first-step reflection process suitable for a data-centric model.
Abstract: Despite the success of artificial intelligence (AI), we are still far away from AI that models the world as humans do. This study focuses on explaining human behavior from the perspective of intuitive mental models. We describe how behavior arises in biological systems and how a better understanding of these biological systems can lead to advances in the development of human-like AI. Humans can build intuitive models from physical, social, and cultural situations. In addition, we follow Bayesian inference to combine intuitive models and new information to make decisions. We should build similar intuitive models and Bayesian algorithms for the new AI. We suggest that the probability calculation, in the Bayesian sense, is sensitive to semantic properties of the combination of objects formed by observation and prior experience. We call this brain process computational meaningfulness, and it is closer to the Bayesian ideal when the occurrence probabilities of these objects are believable. How does the human brain form models of the world and apply these models in its behavior? We outline the answers from three perspectives. First, intuitive models support an individual in using information in meaningful ways in the current context. Second, neuroeconomics proposes that the valuation network in the brain has an essential role in human decision making; it combines psychological, economic, and neuroscientific approaches to reveal the biological mechanisms by which decisions are made. Then, the brain is an over-parameterized modeling organ and produces optimal behavior in a complex world. Finally, progress in data analysis techniques in AI has allowed us to decipher how the human brain valuates different options in complex situations. By combining big datasets with machine learning models, it is possible to gain insight from complex neural data beyond what was possible before. We describe these solutions by reviewing the current research from this perspective. In this study, we outline the basic aspects of human-like AI and we discuss how science can benefit from AI. The better we understand the human brain’s mechanisms, the better we can apply this understanding to building new AI. The development of AI and the understanding of human behavior go hand in hand.
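To make the Bayesian-reflection point above concrete, here is a minimal sketch (my own illustration, with made-up numbers; not from the comment or the paper) of a prior “gut feeling” being revised by evidence via Bayes’ rule.

```python
# Minimal Bayesian update: a prior "gut feeling" revised by new evidence.
# Hypothetical numbers, for illustration only.

def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    evidence = likelihood * prior + likelihood_if_false * (1.0 - prior)
    return likelihood * prior / evidence

# Intuition says a design choice is right with 70% confidence (the prior).
# The observed evidence is twice as likely if the choice really is right.
posterior = bayes_update(prior=0.70, likelihood=0.8, likelihood_if_false=0.4)
print(f"Belief after reflection: {posterior:.2f}")  # ~0.82
```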
I don’t know about you, but when I look around, I can actually see, can perceive an actual visual field. I’m not just responding like an AI to the detection of objects. I can damn well see them. #consciousness #ai #perception #vision #neuroscience
Comment: It’s not just “replaying past experiences like movies” that is important for neural plastic learning. It’s the connection those memories have to qualia feelings, and the conscious rational reflection on the things we did or didn’t do that could promote or prevent consequences in the future. For this you need causal and real-world modelling and common sense.
Title: DeepMind’s Idea to Build Neural Networks that can Replay Past Experiences Just Like Humans Do
“The ability to use knowledge abstracted from previous experiences is one of the magical qualities of human learning. Our dreams are often influenced by past experiences, and anybody who has suffered a traumatic experience in the past can tell you how constantly they see flashes of it in new situations. The human brain is able to make rich inferences in the absence of data by generalizing past experiences. This replay of experiences has puzzled neuroscientists for decades, as it’s an essential component of our learning processes. In artificial intelligence (AI), the idea of neural networks that can spontaneously replay learned experiences seems like a fantasy.
…
From the different fields of AI, reinforcement learning seems particularly well suited for the incorporation of experience replay mechanisms. A reinforcement learning agent builds knowledge by constantly interacting with an environment, which allows it to record and replay past experiences in a more efficient way than traditional supervised models. Some of the early work in trying to recreate experience replay in reinforcement learning agents dates back to a seminal 1992 paper that was influential in the creation of DeepMind’s DQN networks that mastered Atari games in 2015.
…
From an architecture standpoint, adding replay experiences to a reinforcement learning network seems relatively simple. Most solutions in the space relied on an additional replay buffer that records the experiences learned by the agent and plays them back at specific times. Some architectures choose to replay the experiences randomly while others use a specific preferred order that will optimize the learning experiences of the agent.
…
The current implementations of experience replay mostly follow the movie strategy because of its simplicity, but researchers are starting to make inroads into models that resemble the imagination strategy. Certainly, the incorporation of experience replay modules can be a great catalyst for the learning experiences of reinforcement learning agents. Even more fascinating is the fact that by observing how AI agents replay experiences we can develop new insights about our own human cognition.”
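As a concrete anchor for the replay-buffer idea described above, here is a minimal sketch (my own, not DeepMind’s code or the article’s): experiences are recorded as the agent acts and sampled back at training time, uniformly at random in this simple version.

```python
import random
from collections import deque

# Minimal experience-replay buffer sketch (illustrative only).
# Transitions are stored as (state, action, reward, next_state, done).
class ReplayBuffer:
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are dropped first

    def record(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def replay(self, batch_size: int = 32):
        # Uniform random replay; prioritized variants weight samples by learning signal.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# Usage: record while acting, replay while training.
buf = ReplayBuffer()
buf.record(state=0, action=1, reward=0.5, next_state=1, done=False)
batch = buf.replay(batch_size=8)
```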
It’s impressive. No doubt about it. See… https://slate.com/technology/2022/12/chatgpt-openai-artificial-intelligence-chatbot-whoa.html But does it doubt? A quintessential human attribute. How does a statistical fitting algorithm think about whether its responses might be humanly appropriate? Does it think at all? Do we care? I think we should! Until it does, I have doubts. An unthinking tool that might lead us into the unthinkable.
Comment: The author presents a positive view of the recent SOTA. I am less so, given that I think the latest systems are somewhat hyped as surpassing human levels of performance. They are impressive, no doubt. But are we close to AGI? IMO, not really.
As I mentioned in an earlier post today, I feel that firm progress in AI requires it to do the “hard things” (causal modeling, common sense, reflective dianoetic reasoning, the contribution of self-aware sentience, etc.).
IMO the way of progress in AI is to do the hard things (causal modeling, common sense, reflective dianoetic reasoning, the contribution of self-aware sentience, etc.). System 2 related.
It’s not just about scaling a generative AI model that does none of those, e.g., Transformer-based large language models. That said, I intend to use tools like ChatGPT for the System 1 module. It has a role there (once some of its hilarious errors – Andrew Ng’s words – are fixed).
I give credit where it’s due. But there are paths not yet fathomed that are, IMO, equally necessary and perhaps more important.
PS: the limerick below has rhymes that fit its form; it makes sense. But the syllable-count rhythm is off a tad! 😉
I’d change the first line and the last line to a nine-syllable beat. As in: “Now they are a team that does the trick!” And drop the word “When” in line 3. So it scans better as…
A System 1 AI is so quick
With instinct and gut feelings so slick
Paired with a System 2
Their intelligence grew
Now they are a team that “does the trick!”
Note “does the trick”
Meaning: that was all that remained or was required to accomplish something.
I was asked a few minutes ago how we get to AGI, given that it is highly unlikely the current hype-storm of transformer-based, super-large-dataset AI (however much you choose to scale it) will cut it.
My response…
We can integrate with Neuro-Symbolic AI. That will get us halfway to AGI (Really Useful Machine Learning, which I published on in 2017).
From there we extend the narrow domains and work on common sense, a causal world model, and an artificial consciousness leading to rational dianoetic reflection on the likely consequences of its acts/decisions/responses. Work on sentience and figure out how to do that in an energy-efficient and affordable way. Oh, and perhaps embodiment, so it can learn in millions of simultaneous interactions with the real world.
That’s the known unknowns.
After that we fix the unknown unknowns! 😉
E.g., perhaps how computationalist approaches may fail, and how we may need to focus on emerging tech like embedded cybernetic neural organoids to deal with the fact that all ANNs are grossly simplistic in their modelling of actual intelligence.
PS: The paper is a simple one from 5 years ago, and recommends rule extraction from black-box DL (we created some IP and patent-filed it; it’s not disclosed in the paper; we used a reverse-engineering scientific method). And it warns about AI hype and over-optimistic forecasts of AGI.
Comment: I believe Neuro-Symbolic AI is the way forward to building more trustworthy and ethically safe AI, compared to the generative-AI-alone approach being hyped at this time. It may be able to perform some AGI-level tasks. And be a really useful machine learning model. But, whilst it’s based on a computationalist paradigm, without consciousness/sentience it won’t really be “human-like”. IMO 😉
Title: A Semantic Framework for Neural-Symbolic Computing
Abstract: Two approaches to AI, neural networks and symbolic systems, have been proven very successful for an array of AI problems. However, neither has been able to achieve the general reasoning ability required for human-like intelligence. It has been argued that this is due to inherent weaknesses in each approach. Luckily, these weaknesses appear to be complementary, with symbolic systems being adept at the kinds of things neural networks have trouble with and vice-versa. The field of neural-symbolic AI attempts to exploit this asymmetry by combining neural networks and symbolic AI into integrated systems. Often this has been done by encoding symbolic knowledge into neural networks. Unfortunately, although many different methods for this have been proposed, there is no common definition of an encoding to compare them. We seek to rectify this problem by introducing a semantic framework for neural-symbolic AI, which is then shown to be general enough to account for a large family of neural-symbolic systems. We provide a number of examples and proofs of the application of the framework to the neural encoding of various forms of knowledge representation and neural network. These, at first sight disparate approaches, are all shown to fall within the framework’s formal definition of what we call semantic encoding for neural-symbolic AI.
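As a toy anchor for what “encoding symbolic knowledge into neural networks” can mean at its very simplest (my illustration, not the paper’s framework): a propositional rule realized exactly by the weights and threshold of a single neuron.

```python
# Toy neural-symbolic encoding sketch (illustrative, not the paper's framework):
# the rule "fire IF wet AND cold" encoded as the weights/bias of one neuron.

def step(x: float) -> int:
    return 1 if x > 0 else 0

def rule_as_neuron(wet: int, cold: int) -> int:
    # Weights 1, 1 and bias -1.5 make the neuron fire only when both inputs are 1,
    # i.e. the neuron computes the symbolic conjunction wet AND cold.
    return step(1.0 * wet + 1.0 * cold - 1.5)

for wet in (0, 1):
    for cold in (0, 1):
        print(wet, cold, "->", rule_as_neuron(wet, cold))
```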
I suspect the computationalist paradigm in AI development will not succeed in generating more than an imperfect simulacrum of consciousness; the non-living, digital, algorithmic modeling of a living cybernetic analog system may be the limiting factor (after all, “the map is not the territory”, and “all models are wrong – though some are useful!”).
But that doesn’t mean we cannot introduce those elements into our systems. So I’m not one to say “it is impossible”, just that it is way harder than some expect.
PS: in the meantime, making “really useful machine learning” is not only viable but essential to help us make a better-educated, healthier, highly productive, etc., world.
“Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.
Machines of this sort will have passed not the Turing Test—that flawed benchmark of machine intelligence—but rather the so-called Garland Test, named after Alex Garland, director of the movie Ex Machina. The Garland Test, inspired by dialog from the movie, is passed when a person feels that a machine has consciousness, even though they know it is a machine.
Will computers pass the Garland Test in 2023? I doubt it. But what I can predict is that claims like this will be made, resulting in yet more cycles of hype, confusion, and distraction from the many problems that even present-day AI is giving rise to.”
There are quite a few techno-cynics who get into a hot “funk” (https://www.vocabulary.com/dictionary/funk) about the possibility of techno futures being overhyped. I have some sympathy with them, in that innovations that are potentially powerfully impactful, and on the leading (albeit possibly bleeding) edge, are potentially hugely rewarding to VCs. If only one in ten succeeds, they can still be hugely rewarding. That’s hard on the nine that fail!
History is full of examples of A. C. Clarke’s three laws…
The laws are:
1/ When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2/ The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3/ Any sufficiently advanced technology is indistinguishable from magic.
Consider the innovative history of human flight, computers, GSM, space flight, electronics, … etc.
I would advocate not rushing in with speculative “me too” hype-tech already being pursued by many others for more than 1-2 years, and by companies with big pockets, e.g., generative AI, quantum computing, fully autonomous driving, cryptocurrency, etc. If these technologies are commercially achievable, the early starters will get there before you and probably have patents filed already to stifle you!
However, “manufacturing spades during gold rushes is a less risky business strategy than the mining of that gold!”
Meaning to say … building support technology that can be used by several innovative technologies (generally and also those being forecasted as hugely impactful) can be attractive in risk/reward strategies and good investments!
Plus having a clear business case solving genuine problems in big markets is always a good triad!
Big VCs have spreads in terms of risk investment portfolios to manage their overall exposure.
The “AI lie” is that current development in large-dataset neural networks can achieve trustworthiness and confidence in ethically appropriate output in general (non-narrow), high-stakes applications.
It’s not that they have no value. They are impressive in certain respects. But IMO they are a component on the road to AGI. They perform fast, intuitive System-1 output responses to prompts, but lack the necessary rational dianoetic filter and reflection of our System-2 abilities.
It’s perhaps possible that novel AGI hybrid architectures with combinations of connectionism-based and symbolism-based components will move us forward more confidently.
We are not only “not there yet”; there are too many claiming we are on an imminent path to AGI. It’s hype. Much more innovation is required.
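To picture the System-1/System-2 split being described here, a toy sketch (my own illustration, not a real architecture): a fast, pattern-based proposer whose answers must pass a slow, rule-based check before they are accepted.

```python
# Toy System-1 / System-2 sketch (illustrative only): a fast, intuitive proposer
# whose answers pass through a slow, rule-based checker before being accepted.

def system1_propose(question: str) -> str:
    # Fast, intuitive guess (stands in for a generative model).
    canned = {"2+2": "5", "capital of France": "Paris"}
    return canned.get(question, "unknown")

def system2_check(question: str, answer: str) -> bool:
    # Slow, deliberate verification with explicit rules (stands in for symbolic reasoning).
    if question == "2+2":
        return answer == str(2 + 2)
    return answer != "unknown"

def hybrid_answer(question: str) -> str:
    guess = system1_propose(question)
    return guess if system2_check(question, guess) else "needs reflection / more data"

print(hybrid_answer("2+2"))                # System 2 rejects the intuitive "5"
print(hybrid_answer("capital of France"))  # passes the check
```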
Comment: The title jars with the content of this article. Its point is: narrowly defined System 1 (intuitive-thought-modelling) systems (which current generative, DL-based AI is built on) won’t deliver AGI by themselves. For that we need to add System 2 (rational thought modelling), necessary to consider and reflect, via dianoetic processes, not only on bad prompts/responses (in terms of logic, data, ethics/safety), but also to update the large database to adapt and avoid such circumstances in the future. This is both a filter and a high-level cognitive learning process that conscious/sentient beings have. That is likely to need symbolic AI to AUGMENT the connectionism of the DL paradigm. The future is in such hybrid AI architectures IMO. PS: that’s my R&D interest 😉.
Title: Deep Learning Is Not Just Inadequate for Solving AGI, It Is Useless
Abstract: AI experts should not get offended when AGI researchers complain about the inadequacies of deep learning. Nobody is really out to get rid of DL. While it is true that the advent of AGI will render DL obsolete in some fields, we believe that it will probably continue to be useful for many automation tasks even after AGI is solved. But, in order to make progress toward solving AGI, researchers must point out that DL is not just inadequate for solving AGI, it is useless. And some of us know exactly why it is useless.
PS: A deep neural network cannot characterize the image or purpose of a bicycle unless it has been previously trained to recognize it. Humans, by contrast, can catch the concepts of wheels and steering and pedals that might fit a use case. They can imagine it. Seeing one in actual operation is all they need to understand its purpose and functionality. That’s a rational consciousness function. However, learning to balance and ride it effectively is something you get with practice, building an intuitive, reflexive, unconscious ability.
“Unity is strength.” This old saying expresses pretty well the underlying idea behind the very powerful “ensemble methods” in machine learning. Roughly, ensemble learning methods, which often top the rankings of many machine learning competitions (including Kaggle’s competitions), are based on the hypothesis that combining multiple models together can often produce a much more powerful model.
The purpose of this post is to introduce various notions of ensemble learning. We will give the reader the keys needed to understand and use the related methods well, and to design adapted solutions when needed. We will discuss some well-known notions such as bootstrapping, bagging, random forests, boosting, stacking and many others that are the basis of ensemble learning. In order to make the link between all these methods as clear as possible, we will try to present them in a much broader and logical framework that, we hope, will be easier to understand and remember.
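For readers who want to touch the ideas named above, a minimal sketch using scikit-learn (my own, not code from the quoted post): bagging many trees, and stacking heterogeneous models behind a meta-learner.

```python
# Minimal ensemble-learning sketch with scikit-learn (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Bagging: many models of the same type trained on bootstrap samples, then averaged.
bagging = BaggingClassifier(n_estimators=100, random_state=0)  # decision trees by default

# Stacking: heterogeneous models whose predictions are combined by a meta-learner.
stacking = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("svc", SVC())],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging),
                    ("random forest", RandomForestClassifier(random_state=0)),
                    ("stacking", stacking)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```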
Comment: Consciousness is thought in some respects to be Gödelian, i.e., the ability to think about thinking, a recursive facility. Research suggests corvids have this ability. E.g., this comment is based on a thought about thinking about thinking! 😂
Title: Scientists Find Crows Are Capable of Recursion — A Cognitive Ability Thought to Be Unique to Humans and Other Primates
In the early 2000s, Noam Chomsky and other linguists thought that if there was one thing that belonged specifically to human language, it was recursion, and that this was what distinguished human language from animal communication. As it turns out, this is not the case: a 2020 study proved that rhesus monkeys can do the thing too, and a newly published study shows that crows can also do recursion.
OK, so what’s recursion? It’s the capacity to recognize paired elements in larger sequences – something that has been claimed as one of the key features of human symbolic competence. Consider this example: “The rat the cat chased ran.” Although the phrase is a bit confusing, adult humans easily get that it was the rat that ran and the cat that chased. Recursion is exactly this: pairing the elements “rat” to “ran” and “cat” to “chased”.
Put somewhat more simply: similarly to humans, monkeys and crows can recognize that a structure can contain other structures with meaning. But for decades scientists thought that humans, or at least primates, are the only animals capable of understanding recursion. Yet, following the discovery, about two years ago, that rhesus monkeys can understand the idea of recursion on a par with three- to four-year-old human children (albeit with some extra training), a team has now conducted similar experiments with crows, and they turned out to outdo monkeys in certain respects!
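A tiny illustration of the pairing in the example (mine, not from the study): center-embedded pairs like rat…ran around cat…chased are matched the way nested brackets are, outermost pair first, then recursing inward.

```python
# Tiny recursion sketch (illustrative only): center-embedded pairing, as in
# "The rat the cat chased ran", where rat pairs with ran and cat pairs with
# chased, structurally the same as matching nested brackets.

def pair_center_embedded(nouns, verbs):
    """Recursively pair the outermost noun with the outermost (last) verb."""
    if not nouns:
        return []
    # Outer pair first, then recurse on the embedded clause inside it.
    return [(nouns[0], verbs[-1])] + pair_center_embedded(nouns[1:], verbs[:-1])

print(pair_center_embedded(["rat", "cat"], ["chased", "ran"]))
# [('rat', 'ran'), ('cat', 'chased')]
```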
“Predictive Processing (henceforth PP) is a recent, exciting framework emerging at the crossroads of cognitive science, statistical modeling and philosophy of mind (Friston 2005, 2010). Informed by recent developments in computational neuroscience and Bayesian psychology, it offers a paradigm shifting approach to studying cognition, often being presented as “the first truly unifying account of perception, cognition and action” (Clark 2015, p. 2). Its highly ambitious character is expressed in Jakob Hohwy’s statement that it postulates only one mechanism which has the potential to “explain perception and action and everything mental in between” (Hohwy, 2013, p. 1). The account has already been successfully applied to a rich variety of mental phenomena, but only recently have philosophers and psychologists begun to apply it to one of the more mysterious aspects of mind, namely, consciousness. This special issue assembles some of the leading experts on the predictive processing paradigm and discusses some of its prospects and problems in this regard. In this introduction, we first sketch the explanatory framework and introduce some of the key recurring notions in this context. We then lay out some of the tasks arising from the goal of addressing consciousness with it, distinguishing those pertaining to different aspects (or kinds or concepts) of consciousness. We then provide an overview of the main ideas of the papers.”
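And a toy sketch of the core predictive-processing loop (my own illustration, not from the special issue): a model repeatedly predicts its sensory input and nudges its internal estimate to reduce the precision-weighted prediction error.

```python
# Toy predictive-processing sketch (illustrative only): the agent repeatedly
# predicts a sensory signal and adjusts its internal estimate to reduce the
# precision-weighted prediction error.

sensory_signal = 10.0   # what the world actually delivers
belief = 0.0            # the model's current prediction
precision = 0.2         # how much weight errors get (confidence in the signal)

for step in range(20):
    prediction_error = sensory_signal - belief
    belief += precision * prediction_error   # perception as error minimization
    print(f"step {step:2d}: belief={belief:.3f}, error={prediction_error:.3f}")
```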
People need forms of engagement with modeled worlds. In gaming, you give them a gun, or a car… in technical modeling, the engagement rig is AFR/TGN: https://tangerinefocus.com