Wow that was fast

Wow, already, like the movie Her. Now it’s happening:

https://garymarcus.substack.com/p/dont-go-breaking-my-heart

Perhaps as a side effect of the Bing/Sydney fiasco, one of the leading chatbots just changed course radically midstream, not for a single user, but for all users. (In this instance, a popular feature for erotic role play was removed.) To someone who doesn’t use the system, that may not seem like a big deal, but some users get quite attached. Sex and love, even simulated, are powerful urges; some people are apparently in genuine emotional pain as a result of the change.

Vice reports:

Replika is a tool for many people who use it to support their mental health, and many people value it as an outlet for romantic intimacy. The private, judgment-free conversations are a way for many users to experiment with connection, and overcome depression, anxiety, and PTSD that affect them outside of the app. 

For some people, maybe the only thing worse than a deranged, gaslighting chatbot is a fickle chatbot that abandons them. 

§

As the child of a psychotherapist who has followed clinical psychology for three decades, I know how vulnerable some people can be. I am genuinely concerned. This is a moment we should learn from. Hopefully nothing bad happens this time; but we need to reflect on what kind of society we are building.

What we are seeing is a disconcerting combination of facts:

  • More and more people are using chatbots
  • Few people understand how they work; many anthropomorphize these chatbots, attributing to them real intelligence and emotion. Kevin Roose writes about AI for a living and was genuinely concerned about what Sydney was saying. Naive users may take these bots even more seriously.
  • Larger language models seem more and more human-like (but the emotions they present are no more real). Whatever we see now is likely to escalate.
  • Some people are building real attachments to those bots
  • In some cases, those building the bots actively cultivate those attachments, e.g., by feigning romantic and/or sexual interest or by dotting their messages with “friendly” emoticons.
  • Changes in those bots could leave many people in a vulnerable place.
  • There is essentially zero regulation on what these chatbots can say or do or how they can change over time, or on how they might treat their users.
  • Taking on a user in a chatbot like Replika is a long-term commitment. But no known technology can reliably align a chatbot in a persistent way to a human’s emotional needs.
https://garymarcus.substack.com/p/dont-go-breaking-my-heart

A clip from Her (2013):

A friend said:

I feel kind of disgusted about the whole thing. First time I saw it I felt strongly disgusted and still this clip makes my stomach churn.

Yeah. Exactly. And that’s why it’s a great movie. It did that to you on purpose, successfully.

There’s something in this about the way we watch movies.

In Her, the ex-wife (played by Rooney Mara) of the OS(AI)-dating Joaquin Phoenix character well and truly expresses how revolting that is. See the clip.

Her doesn’t propose that we should like the idea of romantic relationships with an “AI”.

The movie reminds us, I think, what it means to be a viewer: to understand that movies have intent, and that as a viewer you decipher that intent. If it’s a creative work like this movie, you evaluate it that way.

You do the same with political rhetoric: you recognize that it’s designed with specific intent. Your “job” is to decipher that intent and evaluate it. This is precisely what people routinely fail to do; many have no idea that such a thing is even possible, or that they could do it themselves.

Do I think there is no use for such bots? No. Do I think people won’t bond with them? No. They will (and already do). People bond with their cars; they even name them. I’m pretty sure I’ve seen people bond, in some way, with their toasters. But this kind of “hey, my toaster is better than yours, my toaster makes me better…” bonding has limits and quickly becomes absurd. For some, anyway. Bonding and identifying with a chatbot will be much harder for many to break away from. A lot of people are going to get addicted to these bots and identify with them.

But I digress.

From those designing chat tech:

The Difference Between Speaking and Thinking

— The human brain could explain why AI programs are so good at writing grammatically superb nonsense.

https://www.theatlantic.com/technology/archive/2023/01/chatgpt-ai-language-human-computer-grammar-logic/672902/

“And no matter how compelling their language use is, there’s still a healthy debate over just how much programs such as ChatGPT actually “understand” about the world by simply being fed data from books and Wikipedia entries.”

“Meaning is not given,” says Roxana Girju, a computational linguist at the University of Illinois at Urbana-Champaign. “Meaning is negotiated in our interactions, discussions, not only with other people but also with the world. It’s something that we reach at in the process of engaging through language.” 

“If that’s right, building a truly intelligent machine would require a different way of combining language and thought—not just layering different algorithms but designing a program that might, for instance, learn language and how to navigate social relationships at the same time.”

“In a new paper, cognitive scientists and linguists address this dissonance by separating communication via language from the act of thinking: Capacity for one does not imply the other. At a moment when pundits are fixated on the potential for generative AI to disrupt every aspect of how we live and work, their argument should force a reevaluation of the limits and complexities of artificial and human intelligence alike.”

A comment by Martin Ciupa on LinkedIn:

“Language is symbolic, ultimately not precisely the same as what is being “thought”, neither in terms of precision or the sentience evoked. I can say “Mother” in many contexts but to understand it you need to have been “mothered”. Text without understanding is not enough. Though some models are useful they are all wrong ultimately. Or if you prefer, “the map is not the territory” (an LLM is a lossy map).”

https://www.linkedin.com/feed/update/urn:li:activity:7034147362184777729
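
The “lossy map” point can be made concrete. Here is a minimal sketch (my own toy example, not from Ciupa’s comment): a bag-of-words encoding is a far cruder map than an LLM’s internal representation, but it makes the lossiness easy to see, and the same pigeonhole argument applies to any fixed-size encoding of unbounded text.

```python
# Toy illustration of "the map is not the territory": a representation
# that discards structure cannot distinguish meanings that depend on it.
# (Bag-of-words is much cruder than an LLM's internal representation;
# it is used here only to make the information loss visible.)
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Map a sentence to an unordered multiset of words (a lossy 'map')."""
    return Counter(sentence.lower().split())

a = "the dog bit the man"
b = "the man bit the dog"

# Two sentences with opposite meanings land on the same representation,
# so no reader of the "map" can recover which event actually happened.
assert bag_of_words(a) == bag_of_words(b)
print(bag_of_words(a))  # Counter({'the': 2, 'dog': 1, 'bit': 1, 'man': 1})
```

Any map that compresses loses something; the question is whether what an LLM’s map loses is exactly the part we call understanding.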

Dissociating language and thought in large language models: a cognitive perspective

“Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text. This achievement has led to speculation that these networks are — or will soon become — “thinking machines”, capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: ‘formal linguistic competence’, which includes knowledge of rules and patterns of a given language, and ‘functional linguistic competence’, a host of cognitive abilities required for language understanding and use in the real world. Drawing on evidence from cognitive neuroscience, we show that formal competence in humans relies on specialized language processing mechanisms, whereas functional competence recruits multiple extralinguistic capacities that comprise human thought, such as formal reasoning, world knowledge, situation modeling, and social cognition. In line with this distinction, LLMs show impressive (although imperfect) performance on tasks requiring formal linguistic competence, but fail on many tests requiring functional competence. Based on this evidence, we argue that (1) contemporary LLMs should be taken seriously as models of formal linguistic skills; (2) models that master real-life language use would need to incorporate or develop not only a core language module, but also multiple non-language-specific cognitive capacities required for modeling thought. Overall, a distinction between formal and functional linguistic competence helps clarify the discourse surrounding LLMs’ potential and provides a path toward building models that understand and use language in human-like ways.”

https://arxiv.org/abs/2301.06627
