16 Comments
Tom Dickins

A lovely post. Has relevance theory been applied to rhetoric?

Lionel Page

A bit, but not much on the persuasion aspect. I am actually working on that topic now.

Tom Dickins

I look forward to reading that. Thanks for letting me know.

Andries

Extremely useful and ... relevant. Lots of useful info packed into a compact, clear package!

Lionel Page

Thanks, I am glad you liked it! You might also enjoy the post on communication in Seinfeld.

Rajiv Sethi

Just happened to see this after reading your post. An even better illustration of the principle of relevance, perhaps:

https://twitter.com/candice_counsel/status/1786810858045108470?t=7JjvDZOwuzkbC82SWBgwXg

Lionel Page

Excellent.

It is clearly a technique widely used in comedy. One common comic trope is the hyper-rational agent (e.g. Robot, Spock, Data, Seldon) clumsily not getting the actual meaning of humans' statements. I wrote this in "Optimally Irrational":

"In the movies where a rational robot interacts with humans, these intricacies of human interactions are used for comic effect, with the robot offending its listeners with candid but untactful statements, or with the robot not understanding humans seemingly failing to communicate what they mean in rational ways. But this movie trope gets the level of rationality wrong. It is not the robots who are the most rational. They just do not understand the real games being played by humans. The seemingly puzzling human behaviours are good answers to the actual games in which humans are engaged."

Rajiv Sethi

Wonderful (and entertaining) post. The most amazing thing to me is the incredible energy efficiency with which we engage in complex strategic thinking; compare Kasparov versus Deep Blue:

1997 : Deep Blue vs. Kasparov

https://www.linkedin.com/pulse/1996-deep-blue-vs-kasparov-philippe-delanghe-njzof

Larry Summers recently made a point on a podcast (maybe Persuasion, not sure) that widespread use of LLMs is going to result in a surge of energy demand globally.

Lionel Page

Hi Rajiv, very glad you like it. Indeed, I agree with you. The efficiency of the human brain, running on only 20 watts, is amazing. It must benefit from hard-coded processes, and processes learned since infancy, that are well tuned to navigate social interactions. By extension, one limitation of LLMs is that they learn by leveraging human interactions. In a sense, they do not bring additional insight or ability to make inferences. It may be much easier to mimic humans when trained on human data than to overtake them.

Alejandro Lopez-Lira

Great post!

Lionel Page

Thanks. One of my favourite topics.

Alejandro Lopez-Lira

Looking forward to the LLM article

Jimmy Becker

Great post... I've really enjoyed these tangents and directions you go.

Lionel Page

Thanks for the positive feedback Jimmy!

William of Hammock

I realized I have read this post before. Excellent on the second read-through as well!

anzabannanna

This is all well and good, but it seems to me that it takes the quality & correctness inherent to communication "a little lightly", for granted, etc. You even explicitly mentioned mind reading, one of the biggest scourges on humanity, as a good thing!! :(