15 Comments

A lovely post. Has relevance theory been applied to rhetoric?

A bit, but not much on the persuasion aspect. I am actually working on that topic now.

I look forward to reading that. Thanks for letting me know.

Extremely useful and ... relevant. Lots of useful info packed into a compact, clear package!

Thanks, I am glad you liked it! You might also enjoy the post on communication in Seinfeld.

I just happened to see this after reading your post; it's perhaps an even better illustration of the principle of relevance:

https://twitter.com/candice_counsel/status/1786810858045108470?t=7JjvDZOwuzkbC82SWBgwXg

Excellent.

It is clearly a technique widely used in comedy. One common comic trope is the hyper-rational agent (e.g. Robot, Spock, Data, Seldon) clumsily not getting the actual meaning of humans' statements. I wrote this in "Optimally Irrational":

"In the movies where a rational robot interacts with humans, these intricacies

of human interactions are used for comic effect with the robot offending its listeners with

candid but untactful statements or with the robot not understanding

humans seemingly failing to communicate what they mean in rational ways. But

this movie trope gets the level of rationality wrong. It is not the robots who are

the most rational. They just do not understand the real games being played by

humans. The seemingly puzzling human behaviours are good answers to the

actual games in which humans are engaged."

Wonderful (and entertaining) post. The most amazing thing to me is the incredible energy efficiency with which we engage in complex strategic thinking; compare Kasparov versus Deep Blue:

1997: Deep Blue vs. Kasparov

https://www.linkedin.com/pulse/1996-deep-blue-vs-kasparov-philippe-delanghe-njzof

Larry Summers recently made a point on a podcast (maybe Persuasion, not sure) that widespread use of LLMs is going to result in a surge of energy demand globally.

Hi Rajiv, I'm very glad you like it. Indeed, I agree with you. The efficiency of the human brain, running on only 20 watts, is amazing. It must benefit from hard-coded processes and processes learned since infancy that are well tuned to navigating social interactions. By extension, one of the limitations of LLMs is that they leverage human interactions to learn. In a sense, they do not bring additional insight or ability to make inferences. It may be much easier to mimic humans when trained on human data than to overtake them.

Great post!

Thanks. One of my favourite topics.

Looking forward to the LLM article

Great post... I've really enjoyed these tangents and the directions you go in.

Thanks for the positive feedback, Jimmy!

This is all well and good, but it seems to me that it takes the quality and correctness inherent to communication a little lightly, almost for granted. You even explicitly mentioned mind reading, one of the biggest scourges on humanity, as a good thing!! :(
