A lovely post. Has relevance theory been applied to rhetoric?
A bit, but not much on the persuasion aspect. I am actually working on that topic now.
I look forward to reading that. Thanks for letting me know.
Extremely useful and ... relevant. Lots of useful info packed into a compact, clear package!
Thanks, I am glad you liked it! You might also enjoy the post on communication in Seinfeld.
Just happened to see this after reading your post; perhaps an even better illustration of the principle of relevance:
https://twitter.com/candice_counsel/status/1786810858045108470?t=7JjvDZOwuzkbC82SWBgwXg
Excellent.
It is clearly a technique widely used in comedy. One common comic trope is the hyper-rational agent (e.g. Robot, Spock, Data, Seldon) clumsily not getting the actual meaning of humans' statements. I wrote this in "Optimally Irrational":
"In the movies where a rational robot interacts with humans, these intricacies
of human interactions are used for comic effect with the robot offending its listeners with
candid but untactful statements or with the robot not understanding
humans seemingly failing to communicate what they mean in rational ways. But
this movie trope gets the level of rationality wrong. It is not the robots who are
the most rational. They just do not understand the real games being played by
humans. The seemingly puzzling human behaviours are good answers to the
actual games in which humans are engaged."
Wonderful (and entertaining) post. The most amazing thing to me is the incredible energy efficiency with which we engage in complex strategic thinking; compare Kasparov versus Deep Blue:
1997: Deep Blue vs. Kasparov
https://www.linkedin.com/pulse/1996-deep-blue-vs-kasparov-philippe-delanghe-njzof
Larry Summers recently made a point on a podcast (maybe Persuasion, not sure) that widespread use of LLMs is going to result in a surge of energy demand globally.
Hi Rajiv, very glad you like it. Indeed, I agree with you. The efficiency of the human brain, running on only about 20 watts, is amazing. It must benefit from hard-coded processes, and from processes learned since infancy, that are well tuned to navigating social interactions. By extension, one of the limitations of LLMs is that they rely on human interactions to learn. In a sense, they do not bring additional insight or the ability to make new inferences. It may be much easier to mimic humans when trained on human data than to overtake them.
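For a rough sense of that gap, here is a minimal back-of-envelope sketch. The ~20 W figure for the brain is the one cited above; the power draw for Deep Blue (a multi-node RS/6000 SP supercomputer) is an assumed order of magnitude for illustration, not a documented figure:

```python
# Back-of-envelope comparison of power budgets for a single chess game.
# BRAIN_WATTS is the widely cited ~20 W resting power of the human brain;
# DEEP_BLUE_WATTS is an ASSUMED order-of-magnitude figure, not a spec.

BRAIN_WATTS = 20          # approximate power draw of the human brain
DEEP_BLUE_WATTS = 10_000  # assumed kW-scale draw for a 1990s supercomputer
GAME_HOURS = 3            # rough length of a classical chess game

brain_kwh = BRAIN_WATTS * GAME_HOURS / 1000
deep_blue_kwh = DEEP_BLUE_WATTS * GAME_HOURS / 1000

print(f"Human brain: ~{brain_kwh:.2f} kWh per game")
print(f"Deep Blue:   ~{deep_blue_kwh:.1f} kWh per game")
print(f"Deep Blue uses ~{DEEP_BLUE_WATTS // BRAIN_WATTS}x more power")
```

Under these assumptions the machine burns roughly 500 times more power than the player it is facing, which is the efficiency gap the comparison points at.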
Great post!
Thanks. One of my favourite topics.
Looking forward to the LLM article
Great post... I've really enjoyed these tangents and directions you go.
Thanks for the positive feedback Jimmy!
This is all well and good, but it seems to me that it takes the quality and correctness inherent in communication a little lightly, for granted, etc. You even explicitly mentioned mind reading, one of the biggest scourges on humanity, as a good thing!! :(