This blog post includes the original Twitter thread introducing tomsup on Twitter.
Have you always wanted to understand how models of Theory of Mind (ToM) could actually work during social interactions? And have you always wanted to explore how people would adapt to agents employing different levels of ToM? Read further! 1/🧵 pic.twitter.com/cqQIkWi7h2
— Kenneth Enevoldsen (@KCEnevoldsen) March 12, 2021
In our recent pre-print (https://t.co/MoFwJHlclx), we (@KCEnevoldsen, @WaadePeter, Arndis Simonsen and @fusaroli) introduce an easy-to-use Python package, tomsup 👍 (https://t.co/HGpFZaLd0V). 2/🧵
— Peter Thestrup Waade (@WaadePeter) March 12, 2021
Tomsup can simulate agents with a recursive theory of mind (k-ToM models), both to run agent-based simulations (exploring the implications of the models) and to dynamically generate experimental stimuli (agents playing against experimental participants). 3/🧵
— Kenneth Enevoldsen (@KCEnevoldsen) March 12, 2021
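To give a sense of what an agent-based simulation looks like in practice, here is a minimal sketch of a tournament. The names used (`ts.PayoffMatrix`, `ts.create_agents`, `group.set_env`, `group.compete`) follow the tomsup tutorials, but the exact signatures may differ between versions, so read it as a sketch rather than copy-paste code.

```python
# Minimal agent-based tournament sketch; the API shown follows the tomsup
# tutorials, but exact argument names may differ between versions.
import tomsup as ts

# Competitive matching-pennies payoff matrix (one of tomsup's built-in games)
penny = ts.PayoffMatrix(name="penny_competitive")

# A group of agents: a random-bias agent, a Q-learner, and 1- and 2-ToM agents
group = ts.create_agents(
    agents=["RB", "QL", "1-TOM", "2-TOM"],
    start_params=[{"bias": 0.7}, {"learning_rate": 0.5}, {}, {}],
)
group.set_env(env="round_robin")  # every agent plays every other agent

# Run the tournament: 4 simulations of 30 rounds per pairing
results = group.compete(p_matrix=penny, n_rounds=30, n_sim=4)
print(results.head())
```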
k-ToM agents simulate their opponent’s perspective in order to infer the opponent’s beliefs about them. They do this recursively, with the sophistication level k denoting how many recursions to perform; the opponent’s level must also be inferred. 4/🧵 pic.twitter.com/zs3H7Rmzbo
— Peter Thestrup Waade (@WaadePeter) March 12, 2021
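A toy illustration of the recursion (not the actual model, which also tracks uncertainty and infers the opponent's level rather than assuming it is exactly k-1): a k-ToM agent predicts its opponent by simulating a (k-1)-ToM agent in the opponent's place, bottoming out at a 0-ToM agent that simply tracks choice frequencies.

```python
# Conceptual sketch of the recursion (not tomsup's actual implementation):
# a k-ToM agent predicts its opponent by simulating a (k-1)-ToM opponent,
# bottoming out at a 0-ToM agent that just tracks choice frequencies.

def predict_opponent(k, my_history, opponent_history):
    """Return P(opponent plays 1 next round) under level-k reasoning."""
    if k == 0:
        # 0-ToM: treat the opponent as a biased coin and track its frequency
        if not opponent_history:
            return 0.5
        return sum(opponent_history) / len(opponent_history)
    # k-ToM: put yourself in the opponent's shoes - they see the histories
    # swapped and (we assume here) reason at level k-1 about *us*.
    p_they_think_i_play_1 = predict_opponent(k - 1, opponent_history, my_history)
    # Assuming the opponent is the mismatcher in matching pennies, they are
    # more likely to play the option they expect us *not* to play.
    return 1.0 - p_they_think_i_play_1

# Example: after three rounds, what does a 2-ToM agent expect?
print(predict_opponent(2, my_history=[1, 0, 1], opponent_history=[0, 0, 1]))
```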
k-ToM models have proven useful to assess theory of mind in human and non-human primates (https://t.co/fFFJLMfSc4), atypicalities of ToM in autism (https://t.co/IASQS4Ldvd), and to figure out how many levels of recursion are useful in practice (https://t.co/ecHlnOxlf7). 5/🧵
— Kenneth Enevoldsen (@KCEnevoldsen) March 12, 2021
k-ToM agents use a variational Bayesian Laplace approximation to estimate their opponents’ model parameters. With recursive agents, the model gets a bit complicated – here’s the DAG for the full model with k>2 pic.twitter.com/aWF1QeBxIk
— Peter Thestrup Waade (@WaadePeter) March 12, 2021
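To get an intuition for what such an approximation does in the simplest case, here is a toy Gaussian (Laplace-style) update for a 0-ToM-like learner tracking a single bias parameter. The full k-ToM model is considerably richer (recursion over levels, inference over k and over the opponent's parameters), so this is only an illustration of the kind of update involved, not tomsup's implementation.

```python
# Toy sketch: track the opponent's bias (log-odds of choosing 1) with a
# Gaussian belief, updated each trial via a Laplace-style approximation.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_belief(mu, sigma2, choice, volatility=0.5):
    """One trial of an approximate Gaussian update on the opponent's log-odds.

    mu, sigma2 : current mean/variance of the belief about the log-odds
    choice     : observed opponent choice (0 or 1)
    volatility : how much the hidden bias is allowed to drift between trials
    """
    sigma2 = sigma2 + volatility               # prediction step: belief diffuses
    p = sigmoid(mu)                            # predicted P(choice == 1)
    precision = 1.0 / sigma2 + p * (1.0 - p)   # posterior precision (Laplace)
    sigma2_new = 1.0 / precision
    mu_new = mu + sigma2_new * (choice - p)    # move the mean toward the data
    return mu_new, sigma2_new

# Watching an opponent that mostly plays 1:
mu, sigma2 = 0.0, 1.0
for c in [1, 1, 0, 1, 1, 1]:
    mu, sigma2 = update_belief(mu, sigma2, c)
print(sigmoid(mu))  # estimated probability that the opponent plays 1
```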
Using tomsup we can easily show that k-ToM agents beat non-ToM models (e.g. reinforcement learning and heuristic models). This is because k-ToM agents focus on predicting the other’s choices and then tricking them! To beat a k-ToM agent you need a higher level of recursion. 7/🧵 pic.twitter.com/wgpwYVWMH5
— Kenneth Enevoldsen (@KCEnevoldsen) March 12, 2021
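Continuing the tournament sketch above, the returned results can be summarised with ordinary pandas operations to see who out-earns whom. The column names used here (`agent0`, `agent1`, `payoff_agent0`, `payoff_agent1`) are taken from the tutorials and may differ between versions.

```python
# Summarise average payoffs per pairing from the tournament results above.
# Column names are assumptions based on the tomsup tutorials.
summary = (
    results
    .groupby(["agent0", "agent1"])[["payoff_agent0", "payoff_agent1"]]
    .mean()
)
print(summary)  # e.g. 2-TOM should on average out-earn 1-TOM, QL and RB
```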
You can find a tutorial to run these simulations here: https://t.co/1XHoOjlPrS. You can also assess how your participants will fare against different models, by dynamically simulating the agents during an experiment. See a tutorial here: https://t.co/xqn8qn19wJ 8/🧵
— Peter Thestrup Waade (@WaadePeter) March 12, 2021
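For the second use case, here is a rough sketch of a k-ToM agent serving as a live opponent during an experiment. Again this assumes the tutorial-style API (`ts.TOM` and the per-round `.compete` call with `op_choice`), and the random choice below is only a stand-in for the participant's actual response.

```python
# Rough sketch: a 2-ToM agent as a live opponent in an experiment.
# ts.TOM and the per-round .compete(...) call follow the tomsup tutorials;
# exact argument names may differ between versions.
import random
import tomsup as ts

penny = ts.PayoffMatrix(name="penny_competitive")
opponent = ts.TOM(level=2)

participant_choice = None  # nothing observed before the first round
for trial in range(10):
    # The agent chooses given the payoff matrix, its role in the matrix
    # (agent=0), and the participant's choice from the previous round.
    agent_choice = opponent.compete(p_matrix=penny, agent=0,
                                    op_choice=participant_choice)
    # In a real experiment you would display the outcome and record the
    # participant's next response here; a random choice stands in for it.
    participant_choice = random.randint(0, 1)
```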
k-ToM models were formalized and developed by @MarieDevaine and Daunizeau (2017)
— Kenneth Enevoldsen (@KCEnevoldsen) March 12, 2021
The code is available if you want to see how the package works - or to improve on it!
🖥️ GitHub: https://t.co/73HDISsd2S
📄 Preprint: https://t.co/dE6jEJHmYH
👩🏫 Tutorials: https://t.co/GdOb6AVYc6