Thesis title: Exploring Human Planning Strategies in the ThinkAhead Task: Combining Computational, Kinematic, and Eye Movement Approaches.
Planning is a ubiquitous process in everyday life, but its covert nature makes it
difficult to study, and consequently, various paradigms and methods have been
proposed to investigate it.
In this thesis, we investigated human planning strategies in a novel task inspired
by the Travelling Salesman Problem (TSP), which we called the ThinkAhead task,
utilizing a combination of computational, behavioral, kinematic and eye-tracking
measures. The task required participants to solve a sequence of problems by
finding a path that connects a set of nodes placed in a 2D grid-like graph without
passing twice over the same node (simplicity condition). The task’s multistep nature,
combined with the simplicity requirement and a time limit, made it challenging
enough to require planning. We collected data for this task in two different setups:
first online, where participants completed the task on their mobile devices, and then
in person, where we also recorded gaze and mouse movements.
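To make the path constraint concrete, a minimal sketch of the simplicity condition is given below; the grid-coordinate representation and the function name are illustrative assumptions, not the actual task implementation.

```python
# Minimal sketch, assuming nodes are represented as (x, y) grid coordinates.
# A candidate path satisfies the simplicity condition if no node is visited twice
# and consecutive nodes are grid neighbours.

def is_simple_path(path):
    """Return True if `path` never revisits a node and only moves between
    adjacent grid cells (Manhattan distance 1)."""
    if len(path) != len(set(path)):            # a node would be passed twice
        return False
    return all(
        abs(x1 - x2) + abs(y1 - y2) == 1       # consecutive cells must be adjacent
        for (x1, y1), (x2, y2) in zip(path, path[1:])
    )

# Example: a short valid path and a path that revisits its start node
print(is_simple_path([(0, 0), (0, 1), (1, 1)]))   # True
print(is_simple_path([(0, 0), (0, 1), (0, 0)]))   # False
```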
We then sought to characterize the participants’ planning strategies from different
perspectives.
First, we employed a computational approach, and compared the behavior of
participants with that of artificial agents whose planning depth was manipulated.
We then tested whether the use of cognitive resources in this modeling paradigm
was better explained by a fixed planning depth or a flexible one that adapted to
the task demands. The results showed that the latter adaptive strategy provided a
better fit to the participants’ behavior.
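The contrast between fixed and adaptive planning depth can be illustrated with a simple look-ahead agent; this is a sketch under assumed interfaces (`successors`, `value`, and the depth rule are hypothetical), not the actual model fitted in the thesis.

```python
# Illustrative sketch of depth-limited look-ahead, not the thesis' fitted model.
# `successors(state)` and `value(state)` are assumed, hypothetical interfaces.

def plan_value(state, depth, successors, value):
    """Best value reachable from `state` within `depth` moves."""
    next_states = successors(state)
    if depth == 0 or not next_states:
        return value(state)
    return max(plan_value(s, depth - 1, successors, value) for s in next_states)

def choose_action(state, successors, value, depth_fn):
    """Pick the successor with the best look-ahead value.
    `depth_fn(state)` returns the planning depth: a constant for a fixed-depth
    agent, or a state-dependent value for an adaptive-depth agent."""
    depth = depth_fn(state)
    return max(successors(state),
               key=lambda s: plan_value(s, depth - 1, successors, value))

# Fixed depth:    choose_action(s, successors, value, depth_fn=lambda s: 3)
# Adaptive depth: choose_action(s, successors, value, depth_fn=estimate_demand)
# where `estimate_demand` is a hypothetical function of task demands.
```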
Secondly, we looked for signatures of planning in the kinematics of mouse
trajectories and gaze behavior. We focused on instances of fluid movements, where
no pauses occurred, and observed an alternating pattern in eye-hand coordination.
This pattern involved the gaze fixating a target, followed by a reaching movement
towards the fixation coordinates. Within this descriptive paradigm of motor plan
sequences, we searched for evidence of coarticulation, generally defined as a
phenomenon in which the execution of subtasks is influenced by the overall task:
for example, sub-movements differ when they are part of the same motor plan rather
than executed independently.
To test this formally, we trained a linear classifier to predict the next action from:
(1) segments of mouse trajectories, and (2) gaze position on the screen. In both cases,
albeit with different performance, the classifier’s accuracy was significantly higher
than chance level, which means that current actions carry information about future
ones. This coarticulation provides evidence for planning in this task. Additionally,
we tested whether gaze behavior during a fixation could predict the direction of
the next fixation above chance level and beyond the predictions of models based
purely on visual information.
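The decoding analysis can be sketched as follows; the feature construction and variable names are assumptions for illustration (with synthetic placeholder data), not the exact pipeline used in the thesis.

```python
# Sketch of a next-action decoding analysis with a linear classifier, using
# synthetic placeholder data in place of real trajectory/gaze features.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))      # stand-in for features of a movement/gaze segment
y = rng.integers(0, 4, size=500)    # stand-in for the next action (4 directions)

clf_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
print(f"classifier accuracy: {clf_acc:.2f}  vs. chance baseline: {baseline:.2f}")
```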
Finally, we studied participants’ commitment to their plans and the possible
determinants of backtracking. We found that backtracking was, in most cases,
performed only after visual feedback that the task had become unsolvable (for
example, because the map had been bisected and rewards remained to be collected
on both of the resulting disconnected components). This suggests that participants
did not replan continually (or highly frequently), but rather relied upon perceptual
error signals. We then proposed a task-specific taxonomy that might help differentiate
the signals that triggered backtracking events.
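As an illustration of the kind of unsolvability signal mentioned above, the sketch below checks whether all remaining rewards are still reachable from the current position once visited nodes are removed; the data representation is a hypothetical assumption, not the task’s actual code.

```python
# Hypothetical sketch: detect the 'bisection' situation in which some rewards
# are no longer reachable because the visited path split the free grid in two.
from collections import deque

def rewards_still_reachable(free_nodes, rewards, current):
    """Breadth-first search over unvisited grid cells starting from `current`;
    returns False if any remaining reward lies in a disconnected component."""
    seen, frontier = {current}, deque([current])
    while frontier:
        x, y = frontier.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in free_nodes and nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return all(r in seen for r in rewards)
```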