Learning to Make Multiple Choices
in Problem Solving
Marsha C. Lovett
Carnegie Mellon University
ACT-R Workshop July 1998
Choice in the Building Sticks Task
• The problem space of the Building Sticks Task (BST) is quite "branchy", suggesting many opportunities for choice.
• Because of the way solvers represent this task, there is one critical choice point at the beginning of each problem. The choice taken initiates a particular problem-solving plan.
• ACT-R's conflict resolution models this choice (competing productions = alternative strategies).
• The learning mechanism for conflict-resolution parameters models the way solvers learn to make this choice.
• Each time a problem is solved (or not solved) by a particular strategy, the corresponding productions' parameters are updated, influencing that strategy's future chance of being chosen.
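The learning mechanism can be sketched as follows. This is a minimal illustration, not the actual BST model: it assumes ACT-R's standard conflict-resolution quantities, where a production's probability of success P is estimated from accumulated success/failure counts and selection maximizes expected gain E = PG - C (G = goal value, C = cost) plus noise; the prior counts, goal value, and noise level here are made-up values.

```python
# Minimal sketch of ACT-R-style conflict resolution with parameter
# learning (illustrative; priors, G, and noise are assumed values).
import random

G = 20.0  # goal value (assumed)

class Production:
    def __init__(self, name, cost, prior_s=1.0, prior_f=1.0):
        self.name = name
        self.cost = cost          # estimated cost C
        self.successes = prior_s  # prior success count
        self.failures = prior_f   # prior failure count

    @property
    def p(self):
        # Probability of success estimated from experience
        return self.successes / (self.successes + self.failures)

    @property
    def expected_gain(self):
        # E = P*G - C, the quantity conflict resolution maximizes
        return self.p * G - self.cost

def choose(productions, noise=0.5):
    # Pick the production with the highest (noisy) expected gain
    return max(productions,
               key=lambda pr: pr.expected_gain + random.gauss(0, noise))

def update(production, solved):
    # After each problem, credit or blame the strategy that was used
    if solved:
        production.successes += 1
    else:
        production.failures += 1
```

With repeated updates, a strategy that succeeds more often acquires a higher P, hence a higher expected gain, and so is chosen more often on later problems.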
• But, what happens when there is more than one critical choice per problem?
- How is credit/blame assigned by human problem solvers?
- How well does ACT-R's learning mechanism handle this more complex case?
- Are all choices treated the same or is there a difference between early/late choices?
Design
(Design figure: conditions labeled Same vs. Different)
Procedure
• Instructions: Task features defined; all 4 strategies demonstrated.
• Pre-test phase: Computer presents problem at a particular state; subject chooses single next step. ~13 problems with no feedback.
• Solve phase: Computer presents problem at initial state; subject solves problem to completion. Four blocks of 16 problems each.
• Post-test phase: Same as pre-test...
Results
Modeling the Data
- Flat
- Branch
- Collapsed Branch
• Different task representations in the model lead to different patterns of "transfer" of probability-of-success and cost information.
Conclusions
• People learn to choose among strategies, even when there are multiple choices per problem.
- In this experiment, first choices were learned better than second choices.
- N.B. First choices were also encountered more often; further experiments vary frequency and position.
• ACT-R's conflict-resolution parameter learning can capture these data.
- ACT-R's "blanket" approach to credit/blame assignment passes the test: crediting/blaming all productions on the path to success/failure produces learning similar to solvers'.
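The "blanket" scheme above can be sketched as: every production that fired on the way to the outcome has its success or failure count incremented, regardless of whether it was an early or a late choice. This is an illustrative sketch with hypothetical production names, not the actual model code.

```python
# Sketch of "blanket" credit/blame assignment (illustrative):
# every production on the solution path is credited or blamed equally.
def assign_credit(fired_productions, solved, counts):
    # counts maps production name -> [successes, failures]
    for name in fired_productions:
        counts.setdefault(name, [0, 0])
        if solved:
            counts[name][0] += 1
        else:
            counts[name][1] += 1
    return counts

counts = {}
# A problem solved via an early choice and a late choice
# (production names are hypothetical):
assign_credit(["choose-strategy-X", "choose-substep-A"],
              solved=True, counts=counts)
# A failure through the same early choice but a different late choice:
assign_credit(["choose-strategy-X", "choose-substep-B"],
              solved=False, counts=counts)
```

Note that the early choice accumulates both a credit and a blame here, which is exactly how the blanket scheme lets outcome information reach every choice point on the path.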
• This kind of work sheds light on how people represent more complex problem spaces.
- ACT-R compartmentalizes success/failure/cost knowledge within each production, so the "transfer" of such knowledge from one choice point to another provides information on how solvers have represented the task.