Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation

dc.contributor.author Cullen, Ralph H. en_US
dc.date.accessioned 2012-02-17T19:14:52Z
dc.date.accessioned 2015-07-13T10:56:42Z
dc.date.available 2012-02-17T19:14:52Z
dc.date.available 2015-07-13T10:56:42Z
dc.date.issued 2011-08-18 en_US
dc.identifier.uri http://hdl.handle.net/1853/42721
dc.identifier.uri http://evidence.thinkportal.org/handle/1853/42721
dc.description.abstract Multiple-task environments are pervasive in a variety of workplaces; many jobs require that several concurrent, time-sensitive tasks be performed in one task space. One concern in these environments is attention allocation: to perform well, the operator must know when and where to look; otherwise, he or she will not be aware of the status of each task or be able to complete them all. To aid these jobs, automation has been developed to support attention allocation: auditory and visual alerts draw attention to where the system determines it is needed. Imperfect automation, however, may complicate the aid by introducing misses and false alarms to which the operator must also attend. Researchers studying these environments and automation's role within them have focused on a variety of topics, including different types of automation (alerts, decision-aid systems, etc.), levels of reliability (0-100% reliable), what automation supports (from attention allocation to situation awareness to performance), and how automation affects multiple-task environments (from two tasks to many).

Because attention had not been directly studied in relation to imperfect automation reliability in multiple-task environments, I analyzed the effects of different levels of automation reliability on visual attention allocation, and how removal of that automation changed those effects. To study this, I helped develop the Simultaneous Task Environment Platform (STEP), a program for studying and testing participants' behavior in multiple-task environments. STEP let me vary the frequency and criticality (number of points gained or lost) of the different tasks to disambiguate how automation was affecting the participants. In the study, participants were trained on all four tasks of the STEP system, had the automation explained to them, and were then asked to gain as many points per trial as possible. There were three between-subjects conditions: a system in which approximately 70% of the automated alerts were reliable, one in which approximately 90% were reliable, and one in which participants received no automated aid at all. The automation was designed to support visual attention allocation. Participants interacted with the system and automation for twenty-four trials, divided into six blocks over two days, at which point they transferred to a system with no automation at all.

To understand exactly how participants interacted with the system, I measured the number of times they accessed each task (attention allocation, which also serves as a measure of workload) and the number of points they scored (task performance). Mixed ANOVAs for these two measures, as well as for a derived measure of efficiency (points scored per window opened), were conducted crossing automation condition with block (to measure how participants changed with experience) and task (to measure how certain tasks' attributes affected the way they were acted upon); an illustrative sketch of this kind of analysis follows this record.

Overall, the automation provided a benefit in the form of reduced workload and improved task performance: participants in the automated conditions opened fewer windows and performed better, which also meant higher efficiency for those conditions. Experience affected the conditions differently. Those in the no-automation condition increased their score but also the number of windows they opened, so their efficiency stayed the same. The 70%-reliable condition was similar, with a minor point increase and no significant decrease in windows opened, resulting in no significant efficiency gain. The 90%-reliable condition gained little in score but opened fewer windows by the end of the experiment, becoming more efficient. The frequency and criticality of tasks affected both the windows opened and the points scored across conditions: participants in the two automated conditions opened fewer windows and scored relatively more points on tasks that were worth many points but did not appear often. This increased their efficiency on those tasks, but also caused them to suffer more when the automation was taken away. In the transfer trials, participants in the automated conditions experienced both a workload increase and a performance decrease, centered on the two high-criticality/low-frequency tasks; the other two tasks showed only small or no change between normal and transfer trials.

These results show that automation at different levels of reliability affects the behavior of the system's operator differently depending on the attributes of the tasks the operator must oversee. Tasks that occur often and matter only in aggregate are not aided by automation as much as tasks that occur rarely and are critical every time they appear. When automation fails, however, the tasks that were aided the most suffer the most, whereas those that received little aid suffer little. Designers of automated systems should consider the types of tasks to be automated and their attributes, as well as the effects of increasing or decreasing automation reliability, when designing automation to support system operators. en_US
dc.publisher Georgia Institute of Technology en_US
dc.subject Human factors en_US
dc.subject Multiple-task environments en_US
dc.subject Automation en_US
dc.subject Automation reliability en_US
dc.subject.lcsh Human information processing
dc.subject.lcsh Attention
dc.title Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation en_US
dc.type Thesis en_US
dc.description.degree MS en_US
dc.contributor.department Psychology en_US
dc.description.advisor Committee Chair: Rogers, Wendy; Committee Member: Durso, Francis; Committee Member: Fisk, Arthur en_US
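
Illustrative analysis sketch. As a companion to the analysis described in the abstract, below is a minimal, hypothetical Python sketch of the derived efficiency measure (points scored per window opened) and a mixed ANOVA crossing automation condition (between-subjects) with block (within-subjects). It runs on simulated data; the column names, participant counts, score distributions, and the use of the pingouin library are all assumptions for illustration, not the thesis's actual materials or analysis code.

import numpy as np
import pandas as pd
import pingouin as pg  # provides mixed_anova: one between and one within factor

rng = np.random.default_rng(0)
conditions = ["no-automation", "70%-reliable", "90%-reliable"]

# Simulated data: 30 participants (10 per condition), six blocks each.
# All names and distributions here are hypothetical placeholders.
rows = []
for subject in range(30):
    condition = conditions[subject % 3]
    for block in range(1, 7):
        windows = rng.poisson(40)   # windows opened (attention/workload proxy)
        points = rng.poisson(100)   # points scored (task performance)
        rows.append((subject, condition, block, points, windows))
df = pd.DataFrame(rows, columns=["subject", "condition", "block",
                                 "points", "windows"])

# Derived measure from the abstract: points scored per window opened.
df["efficiency"] = df["points"] / df["windows"]

# Mixed ANOVA: automation condition (between) crossed with block (within).
# The thesis also crossed task; pingouin's mixed_anova accepts a single
# within factor, so this simplified sketch analyzes condition x block only.
aov = pg.mixed_anova(data=df, dv="efficiency", within="block",
                     between="condition", subject="subject")
print(aov.round(3))

With real data, one would run the same call once per dependent measure (windows opened, points scored, efficiency) and read off the condition, block, and interaction effects the abstract reports.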

