Friday, January 17, 2014

Assessing others: Evaluating expertise of humans, computer algorithms

In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an "expert" who was also making predictions.

Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person's predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm -- one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
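As a rough illustration of this setup (a hypothetical sketch, not the researchers' actual code), the Python snippet below simulates an advisor whose predictions are correct with one of the four fixed probabilities used in the study; the "human" photo versus algorithm logo framing changes nothing about this underlying behavior. The trial count and structure here are assumptions.

```python
import random

def simulate_agent(accuracy, n_trials=100, seed=0):
    """Simulate an advisor that calls the asset's direction correctly with a
    fixed probability. Returns a list of booleans (True = correct call).
    Hypothetical sketch; trial count and structure are assumptions."""
    rng = random.Random(seed)
    return [rng.random() < accuracy for _ in range(n_trials)]

# The four accuracy levels used in the study; the "human" vs. algorithm
# framing shown to subjects is independent of this underlying accuracy.
for accuracy in (0.30, 0.40, 0.60, 0.70):
    calls = simulate_agent(accuracy)
    print(f"{accuracy:.0%} agent: {sum(calls)} correct out of {len(calls)} trials")
```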

Subjects' trust in the expertise of the agents, whether "human" or not, was measured by how often the subjects placed bets on the agents' predictions, as well as by how those bets changed over time as the subjects observed more of the agents' predictions and saw how accurate they turned out to be.

This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects' own predictions of the ups and downs of the asset's value.

"We often speculate on what we would do in a similar situation when we are observing others -- what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."

Indeed, the researchers found that subjects increasingly sided with both "human" agents and computer algorithms when the agents' predictions matched their own. Yet this effect was stronger for "human" agents than for algorithms.

This asymmetry -- between the value the subjects placed on (presumably) human agents and on computer algorithms -- was present both when the agents were right and when they were wrong, but it depended on whether or not the agents' predictions matched the subjects'. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future if their predictions had matched the subjects' own. When the agents were wrong, human experts were easily and often "forgiven" for their blunders if the subject had made the same error. But this "benefit of the doubt" vote, as Boorman calls it, did not extend to computer algorithms: when an algorithm made inaccurate predictions, subjects appeared to dismiss the value of its future predictions regardless of whether or not they had agreed with it.

Since the sequence of predictions offered by "human" and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.

A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls "reward learning" and "attribute learning." "Computationally," says Boorman, "these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction."
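One minimal way to make that parallel concrete is a generic prediction-error ("delta rule") update, sketched below. This is only an illustration of the shared computational form Boorman describes, with an assumed learning rate; it is not the specific model fit in the paper.

```python
def delta_rule(estimate, outcome, learning_rate=0.1):
    """Generic prediction-error update: nudge the current estimate toward the
    observed outcome in proportion to the surprise (outcome - estimate)."""
    return estimate + learning_rate * (outcome - estimate)

# Reward learning: update the expected payoff of one's own bets.
expected_reward = 0.5
expected_reward = delta_rule(expected_reward, outcome=1.0)   # the bet paid off

# Attribute learning: update an estimate of another agent's probability of
# being correct, using the same computational form on a different quantity.
estimated_expertise = 0.5
estimated_expertise = delta_rule(estimated_expertise, outcome=0.0)  # agent was wrong

print(expected_reward, estimated_expertise)   # 0.55 and 0.45
```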

Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning -- specifically about the attributes of others (or so-called attribute learning) -- is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.

This self/other distinction shows up in the subjects' brain activity, as measured by fMRI during the task. Reward learning, says Boorman, "has been closely correlated with the firing rate of neurons that release dopamine" -- a neurotransmitter involved in reward-motivated behavior -- and with the brain regions to which those neurons project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about the subjects' own financial reward. During attribute learning, however, another network -- consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporoparietal junction, regions thought to form a critical part of the mentalizing network that allows us to understand the state of mind of others -- also made and updated predictions, but about the expertise of the people and algorithms rather than the subjects' own profit.

The differences in fMRI activity between assessments of human and nonhuman agents were subtler. "The same brain regions were involved in assessing both human and nonhuman agents," says Boorman, "but they were used differently."

"Specifically, two brain regions in the prefrontal cortex -- the lateral orbitofrontal cortex and medial prefrontal cortex -- were used to update subjects' beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a 'belief update signal.'" This update signal was stronger when subjects agreed with the "human" agents than with the algorithm agents and they were correct. It was also stronger when they disagreed with the computer algorithms than when they disagreed with the "human" agents and they were incorrect. This finding shows that these brain regions are active when assigning credit or blame to others.

"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendents, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

