Conference Session Details

Saturday, April 16, 2011

08:30 - 09:00
 
Registration and Welcome (Coffee and Light Breakfast Items):
Location: Lincoln Room
09:00 - 10:00
 
Invited Lecture: Dr. Robert Pennock: AI to EI: Modeling the Evolution of Intelligence from the Bottom Up
Location: Lincoln Room
10:10 - 11:30
 
Attention & Perception:
Location: Willy
Session Chair: Andrew Hendrickson (Indiana University, Bloomington)
Effects of Auditory Entrainment on Visual Attention
Authors: Jared Miller (University of Notre Dame), Laura Carlson (University of Notre Dame), Devin McAuley (Michigan State University)
Abstract: Auditory and visual attention to rhythmic stimuli can be thought of as involving periods of maximal and minimal attention such that the distribution of attention is entrained to an environmental input. We examined how entraining auditory attention to a specific rhythm affects the allocation of visual attention. In Experiments 1 and 2, subjects moved their eyes from a central fixation to a visual stimulus that appeared in one of four corners of the screen. The onset of the visual stimulus varied and occurred either in-synch or out-of-synch relative to an entrained auditory rhythm. Saccade latency was fastest for the in-synch condition, indicating that auditory entrainment may also entrain visual attention. In Experiment 3 we sought to extend the pattern of findings from Experiments 1 and 2 to a temporally independent task. Subjects performed a gap-judgment by indicating which side of a briefly presented Landolt-square contained a gap. Judgments were more accurate when onset of the Landolt-square was synchronized with an entrained auditory rhythm versus out-of-synch. Overall, these results suggest that auditory and visual attention are linked such that entraining auditory attention also entrains visual attention resulting in a shared attentional peak that is derived from stimulus expectancy.

A New Perspective on Visual Word Processing Efficiency
Authors: Joseph Houpt (Indiana University), James Townsend (Indiana University)
Abstract: It is well established that, within certain controlled conditions, letters are perceived more accurately when in a word context than when presented in a nonword context or alone. To explain this phenomenon, many models of word perception include either facilitation among the letter perception processes, or some type of holistic word perception process that aids the individual letter processes. However, some models have been proposed to explain this word superiority effect that are based on independent letter processing, without word-level feedback. While response time measures could be quite informative as to the underlying process, the vast majority of research on the word superiority effect has been based on accuracy, and response-time-based effects have been elusive. In this work, we have developed a specialized task so that we may use the workload capacity coefficient, a particularly sensitive, response-time-based measure of processing efficiency. Using this measure, we have found clear evidence of a word superiority effect in the response time domain to complement the existing research based on accuracy. Furthermore, these results indicate that independent letter processing is not a reasonable model of word perception, instead favoring the facilitatory or feedback models.

A Computational Model for Perceptual Learning: its Specificity and Transfer
Authors: Mojtaba Solgi (Michigan State University), Taosheng Liu (Michigan State University), Juyang Weng (Michigan State University)
Abstract: How and under what circumstances the training effects of Perceptual Learning (PL) transfer to novel situations is critical to our understanding of generalization and abstraction in learning. Contrary to the major consensus among PL researchers, a series of recent behavioral studies have shown that training effects can transfer to untrained conditions under certain experimental paradigms. In this article, we present a neuromorphic computational model of the Where-What visual pathways which successfully explains the aforementioned recent findings. The proposed model is a network of simulated neurons which learn using the simple laws of Hebbian learning, lateral inhibition and excitation. The model is fully developmental in the sense that all of the feature detectors and associations are developed via experience, as opposed to being pre-designed by the programmer. Our main hypothesis is that certain paradigms of experiments trigger top-down processes in the untrained condition which lead to recruitment of more neurons in early feature representation areas as well as later concept representation areas for the untrained condition. Such recruitment, coupled with lateral excitation from trained neurons, causes the formation of extended, improved representations, and hence a better perception, for the untrained skill. It is likely that these processes take place during rest and sleep; however, verifying this possibility is beyond the scope of our model. To the best of our knowledge, this work is the first neurally plausible model to explain transfer and specificity in a PL setting. From the model, we infer that perhaps mere thinking about a perceptual task in an untrained condition will result in transfer of training effects to that condition. However, this prediction awaits experimental verification.
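The learning rules the abstract names, Hebbian learning with lateral inhibition, can be illustrated with a minimal sketch. Everything below (network size, learning rate, and the winner-take-all approximation of lateral inhibition) is an illustrative assumption, not a detail of the authors' Where-What model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the model described in the abstract).
n_inputs, n_neurons, lr = 4, 3, 0.2
W = rng.random((n_neurons, n_inputs))          # feature-detector weights
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-length rows

def hebbian_step(W, x, lr=lr):
    """One winner-take-all Hebbian update: lateral inhibition is
    approximated by letting only the most active neuron learn."""
    responses = W @ x                   # excitation of each neuron
    winner = int(np.argmax(responses))  # inhibition silences the rest
    W = W.copy()
    W[winner] += lr * (x - W[winner])   # move winner's weights toward input
    W[winner] /= np.linalg.norm(W[winner])
    return W, winner

# Repeated presentation of one stimulus recruits a stable detector for it:
# the feature detector is developed via experience, not pre-designed.
x = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(50):
    W, winner = hebbian_step(W, x)
```

After repeated exposure, the winning neuron's weight vector aligns with the stimulus, which is the sense in which experience, rather than the programmer, shapes the representation.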

Is Categorical Perception Really Verbally Mediated Perception?
Authors: Andrew Hendrickson (Indiana University), George Kachergis (Indiana University), Todd Gureckis (New York University), Robert Goldstone (Indiana University)
Abstract: Recent research has argued that categorization is strongly tied to language processing (Lupyan, 2008). For example, language (in the form of verbal category labels) has been shown to influence perceptual discriminations of previously well-learned categories such as color (Winawer et al., 2007), shape (Lupyan, 2009), and the categorization of facial emotion (Roberson & Davidoff, 2000). However, does this imply that categorical perception is essentially verbally mediated perception? Gureckis & Goldstone (2008) demonstrated that categorical perception can occur even in the absence of overt labels when categories contain non-homogeneous internal structure. Recent work (Hendrickson et al., 2010) extended these findings and showed the degree to which interference tasks (verbal, spatial) reduce the effect of learned categorical perception for complex visual stimuli (faces). Contrary to the previous findings with well-learned categories, these results show that a verbal interference task does not disrupt learned categorical perception effects for novel faces. The current work extends these findings to show that the within-category categorical perception effect persists despite increasing the degree to which participants rely on verbal labels by manipulating the verbal nature of the stimuli, the response options, and by increasing the difficulty of the verbal interference task. Our results are interpreted in light of the ongoing debate about the role of language in categorization. In particular, we suggest that at least a subset of categorical perception effects may be effectively “language-free” across a wide array of manipulations. Keywords: Perceptual Learning, Categorization, Concept Learning, Language.

 
Cognitive Decision Theory A:
Location: Heritage
Lost Causes and Unobtainable Goals: Dynamic Choice in a Multiple Goal Seeking Environment
Authors: Jason Harman (Ohio University), Claudia Gonzalez-Vallejo (Ohio University), Jeffrey Vancouver (Ohio University), Annie Milakovic (Ohio University), Justin Weinhardt (Ohio University)
Abstract: Dynamic choice behavior in a multiple goal-striving environment is examined using a paradigm where participants must make repeated choices about how to spend their free time. Over the course of 100 trials, participants must choose to spend their free time either with their partner, friends, or studying. Their current status in each domain is displayed with feedback (negative feedback representing natural decay and positive feedback as a function of choosing a domain). Initial studies using this paradigm have shown that despite different a priori importance ratings between areas, when balance is possible (equal status for each of the three domains) participants prefer strategies that maintain balance over utility maximizing strategies which would more closely reflect their preferences. That is to say, participants distribute their choices between domains in a manner that keeps the status of the three domains equal, rather than choosing their most preferred domain more frequently. In the current study, maintenance of all three goals is made impossible by making one domain ‘unobtainable’ (the negative disturbances to the status of the goal are equivalent to the positive increments that result from domain choice). Results show that participants ‘cut their losses’ in the unobtainable domain when that domain is not the most important domain to them, consistent with utility maximizing strategies. When the unobtainable domain is their most preferred domain, participants allocate a majority of their resources to that domain or ‘chase a lost cause’ resulting in detrimental effects to all three domains. Possible theoretical models and implications are discussed.

Group Convergence in Steps of Iterated Reasoning
Authors: Seth Frey (Indiana University), Robert Goldstone (Indiana University)
Abstract: In some strategic games, thinking about other players' reasoning can lead to better predictions about what they will do. In other games, infinitely iterated reasoning ultimately prescribes random play. In an online experiment of strategic thinking in groups, we tested participants in a game with the formal structure of a game that rewards randomness, but the superficial structure of a game of iterated reasoning. We found that participants conformed to the superficial structure of the game, and earned more than they would have by playing randomly. We estimated how many steps participants thought ahead in the game and discovered implicit coordination at the group level. Participants unexpectedly “matched” their degree of iterated thinking to each other.
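Steps of iterated reasoning are commonly formalized as a level-k model, which can be sketched in a few lines. The 2/3-of-the-average "beauty contest" payoff used here is a standard textbook example of iterated reasoning, an illustrative assumption rather than the game used in this experiment.

```python
def level_k_guess(k, start=50.0, factor=2 / 3):
    """Level-0 plays `start` (a naive anchor); a level-k player
    best-responds to a population of level-(k-1) players, which in the
    beauty-contest game means choosing factor * (their guess)."""
    guess = start
    for _ in range(k):
        guess *= factor
    return guess

# Each additional step of iterated thinking lowers the guess,
# converging toward the Nash equilibrium of 0.
guesses = [level_k_guess(k) for k in range(5)]
```

Estimating k from observed play, as the abstract describes, amounts to asking which of these predicted guesses a participant's choice sits closest to.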

Dynamic Extensions of the Proportional Difference Model of Choice
Authors: Claudia Gonzalez-Vallejo (Ohio University), Jason Harman (Ohio University)
Abstract: The Stochastic Difference Model (SDM) is a stochastic model of choice that describes how individuals make trade-offs between non-comparable attributes. The proportional difference rule within SDM (called PD) assumes that options are compared attribute-wise so that proportional advantages that favor an option in a given dimension move the decision maker towards that option, while proportional disadvantages have the opposite effect. The focus of our present study is to extend this model to dynamic situations and provide a further understanding of the changes in risk attitudes over time as a function of wealth and goals. First, in many domains the outcomes of decisions affect future decisions, though few models in the decision making literature are dynamic. The first model extension deals with the impact of present consequences on future choices. Second, we assume a goal/target is pursued by the decision maker, and the difference between this and the current accumulated payoffs at a given point in time affects the probability of selecting a risky or risk-less option. This psychological distance changes over time as a function of the accumulated payoffs. The impact of this is that the proportional difference variable of SDM changes over time, and this gives rise to different patterns of risk-averse and risk-seeking behavior. Model simulations were conducted over pairs of simple gambles of equal expected value. Results showed a reversal of the reflection effect (Kahneman & Tversky, 1979). In the gain domain, risk-seeking behavior occurred (that is, the probability of selecting the gamble was greater than .5) and increased as the goal was approached. In the loss domain, the opposite pattern emerged. Results from two initial experiments confirm the pattern of behavior predicted by the model.

Banking on a Bad Bet: Probability Matching is Linked to Expectation Generation.
Authors: Greta James (University of Waterloo), Derek Koehler (University of Waterloo)
Abstract: Which would you prefer, an option offering a 70% chance of winning $1 or an option offering a 30% chance of winning $1? Though the better choice seems obvious, many people will choose the 30% odds when this choice is repeatedly presented in a sequence of decisions. These people will choose the higher odds 70% of the time and the lower odds 30% of the time, a phenomenon known as probability matching. Probability matching is a longstanding puzzle in the study of decision making under risk and uncertainty, because predicting the more probable outcome on every trial (maximizing) yields higher payoffs and greater accuracy. I discuss why current theories attempting to explain probability matching are not sufficient and present the hypothesis that probability matching is associated with generating expectations across multiple trials. I will present a series of experiments showing that under conditions designed to diminish the generation or perceived applicability of these expectations probability matching behavior becomes substantially less common and maximizing becomes the norm.
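The payoff gap between matching and maximizing that makes this a puzzle is easy to see in simulation. This sketch assumes the 70/30 outcome process from the abstract's example; the trial count and strategy parameterization are illustrative.

```python
import random

random.seed(1)

def expected_wins(choose_high_prob, p_high=0.7, trials=10_000):
    """Simulate repeated binary predictions. `choose_high_prob` is the
    probability of predicting the more likely outcome on each trial."""
    wins = 0
    for _ in range(trials):
        outcome_high = random.random() < p_high           # which side pays
        predict_high = random.random() < choose_high_prob  # the prediction
        wins += (outcome_high == predict_high)
    return wins / trials

# Probability matching wins about 0.7*0.7 + 0.3*0.3 = 58% of trials;
# always predicting the likely outcome (maximizing) wins about 70%.
matching = expected_wins(0.7)
maximizing = expected_wins(1.0)
```

The simulation makes concrete why maximizing "yields higher payoffs and greater accuracy" on every horizon, which is what makes the persistence of matching behavior puzzling.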

11:30 - 13:00
 
Lunch (Provided):
Location: Lincoln
13:00 - 14:20
 
Memory:
Location: Willy
Session Chair: Allison Chapman (The Ohio State University)
The Development of Context Use and Three Way Bindings in Episodic Memory
Authors: Hyungwook Yim (The Ohio State University), Simon Dennis (The Ohio State University), Vladimir Sloutsky (The Ohio State University)
Abstract: Though it is well known that children do not have good episodic memory, the underlying mechanism is not well understood. To address this question, the current study used a modified list learning paradigm for children (i.e., ABCD, ABAC, ABABr) and compared the performance of 4-year-olds, 7-year-olds and adults. The results show that neither 4-year-olds nor 7-year-olds were able to use a context cue or a three-way binding properly, even though their two-way binding abilities increased throughout development. Learning-to-criterion data, which measured the amount of interference, also showed that managing the context cue or the three-way binding was not easy for the children compared to the two-way item binding. Moreover, a proposed computational model decomposed the binding strengths involved in the given task and made it possible to compare the changes in these binding strengths. The model shows that children have a greater item binding strength whereas adults have greater context cue and three-way binding strengths. It is concluded that the developmental change in using complex binding structures occurs between 7 years of age and adulthood, which is surprising given the drastic cognitive change around the age of 4.

Item Noise versus Context Noise in Recognition Memory
Authors: Simon Dennis (The Ohio State University)
Abstract: Logically, episodic recognition memory requires one to combine information about a test probe with information about the context of interest. Consequently, interference could come from the other items in the study context (item noise) or the other contexts an item has been studied in (context noise). In this paper, I will overview the debate on which of these perspectives best accounts for the available data, focusing on list length effects with words, word pairs, novel faces, fractals and photographs of scenes.

The Role of a Feature's Distribution in Determining its Associations with Concepts
Authors: Matthew Zeigenfuse (Michigan State University), Michael Lee (University of California, Irvine)
Abstract: In this paper, we investigate the role of a feature's distribution on its degree of association with a concept. To do this, we develop three measures of association between a feature and a concept based on how the feature is distributed across instances of the concept. We compare these measures using an experiment in which participants are asked to choose which of a pair of features they more strongly associate with a particular concept. Our results show that a single measure, collocation, best accounts for human associations regardless of the conceptual domain tested. We show how these results could be used to model how changes in context can produce changes in representation.

Item Noise in the Sternberg Paradigm
Authors: Allison Chapman (The Ohio State University), Simon Dennis (The Ohio State University)
Abstract: The list length effect in recognition memory is the finding that performance improves as the number of items studied decreases. Interference in recognition memory may be operationalized as noise that accumulates over: items in the study list, other contexts in which studied items have appeared, or a combination of both. Item noise models predict a list length effect. The list length effect has been eliminated in long-term recognition tasks (Dennis & Humphreys, 2001; Dennis, Lee, & Kinnell, 2008). We demonstrate that the length effect may also be eliminated in short-term recognition (the Sternberg paradigm) if a filled delay is introduced. List length and recency effects were eliminated following an engaging 15-second distractor task. Articulatory suppression produced the same results at a 2-second delay. The results suggest that item interference alone is not explicative of forgetting across both short- and long-term recognition tasks.

 
Cognitive Decision Theory B:
Location: Heritage
Session Chair: Avishai Wershbale (Michigan State University)
A Dynamic Model of Response Times in the Go/No-Go Discrimination Task
Authors: Jennifer Trueblood (Indiana University), Michael Endres (Indiana University), Jerome Busemeyer (Indiana University), Peter Finn (Indiana University)
Abstract: Clinical disorders and cognitive processes such as working memory are believed to influence the cognitive mechanisms involved in decision making under uncertainty and goal conflict. A dynamic model of response times in a Go/No-Go Discrimination task with motivationally distinct conditions is developed. The Go/No-Go Discrimination task is a reliable measure of passive avoidance, and it is considered an analog for real world approach-avoidance motivational conflict. We show that the cognitive model of response times provides more insight into the dynamics of cognitive processes involved in reward-approach and punishment-avoidance decisions than standard data analysis techniques. The parameters of the model inform us of underlying cognitive mechanisms because they have an established psychological meaning and allow us to quantify a subject’s ability and response caution. Using these model parameters, we focus on the differences between subjects with varying classifications of mental disorders (i.e. substance abuse and antisocial behavior) and high and low working memory capacity and show that there are reliable differences between the decision mechanisms of these subjects. Ultimately, we show that dynamic cognitive modeling has the potential to provide valuable insights into clinical phenomena that cannot be captured by traditional data analysis techniques.

Interactions between Categorization and Decision Making
Authors: Jerome Busemeyer (Indiana University), Zheng Wang (The Ohio State University)
Abstract: In a categorization-decision task, participants are shown faces, and then they are asked to categorize them as belonging to either a ‘good’ guy or a ‘bad’ guy category, and/or they are asked to decide whether to take an ‘attack’ or a ‘withdraw’ action. Three test conditions are compared: In the C-then-D condition, participants make a categorization followed by an action decision; in the C-alone condition, participants make only a categorization; and finally in the D-alone condition, participants only make an action decision. The results show that participants are more likely to attack in the D-alone condition than in the C-then-D condition, even when they categorized the face as a ‘bad’ guy. These results cannot be explained by signal detection, Markov, and exemplar models, but can be explained by a quantum decision model.

Comparability Effects in Probability Judgments: Evidence for a Sequential Sampling Process
Authors: Timothy Pleskac (Michigan State University)
Abstract: Psychological theories of subjective probability judgments (SPs) typically assume that accumulated evidence (support) mediates the relationship between the to-be-judged event and the SP (Tversky & Koehler, 1994). These theories typically make a strong assumption regarding the independence of hypotheses: the support garnered for a particular hypothesis is independent of the alternative hypothesis(es). This assumption implies that the evidence people consider when they judge the likelihood that their favorite basketball team will win a game is the same and carries the same value regardless of their opponent. However, over 50 years of work in psychology has demonstrated that, at least when making choices, the value we place on options depends on the options they are paired with. In this talk, I will present results from a study where participants had to judge the likelihood of a bicyclist winning a simulated race. The results show that when making probability judgments participants violated this independence assumption and that the supporting evidence they collected depended on how comparable the two hypotheses were. I will show that these results are consistent with a computational model of probability judgments called Judgment Field Theory (JFT). JFT assumes that when people are asked to make probability judgments they accumulate evidence over time. The evidence at each time point comes from attention dynamically switching between the attributes of each hypothesis. Markers, one for each probability estimate, are placed across the evidence space. When evidence passes a marker there is a probability the judge stops and gives the respective estimate. Besides accounting for violations of independence, JFT provides a single process account of a range of phenomena. More generally, JFT in combination with Decision Field Theory offers a single process account for judgment and decision making rather than a process for judgment and a process for decision.
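The accumulate-until-a-marker-stops-you idea can be sketched as a simple random walk. The drift values, noise level, marker placements, and stopping probability below are illustrative assumptions, not JFT's actual parameterization.

```python
import random

random.seed(2)

# Evidence levels at which probability estimates sit (illustrative values).
MARKERS = [(-2.0, 0.25), (0.0, 0.50), (2.0, 0.75)]

def sample_judgment(drift, noise=1.0, p_stop=0.3, max_steps=1000):
    """Accumulate noisy evidence over time; whenever the accumulator
    crosses a marker, stop with probability p_stop and report that
    marker's probability estimate."""
    evidence = 0.0
    for _ in range(max_steps):
        prev = evidence
        evidence += drift + random.gauss(0.0, noise)
        for level, estimate in MARKERS:
            crossed = (prev - level) * (evidence - level) < 0
            if crossed and random.random() < p_stop:
                return estimate
    return 0.50  # no marker-triggered stop within max_steps

# Stronger net evidence for a hypothesis (larger drift) should yield
# higher reported probability estimates, on average.
up = [sample_judgment(drift=0.5) for _ in range(300)]
down = [sample_judgment(drift=-0.5) for _ in range(300)]
```

Comparability effects would enter through the drift: in a sequential-sampling account, attention switching between the attributes of comparable versus non-comparable hypotheses changes the momentary evidence, not just its long-run mean.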

Probabilistic Assessment Model for the BART
Authors: Avishai Wershbale (Michigan State University), Timothy Pleskac (Michigan State University)
Abstract: In judgment and decision making, several theories posit multiple processing pathways in response selection. The general distinction between the pathways is that there are both slow, calculated pathways, and more automatic pathways. One example involves theories of bounded rationality where heuristic devices are used as shortcuts for choice selection, instead of calculating a response based on all the evidence, allowing for adequate decision making with less processing. While decision making theory allows for the utilization of multiple response selection pathways, the formal cognitive models of decision making generally use a single response selection pathway. In the current study, we investigated this issue of multiple response pathways in the Balloon Analog Risk Task (BART). During the BART participants inflate a computerized balloon for real money, but if the balloon pops they lose their money. Participants must decide when to stop pumping. We show that when participants complete multiple trials there are both very fast and very slow inter-pump response times (IRTs) and the profile changes in systematic ways. Longer IRTs occur more often in early trials, but are more likely to occur the further one gets into a single trial. We interpret this pattern of results as evidence of a learned automatic response. We use this result to modify the current cognitive model of decision making during the BART (see Wallsten et al., 2005). The model proposes that on every trial participants pick a goal pump to reach. On some pumps, they assess how far they are from the goal pump, but on others they simply pump the balloon. We show that a model in which the probability of engaging in an assessment changes with pump and trial accounts for both pump and IRT data. Moreover, we show this multiple response pathway hypothesis has clinical utility: Adolescents with conduct disorder appear to utilize the different response pathways in different proportions than matched controls.
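The goal-pump-plus-probabilistic-assessment idea can be sketched as follows. The functional forms (assessment probability rising with pump number within a trial, decaying across trials) match the qualitative IRT pattern the abstract reports, but the forms themselves and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import random

random.seed(3)

def p_assess(pump, trial, goal=8, base=0.9, trial_decay=0.1):
    """Probability of a slow, evaluative pump: rises as the pump count
    approaches the goal within a trial, and falls across trials as
    responding becomes automatic (both forms are illustrative)."""
    return base * (1 - trial_decay) ** (trial - 1) * min(1.0, pump / goal)

def run_trial(trial, goal=8, p_pop=1 / 16):
    """One balloon: pump toward the goal pump; each pump carries an
    independent pop risk (a simplification of the real BART)."""
    slow_pumps = 0
    for pump in range(1, goal + 1):
        if random.random() < p_assess(pump, trial):
            slow_pumps += 1                 # assessment -> long inter-pump time
        if random.random() < p_pop:
            return pump, slow_pumps, True   # popped: trial earnings lost
    return goal, slow_pumps, False          # banked at the goal pump

pumps, slow_pumps, popped = run_trial(trial=1)
```

Under this sketch, early trials produce many slow IRTs (frequent assessment) while later trials are dominated by fast, automatic pumping, reproducing the profile change described above.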

14:20 - 15:40
 
Language A:
Location: Willy
Session Chair: Stephen Hedger (The University of Chicago)
Perception and Segmentation of Non-Native Fluent Speech
Authors: Tuuli Morrill (Michigan State University)
Abstract: Listeners segment words from the speech stream by using a combination of prosodic structure and phonotactics. In a non-native language, listeners may attempt to apply a native segmentation strategy; if the first and second languages differ in metrical structure, this could affect the ability to segment speech. However, infants and adults use statistical learning to extract units from speech; this could help adults in non-native listening. This study investigates non-native fluent speech perception by English listeners, in two experiments with distinct languages (Finnish and Japanese). After learning words and listening to fluent speech, participants identified possible words of the language, choosing from pairs with one real word from the fluent speech, and a non-word made of syllables that co-occurred in the speech. 60 participants were divided into 3 conditions with different task combinations; 20 people completed Task 1 (Word Learning), Task 2 (Fluent Speech Listening), and Task 3 (Word Identification); 20 people did not complete Task 2; 20 people did Task 2 without Task 1. Response patterns were analyzed with a mixed effects regression model. For both languages, listeners in word-learning conditions were better at identifying words than those who only heard fluent speech. For Finnish, the combination of word learning and fluent speech provided a greater advantage than word learning alone. For Japanese, groups who learned words performed about the same, though the combined tasks led to more consistency. For Finnish, stress pattern was the greatest predictor; in Japanese, pitch accent patterns were not a predictor of word identification, and Fluent Speech Listening did not affect performance as in Finnish.
The interactions between prosodic patterns of the native and non-native languages have a clear effect; learners perform better when the second language is metrically similar, and do not learn as much from the same tasks in a metrically dissimilar language.

The Influence of Subsequent Sentential Context on Spoken Word Recognition When Prior Contextual Information is Available
Authors: Christine Szostak (The Ohio State University), Mark Pitt (The Ohio State University)
Abstract:  Speech frequently contains lexical ambiguities such as when an extraneous noise masks important sounds that distinguish a word from similar sounding lexical competitors (e.g., ?ing where wing, ring, sing... are all viable lexical competitors). Yet listeners appear to identify the spoken words intended by the talker with minimal effort. When semantic context follows an ambiguity (e.g., The ?ing had feathers), such context aids resolution of the ambiguous word. Little is known, however, of the effects of subsequent context when prior context is also present (e.g., The veterinarian said that the ?ing had feathers). If only subsequent context is available, at least two representations of the ambiguous word (e.g., wing, ring...) must be retained in memory until the context becomes available, which may tax working memory resources. It would be advantageous therefore for the perceptual system to default to using prior context alone, even when following context is present. In a series of experiments we explore the influence of subsequent context when prior context is present or absent. Our findings suggest that prior contextual information supersedes but may not completely eliminate the effect of subsequent context on ambiguous word resolution. We consider the implications of our findings for theories of spoken word recognition.

Unlikely Allies: Acoustic and Syntactic Cues in Word Segmentation
Authors: Chris Heffner (Michigan State University), Laura Dilley (Michigan State University)
Abstract: Though perception of discrete words is almost effortless, there are few acoustic cues which consistently signal a word boundary (Cole, Jakimik, & Cooper, 1980). Recently, Dilley and Pitt (2010) showed that slowing down the speech rate around a region of speech for which acoustic cues to a word boundary were ambiguous made a critical test word within the acoustically ambiguous region disappear perceptually. In a series of experiments, we examined whether the perception of a critical word would be affected by its grammatical context and/or its context speech rate. Two grammatical contexts were contrasted: one where the grammar of the sentence made the critical word optional (an “optional context”) and another which made it obligatory (an “obligatory context”). In Experiment 1, obligatory contexts were turned into optional contexts by truncation of context words; these new optional contexts showed a lower rate of critical word reports than the original obligatory contexts. Experiment 2 showed that truncation alone was not enough to modify critical word perception to the extent that was observed in Experiment 1. In Experiment 3, optional and obligatory contexts were compared without using truncation; higher rates of critical word report were observed for optional than obligatory contexts. Slowing context speech rate consistently decreased critical word reports across experiments, though that cue was somewhat less effective in obligatory contexts. These findings shed light on how listeners communicate via spoken language and suggest that listeners use all the cues at their disposal to identify words and word boundaries.
References:
Cole, R. A., Jakimik, J., & Cooper, W. E. (1980). Segmenting speech into words. Journal of the Acoustical Society of America, 67(4), 1323-1332.
Dilley, L. C., & Pitt, M. A. (2010). Altering context speech rate can cause words to (dis)appear. Psychological Science, 21(11), 1664-1670.

The Message in Music: Music Can Convey the Idea of Movement
Authors: Stephen Hedger (The University of Chicago), Howard Nusbaum (The University of Chicago), Berthold Hoeckner (The University of Chicago)
Abstract: The acoustic patterns in music might be understood by listeners because of cross-modal associations. These associations have led researchers to explore the relationship between music and linguistic prosody. These associations, moreover, need not be historical: Recent research has demonstrated that analogical variation in speech signals can be used to communicate object motion. Given the similarities in some of the acoustic variation used by composers to putatively communicate similar images, we investigated whether music, like linguistic prosody, can analogically convey referential information about an object through acoustic cues. We test whether such information is integrated into an analog perceptual representation as a natural part of listening. Listeners heard sentences with neutral prosody describing objects and the sentences were underscored with accelerating or decelerating music motifs. After the sentence – music combination, participants saw a picture of an object and judged whether it was mentioned in the sentence. Object recognition was faster when musical motion matched visually depicted motion. These results suggest that visuo-spatial referential information can be analogically conveyed and represented by music.

 
Complex Representation:
Location: Heritage
Session Chair: Gabriel Recchia (Indiana University)
Tree Inference with Multiplicatively Interacting Factors in a Multiple Response Processing Tree
Authors:  Zhuangzhuang Xi (Purdue University) , Richard Schweickert (Purdue University)
Abstract: Many experiments show that cognitive processes can be well represented by a tree structure, e.g., the multinomial processing tree (MPT) model of source memory (Batchelder and Riefer, 1990). Evidence often indicates that an experimental factor, such as item similarity, changes a single parameter, leaving others invariant. In typical studies, hypothetical tree structures are tested and compared by goodness of fit. With the method of Tree Inference, a tree is constructed by directly examining the data to see if patterns occur that are predicted when two factors selectively influence different processes (Schweickert and Chen, 2008). In earlier work, three restrictions were imposed on the trees considered: There were two classes of responses; parameters were probabilities, bounded above by 1; and factors were assumed to change parameters associated with children of a single vertex. More general trees are discussed here, removing these restrictions. We show that if two factors have a multiplicative interaction, the underlying tree structure is equivalent to a processing tree of a simple form. Analogous results apply to rate models (often used in animal experiments) in which each edge is associated with a rate rather than a probability. Theorems on representation, uniqueness of parameters, uniqueness of tree structure, and mixtures of trees are presented, along with some illustrative examples.

Distinguishing Levels of Grounding that Underlie Transfer of Learning
Authors:  Lisa Byrge (Indiana University) , Robert Goldstone (Indiana University)
Abstract: We find that transfer of learning from a perceptually concrete simulation to an isomorphic but superficially dissimilar text-based problem is sensitive to the congruence between the force dynamics common to both systems and the kinesthetic schema induced via action in the first, perceptually concrete, simulation. Counterintuitively, incompatibility between the force dynamics and the kinesthetic schema has a beneficial effect on transfer, relative to compatibility as well as an unrelated control. We suggest that this incompatibility between action and system dynamics may make the system’s relational structure more salient, leading to a more flexible conceptualization that ultimately benefits transfer. In addition, we suggest that too much “action concreteness” in hands-on learning may actually limit transfer, by fostering an understanding that is tied to that action and therefore less available for transfer in situations where that action is no longer relevant.

Effects of Grounded and Formal Representations on Combinatorics Learning
Authors:  David Braithwaite (Indiana University) , Robert Goldstone (Indiana University)
Abstract:  Mathematical ideas often admit of alternate representations. Much research has investigated the differential effects of instruction based on formal representations, such as equations, or more grounded representations, such as diagrams. In some contexts, formal representations have been found to promote learning and transfer better than grounded representations. In other cases, grounded representations have facilitated problem solving by encouraging learners to access intuitive strategies and knowledge. Some authors have advocated instructional approaches, e.g. “concreteness fading” or “progressive formalization,” that combine both forms of representation, beginning with concrete or grounded representations and transitioning to more idealized or formal ones. These issues were explored through two experiments involving mathematics learning in the domain of combinatorics. Outcome listing and combinatorics formulas were used as examples of grounded and formal representations, respectively. A pretest-training-posttest design was employed, with posttest improvement serving as a measure of transfer. Experiment 1 compared transfer performance following training involving either listing or formulas. Training in formulas led to better performance on near transfer problems, while for far transfer problems, performance did not differ by condition. Experiment 2 compared transfer performance following four types of training: listing only, formulas only, listing fading (listing followed by formulas), and listing introduction (formulas followed by listing). The listing fading condition led to performance on par with the formulas only condition, and for near transfer problems, significantly higher than the listing introduction and pure listing conditions. The results support the inclusion of grounded representations in combinatorics instruction, and suggest that such representations should precede rather than follow formal representations in the instructional sequence.
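The contrast between the two representations can be made concrete in a few lines of code (a hypothetical illustration, not the study's materials): outcome listing enumerates every selection explicitly, while the combinatorics formula computes the count directly.

```python
# Grounded vs. formal representations of "ways to choose k of n items".
from itertools import combinations
from math import comb

items = ["A", "B", "C", "D", "E"]  # n = 5
k = 2

# Grounded representation: list every outcome explicitly.
listed = list(combinations(items, k))

# Formal representation: the formula n! / (k! * (n - k)!).
formula_count = comb(len(items), k)

print(len(listed), formula_count)  # both give 10
```

Both routes agree on the count; the listing makes the outcomes visible, while the formula generalizes to problems too large to enumerate.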

Crowdsourcing Large-Scale Semantic Feature Norms
Authors:  Gabriel Recchia (Indiana University) , Michael Jones (Indiana University)
Abstract: Semantic Space Models (SSMs) have been criticized as implausible psychological models because they learn from only linguistic information and are not grounded in perception and action. This inadequacy limits their ability to model human performance on lexical semantic tasks, and perceptually grounded SSMs are now emerging. As a proxy for sensorimotor perception, these new integrative models use norms of human-generated properties (e.g., McRae et al., 2005). These norms are collected by asking human study participants to produce the internal/external parts, appearance, sounds, smells, tastes, functional properties, category membership, etc. for concrete nouns based on multisensory experience. These databases have proven extremely valuable to cognitive models of semantic representation and processing, but are currently limited to a few hundred concepts. Although humans ground thousands (perhaps all) of the words in their lexicon in perceptual data, no large-scale resource is available for researchers to integrate human-generated properties of thousands of words into SSMs; property norms are not even available for the majority of words appearing in standard evaluation tasks. We describe first steps toward building a vastly larger database than the traditional McRae et al. (2005) set, augmented with information about the degree to which individual features are diagnostic of particular nouns. We borrow emerging, highly successful data capture methods from computing science to describe semantic structure online--namely, "games with a purpose," which take advantage of crowdsourcing and the vast amount of human computation currently spent on online games to capture data for practical purposes. Our pilot results show that even a very simple game elicits property vectors from naive participants that correlate well with the McRae norms (r=.83), and in a manner that can be scaled up to build a dataset of property descriptions from any number of voluntary players.

15:40 - 16:10
 
Coffee Break (Provided):
Location: Lincoln
16:10 - 17:10
 
Language B:
Location: Willy
Session Chair: Vishnu Sreekumar (The Ohio State University)
Learning Grammar with Recursion via Statistical Mechanism: Comparing Simple Recurrent Networks and Human Subjects
Authors:  WonJae Shin (University of Notre Dame) , Kathleen Eberhard (University of Notre Dame)
Abstract: Much of human and animal learning involves learning contingencies among stimuli that form patterns in the environment, such as Pavlovian conditioning or the creation of cell assemblies via Hebbian learning. Language is a complex system of patterns, the learning of which has been a source of a nature-versus-nurture debate. The debate centers on recursion and the long-distance dependency resulting from it. Nativists claim that these properties are unlearnable by a statistical mechanism that computes the probability of one word given another (e.g., Miller and Chomsky, 1963), because, for example, a long-distance dependency that involves a subject and a verb that are separated by several embedded phrases would require sampling of an enormous number of sentences. However, Elman (1993) provided support for the nurture view by demonstrating that a simple recurrent network (SRN) consisting of several connected layers of statistical processing units could learn an artificial language with recursion and long-distance dependency. Learning involved presenting the network with a large number of sentences one word at a time. The network's task was to predict the next word, and learning took place by adjusting the connection weights to generate better predictions. The network learned to predict a set of grammatically correct words that could occur at any given point, but only if learning began with simple sentences before progressing to complex ones. Learning failed when both were given from the start. The current study includes a new experimental paradigm with a predictive learning task that involves arbitrary visual symbols as vocabulary and a reduced version of Elman's grammar. This is, in essence, a task very similar to the network's. The results provide evidence that human subjects succeeded and failed to learn grammar under the same conditions as Elman’s network. This suggests that a statistical mechanism may be the driving force of human language learning.
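The prediction task given to the network can be sketched as follows. This is a deliberately minimal Elman-style SRN under our own assumptions (a toy three-symbol repeating "grammar", and truncated learning on the output weights only), not Elman's or the study's implementation:

```python
import numpy as np

# Minimal Elman-style simple recurrent network (SRN) on a toy sequence.
rng = np.random.default_rng(0)
V, H, lr = 3, 8, 0.5                  # vocabulary size, hidden units, learning rate
Wxh = rng.normal(0, 0.5, (H, V))      # input -> hidden weights (fixed here)
Whh = rng.normal(0, 0.1, (H, H))      # context (recurrent) -> hidden weights (fixed)
Who = np.zeros((V, H))                # hidden -> output weights (trained below)

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

seq = [0, 1, 2] * 300                 # the "grammar": 0 -> 1 -> 2 -> 0 -> ...
h = np.zeros(H)
hits = []
for t in range(len(seq) - 1):
    x, target = one_hot(seq[t]), seq[t + 1]
    h = np.tanh(Wxh @ x + Whh @ h)    # hidden state carries context forward
    logits = Who @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax prediction of the next symbol
    hits.append(int(np.argmax(p) == target))
    # Cross-entropy gradient step on the output weights (truncated backprop).
    Who -= lr * np.outer(p - one_hot(target), h)

accuracy = sum(hits[-30:]) / 30       # accuracy late in training
print("accuracy on last 30 predictions:", accuracy)
```

After training, the network's strongest prediction at each step is the grammatically correct next symbol, mirroring the "predict the next word" task described in the abstract.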

The Structure of Disfluency Repairs
Authors:  Edward Husband (University of South Carolina)
Abstract: Normal everyday speech contains disfluencies, the um's and uh's which signal difficulty or error on the part of speakers. While disfluent speech seems to be a natural case of language performance, language competence also plays an intimate role in shaping disfluency and its repair. Beginning with Levelt (1983, 1989), I argue that the form of disfluency repair follows from two grammatical sources: right node raising and contrastive focus. Levelt noted that even disfluent utterances have acceptability judgments. He proposed a well-formedness rule for disfluency repair in which an original utterance O and a repair R, < O uh R >, is well formed iff there is a continuation C such that < O C or R > is well formed. This rule raises several questions: How is C recovered for a sentence? and How are errors and repairs connected to one another? Answering the first, I propose that the recovery of C is related to the structure of right node raising (Postal 1974). Under this analysis, a disfluent utterance is well formed iff it has the structure < O ti or R ti Ci > where C is moved out of both conjuncts as indicated by traces, ti. This analysis also addresses the second question: errors and repairs are connected through the contrastive focus requirement of right node raising (Hartmann 2000). By marking the error and repair as focus alternatives, contrastive focus provides a semantics for replacing errors with repairs. This analysis also sheds light on the mechanisms for disfluency processing. Ferreira, Lau, & Bailey (2004) proposed that comprehension recovers from disfluencies by Overlay, a mechanism which matches near-identical syntactic structures as best it can during reanalysis. This analysis captures the near-identical requirement of Overlay through right node raising, leaving only the error and repair to be resolved. This approach both updates Levelt's disfluency well-formedness rule and links disfluency processing mechanisms to the structure of disfluencies themselves.

The Dimensionality of Visual Environmental Input
Authors:  Vishnu Sreekumar (The Ohio State University) , Yuwen Zhuang (The Ohio State University) , Simon Dennis (The Ohio State University) , Mikhail Belkin (The Ohio State University)
Abstract: Previous studies (Doxas, Dennis, & Oliver, 2010) show that natural language discourse exhibits a two-scale structure with a lower dimension at short distances and a larger dimension at long distances. We attempt to search for the source of this constraint in the visual input that goes into forming episodic experiences in human beings. Subjects use a Microsoft Research SenseCam that captures images every 10 seconds for an average of 5-7 hours every day for a week. The hypothesis is that if a two-scale structure is observed in the visual stream of images analyzed here, the two-scale structure of natural language discourse is possibly a result of a direct mapping of the structure of the environmental input stream onto the cognitive system. We use a recently developed color correlogram method to represent the images and a corresponding weighted Manhattan distance measure to calculate distances between pairs of images. We observe a two-scale structure in the correlation dimension plots when working in an appropriate orthogonal basis space and establish a correspondence with the results from the discourse study. Furthermore, we get subjects to segment their streams of images into episodes. They also tag each episode with a limited set of keywords, which will later be used to probe the similarity structure of how these episodes are organized in memory.
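The correlation-dimension estimate referenced in this line of work (in the Grassberger–Procaccia style) can be sketched on synthetic data. The uniform 2-D points and radius range below are illustrative assumptions, not the SenseCam data; the true dimension of this toy set is 2, and the slope estimate recovers it.

```python
import numpy as np

# Correlation-dimension sketch with the Manhattan (L1) distance.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(400, 2))        # synthetic 2-D "images"

# All pairwise Manhattan distances (upper triangle only).
d = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=2)
iu = np.triu_indices(len(pts), k=1)
dists = d[iu]

# Correlation sum C(r) = fraction of pairs closer than r, over a range of r.
rs = np.logspace(-1.3, -0.7, 10)              # small radii
C = np.array([(dists < r).mean() for r in rs])

# The correlation dimension is the slope of log C(r) versus log r.
slope = np.polyfit(np.log(rs), np.log(C), 1)[0]
print("estimated dimension:", round(slope, 2))
```

A two-scale structure would show up as two distinct slopes over different ranges of r, which is the signature the abstract describes looking for in the image stream.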

 
Social Cognition:
Location: Heritage
Session Chair: Richard Schweickert (Purdue University)
Learning to Pursue Multiple Goals: A Computational Model
Authors:  Justin Weinhardt (Ohio University ) , Jeffrey Vancouver (Ohio University )
Abstract: Psychology is seeking a parsimonious, formal model of human behavior. Recently, a dynamic computational model of self-regulation was developed to account for multiple goal pursuit over time (Vancouver, Weinhardt, & Schmidt, 2010). This model is composed of simple self-regulatory agents, which are part of a negative feedback loop. Each agent acts indirectly through output signals to achieve or maintain its desired state. These outputs are a function of the discrepancy between the perceived current and desired states. Over time, these agents pass information among themselves and the environment, leading to complex behavior. Parsimony is achieved because a single, simple structure is utilized to account for complex phenomena through interaction over time. Specifically, the Vancouver et al. model used five interacting self-regulatory agents to predict that when individuals strive for two competing goals with a single deadline, individuals will switch back and forth to the task with the largest discrepancy. However, as the deadline approaches and depending on rate of progress, individuals switch to the task with the smaller discrepancy, which was found in an experimental study. Building on this initial model, we incorporate a learning agent that reduces prediction error to learn the rate of goal progress. A second learning agent helps the model predict environmental disturbances. These disturbances are outside the control of the individual, and for successful goal progress, individuals must incorporate behavioral contingencies to counteract these uncertain disturbances. The learning agents are both consistent with self-regulatory agents and delta-learning rule models that underlie neural network models of supervised learning. Thus, this paper describes a dynamic, computational model of action (goal striving), thinking (decision making), and learning using the same simple building block, which recent reviews of the decision making literature advocate.
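The core negative-feedback idea can be sketched in a few lines. This is a toy illustration under our own assumptions, not the Vancouver et al. implementation: each agent's output is proportional to its discrepancy, and effort goes to whichever goal currently has the larger discrepancy.

```python
# Two self-regulatory agents pursuing competing goals via negative feedback.
def pursue(goals, current, rate=0.5, steps=40):
    """goals/current: dicts of desired and perceived states for each goal."""
    for _ in range(steps):
        # Discrepancy between desired and perceived state drives output.
        disc = {g: goals[g] - current[g] for g in goals}
        focus = max(disc, key=lambda g: abs(disc[g]))  # work on the larger gap
        current[focus] += rate * disc[focus]           # act to reduce it
    return current

state = pursue(goals={"A": 10.0, "B": 6.0}, current={"A": 0.0, "B": 0.0})
print(state)  # both goals approached via back-and-forth switching
```

Because the focused goal's discrepancy shrinks each step, the agent naturally switches back and forth between tasks, the qualitative pattern the model predicts before a deadline looms.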

The Dynamics of Perceived Message Effectiveness in Public Service Announcements
Authors:  Amber Westcott-Baker (University of California, Santa Barbara)
Abstract: One challenge in studying persuasive media messages is determining which message features are related to changes in attitude. Though all communication, including media such as public service announcements (PSAs), occurs over time, most attitude research collects data over very few time points (Biocca, David, & West, 1992; Fink, Kaplowitz, & Hubbard, 2002). Researchers have pointed to the need for time-series examinations of message effects, and not only at the panel level, but also at the micro-level, such as during message receipt or processing (e.g., Chung & Fink, 2010). For recorded media messages such as PSAs, over-time examination of processes and outcomes is especially attractive, since these messages have an exact time course that does not vary between subjects. Continuous-response measurement (CRM; Baggaley, 1987; Biocca, et al., 1992) is one method by which such micro-level effects can be studied. During CRM studies, participants use a computer or hand-held device to continuously provide feedback about a stimulus (e.g., continuously rating the "convincingness" of a persuasive ad). Feedback is sampled over relatively brief intervals (e.g., every 1s, 500ms, etc.) and recorded as a time series of responses. This talk will present a research paradigm for predicting time-series audience response to anti-marijuana PSAs using mathematical models based on information integration theory (Anderson, 1981) and the belief-adjustment model (Hogarth & Einhorn, 1992), and fitting predicted time courses to actual CRM output from participants. The model, based on recent research into the effects of argument quality and audiovisual message features on effectiveness (e.g., Kang, Cappella, & Fishbein, 2006), will include parameters and weights for argument attributes, audiovisual attributes, and participant drug risk. The goal of this modeling is to decompose CRM ratings into effectiveness scores for individual message features to improve future message design.
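The belief-adjustment component can be sketched as an anchoring-and-adjustment update applied at each CRM sampling interval. The evidence values and weight below are illustrative assumptions, not estimates from the study:

```python
# Hogarth & Einhorn-style belief adjustment over a message's time course.
def belief_trajectory(evidence, w=0.4, start=0.5):
    """Each rating moves a fraction w of the way toward the newest evidence."""
    belief, path = start, []
    for e in evidence:
        belief += w * (e - belief)    # adjust the anchor toward new evidence
        path.append(round(belief, 3))
    return path

# e.g., two strong argument moments (0.9) then two weak audiovisual moments (0.3).
print(belief_trajectory([0.9, 0.9, 0.3, 0.3]))
```

The predicted trajectory rises and then decays toward the weaker evidence, the kind of moment-to-moment time course that can be fit against the continuous-response output described in the abstract.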

Subnetworks of a Dream Social Network
Authors:  Richard Schweickert (Purdue University) , Zhuangzhuang Xi (Purdue University) , Charles Viau-Quesnel (Purdue University) , Hye Joo Han (Purdue University)
Abstract: In a social network people are represented by points and if two people have a certain relationship, for example, are friends, their corresponding points are connected by a line. A well-known property of a friendship network is short average path lengths: Between two arbitrarily chosen people one can usually find a short path of friends and acquaintances. Another property of friendship networks is high clustering: If A is a friend of B and B is a friend of C, then A and C tend to be friends of each other. Networks with both properties are called small world networks. In a dream social network, points represent characters in the dreams of an individual and a line connecting two points indicates that the corresponding characters were in a dream together. Many, but not all, dream social networks are small world networks. We report on a dream social network that is a small world network, but with an important subnetwork that is not. Specifically, a subnetwork constructed from all the dreams in which the dreamer’s father is present has short average path lengths, but low clustering. In contrast, three analogous subnetworks constructed from the dreams in which her mother is present, her sister is present or her brother is present have both small world network properties.
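The two small-world diagnostics used above can be computed directly. This is a minimal pure-Python sketch on an invented four-character graph (a triangle of co-occurring characters plus one pendant character), not the dream data:

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over connected pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Mean local clustering: fraction of each node's neighbor pairs that are linked."""
    cs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

# Characters A, B, C appeared in a dream together; D appeared only with C.
g = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(avg_path_length(g), clustering(g))
```

A small-world subnetwork scores low on average path length and high on clustering; the father subnetwork described in the abstract keeps the first property but loses the second.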

17:30 - 19:30
 
Poster Session:
Location: Room 106
Poster Session: All posters will be displayed for the entire 2-hour session, from 5:30 to 7:30. The poster session will be split such that only half of the presenters will stand by their posters and discuss their research at any given time. Presenters of odd-numbered (1, 3, 5, etc.) posters will stand by their posters for the first hour and presenters of even-numbered (2, 4, 6, etc.) posters will stand by their posters for the second hour.
(1) Bayesian Analysis of Memory Models
Authors:  Brandon Turner (The Ohio State University) , Trish Van Zandt (The Ohio State University) , Simon Dennis (The Ohio State University)
Abstract: Many influential memory models are simulation based. This often leads to likelihood functions which are difficult or impossible to evaluate. We investigate the use of approximate Bayesian computation (ABC), which replaces the likelihood calculation with a simulation of the model. Using results from Myung, Montenegro, & Pitt (2007), we first developed a full Bayesian model and obtained estimates of the posterior distributions for the parameters of BCDMEM (Dennis & Humphreys, 2001) using MCMC. We then applied ABC to the same data and model and compared the results. The closely-matching posterior estimates indicate the usefulness of the ABC approach. Given these findings, we then used ABC to estimate the posterior distributions of the parameters in REM (Shiffrin & Steyvers, 1997). We then investigated model selection procedures between BCDMEM and REM. To do this, we developed a hierarchical mixture model, which we then fit using ABC. The mixture modeling approach gave insight into the relationships between the parameters of the two models. In an additional simulation study, this approach provided clear patterns in the ROC space where observed data would be more likely to have arisen from BCDMEM or REM. Since the likelihood for BCDMEM was available, we developed a mixture algorithm which combines standard sequential Monte Carlo (SMC) within model steps for BCDMEM and ABC within model steps for REM. This allows us to treat the likelihood as supplemental information. We found that our new mixture algorithm provided posterior estimates nearly identical to those obtained using ABC.
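The ABC idea itself is easy to sketch: when the likelihood is intractable, keep parameter draws whose *simulated* data land close to the observed data. The coin-flip model below is an illustrative stand-in for a simulation-based memory model, not BCDMEM or REM:

```python
import random

# Rejection ABC for a simulation-based model with an intractable likelihood.
random.seed(7)

def simulate(p, n=100):
    """The model: n Bernoulli(p) trials, returning the number of successes."""
    return sum(random.random() < p for _ in range(n))

observed = 62                       # e.g., 62 "old" responses out of 100
accepted = []
for _ in range(20000):
    p = random.random()             # draw a parameter from a uniform prior
    # A tolerance on simulated-vs-observed data replaces the likelihood.
    if abs(simulate(p) - observed) <= 2:
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))    # close to the data proportion 0.62
```

The accepted draws approximate the posterior; shrinking the tolerance tightens the approximation at the cost of more rejected simulations, the basic trade-off behind the comparisons in the abstract.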

(2) Getting a Handle on Your Data
Authors:  Benjamin Stone (The Ohio State University) , Simon Dennis (The Ohio State University)
Abstract: We introduce the Handles application that has been developed to extract key concepts from large document sets and which allows users to visualize and organize key issues or themes within a multidimensional document space. A few notable applications of the Handles application include: essay grading, search engine result demarcation, life-logging, and sentiment analysis. We present results from an experiment that assesses 40 participants' performance at identifying key themes contained within 200 customer reviews of the Amazon Kindle reading device. Proficiency at this task is compared between participants who used the Handles application and participants using a more traditional "Google-like" list-based interface.

(3) Control in Technological Systems and Physical Intelligence: An Emerging Theory
Authors:  Bradly Alicea (Michigan State University)
Abstract: An increasing number of technological applications, from controlling virtual worlds to creating artificial organs, require intelligent physical control that meets several criteria. Research that combines materials, “physical” perception, and intelligent control can provide a useful tool for this emerging frontier of engineering and medicine. The first criterion of intelligent physical control is that it must be closely integrated with physiological functions such as movement, neuromuscular function, or touch. A review of the literature leads us to reinterpret this as “physical” intelligence. In accordance with this view, physical control performs functions such as prediction, pattern recognition, and adaptation. The second criterion is to have a strategy for understanding the structure of surfaces both commonly and uncommonly encountered. Surface properties include both the texture of objects and reaction forces from objects and the environment in general. In this scheme, physical intelligence operates on and is shaped by surface properties. The third criterion involves exception handling for “spiky” or “bursty” environmental inputs. Truly understanding the nature of intelligent control requires us to consider the non-Gaussian noise present in environmental stimuli. This can be characterized by asking a simple question: How are the reactive properties of materials and physical sensory systems characterized by intelligent control? Research conducted so far has revealed that:
• the “physical response” is an ability to match the amount of power produced with the amount and/or regularity of inertial force sensed in the environment;
• switching between surfaces with different properties can create an exception-handling mechanism related to learning;
• coupling motion with surface reaction forces provides a mechanism for learning.
Examples will be given from human-machine interaction, adaptive materials, and the development of artificial tissues.

(4) Towards a Dynamic Stochastic Model of Intertemporal Choice
Authors:  Junyi Dai (Indiana University)
Abstract: Intertemporal choice refers to the situation where people need to choose among two or more payoffs at different points in time. Most of the relevant studies so far assume, explicitly or implicitly, a deterministic perspective on this subject. According to this perspective, when measured repeatedly, people’s preference between a specific pair of intertemporal options should remain the same from trial to trial, and thus a single attempt is sufficient. The same assumption of the stability of preference, however, has been refuted by numerous studies on related research subjects such as risky choice. Consequently, a variety of stochastic models have been proposed to account for the probabilistic nature of human preference. Among these models, decision field theory (DFT), proposed by Busemeyer and Townsend (1993), and the proportional difference model (PDM) of González-Vallejo (2002) might be the most successful ones because, unlike other models, both of them can simultaneously explain violations of stochastic dominance, stochastic transitivity, and stochastic independence. The current study is intended to facilitate the development of a dynamic stochastic model of intertemporal choice that is analogous to those for risky choice. Specifically, the proposed model incorporates both direct difference (as in DFT) and relative difference (as in PDM) and utilizes the general framework of DFT to explain a number of intertemporal effects, including the immediacy effect, the reward magnitude effect, and the delay amount effect. It is also hypothesized that the general effect of time pressure on decision making applies to intertemporal choice. Preliminary mathematical analysis and simulation results confirm the capacity of the current model to predict the general pattern of those effects. An empirical study is underway to test the validity of the proposed model quantitatively. Both choice probability and response time are recorded and will be involved in the model fitting process.
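The DFT-style sequential-sampling machinery invoked here can be sketched as a noisy accumulation-to-threshold process that yields both choice probabilities and response times. All parameter values below are illustrative assumptions, not the proposed model's estimates:

```python
import random

# Sequential-sampling sketch: preference drifts and diffuses to a threshold.
random.seed(3)

def trial(drift, threshold=1.0, noise=1.0, dt=0.01):
    """One simulated choice: accumulate preference until a bound is crossed."""
    p, t = 0.0, 0
    while abs(p) < threshold:
        p += drift * dt + random.gauss(0, noise) * dt ** 0.5
        t += 1
    return (p > 0, t)    # (chose option A?, number of time steps taken)

# A positive drift toward option A yields P(choose A) > .5, plus an RT distribution.
results = [trial(drift=0.5) for _ in range(2000)]
p_choose_a = sum(c for c, _ in results) / len(results)
mean_rt = sum(t for _, t in results) / len(results)
print(round(p_choose_a, 2), round(mean_rt))
```

Because every simulated trial produces both a choice and a finishing time, a model of this form can be fit jointly to the choice probabilities and response times mentioned at the end of the abstract.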

(5) The Dynamics of Post-decisional Processing in Confidence Judgments
Authors:  Shuli Yu (Michigan State University) , Tim Pleskac (Michigan State University)
Abstract: Single-stage cognitive models of decision making assume that confidence judgments reflect a person’s state of mind at the point when they make a decision. However, there is evidence to suggest that people continue to process information about their decision after they have already responded with their final choice (Petrusic & Branski, 2003; Van Zandt & Maldonado-Molina, 2004), and that increasing the duration of post-decisional processing may result in better resolution of confidence judgments (Pleskac & Busemeyer, 2010). This research utilizes the two-stage dynamic signal detection model (2DSD) to investigate the dynamics of post-decisional processing and how it impacts the rate of evidence accumulation (drift rate) and the accuracy of confidence judgments in terms of resolution and calibration. In this study, participants make an initial choice after a fixed decision time, and subsequently make a confidence judgment using a 3 x 3 design (350ms, 700ms or 1400ms inter-judgment time; easy, medium or hard task difficulty). 2DSD is fit to participants’ data and compared to other sequential sampling models. A better understanding of how post-decisional processing influences confidence ratings will highlight how decision makers can moderate their decision strategies to optimize confidence judgments.

(6) A Dynamic Model for Recognition Memory: Towards a Solution to the Problem of Criterion Setting
Authors:  Gregory Cox (Indiana University) , Richard Shiffrin (Indiana University)
Abstract: Decisions in recognition memory are often assumed to result from a comparison of the familiarity of a test item to some criterion value--above this criterion, one responds that the item is old (e.g., on the most recent study list); below it, one responds that the item is new. Familiarity can arise from multiple sources, however, including other recently seen items (which match the context of the test item and may also bear some incidental similarity to the test item in terms of perceptual or conceptual content) and instances of the test item from prior life history (which do not match well on context, but match well on content). Because the absolute familiarity of a test item may, therefore, vary widely with the degree of prior experience and the number of other studied items, it is unclear how participants might set reasonable old/new decision criteria if decisions are based solely on absolute familiarity. To address this issue, we propose a model in which recognition decisions are based not on the absolute value of familiarity, but on how familiarity changes over time as features are sampled from the test item. Recognition decisions are, then, the outcome of a race between two parallel accumulators: one that accumulates positive changes in familiarity, leading to an "old" decision; and another that accumulates negative changes, leading to a "new" decision. Model predictions of both accuracy and response latency are in accord with extant data on human recognition memory performance. Although this model is hardly the final word on recognition memory, we believe it is a promising approach to the problem of criterion setting in recognition, and it suggests that deeper study of the dynamics of long-term recognition will prove fruitful for future theory development.
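The race described above can be sketched directly: sample a change in familiarity at each step, route positive changes to an "old" accumulator and negative changes to a "new" accumulator, and respond with whichever finishes first. The Gaussian familiarity-change distribution and all parameter values are assumptions for illustration, not the authors' model:

```python
import random

# Race between "old" and "new" accumulators driven by changes in familiarity.
random.seed(5)

def recognize(mean_change, threshold=3.0):
    old_acc = new_acc = 0.0
    steps = 0
    while old_acc < threshold and new_acc < threshold:
        delta = random.gauss(mean_change, 1.0)   # sampled change in familiarity
        if delta > 0:
            old_acc += delta                     # rising familiarity -> "old"
        else:
            new_acc -= delta                     # falling familiarity -> "new"
        steps += 1
    return ("old" if old_acc >= threshold else "new", steps)

# Studied items tend to produce rising familiarity, hence mostly "old" responses.
responses = [recognize(mean_change=0.8)[0] for _ in range(500)]
print(responses.count("old") / len(responses))
```

Note that the decision depends only on the *direction* of familiarity change, not its absolute level, which is how this scheme sidesteps the criterion-setting problem the abstract raises.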

(7) Modeling Risky Sexual Behaviors Over Time Using Simple Cognitive Decision Heuristics
Authors:  Andrew Hendrickson (Indiana University Bloomington) , J. Dennis Fortenberry (Indiana University School of Medicine) , Peter Todd (Indiana University Bloomington)
Abstract: The search for and choice of potential mates involves multiple decisions over time that must be made in the face of environmental uncertainty and potentially involving multiple factors that change in importance. This work begins to identify the cognitive mate search mechanisms that underlie adolescent sexual activity as a search process through time. Previous models of mate choice have used data restricted to a brief period of time, e.g. speed dating (Lenton, et al., 2009) or demographic census data (Todd, et al, 2005). Here we present agent-based models of mate choice decisions over multiple points within multiple relationships for each individual, utilizing data from the Young Women’s Project: daily diary entries from nearly 400 young women outlining their romantic and sexual interactions with partners for up to eight years (Brown, et al., 2005). The decision is certainly made in part based on the characteristics of the person being considered as a mate as well as how those features match up with the desired features in one’s ideal mate. But the decision will also be influenced by many of the factors of relationships one has had before. Thus, to get a complete picture of the mate choice process, we need to know not only what characteristics are being sought and found in the partners that people have sex with, but also what was being sought previously, and whether the previous searches were successful or unsuccessful in terms of the sexual relationship. We are able to identify some of the factors influencing the initiation and cessation of single and concurrent relationships (e.g. match between ideal-partner and actual-partner traits), as well as the dynamics leading to change in behaviors (e.g. condom use, search for additional partners) within and between relationships over time.

(8) Modeling Uncertainty and Familiarity Biases in Associative and Statistical Learning
Authors:  George Kachergis (Indiana University) , Chen Yu (Indiana University) , Richard Shiffrin (Indiana University)
Abstract: In the cross-situational statistical learning paradigm (Yu & Smith, 2007), participants are faced with learning word-referent pairings from a series of trials, each of which contains multiple words and referents. Thus, on each trial the meanings are ambiguous, and learners must integrate word-referent co-occurrences across trials to disambiguate the intended pairings. Previous cross-situational studies have found that adults typically acquire an impressive number of pairings from only a few minutes of training, and that such learning is modulated in sometimes surprising ways by factors such as pair frequency, contextual diversity, and temporal contiguity (Kachergis et al., 2010; Kachergis et al., 2009). I propose a new associative model to account for these data, which incorporates both a bias for strengthening already-strong associations and a bias for giving more attention to stimuli with no strong associates (i.e., high uncertainty or entropy). I will show that this simple associative model produces both the inference-like behavior that rule- and logic-based models demonstrate and the order effects and graded associations that other associative models exhibit. Finally, I will show that this model can also predict blocking, highlighting, and illusory correlation effects from the associative learning literature—effects that are challenging for many extant models of associative learning (Kruschke, 2008).
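The two biases described in the abstract can be illustrated with a toy associative matrix. The weighting scheme below (current strength times an exponential entropy boost), the parameter names `chi` and `lam`, and the restriction of entropy to the stimuli present on the current trial are illustrative assumptions for the sketch, not the authors' published model:

```python
import math
from collections import defaultdict

# Toy associative matrix combining a familiarity bias (strengthen
# already-strong pairs) with an uncertainty bias (attend more to stimuli
# whose associations have high entropy). All choices here are illustrative.

strength = defaultdict(lambda: 0.01)   # (word, object) -> association strength

def entropy(stim, partners, axis=0):
    """Shannon entropy of a stimulus's normalized association strengths."""
    vals = [strength[(stim, p)] if axis == 0 else strength[(p, stim)]
            for p in partners]
    total = sum(vals)
    ps = [v / total for v in vals]
    return -sum(p * math.log(p) for p in ps if p > 0)

def learn_trial(words, objects, chi=1.0, lam=1.0):
    """Distribute a fixed amount of strength chi across co-present pairs."""
    weights = {}
    for w in words:
        for o in objects:
            # familiarity bias: proportional to current strength;
            # uncertainty bias: boosted by the entropy of each stimulus
            weights[(w, o)] = strength[(w, o)] * math.exp(
                lam * (entropy(w, objects, 0) + entropy(o, words, 1)))
    z = sum(weights.values())
    for pair, wt in weights.items():
        strength[pair] += chi * wt / z
```

After repeated trials in which a word and object reliably co-occur, that pair accumulates most of the per-trial strength, while stimuli with diffuse (high-entropy) associations continue to attract attention.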

(9) A Simulation of a Longitudinal Theory of Reasoned Action with Implications for the Fit of the Cross-Sectional Theory of Reasoned Action and How Past Behavior May Influence Model
Authors:  Franklin Boster (Michigan State University) , Allison Shaw (Michigan State University) , Jake Liang (Michigan State University)
Abstract: The Theory of Reasoned Action (TRA) (Fishbein & Ajzen, 1975) clarifies distinctions among the four major components of the theory: normative social pressure (the subjective norm), affect (attitude toward the behavior), cognition (behavioral intention), and action (behavior). Attitude toward the behavior and subjective norm are the only predictors of behavioral intention. Behavioral intention is endogenous in the TRA, its direct causes being both attitude and subjective norm. The TRA also asserts that behavior is endogenous, behavioral intention being its sole antecedent. Converting these verbal propositions to difference equations yields the following expressions in recursive form:
A1 = αA0; α = 1
N1 = βN0; β = 1
I1 = ΦN0 + ΨA0 + ΩI0; Ω = 1 – Φ – Ψ
B1 = λI0 + ΠB0; Π = 1 – λ
To extend the TRA and its implications, this manuscript has two goals. First, a longitudinal, or dynamic, model of the TRA (DTRA) is advanced. Second, cross-sectional tests develop the implications of the DTRA. Specifically, it will be shown that even when the DTRA describes the causal process perfectly, cross-sectional tests of the fit of the TRA may fail. Simulations were conducted to examine the implications of the DTRA and the RDTRA for the fit of cross-sectional tests of the TRA. In addition to examining the fit of the cross-sectional TRA at each time point, the time necessary to reach equilibrium (SD < .0005) was examined as well. It is unclear exactly how time to reach equilibrium and the fit of the cross-sectional model will change for the DTRA, since no prior research has examined such a model. To examine these notions, autoregressions were varied. It was important to scrutinize time to reach equilibrium because it is expected that as the DTRA approaches equilibrium, the fit of the cross-sectional model will improve (Kaplan et al., 2001). The details of a simulation designed to test these propositions will be presented at the meeting.
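The recursive equations in the abstract can be iterated directly. A minimal sketch, assuming illustrative parameter values and starting states, and using the largest one-step change (standing in for the paper's SD < .0005 criterion) as the equilibrium test:

```python
# Minimal sketch of the DTRA difference equations. Parameter values,
# starting states, and the convergence criterion are illustrative
# assumptions, not taken from the paper.

def dtra_step(A, N, I, B, phi=0.3, psi=0.4, lam=0.5):
    """One time step of the dynamic Theory of Reasoned Action."""
    omega = 1 - phi - psi                 # Omega = 1 - Phi - Psi
    pi_ = 1 - lam                         # Pi = 1 - lambda
    A1 = A                                # attitude is stable (alpha = 1)
    N1 = N                                # subjective norm is stable (beta = 1)
    I1 = phi * N + psi * A + omega * I    # intention
    B1 = lam * I + pi_ * B                # behavior
    return A1, N1, I1, B1

def time_to_equilibrium(state, tol=0.0005, max_steps=1000):
    """Number of steps until the largest one-step change falls below tol."""
    for t in range(1, max_steps + 1):
        new = dtra_step(*state)
        if max(abs(x - y) for x, y in zip(new, state)) < tol:
            return t
        state = new
    return max_steps

print(time_to_equilibrium((0.8, 0.2, 0.5, 0.1)))
```

With attitude and norm held constant, intention converges geometrically to (ΦN + ΨA)/(Φ + Ψ) and behavior follows it, so varying the autoregressive weights Ω and Π changes the time to equilibrium, as the abstract describes.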

(10) Quality of Care and Pain Processing in the Brain
Authors:  Lu Wang (Michigan State University) , Ashley Bartell (Michigan State University) , Kellen Fox (Michigan State University) , Allison Gardner (Michigan State University) , S. Austin Lee (Michigan State University) , Chelsea Gordon (Michigan State University) , Matthew Goodman (Michigan State University) , Issidoros Sarinopoulos (Michigan State University) , Robert Smith (Michigan State University)
Abstract: A positive patient-provider relationship (PPR) is related to the provision of patient-centered care (PCC) and health benefits. Supporting evidence comes from randomized controlled trials (RCTs) showing that our group’s behaviorally defined PCC method is positively correlated with improved patient satisfaction and health outcomes. Though behavioral and communicative aspects of the PPR have been the subjects of cross-disciplinary study, potential neural correlates of a positive PPR have yet to be thoroughly explored. This study endeavors to characterize the effects of a positive PPR on neural pain pathways through the administration and assessment of a PCC intervention. In this study, 6 right-handed females between 45 and 65 years of age underwent an MRI preparation procedure with a doctor following PCC or a doctor following doctor-centered care (DCC). Outcomes of the preparation procedure were independently determined by a blinded rater and by the patients’ responses to our validated questionnaire. Next, patients completed fMRI scans while aversive stimulation was applied to their left hand while monitored by the preparation doctor or an unknown doctor. Three trial types were included in each segment, each signaled by a color cue: pain stimulation, no stimulation, and uncertain stimulation. After each segment patients rated the intensity of the preceding stimulations. In this initial stage of the study we confirmed that the anticipation of and response to aversive stimulation elicited neural responses in pain-related areas. Increased activation was observed in bilateral insula, anterior and middle cingulate, and surrounding regions implicated in the modulation of affect-related processing, bodily arousal, and visceral and musculoskeletal pain-related responses. These areas constitute the pain-sensitive regions of interest (ROIs) in which we will next assess the extent to which PCC is associated with reduced pain and pain-related activation.

(11) Testing a Dual-process Model of Media Entertainment
Authors:  Robert Lewis (Michigan State University) , Ron Tamborini (Michigan State University) , Devin McAuley (Michigan State University) , Rene Weber (University of California, Santa Barbara) , Charles Atkin (Michigan State University)
Abstract: Entertainment theory argues that experiences of “enjoyment” versus “appreciation” can be distinguished by the cognitive processes that produce positive valuations of narrative media. According to this logic, positive valuations of entertainment are controlled by two different processes represented in a dual-process model. The two processes are initiated by attributes found in narrative resolutions: The first type of narrative resolution presents no cognitive conflict. Evaluation of this type of resolution is said to be fast and intuitive, and the positive valuation of this resolution is labeled enjoyment. By contrast, the second type of narrative resolution presents cognitive conflict. Evaluation of this type of resolution is said to be slow and deliberative, and the positive valuation of this resolution is labeled appreciation. This paper presents results supporting the claim that the experiences of enjoyment and appreciation can be distinguished by these two cognitive processes. Results show that the time necessary to evaluate narrative resolutions with cognitive conflict is longer (indicative of deliberative evaluations) than the time necessary for positive evaluations of narrative resolutions that do not present cognitive conflict. Implications for entertainment theory are discussed.

(12) Crystallization Theory: News Distribution and Construction of Reality in the Era of Social Media
Authors:  Donghee Wohn (Michigan State University) , Brian Bowe (Michigan State University)
Abstract: The notion that reality is not objectively “out there” but instead socially constructed is a longstanding philosophical debate dating back to the 1800s. With the introduction of mass media in the early 1900s, scholars began to argue that mass media contribute to our understanding of reality—from Lippmann to Gerbner, scholars have suggested that the media subjectively shape what people view as reality. Distribution of media, however, has drastically shifted with the introduction of the Internet. Although social networks have always been influential in shaping what we perceive as being important, social media such as Facebook and Twitter are making our networks more salient. In this media environment, we suggest Crystallization Theory as a new framework for understanding the social construction of reality in the age of social media. Crystallization Theory builds on social influence theory, which posits that people have a fundamental desire to tune their attitudes toward groups with which they want to affiliate. Amidst the sea of information, social media amplify information produced by the members of our social networks, who become neo agenda setters. These neo agenda setters filter information from major media outlets and introduce information that one would otherwise not be familiar with. Since people are influenced by members of their social network, we will see patterns arise where people’s perception of reality will crystallize through their social networks and everyone will perceive that the information their social network produces reflects mainstream news, but there will be no true mainstream. We introduce the basic propositions of this theory and the factors that would “shape” the pattern of crystallization, and discuss its implications.

(13) Judging Heroes and Villains: The Neural Underpinnings of Person Perception
Authors:  Allison Eden (Michigan State University) , Allison Gardner (Michigan State University) , Kellen Fox (Michigan State University) , Chelsea Gordon (Michigan State University) , Matthew Goodman (Michigan State University) , Lu Wang (Michigan State University) , S. Austin Lee (Michigan State University) , Issidoros Sarinopoulos (Michigan State University)
Abstract: Although neuroimaging research has recently started to explore affective processes in moral judgment, the role of moral emotions and dissociations between uniquely moral and more basic affective processing have not been adequately addressed. Current research has thus far also failed to distinguish between judgments of moral violations versus morally virtuous behaviors, and between judging a person versus judging an act. Furthermore, no study has yet examined the neural underpinnings of different moral domains. In this fMRI study, 24 participants were presented with sequential information beginning with the presentation of an individual’s face, followed by a description of a moral or immoral action, and finally a screen in which the individual or the behavior is judged. The trial distribution of moral trials is determined by a 2 Morality (Uphold, Violate) x 2 Domain (Individual, Community) x 2 Task (Person, Action) design, including non-moral trials as controls. After scanning, participants retrospectively indicated the emotion associated with each trial type from a list of moral emotions. Brain regions known to be involved in positive affective processing, such as orbitofrontal cortex and ventral striatum, are hypothesized to show increased activation to morally virtuous actions. Uniquely moral activation in these and other regions will be assessed by subtracting activation to positively valenced non-moral trials from activation to morally virtuous trials. Activation in these regions is hypothesized to show a positive correlation with self-reported ratings of moral elevation or admiration. The same analytic strategy will be used for moral violations. Brain regions known to underlie person perception and judgment, such as anterior medial prefrontal cortex, are hypothesized to show increased activation during person vs. action judgment trials. Finally, we predict discernible and dissociable patterns of neural activity in response to domain-specific stimuli.

(14) Revealing the Underlying Mechanism of Implicit Race Bias
Authors:  Haiyuan Yang (Indiana University) , Joseph Houpt (Indiana University) , Arash Khodadali (Indiana University) , Austin Chapman (Indiana University) , James Townsend (Indiana University)
Abstract: We have conducted an experiment to study the mechanism of mental processing underlying implicit race bias. It has been demonstrated with the Implicit Association Test (IAT) that, when two concepts are associated (e.g., African American people and violence), the IAT’s sorting tasks will be easier when the concepts share the same response (a “compatible task”) than when they require different responses (an “incompatible task”). Although these results are quite robust and well studied, until now there has not been an in-depth study of the differences in processing mechanisms that give rise to this disparity. We have combined the approach used in the IAT with Systems Factorial Technology (SFT), a powerful toolkit for testing the architecture, workload capacity, and dependency in mental processing. Our work focused on two measures within SFT, the survivor interaction contrast (SIC) and the workload capacity coefficient (Ct). The SIC indicated that the disparity due to varying compatibility was not due to an underlying difference in architecture. However, we found reliable differences in Ct, indicating differential interactivity in processing race and affect depending on compatibility. These results dovetail nicely with the existing literature on the IAT while adding a more in-depth understanding of the processing differences.
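The survivor interaction contrast mentioned above has a standard definition in the SFT literature: SIC(t) = S_LL(t) − S_LH(t) − S_HL(t) + S_HH(t), computed from the survivor functions of response times in the four factorial (low/high salience) conditions. A minimal sketch of that computation; the RT lists would in practice be empirical data, and nothing here is specific to the authors' experiment:

```python
# Empirical survivor interaction contrast (SIC) from Systems Factorial
# Technology, computed from RT samples in the four factorial conditions.

def survivor(rts, t):
    """Empirical survivor function: the proportion of RTs exceeding t."""
    return sum(rt > t for rt in rts) / len(rts)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t)."""
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))
```

The shape of SIC(t) over time (e.g., entirely negative, entirely positive, or S-shaped) is what diagnoses the underlying processing architecture.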

(15) The Utility of Emotions: Integrating Emotion into MAUT
Authors:  Justin Weinhardt (Ohio University) , Anastasia Milakovic (Ohio University) , Claudia Gonzalez-Vallejo (Ohio University)
Abstract: When Bentham (1789) proposed the concept of utility, emotions were an important part of the concept; he theorized that utility is the sum of positive and negative feelings towards an object. One of the most widely used prescriptive decision making models under certainty is multi-attribute utility theory (MAUT), but emotions have yet to be integrated into a MAUT analysis, leaving an incomplete account of decisions under certainty. The purpose of the current study is to fill this gap by comparing a standard MAUT analysis to a MAUT analysis that integrates emotions. Individuals were randomly assigned to evaluate apartments (non-emotional decision) or relationship partners (emotional decision). Each participant completed a standard MAUT evaluation and an affective MAUT evaluation in which their ratings were obtained from the self-assessment manikin (Lang, 1980), a measure of affect that correlates highly with physiological measures of affect. Individuals also made binary choices among all possible pairs of the stimuli used in the experiment. They also made choices among three options of the same stimuli. People in the relationship condition were about three times as likely as those in the apartment condition to have their standard and affective MAUT align. However, when using MAUT values to predict choices, a large majority of individuals (91% for standard and 90% for affect) violated their MAUT predictions in binary choices, and similar results were found for the three-option choices (88% for standard and 91% for affect). When making choices among pairs, 37% of individuals violated transitivity, and among three options, 60% of individuals violated transitivity. However, condition did not have a significant effect on violations of transitivity. These results reveal that preference ratings in an emotional domain are best assessed with an emotional scale; however, preferences are inconsistent when changes in the procedure (rating versus choices) occur.
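A standard (riskless) MAUT evaluation scores each option as a weighted sum of its single-attribute values. A minimal sketch, with attribute weights and apartment ratings that are purely illustrative rather than drawn from the study:

```python
# Standard multi-attribute utility: overall utility is the weighted sum
# of single-attribute values. Weights and ratings below are illustrative.

def maut(weights, values):
    """Overall utility = sum of attribute weights times attribute values."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * v for w, v in zip(weights, values))

# Two hypothetical apartments rated on rent, size, and location (0-1 scale),
# with importance weights summing to 1.
weights = [0.5, 0.3, 0.2]
apt_a = [0.9, 0.4, 0.6]
apt_b = [0.6, 0.8, 0.7]
print(maut(weights, apt_a), maut(weights, apt_b))
```

An affective MAUT analysis of the kind the abstract describes would keep this aggregation rule but replace the attribute value ratings with affective ratings such as those from the self-assessment manikin.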

(16) Temporal and Spatial Consistency Between Cursive and Printed Handwritings in Early School Age Children
Authors:  Julia Barta (Eastern Michigan University) , Joni Krueger (Eastern Michigan University) , Jessica Marsh (Eastern Michigan University) , Loni McQueen (Eastern Michigan University) , Jin Bo (Eastern Michigan University)
Abstract: Strokes and loops are the basic components of handwriting. These writing movements can be further divided into discontinuous (e.g., printed letters that are written independently) and continuous patterns (e.g., cursive letters that are written continuously). One of the most salient features of “good” handwriting is temporal and spatial consistency across repetition (Wann, 1987). It has been suggested that the cerebellum controls the explicit timing underlying temporal consistency during discontinuous but not continuous movements (e.g., Spencer et al., 2003). Compatibly, the cerebellum functions as an internal model of body mechanics, including kinematics and dynamics (e.g., Bastian et al., 2000), which differ in the control of loops and strokes (e.g., Dounskaia, 2007). It has been documented that the cerebellum has relatively later and slower development (e.g., Giedd et al., 2001). Thus, the current study focused on the temporal and spatial consistency between cursive and printed handwriting in early school age children. Twenty-eight children (5-12 years) were screened with the Movement Assessment Battery for Children (Henderson & Sugden, 1992) and the Beery–Buktenica Developmental Test of Visual-Motor Integration (Beery, 1997). They were asked to write the letters “l” and “e” in printed and cursive forms. Data from 19 typically developing children were included in the current analysis. Results showed that children moved faster and traveled shorter distances for the printed letters “e” and “l” than for the cursive letters (all p < .05). Writing the printed letter “l” was faster and shorter and had lower spatial variability (CV of travel distance) than the printed letter “e”. The difference in temporal variability (CV of movement time) between the printed letters “l” and “e” approached significance (p = .08). Results suggest that children perform better on printed letters, especially printed strokes, than on cursive writing. The “explicit timing” hypothesis was not supported.

(17) Decoding Object-Based Attention Signals in the Human Brain
Authors:  Youyang Hou (Michigan State University) , Taosheng Liu (Michigan State University)
Abstract: Visual attention can be directed to spatial locations and various features, as well as to a unitary object independently of spatial and feature variations. Previous work has shown that object-based attention can modulate neural activity in category-selective areas in the ventral visual cortex. However, whether earlier visual areas can be modulated by object-based attention, and how higher-order areas control and represent the deployment of object-based attention, is not clear. To investigate the neural mechanism of object-based attention, we presented two superimposed objects with similar shape that occupied the same spatial location, and asked participants to perform an attention-demanding task on one of the objects. We observed an enhanced fMRI response for the object-attended condition compared to a neutral condition in a network of occipital-parietal-frontal areas. There was no difference in overall sustained fMRI response between attending to different objects. Using multivariate pattern analysis (MVPA), however, we successfully "read out" the attended object from activity patterns in early visual areas (V1 to MT+), object-selective areas (the lateral occipital complex, LOC), and some parietal and frontal areas (e.g., IPS, MFG, and SFG). These results indicate that neural activity in multiple visual areas can be modulated by object-based attention. Furthermore, parietal and frontal cortical regions contain neural signals related to the priority of the attended object.

(18) EEG Correlates of Risk and Regret in Experience-Based Choices
Authors:  Woo-Young Ahn (Indiana University) , Olga Rass (Indiana University) , Yong Wook Shin (Ulsan University School of Medicine, Korea) , Joshua Brown (Indiana University) , Jerome Busemeyer (Indiana University) , Brian O'Donnell (Indiana University)
Abstract: Electroencephalography (EEG) responses reflect action/performance monitoring signals such as outcome valence and prediction-error magnitude. However, the EEG response to risk and regret remains unclear. The objectives of this study were to examine: 1) whether human choice behavior in experience-based paradigms is influenced by risk and regret; and 2) whether the EEG response reflects risk and regret when making choices and evaluating choice outcomes. In Experiment 1, participants performed multiple experience-based two-choice gambling tasks with expected value and risk (measured by coefficient of variation) varied across tasks. The expected values were equated for both options in each task. Only the payoffs of chosen options were revealed. The results showed that the majority of participants preferred safe options, and the probability of choosing a safe option was influenced by expected value (i.e., more risk-averse with higher expected value), but not by risk. In Experiment 2, a separate group of participants performed four gambling tasks while EEG was continuously recorded. Only risk was varied across tasks, and foregone payoffs were also presented. The results indicated that the probability of choosing a safe option monotonically increased as risk increased. Preliminary EEG findings suggest that the N2 and P300 components are associated with risk and regret, respectively. Then, we used reinforcement learning models to reveal the underlying cognitive processes. A best-fitting model was selected using multiple model-comparison methods, and hierarchical Bayesian analysis was used to estimate its model parameters. We are currently working on model-based EEG analysis (i.e., explaining trial-by-trial variations in EEG components with a reinforcement learning model). These results showed that: 1) choice behavior in (experience-based) bandit tasks was modulated by risk only with foregone payoffs, and 2) preliminary results suggest the EEG response might reflect risk and regret signals.

(19) Self-Timing in Memory and Visual Search
Authors:  Hye Joo Han (Purdue University) , Charles Viau-Quesnel (Universite Laval) , Zhuangzhuang Xi (Purdue University) , Richard Schweickert (Purdue University) , Claudette Fortin (Universite Laval)
Abstract: A hypothesis of Ewart Thomas is that when a subject processes information, an estimate of the processing time is obtained as a byproduct. Is the estimate accurate? Subjective estimates of time often depend on concurrent non-temporal tasks. The ability of people to estimate the time they spent on two different search tasks was investigated using a time production (TP) method. Participants conducted a memory search task in Experiment 1 and a visual search task in Experiment 2. After each search trial, they were asked to reproduce their response time (RT). The correlation coefficients between RT and TP varied enormously across participants. Results suggest that some people are better than others at self-timing. Using a median split, analysis focused on the participants in the upper half, who were competent at self-timing. Both search tasks had two factors, set size and target presence, each with two levels. For RT, the effect of set size was significant in both tasks, while the effect of target presence was significant only in the visual search task. Factors that affected RT also had significant effects on TP in both tasks. TP was smaller than RT in memory search for every condition, while TP was greater than RT in visual search except in the higher set size, target-absent condition. This suggests that stronger interference with time estimation made the perceived RT shorter than the actual RT in memory search and in the highest-load condition of visual search.

(20) McCollough Effect: Pairing the Same Color with Two Different Orientations.
Authors:  Walter Beagley (Alma College) , Zachary Johnson (Alma College) , Jeffrey Nielson (Alma College)
Abstract: The McCollough (1965) effect is an orientation-specific color aftereffect which causes achromatic lines to appear in different colors depending on their orientation. This study was done to resolve a disagreement between Humphrey, Dodwell, & Emerson (1985) and Allan & Siegel (1997) on whether the same color can be associated with two different orientations. Three trained observers used the method of adjustment to measure the strength of McCollough effects produced in three different ways: 1. Same color: magenta was paired with both vertical and horizontal lines; 2. Single color: magenta was paired with vertical lines only; 3. Dual color: magenta was paired with vertical lines, green with horizontal lines (the standard McCollough procedure). Each pairing was presented for twelve 15-sec periods. Perception of aftereffect color was tested with achromatic lines at vertical, horizontal, and six in-between orientations. Observers used a computer mouse to adjust the green-magenta color balance until the test pattern appeared achromatic. The amount of magenta or green added was taken as the strength of the complementary aftereffect. Results for the same color group showed a significant green aftereffect for both horizontal and vertical lines. It was, however, much weaker than the effect produced by the single color and dual color conditions, even though all three conditions involved the same amount of exposure. Further work will investigate the reasons for this difference and the implications for a Pavlovian interpretation.

(21) Sex Differences in Self-Reported Interest in Infants are Related to Visual Preferences for Infant and Adult Faces
Authors:  Rodrigo Cárdenas (Pennsylvania State University) , Lauren Harris (Michigan State University)
Abstract: Because alloparental infant care is essential to human reproductive success, it has been hypothesized that a variety of biological and cognitive mechanisms have evolved to facilitate adults’ interest in and responsiveness to infants in ways normally leading to care-giving. Evidence for such mechanisms comes from studies indicating that adults show attentional-emotional biases, along with distinct neurophysiological responses, toward features and behaviors that make infants look “cute.” These biases also are normally stronger in women than in men, consistent with the hypothesis that women, due to their central role in infant care, have evolved a greater and more stable sensitivity to infants. However, despite evidence that infants constitute a special stimulus category, the nature of the cognitive mechanisms underlying interest in infants is still largely unexplored. In this study, we examined whether one such cognitive mechanism is visual attention toward infant features, and if so, whether and how it is associated with adults’ self-reported interest in infants. We used eye-tracking to measure the oculomotor behavior of nulliparous undergraduates while they viewed pairs of faces, with each pair consisting of one adult face (a man or woman) and one infant face (a boy or girl). Subjects then completed an interest-in-infants questionnaire. The results show that women's interest in infants, as indexed by their oculomotor behavior toward infant faces and their self-reported interest in infants, is more stable than men's, consistent with the hypothesis about the evolution of sex differences. The results also show that oculomotor behavior can be successfully used to assess individual differences in interest in infants.

(22) Perceptual Isochrony and Fluency in Speech by Normal Talkers under Varying Task Demands
Authors:  Laura Dilley (Michigan State University) , Jessica Wallace (Michigan State University) , Chris Heffner (Michigan State University)
Abstract: Perceptual isochrony refers to the perception that the stressed syllables in spoken language are equidistant from one another, even though syllables almost never show equidistant temporal spacing acoustically. Until recently, perceptual isochrony was not known to play a functional role in language processing; however, effects of perceptually isochronous speech on word segmentation and lexical access have recently been identified using controlled speech materials (Dilley & McAuley, 2008; Dilley et al., 2010). The present work extends this research by investigating the linguistic conditions under which perceptual isochrony arises in spoken language, thus helping to assess the usefulness of this cue for word segmentation and lexical access. We investigated the hypothesis that perceptual isochrony occurs with greater likelihood (a) in list sequences, and (b) in task conditions which are not taxing to working memory resources. An experiment was conducted in which thirty lists of five semantically related monosyllabic words were recorded from nine speakers. These lists were produced under three conditions: a read condition, a condition where the lists were memorized before recitation, and a condition that included a comprehension task for a sentence that was read while a memorized list was simultaneously recited. Perceptual isochrony was assessed in two ways. First, recorded lists were notated for perceptual isochrony by coders trained in the use of a prosodic annotation system. Second, the recorded lists were judged on a six-point Likert scale for fluency and for rhythmicity by naïve listeners. Lists with greater levels of anisochrony were produced under conditions that were more taxing to working memory. Similarly, lists produced with fewer instances of perceptual isochrony were rated as both less rhythmic and less fluent overall. This research holds implications for understanding speech perception and production under fluent and nonfluent conditions.

(23) Effects of Language Attrition on Tonal Changes
Authors:  Chia-Hsin Yeh (Michigan State University) , Jung-Yueh Tu (Indiana University)
Abstract: The study investigated a correlation between a recent tonal change and language attrition in Hakka and Taiwanese. According to Luo (2005) and Yeh (2010), a mid level tone in the two languages was found to be categorized as a low-falling tone among young speakers who speak Mandarin more frequently than their mother tongue. The mid level tone could be mislabeled as a low-falling tone due to a dramatic decrease in language exposure. Frequent exposure to Mandarin, which has no mid level tone, may lead to confusion of the two similar tones, the mid level tone and the low-falling tone, in the two languages. The current study examined perception and production of Taiwanese tones by 21 young Taiwanese speakers. In Exp 1, Taiwanese participants were instructed to discriminate five Taiwanese tones using an AXB paradigm. In Exp 2, they were requested to identify which tone they heard. In Exp 3, they were asked to produce Taiwanese words with five different tones. In addition, a spectrographic analysis of 90 Hakka on-line dictionary entries was conducted. The mid level tone was also predicted to occur more often in speech errors. The results conformed to the prediction that the mid level tone is the most confusable category not only in the perceptual tasks but also in the production test. The mid level tone was mostly mispronounced as a low-falling tone, but it was seldom misperceived as a low-falling tone. The difference in error patterns between the perception and production results highlights the different roles of perception and production in cognitive systems. The findings illustrate how changes in language, both in perception and in production, are relevant to language attrition, and suggest that production is a main cause of language change. The study also suggested effects of use frequency and phonetic similarity on language change.

(24) In Defense of Media Multitasking: No Increase in Task-Switch or Dual-Task Costs
Authors:  Reem Alzahabi (Michigan State University) , Mark Becker (Michigan State University)
Abstract: Extensive video game playing can increase one’s attentional control and visual skills (Green & Bavelier, 2003). By contrast, Ophir, Nass, and Wagner (2009) suggest that heavy media multitaskers have decreased attentional control, particularly task-switching ability. They note that this task-switching deficit is surprising, considering that media multitaskers switch between tasks on a regular basis. However, it is possible that media multitaskers do not consistently switch between tasks, but rather attempt to perform multiple tasks simultaneously. If so, they may show a deficit in task-switching performance because they must perform one, and only one, of two presented tasks. To investigate this issue, we used the Media Multitasking Index Questionnaire (Ophir, Nass, & Wagner, 2009) to identify heavy and light media multitaskers and then tested both their task-switching and dual-task performance. Participants performed a number-letter task (Rogers & Monsell, 1995), in which they were to classify a number as odd or even and a letter as a consonant or vowel. Each participant completed both a task-switch and a dual-task paradigm. The task-switch paradigm required switching between classifying the number and classifying the letter across trials. The dual-task paradigm required the classification of both the number and the letter in each trial. In contrast to Ophir, Nass, and Wagner’s (2009) findings, we found that heavy media multitaskers have a decreased switch cost compared to light media multitaskers; that is, they are able to switch between two tasks more efficiently. Furthermore, both groups showed comparable dual-task performance. These findings suggest that media multitasking does not interfere with attentional control and may even produce a better ability to switch between tasks.

(25) Attention, Predictability, and the Auditory ‘Oddball’ Effect in Perceived Duration
Authors:  Alan Wedd (Michigan State University) , Molly Henry (Michigan State University) , Devin McAuley (Michigan State University)
Abstract: When an unexpected (oddball) stimulus is presented within a series of otherwise identical auditory or visual (standard) stimuli, the duration of the oddball tends to be overestimated. Explanations of the oddball effect have proposed that subjective lengthening of the oddball is due to increased attention to the unexpected stimulus (Tse et al., 2004) or conversely, to decreased attention to the predictable stimulus (repetition suppression; Pariyadath & Eagleman, 2007). Critically, both explanations predict that the oddball duration should always be overestimated relative to the standard duration. The present study tested both possibilities by having listeners judge the duration of an oddball frequency sweep embedded in a nine-sweep series, where the oddball differed from the standard in terms of frequency velocity. The standard sweep always had a velocity equal to 1000 Hz/s and absolute duration of 500 ms; for one group of participants the oddball velocity was 500 Hz/s, and for another group of participants the oddball velocity was 1500 Hz/s. The serial position of the oddball varied randomly from trial to trial across positions 5, 6, 7, or 8 and took on duration values that ranged from 300 ms to 700 ms in 50-ms steps. Listeners judged whether the oddball duration was ‘shorter’ or ‘longer’ than the standard. When the oddball velocity was faster than the standard (1500 Hz/s), the oddball duration was overestimated. However, when the oddball velocity was slower than the standard (500 Hz/s), the oddball duration was underestimated. Results are inconsistent with both the enhanced attention and repetition suppression hypotheses, but rather support the view that duration judgments about auditory stimuli reflect systematic interactions between the pitch and time characteristics of the stimuli. Findings are more generally consistent with the view that intrinsic features of stimuli determine the nature of subjective duration distortions (van Wassenhove et al., 2008).
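As an illustration of how such ‘shorter’/‘longer’ judgments are typically analyzed, the sketch below fits a logistic psychometric function to the proportion of ‘longer’ responses across the 300-700 ms oddball durations and recovers the point of subjective equality (PSE); a PSE above 500 ms corresponds to underestimation of the oddball, below 500 ms to overestimation. The response proportions, grid ranges, and parameter values here are invented for the example, not taken from the study.

```python
import numpy as np

def fit_psychometric(durations, p_longer):
    """Grid-search fit of a logistic psychometric function
    p = 1 / (1 + exp(-(x - pse) / slope)); returns (pse, slope)."""
    best = (None, None, np.inf)
    for pse in np.arange(300, 701, 5):          # candidate PSEs (ms)
        for slope in np.arange(10, 201, 5):     # candidate slopes (ms)
            pred = 1.0 / (1.0 + np.exp(-(durations - pse) / slope))
            err = np.sum((pred - p_longer) ** 2)
            if err < best[2]:
                best = (pse, slope, err)
    return best[0], best[1]

# Hypothetical observer whose responses follow a logistic with PSE = 550 ms,
# evaluated at the nine oddball durations used in the study (300-700 ms).
durs = np.arange(300, 701, 50, dtype=float)
p = 1.0 / (1.0 + np.exp(-(durs - 550) / 60.0))
pse, slope = fit_psychometric(durs, p)
```

A PSE of about 550 ms for this hypothetical observer would mean the oddball must be 50 ms longer than the 500-ms standard to feel equal, i.e., its duration is underestimated.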

(26) Predicting a Failing Grade for the News Media: Cognitive Perceptions on Fairness, Influence, and Political Leaning
Authors:  Robin Blom (Michigan State University)
Abstract: A growing number of citizens argue that U.S. news media, in general, present a biased picture of reality. This could be a correct observation, but the scholarly literature on the hostile media effect (HME) phenomenon points out that those bias perceptions may not be accurate assessments of news media slant, but rather subjective judgments on whether the content is neutral or favors one side over others. This explains why people usually disagree on the actual direction of bias. Because the press is a vital component of democratic processes, it is important to get a better understanding of which personal perceptions and characteristics drive judgments of news media trust. This study analyzed data from the Post Election Survey series, gathered by the Pew Research Center for the People & the Press (1992/1996/2000/2004/2008), which combines the opinions of thousands of randomly selected U.S. citizens. Their responses were used to test a model that explains differences in press quality perceptions. The results indicate that the evaluation of journalism performance is directly related to (1) perceptions of whether one's viewpoint is supported, (2) perceived influence of news media outlets on the presidential election process, and (3) partisanship, whereas (4) selective exposure to a conservative news network was not. There is also one important trend noticeable in the data: the amount of predicted variance rose for each consecutive survey. Whereas in 1992 this was 28 percent, it had grown to 37 percent by 2004. For the last election cycle in particular, this predictive value rose drastically: the four independent variables explained more than half of the variance in perceived press quality (R² = .52). This indicates that cognitive perceptions regarding perceived fairness, perceived influence, and partisanship have played an increasingly important role over the past two decades in assessments of journalism performance.

(27) Investigating the Role of Automatic Processing in Shifts of Executive Control and the Initiation of Task-Unrelated Thought
Authors:  Melena Vinski (McMaster University) , Scott Watter (McMaster University)
Abstract: Mind wandering is a ubiquitous phenomenon. Empirically measured using the go/no-go Sustained Attention to Response Task, mind wandering is inferred from systematic fluctuations in no-go errors, reaction time (RT) on go trials, and self-report. In the current research we investigate how the recruitment of automatic processing influences the shift of executive control resources away from task supervision, and how this relationship influences both no-go error rates and the subjective experience of mind wandering. We manipulated the temporal predictability of the task by having participants complete both a random inter-stimulus interval (RI) condition and a fixed inter-stimulus interval (FI) condition. Results from Experiment 1 suggest that participants were less likely to engage in automatic response behaviours and made fewer no-go errors in the RI condition than in the FI condition, with RTs significantly predicting no-go error rates in the RI condition. In Experiment 2, task demands were reduced to attenuate the novelty of the RI condition. Results indicate that the difference observed between conditions in Experiment 1 was eliminated, suggesting that similar processes were recruited to govern task performance in the RI and FI conditions. If the recruitment of automatic processing were necessary for the shift of executive control processes to maintain off-task thought, participants would have to engage in an automatic response tendency independent of the stimulus presentation rate during the RI condition. In Experiment 3, uninterrupted task blocks were introduced to investigate the existence of an endogenous response tendency, with preliminary trial-by-trial response data supporting the hypothesis. These results therefore suggest that automatic response behaviours may be necessary for the shift from on-task to off-task thought.

(28) Working Memory in Domain General Search Strategies
Authors:  Don Zhang (Michigan State University) , Timothy Pleskac (Michigan State University)
Abstract: Exploration and search have been studied by various cognitive scientists as well as consumer researchers. Research has shown evidence for a domain-general search process; however, the role of working memory capacity in search behavior has not been explored. In the current study, we are interested in the cognitive processes that underlie search and whether there is a domain-general set of processes that controls internal, external, and information search behavior. In particular, we aim to identify the degree to which individuals draw on working memory to search effectively. In the study, participants completed three search tasks (an Internet search task, an anagram task, and a spatial foraging task) as well as two working memory tests. We devised a series of parameters to describe analogous search behavior across tasks and found a significant correlation between the total amounts of search in the anagram task and the spatial foraging task, indicating domain-general search behavior across radically different environments. We also found significant correlations between the aforementioned measures and individuals’ working memory capacity across all three tasks. Findings from Study 1 thus provide further evidence for the domain-general search process suggested by recent studies. In Study 2 (ongoing), we have systematically characterized the search environment across the different tasks in order to understand the role of working memory in search strategies. How we search in different situations depends heavily on feedback from the environment: if one search strategy is sub-optimal, the ability to shift to a better strategy is the sign of a “good searcher”. By characterizing the search environment, we can further understand the mechanism by which people change their search strategy based on feedback from the environment, and potentially explain the role of executive control in general search behavior.

(29) Brain Mechanisms of Response Conflict in Political Judgment
Authors:  Robin Blom (Michigan State University) , S. Austin Lee (Michigan State University) , Allison Eden (Michigan State University) , Allison Shaw (Michigan State University) , Miron Varouhakis (University of South Carolina) , Issidoros Sarinopoulos (Michigan State University)
Abstract: The importance of studying response conflict cannot be disputed: people are often confronted with situations in which multiple behavioral responses are possible. Here, conflict was induced by pitting the motivation to vote for one’s party candidate against the motivation to follow personal convictions. Right-handed partisans (20 Republicans, 20 Democrats) were recruited on the basis of political affiliation and issue positions to take part in a single fMRI session. Participants (Ps) are presented with candidates holding positions on issues that are consistent or inconsistent with the Ps’ declared positions. Trials end with a decision screen in which Ps indicate their likelihood of voting for candidates. Thus, there are two basic trial types. In no-conflict trials, the presented party affiliation and political issue position are consistent motivations and present predictably clear choices for Ps. By contrast, conflict trials present a party affiliation and issue position that are conflicting motivations and induce response conflict. Basic voxel-wise contrasts will be performed to identify brain regions associated with conflict processing. These analyses will be performed across political issue and social issue conditions, after controlling for economic issues. We predict that response conflict trials will show greater activation in domain-general cognitive control brain regions such as the ACC, right ventrolateral PFC, right middle frontal gyrus, and posterior dorsomedial PFC. There are also two subtypes of response conflict trials: party-conflict and issue-conflict trials. Calculating the difference in voting behavior between these two trial types will provide a behavioral index of party (versus issue) dominance for each participant. Using this behavioral index, we will assess the extent to which domain-general brain activation biases further processing in domain-specific regions in favor of either party affiliation or political issue.

(30) Evidence of Control: Sensory Congruencies and the Sense of Agency
Authors:  John Dewey (Michigan State University) , Thomas Carr (Michigan State University)
Abstract: The sense of agency is the subjective experience of willfully causing or generating actions. We studied the sense of agency in the context of controlling a moving object: how participants perceive their control over an object and whether it “obeyed” their commands, i.e., moved where it was supposed to move. Participants controlled a box by moving it in four directions with the arrow keys (U, D, L, R). Previously we found that performing task-irrelevant but semantically related vocalizations (e.g., saying "left") during this task influenced reported obey moves and feelings of control, depending on their congruency with the arrow keys and the moving object. We attributed this effect to source monitoring errors (Dewey & Carr, submitted). Here, we 1) replicate and generalize these findings in a modified task where motion of the object was only implied by a moving background; 2) fit the response times of agency judgments with the linear ballistic accumulator model, showing that vocalizations modulate drift rates (the rate of evidence accumulation); and 3) explore possible interactions among the effects already described and uncertainty (stimulus degradation), the locus of the distraction (spoken by the participant vs. voices played through headphones), and the identity of the box-controller (judging whether the box obeyed oneself vs. another). We find that people lose track of the sources of congruencies and incongruencies when actions performed in different modalities have similar referents, but that the source of these congruencies also makes a difference.
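For readers unfamiliar with the linear ballistic accumulator (LBA; Brown & Heathcote, 2008), the sketch below forward-simulates a two-accumulator race: each accumulator starts at a uniformly sampled point, rises linearly at a normally sampled drift rate, and the first to reach threshold determines the response and the RT. All parameter values here are illustrative assumptions, not the fitted values from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lba(v_means, A=0.5, b=1.0, s=0.25, t0=0.2, n=5000):
    """Simulate n trials of a race between len(v_means) LBA accumulators.

    Each accumulator starts at Uniform(0, A), rises linearly with a drift
    sampled from Normal(v, s), and the first to hit threshold b wins.
    Returns the winning accumulator index and the RT for each trial."""
    starts = rng.uniform(0, A, size=(n, len(v_means)))
    drifts = rng.normal(v_means, s, size=(n, len(v_means)))
    # An accumulator that draws a non-positive drift never finishes.
    times = np.where(drifts > 0, (b - starts) / drifts, np.inf) + t0
    return times.argmin(axis=1), times.min(axis=1)

# Illustrative drifts: a favored response (1.0) vs. a weaker one (0.6).
choices, rts = simulate_lba([1.0, 0.6])
```

Raising one accumulator's mean drift makes its response both more frequent and faster, which is how a drift-rate modulation by vocalizations would surface in choice and RT data.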

(31) Nature of Expertise in the Geosciences: A Working Memory Study
Authors:  Sheldon Turner (Michigan State University)
Abstract: The emerging field of Geocognition seeks to take techniques and knowledge from cognitive science and apply them to the Earth Sciences: it examines how humans perceive, integrate, and make decisions that affect and are affected by the natural world. As a new field, Geocognition currently lacks the many standardized instruments available in more mature sciences for testing hypotheses and providing valid and reliable results. This research attempts to create such an instrument by combining and adapting new technologies with methodologies developed in cognitive science. Based on chess and other working memory experiments designed to understand expertise, we have created a new tool called the Block Diagram Memory Test (BDMT) that gives researchers the ability to test novices and experts alike on their geologic working memory. This information can then be compared with spatial abilities and other psychometrics. The BDMT is also designed for use on a Tablet PC, which allows researchers to go to the participants in a more authentic setting than a lab. Over two years, in both the lab and the field, we have collected data from nearly 100 participants. The results of this work are presented here as sketches and video, as well as statistical comparisons with the other psychological data collected. Numerous assumptions exist about the nature of expertise in the geosciences; with this new tool we may now begin to confirm or dismiss some of them. In addition, the complex contextual setting of expertise research lends new insights to cognitive scientists to better understand human cognition as a whole.

(32) Lateralized Distractor Effects on Contingent Attentional Capture
Authors:  Kirk Harrison (Michigan State University) , Susan Ravizza (Michigan State University) , Mark Becker (Michigan State University)
Abstract: Two functional networks exist to control visuospatial attention. The dorsal frontoparietal network is thought to set and maintain attention based on goals and expectations, whereas the ventral frontoparietal network reorients attention to unexpected items, especially when they are task-relevant. The purpose of this study was to assess how task relevance is set in the ventral parietal system. The dorsal network may set task relevance in a top-down fashion by directly communicating this information to the ventral parietal system. Alternatively, task relevance may be set in the ventral parietal system in a bottom-up fashion because of the greater responsiveness of sensory areas of the brain to task-relevant stimuli. Using a modified attentional blink paradigm, we observed the effect of distractor items presented in either the same or the opposite hemifield as the target. Support for the idea that task relevance is set indirectly through the modulation of sensory areas would be found if the attentional blink were larger for distractor items in the same hemifield than for those in the opposite hemifield. In this experiment, participants monitored a visual stream for letters of a pre-specified target color. At the same time, singleton distractors could appear in the same hemifield or the opposite hemifield (corresponding or diagonal location) as the target. The color of these distractors was either task-relevant (same color as the target), irrelevant (a different color), or neutral (not a singleton color). Distractors were presented at lags of 0, 1, or 2 items before the target appeared. The attentional blink was largest for relevant distractors, but did not differ by hemifield at corresponding locations. However, the attentional blink was smaller for relevant distractors presented diagonally in the opposite hemifield. Thus, we have some evidence that sensory areas may signal task relevance to the ventral parietal system.

(33) False Memory for Sentences
Authors:  Stephen Smith (The Ohio State University) , Simon Dennis (The Ohio State University)
Abstract: When people are asked the question “What do cows drink?”, many respond with 'milk' even though the correct answer is 'water'. One possible explanation for this phenomenon is that the word 'milk' is associated with two of the words in the question, 'cows' and 'drink', causing it to be activated as a likely response. We wanted to see if we could create these types of association errors in the laboratory. A recognition experiment was performed to test participants' memory for different kinds of sentences. Two groups of study sentences were constructed. One group was based around associating pairs of words in the study sentences that would also appear in the distractor test sentences. The other group's sentences expressed similar propositions to the first group's, but did not contain associated pairs of words. It was found that participants who read sentences containing associated pairs of words were significantly more likely to falsely endorse distractor sentences as having appeared before than participants who read sentences without associated word pairs (false alarm rate of .49 compared to .39).

(34) Geological Working Memory in Time
Authors:  Thomas Singer (Michigan State University) , Emily Ward (Michigan State University) , Julie Libarkin (Michigan State University)
Abstract: The influence of expertise on the ability to remember previously presented information has rarely been disputed since the seminal work of Chase and Simon (1973). These effects are often attributed to more efficient encoding of the input, enabled by the additional domain-specific knowledge possessed by experts in a particular field, which allows for top-down processing. This may be due to collapsing multiple pieces of information into one representation in memory (chunking) (Reingold et al., 2001), to making the process of retrieving information from long-term memory more efficient (Ricks & Wiley, 2009), or to making diagnostic features of a stimulus more salient than non-diagnostic features (Wagar & Dixon, 2005). In this study, we investigate the influence of expertise on visual working memory using a previously unused stimulus set, geological block diagrams, which has the capacity to extend evidence of expertise effects in visual working memory to stimuli representative of real-world phenomena. We found that the correlation between a measure of geologic expertise (GCI score) and the ability to accurately recreate a block diagram from memory was significant only when the block represented geologic features, supporting and extending evidence for expertise effects in visual working memory. Due to the nature of the stimuli used in this study, our findings have practical relevance to the fields of both geology and education.

(35) Second Language Learners’ Sensitivity to Grammatical Cues
Authors:  Susan Gundersen (University of Notre Dame) , Kathleen Eberhard (University of Notre Dame)
Abstract: This study investigated adult native English speakers' acquisition of Italian (L2) grammar via enrollment in college courses. Predictions were derived from the Competition Model (Bates & MacWhinney, 1982; 1987; 1989), according to which subject-verb agreement (SVA) is a vertical cue, reflecting correlations between word form and meaning (e.g., subject noun - agent role). Noun-adjective agreement (NAA) is a horizontal cue because it reflects correlations solely between word forms. Both SVA and NAA are frequent cues in Italian but not in English. SVA should be acquired by English learners of Italian sooner than NAA because of its vertical mapping. However, constructions in which SVA has an irregular mapping should be difficult to acquire. For example, the Italian verb piacere (to like) associates the subject with the theme role and the object with the experiencer role, e.g., Mi piacciono i biscotti = to me are pleasing the cookies, which is also irregular with respect to the English translation I like the cookies. Despite the frequency of piacere, which is introduced early in the course curriculum, it may show a U-shaped learning trajectory. The predictions were tested with a grammaticality judgment task that was administered in the middle and at the end of a semester to beginner (N = 21) and advanced (N = 6) Italian L2 learners. The task measured the learners' sensitivity to agreement violations in spoken sentences representing the three conditions (SVA, irregular-SVA, and NAA). Sensitivity was measured by calculating A' scores. As expected, the advanced learners' A' scores were higher than the beginners' in all three conditions. Both groups had higher A' scores in the SVA condition than in the other two conditions, and A' scores in the NAA condition were higher than in the irregular-SVA condition. The irregular-SVA condition shows some evidence of a U-shaped learning trajectory. The implications of these results for the Competition Model, as well as effects of other general learning factors, will be discussed.
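The A' score mentioned above is a standard non-parametric sensitivity index computed from the hit rate (H) and false-alarm rate (F), with 0.5 indicating chance performance and 1.0 perfect discrimination (Pollack & Norman, 1964; Grier, 1971). A minimal implementation, with made-up example rates rather than the study's data:

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity A' (Grier, 1971).

    For H >= F:  A' = 0.5 + (H - F)(1 + H - F) / (4H(1 - F)).
    0.5 means chance performance; 1.0 means perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h < f:                # symmetric form for below-chance observers
        return 1.0 - a_prime(f, h)
    if h == f:
        return 0.5
    return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))

# Made-up example: high hit rate, low false-alarm rate on violation trials.
sensitivity = a_prime(0.9, 0.2)
```

A listener who endorses violations and non-violations at equal rates (H = F) scores exactly 0.5, so higher A' scores for the advanced group reflect genuinely better discrimination of agreement violations, not just a more liberal response criterion.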

(36) Role of Diagrams on a Standardized Earth Science Assessment
Authors:  Nicole LaDue (Michigan State University)
Abstract: Diagrams are used in science to visually organize data (i.e., charts and tables), spatially represent data (i.e., graphs and maps), or depict objects and processes (i.e., figures and sketches). Visual and spatial communication is essential for experts and is therefore important for science education. Diagrams are useful in educational contexts because they constrain the cognitive activities recruited for problem solving (Stenning & Oberlander, 1995) and provide offloading for working memory (Bauer & Johnson-Laird, 1993). Cognitive and learning scientists have studied the role of perception (Gibson, 1979; Rapp, 2008), spatial analogy (Larkin & Simon, 1987; Scaife, 1996), and attention (Hegarty et al., 2010; Sanchez & Wiley, 2006; Schwonke, Berthold, & Renkl, 2009) in diagram comprehension. A persistent question in the literature concerns the relative importance of prior content knowledge versus diagram reading skills when assessing students’ ability to solve problems containing diagrams (Zhang, 1997; Scaife, 1996). Despite this focus on diagrams in the literature, the role of diagrams in learning and assessment is poorly understood (Crisp & Sweiry, 2006; Schwartz & Heiser, 2006). To begin examining the role of diagrams in science assessments, we analyzed a decade of New York State Earth Science Regents exam items for content, diagram style, and cognitive task. This exam is particularly interesting because approximately 85% of the questions require some type of external representation to answer correctly, and 165,000 students in New York State take the exam annually. This preliminary study suggests that training of specific diagram reading skills is required for strong performance on this high-stakes exam. This content analysis lays the groundwork for an empirical study of the role of individual differences in verbal and spatial abilities in predicting success on questions containing diagrams that require specific cognitive tasks.

(37) Does Spatial Training Improve Children's Mathematics Ability?
Authors:  Yiling Cheng (Michigan State University) , Kelly Mix (Michigan State University)
Abstract: Previous research has established a link between spatial ability and mathematics: both children and adults with better spatial abilities also have higher math scores (Burnett, Lane, & Dratt, 1979; Casey, Nuttall, & Pezaris, 2001; Delgado & Prieto, 2004; Geary et al., 2000). In particular, mental rotation ability appears predictive of mathematics performance (Casey et al., 2001; Kyttala et al., 2003; Reukala, 2001). The present study tested whether training in mental rotation would have a positive effect on early mathematics learning. To date, 46 children have completed the study (M = 7.1 years, range = 6 to 8.8 years). We are using a pretest-training-posttest design: both the spatial training group and a control group completed tests of spatial ability and simple mathematics before and after training. Children in the spatial training group significantly improved on the spatial ability test, F(1, 43) = 11.59, p = .001, η² = .21. This confirms that the spatial training procedure, albeit brief, was sufficient to improve spatial ability. Our preliminary results (n = 46) also indicate that this improvement leads to improved math scores, but only for certain problems. Spatial training children significantly outperformed control children on addition problems (F(1, 43) = 24.15, p < .001, η² = .36); specifically, they performed better on problems with missing addends (e.g., 21 + ____ = 28) (F(1, 43) = 17.51, p < .001, η² = .29). They also significantly improved from pretest to posttest on addition problems (t(22) = 3.226, p = .004) and on missing addend problems (t(22) = 2.938, p = .008). Children showed the greatest improvement on addition and, more specifically, missing addend problems. This finding suggests that children use spatial processes to solve addition problems and thus that improvement in mental rotation leads to immediate improvement in problem solving.
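For a two-group comparison like this, each reported effect size can be recovered directly from its F statistic and degrees of freedom, since partial η² = F·df₁ / (F·df₁ + df₂). A quick check of the values reported in the abstract:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic:
    eta^2 = (F * df1) / (F * df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Effect sizes reported in the abstract, recomputed from F(1, 43):
spatial  = partial_eta_squared(11.59, 1, 43)  # spatial ability gain
addition = partial_eta_squared(24.15, 1, 43)  # addition problems
missing  = partial_eta_squared(17.51, 1, 43)  # missing-addend problems
```

Each value rounds to the η² reported above (.21, .36, and .29), confirming that the statistics are internally consistent.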

(38) Observing Hand Gestures in the Classroom Helps Students Learn Math
Authors:  Ryan Duffy (Michigan State University) , Susan Wagner Cook (University of Iowa) , Kimberly Fenn (Michigan State University)
Abstract: Recent research has found that children who produce hand gestures while learning math are more likely to score higher on a test given immediately after training (Cook & Goldin-Meadow, 2006). Furthermore, after instruction with gesture, children show greater maintenance of mathematical knowledge than children who learned without gesture, when tested several weeks later (Cook, Mitchell, & Goldin-Meadow, 2007). However, previous work has not determined whether similar benefits can be obtained when children simply observe their instructor gesture and do not produce gestures themselves. Furthermore, prior studies have all used individualized instruction; thus the benefit of gesture on a classroom-wide level remains unresolved. To address these questions, we trained third graders in their natural classroom setting on mathematical equivalence problems with addition (i.e. 4 + 6 + 7 = 4 + ____). Six short instructional videos were presented. For half of the classrooms, the instructor in the videos gestured while she explained the problems, and for the other half, she did not gesture. Students were tested on novel equivalence problems immediately after training and after a 24-hour retention interval. At the 24-hour test, we also tested their ability to generalize their knowledge to equivalence problems in multiplication. We found that gesture significantly improved equivalence learning. Students who saw gesture during instruction performed better on the immediate and 24-hour tests than students who did not see gesture during training. Furthermore, students who saw gesture were also better able to transfer their knowledge to equivalence problems in multiplication at the delayed test. These findings have strong implications for math education. Gestures that are produced at the front of the classroom may facilitate mathematical learning in children and create a more lasting memory representation that persists long after the lesson has ended.

(39) Individual Differences in Cognitive Ability Predict Sleep-dependent Consolidation of Declarative Memory
Authors:  Jennifer Van Loon (Michigan State University) , David Fried (Michigan State University) , Kimberly Tweedale (Michigan State University) , David Hambrick (Michigan State University) , Kimberly Fenn (Michigan State University)
Abstract: While many functions of sleep remain largely unknown, there is evidence for a beneficial role of sleep in the consolidation of declarative memories. In particular, several studies have shown that recall of paired associates is higher after sleep than before sleep. Nevertheless, few studies have investigated individual differences in sleep-dependent consolidation. Thus, the goal of the present study was to test for correlations between individual differences in cognitive ability and declarative memory consolidation during sleep. Across two experiments, participants learned 40 word pairs and returned for a cued recall test after a 12-hour retention interval that either spanned a period of wakefulness or sleep. Participants also completed tests of cognitive abilities, including working memory capacity (WMC) in Experiment 1, and WMC, fluid intelligence (Gf), crystallized intelligence (Gc), and verbal fluency in Experiment 2. In both experiments, there was evidence for memory consolidation during sleep; recall performance improved significantly more after sleep than after a waking retention interval. Furthermore, memory improvement after sleep was positively predicted by WMC (Experiment 1) and by a general cognitive ability factor, defined by WMC and reasoning, and by verbal fluency (Experiment 2). None of these individual difference measures were related to baseline recall performance in Session 1, suggesting that the correlations were specific to change in performance after sleep. Finally, in both experiments, no significant correlations were found between any of these factors and change in recall performance in the Wake condition. These findings suggest that variance in cognitive abilities may in part reflect differences in storage and consolidation of new memories during sleep.

(40) The Effect of Sleep Deprivation on Memory Susceptibility to Misleading Information
Authors:  Holly Lewis (Michigan State University) , Steven Frenda (Univ of California, Irvine) , Elizabeth Loftus (Univ of California, Irvine) , Kimberly Fenn (Michigan State University)
Abstract: After a memory is encoded, misleading information may severely alter it, creating a false memory. Several studies have explored factors that affect susceptibility to misleading information, but the role of sleep and sleep deprivation in this process has yet to be investigated. It is, however, well-established that sleep deprivation has severe consequences for cognitive function. In particular, sleep deprivation reduces working memory capacity and the ability to acquire information, likely due to a reduction in frontal lobe function. Furthermore, sleep deprivation has been found to increase false recall in the DRM paradigm (Diekelmann et al., 2008). In the present study, the effect of sleep deprivation on memory malleability and reconstruction was investigated using the misinformation paradigm. Participants arrived in the laboratory at 22:30; half were permitted to sleep for 8 hours and the other half remained awake all night in the laboratory. At 9:00am, participants completed all three experimental phases: encoding, misinformation, and test. During encoding, participants were shown a series of pictures that depicted a story. Next, they completed an unrelated math learning task and then proceeded to the misinformation phase, in which they read sentences describing the story shown in the pictures. However, misleading information concerning the pictures had been subtly added to the sentences. Finally, participants were given a multiple choice test regarding the events in the story and were asked to respond based solely on their memory for the pictures. Results suggest that sleep-deprived participants showed lower rates of correct memory and higher rates of false memory for suggested information than participants who had slept. This suggests that sleep deprivation can increase susceptibility to false memory. These results have implications for eyewitness testimony and for test performance of sleep-deprived students who are presented with trick questions.

(41) Memory and Decision-Making: Determining Action When the Sirens Sound
Authors:  Robert Drost (Michigan State University)
Abstract: Memory’s role in risk situations is pivotal to determining potential outcomes. Specifically, semantic and episodic memories play an important part in an individual’s decision-making under risk. Social, demographic, and policy variables are also important during decision-making, and together with memories, form the basis of planned action. Semantic knowledge is typically thought of as information about the world, learned through reading, media, schooling, and other secondary experiences. Episodic memories are actual life experiences, and are often connected to specific affective imagery associated with events. In this study, forty-nine undergraduate students were faced with a decision-making task both before and after viewing a 5-minute slide show of tornadoes and related damage. The experimental population averaged 22 years of age, consisted exclusively of non-science majors, included both males (n=21) and females (n=28) at varying academic ranks, and contained 23 participants who reported having experienced a tornado. Initially, those participants with episodic experiences exhibited a marginally higher tendency to react to a tornado warning than those participants with only semantic knowledge. Viewing the slide show moved both the semantic and episodic groups toward more prudent decision-making, with the semantic group exhibiting the largest gain. This paper sheds light on the impact of memories and semantic experience on decision-making during a risk situation. This study suggests new avenues for the development of warnings intended to induce caution.

(42) The Effect of Comparison and Similarity Between Sequentially Presented Stimuli in Category Learning
Authors:  Paulo Carvalho (Indiana University) , Robert Goldstone (Indiana University)
Abstract: It has been previously demonstrated that alternating the presentation of exemplars from two categories improves category learning, compared to presenting exemplars of each category in separate blocks. However, the underlying mechanisms for this advantage are not completely understood. Remaining issues pertain to the relevance of within- and between-category similarities, and the role of comparing sequentially presented objects. We present three experiments that address these questions. In Experiments 1 and 2, within- and between-category similarity are manipulated simultaneously with presentation schedule, in both a between-subjects design (Experiment 1) and a within-subjects design (Experiment 2). In Experiment 3, alternated presentation of two categories is compared to two blocked presentation conditions: one in which only similar stimuli from the same category are presented successively and another in which these successive same-category stimuli are dissimilar. Our results show a clear overall advantage of low similarity in categorization performance, suggesting that the beneficial effects of low between-category similarity more than compensate for the disadvantageous effects of low within-category similarity. No effect of presentation schedule was found in either the within- or between-subjects design. Moreover, between-category similarity seems to be the best overall predictor of categorization performance. Also, alternation between categories that share high similarity is shown to result in better performance than the condition in which only stimuli with lower within-category similarity are successively presented. These results point to the creation of interrelated concepts and the representation of each category in terms of its opposition to the other.

(43) What are the Boundary Conditions of Differentiation in Episodic Memory?
Authors:  Adam Osth (The Ohio State University) , Simon Dennis (The Ohio State University)
Abstract: One of the critical findings in recognition memory is the null list-strength effect (LSE), which states that strengthening items through extra study time or extra repetitions does not hurt the performance of other studied items. Episodic memory models were able to predict the null LSE by using the principle of differentiation, which states that repetitions of a single item accumulate into a single strong memory trace. A hypothesized boundary of the differentiation process is that repetitions of a single item in different contexts will create new traces. Two experiments were conducted that tested this hypothesis by repeating words across different study-test cycles rather than within a single study-test cycle and subsequently testing all the lists with an inclusion instruction. Results indicated that as the proportion of strong items increased, there was both a null LSE and a non-significant decrease in the false alarm rate (FAR), which is contrary to the predicted strength-based mirror effect. These two results in tandem provide a challenge for differentiation models.