Sample records for models typically fail

  1. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

    ERIC Educational Resources Information Center

    Morse, Anthony F.; Cangelosi, Angelo

    2017-01-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between…

  2. Coulomb displacement energies of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherr, R.; Bertsch, G.

    The Bansal–French–Zamick model is quite successful in accounting for the Coulomb displacement energies of excited particle–hole states in a variety of light nuclei. Level shifts are typically reproduced to within 50 keV. However, the model fails for certain excited 0⁺ states, and this remains a puzzle. (AIP)

  3. What Different Kinds of Stratification Can Reveal about the Generalizability of Data-Mined Skill Assessment Models

    ERIC Educational Resources Information Center

    Sao Pedro, Michael A.; Baker, Ryan S. J. d.; Gobert, Janice D.

    2013-01-01

    When validating assessment models built with data mining, generalization is typically tested at the student-level, where models are tested on new students. This approach, though, may fail to find cases where model performance suffers if other aspects of those cases relevant to prediction are not well represented. We explore this here by testing if…

  4. Comparing Construct Definition in the Angoff and Objective Standard Setting Models: Playing in a House of Cards without a Full Deck

    ERIC Educational Resources Information Center

    Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A.

    2011-01-01

    Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…

  5. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions

    PubMed Central

    Momeni, Babak; Xie, Li; Shou, Wenying

    2017-01-01

    Pairwise models are commonly used to describe many-species communities. In these models, an individual receives additive fitness effects from pairwise interactions with each species in the community ('additivity assumption'). All pairwise interactions are typically represented by a single equation where parameters reflect signs and strengths of fitness effects ('universality assumption'). Here, we show that a single equation fails to qualitatively capture diverse pairwise microbial interactions. We build mechanistic reference models for two microbial species engaging in commonly-found chemical-mediated interactions, and attempt to derive pairwise models. Different equations are appropriate depending on whether a mediator is consumable or reusable, whether an interaction is mediated by one or more mediators, and sometimes even on quantitative details of the community (e.g. relative fitness of the two species, initial conditions). Our results, combined with potential violation of the additivity assumption in many-species communities, suggest that pairwise modeling will often fail to predict microbial dynamics. DOI: http://dx.doi.org/10.7554/eLife.25051.001 PMID:28350295
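    As a concrete illustration of the "single equation" the abstract critiques, the standard pairwise Lotka-Volterra form can be sketched as follows (a minimal forward-Euler simulation; the parameter values are illustrative, not the authors' fitted reference models):

```python
# A minimal sketch of the standard pairwise Lotka-Volterra form:
#   dN_i/dt = N_i * (r_i + sum_j a[i][j] * N_j)
# integrated with forward Euler. All parameter values are illustrative.

def pairwise_lv_step(n, r, a, dt):
    """One forward-Euler step for the abundance vector n."""
    return [
        max(0.0, n[i] + dt * n[i] * (r[i] + sum(a[i][j] * n[j] for j in range(len(n)))))
        for i in range(len(n))
    ]

def simulate(n0, r, a, dt=0.01, steps=5000):
    n = list(n0)
    for _ in range(steps):
        n = pairwise_lv_step(n, r, a, dt)
    return n

# Two mutualists with self-limitation: a[i][j] is the sign and strength of
# the fitness effect of species j on species i.
r = [0.5, 0.3]
a = [[-1.0, 0.4],
     [0.6, -1.0]]
final = simulate([0.1, 0.1], r, a)  # approaches the coexistence equilibrium
```

    The point of the abstract is that one such equation, whatever its parameters, cannot qualitatively capture consumable vs. reusable chemical mediators; the sketch only shows the form being critiqued.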

  6. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon board of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

  7. When modularization fails to occur: a developmental perspective.

    PubMed

    D'Souza, Dean; Karmiloff-Smith, Annette

    2011-05-01

    We argue that models of adult cognition defined in terms of independently functioning modules cannot be applied to development, whether typical or atypical. The infant brain starts out highly interconnected, and it is only over developmental time that neural networks become increasingly specialized, that is, relatively modularized. In the case of atypical development, even when behavioural scores fall within the normal range, they are frequently underpinned by different cognitive and neural processes. In other words, in neurodevelopmental disorders the gradual process of relative modularization may fail to occur.

  8. Failed Supernovae Explain the Compact Remnant Mass Function

    NASA Astrophysics Data System (ADS)

    Kochanek, C. S.

    2014-04-01

    One explanation for the absence of higher mass red supergiants (16.5 M☉ ≲ M ≲ 25 M☉) as the progenitors of Type IIP supernovae (SNe) is that they die in failed SNe creating black holes. Simulations show that such failed SNe still eject their hydrogen envelopes in a weak transient, leaving a black hole with the mass of the star's helium core (5–8 M☉). Here we show that this naturally explains the typical masses of observed black holes and the gap between neutron star and black hole masses without any fine-tuning of stellar mass loss, binary mass transfer, or the SN mechanism, beyond having it fail in a mass range where many progenitor models have density structures that make the explosions more likely to fail. There is no difficulty including this ~20% population of failed SNe in any accounting of SN types over the progenitor mass function. And, other than patience, there is no observational barrier to either detecting these black hole formation events or limiting their rates to be well below this prediction.

  9. Creative Behavior, Motivation, Environment and Culture: The Building of a Systems Model

    ERIC Educational Resources Information Center

    Hennessey, Beth A.

    2015-01-01

    With the exception of research examining the productivity of teams, the empirical study of creativity was until recently almost exclusively focused at the level of the individual creator. Investigators and theorists typically chose to decontextualize the creative process and failed to include a consideration of anyone or anything beyond the person…

  10. Automatic conversational scene analysis in children with Asperger syndrome/high-functioning autism and typically developing peers.

    PubMed

    Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco

    2014-01-01

    Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior.
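    The first modeling step described above, coding speech/silence sequences as a Markov process, can be sketched as follows (a simplified single-agent version; the authors' observed influence models couple multiple such processes across interactants):

```python
import random

# A simplified sketch (not the authors' observed influence models): code a
# speech/silence sequence as a two-state Markov process (0 = silence,
# 1 = speech) and estimate its transition probabilities by counting.

def transition_matrix(states):
    """Maximum-likelihood 2x2 transition matrix from a 0/1 state sequence."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / max(1, sum(row)) for c in row] for row in counts]

def sample_sequence(p_stay, length, rng):
    """Generate a synthetic speech/silence sequence with given stay-probabilities."""
    s, seq = 0, [0]
    for _ in range(length - 1):
        s = s if rng.random() < p_stay[s] else 1 - s
        seq.append(s)
    return seq

rng = random.Random(0)
seq = sample_sequence({0: 0.9, 1: 0.7}, 20000, rng)
t = transition_matrix(seq)  # t[0][0] ≈ 0.9, t[1][1] ≈ 0.7
```

    Per-dialog parameters like these (one coupled model per conversation) are what the discriminative classifiers in the study operate on.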

  11. Methodological Measurement Fruitfulness of Exploratory Structural Equation Modeling (ESEM): New Approaches to Key Substantive Issues in Motivation and Engagement

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Liem, Gregory Arief D.; Martin, Andrew J.; Morin, Alexandre J. S.; Nagengast, Benjamin

    2011-01-01

    The most popular measures of multidimensional constructs typically fail to meet standards of good measurement: goodness of fit, measurement invariance, lack of differential item functioning, and well-differentiated factors that are not so highly correlated as to detract from their discriminant validity. Part of the problem, the authors argue, is…

  12. Chronic sublethal stress causes bee colony failure.

    PubMed

    Bryden, John; Gill, Richard J; Mitton, Robert A A; Raine, Nigel E; Jansen, Vincent A A

    2013-12-01

    Current bee population declines and colony failures are well documented yet poorly understood and no single factor has been identified as a leading cause. The evidence is equivocal and puzzling: for instance, many pathogens and parasites can be found in both failing and surviving colonies and field pesticide exposure is typically sublethal. Here, we investigate how these results can be due to sublethal stress impairing colony function. We mathematically modelled stress on individual bees which impairs colony function and found how positive density dependence can cause multiple dynamic outcomes: some colonies fail while others thrive. We then exposed bumblebee colonies to sublethal levels of a neonicotinoid pesticide. The dynamics of colony failure, which we observed, were most accurately described by our model. We argue that our model can explain the enigmatic aspects of bee colony failures, highlighting an important role for sublethal stress in colony declines. © 2013 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
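    A minimal sketch of how positive density dependence can produce the "some colonies fail while others thrive" outcome, using a generic Allee-effect equation rather than the paper's actual bee model:

```python
# An illustrative Allee-effect model (not the paper's equations) of positive
# density dependence in a colony:
#   dN/dt = r * N * (N/A - 1) * (1 - N/K)
# Colonies starting below the threshold A decline to failure; colonies
# starting above it grow toward the carrying capacity K.

def simulate_colony(n0, r=0.1, allee=100.0, k=1000.0, dt=0.1, steps=5000):
    n = n0
    for _ in range(steps):
        n = max(0.0, n + dt * r * n * (n / allee - 1.0) * (1.0 - n / k))
    return n

failing = simulate_colony(80.0)    # starts below the Allee threshold A
thriving = simulate_colony(150.0)  # starts above it
```

    Sublethal stress can be thought of as raising the threshold A, so that colonies which would otherwise have thrived now sit below it and collapse.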

  13. Chronic sublethal stress causes bee colony failure

    PubMed Central

    Bryden, John; Gill, Richard J; Mitton, Robert A A; Raine, Nigel E; Jansen, Vincent A A; Hodgson, David

    2013-01-01

    Current bee population declines and colony failures are well documented yet poorly understood and no single factor has been identified as a leading cause. The evidence is equivocal and puzzling: for instance, many pathogens and parasites can be found in both failing and surviving colonies and field pesticide exposure is typically sublethal. Here, we investigate how these results can be due to sublethal stress impairing colony function. We mathematically modelled stress on individual bees which impairs colony function and found how positive density dependence can cause multiple dynamic outcomes: some colonies fail while others thrive. We then exposed bumblebee colonies to sublethal levels of a neonicotinoid pesticide. The dynamics of colony failure, which we observed, were most accurately described by our model. We argue that our model can explain the enigmatic aspects of bee colony failures, highlighting an important role for sublethal stress in colony declines. PMID:24112478

  14. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Song-Hua Shen; Gary DeMoss

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common-cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modeling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.

  15. The Core-Collapse Supernova-Black Hole Connection

    NASA Astrophysics Data System (ADS)

    O'Connor, Evan

    The death of a massive star is typically associated with a bright optical transient known as a core-collapse supernova. However, there is growing evidence that not all massive stars end their lives with a brilliant optical display, but rather in a whimper. These failed supernovae, or unnovae, result from the central engine failing to turn the initial implosion of the iron core into an explosion that launches the supernova shock wave, unbinds the majority of the star, and creates the supernova as we know it. In these unnovae, the failure of the central engine is soon followed by the collapse of the would-be neutron star into a stellar mass black hole. Instead of the bright optical display following successful supernovae, little to no optical emission is expected from typical failed supernovae as most of the material quietly accretes onto the black hole. This makes the hunt for failed supernovae difficult. In this chapter for the Handbook of Supernovae, I present the growing observational evidence for failed supernovae and discuss the current theoretical understanding of how and in what stars the supernova central engine fails.

  16. Automatic Conversational Scene Analysis in Children with Asperger Syndrome/High-Functioning Autism and Typically Developing Peers

    PubMed Central

    Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco

    2014-01-01

    Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior. PMID:24489674

  17. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
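    The univariate zero-inflated Poisson model that the abstract builds on can be sketched as follows (illustrative parameters; the paper's semiparametric bivariate extension is not reproduced here):

```python
import math
import random

# The univariate zero-inflated Poisson (ZIP) building block: with
# probability pi a count is a structural zero, otherwise Poisson(lam).
#   P(Y=0) = pi + (1 - pi) * exp(-lam)
#   P(Y=y) = (1 - pi) * exp(-lam) * lam**y / y!   for y >= 1

def zip_pmf(y, pi, lam):
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

def zip_sample(pi, lam, rng):
    """Draw one ZIP variate (Poisson part sampled by cdf inversion)."""
    if rng.random() < pi:
        return 0
    u, y = rng.random(), 0
    p = math.exp(-lam)
    cdf = p
    while u > cdf:
        y += 1
        p *= lam / y
        cdf += p
    return y

rng = random.Random(1)
draws = [zip_sample(0.4, 3.0, rng) for _ in range(20000)]
zero_frac = draws.count(0) / len(draws)  # ≈ P(Y=0) = 0.4 + 0.6*exp(-3)
```

    The excess-zero fraction pi is exactly what a plain Poisson regression cannot absorb, which is why the zero-inflated class is the usual starting point for such data.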

  18. Infill Walls Contribution on the Progressive Collapse Resistance of a Typical Mid-rise RC Framed Building

    NASA Astrophysics Data System (ADS)

    Besoiu, Teodora; Popa, Anca

    2017-10-01

    This study investigates the effect of autoclaved aerated concrete infill walls on the progressive collapse resistance of a typical RC framed structure. The 13-storey building located in Brăila (a zone with high seismic risk in Romania) was designed according to the former Romanian seismic code P13-70 (1970). Two models of the structure are generated in the Extreme Loading® for Structures computer software: a model with infill walls and a model without infill walls. Following the GSA (2003) Guidelines, a nonlinear dynamic procedure is used to determine the progressive collapse risk of the building when a first-storey corner column is suddenly removed. It was found that the structure is not expected to fail under the standard GSA loading: DL+0.25LL. Moreover, if the infill walls are introduced in the model, the maximum vertical displacement of the node above the removed column is reduced by about 48%.

  19. Predictive Rate-Distortion for Infinite-Order Markov Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2016-06-01

    Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.

  20. How Autism Affects Speech Understanding in Multitalker Environments

    DTIC Science & Technology

    2013-10-01

    difficult than will typically developing children. Knowing whether toddlers with ASD have difficulties processing speech in the presence of acoustic...to separate the speech of different talkers than do their typically developing peers. We also predict that they will fail to exploit visual cues on...learn language from many settings in which children are typically placed. In addition, one of the cues that typically developing listeners use to

  1. Evolutionary dynamics with fluctuating population sizes and strong mutualism.

    PubMed

    Chotibut, Thiparat; Nelson, David R

    2015-08-01

    Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.
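    The fixed-population baseline that the abstract relaxes can be illustrated with a minimal Moran-process sketch (not the authors' model): with a strictly fixed population size N, a single neutral mutant fixes with probability 1/N.

```python
import random

# A minimal Moran-process sketch of the strictly fixed population size that
# the abstract relaxes. Each step one individual reproduces (weighted by
# fitness) and one dies (chosen uniformly), so N never changes. For a
# neutral mutant, the fixation probability starting from i0 copies is i0/N.

def moran_fixation(n_pop, n_mutants, rng, fitness_mutant=1.0):
    """Run one Moran process to absorption; return True if the mutant fixes."""
    i = n_mutants
    while 0 < i < n_pop:
        p_birth = fitness_mutant * i / (fitness_mutant * i + (n_pop - i))
        birth_is_mutant = rng.random() < p_birth
        death_is_mutant = rng.random() < i / n_pop
        i += int(birth_is_mutant) - int(death_is_mutant)
    return i == n_pop

rng = random.Random(42)
n = 10
trials = 5000
fix_prob = sum(moran_fixation(n, 1, rng) for _ in range(trials)) / trials
# fix_prob ≈ 1/n = 0.1 for a neutral mutant
```

    Allowing the total population size to fluctuate, as the paper does, adds a degree of freedom this fixed-N scheme cannot represent.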

  2. Evolutionary dynamics with fluctuating population sizes and strong mutualism

    NASA Astrophysics Data System (ADS)

    Chotibut, Thiparat; Nelson, David R.

    2015-08-01

    Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.

  3. A Solution to ``Too Big to Fail''

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-10-01

    It's a tricky business to reconcile simulations of our galaxy's formation with our current observations of the Milky Way and its satellites. In a recent study, scientists have addressed one discrepancy between simulations and observations: the so-called "too big to fail" problem.

    From Missing Satellites to Too Big to Fail

    The favored model of the universe is the lambda cold dark matter (ΛCDM) cosmological model. This model does a great job of correctly predicting the large-scale structure of the universe, but there are still a few problems with it on smaller scales.

    [Hubble image of UGC 5497, a dwarf galaxy associated with Messier 81. In the missing satellites problem, simulations of galaxy formation predict that there should be more such satellite galaxies than we observe. ESA/NASA]

    The first is the missing satellites problem: ΛCDM cosmology predicts that galaxies like the Milky Way should have significantly more satellite galaxies than we observe. A proposed solution to this problem is the argument that there may exist many more satellites than we've observed, but these dwarf galaxies have had their stars stripped from them during tidal interactions, which prevents us from being able to see them.

    This solution creates a new problem, though: the "too big to fail" problem. This problem states that many of the satellites predicted by ΛCDM cosmology are simply so massive that there's no way they couldn't have visible stars. Another way of looking at it: the observed satellites of the Milky Way are not massive enough to be consistent with predictions from ΛCDM.

    [Artist's illustration of a supernova, a type of stellar feedback that can modify the dark-matter distribution of a satellite galaxy. NASA/CXC/M. Weiss]

    Density Profiles and Tidal Stirring

    Led by Mihai Tomozeiu (University of Zurich), a team of scientists has published a study in which they propose a solution to the "too big to fail" problem. By running detailed cosmological zoom simulations of our galaxy's formation, Tomozeiu and collaborators modeled the dark matter and the stellar content of the galaxy, tracking the formation and evolution of dark-matter subhalos.

    Based on the results of their simulations, the team argues that the "too big to fail" problem can be resolved by combining two effects:

      1. Stellar feedback in a satellite galaxy can modify its dark-matter distribution, lowering the dark-matter density in the galaxy's center and creating a shallower density profile. Satellites with such shallow density profiles evolve differently than those typically modeled, which have a high concentration of dark matter in their centers.

      2. After these satellites fall into the Milky Way's potential, tidal effects such as shocks and stripping modify the mass distribution of both the dark matter and the baryons even further.

    [Each curve represents a simulated satellite's circular velocity (which corresponds to its total mass) at z=0. Left: results using typical dark-matter density profiles. Right: results using the shallower profiles expected when stellar feedback is included. Results from the shallower profiles are consistent with observed Milky Way satellites (black crosses). Adapted from Tomozeiu et al. 2016]

    A Match to Observations

    Tomozeiu and collaborators found that when they used traditional density profiles to model the satellites, the satellites at z=0 in the simulation were much larger than those we observe around the Milky Way, consistent with the "too big to fail" problem. When the team used shallower density profiles and took tidal effects into account, however, the simulations produced a distribution of satellites at z=0 that is consistent with what we observe.

    This study provides a tidy potential solution to the "too big to fail" problem, further strengthening the support for ΛCDM cosmology.

    Citation: Mihai Tomozeiu et al 2016 ApJ 827 L15. doi:10.3847/2041-8205/827/1/L15

  4. Brief Report: Faces Cause Less Distraction in Autism

    ERIC Educational Resources Information Center

    Riby, Deborah M.; Brown, Philippa H.; Jones, Nicola; Hanley, Mary

    2012-01-01

    Individuals with autism have difficulties interpreting face cues that contribute to deficits of social communication. When faces need to be processed for meaning they fail to capture and hold the attention of individuals with autism. In the current study we illustrate that faces fail to capture attention in a typical manner even when they are…

  5. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
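    The core idea of replacing a single gradient-descent run with an evolving population can be sketched with a minimal genetic algorithm on a one-dimensional multimodal objective (illustrative only; not the paper's medial-based segmentation pipeline):

```python
import math
import random

# A minimal genetic algorithm showing the core idea the abstract describes:
# evolving many candidates simultaneously to escape local minima that trap
# a single gradient-descent run. The objective is a 1-D Rastrigin-style
# function with many local minima and a global minimum at x = 0.

def objective(x):
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def fitness(x):
    return -objective(x)  # higher is better

def evolve(pop_size=60, generations=120, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            x = 0.5 * (rng.choice(parents) + rng.choice(parents))  # crossover
            children.append(x + rng.gauss(0.0, sigma))             # mutation
        pop = parents + children                     # elitism keeps the best
    return max(pop, key=fitness)

best = evolve()  # typically lands in one of the deepest basins
```

    In the paper the "individual" is a deformable-model configuration rather than a scalar, and the search space is constrained by localized shape statistics, but the selection/crossover/mutation loop is the same pattern.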

  6. Flight Guidance System Requirements Specification

    NASA Technical Reports Server (NTRS)

    Miller, Steven P.; Tribble, Alan C.; Carlson, Timothy M.; Danielson, Eric J.

    2003-01-01

    This report describes a requirements specification written in the RSML-e language for the mode logic of a Flight Guidance System of a typical regional jet aircraft. This model was created as one of the first steps in a five-year project sponsored by the NASA Langley Research Center, Rockwell Collins Inc., and the Critical Systems Research Group of the University of Minnesota to develop new methods and tools to improve the safety of avionics designs. This model will be used to demonstrate the application of a variety of methods and techniques, including safety analysis of system and subsystem requirements, verification of key properties using theorem provers and model checkers, identification of potential sources of mode confusion in system designs, partitioning of applications based on the criticality of system hazards, and autogeneration of avionics quality code. While this model is representative of the mode logic of a typical regional jet aircraft, it does not describe an actual or planned product. Several aspects of a full Flight Guidance System, such as recovery from failed sensors, have been omitted, and no claims are made regarding the accuracy or completeness of this specification.

  7. Verification of short lead time forecast models: applied to Kp and Dst forecasting

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Wik, Magnus

    2016-04-01

    In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements are shorter (tens of minutes to hours) than the typical duration of the physical phenomena that should be forecast. Under these circumstances several metrics fail to single out trivial cases, such as persistence. In this work we explore metrics and approaches for short lead time forecasts. We apply these to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.

  8. An Accessible, Structured Approach for Building the Intuitive Habit of Evidential Thinking before the Examination Years

    ERIC Educational Resources Information Center

    Aiken, Anna

    2017-01-01

    Anna Aiken and her history colleagues had been reflecting on the stubborn problem of students failing to tackle GCSE questions about sources with adequate thought or understanding of evidence. Teaching them the typical requirements of the GCSE examination even appeared to make things worse, encouraging superficiality and failing to bring about…

  9. Writing as Embodied, College Football Plays as Embodied: Extracurricular Multimodal Composing

    ERIC Educational Resources Information Center

    Rifenburg, J. Michael

    2014-01-01

    Recent explorations position multimodality as a largely curricular practice wherein the body typically is not figured as a potential mode of meaning making. Such a projection not only fails to acknowledge extracurricular uses of such a rhetoric but also fails to acknowledge the role of the body in and especially for composing. In hopes of…

  10. Sensory-motor deficits in children with fetal alcohol spectrum disorder assessed using a robotic virtual reality platform.

    PubMed

    Williams, Loriann; Jackson, Carl P T; Choe, Noreen; Pelland, Lucie; Scott, Stephen H; Reynolds, James N

    2014-01-01

    Fetal alcohol spectrum disorder (FASD) is associated with a large number of cognitive and sensory-motor deficits. In particular, the accurate assessment of sensory-motor deficits in children with FASD is not always simple and relies on clinical assessment tools that may be coarse and subjective. Here we present a new approach: using robotic technology to accurately and objectively assess motor deficits of children with FASD in a center-out reaching task. A total of 152 typically developing children and 31 children with FASD, all aged between 5 and 18 years, were assessed using a robotic exoskeleton device coupled with a virtual reality projection system. Children made reaching movements to 8 peripheral targets in a random order. Reach trajectories were subsequently analyzed to extract 12 parameters that had been previously determined to be good descriptors of a reaching movement, and these parameters were compared for each child with FASD to a normative model derived from the performance of the typically developing population. Compared with typically developing children, the children with FASD were found to be significantly impaired on most of the parameters measured, with the greatest deficits found in initial movement direction error. Also, children with FASD tended to fail more parameters than typically developing children: 95% of typically developing children failed fewer than 3 parameters, compared with 69% of children with FASD. These results were particularly pronounced for younger children. The current study has shown that robotic technology is a sensitive and powerful tool that provides increased specificity regarding the type of motor problems exhibited by children with FASD. The high frequency of motor deficits in children with FASD suggests that interventions aimed at stimulating and/or improving motor development should routinely be considered for this population. Copyright © 2013 by the Research Society on Alcoholism.
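    The per-parameter normative comparison described above can be sketched as a z-score band test. The parameter values, the unit-normal normative model, and the 1.96 cutoff below are illustrative assumptions, not the study's actual criteria:

```python
import numpy as np

def failed_parameters(child_params, norm_mean, norm_sd, z_cut=1.96):
    """Count movement parameters falling outside the normative band
    (|z| > z_cut): one pass/fail decision per parameter."""
    z = (np.asarray(child_params, float) - np.asarray(norm_mean, float)) \
        / np.asarray(norm_sd, float)
    return int(np.sum(np.abs(z) > z_cut))

# Hypothetical normative model for 12 reaching parameters.
norm_mean = np.zeros(12)
norm_sd = np.ones(12)

typical_child = np.array([0.3, -0.5, 1.1, 0.2, -0.8, 0.0,
                          0.7, -0.2, 0.4, -1.0, 0.5, 0.1])
fasd_child = np.array([2.6, -0.5, 3.1, 2.2, -0.8, 0.0,
                       2.7, -2.5, 0.4, -1.0, 2.5, 0.1])

n_typical = failed_parameters(typical_child, norm_mean, norm_sd)
n_fasd = failed_parameters(fasd_child, norm_mean, norm_sd)
```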

  11. A model of face selection in viewing video stories.

    PubMed

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-19

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment.

  12. Impacts of climate change on the formation and stability of late Quaternary sand sheets and falling dunes, Black Mesa region, southern Colorado Plateau, USA

    USGS Publications Warehouse

    Ellwein, Amy L.; Mahan, Shannon; McFadden, Leslie D.

    2015-01-01

    Widely used predictive models of eolian system dynamics are typically based entirely on climatic variables and do not account for landscape complexity and geomorphic history. Climate-only assumptions fail to give accurate predictions of the dynamics of the Black Mesa dune field and many others. A growing body of work suggests that eolian deposits in wind-driven semiarid climates may be more strongly related to increases in sediment supply than to increases in aridity.

  13. Behavioural and Cognitive Sex/Gender Differences in Autism Spectrum Condition and Typically Developing Males and Females

    ERIC Educational Resources Information Center

    Hull, Laura; Mandy, William; Petrides, K. V.

    2017-01-01

    Studies assessing sex/gender differences in autism spectrum conditions often fail to include typically developing control groups. It is, therefore, unclear whether observed sex/gender differences reflect those found in the general population or are particular to autism spectrum conditions. A systematic search identified articles comparing…

  14. Dynamic modelling and response characteristics of a magnetic bearing rotor system with auxiliary bearings

    NASA Technical Reports Server (NTRS)

    Free, April M.; Flowers, George T.; Trent, Victor S.

    1995-01-01

    Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotordynamic model which describes the dynamic behavior of a flexible rotor system with magnetic bearings including auxiliary bearings. The model is based upon an experimental test facility. Some simulation studies are presented to illustrate the behavior of the model. In particular, the effects of introducing sideloading from the magnetic bearing when one coil fails is studied.

  15. Rotordynamic Modelling and Response Characteristics of an Active Magnetic Bearing Rotor System

    NASA Technical Reports Server (NTRS)

    Free, April M.; Flowers, George T.; Trent, Victor S.

    1996-01-01

    Auxiliary bearings are a critical feature of any magnetic bearing system. They protect the soft iron core of the magnetic bearing during an overload or failure. An auxiliary bearing typically consists of a rolling element bearing or bushing with a clearance gap between the rotor and the inner race of the support. The dynamics of such systems can be quite complex. It is desired to develop a rotordynamic model which describes the dynamic behavior of a flexible rotor system with magnetic bearings including auxiliary bearings. The model is based upon an experimental test facility. Some simulation studies are presented to illustrate the behavior of the model. In particular, the effects of introducing sideloading from the magnetic bearing when one coil fails is studied. These results are presented and discussed.

  16. Lessons Learned from Conducting a K-12 Project to Revitalize Achievement by Using Instrumentation in Science Education

    ERIC Educational Resources Information Center

    Kapila, Vikram; Iskander, Magued

    2014-01-01

    A student's first introduction to engineering and technology is typically through high school science labs. Unfortunately, in many high schools, science labs often make use of antiquated tools that fail to deliver exciting lab content. As a result, many students are turned off by science, fail to excel on standardized science exams, and do not…

  17. Hybrid estimation of complex systems.

    PubMed

    Hofbaur, Michael W; Williams, Brian C

    2004-10-01

    Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded but fail-safe manner.
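    The core of an MM scheme is a discrete-Bayes update of mode probabilities, one filter per mode. The sketch below strips this to scalar Gaussian likelihoods with hypothetical mode outputs; it illustrates the mechanism, not the paper's focused-search algorithm:

```python
import math

def mode_update(priors, means, sigma, z):
    """One discrete-Bayes step of a multiple-model estimator: weight each
    mode's prior by the Gaussian likelihood of measurement z under that
    mode's expected output, then renormalise."""
    likes = [math.exp(-0.5 * ((z - m) / sigma) ** 2) for m in means]
    post = [p * l for p, l in zip(priors, likes)]
    total = sum(post)
    return [p / total for p in post]

# Three hypothetical modes (nominal, degraded, failed) with assumed
# expected sensor outputs; measurements arrive near the "failed" value.
means = [0.0, 2.0, 5.0]
post = [0.8, 0.15, 0.05]
for z in [4.8, 5.1, 4.9]:
    post = mode_update(post, means, sigma=0.5, z=z)
```

    The combinatorial problem the paper addresses appears when the mode set is the product of many components' modes, which makes enumerating every mode in this way infeasible online.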

  18. Simulations of Fluvial Landscapes

    NASA Astrophysics Data System (ADS)

    Cattan, D.; Birnir, B.

    2013-12-01

    The Smith-Bretherton-Birnir (SBB) model for fluvial landsurfaces consists of a pair of partial differential equations, one governing water flow and one governing sediment flow. Numerical solutions of these equations have been shown to provide realistic models of the evolution of fluvial landscapes. Further analysis shows that the equations possess scaling laws (Hack's law) that are known to exist in nature. However, the simulations are highly dependent on the numerical methods used: implicit methods exhibit the correct scaling laws, whereas explicit methods fail to do so. These equations, and the resulting models, help to bridge the gap between the deterministic and the stochastic theories of landscape evolution. Slight modifications of the SBB equations make the results of the model more realistic; by modifying the sediment flow equation, the model produces more pronounced meandering rivers. (Figure: typical landsurface with rivers.)

  19. Everyday social and conversation applications of theory-of-mind understanding by children with autism-spectrum disorders or typical development.

    PubMed

    Peterson, Candida C; Garnett, Michelle; Kelly, Adrian; Attwood, Tony

    2009-02-01

    Children with autism-spectrum disorders (ASD) often fail laboratory false-belief tests of theory of mind (ToM). Yet how this impacts on their everyday social behavior is less clear, partly owing to uncertainty over which specific everyday conversational and social skills require ToM understanding. A new caregiver-report scale of these everyday applications of ToM was developed and validated in two studies. Study 1 obtained parent ratings of 339 children (85 with autism; 230 with Asperger's; 24 typically-developing) on the new scale and results revealed (a) that the scale had good psychometric properties and (b) that children with ASD had significantly more everyday mindreading difficulties than typical developers. In Study 2, we directly tested links between laboratory ToM and everyday mindreading using teacher ratings on the new scale. The sample of 25 children included 15 with autism and 10 typical developers aged 5-12 years. Children in both groups who passed laboratory ToM tests had fewer everyday mindreading difficulties than those of the same diagnosis who failed. Yet, intriguingly, autistic ToM-passers still had more problems with everyday mindreading than younger typically-developing ToM-failers. The possible roles of family conversation and peer interaction, along with ToM, in everyday social functioning were considered.

  20. ICT Is Not Participation Is Not Democracy - eParticipation Development Models Revisited

    NASA Astrophysics Data System (ADS)

    Grönlund, Åke

    There exist several models to describe “progress” in eParticipation. These models are typically of the ladder type and share two assumptions: progress is equated with more sophisticated use of technology, and direct democracy is seen as the most advanced democracy model. Neither assumption is true in light of democratic theory, and neither is fruitful, as the simplification distorts analysis and hence obscures actual progress. The models convey a false impression of progress, yet neither the goal, the path, nor the stakeholders driving the development are clearly understood, presented, or evidenced. This paper analyses commonly used models against democratic theory and eParticipation practice, and concludes that all are biased and fail to distinguish among the three dimensions an eParticipation progress model must include: relevance to democracy by any definition; applicability to different processes (capacity building as well as decision making); and measurement of different levels of participation without a direct-democracy bias.

  1. NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.

    PubMed

    Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen

    2017-04-01

    We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation steps were wean from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Modeling relationships between traditional preadmission measures and clinical skills performance on a medical licensure examination.

    PubMed

    Roberts, William L; Pugliano, Gina; Langenau, Erik; Boulet, John R

    2012-08-01

    Medical schools employ a variety of preadmission measures to select students most likely to succeed in the program. The Medical College Admission Test (MCAT) and the undergraduate college grade point average (uGPA) are two academic measures typically used to select students for medical school. The assumption that presently used preadmission measures can predict clinical skills performance on a medical licensure examination was evaluated within a validity argument framework (Kane 1992). A hierarchical generalized linear model tested relationships between the log-odds of failing a high-stakes medical licensure performance examination and matriculants' academic and non-academic preadmission measures, controlling for student- and school-level variables. Data include 3,189 matriculants from 22 osteopathic medical schools tested in 2009-2010. The unconditional unit-specific model's expected average log-odds of failing the examination across medical schools was -3.05 (se = 0.11), or about 5%. Student-level estimated coefficients for MCAT Verbal Reasoning scores (0.03), Physical Sciences scores (0.05), Biological Sciences scores (0.04), uGPA(science) (0.07), and uGPA(non-science) (0.26) lacked association with the log-odds of failing the COMLEX-USA Level 2-PE, controlling for all other predictors in the model. Evidence from this study shows that present preadmission measures of academic ability are not related to later clinical skills performance. Given that clinical skills performance is an important part of medical practice, selection measures should be developed to identify students who will be successful in communication and be able to demonstrate the ability to systematically collect a medical history, perform a physical examination, and synthesize this information to diagnose and manage patient conditions.
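    The reported intercept can be read back as a probability via the inverse-logit transform, which is where the "about 5%" figure comes from:

```python
import math

def logodds_to_prob(logit):
    """Inverse-logit transform: convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-logit))

# The average log-odds of failing, -3.05, corresponds to a failure
# probability of roughly 4.5%, i.e. the "about 5%" in the abstract.
p_fail = logodds_to_prob(-3.05)
```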

  3. Dyslexic children fail to comply with the rhythmic constraints of handwriting.

    PubMed

    Pagliarini, Elena; Guasti, Maria Teresa; Toneatto, Carlo; Granocchio, Elisa; Riva, Federica; Sarti, Daniela; Molteni, Bruna; Stucchi, Natale

    2015-08-01

    In this study, we sought to demonstrate that deficits in a specific motor activity, handwriting, are associated with Developmental Dyslexia. The linguistic and writing performance of children with Developmental Dyslexia, with and without handwriting problems (dysgraphia), was compared with that of children with Typical Development. The quantitative kinematic variables of handwriting were collected by means of a digitizing tablet. The results showed that all children with Developmental Dyslexia wrote more slowly than those with Typical Development. Contrary to typically developing children, they also varied more in the time taken to write the individual letters of a word and failed to comply with the principles of isochrony and homothety. Moreover, a series of correlations was found among reading, language measures and writing measures, suggesting that the two abilities may be linked. We propose that the link between handwriting and reading/language deficits is mediated by rhythm, as both reading (which is grounded on language) and handwriting are ruled by principles of rhythmic organization. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Your Next Airplane: Just Hit Print

    DTIC Science & Technology

    2013-04-01

    American access to cheap and near-instant fabrication. If left to development only by those envisioning cheap plastic gimmicks, 3-D printing will fail to significantly impact the market, but if properly managed, 3-D printing can... resolution, typically between 10 and 100 micrometers. In the filament fusing process, usually with plastic, but possible with many low-melting-point...

  5. A preliminary assessment of the effects of heat flux distribution and penetration on the creep rupture of a reactor vessel lower head

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, T.Y.; Bentz, J.; Simpson, R.

    1997-02-01

    The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiment is performed using 1/5-scale models of a typical PWR pressure vessel. Experiments are performed for various internal pressure and imposed heat flux distributions with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF program investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments are uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900K and the vessel typically failed at approximately 1000K. The size of failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurs in the region of peak temperature. A brief discussion of the effect of penetration is also presented.

  6. A model of face selection in viewing video stories

    PubMed Central

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621

  7. Zero Gyro Kalman Filtering in the presence of a Reaction Wheel Failure

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan; Myslinski, Mike

    2007-01-01

    Typical implementation of Kalman filters for spacecraft attitude estimation involves the use of gyros for three-axis rate measurements. When there are less than three axes of information available, the accuracy of the Kalman filter depends highly on the accuracy of the dynamics model. This is particularly significant during the transient period when a reaction wheel with a high momentum fails, is taken off-line, and spins down. This paper looks at how a reaction wheel failure can affect the zero-gyro Kalman filter performance for the Hubble Space Telescope and what steps are taken to minimize its impact.

  8. Zero Gyro Kalman Filtering in the Presence of a Reaction Wheel Failure

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan; Myslinski, Mike

    2007-01-01

    Typical implementation of Kalman filters for spacecraft attitude estimation involves the use of gyros for three-axis rate measurements. When there are less than three axes of information available, the accuracy of the Kalman filter depends highly on the accuracy of the dynamics model. This is particularly significant during the transient period when a reaction wheel with a high momentum fails, is taken off-line, and spins down. This paper looks at how a reaction wheel failure can affect the zero-gyro Kalman filter performance for the Hubble Space Telescope and what steps are taken to minimize its impact.

  9. Statistical Performance Evaluation Of Soft Seat Pressure Relief Valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Stephen P.; Gross, Robert E.

    2013-03-26

    Risk-based inspection methods enable estimation of the probability of failure on demand for spring-operated pressure relief valves at the United States Department of Energy's Savannah River Site in Aiken, South Carolina. This paper presents a statistical performance evaluation of soft seat spring-operated pressure relief valves. These pressure relief valves are typically smaller and of lower cost than hard seat (metal to metal) pressure relief valves and can provide substantial cost savings in fluid service applications (air, gas, liquid, and steam), provided that the probability of failure on demand (the probability that the pressure relief valve fails to perform its intended safety function during a potentially dangerous over-pressurization) is at least as good as that for hard seat valves. The research in this paper shows that the proportion of soft seat spring-operated pressure relief valves failing is the same or less than that of hard seat valves, and that for failed valves, soft seat valves typically have failure ratios of proof test pressure to set pressure less than those of hard seat valves.

  10. Inequality measures for wealth distribution: Population vs individuals perspective

    NASA Astrophysics Data System (ADS)

    Pascoal, R.; Rocha, H.

    2018-02-01

    Economic inequality is nowadays frequently perceived as following a growing trend, with impact on political and religious agendas. However, there is a wide range of inequality measures, each of which points to a possibly different degree of inequality. Furthermore, regardless of the measure used, it only acknowledges the momentary population inequality, failing to capture the individuals' evolution over time. In this paper, several inequality measures were analyzed in order to compare the typical single-time-instant degree of wealth inequality (population perspective) to the one obtained from the individuals' wealth mean over several time instants (individuals perspective). The proposed generalization of a simple additive model, for the limited-time average of individuals' wealth, allows us to verify that the inequality measures typically computed for a single snapshot of the population significantly overestimate the individuals' wealth inequality over time. Moreover, this overestimation is more extreme for the ratios than for the indices analyzed.
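    The population-versus-individuals contrast can be illustrated with the Gini index on a synthetic wealth panel (an assumed i.i.d. lognormal setup, not the paper's model): a single snapshot shows large inequality, while the same individuals' time-averaged wealth is far more equal:

```python
import numpy as np

def gini(w):
    """Gini index of a non-negative wealth vector (0 = perfect equality)."""
    w = np.sort(np.asarray(w, float))
    n = w.size
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1).dot(w) / (n * w.sum()))

rng = np.random.default_rng(1)
n_people, n_times = 1000, 50
# Assumed panel: each individual redraws lognormal wealth every period,
# so snapshot inequality reflects churn rather than persistent rank.
panel = rng.lognormal(0.0, 1.0, size=(n_times, n_people))

gini_snapshot = gini(panel[0])            # population view: one instant
gini_average = gini(panel.mean(axis=0))   # individuals view: time average
```

    Under this churn assumption the snapshot Gini is several times the time-averaged Gini, which is the overestimation the abstract describes.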

  11. The Role of Notch Signaling Pathway in Breast Cancer Pathogenesis

    DTIC Science & Technology

    2005-07-01

    breast cancer cells, I tested whether ErbB2 overexpression will cooperate with Notch in HMLE cells. While overexpression of activated Notch1 failed to... tyrosine kinase upstream of Ras normally found overexpressed in many breast cancers, also failed to transform HMLE cells. These observations suggested... cooperation between Notch1IC and ErbB2 signaling in transforming HMLE cells. Breast cancers typically do not harbor oncogenic Ras mutations; nevertheless

  12. Design and evaluation of a sensor fail-operational control system for a digitally controlled turbofan engine

    NASA Technical Reports Server (NTRS)

    Hrach, F. J.; Arpasi, D. J.; Bruton, W. M.

    1975-01-01

    A self-learning, sensor fail-operational control system for the TF30-P-3 afterburning turbofan engine was designed and evaluated. The sensor fail-operational control system includes a digital computer program designed to operate in conjunction with the standard TF30-P-3 bill-of-materials control. Four engine measurements and two compressor face measurements are tested. If any engine measurements are found to have failed, they are replaced by values synthesized from computer-stored information. The control system was evaluated by using a real-time, nonlinear, hybrid computer engine simulation at the sea-level static condition, at a typical cruise condition, and at several extreme flight conditions. Results indicate that the addition of such a system can improve the reliability of an engine digital control system.
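    The replace-failed-measurements logic can be sketched as a range check with a synthesis fallback. The sensor names, limits, and synthesized values below are hypothetical placeholders, not the TF30-P-3 tables:

```python
def validate(name, value, limits, synthesized):
    """Range-check one engine measurement; if the reading is out of
    range, substitute a value synthesized from stored information."""
    lo, hi = limits[name]
    if lo <= value <= hi:
        return value, False          # measurement accepted as-is
    return synthesized[name], True   # measurement declared failed

# Hypothetical limits and synthesis table for two measurements.
limits = {"burner_pressure": (5.0, 400.0), "turbine_temp": (200.0, 1300.0)}
synthesized = {"burner_pressure": 180.0, "turbine_temp": 950.0}

value, replaced = validate("turbine_temp", -40.0, limits, synthesized)
```

    A real fail-operational controller would synthesize the replacement from the remaining healthy measurements and a model of the engine rather than from a fixed table; the fixed table here only keeps the sketch self-contained.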

  13. The epistemology of climate models and some of its implications for climate science and the philosophy of science

    NASA Astrophysics Data System (ADS)

    Katzav, Joel

    2014-05-01

    I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change's (IPCC's) focus on establishing confidence in climate model explanations and predictions is misguided. So too, it means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are.

  14. A comparison of methods of fitting several models to nutritional response data.

    PubMed

    Vedenov, D; Pesti, G M

    2008-02-01

    A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
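    The ascending broken-line model mentioned above, y = a + b·min(x, x0), can be fitted without specialized software by scanning candidate breakpoints and solving ordinary least squares at each. A sketch on synthetic dose-response data (illustrative only, not the NRM.xls implementation):

```python
import numpy as np

def fit_broken_line(x, y):
    """Fit the ascending broken line y = a + b*min(x, x0) by scanning
    candidate breakpoints x0 and solving least squares at each one."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for x0 in np.linspace(x.min(), x.max(), 201)[1:-1]:
        design = np.column_stack([np.ones_like(x), np.minimum(x, x0)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        sse = float(np.sum((design @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], x0)
    _, a, b, x0 = best
    return a, b, x0

# Synthetic response: rises with slope 0.5 until x0 = 6, then plateaus.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 60)
y = 1.0 + 0.5 * np.minimum(x, 6.0) + rng.normal(0.0, 0.02, x.size)
a, b, x0 = fit_broken_line(x, y)
```

    Because each candidate breakpoint reduces the problem to a linear least-squares solve, this grid approach avoids the convergence failures that the abstract notes for general nonlinear optimizers.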

  15. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  16. Inversion Effects in the Perception of the Moving Human Form: A Comparison of Adolescents with Autism Spectrum Disorder and Typically Developing Adolescents

    ERIC Educational Resources Information Center

    Cleary, Laura; Looney, Kathy; Brady, Nuala; Fitzgerald, Michael

    2014-01-01

    The "body inversion effect" refers to superior recognition of upright than inverted images of the human body and indicates typical configural processing. Previous research by Reed et al. using static images of the human body shows that people with autism fail to demonstrate this effect. Using a novel task in which adults, adolescents…

  17. Two Bayesian tests of the GLOMOsys Model.

    PubMed

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
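As a rough illustration of the kind of quantity reported above, a Bayes factor for a two-group mean difference can be approximated from BIC values (a standard approximation, not necessarily the test used in the study; the data here are simulated):

```python
import numpy as np

def bic_bayes_factor_01(x, y):
    # BF01 ~ exp((BIC_alt - BIC_null) / 2): evidence for the null
    # (one common mean) over the alternative (two group means).
    data = np.concatenate([x, y])
    n = data.size
    rss0 = np.sum((data - data.mean()) ** 2)                          # H0 fit
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # H1 fit
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return np.exp((bic1 - bic0) / 2.0)

rng = np.random.default_rng(1)
ratings_global = rng.normal(5.0, 1.0, 450)   # simulated: no true effect
ratings_local = rng.normal(5.0, 1.0, 450)
bf01 = bic_bayes_factor_01(ratings_global, ratings_local)
```

A BF01 above 1 favours the null hypothesis, as in Experiment 2 above; values near 1, as in Experiment 1, are indecisive.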

  18. The model of microblog message diffusion based on complex social network

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Bai, Shu-Ying; Jin, Rui

    2014-05-01

A microblog is a micromessage communication network in which users are the nodes and follow relationships between users are the edges; Sina Weibo is a typical example of such microblog services. Given the enormous number of nodes and complex links in the network, we chose a sample network crawled from Sina Weibo as the basis of our empirical analysis. The study starts with an analysis of the network's topological features and introduces the epidemiological SEIR model to explore how messages spread throughout the microblog network. The network is found to be clearly small-world and scale-free, which makes it effective at transferring messages but poor at resisting negative influence. In addition, the paper focuses on the rich nodes, as they constitute a typical feature of Sina Weibo. It is also found that whether a message starts from a rich node does not determine its final coverage. Rather, the rich nodes act as pivotal intermediaries who speed up the spreading and make the message known to many more people.
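The SEIR dynamics the paper borrows from epidemiology can be sketched as coupled differential equations, here with illustrative parameters rather than values fitted to the Weibo sample:

```python
import numpy as np
from scipy.integrate import odeint

def seir(state, t, beta, sigma, gamma):
    # S: unaware users, E: users who saw the message but are not yet
    # spreading, I: actively spreading users, R: users no longer spreading.
    s, e, i, r = state
    n = s + e + i + r
    return [-beta * s * i / n,
            beta * s * i / n - sigma * e,
            sigma * e - gamma * i,
            gamma * i]

t = np.linspace(0.0, 200.0, 2001)
y0 = [9999.0, 0.0, 1.0, 0.0]        # one initial spreader among 10,000 users
sol = odeint(seir, y0, t, args=(0.5, 0.25, 0.2))  # beta/gamma = 2.5 > 1
final_reach = sol[-1, 3]
```

Because users only move forward through the compartments, the total population is conserved; with beta/gamma above 1 the message eventually reaches most of the network.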

  19. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
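The LP relaxation of minimum vertex cover that the analysis studies can be illustrated on tiny graphs (a generic sketch with scipy, unrelated to the statistical-mechanics machinery itself):

```python
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp(n_vertices, edges):
    # minimize sum_v x_v  subject to  x_u + x_v >= 1 per edge, 0 <= x <= 1.
    # Dropping the integrality constraint x_v in {0, 1} gives the relaxation.
    A_ub = np.zeros((len(edges), n_vertices))
    for k, (u, v) in enumerate(edges):
        A_ub[k, u] = A_ub[k, v] = -1.0
    res = linprog(np.ones(n_vertices), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n_vertices)
    return res.fun

# Path 0-1-2-3: the IP optimum is 2 and the LP relaxation agrees.
path_val = vertex_cover_lp(4, [(0, 1), (1, 2), (2, 3)])

# Triangle: the IP optimum is 2, but the LP drops to 1.5 via the
# half-integral solution x = (1/2, 1/2, 1/2), a small example of the
# relaxation failing to reach the integer optimum.
tri_val = vertex_cover_lp(3, [(0, 1), (1, 2), (2, 0)])
```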

  20. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  1. A complex adaptive systems perspective of health information technology implementation.

    PubMed

    Keshavjee, Karim; Kuziemsky, Craig; Vassanji, Karim; Ghany, Ahmad

    2013-01-01

    Implementing health information technology (HIT) is a challenge because of the complexity and multiple interactions that define HIT implementation. Much of the research on HIT implementation is descriptive in nature and has focused on distinct processes such as order entry or decision support. These studies fail to take into account the underlying complexity of the processes, people and settings that are typical of HIT implementations. Complex adaptive systems (CAS) is a promising field that could elucidate the complexity and non-linear interacting issues that are typical in HIT implementation. Initially we sought new models that would enable us to better understand the complex nature of HIT implementation, to proactively identify problem issues that could be a precursor to unintended consequences and to develop new models and new approaches to successful HIT implementations. Our investigation demonstrates that CAS does not provide prediction, but forces us to rethink our HIT implementation paradigms and question what we think we know. CAS provides new ways to conceptualize HIT implementation and suggests new approaches to increasing HIT implementation successes.

  2. Incontinence and the duty to provide care.

    PubMed

    Dimond, Bridgit

    Bridgit Dimond considers the legal implications of failing to adequately assess the continence needs of vulnerable people. A typical situation is used to illustrate the legal issues that arise (Box 1).

  3. Initiating heavy-atom-based phasing by multi-dimensional molecular replacement.

    PubMed

    Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul

    2016-03-01

    To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.

  4. Initiating heavy-atom-based phasing by multi-dimensional molecular replacement

    PubMed Central

    Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul

    2016-01-01

    To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts. PMID:26960131

  5. Models for inference in dynamic metacommunity systems

    USGS Publications Warehouse

    Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias

    2010-01-01

    A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.

  6. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

    PubMed

    Morse, Anthony F; Cangelosi, Angelo

    2017-02-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.

  7. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediate increase in seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.

  8. Simplifications for hydronic system models in modelica

    DOE PAGES

    Jorissen, F.; Wetter, M.; Helsen, L.

    2018-01-12

Building systems and their heating, ventilation and air conditioning flow networks, are becoming increasingly complex. Some building energy simulation tools simulate these flow networks using pressure drop equations. These flow network models typically generate coupled algebraic nonlinear systems of equations, which become increasingly more difficult to solve as their sizes increase. This leads to longer computation times and can cause the solver to fail. These problems also arise when using the equation-based modelling language Modelica and Annex 60-based libraries. This may limit the applicability of the library to relatively small problems unless problems are restructured. This paper discusses two algebraic loop types and presents an approach that decouples algebraic loops into smaller parts, or removes them completely. The approach is applied to a case study model where an algebraic loop of 86 iteration variables is decoupled into smaller parts with a maximum of five iteration variables.

  9. To work or not to work: motivation (not low IQ) determines symptom validity test findings.

    PubMed

    Chafetz, Michael D; Prentkowski, Erica; Rao, Aparna

    2011-06-01

    Social Security Disability Determinations Service (DDS) claimants are seeking compensation for an inability to work (Chafetz, 2010). These usually low-functioning claimants fail Symptom Validity Tests (SVTs) at high rates (Chafetz, 2008), typically over 40%. In contrast, claimants for the Rehabilitation Service in Louisiana (LRS) are seeking to work. Individuals referred by the Department of Child and Family Services (DCFS) are seeking reunification with their children. All three groups consisted of equivalently low-IQ claimants when considering only those who passed SVTs. Only the DDS group failed SVTs at high rates, whereas LRS claimants failed at minimal rates and DCFS claimants did not fail. Thus, intrinsic motivation explains effort in this particular study of low-functioning claimants: those seeking to work or to look good to reunify with their children pass SVTs at high rates.

  10. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    NASA Astrophysics Data System (ADS)

    Gilliam, David M.

    2011-06-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
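The minimum-test-count calculation described above can be sketched with a simple binomial search (a generic construction; the article's own formulas and spreadsheet functions may differ in detail):

```python
from scipy.stats import binom

def min_tests(p0, cl, failures=0):
    # Smallest n such that observing at most `failures` missed detections
    # in n trials demonstrates PD > p0 at confidence level cl, i.e.
    # P(misses <= failures | PD = p0) <= 1 - cl, misses ~ Binomial(n, 1 - p0).
    n = failures + 1
    while binom.cdf(failures, n, 1.0 - p0) > 1.0 - cl:
        n += 1
    return n

# The zero-failure case reduces to the classic p0**n <= 1 - cl criterion:
# demonstrating PD > 0.90 at 95% confidence needs 29 consecutive detections.
n_zero = min_tests(0.90, 0.95)
# Allowing one miss raises the requirement substantially.
n_one = min_tests(0.90, 0.95, failures=1)
```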

  11. Educational content and health literacy issues in direct-to-consumer advertising of pharmaceuticals.

    PubMed

    Mackert, Michael; Love, Brad

    2011-01-01

    Direct-to-consumer (DTC) pharmaceutical advertisements have been analyzed in many ways, but richer conceptualizations of health literacy have been largely absent from this research. With approximately half of U.S. adults struggling to understand health information, it is important to consider consumers' health literacy when analyzing DTC advertisements. This project, framed by the health belief model, analyzed 82 advertisements. Advertisements provided some kinds of educational content (e.g., drugs' medical benefits) but typically failed to offer other useful information (e.g., other strategies for dealing with conditions). Issues likely to be barriers to low health literate consumers, such as nonstandard text formatting, are common.

  12. Torching the Haystack: modelling fast-fail strategies in drug development.

    PubMed

    Lendrem, Dennis W; Lendrem, B Clare

    2013-04-01

    By quickly clearing the development pipeline of failing or marginal products, fast-fail strategies release resources to focus on more promising molecules. The Quick-Kill model of drug development demonstrates that fast-fail strategies will: (1) reduce the expected time to market; (2) reduce expected R&D costs; and (3) increase R&D productivity. This paper outlines the model and demonstrates the impact of fast-fail strategies. The model is illustrated with costs and risks data from pharmaceutical and biopharmaceutical companies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Methods for estimating expected blood alcohol concentration.

    DOT National Transportation Integrated Search

    1980-12-01

    Estimates of blood alcohol concentration (BAC) typically are based on the amount of alcohol consumed per pound body weight. This method fails to consider food intake and body composition, which significantly affect BAC. A laboratory experiment was co...
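The per-body-weight approach the abstract criticises is typified by the classic Widmark formula, sketched below (standard forensic-toxicology textbook form, not this study's method; the distribution factor r is only a crude stand-in for body composition, and food intake is ignored entirely):

```python
def widmark_bac(alcohol_g, body_weight_kg, r, hours_elapsed, beta=0.015):
    # Estimated BAC in g/dL: alcohol dose divided by (body weight times
    # the Widmark factor r, roughly 0.68 for men and 0.55 for women),
    # minus elimination at approximately beta g/dL per hour.
    bac = (alcohol_g / (body_weight_kg * 1000.0 * r)) * 100.0
    return max(0.0, bac - beta * hours_elapsed)

# Two 14 g standard drinks, 80 kg man, measured one hour after drinking.
bac = widmark_bac(28.0, 80.0, 0.68, 1.0)
```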

  14. Research notes : alternate method for pothole patching.

    DOT National Transportation Integrated Search

    1998-09-01

    Typically, throw and roll pothole patches will likely fail before the pavement is resurfaced or rehabilitated. Alternatively, semi-permanent repairs are time consuming and require more people and added lane closure time. An alternate method is spray ...

  15. Liability for Off-Campus Injuries.

    ERIC Educational Resources Information Center

    Zirkel, Perry A.; Gluckman, Ivan B.

    1984-01-01

    Liability in cases involving students injured off school property generally hinges on whether districts fail to exercise due care in supervising students while on school premises. Typical activities that may occasion liability for negligence and possible defenses are listed. (MJL)

  16. Methods for estimating expected blood alcohol concentration

    DOT National Transportation Integrated Search

    1980-08-01

    Estimates of blood alcohol concentration (BAC) typically are based on the amount of alcohol consumed per pound bodyweight. This method fails to consider either food intake or body composition, factors which significantly affect BAC. A laboratory expe...

  17. Models for inference in dynamic metacommunity systems

    USGS Publications Warehouse

    Dorazio, R.M.; Kery, M.; Royle, J. Andrew; Plattner, M.

    2010-01-01

A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity. © 2010 by the Ecological Society of America.

  18. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliam, David M.

    2011-06-01

Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.

  19. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
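The filtering idea can be sketched with numpy's Legendre module (a simplified illustration of projecting a noisy exponential onto a low-order Legendre basis, not the authors' full fitting algorithm):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
t = np.linspace(-1.0, 1.0, 500)            # the Legendre polynomials' natural domain
clean = 2.0 * np.exp(-3.0 * (t + 1.0))     # a single decaying exponential
noisy = clean + rng.normal(0.0, 0.1, t.size)

# A smooth exponential is captured by a handful of Legendre coefficients,
# while white noise spreads over all orders, so truncating the expansion
# removes noise without the phase shift of a conventional lowpass filter.
coeffs = legendre.legfit(t, noisy, deg=10)
filtered = legendre.legval(t, coeffs)

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((filtered - clean) ** 2))
```

The fitted coefficients can also be used directly for parameter retrieval in the Legendre domain, which is the speed advantage the abstract reports.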

  20. Means and extremes: building variability into community-level climate change experiments.

    PubMed

    Thompson, Ross M; Beardall, John; Beringer, Jason; Grace, Mike; Sardina, Paula

    2013-06-01

Experimental studies assessing climatic effects on ecological communities have typically applied static warming treatments. Although these studies have been informative, they have usually failed to incorporate either current or predicted future patterns of variability. Future climates are likely to include extreme events which have greater impacts on ecological systems than changes in means alone. Here, we review the studies which have used experiments to assess impacts of temperature on marine, freshwater and terrestrial communities, and classify them into a set of 'generations' based on how they incorporate variability. The majority of studies have failed to incorporate extreme events. In terrestrial ecosystems in particular, experimental treatments have reduced temperature variability, when most climate models predict increased variability. Marine studies have tended to not concentrate on changes in variability, likely in part because the thermal mass of oceans will moderate variation. In freshwaters, climate change experiments have a much shorter history than in the other ecosystems, and have tended to take a relatively simple approach. We propose a new 'generation' of climate change experiments using down-scaled climate models which incorporate predicted changes in climatic variability, and describe a process for generating data which can be applied as experimental climate change treatments. © 2013 John Wiley & Sons Ltd/CNRS.

  1. Modelling the spatio-temporal modulation response of ganglion cells with difference-of-Gaussians receptive fields: relation to photoreceptor response kinetics.

    PubMed

    Donner, K; Hemilä, S

    1996-01-01

Difference-of-Gaussians (DOG) models for the receptive fields of retinal ganglion cells accurately predict linear responses to both periodic stimuli (typically moving sinusoidal gratings) and aperiodic stimuli (typically circular fields presented as square-wave pulses). While the relation of spatial organization to retinal anatomy has received considerable attention, temporal characteristics have been only loosely connected to retinal physiology. Here we integrate realistic photoreceptor response waveforms into the DOG model to clarify how far a single set of physiological parameters predicts temporal aspects of linear responses to both periodic and aperiodic stimuli. Traditional filter-cascade models provide a useful first-order approximation of the single-photon response in photoreceptors. The absolute time scale of these, plus a time for retinal transmission, here construed as a fixed delay, are obtained from flash/step data. Using these values, we find that the DOG model predicts the main features of both the amplitude and phase response of linear cat ganglion cells to sinusoidal flicker. Where the simplest model formulation fails, it serves to reveal additional mechanisms. Unforeseen facts are the attenuation of low temporal frequencies even in pure center-type responses, and the phase advance of the response relative to the stimulus at low frequencies. Neither can be explained by any experimentally documented cone response waveform, but both would be explained by signal differentiation, e.g. in the retinal transmission pathway, as demonstrated at least in turtle retina.
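The spatial side of the DOG model has a standard closed form in the frequency domain, sketched below with illustrative parameter values (the photoreceptor waveforms and transmission delay discussed in the abstract are not modelled here):

```python
import numpy as np

def dog_response(f, kc, rc, ks, rs):
    # Response of a difference-of-Gaussians receptive field to a sinusoidal
    # grating of spatial frequency f: an excitatory centre Gaussian (gain kc,
    # radius rc) minus a broader, weaker inhibitory surround (ks, rs).
    centre = kc * np.pi * rc ** 2 * np.exp(-(np.pi * rc * f) ** 2)
    surround = ks * np.pi * rs ** 2 * np.exp(-(np.pi * rs * f) ** 2)
    return centre - surround

f = np.linspace(0.0, 5.0, 501)                  # cycles per degree
r = dog_response(f, kc=100.0, rc=0.2, ks=3.0, rs=1.0)
# The surround subtracts most strongly at low frequencies, giving the
# band-pass shape with attenuated low spatial frequencies that DOG
# receptive fields are known for.
```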

  2. CHARACTERIZATION OF VIRULENCE OF Leptospira ISOLATES IN A HAMSTER MODEL

    PubMed Central

    Silva, Éverton F.; Santos, Cleiton S.; Athanazio, Daniel A.; Seyffert, Núbia; Seixas, Fabiana K.; Cerqueira, Gustavo M.; Fagundes, Michel Q.; Brod, Claudiomar S.; Reis, Mitermayer G.; Dellagostin, Odir A.; Ko, Albert I.

    2008-01-01

    Effort has been made to identify protective antigens in order to develop a recombinant vaccine against leptospirosis. Several attempts failed to conclusively demonstrate efficacy of vaccine candidates due to the lack of an appropriate model of lethal leptospirosis. The purposes of our study were: (i) to test the virulence of leptospiral isolates from Brazil, which are representative of important serogroups that cause disease in humans and animals; and (ii) to standardize the lethal dose 50% (LD50) for each of the virulent strains using a hamster (Mesocricetus auratus) model. Five of seven Brazilian isolates induced lethality in a hamster model, with inocula lower than 200 leptospires. Histopathological examination of infected animals showed typical lesions found in both natural and experimental leptospirosis. Results described here demonstrated the potential use of Brazilian isolates as highly virulent strains in challenge experiments using hamster as an appropriate animal model for leptospirosis. Furthermore these strains may be useful in heterologous challenge studies which aim to evaluate cross-protective responses induced by subunit vaccine candidates. PMID:18547690

  3. The journals are full of great studies but can we believe the statistics? Revisiting the mass privatisation - mortality debate.

    PubMed

    Gerry, Christopher J

    2012-07-01

    Cross-national statistical analyses based on country-level panel data are increasingly popular in social epidemiology. To provide reliable results on the societal determinants of health, analysts must give very careful consideration to conceptual and methodological issues: aggregate (historical) data are typically compatible with multiple alternative stories of the data-generating process. Studies in this field which fail to relate their empirical approach to the true underlying data-generating process are likely to produce misleading results if, for example, they misspecify their models by failing to explore the statistical properties of the longitudinal aspect of their data or by ignoring endogeneity issues. We illustrate the importance of this extra need for care with reference to a recent debate on whether rapid mass privatisation can explain post-communist mortality fluctuations. We demonstrate that the finding that rapid mass privatisation was a "crucial determinant" of male mortality fluctuations in the post-communist world is rejected once better consideration is given to the way in which the data are generated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Gamma-ray bursts from stellar mass accretion disks around black holes

    NASA Technical Reports Server (NTRS)

    Woosley, S. E.

    1993-01-01

    A cosmological model for gamma-ray bursts is explored in which the radiation is produced as a broadly beamed pair fireball along the rotation axis of an accreting black hole. The black hole may be a consequence of neutron star merger or neutron star-black hole merger, but for long complex bursts, it is more likely to come from the collapse of a single Wolf-Rayet star endowed with rotation ('failed' Type Ib supernova). The disk is geometrically thick and typically has a mass inside 100 km of several tenths of a solar mass. In the failed supernova case, the disk is fed for a longer period of time by the collapsing star. At its inner edge the disk is thick to its own neutrino emission and evolves on a viscous time scale of several seconds. In a region roughly 30 km across, interior to the accretion disk and along its axis of rotation, a pair fireball is generated by neutrino annihilation and electron-neutrino scattering, which deposit approximately 10^50 erg/s.

  5. The Use of Molecular Techniques at Hazardous Waste Sites

    EPA Science Inventory

    It is clear that typical protocols used for soil analysis would certainly fail to adequately interrogate ground-water treatment systems unless they were substantially modified. The modifications found necessary to compensate for the low biomass include molecular tools and techniq...

  6. Condition assessment survey of onsite sewage disposal systems (OSDSs) in Hawaii.

    PubMed

    Babcock, Roger W; Lamichhane, Krishna M; Cummings, Michael J; Cheong, Gloria H

    2014-01-01

    Onsite sewage disposal systems (OSDSs) are the third leading cause of groundwater contamination in the USA. The existing condition of OSDSs in the State of Hawaii was investigated to determine whether a mandatory management program should be implemented. Based on observed conditions, OSDSs were differentiated into four categories: 'pass', 'sludge scum', 'potential failure' and 'fail'. Of all OSDSs inspected, approximately 68% appear to be in good working condition while the remaining 32% are failing or are in danger of failing. Homeowner interviews found that 80% of OSDSs were not being serviced in any way. About 70% of effluent samples had values of total-N and total-P greater than typical values, and 40% had total suspended solids (TSS) and 5-day biochemical oxygen demand (BOD5) greater than typical values. The performance of aerobic treatment units (ATUs) was no better than that of septic tanks and cesspools, indicating that the State's approach of requiring but not enforcing maintenance contracts for ATUs is not working. In addition, effluent samples from OSDSs located within the estimated 2-year capture zones of drinking water wells had higher average concentrations of TSS, BOD5, and total-P than units outside of these zones, indicating the potential for contamination. These findings suggest the need to introduce a proactive, life-cycle OSDS management program in the State of Hawaii.

  7. The Nonstationary Dynamics of Fitness Distributions: Asexual Model with Epistasis and Standing Variation

    PubMed Central

    Martin, Guillaume; Roques, Lionel

    2016-01-01

    Various models describe asexual evolution by mutation, selection, and drift. Some focus directly on fitness, typically modeling drift but ignoring or simplifying both epistasis and the distribution of mutation effects (traveling wave models). Others follow the dynamics of quantitative traits determining fitness (Fisher’s geometric model), imposing a complex but fixed form of mutation effects and epistasis, and often ignoring drift. In all cases, predictions are typically obtained in high or low mutation rate limits and for long-term stationary regimes, thus losing information on transient behaviors and the effect of initial conditions. Here, we connect fitness-based and trait-based models into a single framework, and seek explicit solutions even away from stationarity. The expected fitness distribution is followed over time via its cumulant generating function, using a deterministic approximation that neglects drift. In several cases, explicit trajectories for the full fitness distribution are obtained for arbitrary mutation rates and standing variance. For nonepistatic mutations, especially beneficial ones, this approximation fails over the long term but captures the early dynamics, thus complementing stationary stochastic predictions. The approximation also handles several diminishing-returns epistasis models (e.g., with an optimal genotype); it can be applied at and away from equilibrium. General results arise at equilibrium, where fitness distributions display a “phase transition” with mutation rate. Beyond this phase transition, in Fisher’s geometric model, the full trajectory of fitness and trait distributions takes a simple form that is robust to the details of the mutant phenotype distribution. Analytical arguments are explored regarding why and when the deterministic approximation applies. PMID:27770037

  8. Forced imbibition through model porous media

    NASA Astrophysics Data System (ADS)

    Odier, Celeste; Levache, Bertrand; Bartolo, Denis

    2016-11-01

    A number of industrial and natural processes ultimately rely on two-phase flow in heterogeneous media. One of the most prominent examples is oil recovery, which has driven fundamental and applied research in this field for decades. Imbibition occurs when a wetting fluid displaces an immiscible fluid, e.g. in a porous medium. Using model microfluidic experiments, we control both the geometry and wetting properties of the heterogeneous media, and show that the typical front-propagation picture fails when imbibition is forced and the displacing fluid is less viscous than the non-wetting fluid. We identify and quantitatively characterize four different flow regimes at the pore scale, yielding markedly different imbibition patterns at large scales. In particular, we discuss the transition from a conventional 2D front-propagation scenario to a regime where the meniscus dynamics is an intrinsically 3D process.

  9. Origin of coronal mass ejection and magnetic cloud: Thermal or magnetic driven?

    NASA Technical Reports Server (NTRS)

    Zhang, Gong-Liang; Wang, Chi; He, Shuang-Hua

    1995-01-01

    A fundamental problem in Solar-Terrestrial Physics is the origin of the solar transient plasma output, which includes the coronal mass ejection and its interplanetary manifestation, e.g. the magnetic cloud. The traditional blast wave model, driven by a solar thermal pressure impulse, has faced challenges in recent years. In an MHD numerical simulation study of CMEs, the authors find that the basic features of the asymmetrical event of 18 August 1980 can be reproduced neither by a thermal pressure increment nor by a speed increment. The thermal pressure model also fails to simulate an interplanetary structure with low thermal pressure and strong magnetic field strength, representative of a typical magnetic cloud. Instead, the numerical simulation results favor magnetic field expansion as the likely mechanism for both the asymmetrical CME event and the magnetic cloud.

  10. Flight service evaluation of composite components on the Bell Helicopter model 206L: Design, fabrication and testing

    NASA Technical Reports Server (NTRS)

    Zinberg, H.

    1982-01-01

    The design, fabrication, and testing phases of a program to obtain long term flight service experience on representative helicopter airframe structural components operating in typical commercial environments are described. The aircraft chosen is the Bell Helicopter Model 206L. The structural components are the forward fairing, litter door, baggage door, and vertical fin. The advanced composite components were designed to replace the production parts in the field and were certified by the FAA to be operable through the full flight envelope of the 206L. A description of the fabrication process used for each of the components is given. Static failing-load tests were performed on all components. In addition, fatigue tests were run on four specimens that simulated the attachment of the vertical fin to the helicopter's tail boom.

  11. Mechanism for thermal relic dark matter of strongly interacting massive particles.

    PubMed

    Hochberg, Yonit; Kuflik, Eric; Volansky, Tomer; Wacker, Jay G

    2014-10-24

    We present a new paradigm for achieving thermal relic dark matter. The mechanism arises when a nearly secluded dark sector is thermalized with the standard model after reheating. The freeze-out process is a number-changing 3→2 annihilation of strongly interacting massive particles (SIMPs) in the dark sector, and points to sub-GeV dark matter. The couplings to the visible sector, necessary for maintaining thermal equilibrium with the standard model, imply measurable signals that will allow coverage of a significant part of the parameter space with future indirect- and direct-detection experiments and via direct production of dark matter at colliders. Moreover, 3→2 annihilations typically predict sizable 2→2 self-interactions which naturally address the "core versus cusp" and "too-big-to-fail" small-scale structure formation problems.

  12. Examining the "Whole Child" to Generate Usable Knowledge

    ERIC Educational Resources Information Center

    Rappolt-Schlichtmann, Gabrielle; Ayoub, Catherine C.; Gravel, Jenna W.

    2009-01-01

    Despite the promise of scientific knowledge contributing to issues facing vulnerable children, families, and communities, typical approaches to research have made applications challenging. While contemporary theories of human development offer appropriate complexity, research has mostly failed to address dynamic developmental processes. Research…

  13. Synthesis of chloro alkoxy and alkoxy derivatives of methyl oleate

    USDA-ARS?s Scientific Manuscript database

    Vegetable oil based lubricants typically have improved lubricity and biodegradability over their mineral oil based counterparts. However, vegetable oil lubricants often fail to meet the performance standards of mineral based oils with respect to cold temperature and resistance to oxidation. Olefins ...

  14. Bioretention Monitoring: Designing Rain Gardens to Promote Nitrate Removal

    EPA Science Inventory

    Laboratory and field-scale studies of bioretention systems have often shown these structures to have a high capacity for removal of suspended solids, heavy metals, and phosphorus. Most studies, however, failed to demonstrate the same success in removing nitrate. Typical rain ga...

  15. Computed versus measured ion velocity distribution functions in a Hall effect thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrigues, L.; CNRS, LAPLACE, F-31062 Toulouse; Mazouffre, S.

    2012-06-01

    We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties, such as the time-averaged ion mean velocity. Yet, the model fails to reproduce the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates accounting for the electron dynamics over a short time scale. This study also shows the necessity of electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.

  16. MSC/NASTRAN Stress Analysis of Complete Models Subjected to Random and Quasi-Static Loads

    NASA Technical Reports Server (NTRS)

    Hampton, Roy W.

    2000-01-01

    Space payloads, such as those which fly on the Space Shuttle in Spacelab, are designed to withstand dynamic loads which consist of combined acoustic random loads and quasi-static acceleration loads. Methods for computing the payload stresses due to these loads are well known and appear in texts and NASA documents, but typically involve approximations such as Miles' equation, as well as possible adjustments based on "modal participation factors." Alternatively, an existing capability in MSC/NASTRAN may be used to output exact root mean square [rms] stresses due to the random loads for any specified elements in the finite element model. However, it is time consuming to use this methodology to obtain the rms stresses for the complete structural model and then combine them with the quasi-static loading induced stresses. Special processing was developed, as described here, to perform the stress analysis of all elements in the model using existing MSC/NASTRAN, MSC/PATRAN, and UNIX utilities. Applications to fail-safe and buckling analyses are also described.
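    For context, the Miles' equation approximation mentioned above estimates the rms response of a lightly damped single-degree-of-freedom mode driven by broadband random acceleration. A minimal sketch, with illustrative values not taken from the paper:

```python
import math

def miles_rms_g(fn_hz, q, asd_g2_per_hz):
    """Miles' equation: approximate rms acceleration response of a lightly
    damped single-degree-of-freedom mode with natural frequency fn_hz and
    amplification Q, driven by a flat acceleration spectral density
    asd_g2_per_hz (g^2/Hz) near resonance."""
    return math.sqrt((math.pi / 2.0) * fn_hz * q * asd_g2_per_hz)

# Illustrative values: a 100 Hz mode with Q = 10 under a 0.04 g^2/Hz input
# gives an rms response of about 7.9 g.
g_rms = miles_rms_g(100.0, 10.0, 0.04)
assert abs(g_rms - 7.93) < 0.01
```

A design load is then commonly taken at some multiple of this rms value (often 3-sigma) and combined with the quasi-static stresses; automating that combination across a complete model is the gap the special processing described here fills.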

  17. BIN1 is Reduced and Cav1.2 Trafficking is Impaired in Human Failing Cardiomyocytes

    PubMed Central

    Hong, Ting-Ting; Smyth, James W.; Chu, Kevin Y.; Vogan, Jacob M.; Fong, Tina S.; Jensen, Brian C.; Fang, Kun; Halushka, Marc K.; Russell, Stuart D.; Colecraft, Henry; Hoopes, Charles W.; Ocorr, Karen; Chi, Neil C.; Shaw, Robin M.

    2011-01-01

    Background Heart failure is a growing epidemic and a typical aspect of heart failure pathophysiology is altered calcium transients. Normal cardiac calcium transients are initiated by Cav1.2 channels at cardiac T-tubules. BIN1 is a membrane scaffolding protein that causes Cav1.2 to traffic to T-tubules in healthy hearts. The mechanisms of Cav1.2 trafficking in heart failure are not known. Objective To study BIN1 expression and its effect on Cav1.2 trafficking in failing hearts. Methods Intact myocardium and freshly isolated cardiomyocytes from non-failing and end-stage failing human hearts were used to study BIN1 expression and Cav1.2 localization. To confirm Cav1.2 surface expression dependence on BIN1, patch clamp recordings were performed of Cav1.2 current in cell lines with and without trafficking competent BIN1. Also, in adult mouse cardiomyocytes, surface Cav1.2 and calcium transients were studied after shRNA mediated knockdown of BIN1. For a functional readout in intact heart, calcium transients and cardiac contractility were analyzed in a zebrafish model with morpholino mediated knockdown of BIN1. Results BIN1 expression is significantly decreased in failing cardiomyocytes at both mRNA (30% down) and protein (36% down) levels. Peripheral Cav1.2 is reduced 42% by imaging and biochemical T-tubule fraction of Cav1.2 is reduced 68%. Total calcium current is reduced 41% in a cell line expressing non-trafficking BIN1 mutant. In mouse cardiomyocytes, BIN1 knockdown decreases surface Cav1.2 and impairs calcium transients. In zebrafish hearts, BIN1 knockdown causes a 75% reduction in calcium transients and severe ventricular contractile dysfunction. Conclusions The data indicate that BIN1 is significantly reduced in human heart failure, and this reduction impairs Cav1.2 trafficking, calcium transients, and contractility. PMID:22138472

  18. Science Matters

    ERIC Educational Resources Information Center

    Odell, Bill

    2005-01-01

    The spaces and structures used for undergraduate science often work against new teaching methods and fail to provide environments that attract the brightest students to science. The undergraduate science building often offers little to inspire the imaginations of young minds. The typical undergraduate science building also tends to work against…

  19. Exploring the relationship between cortical GABA concentrations, auditory gamma-band responses and development in ASD: Evidence for an altered maturational trajectory in ASD.

    PubMed

    Port, Russell G; Gaetz, William; Bloy, Luke; Wang, Dah-Jyuu; Blaskey, Lisa; Kuschner, Emily S; Levy, Susan E; Brodkin, Edward S; Roberts, Timothy P L

    2017-04-01

    Autism spectrum disorder (ASD) is hypothesized to arise from imbalances between excitatory and inhibitory neurotransmission (E/I imbalance). Studies have demonstrated E/I imbalance in individuals with ASD and also corresponding rodent models. One neural process thought to be reliant on E/I balance is gamma-band activity (Gamma), with support arising from observed correlations between motor, as well as visual, Gamma and underlying GABA concentrations in healthy adults. Additionally, decreased Gamma has been observed in ASD individuals and relevant animal models, though the direct relationship between Gamma and GABA concentrations in ASD remains unexplored. This study combined magnetoencephalography (MEG) and edited magnetic resonance spectroscopy (MRS) in 27 typically developing individuals (TD) and 30 individuals with ASD. Auditory cortex localized phase-locked Gamma was compared to resting Superior Temporal Gyrus relative cortical GABA concentrations for both children/adolescents and adults. Children/adolescents with ASD exhibited significantly decreased GABA+/Creatine (Cr) levels, though typical Gamma. Additionally, these children/adolescents lacked the typical maturation of GABA+/Cr concentrations and gamma-band coherence. Furthermore, children/adolescents with ASD additionally failed to exhibit the typical GABA+/Cr to gamma-band coherence association. This altered coupling during childhood/adolescence may result in Gamma decreases observed in the adults with ASD. Therefore, individuals with ASD exhibit improper local neuronal circuitry maturation during a childhood/adolescence critical period, when GABA is involved in configuring of such circuit functioning. Provocatively a novel line of treatment is suggested (with a critical time window); by increasing neural GABA levels in children/adolescents with ASD, proper local circuitry maturation may be restored resulting in typical Gamma in adulthood. Autism Res 2017, 10: 593-607. 
© 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  20. Assessing sea-level rise impact on saltwater intrusion into the root zone of a geo-typical area in coastal east-central Florida.

    PubMed

    Xiao, Han; Wang, Dingbao; Medeiros, Stephen C; Hagen, Scott C; Hall, Carlton R

    2018-07-15

    Saltwater intrusion (SWI) into the root zone in low-lying coastal areas can affect the survival and spatial distribution of various vegetation species by altering plant communities and the wildlife habitats they support. In this study, a baseline model was developed based on FEMWATER to simulate the monthly variation of root zone salinity in a geo-typical area located at the Cape Canaveral Barrier Island Complex (CCBIC) of coastal east-central Florida (USA) in 2010. Based on the developed and calibrated baseline model, three diagnostic FEMWATER models were developed to predict the extent of SWI into the root zone by modifying the boundary values representing the rising sea level under various sea-level rise (SLR) scenarios projected for 2080. The simulation results indicated that the extent of SWI would be insignificant if SLR is either low (23.4 cm) or intermediate (59.0 cm), but significant if SLR is high (119.5 cm): under that scenario, waves at the raised sea level can reach and pass over the dune crests, so the sand dunes may fail to prevent the landward migration of seawater, and infiltration/diffusion of overtopping seawater in low-lying coastal areas can greatly increase root zone salinity. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. On the mechanics of growing thin biological membranes

    NASA Astrophysics Data System (ADS)

    Rausch, Manuel K.; Kuhl, Ellen

    2014-02-01

    Despite their seemingly delicate appearance, thin biological membranes fulfill various crucial roles in the human body and can sustain substantial mechanical loads. Unlike engineering structures, biological membranes are able to grow and adapt to changes in their mechanical environment. Finite element modeling of biological growth holds the potential to better understand the interplay of membrane form and function and to reliably predict the effects of disease or medical intervention. However, standard continuum elements typically fail to represent thin biological membranes efficiently, accurately, and robustly. Moreover, continuum models are typically cumbersome to generate from surface-based medical imaging data. Here we propose a computational model for finite membrane growth using a classical midsurface representation compatible with standard shell elements. By assuming elastic incompressibility and membrane-only growth, the model a priori satisfies the zero-normal stress condition. To demonstrate its modular nature, we implement the membrane growth model into the general-purpose non-linear finite element package Abaqus/Standard using the concept of user subroutines. To probe efficiency and robustness, we simulate selected benchmark examples of growing biological membranes under different loading conditions. To demonstrate the clinical potential, we simulate the functional adaptation of a heart valve leaflet in ischemic cardiomyopathy. We believe that our novel approach will be widely applicable to simulate the adaptive chronic growth of thin biological structures including skin membranes, mucous membranes, fetal membranes, tympanic membranes, corneoscleral membranes, and heart valve membranes. Ultimately, our model can be used to identify diseased states, predict disease evolution, and guide the design of interventional or pharmaceutical therapies to arrest or revert disease progression.

  2. On the mechanics of growing thin biological membranes

    PubMed Central

    Rausch, Manuel K.; Kuhl, Ellen

    2013-01-01

    Despite their seemingly delicate appearance, thin biological membranes fulfill various crucial roles in the human body and can sustain substantial mechanical loads. Unlike engineering structures, biological membranes are able to grow and adapt to changes in their mechanical environment. Finite element modeling of biological growth holds the potential to better understand the interplay of membrane form and function and to reliably predict the effects of disease or medical intervention. However, standard continuum elements typically fail to represent thin biological membranes efficiently, accurately, and robustly. Moreover, continuum models are typically cumbersome to generate from surface-based medical imaging data. Here we propose a computational model for finite membrane growth using a classical midsurface representation compatible with standard shell elements. By assuming elastic incompressibility and membrane-only growth, the model a priori satisfies the zero-normal stress condition. To demonstrate its modular nature, we implement the membrane growth model into the general-purpose non-linear finite element package Abaqus/Standard using the concept of user subroutines. To probe efficiency and robustness, we simulate selected benchmark examples of growing biological membranes under different loading conditions. To demonstrate the clinical potential, we simulate the functional adaptation of a heart valve leaflet in ischemic cardiomyopathy. We believe that our novel approach will be widely applicable to simulate the adaptive chronic growth of thin biological structures including skin membranes, mucous membranes, fetal membranes, tympanic membranes, corneoscleral membranes, and heart valve membranes. Ultimately, our model can be used to identify diseased states, predict disease evolution, and guide the design of interventional or pharmaceutical therapies to arrest or revert disease progression. PMID:24563551

  3. Grade Repetition and Primary School Dropout in Uganda

    ERIC Educational Resources Information Center

    Kabay, Sarah

    2016-01-01

    Research on education in low-income countries rarely focuses on grade repetition. When addressed, repetition is typically presented along with early school dropout as the "wasting" of educational resources. Simplifying grade repetition in this way often fails to recognize significant methodological concerns and also overlooks the unique…

  4. Supplementing Summative Findings with Formative Data.

    ERIC Educational Resources Information Center

    Noggle, Nelson L.

    This paper attempts to provide evaluators, administrators, and policy makers with the advantages of and methodology of merging formative and summative data to enhance summative evaluations. It draws on RMC Research Corporation's 1980-81 California Statewide Evaluation of Migrant Education. The concern that evaluations typically fail to obtain the…

  5. Communication: Density functional theory overcomes the failure of predicting intermolecular interaction energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podeszwa, Rafal; Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716; Szalewicz, Krzysztof

    2012-04-28

    Density-functional theory (DFT) revolutionized the ability of computational quantum mechanics to describe properties of matter and is by far the most often used method. However, all the standard variants of DFT fail to predict intermolecular interaction energies. In recent years, a number of ways to go around this problem have been proposed. We show that some of these approaches can reproduce interaction energies with median errors of only about 5% in the complete range of intermolecular configurations. Such errors are comparable to typical uncertainties of wave-function-based methods in practical applications. Thus, these DFT methods are expected to find broad applications in modelling of condensed phases and of biomolecules.

  6. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of a SAR image's background clutter. However, the Gaussian distribution is only valid for multilook SAR images in which several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm, and known ship location data from the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
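    To illustrate the CFAR idea in the abstract, here is a minimal one-dimensional cell-averaging CFAR sketch. It uses an exponential (Gaussian-amplitude) clutter-power model rather than the paper's Alpha-stable model, and the window sizes, false-alarm rate, and data are illustrative assumptions:

```python
import numpy as np

def ca_cfar_1d(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power profile: estimate local clutter
    from `train` cells on each side of a guard band and flag cells that
    exceed a scaled estimate. The scale factor assumes exponentially
    distributed clutter power; all parameter values are illustrative."""
    n = len(power)
    N = 2 * train                           # total number of training cells
    alpha = N * (pfa ** (-1.0 / N) - 1.0)   # scale factor for the target Pfa
    hits = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        clutter = np.concatenate([left, right]).mean()
        hits[i] = power[i] > alpha * clutter
    return hits

rng = np.random.default_rng(0)
sea = rng.exponential(scale=1.0, size=200)  # exponential sea-clutter power
sea[100] = 50.0                             # one bright hypothetical "ship" cell
assert ca_cfar_1d(sea)[100]
```

With spiky, heavy-tailed sea clutter the exponential assumption underestimates the tail and produces excess false alarms at a fixed threshold scale; that mismatch is the motivation for modeling the clutter with the Alpha-stable distribution instead.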

  7. Designing for sustained adoption: A model of developing educational innovations for successful propagation

    NASA Astrophysics Data System (ADS)

    Khatri, Raina; Henderson, Charles; Cole, Renée; Froyd, Jeffrey E.; Friedrichsen, Debra; Stanford, Courtney

    2016-06-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] The physics education research community has produced a wealth of knowledge about effective teaching and learning of college level physics. Based on this knowledge, many research-proven instructional strategies and teaching materials have been developed and are currently available to instructors. Unfortunately, these intensive research and development activities have failed to influence the teaching practices of many physics instructors. This paper describes interim results of a larger study to develop a model of designing materials for successful propagation. The larger study includes three phases, the first two of which are reported here. The goal of the first phase was to characterize typical propagation practices of education developers, using data from a survey of 1284 National Science Foundation (NSF) principal investigators and focus group data from eight disciplinary groups of NSF program directors. The goal of the second phase was to develop an understanding of successful practice by studying three instructional strategies that have been well propagated. The result of the first two phases is a tentative model of designing for successful propagation, which will be further validated in the third phase through purposeful sampling of additional well-propagated instructional strategies along with typical education development projects. We found that interaction with potential adopters was one of the key missing ingredients in typical education development activities. Education developers often develop a polished product before getting feedback, rely on mass-market communication channels for dissemination, and do not plan for supporting adopters during implementation. The tentative model resulting from this study identifies three key propagation activities: interactive development, interactive dissemination, and support of adopters. 
Interactive development uses significant feedback from potential adopters to develop a strong product suitable for use in many settings. Interactive dissemination uses personal interactions to reach and motivate potential users. Support of adopters is missing from typical propagation practice and is important to reduce the burden of implementation and to increase the likelihood of successful adoption.

  8. The Problem with Briefs, in Brief

    ERIC Educational Resources Information Center

    Conaway, Carrie L.

    2013-01-01

    Policy briefs written by academics--the kind typically published in "Education Finance and Policy"--should be a crucial source of information for policy makers. Yet too frequently these briefs fail to garner the consideration they deserve. Their authors are too focused on the potential objections of their fellow academics, who are…

  9. Storage or Retrieval Deficit: The Yin and Yang of Amnesia

    ERIC Educational Resources Information Center

    Hardt, Oliver; Wang, Szu-Han; Nader, Karim

    2009-01-01

    To this day, it remains unresolved whether experimental amnesia reflects failed memory storage or the inability to retrieve otherwise intact memory. Methodological as well as conceptual reasons prevented deciding between these two alternatives: The absence of recovery from amnesia is typically taken as supporting storage impairment…

  10. No-Fail Software Gifts for Kids.

    ERIC Educational Resources Information Center

    Buckleitner, Warren

    1996-01-01

    Reviews children's software packages: (1) "Fun 'N Games"--nonviolent games and activities; (2) "Putt-Putt Saves the Zoo"--matching, logic games, and animal facts; (3) "Big Job"--12 logic games with video from job sites; (4) "JumpStart First Grade"--15 activities introducing typical school lessons; and (5) "Read, Write, & Type!"--progressively…

  11. The effects of engineering fabric in street pavement on low bearing capacity soil in New Orleans : executive summary.

    DOT National Transportation Integrated Search

    1985-07-01

    Subsurface soil in the New Orleans area is generally composed of peat and clay. The low bearing capacity of the soft natural soil has caused early deterioration of asphaltic concrete pavements which typically fail prior to carrying their designed loa...

  12. Imitation Therapy for Non-Verbal Toddlers

    ERIC Educational Resources Information Center

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  13. Biochemical and physiological consequences of the Apollo flight diet.

    NASA Technical Reports Server (NTRS)

    Hander, E. W.; Leach, C. S.; Fischer, C. L.; Rummel, J.; Rambaut, P.; Johnson, P. C.

    1971-01-01

    Six male subjects subsisting on a typical Apollo flight diet for five consecutive days were evaluated for changes in biochemical and physiological status. Laboratory examinations failed to demonstrate any significant changes of the kind previously attributed to weightlessness, such as in serum electrolytes, endocrine values, body fluid, or hematologic parameters.

  14. The Effects of Multiple Exemplar Instruction on the Relation between Listener and Intraverbal Categorization Repertoires

    ERIC Educational Resources Information Center

    Lechago, Sarah A.; Carr, James E.; Kisamore, April N.; Grow, Laura L.

    2015-01-01

    We evaluated the effects of multiple exemplar instruction (MEI) on the relation between listener and intraverbal categorization repertoires of six typically developing preschool-age children using a nonconcurrent multiple-probe design across participants. After failing to emit intraverbal categorization responses following listener categorization…

  15. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence-window period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systematic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to suboptimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
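The classic assumptions and their relaxation can be made concrete with a small probability sketch. This is a toy illustration, not the authors' model; the function name and the example probabilities are hypothetical.

```python
def release_probability(p_window, p_detect_in=0.0, p_detect_out=1.0):
    """Probability that an infected donation escapes a screening test.

    p_window     : probability the donation falls within the window period
    p_detect_in  : detection probability inside the window (classic: 0)
    p_detect_out : detection probability outside the window (classic: 1)
    """
    return p_window * (1.0 - p_detect_in) + (1.0 - p_window) * (1.0 - p_detect_out)

# Classic model: only window-period donations slip through.
classic = release_probability(p_window=0.001)
# Relaxed model: imperfect sensitivity outside the window raises the risk.
relaxed = release_probability(p_window=0.001, p_detect_out=0.999)
```

Under the classic assumptions the release probability collapses to the window-period probability itself; relaxing assumption (1) even slightly increases the estimated risk, which is the effect the paper quantifies.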

  16. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters. PMID:24603904
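A minimal sketch of representing a noisy exponential in Legendre space, using NumPy's Legendre utilities rather than the authors' algorithm; the exponential's amplitude, decay rate, noise level, and expansion degree are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
true = 2.0 * np.exp(-5.0 * t)                # assumed amplitude and rate
y = true + rng.normal(0.0, 0.05, t.size)     # noisy exponential

# Map t from [0, 1] onto the Legendre domain [-1, 1] and fit a low-order
# expansion; truncating at a small degree discards most of the noise.
x = 2.0 * t - 1.0
coeffs = L.legfit(x, y, deg=8)
y_smooth = L.legval(x, coeffs)

# The low-dimensional representation tracks the clean exponential closely.
rms_err = float(np.sqrt(np.mean((y_smooth - true) ** 2)))
```

Because the exponential is smooth, its Legendre coefficients decay rapidly, so a degree-8 expansion already acts as a noise filter without the phase shift a causal lowpass filter would introduce.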

  17. Resolving the observer reference class problem in cosmology

    NASA Astrophysics Data System (ADS)

    Friederich, Simon

    2017-06-01

    The assumption that we are typical observers plays a core role in attempts to make multiverse theories empirically testable. A widely shared worry about this assumption is that it suffers from systematic ambiguity concerning the reference class of observers with respect to which typicality is assumed. As a way out, Srednicki and Hartle recommend that we empirically test typicality with respect to different candidate reference classes in analogy to how we test physical theories. Unfortunately, as this paper argues, this idea fails because typicality is not the kind of assumption that can be subjected to empirical tests. As an alternative, a background information constraint on observer reference class choice is suggested according to which the observer reference class should be chosen such that it includes precisely those observers who one could possibly be, given one's assumed background information.

  18. Communicating with sentences: A multi-word naming game model

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong; Hu, Jianwei

    2018-01-01

The naming game simulates the process of naming an object with a single word, in which a population of communicating agents reaches global consensus asymptotically through iterative pair-wise conversations. We propose an extension of the single-word model to a multi-word naming game (MWNG), simulating the case of describing a complex object by a sentence (multiple words). Words are defined in categories and then organized into sentences by combining them from different categories. We refer to a formatted combination of several words as a pattern. In such an MWNG, a pair-wise conversation succeeds only if the hearer reaches consensus with the speaker on both every single word in the sentence and the sentence pattern, so as to guarantee the correct meaning of the utterance; otherwise, they fail to reach consensus in the interaction. We validate the model on three typical topologies as the underlying communication network, employing both conventional and manually designed patterns in performing the MWNG.
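The single-word baseline that the MWNG extends can be sketched as follows. This is a minimal illustration with made-up names, not the authors' implementation; the MWNG additionally requires agreement on every word of the sentence and on the sentence pattern.

```python
import random

def converged(agents):
    """True when every agent holds exactly one word and all words agree."""
    return all(len(a) == 1 for a in agents) and len(set.union(*agents)) == 1

def naming_game_step(agents):
    """One pair-wise conversation of the minimal single-word naming game."""
    speaker, hearer = random.sample(range(len(agents)), 2)
    if not agents[speaker]:                      # speaker invents a word
        agents[speaker].add(f"w{speaker}")
    word = random.choice(sorted(agents[speaker]))
    if word in agents[hearer]:                   # success: both keep only it
        agents[speaker] = {word}
        agents[hearer] = {word}
    else:                                        # failure: hearer learns it
        agents[hearer].add(word)

random.seed(1)
agents = [set() for _ in range(20)]
for _ in range(100_000):                         # converges long before this
    if converged(agents):
        break
    naming_game_step(agents)
```

On a fully connected population like this one, the success rule (both interlocutors collapsing to the shared word) is what drives the asymptotic global consensus the abstract describes.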

  19. Experiential avoidance mediates the link between maternal attachment style and theory of mind.

    PubMed

    Vanwoerden, Salome; Kalpakci, Allison H; Sharp, Carla

    2015-02-01

Theoretical and empirical models suggest a relation between attachment style and theory of mind (ToM) in childhood and adulthood; however, this link has not been evaluated to the same extent in adolescence. Additionally, these models typically fail to consider mechanisms by which attachment style affects ToM abilities. The present study sought to test a mediational model in which experiential avoidance mediates the relation between maternal attachment style and ToM. A sample of 282 adolescents (Mage = 15.42 years, SD = 1.44; 62.8% female) was recruited from an inpatient psychiatric hospital. Findings revealed that maternal attachment style in females was related to ToM, through experiential avoidance. Specifically, those with a disorganized maternal attachment were most likely to engage in experiential avoidant cognitive and emotional strategies, which in turn related to lower levels of ToM ability. Implications and areas for future research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Vitamin D fails to prevent serum starvation- or staurosporine-induced apoptosis in human and rat osteosarcoma-derived cell lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witasp, Erika; Division of Biochemical Toxicology, Institute of Environmental Medicine, Karolinska Institutet, Stockholm; Gustafsson, Ann-Catrin

    2005-05-13

Previous studies have suggested that 1,25(OH)2D3, the active form of vitamin D3, may increase the survival of bone-forming osteoblasts through an inhibition of apoptosis. On the other hand, vitamin D3 has also been shown to trigger apoptosis in human cancer cells, including osteosarcoma-derived cell lines. In the present study, we show that 1,25(OH)2D3 induces a time- and dose-dependent loss of cell viability in the rat osteosarcoma cell line, UMR-106, and the human osteosarcoma cell line, TE-85. We were unable, however, to detect nuclear condensation, phosphatidylserine externalization, or other typical signs of apoptosis in this model. Moreover, 1,25(OH)2D3 failed to protect against apoptosis induced by serum starvation or incubation with the protein kinase inhibitor, staurosporine. These in vitro findings are thus at variance with several previous reports in the literature and suggest that induction of or protection against apoptosis of bone-derived cells may not be a primary function of vitamin D3.

  1. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
The failed region patterns between the two methods are similar; however, differences arise because the stress relief that follows element elimination allows probabilistic failed regions to continue to accumulate after the deterministic method would predict no further failed regions. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows for increased postprocessing capability with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to differences in force distribution.
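The contrast between the two failure criteria can be sketched in a few lines. This is only an illustration: the 1.8% strain limit comes from the abstract, but the logistic form and its coefficients are invented stand-ins for the published age-based strain/failure functions.

```python
import math

STRAIN_LIMIT = 0.018   # 1.8% effective plastic strain (deterministic rule)

def deterministic_fail(strain):
    """Hard threshold: the cortical-bone element is removed (and counted
    toward a failed region) once the strain limit is exceeded."""
    return strain > STRAIN_LIMIT

def probabilistic_fail(strain, age, b0=-8.0, b_strain=400.0, b_age=0.05):
    """Age-dependent failure probability in an assumed logistic form; the
    coefficients are invented for illustration, standing in for the
    published age-based failure functions the framework uses."""
    z = b0 + b_strain * strain + b_age * age
    return 1.0 / (1.0 + math.exp(-z))
```

The deterministic rule gives a binary answer per element, while the probabilistic rule yields a risk that can be re-evaluated for different occupant ages in postprocessing, which is the tradeoff the abstract describes.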

  2. Cloud rise model for radiological dispersal devices events

    NASA Astrophysics Data System (ADS)

    Sharon, Avi; Halevy, Itzhak; Sattinger, Daniel; Yaar, Ilan

    2012-07-01

As part of the preparedness and response to possible radiological terror events, it is important to model the evolution of the radioactive cloud immediately after its formation as a function of time, explosive quantity, and local meteorological conditions. One of the major outputs of a cloud rise model is the evaluation of cloud top height, which is an essential input for most of the succeeding atmospheric dispersion models. This parameter strongly affects the radiological consequences of the event. Most of the cloud rise models used today have been developed from experiments in which large quantities of explosives were used, within the range of hundreds of kilograms of TNT. The majority of these models, however, fail to address Radiological Dispersal Device (RDD) events, which are typically characterized by smaller amounts of TNT. In this paper, a new semi-empirical model is presented that describes the vertical evolution of the cloud up to its effective height as a function of time, explosive quantity, atmospheric stability, and horizontal wind speed. The database for this model is taken from five sets of experiments conducted in Israel during 2006-2009 under the "Green Field" (GF) project, using 0.25-100 kg of TNT.

  3. Integral Method for the Assessment of U-RANS Effectiveness in Non-Equilibrium Flows and Heat Transfer

    NASA Astrophysics Data System (ADS)

    Pond, Ian; Edabi, Alireza; Dubief, Yves; White, Christopher

    2015-11-01

Reynolds-Averaged Navier-Stokes (RANS) modeling has established itself as a critical design tool in many engineering applications, thanks to its superior computational efficiency. The drawbacks of RANS models are well known, but not necessarily well understood: poor prediction of transition, non-equilibrium flows, mixing, and heat transfer, to name the ones relevant to our study. In the present study, we use a DNS of a reciprocating channel flow driven by an oscillating pressure gradient to test several low- and high-Reynolds-number RANS models. Temperature is introduced as a passive scalar to study heat transfer modeling. Low-Reynolds-number models manage to capture the overall physics of wall shear and heat flux well, yet with some phase discrepancies, whereas high-Reynolds-number models fail. Under the microscope of the integral method for wall shear and wall heat flux, the qualitative agreement appears more serendipitous than driven by the ability of the models to capture the correct physics. The integral method is shown to be more insightful in the benchmarking of RANS models than the typical comparisons of statistical quantities. The authors acknowledge the support of NSF and DOE under grants NSF/DOE 1258697 (VT) and 1258702 (NH).

  4. Fetal Abuse and the Criminalization of Behavior during Pregnancy.

    ERIC Educational Resources Information Center

    Farr, Kathryn Ann

    1995-01-01

Discusses efforts to criminalize fetal abuse--harm caused by a pregnant woman's use of illegal drugs. Such efforts have typically failed to withstand judicial scrutiny. Suggests that criminal prosecution for fetal abuse relies on questionable procedures, is unevenly applied, and may keep women from seeking drug treatment or prenatal care. (LKS)

  5. Learning to Succeed at Scale. NACSA Monograph

    ERIC Educational Resources Information Center

    Higgins, Monica; Hess, Frederick M.

    2008-01-01

    Given the desperate plight of urban schooling and the disheartening track record of conventional reform, dynamic new ventures like the KIPP Academies, Edison, or Green Dot Public Schools are increasingly being asked to stand in for failing district schools. While promising, these ventures have thus far typically been characterized by "one-off"…

  6. Contraceptive Use Patterns across Teens' Sexual Relationships. Fact Sheet. Publication #2008-07

    ERIC Educational Resources Information Center

    Holcombe, Emily; Carrier, David; Manlove, Jennifer; Ryan, Suzanne

    2008-01-01

    Teens typically fail to use contraceptives consistently, which contributes to high rates of unintended pregnancy and sexually transmitted infections (STIs) among this age group. Existing research has focused primarily on how teens' own characteristics are related to contraceptive use, but has paid less attention to how the characteristics of…

  7. Keep Calm and Contracept! Addressing Young Women's Pleasure in Sexual Health and Contraception Consultations

    ERIC Educational Resources Information Center

    Hanbury, Ali; Eastham, Rachael

    2016-01-01

    Clinical sexual health consultations with young women often focus on avoiding "risks;" namely pregnancy and sexually transmitted infection transmission. They also typically fail to explore how contraception use can impact on the capacity to enjoy sexual relationships. In contrast, this paper argues that sexual pleasure should be a…

  8. Child Care under the Family Support Act: Early Lessons from the States.

    ERIC Educational Resources Information Center

    Children's Defense Fund, Washington, DC.

A Children's Defense Fund (CDF) survey indicates that many AFDC families are being forced to place their children in low-quality and potentially dangerous child care. Family Support Act (FSA) childcare typically lacks basic health and safety precautions, fails to provide sufficient assistance to support quality childcare and preschool…

  9. Temporary and Travelling Exhibitions. Museums and Monuments, X.

    ERIC Educational Resources Information Center

    Daifuku, Hiroshi; And Others

The permanent exhibition, the most typical form of museum exhibition, has failed to attract repeated visitation, since visitors quickly become familiar with the objects shown. The temporary exhibition evolved out of the need for repeated visitation. The temporary exhibition, set up for a period of one to six months, introduces fresh…

  10. Meeting Educators Where They Are: Professional Development to Address Selective Mutism

    ERIC Educational Resources Information Center

    Harwood, Debra; Bork, Po-Ling

    2011-01-01

    Children with selective mutism (SM) present unique challenges for teachers. Typically, children with SM have such an immense anxiety associated with being seen or heard speaking they fail to speak inside the classroom and particularly with teachers. This article reports on the effectiveness of a small-scale exploratory study involving 22…

  11. The Narrative Waltz: The Role of Flexibility in Writing Proficiency

    ERIC Educational Resources Information Center

    Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts, as they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to be supported by the literature on writing. Previous research suggests that higher quality…

  12. Examining the Role of Linguistic Flexibility in the Text Production Process

    ERIC Educational Resources Information Center

    Allen, Laura

    2017-01-01

    A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts, as they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to be supported by the writing literature. Research suggests that higher quality writing is…

  13. Student Movement in Social Context: The Influence of Time, Peers, and Place

    ERIC Educational Resources Information Center

    Dauter, Luke; Fuller, Bruce

    2016-01-01

Higher rates of school switching by students contribute to achievement disparities and are typically theorized as driven by attributes of individual pupils or families. In contrast, the neoclassical-economic account postulates that switching is necessary for competition among schools. We argue that both frames fail to capture social-referential and…

  14. 50 Million Strong for All: Universally Designing CSPAPs to Align with APE Best Practices

    ERIC Educational Resources Information Center

    Brian, Ali; Grenier, Michelle; Lieberman, Lauren J.; Egan, Cate; Taunton, Sally

    2017-01-01

    Many children in the United States fail to meet the national recommendations for daily moderate-to-vigorous physical activity (MVPA). However, children with disabilities are more sedentary than their typically developed peers. Comprehensive school physical activity programming (CSPAP) is a whole-of-school approach to provide children with…

  15. Contrast-Marking Prosodic Emphasis in Williams Syndrome: Results of Detailed Phonetic Analysis

    ERIC Educational Resources Information Center

    Ito, Kiwako; Martens, Marilee A.

    2017-01-01

    Background: Past reports on the speech production of individuals with Williams syndrome (WS) suggest that their prosody is anomalous and may lead to challenges in spoken communication. While existing prosodic assessments confirm that individuals with WS fail to use prosodic emphasis to express contrast, those reports typically lack detailed…

  16. Game Engagement Theory and Adult Learning

    ERIC Educational Resources Information Center

    Whitton, Nicola

    2011-01-01

    One of the benefits of computer game-based learning is the ability of certain types of game to engage and motivate learners. However, theories of learning and engagement, particularly in the sphere of higher education, typically fail to consider gaming engagement theory. In this article, the author examines the principles of engagement from games…

  17. Learning Edge Momentum: A New Account of Outcomes in CS1

    ERIC Educational Resources Information Center

    Robins, Anthony

    2010-01-01

    Compared to other subjects, the typical introductory programming (CS1) course has higher than usual rates of both failing and high grades, creating a characteristic bimodal grade distribution. In this article, I explore two possible explanations. The conventional explanation has been that learners naturally fall into populations of programmers and…

  18. Development and Validation of a Child Version of the Obsessive Compulsive Inventory

    ERIC Educational Resources Information Center

    Foa, Edna B.; Coles, Meredith; Huppert, Jonathan D.; Pasupuleti, Radhika V.; Franklin, Martin E.; March, John

    2010-01-01

    Surprisingly, only 3 self-report measures that directly assess pediatric obsessive-compulsive disorder (OCD) have been developed. In addition, these scales have typically been developed in small samples and fail to provide a quick assessment of symptoms across multiple domains. Therefore, the current paper presents initial psychometric data for a…

  19. A Unique Challenge: Sorting out the Differences between Giftedness and Asperger's Disorder

    ERIC Educational Resources Information Center

    Amend, Edward R.; Schuler, Patricia; Beaver-Gavin, Kathleen; Beights, Rebecca

    2009-01-01

    All-too-often, well-meaning individuals urge parents to seek referrals for psychological evaluation without considering or understanding the typical characteristics of gifted children or the impact of intellectual ability on behavior and relationships. In turn, clinicians receiving such referrals also may fail to assess the implications of…

  20. Brief report: atypical neuromagnetic responses to illusory auditory pitch in children with autism spectrum disorders.

    PubMed

    Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W

    2013-11-01

    Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.

  1. A micrographic study of bending failure in five thermoplastic/carbon fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Yurgartis, S. W.; Sternstein, S. S.

    1987-01-01

The local deformation and failure sequences of five thermoplastic matrix composites were microscopically observed while bending the samples in a small fixture attached to a microscope stage. The thermoplastics are polycarbonate, polysulfone, polyphenylsulfide, polyethersulfone, and polyetheretherketone. Comparison was made to an epoxy matrix composite, 5208/T-300. Laminates tested are (0/90)2S, with outer ply fibers parallel to the beam axis. Four-point bending was used at a typical span-to-thickness ratio of 39:1. It was found that all of the thermoplastic composites failed by abrupt longitudinal compression buckling of the outer ply. Very little precursory damage was observed. Micrographs reveal typical fiber kinking associated with longitudinal compression failure. Curved fracture surfaces on the fibers suggest they failed in bending rather than direct compression. Delamination was suppressed in the thermoplastic composites, and the delamination that did occur was found to be the result of compression buckling, rather than vice versa. Microbuckling also caused other subsequent damage such as ply splitting, transverse ply shear failure, fiber tensile failure, and transverse ply cracking.

  2. Uncertainty in Population Growth Rates: Determining Confidence Intervals from Point Estimates of Parameters

    PubMed Central

    Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.

    2010-01-01

    Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
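The resampling idea can be sketched with a hypothetical two-stage matrix model. The vital rates, sample sizes, and assumed error distributions below are invented for illustration and are not the red fox data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point estimates for a juvenile/adult model, with the
# assumed sample sizes from which uncertainty is inferred.
s_j, n_j = 0.4, 50        # juvenile survival, estimated from 50 animals
s_a, n_a = 0.7, 60        # adult survival, estimated from 60 animals
f_a = 1.5                 # mean female offspring per adult

def growth_rate(sj, sa, f):
    """Dominant eigenvalue (lambda) of a 2x2 Leslie-type matrix."""
    A = np.array([[0.0, f], [sj, sa]])
    return float(np.max(np.abs(np.linalg.eigvals(A))))

# Parametric resampling: binomial uncertainty for survival counts and an
# assumed gamma spread for fecundity, then a percentile interval on lambda.
lams = []
for _ in range(2000):
    sj = rng.binomial(n_j, s_j) / n_j
    sa = rng.binomial(n_a, s_a) / n_a
    f = rng.gamma(shape=f_a * 20.0, scale=1.0 / 20.0)
    lams.append(growth_rate(sj, sa, f))

lo, hi = np.percentile(lams, [2.5, 97.5])   # 95% CI for population growth
```

Because each vital rate's uncertainty scales roughly with one over the square root of the sample size, halving the width of this interval does indeed require roughly quadrupling the sampling effort, as the abstract notes.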

  3. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.

  4. Assessment of CO2 Storage Potential in Naturally Fractured Reservoirs With Dual-Porosity Models

    NASA Astrophysics Data System (ADS)

    March, Rafael; Doster, Florian; Geiger, Sebastian

    2018-03-01

Naturally Fractured Reservoirs (NFRs) have received little attention as potential CO2 storage sites. Two main facts deter storage projects in fractured reservoirs: (1) CO2 tends to be nonwetting in target formations, so capillary forces will keep CO2 in the fractures, which typically have low pore volume; and (2) the high conductivity of the fractures may lead to increased spatial spreading of the CO2 plume. Numerical simulations are a powerful tool to understand the physics behind brine-CO2 flow in NFRs. Dual-porosity models are typically used to simulate multiphase flow in fractured formations. However, existing dual-porosity models are based on crude approximations of the matrix-fracture fluid transfer processes and often fail to capture the dynamics of fluid exchange accurately. Therefore, more accurate transfer functions are needed in order to evaluate the CO2 transfer to the matrix. This work presents an assessment of CO2 storage potential in NFRs using dual-porosity models. We investigate the impact of a system of fractures on storage in a saline aquifer by analyzing the time scales of brine drainage by CO2 in the matrix blocks and the maximum CO2 that can be stored in the rock matrix. A new model to estimate drainage time scales is developed and used in a transfer function for dual-porosity simulations. We then analyze how injection rates should be limited in order to avoid early spill of CO2 (loss of control of the plume) on a conceptual anticline model. Numerical simulations on the anticline show that naturally fractured reservoirs may be used to store CO2.
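The drainage time-scale idea behind such transfer functions can be sketched as a first-order exchange. The saturation and time-scale values here are illustrative assumptions, not the model developed in the paper.

```python
import numpy as np

def matrix_co2_saturation(t, s_max, tau):
    """First-order matrix-fracture transfer: CO2 held in the fractures
    drains brine from a matrix block, whose CO2 saturation approaches the
    maximum storable value s_max with drainage time scale tau."""
    return s_max * (1.0 - np.exp(-t / tau))

# After one drainage time scale a block holds ~63% of its storable CO2;
# injecting faster than this transfer admits risks early spill of the plume.
t = np.linspace(0.0, 5.0, 6)                     # time in units of tau
s = matrix_co2_saturation(t, s_max=0.6, tau=1.0)
```

Comparing tau against the planned injection schedule is one simple way to bound injection rates so that the matrix, not just the high-conductivity fractures, takes up the CO2.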

  5. Parotid gland mean dose as a xerostomia predictor in low-dose domains.

    PubMed

    Gabryś, Hubert Szymon; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark

    2017-09-01

    Xerostomia is a common side effect of radiotherapy resulting from excessive irradiation of salivary glands. Typically, xerostomia is modeled by the mean dose-response characteristic of the parotid glands and prevented by mean dose constraints on either the contralateral or both parotid glands. The aim of this study was to investigate whether normal tissue complication probability (NTCP) models based on the mean radiation dose to the parotid glands are suitable for predicting xerostomia in the highly conformal low-dose regime of modern intensity-modulated radiotherapy (IMRT) techniques. We present a retrospective analysis of 153 head and neck cancer patients treated with radiotherapy. The Lyman-Kutcher-Burman (LKB) model was used to evaluate the predictive power of the parotid gland mean dose with respect to xerostomia at 6 and 12 months after treatment. The predictive performance of the model was evaluated by receiver operating characteristic (ROC) curves and precision-recall (PR) curves. Average mean doses to the ipsilateral and contralateral parotid glands were 25.4 Gy and 18.7 Gy, respectively. QUANTEC constraints were met in 74% of patients. Mild to severe (G1+) xerostomia prevalence at both 6 and 12 months was 67%. Moderate to severe (G2+) xerostomia prevalence at 6 and 12 months was 20% and 15%, respectively. G1+ xerostomia was predicted reasonably well, with areas under the ROC curve ranging from 0.69 to 0.76. The LKB model failed to provide reliable G2+ xerostomia predictions at both time points. Reducing the mean dose to the parotid glands below QUANTEC guidelines resulted in low G2+ xerostomia rates. In this dose domain, the mean dose models predicted G1+ xerostomia fairly well but failed to recognize patients at risk of G2+ xerostomia. There is a need for more flexible models able to capture the complexity of dose response in this dose regime.
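
    The LKB model referenced above has a standard closed form when driven by mean dose (volume parameter n = 1, so the generalized EUD reduces to the mean dose): NTCP = Phi((D - TD50)/(m·TD50)). A minimal sketch, with illustrative placeholder values for TD50 and m rather than the parameters fitted in this study:

```python
# A sketch of the Lyman-Kutcher-Burman (LKB) NTCP model driven by parotid
# mean dose (volume parameter n = 1, so the generalized EUD reduces to the
# mean dose). TD50 and m below are illustrative placeholder values, not the
# parameters fitted in this study.
import math

def lkb_ntcp(mean_dose_gy, td50=39.9, m=0.40):
    """NTCP = Phi((D - TD50) / (m * TD50)), Phi the standard normal CDF."""
    t = (mean_dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

for dose in (18.7, 25.4, 39.9):
    print(f"mean dose {dose:5.1f} Gy -> NTCP {lkb_ntcp(dose):.2f}")
```

    By construction the model returns 0.5 at D = TD50; both average mean doses reported above fall well below that point.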

  6. Robust encoding of stimulus identity and concentration in the accessory olfactory system.

    PubMed

    Arnson, Hannah A; Holy, Timothy E

    2013-08-14

    Sensory systems represent stimulus identity and intensity, but in the neural periphery these two variables are typically intertwined. Moreover, stable detection may be complicated by environmental uncertainty; stimulus properties can differ over time and circumstance in ways that are not necessarily biologically relevant. We explored these issues in the context of the mouse accessory olfactory system, which specializes in detection of chemical social cues and infers myriad aspects of the identity and physiological state of conspecifics from complex mixtures, such as urine. Using mixtures of sulfated steroids, key constituents of urine, we found that spiking responses of individual vomeronasal sensory neurons encode both individual compounds and mixtures in a manner consistent with a simple model of receptor-ligand interactions. Although typical neurons did not accurately encode concentration over a large dynamic range, from population activity it was possible to reliably estimate the log-concentration of pure compounds over several orders of magnitude. For binary mixtures, simple models failed to accurately segment the individual components, largely because of the prevalence of neurons responsive to both components. By accounting for such overlaps during model tuning, we show that, from neuronal firing, one can accurately estimate log-concentration of both components, even when tested across widely varying concentrations. With this foundation, the difference of logarithms, log A - log B = log A/B, provides a natural mechanism to accurately estimate concentration ratios. Thus, we show that a biophysically plausible circuit model can reconstruct concentration ratios from observed neuronal firing, representing a powerful mechanism to separate stimulus identity from absolute concentration.
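
    The ratio readout log A - log B = log A/B can be illustrated with a toy linear-in-log population code. The tuning model, neuron count, and noise level below are illustrative assumptions, not the study's fitted receptor-ligand models:

```python
# A toy linear-in-log population code illustrating ratio readout via
# log A - log B = log(A/B). The tuning model, neuron count, and noise level
# are illustrative assumptions, not the study's fitted receptor models.
import random

random.seed(0)
N = 200
gains = [random.uniform(0.5, 2.0) for _ in range(N)]
offsets = [random.uniform(-1.0, 1.0) for _ in range(N)]

def population_response(log_c):
    """Each neuron fires linearly in log10 concentration, plus noise."""
    return [g * log_c + b + random.gauss(0, 0.2)
            for g, b in zip(gains, offsets)]

def decode_log_c(responses):
    """Average the per-neuron inversions of the known tuning curves."""
    ests = [(r - b) / g for r, g, b in zip(responses, gains, offsets)]
    return sum(ests) / len(ests)

log_a = decode_log_c(population_response(-3.0))  # compound A at 10^-3
log_b = decode_log_c(population_response(-5.0))  # compound B at 10^-5
print(f"estimated log10(A/B): {log_a - log_b:.2f}")  # near 2.0
```

    Even though single-neuron responses are noisy, the population average recovers log-concentration well enough that the difference of the two decoded values gives the concentration ratio.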

  7. Predicting Quarantine Failure Rates

    PubMed Central

    2004-01-01

    Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418

  8. When Does Social Capital Matter? Non-Searching for Jobs across the Life Course

    ERIC Educational Resources Information Center

    McDonald, Steve; Elder, Glen H., Jr.

    2006-01-01

    Non-searchers--people who get their jobs without engaging in a job search--are often excluded from investigations of the role of personal relationships in job finding processes. This practice fails to capture the scope of informal job matching activity and underestimates the effectiveness of social capital. Moreover, studies typically obtain…

  9. Neuropsychological Profile on the WISC-IV of French Children with Dyslexia

    ERIC Educational Resources Information Center

    De Clercq-Quaegebeur, Maryse; Casalis, Severine; Lemaitre, Marie-Pierre; Bourgois, Beatrice; Getto, Marie; Vallee, Louis

    2010-01-01

    This study examined the pattern of results on the "Wechsler Intelligence Scale for Children" (WISC-IV; French version) for 60 French children with dyslexia, from 8 to 16 years of age. Although use of WISC-III failed to clearly identify typical profiles and cognitive deficits in dyslexia, WISC-IV offers an opportunity to reach these…

  10. Specificity of Putative Psychosocial Risk Factors for Psychiatric Disorders in Children and Adolescents

    ERIC Educational Resources Information Center

    Shanahan, Lilly; Copeland, William; Costello, E. Jane; Angold, Adrian

    2008-01-01

    Background: Most psychosocial risk factors appear to have general rather than specific patterns of association with common childhood and adolescence disorders. However, previous research has typically failed to 1) control for comorbidity among disorders, 2) include a wide range of risk factors, and 3) examine sex by developmental stage effects on…

  11. Understanding the Specificity and Random Collision of Enzyme-Substrate Interaction

    ERIC Educational Resources Information Center

    Kin, Ng Hong; Ling, Tan Aik

    2016-01-01

    The concept of specificity of enzyme action can potentially be abstract for some students as they fail to appreciate how the three-dimensional configuration of enzymes and the active sites confer perfect fit for specific substrates. In science text books, the specificity of enzyme-substrate binding is typically likened to the action of a lock and…

  12. The Effect of Foster Care Experience and Characteristics on Academic Achievement

    ERIC Educational Resources Information Center

    Calix, Alexandra

    2009-01-01

    This study examined the effect of foster care experience and characteristics on educational outcomes. The typical strategy in examining the effect foster care has on educational outcomes is to compare the educational achievement of youth with foster care experience to that of their peers or to national norms. This strategy fails to take selection…

  13. How Within-District Spending Inequities Help Some Schools to Fail

    ERIC Educational Resources Information Center

    Roza, Marguerite; Hill, Paul Thomas

    2004-01-01

    School district budgets typically hide as much as they reveal. Superintendents are finding this as they discover huge deficits that nobody saw coming. District budgets are opaque by design, and they often mask important facts about resource allocation within a district, as well as about total spending. This paper reports the results of an original…

  14. From Disconnection to Connection: "Race", Gender and the Politics of Therapy

    ERIC Educational Resources Information Center

    Chantler, Khatidja

    2005-01-01

    Person-centred therapy typically fails to address structural dimensions of inequality such as "race", gender and class. In this paper, I explore why this is, and what can be done about it, at the levels of theory, practice and the organisation of services. Drawing on person-centred theory and practice, I discuss theoretical and…

  15. Implementing Army Training Programs: An Overview for Managers. Research Report 1382.

    ERIC Educational Resources Information Center

    Gray, Wayne D.

    The place and importance of implementation in the life cycle of Army training programs is frequently misunderstood. Typically, a program's life cycle is thought of as research, development, and use. If implementation is thought of at all, it is regarded as an event, not a process. Many worthwhile programs have failed because the implementation…

  16. BIN1 is reduced and Cav1.2 trafficking is impaired in human failing cardiomyocytes.

    PubMed

    Hong, Ting-Ting; Smyth, James W; Chu, Kevin Y; Vogan, Jacob M; Fong, Tina S; Jensen, Brian C; Fang, Kun; Halushka, Marc K; Russell, Stuart D; Colecraft, Henry; Hoopes, Charles W; Ocorr, Karen; Chi, Neil C; Shaw, Robin M

    2012-05-01

    Heart failure is a growing epidemic, and a typical aspect of heart failure pathophysiology is altered calcium transients. Normal cardiac calcium transients are initiated by Cav1.2 channels at cardiac T tubules. Bridging integrator 1 (BIN1) is a membrane scaffolding protein that causes Cav1.2 to traffic to T tubules in healthy hearts. The mechanisms of Cav1.2 trafficking in heart failure are not known. Our objective was to study BIN1 expression and its effect on Cav1.2 trafficking in failing hearts. Intact myocardium and freshly isolated cardiomyocytes from nonfailing and end-stage failing human hearts were used to study BIN1 expression and Cav1.2 localization. To confirm the dependence of Cav1.2 surface expression on BIN1, patch-clamp recordings of Cav1.2 current were performed in cell lines with and without trafficking-competent BIN1. Also, in adult mouse cardiomyocytes, surface Cav1.2 and calcium transients were studied after small hairpin RNA-mediated knockdown of BIN1. For a functional readout in intact heart, calcium transients and cardiac contractility were analyzed in a zebrafish model with morpholino-mediated knockdown of BIN1. BIN1 expression is significantly decreased in failing cardiomyocytes at both the mRNA (30% down) and protein (36% down) levels. Peripheral Cav1.2 is reduced to 42% by imaging, and the biochemical T-tubule fraction of Cav1.2 is reduced to 68%. The total calcium current is reduced to 41% in a cell line expressing a nontrafficking BIN1 mutant. In mouse cardiomyocytes, BIN1 knockdown decreases surface Cav1.2 and impairs calcium transients. In zebrafish hearts, BIN1 knockdown causes a 75% reduction in calcium transients and severe ventricular contractile dysfunction. The data indicate that BIN1 is significantly reduced in human heart failure, and this reduction impairs Cav1.2 trafficking, calcium transients, and contractility. Copyright © 2012 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  17. On-line Bayesian model updating for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

    Fatigue-induced cracking is a dangerous failure mechanism that affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail in the identification of cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
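
    The update loop described above can be sketched with a toy surrogate in place of the finite element emulator. Everything here (the linear frequency-vs-crack-length surrogate, the prior bounds, the Gaussian likelihood, all numbers) is an illustrative assumption:

```python
# A toy Bayesian model updating loop for a single crack-length parameter,
# with a Metropolis sampler and a linear surrogate standing in for the
# paper's finite element emulator. The surrogate, prior bounds, noise level,
# and all numbers are illustrative assumptions.
import math
import random

random.seed(1)

def emulator(crack_len):
    """Toy surrogate: a natural frequency that falls with crack length."""
    return 100.0 - 8.0 * crack_len

true_len, noise_sd = 2.5, 0.5
measurement = emulator(true_len) + random.gauss(0, noise_sd)

def log_post(theta):
    if not 0.0 <= theta <= 10.0:           # uniform prior on [0, 10]
        return -math.inf
    resid = measurement - emulator(theta)
    return -0.5 * (resid / noise_sd) ** 2  # Gaussian likelihood

samples, theta = [], 5.0
for i in range(20000):
    prop = theta + random.gauss(0, 0.3)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if i > 2000:                           # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
print(f"posterior mean crack length: {post_mean:.2f}")
```

    The posterior concentrates near the crack length that best explains the noisy measurement, which is the "most probable crack" idea in miniature.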

  18. Evolution and anti-evolution in a minimal stock market model

    NASA Astrophysics Data System (ADS)

    Rothenstein, R.; Pawelzik, K.

    2003-08-01

    We present a novel microscopic stock market model consisting of a large number of random agents modeling traders in a market. Each agent is characterized by a set of parameters that serve to make iterated predictions of two successive returns. The future price is determined according to the offer and the demand of all agents. The system evolves by redistributing the capital among the agents in each trading cycle. Without noise the dynamics of this system is nearly regular and thereby fails to reproduce the stochastic return fluctuations observed in real markets. However, when a small amount of noise is introduced in each cycle, we find the typical features of real financial time series, like fat tails of the return distribution and large temporal correlations in the volatility, without significant correlations in the price returns. Introducing the noise by an evolutionary process leads to different scalings of the return distributions that depend on the definition of fitness. Because our model has only a few parameters, and the results appear to be robust with respect to the noise level and the number of agents, we expect that our framework may serve as a new paradigm for modeling self-generated return fluctuations in markets.
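
    The trading cycle described (noisy predictions, price set by aggregate demand, capital redistribution) can be caricatured in a few lines. All rules and parameter values below are illustrative assumptions and make no attempt to reproduce the paper's return statistics:

```python
# A heavily simplified caricature of the trading cycle: agents submit
# buy/sell orders from noisy predictions, the return follows capital-
# weighted net demand, and capital is redistributed toward agents whose
# predictions matched the move. All rules and parameters are illustrative.
import random

random.seed(4)
N = 100
capital = [1.0] * N
bias = [random.gauss(0, 1) for _ in range(N)]  # each agent's fixed view

returns = []
for _ in range(2000):
    preds = [b + random.gauss(0, 1) for b in bias]
    demand = sum(c * (1 if p > 0 else -1) for c, p in zip(capital, preds))
    r = 0.01 * demand / sum(capital)       # return set by net demand
    returns.append(r)
    for i, p in enumerate(preds):          # redistribute capital
        capital[i] *= 1.001 if (p > 0) == (r > 0) else 0.999

print(f"cycles: {len(returns)}, largest |return|: "
      f"{max(abs(r) for r in returns):.4f}")
```
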

  19. Shock-induced bubble jetting into a viscous fluid with application to tissue injury in shock-wave lithotripsy.

    PubMed

    Freund, J B; Shukla, R K; Evan, A P

    2009-11-01

    Shock waves in liquids are known to cause spherical gas bubbles to rapidly collapse and form strong re-entrant jets in the direction of the propagating shock. The interaction of these jets with an adjacent viscous liquid is investigated using finite-volume simulation methods. This configuration serves as a model for tissue injury during shock-wave lithotripsy, a medical procedure to remove kidney stones. In this case, the viscous fluid provides a crude model for the tissue. It is found that for viscosities comparable to what might be expected in tissue, the jet that forms upon collapse of a small bubble fails to penetrate deeply into the viscous fluid "tissue." A simple model reproduces the penetration distance versus viscosity observed in the simulations and leads to a phenomenological model for the spreading of injury with multiple shocks. For a reasonable selection of a single efficiency parameter, this model is able to reproduce in vivo observations of an apparent 1000-shock threshold before widespread tissue injury occurs in targeted kidneys and the approximate extent of this injury after a typical clinical dose of 2000 shock waves.

  20. Shock-induced bubble jetting into a viscous fluid with application to tissue injury in shock-wave lithotripsy

    PubMed Central

    Freund, J. B.; Shukla, R. K.; Evan, A. P.

    2009-01-01

    Shock waves in liquids are known to cause spherical gas bubbles to rapidly collapse and form strong re-entrant jets in the direction of the propagating shock. The interaction of these jets with an adjacent viscous liquid is investigated using finite-volume simulation methods. This configuration serves as a model for tissue injury during shock-wave lithotripsy, a medical procedure to remove kidney stones. In this case, the viscous fluid provides a crude model for the tissue. It is found that for viscosities comparable to what might be expected in tissue, the jet that forms upon collapse of a small bubble fails to penetrate deeply into the viscous fluid “tissue.” A simple model reproduces the penetration distance versus viscosity observed in the simulations and leads to a phenomenological model for the spreading of injury with multiple shocks. For a reasonable selection of a single efficiency parameter, this model is able to reproduce in vivo observations of an apparent 1000-shock threshold before widespread tissue injury occurs in targeted kidneys and the approximate extent of this injury after a typical clinical dose of 2000 shock waves. PMID:19894850

  1. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties, chief among them the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of ensemble learning to compensate for model error, the second is to develop tractable information-theoretic learning to deal with non-Gaussianity in inference, and the third is a manifold resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures, and second to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  2. Separation of time scales in one-dimensional directed nucleation-growth processes

    NASA Astrophysics Data System (ADS)

    Pierobon, Paolo; Miné-Hattab, Judith; Cappello, Giovanni; Viovy, Jean-Louis; Lagomarsino, Marco Cosentino

    2010-12-01

    Proteins involved in homologous recombination such as RecA and hRad51 polymerize on single- and double-stranded DNA according to a nucleation-growth kinetics, which can be monitored by single-molecule in vitro assays. The basic models currently used to extract biochemical rates rely on ensemble averages and are typically based on an underlying process of bidirectional polymerization, in contrast with the often observed anisotropic polymerization of similar proteins. For these reasons, if one considers single-molecule experiments, the available models are useful to understand observations only in some regimes. In particular, recent experiments have highlighted a steplike polymerization kinetics. The classical model of one-dimensional nucleation growth, the Kolmogorov-Avrami-Mehl-Johnson (KAMJ) model, predicts the correct polymerization kinetics only in some regimes and fails to predict the steplike behavior. This work illustrates by simulations and analytical arguments the limitation of applicability of the KAMJ description and proposes a minimal model for the statistics of the steps based on the so-called stick-breaking stochastic process. We argue that this insight might be useful to extract information on the time and length scales involved in the polymerization kinetics.
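
    The stick-breaking process proposed above for the step statistics is a standard construction: each step claims a random fraction of what remains uncovered. A minimal sketch, with a Beta(1, alpha) fraction and an arbitrary alpha (both illustrative assumptions):

```python
# A minimal stick-breaking sketch for step-size statistics: each step claims
# a Beta-distributed fraction of the remaining uncovered length. The
# Beta(1, alpha) choice and the value of alpha are illustrative assumptions.
import random

random.seed(2)

def stick_breaking_steps(alpha=3.0, n_steps=50):
    """Return successive segment lengths carved from a unit 'stick'."""
    remaining, lengths = 1.0, []
    for _ in range(n_steps):
        frac = random.betavariate(1.0, alpha)
        lengths.append(frac * remaining)
        remaining *= 1.0 - frac
    return lengths

steps = stick_breaking_steps()
print(f"fraction covered after {len(steps)} steps: {sum(steps):.4f}")
```

    Successive steps shrink on average, since each acts on an ever-smaller remainder; this is the qualitative signature the stick-breaking description is meant to capture.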

  3. Simulation-Based Training for Colonoscopy

    PubMed Central

    Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars

    2015-01-01

    The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
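
    The contrasting-groups method used here amounts to choosing the cut score that best separates a known-competent group from a known-novice group. A minimal sketch on made-up scores (not the study's data):

```python
# A minimal sketch of the contrasting-groups method: scan candidate cutoffs
# and pick the score that minimizes total misclassification between the
# competent (consultant) and novice (fellow) groups. The score lists are
# made-up illustrative data, not the study's measurements.

consultants = [78, 82, 85, 88, 90, 91, 93, 95, 96, 97]   # should pass
fellows     = [40, 45, 50, 52, 55, 58, 60, 63, 70, 80]   # should fail

def contrasting_groups_cutoff(passers, failers):
    candidates = sorted(set(passers) | set(failers))
    best_cut, best_err = None, float("inf")
    for cut in candidates:
        errors = (sum(s < cut for s in passers)
                  + sum(s >= cut for s in failers))
        if errors < best_err:
            best_cut, best_err = cut, errors
    return best_cut, best_err

cutoff, misclassified = contrasting_groups_cutoff(consultants, fellows)
print(f"pass/fail cutoff: {cutoff}, misclassified: {misclassified}")
```

    With this toy data the chosen cutoff misclassifies exactly one examinee (a high-scoring novice passes), mirroring the kind of consequence analysis the study reports.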

  4. In Vitro, In Vivo and Post Explantation Testing of Glucose-Detecting Biosensors: Current Methods and Recommendations

    PubMed Central

    Koschwanez, Heidi E.; Reichert, W. Monty

    2007-01-01

    To date, there have been a number of cases where glucose sensors have performed well over long periods of implantation; however, it remains difficult to predict whether a given sensor will perform reliably, will exhibit gradual degradation of performance, or will fail outright soon after implantation. Typically, the literature emphasizes the sensor that performed well, while only briefly (if at all) mentioning the failed devices. This leaves open the question of whether current sensor designs are adequate for the hostile in vivo environment, and whether these sensors have been assessed by the proper regimen of testing protocols. This paper reviews the current in vitro and in vivo testing procedures used to evaluate the functionality and biocompatibility of implantable glucose sensors. An overview of the standards and regulatory bodies that govern biomaterials and end-product device testing precedes a discussion of up-to-date invasive and non-invasive technologies for diabetes management. Analysis of current in vitro, in vivo, and then post implantation testing is presented. Given the underlying assumption that the success of the sensor in vivo foreshadows the long-term reliability of the sensor in the human body, the relative merits of these testing methods are evaluated with respect to how representative they are of human models. PMID:17524479

  5. The Evolution of Generosity in the Ultimatum Game

    PubMed Central

    Hintze, Arend; Hertwig, Ralph

    2016-01-01

    When humans fail to make optimal decisions in strategic games and economic gambles, researchers typically try to explain why that behaviour is biased. To this end, they search for mechanisms that cause human behaviour to deviate from what seems to be the rational optimum. But perhaps human behaviour is not biased; perhaps research assumptions about the optimality of strategies are incomplete. In the one-shot anonymous symmetric ultimatum game (UG), humans fail to play optimally as defined by the Nash equilibrium. However, the distinction between kin and non-kin—with kin detection being a key evolutionary adaptation—is often neglected when deriving the “optimal” strategy. We computationally evolved strategies in the UG that were equipped with an evolvable probability to discern kin from non-kin. When an opponent was not kin, agents evolved strategies that were similar to those used by humans. We therefore conclude that the strategy humans play is not irrational. The deviation between behaviour and the Nash equilibrium may rather be attributable to key evolutionary adaptations, such as kin detection. Our findings further suggest that social preference models are likely to capture mechanisms that permit people to play optimally in an evolutionary context. Once this context is taken into account, human behaviour no longer appears irrational. PMID:27677330

  6. The Evolution of Generosity in the Ultimatum Game.

    PubMed

    Hintze, Arend; Hertwig, Ralph

    2016-09-28

    When humans fail to make optimal decisions in strategic games and economic gambles, researchers typically try to explain why that behaviour is biased. To this end, they search for mechanisms that cause human behaviour to deviate from what seems to be the rational optimum. But perhaps human behaviour is not biased; perhaps research assumptions about the optimality of strategies are incomplete. In the one-shot anonymous symmetric ultimatum game (UG), humans fail to play optimally as defined by the Nash equilibrium. However, the distinction between kin and non-kin (with kin detection being a key evolutionary adaptation) is often neglected when deriving the "optimal" strategy. We computationally evolved strategies in the UG that were equipped with an evolvable probability to discern kin from non-kin. When an opponent was not kin, agents evolved strategies that were similar to those used by humans. We therefore conclude that the strategy humans play is not irrational. The deviation between behaviour and the Nash equilibrium may rather be attributable to key evolutionary adaptations, such as kin detection. Our findings further suggest that social preference models are likely to capture mechanisms that permit people to play optimally in an evolutionary context. Once this context is taken into account, human behaviour no longer appears irrational.

  7. Using whole-exome sequencing to identify variants inherited from mosaic parents

    PubMed Central

    Rios, Jonathan J; Delgado, Mauricio R

    2015-01-01

    Whole-exome sequencing (WES) has allowed the discovery of genes and variants causing rare human disease. This is often achieved by comparing nonsynonymous variants between unrelated patients and, particularly for sporadic or recessive disease, often identifies a single gene or a few candidate genes for further consideration. However, despite the potential for this approach to elucidate the genetic cause of rare human disease, a majority of patients fail to receive a genetic diagnosis using standard exome analysis methods. Although genetic heterogeneity contributes to the difficulty of exome sequence analysis between patients, it remains plausible that some rare human disease is caused neither by de novo nor by recessive variants. Multiple human disorders have been described for which the variant was inherited from a phenotypically normal mosaic parent. Here we highlight the potential for exome sequencing to identify a reasonable number of candidate genes when dominant disease variants are inherited from a mosaic parent. We show the power of WES to identify a limited number of candidate genes using this disease model and how sequence coverage affects identification of mosaic variants by WES. We propose this analysis as an alternative means to discover genetic causes of rare human disorders for which typical WES approaches fail to identify likely pathogenic variants. PMID:24986828

  8. Evaluating landfill aftercare strategies: A life cycle assessment approach.

    PubMed

    Turner, David A; Beaven, Richard P; Woodman, Nick D

    2017-05-01

    This study investigates the potential impacts caused by the loss of active environmental control measures during the aftercare period of landfill management. A combined mechanistic solute flow model and life cycle assessment (LCA) approach was used to evaluate the potential impacts of leachate emissions over a 10,000-year time horizon. A continuum of control-loss possibilities occurring at different times and for different durations was investigated for four different basic aftercare scenarios, including a typical aftercare scenario involving a low-permeability cap and three accelerated aftercare scenarios involving higher initial infiltration rates. Assuming a 'best case' in which control is never lost, the largest potential impacts resulted from the typical aftercare scenario. The maximum difference between potential impacts from the 'best case' and the 'worst case', in which control fails at the earliest possible point and is never reinstated, was only a fourfold increase. This highlights potential deficiencies in standard life cycle impact assessment practice, which are discussed. Nevertheless, the results show how the influence of active control loss on the potential impacts of landfilling varies considerably depending on the aftercare strategy used and highlight the importance that leachate treatment efficiencies have upon impacts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Fatigue of restorative materials.

    PubMed

    Baran, G; Boberick, K; McCool, J

    2001-01-01

    Failure due to fatigue manifests itself in dental prostheses and restorations as wear, fractured margins, delaminated coatings, and bulk fracture. Mechanisms responsible for fatigue-induced failure depend on material ductility: Brittle materials are susceptible to catastrophic failure, while ductile materials utilize their plasticity to reduce stress concentrations at the crack tip. Because of the expense associated with the replacement of failed restorations, there is a strong desire on the part of basic scientists and clinicians to evaluate the resistance of materials to fatigue in laboratory tests. Test variables include fatigue-loading mode and test environment, such as soaking in water. The outcome variable is typically fracture strength, and these data typically fit the Weibull distribution. Analysis of fatigue data permits predictive inferences to be made concerning the survival of structures fabricated from restorative materials under specified loading conditions. Although many dental restorative materials are routinely evaluated, only limited use has been made of fatigue data collected in vitro: Wear of materials and the survival of porcelain restorations have been modeled by both fracture mechanics and probabilistic approaches. A need still exists for a clinical failure database and for the development of valid test methods for the evaluation of composite materials.
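
    Fitting fracture-strength data to a two-parameter Weibull distribution is commonly done by median-rank regression, sketched below on made-up strength values (the method is standard; the numbers are not from the review):

```python
# Median-rank regression fit of fracture-strength data to a two-parameter
# Weibull distribution. The strength values are made-up illustrative data,
# not measurements from the review.
import math

strengths = [78, 85, 91, 96, 100, 104, 109, 115, 122, 131]  # MPa, sorted

n = len(strengths)
xs, ys = [], []
for i, s in enumerate(strengths, start=1):
    f = (i - 0.3) / (n + 0.4)            # median-rank estimate of the CDF
    xs.append(math.log(s))
    ys.append(math.log(-math.log(1.0 - f)))

# Least-squares line y = m*x + c gives Weibull modulus m, scale exp(-c/m).
mean_x, mean_y = sum(xs) / n, sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
m = num / den
scale = math.exp(-(mean_y - m * mean_x) / m)

print(f"Weibull modulus m = {m:.1f}, "
      f"characteristic strength = {scale:.0f} MPa")
```

    The fitted modulus and characteristic strength are exactly the quantities used for the predictive survival inferences mentioned above.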

  10. Fast luminous blue transients from newborn black holes

    NASA Astrophysics Data System (ADS)

    Kashiyama, Kazumi; Quataert, Eliot

    2015-08-01

    Newborn black holes in collapsing massive stars can be accompanied by a fallback disc. The accretion rate is typically super-Eddington and strong disc outflows are expected. Such outflows could be directly observed in some failed explosions of compact (blue supergiant or Wolf-Rayet) progenitors, and may be more common than long-duration gamma-ray bursts. Using an analytical model, we show that the fallback disc outflows produce blue UV-optical transients with a peak bolometric luminosity of ~10^42-43 erg s^-1 (peak R-band absolute AB magnitudes of -16 to -18) and an emission duration of ~a few to ~10 d. The spectra are likely dominated by intermediate-mass elements, but will lack radioactive nuclei and iron-group elements. The above properties are broadly consistent with some of the rapid blue transients detected by the Panoramic Survey Telescope & Rapid Response System and the Palomar Transient Factory. This scenario can be distinguished from alternative models using radio observations within a few years after the optical peak.

  11. Geometrical-optics approximation of forward scattering by gradient-index spheres.

    PubMed

    Li, Xiangzhen; Han, Xiang'e; Li, Renxian; Jiang, Huifen

    2007-08-01

    By means of geometrical optics we present an approximation method to accelerate the computation of the scattering intensity distribution within a forward angular range (0-60 degrees) for gradient-index spheres illuminated by a plane wave. The incident angle of reflected light is determined by the scattering angle, thus improving the approximation accuracy. The scattering angle and the optical path length are numerically integrated by a general-purpose integrator. With some special index models, the scattering angle and the optical path length can be expressed by a unique function and the calculation is faster. The method proves effective for transparent particles with size parameters greater than 50. It fails to give good approximation results at scattering angles whose refracted rays are in the backward direction. For different index models, the geometrical-optics approximation is effective only at forward angles, typically those less than 60 degrees, or when the refractive-index difference of a particle is less than a certain value.

  12. Collective memory in primate conflict implied by temporal scaling collapse.

    PubMed

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find that conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond that assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  13. The evolution of mate choice: a dialogue between theory and experiment.

    PubMed

    Roff, Derek A

    2015-12-01

    Research on the evolution of mate choice has followed three avenues of investigation: (1) theoretical models of the evolution of preference and the preferred trait; (2) proposed models of mate choice; and (3) experiments and observations on mate choice, both in the laboratory and with free-ranging animals. However, there has been relatively little dialogue among these three areas. Most attempts to account for observations of mate choice using theoretical mate-choice models have focused only upon a subset of particular models and have generally failed to consider the difference between probabilistic and deterministic models. In this review, I outline the underlying reasoning of the commonly cited mate-choice models and review the conclusions of the empirical investigations. I present a brief outline of how one might go about testing these models. It remains uncertain if, in general, mate-choice models can be realistically analyzed. Although it is clear that females frequently discriminate among males, data also suggest that females may typically have a very limited number of males from which to choose. The extent to which female choice under natural conditions is relatively random because of limited opportunities remains an open question for the majority of species. © 2015 New York Academy of Sciences.

  14. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but the estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  15. Analysis of longitudinal seismic response of bridge with magneto-rheological elastomeric bearings

    NASA Astrophysics Data System (ADS)

    Li, Rui; Li, Xi; Wu, Yueyuan; Chen, Shiwei; Wang, Xiaojie

    2016-04-01

    As the weakest part of the bridge system, traditional bridge bearings are incapable of isolating impact loads such as earthquakes. A magneto-rheological elastomeric bearing (MRB) with adjustable stiffness and damping parameters is designed, tested and modeled. The developed Bouc-Wen model is adopted to represent the constitutive relation and force-displacement behavior of an MRB. Then, the lead rubber bearing (LRB), passive MRB and controllable MRB are modeled by the finite element method (FEM). Furthermore, two typical seismic waves are adopted as inputs to the bridge isolation system. Experiments are carried out to investigate the response along the bridge with on-off controlled MRBs. The results show that the isolating performance of the MRB is similar to that of the traditional LRB, which ensures the fail-safe capability of a bridge with MRBs under seismic excitation. In addition, the controllable bridge with MRBs demonstrates an advantage in isolating capacity and energy dissipation, reducing the peak acceleration of the bridge beam by 33.3% and the bearing displacement by 34.1%. The shear force at the pier top is also alleviated.
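    The Bouc-Wen constitutive model mentioned in this abstract can be sketched in its classical form; the parameter values and the sinusoidal displacement input below are hypothetical illustrations, not those of the study's MRB.

```python
import numpy as np

# Classical Bouc-Wen hysteresis model, a sketch of the kind of
# constitutive law the abstract refers to. All parameter values and the
# imposed displacement are hypothetical.
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0   # hysteresis shape parameters
alpha, k = 0.3, 100.0                     # post-yield ratio, stiffness (N/m)

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
x = 0.05 * np.sin(2 * np.pi * t)          # imposed displacement (m)
xdot = np.gradient(x, dt)

# Hysteretic variable z:
#   dz/dt = A*xdot - beta*|xdot|*|z|^(n-1)*z - gamma*xdot*|z|^n
z = np.zeros_like(t)
for i in range(len(t) - 1):               # explicit Euler integration
    zdot = (A * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n - 1) * z[i]
            - gamma * xdot[i] * abs(z[i]) ** n)
    z[i + 1] = z[i] + zdot * dt

F = alpha * k * x + (1 - alpha) * k * z   # restoring force (N)

# Area of the force-displacement loop = energy dissipated by hysteresis.
energy = abs(np.sum(0.5 * (F[:-1] + F[1:]) * np.diff(x)))
print(f"energy dissipated over 4 cycles = {energy:.3f} J")
```

    The enclosed area of the force-displacement loop is the hysteretic energy dissipation, the quantity the bearing exploits to protect the structure.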

  16. The Use of Motion Tracking Technologies in Serious Games to Enhance Rehabilitation in Stroke Patients

    ERIC Educational Resources Information Center

    Burton, Andrew M.; Liu, Hao; Battersby, Steven; Brown, David; Sherkat, Nasser; Standen, Penny; Walker, Marion

    2011-01-01

    Stroke is the main cause of long-term disability worldwide. Of those surviving, more than half will fail to regain functional usage of their impaired upper limb. Typically, stroke upper-limb rehabilitation exercises consist of repeated movements which, when tracked, can form the basis of inputs to games. This paper discusses two systems utilizing…

  17. Once a Year to Be Black: Fighting against Typical Black History Month Pedagogies

    ERIC Educational Resources Information Center

    King, LaGarrett J.; Brown, Keffrelyn

    2014-01-01

    Our study examined the experiences of three middle school teachers who created their own Black History Month curriculum. Although the relevance of Black History Month is under scrutiny by opponents who feel it marginalizes the history of Black Americans, proponents of this position have failed to account for teachers who view and use this Month…

  18. Myth-Busting Is a Bust for Patient Education: Making Salient Older Adults' Misconceptions about Osteoarthritis Fails to Lead to Lasting Corrections

    ERIC Educational Resources Information Center

    Ansburg, Pamela I.

    2016-01-01

    Older adults hold many misconceptions about health and wellness that reduce their health literacy. To counter these misconceptions, health educators commonly turn to educational interventions that include myth-busting--making explicit health-related myths and refuting those myths. Because of typical age-related changes in memory functioning, there…

  19. Beyond Online versus Face-to-Face Comparisons: The Interaction of Student Age and Mode of Instruction on Academic Achievement

    ERIC Educational Resources Information Center

    Slover, Ed; Mandernach, Jean

    2018-01-01

    While it is well-established that nontraditional students are more likely to take online courses than their traditional-age counterparts, investigations of the learning equivalence between online and campus-based instruction typically fail to consider student age as a mediating factor in the learning experience. To examine learning outcomes as a…

  20. Conditions under which Young Children Can Hold Two Rules in Mind and Inhibit a Prepotent Response.

    ERIC Educational Resources Information Center

    Diamond, Adele; Kirkham, Natasha; Amso, Dima

    2002-01-01

    Systematically varied the day-night task requiring children to say "night" to a sun picture and "day" to a moon picture to investigate why young children typically fail the task. Found that reducing memory load did not help performance. Reducing inhibitory demand by requiring an unrelated response or inserting a delay between…

  1. Failing to Forget: Prospective Memory Commission Errors Can Result from Spontaneous Retrieval and Impaired Executive Control

    ERIC Educational Resources Information Center

    Scullin, Michael K.; Bugg, Julie M.

    2013-01-01

    Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of…

  2. Using the First Exam for Student Placement in Beginning Chemistry Courses

    ERIC Educational Resources Information Center

    Mills, Pamela; Sweeney, William; Bonner, Sarah M.

    2009-01-01

    The first exam in a typical first-semester general chemistry course is used to identify students at risk of failing the course. The performance at Hunter College of 667 students on the first exam in general chemistry in seven different classes between fall 2000 and fall 2005 was correlated with the students' final score in the course. The…

  3. The origin of neap-spring tidal cycles

    USGS Publications Warehouse

    Kvale, E.P.

    2006-01-01

    The origin of oceanic tides is a basic concept taught in most introductory college-level sedimentology/geology, oceanography, and astronomy courses. Tides are typically explained in the context of the equilibrium tidal theory model. Yet this model does not account for real tides in many parts of the world. Not only does the equilibrium tidal model fail to explain amphidromic circulation, it also does not explain diurnal tides at low-latitude positions. It likewise fails to explain the existence of tide-dominated areas where neap-spring cycles are synchronized with the 27.32-day orbital cycle of the Moon (tropical month), rather than with the more familiar 29.52-day cycle of lunar phases (synodic month). Both types of neap-spring cycles can be recognized in the rock record. A complete explanation of the origin of tides should include a discussion of dynamic tidal theory. In the dynamic tidal model, tides resulting from the motions of the Moon in its orbit around the Earth and the Earth in its orbit around the Sun are modeled as products of the combined effects of a series of phantom satellites. The movement of each of these satellites, relative to the Earth's equator, creates its own tidal wave that moves around an amphidromic point. Each of these waves is referred to as a tidal constituent. The geometries of the ocean basins determine which of these constituents are amplified. Thus, the tide-raising potential for any locality on Earth can be conceptualized as the result of a series of tidal constituents specific to that region. A better understanding of tidal cycles opens up remarkable opportunities for research on tidal deposits with implications for, among other things, a more complete understanding of the tidal dynamics responsible for sediment transport and deposition, changes in Earth-Moon distance through time, and the possible influences tidal cycles may exert on organisms. © 2006 Elsevier B.V. All rights reserved.
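    The constituent picture described in this abstract can be illustrated with a toy harmonic prediction. The speeds below are the standard M2 and S2 constituent speeds; the amplitudes and phases are hypothetical station values. The beat between M2 and S2 reproduces the familiar synodic neap-spring cycle.

```python
import numpy as np

# Harmonic tide prediction: water level is a sum of cosine constituents.
# Speeds (degrees/hour) are the standard M2 and S2 values; amplitudes
# and phases are hypothetical station-specific numbers.
constituents = {
    # name: (amplitude_m, speed_deg_per_hr, phase_deg)
    "M2": (1.00, 28.9841042, 0.0),  # principal lunar semidiurnal
    "S2": (0.30, 30.0000000, 0.0),  # principal solar semidiurnal
}

def tide_height(t_hours):
    """Water level (m) relative to mean sea level at time t in hours."""
    return sum(a * np.cos(np.radians(speed * t_hours - phase))
               for a, speed, phase in constituents.values())

# The M2-S2 beat period is the synodic neap-spring cycle (~14.77 days,
# half a synodic month):
beat_hours = 360.0 / (constituents["S2"][1] - constituents["M2"][1])
print(f"neap-spring cycle = {beat_hours / 24:.2f} days")
```

    A tropical-month neap-spring cycle instead arises from the interference of the diurnal constituents (e.g. K1 and O1), which the same framework accommodates by adding entries to the constituent table.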

  4. Maternal choline supplementation in a sheep model of first trimester binge alcohol fails to protect against brain volume reductions in peripubertal lambs

    PubMed Central

    Birch, Sharla M.; Lenox, Mark W.; Kornegay, Joe N.; Paniagua, Beatriz; Styner, Martin A.; Goodlett, Charles R.; Cudd, Tim A.; Washburn, Shannon E.

    2016-01-01

    Fetal alcohol spectrum disorder (FASD) is a leading potentially preventable birth defect. Poor nutrition may contribute to adverse developmental outcomes of prenatal alcohol exposure, and supplementation of essential micronutrients such as choline has shown benefit in rodent models. The sheep model of first-trimester binge alcohol exposure was used in this study to model the dose of maternal choline supplementation used in an ongoing prospective clinical trial involving pregnancies at risk for FASD. Primary outcome measures included volumetrics of the whole brain, cerebellum, and pituitary derived from magnetic resonance imaging (MRI) in 6-month-old lambs, testing the hypothesis that alcohol-exposed lambs would have brain volume reductions that would be ameliorated by maternal choline supplementation. Pregnant sheep were randomly assigned to one of five groups – heavy binge alcohol (HBA; 2.5 g/kg/treatment ethanol), heavy binge alcohol plus choline supplementation (HBC; 2.5 g/kg/treatment ethanol and 10 mg/kg/day choline), saline control (SC), saline control plus choline supplementation (SCC; 10 mg/kg/day choline), and normal control (NC). Ewes were given intravenous alcohol (HBA, HBC; mean peak BACs of ~280 mg/dL) or saline (SC, SCC) on three consecutive days per week from gestation day (GD) 4–41; choline was administered on GD 4–148. MRI scans of lamb brains were performed postnatally on day 182. Lambs from both alcohol groups (with or without choline) showed significant reductions in total brain volume; cerebellar and pituitary volumes were not significantly affected. This is the first report of MRI-derived volumetric brain reductions in a sheep model of FASD following binge-like alcohol exposure during the first trimester. These results also indicate that maternal choline supplementation comparable to doses in human studies fails to prevent brain volume reductions typically induced by first-trimester binge alcohol exposure. 
Future analyses will assess behavioral outcomes along with regional brain and neurohistological measures. PMID:27788773

  5. The Peruvian diaspora: portrait of a migratory process.

    PubMed

    Durand, Jorge

    2010-01-01

    Since the 1980s and especially the 1990s, Peru has become a nation of emigrants. Emigration has become massive over the past two decades, and the Peruvian populations of the United States, Japan, and Spain have tripled in less than a decade. A survey of households in five localities, three urban and two rural, in and around Lima helps to reveal the special character of this emigration. It tends to involve older and better-educated individuals than are typical of international migration and to target a wider variety of destinations. Moreover, it is a multiclass phenomenon. The economic, political, and social crisis brought about by a change in the economic model, two decades of terrorism, and a succession of failed democratic administrations has affected the society as a whole, and international migration seems to operate as an escape valve.

  6. Model tropical Atlantic biases underpin diminished Pacific decadal variability

    NASA Astrophysics Data System (ADS)

    McGregor, Shayne; Stuecker, Malte F.; Kajtar, Jules B.; England, Matthew H.; Collins, Mat

    2018-06-01

    Pacific trade winds have displayed unprecedented strengthening in recent decades1. This strengthening has been associated with east Pacific sea surface cooling2 and the early twenty-first-century slowdown in global surface warming2,3, amongst a host of other substantial impacts4-9. Although some climate models produce the timing of these recently observed trends10, they all fail to produce the trend magnitude2,11,12. This may in part be related to the apparent model underrepresentation of low-frequency Pacific Ocean variability and decadal wind trends2,11-13 or be due to a misrepresentation of a forced response1,14-16 or a combination of both. An increasingly prominent connection between the Pacific and Atlantic basins has been identified as a key driver of this strengthening of the Pacific trade winds12,17-20. Here we use targeted climate model experiments to show that combining the recent Atlantic warming trend with the typical climate model bias leads to a substantially underestimated response for the Pacific Ocean wind and surface temperature. The underestimation largely stems from a reduction and eastward shift of the atmospheric heating response to the tropical Atlantic warming trend. This result suggests that the recent Pacific trends and model decadal variability may be better captured by models with improved mean-state climatologies.

  7. Anti-backlash drive systems for multi-degree freedom devices

    DOEpatents

    Tsai, Lung-Wen; Chang, Sun-Lai

    1993-01-01

    A new and innovative concept is disclosed for the control of backlash in gear-coupled transmission mechanisms. The concept utilizes redundant unidirectional drives to assure positive coupling of gear meshes at all times. Based on this concept, a methodology for the enumeration of admissible redundant-drive backlash-free robotic mechanisms has been established. Some typical two- and three-DOF mechanisms are disclosed. Furthermore, actuator torques have been derived as functions of either joint torques or end-effector dynamic performance requirements. A redundantly driven gear-coupled transmission mechanism manipulator has a fail-safe advantage in that, except for the loss of backlash control, it can continue to function when one of its actuators fails. A two-DOF backlash-free arm has been reduced to practice to demonstrate the principle.

  8. Impact of typical rather than nutrient-dense food choices in the US Department of Agriculture Food Patterns.

    PubMed

    Britten, Patricia; Cleveland, Linda E; Koegel, Kristin L; Kuczynski, Kevin J; Nickols-Richardson, Sharon M

    2012-10-01

    The US Department of Agriculture (USDA) Food Patterns, released as part of the 2010 Dietary Guidelines for Americans, are designed to meet nutrient needs without exceeding energy requirements. They identify amounts to consume from each food group and recommend that nutrient-dense forms (lean or low-fat, without added sugars or salt) be consumed. Americans fall short of most food group intake targets and do not consume foods in nutrient-dense forms. Intake of calories from solid fats and added sugars exceeds maximum limits by large margins. Our aim was to determine the potential effect on meeting USDA Food Pattern nutrient adequacy and moderation goals if Americans consumed the recommended quantities from each food group but did not implement the advice to select nutrient-dense forms of food and instead made more typical food choices. Food-pattern modeling analysis using the USDA Food Patterns, which are structured to allow modifications in one or more aspects of the patterns, was used. Nutrient profiles for each food group were modified by replacing each nutrient-dense representative food with a similar but typical choice. Typical nutrient profiles were used to determine the energy and nutrient content of the food patterns. Moderation goals are not met when amounts of food in the USDA Food Patterns are followed and typical rather than nutrient-dense food choices are made. Energy, total fat, saturated fat, and sodium exceed limits in all patterns, often by substantial margins. With typical choices, calories were 15% to 30% (ie, 350 to 450 kcal) above the target calorie level for each pattern. Adequacy goals were not substantially affected by the use of typical food choices. If consumers consume the recommended quantities from each food group and subgroup, but fail to choose foods in low-fat, no-added-sugars, and low-sodium forms, they will not meet the USDA Food Patterns moderation goals or the 2010 Dietary Guidelines for Americans. 
Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  9. U.S. Patent Pending, Cyberspace Security System for Complex Systems, U.S. Patent Application No.: 14/134,949

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    A computer implemented method monetizes the security of a cyber-system in terms of the losses each stakeholder may expect if a security breakdown occurs. A non-transitory medium stores instructions for generating a stakes structure that includes the costs each stakeholder of a system would incur if the system failed to meet security requirements, and for generating a requirement structure that includes the probabilities of failing requirements when computer components fail. The system generates a vulnerability model that includes the probabilities of a component failing given threats materializing, and generates a perpetrator model that includes the probabilities of threats materializing. The system generates a dot product of the stakes structure, the requirement structure, the vulnerability model and the perpetrator model. The system can further be used to compare, contrast and evaluate alternative courses of action best suited for the stakeholders and their requirements.
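    The chained dot product the patent abstract describes can be sketched with hypothetical numbers (two stakeholders, two requirements, two components, two threats); none of these values come from the patent.

```python
import numpy as np

# Sketch of the monetized-security computation: expected loss per
# stakeholder is the chained product of the stakes, requirement,
# vulnerability, and perpetrator structures. All numbers are hypothetical.

# Stakes: loss ($) to each of 2 stakeholders if each of 2 requirements fails.
stakes = np.array([[100_000.0, 20_000.0],
                   [ 50_000.0, 80_000.0]])

# Requirement structure: P(requirement fails | component fails), 2 reqs x 2 comps.
requirement = np.array([[0.8, 0.1],
                        [0.2, 0.9]])

# Vulnerability model: P(component fails | threat materializes), 2 comps x 2 threats.
vulnerability = np.array([[0.05, 0.30],
                          [0.20, 0.02]])

# Perpetrator model: P(threat materializes) per unit time, per threat.
perpetrator = np.array([0.10, 0.05])

# Chained dot product: expected loss ($) per stakeholder per unit time.
expected_loss = stakes @ requirement @ vulnerability @ perpetrator
print(expected_loss)
```

    Re-running the product with alternative vulnerability or perpetrator entries is how such a model would compare courses of action: the option that most reduces each stakeholder's expected loss wins.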

  10. Dynamics of water confined in lyotropic liquid crystals: Molecular dynamics simulations of the dynamic structure factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantha, Sriteja; Yethiraj, Arun

    2016-02-24

    The properties of water under confinement are of practical and fundamental interest. In this work we study the properties of water in the self-assembled lyotropic phases of gemini surfactants, with a focus on testing the standard analysis of quasi-elastic neutron scattering (QENS) experiments. In QENS experiments the dynamic structure factor is measured and fit to models to extract the translational diffusion constant, D_T, and the rotational relaxation time, tau_R. We test this procedure by using simulation results for the dynamic structure factor, extracting the dynamic parameters from the fit as is typically done in experiments, and comparing the values to those directly measured in the simulations. We find that the decoupling approximation, where the intermediate scattering function is assumed to be a product of translational and rotational contributions, is quite accurate. The jump-diffusion and isotropic rotation models, however, are not accurate when the degree of confinement is high. In particular, the exponential approximations for the intermediate scattering function fail for highly confined water, and the values of D_T and tau_R can differ from the measured values by as much as a factor of two. Other models have more fit parameters, however, and within the range of energies and wave vectors accessible to QENS, the typical analysis appears to be the best choice. In the most confined lamellar phase, the dynamics are sufficiently slow that QENS does not access a large enough time scale, and neutron spin echo measurements would be a valuable technique in addition to QENS.
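    The standard analysis this abstract tests can be illustrated with a toy version: under the decoupling approximation with simple translational diffusion and single-exponential rotation, the decay rate of the intermediate scattering function is linear in q^2, with slope D_T and intercept 1/tau_R. All parameter values below are hypothetical and the units arbitrary.

```python
import numpy as np

# Toy QENS-style analysis: with
#   F(q, t) ~ exp(-(q^2 * D_T + 1/tau_R) * t),
# the decay rate vs q^2 is a line whose slope is D_T and whose
# intercept is 1/tau_R. Parameters are hypothetical; units arbitrary.
D_T_true, tau_R_true = 0.20, 2.5
qs = np.array([0.4, 0.8, 1.2, 1.6])
t = np.linspace(0.05, 5.0, 100)

rates = []
for q in qs:
    F = np.exp(-(q**2 * D_T_true + 1.0 / tau_R_true) * t)
    rates.append(-np.polyfit(t, np.log(F), 1)[0])  # log-linear decay rate

slope, intercept = np.polyfit(qs**2, rates, 1)
print(f"D_T = {slope:.3f}, tau_R = {1 / intercept:.2f}")  # recovers 0.200, 2.50
```

    The paper's point is that for highly confined water the exponential form assumed here breaks down, so the extracted D_T and tau_R can be badly biased even though the fit itself looks clean.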

  11. Bidirectional Associations Between Externalizing Behavior Problems and Maladaptive Parenting Within Parent-Son Dyads Across Childhood

    PubMed Central

    Loeber, Rolf; Hinshaw, Stephen P.; Pardini, Dustin A.

    2018-01-01

    Coercive parent–child interaction models posit that an escalating cycle of negative, bidirectional interchanges influences the development of boys’ externalizing problems and caregivers’ maladaptive parenting over time. However, longitudinal studies examining this hypothesis have been unable to rule out the possibility that between-individual factors account for bidirectional associations between child externalizing problems and maladaptive parenting. Using a longitudinal sample of boys (N = 503) repeatedly assessed eight times across 6-month intervals in childhood (in a range between 6 and 13 years), the current study is the first to use novel within-individual change (fixed effects) models to examine whether parents tend to increase their use of maladaptive parenting strategies following an increase in their son’s externalizing problems, or vice versa. These bidirectional associations were examined using multiple facets of externalizing problems (i.e., interpersonal callousness, conduct and oppositional defiant problems, hyperactivity/impulsivity) and parenting behaviors (i.e., physical punishment, involvement, parent–child communication). Analyses failed to support the notion that when boys increase their typical level of problem behaviors, their parents show an increase in their typical level of maladaptive parenting across the subsequent 6 month period, and vice versa. Instead, across 6-month intervals, within parent-son dyads, changes in maladaptive parenting and child externalizing problems waxed and waned in concert. Fixed effects models to address the topic of bidirectional relations between parent and child behavior are severely underrepresented. We recommend that other researchers who have found significant bidirectional parent–child associations using rank-order change models reexamine their data to determine whether these findings hold when examining changes within parent–child dyads. PMID:26780209

  13. Nonlinear evolution of f(R) cosmologies. II. Power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyaizu, Hiroaki; Hu, Wayne; Department of Astronomy and Astrophysics, University of Chicago, Chicago Illinois 60637

    2008-12-15

    We carry out a suite of cosmological simulations of modified action f(R) models where cosmic acceleration arises from an alteration of gravity instead of dark energy. These models introduce an extra scalar degree of freedom which enhances the force of gravity below the inverse mass or Compton scale of the scalar. The simulations exhibit the so-called chameleon mechanism, necessary for satisfying local constraints on gravity, where this scale depends on environment, in particular the depth of the local gravitational potential. We find that the chameleon mechanism can substantially suppress the enhancement of the power spectrum in the nonlinear regime if the background field value is comparable to or smaller than the depth of the gravitational potentials of typical structures. Nonetheless, power spectrum enhancements at intermediate scales remain at a measurable level even for models whose expansion history is indistinguishable from a cosmological constant, cold dark matter model. Simple scaling relations that take the linear power spectrum into a nonlinear spectrum fail to capture the modifications of f(R) due to the change in collapsed structures, the chameleon mechanism, and the time evolution of the modifications.

  14. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day where the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time-averaging techniques typically applied in financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble-averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail at this task. Our work indicates that ensemble-average approaches will yield new insight into the study of financial market dynamics, and the proposed model offers a new approach to modeling those dynamics on microscopic time scales.
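    The ensemble-versus-time-average point can be demonstrated with a toy process with nonstationary increments; the Hurst-like exponent and all other parameters below are hypothetical, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy demonstration: for a process with nonstationary increments,
# ensemble averages across many realizations at fixed clock time
# recover the time-dependent variance scaling Var[x(t)] ~ t^(2H),
# which a sliding time average over one trajectory would mix up.
# All parameters are hypothetical.
H, n_paths, n_steps, dt = 0.35, 5000, 200, 0.01
t = dt * np.arange(1, n_steps + 1)

# Independent Gaussian increments whose variances sum to t^(2H) exactly.
incr_var = np.diff(np.concatenate(([0.0], t ** (2 * H))))
paths = np.cumsum(np.sqrt(incr_var) * rng.standard_normal((n_paths, n_steps)),
                  axis=1)

# Ensemble average of x(t)^2 at fixed t recovers the t^(2H) scaling.
var_ens = (paths ** 2).mean(axis=0)
slope = np.polyfit(np.log(t), np.log(var_ens), 1)[0]
print(f"ensemble scaling exponent = {slope:.2f} (true 2H = {2 * H:.2f})")
```

    A time average of squared increments along a single path pools increments drawn at different clock times, so it cannot separate the t-dependence of the diffusion from the noise, which is the failure mode the paper describes.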

  15. Approaches to biofilm-associated infections: the need for standardized and relevant biofilm methods for clinical applications.

    PubMed

    Malone, Matthew; Goeres, Darla M; Gosbell, Iain; Vickery, Karen; Jensen, Slade; Stoodley, Paul

    2017-02-01

    The concept of biofilms in human health and disease is now widely accepted, with biofilms recognized as a cause of chronic infection. Typically, biofilms show remarkable tolerance to many forms of treatment and to the host immune response. This has led to a vast increase in research to identify new (and sometimes old) anti-biofilm strategies that demonstrate effectiveness against these tolerant phenotypes. Areas covered: Unfortunately, a standardized methodological approach to biofilm models has not been adopted, leading to a large disparity between testing conditions. This has made it almost impossible to compare data across multiple laboratories, leaving large gaps in the evidence. Furthermore, many biofilm models testing anti-biofilm strategies aimed at the medical arena have not considered relevance to an intended application. This may explain why some in vitro models based on methodological designs that do not consider relevance to an intended application fail when applied in vivo at the clinical level. Expert commentary: This review explores the issues that need to be considered in developing performance standards for anti-biofilm therapeutics and provides a rationale for the need to standardize models and methods that are clinically relevant. We also provide some rationale as to why no standards currently exist.

  16. Prometheus payment model: application to hip and knee replacement surgery.

    PubMed

    Rastogi, Amita; Mohr, Beth A; Williams, Jeffery O; Soobader, Mah-Jabeen; de Brantes, Francois

    2009-10-01

    The Prometheus Payment Model offers a potential solution to the failings of the current fee-for-service system and of various forms of capitation. At the core of the Prometheus model are evidence-informed case rates (ECRs), which include a bundle of typical services informed by evidence and/or expert opinion as well as empirical data analysis, payment based on patient severity, and allowances for potentially avoidable complications (PACs) and other provider-specific variations in payer costs. We outline the methods and findings of the hip and knee arthroplasty ECRs with an emphasis on PACs. Of the 2076 commercially insured patients undergoing hip arthroplasty in our study, PAC costs totaled $7.8 million (14% of total costs; n = 699 index PAC stays). Similarly, PAC costs were $12.7 million (14% of total costs; n = 897 index PAC stays) for the 3403 patients undergoing knee arthroplasty. By holding providers clinically and financially responsible for PACs, and by segmenting and quantifying the type of PACs generated during and after the procedure, the Prometheus model creates an opportunity for providers to focus on the reduction of PACs, including readmissions, making the data actionable and turning the waste related to PAC costs into potential savings.

  17. Discrete Gust Model for Launch Vehicle Assessments

    NASA Technical Reports Server (NTRS)

    Leahy, Frank B.

    2008-01-01

    Analysis of spacecraft vehicle responses to atmospheric wind gusts during flight is important in establishing vehicle structural design requirements and operational capability. Typically, wind gust models are either of a spectral type, determined by a random process having a wide range of wavelengths, or of a discrete type, having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that uses a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Previous studies could therefore only estimate a value of gust magnitude from the curve or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve and provides the information necessary to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability of exceeding a specified gust magnitude.
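    The "1-cosine" gust shape referred to above is a standard discrete-gust profile and can be written down directly. Note that the non-dimensional magnitude curve the paper develops from the Dryden spectrum is not reproduced here; the peak magnitude and half-wavelength below are illustrative values only.

```python
import numpy as np

def one_cosine_gust(x, v_m, d_m):
    """Discrete '1-cosine' gust: velocity rises from 0 to the peak v_m over
    one half-wavelength d_m and returns to 0 over the next (total length
    2*d_m); zero outside that interval."""
    x = np.asarray(x, dtype=float)
    v = 0.5 * v_m * (1.0 - np.cos(np.pi * x / d_m))
    return np.where((x >= 0.0) & (x <= 2.0 * d_m), v, 0.0)

x = np.linspace(0.0, 300.0, 601)                 # penetration distance, m
profile = one_cosine_gust(x, v_m=9.0, d_m=75.0)  # illustrative magnitude/half-wavelength
```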

  18. An approximate block Newton method for coupled iterations of nonlinear solvers: Theory and conjugate heat transfer applications

    NASA Astrophysics Data System (ADS)

    Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.

    2009-12-01

    A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
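    For context, the block Gauss-Seidel coupling used as the baseline here can be sketched in a few lines. The two scalar "solvers" are hypothetical black boxes standing in for the independent heat transfer codes, and the ABN method itself (which wraps such subproblem solves in a preconditioned Newton iteration) is not reproduced.

```python
# Two hypothetical black-box subdomain solvers: each returns its interface
# value given the other's. Real codes would solve PDEs; these are linear toys.
def solver_a(t_b):
    return 0.5 * t_b + 1.0

def solver_b(t_a):
    return 0.3 * t_a + 2.0

def gauss_seidel_coupling(tol=1e-10, max_iter=100):
    """Block Gauss-Seidel: update A from B's latest interface value, then B
    from A's, until the interface values stop changing."""
    t_a, t_b = 0.0, 0.0
    for it in range(max_iter):
        t_a_new = solver_a(t_b)
        t_b_new = solver_b(t_a_new)       # uses the freshly updated A value
        if abs(t_a_new - t_a) < tol and abs(t_b_new - t_b) < tol:
            return t_a_new, t_b_new, it + 1
        t_a, t_b = t_a_new, t_b_new
    raise RuntimeError("coupling iteration failed to converge")

t_a, t_b, iters = gauss_seidel_coupling()
```

Here the coupling is deliberately weak, so the fixed-point iteration converges; the abstract's point is that for strongly coupled problems this iteration can fail, whereas the ABN method retains quadratic convergence.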

  19. Sparse kernel methods for high-dimensional survival data.

    PubMed

    Evers, Ludger; Messow, Claudia-Martina

    2008-07-15

    Sparse kernel methods like support vector machines (SVMs) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model depends on the covariates only through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea in which, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other obtains a sparse model by adding observations one after another, akin to the Import Vector Machine (IVM). The data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.
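    The "kernelizable" structure mentioned above comes from the fact that the Cox partial likelihood touches the covariates only through the linear predictor. A minimal sketch (plain linear predictor rather than the paper's sparse kernel expansion, and no handling of tied event times):

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, time, event):
    """Negative log partial likelihood of Cox's model (no tie handling).
    It depends on X only through the inner products X @ beta, which is what
    makes the model kernelizable (replace X @ beta by K @ alpha)."""
    eta = X @ beta
    order = np.argsort(-time)                   # sort by descending time
    eta, event = eta[order], np.asarray(event)[order]
    # log of the risk-set sum: running logsumexp over subjects still at risk
    log_risk = np.logaddexp.accumulate(eta)
    return -np.sum((eta - log_risk)[event.astype(bool)])
```

With beta = 0 each event contributes log of the risk-set size, giving a quick sanity check against a hand computation.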

  20. Assessing Glacial Lake Outburst Flood Hazard in the Nepal Himalayas using Satellite Imagery and Hydraulic Models

    NASA Astrophysics Data System (ADS)

    Rounce, D.; McKinney, D. C.

    2015-12-01

    The last half century has witnessed considerable glacier melt that has led to the formation of large glacial lakes. These glacial lakes typically form behind terminal moraines comprising loose boulders, debris, and soil, which are susceptible to failure and can cause a glacial lake outburst flood (GLOF). The lakes also act as a heat sink that accelerates glacier melt, which in many cases is accompanied by rapid areal expansion. As these glacial lakes continue to grow, the hazard they pose also increases, both because of the increase in potential flood volume and because of the lakes' proximity to triggering events such as avalanches and landslides. Despite the large threat these lakes may pose to downstream communities, there are few detailed studies that combine satellite imagery with hydraulic models to present a holistic understanding of the GLOF hazard. The aim of this work is to assess the GLOF hazard of glacial lakes in Nepal using a holistic approach based on a combination of satellite imagery and hydraulic models. Imja Lake will be the primary focus of the modeling efforts, but the methods will be developed in a manner that is transferable to other potentially dangerous glacial lakes in Nepal.

  1. A System of Conservative Regridding for Ice-Atmosphere Coupling in a General Circulation Model (GCM)

    NASA Technical Reports Server (NTRS)

    Fischer, R.; Nowicki, S.; Kelley, M.; Schmidt, G. A.

    2014-01-01

    The method of elevation classes, in which the ice surface model is run at multiple elevations within each grid cell, has proven to be a useful way for a low-resolution atmosphere inside a general circulation model (GCM) to produce high-resolution downscaled surface mass balance fields for use in one-way studies coupling atmospheres and ice flow models. Past uses of elevation classes have failed to conserve mass and energy because the transformation used to regrid to the atmosphere was inconsistent with the transformation used to downscale to the ice model. This would cause problems for two-way coupling. A strategy that resolves this conservation issue has been designed and is presented here. The approach identifies three grids between which data must be regridded and five transformations between those grids required by a typical coupled atmosphere-ice flow model. This paper develops a theoretical framework for the problem and shows how each of these transformations may be achieved in a consistent, conservative manner. These transformations are implemented in Glint2, a library used to couple atmosphere models with ice models. Source code and documentation are available for download. Confounding real-world issues are discussed, including the use of projections for ice modeling, how to handle dynamically changing ice geometry, and modifications required for finite element ice models.
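    The conservation requirement at the heart of the paper can be illustrated with a first-order conservative regridding sketch in one dimension (a toy stand-in, not Glint2's actual transformations): the regridding matrix holds cell-overlap lengths, so the grid-integrated quantity is preserved exactly by construction.

```python
import numpy as np

# Hypothetical 1-D source and destination grids sharing the same extent.
src_edges = np.array([0.0, 1.0, 2.0, 4.0])
dst_edges = np.array([0.0, 0.5, 2.5, 4.0])

def overlap_matrix(src, dst):
    """M[i, j] = length of overlap between destination cell i and source
    cell j; applying M to cell means yields destination cell integrals."""
    M = np.zeros((len(dst) - 1, len(src) - 1))
    for i in range(len(dst) - 1):
        for j in range(len(src) - 1):
            M[i, j] = max(0.0, min(dst[i + 1], src[j + 1]) - max(dst[i], src[j]))
    return M

M = overlap_matrix(src_edges, dst_edges)
field_src = np.array([3.0, 1.0, 2.0])      # cell-mean values on the source grid
src_widths = np.diff(src_edges)
dst_widths = np.diff(dst_edges)
field_dst = (M @ field_src) / dst_widths   # conservative destination cell means
```

Because the same overlap weights define both directions of the transfer, regridding and downscaling stay mutually consistent, which is the inconsistency the paper identifies in earlier elevation-class schemes.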

  2. Acoustic and Perceptual Measurements of Prosody Production on the Profiling Elements of Prosodic Systems in Children by Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Diehl, Joshua John; Paul, Rhea

    2013-01-01

    Prosody production atypicalities are a feature of autism spectrum disorders (ASDs), but behavioral measures of performance have failed to provide detail on the properties of these deficits. We used acoustic measures of prosody to compare children with ASDs to age-matched groups with learning disabilities and typically developing peers. Overall,…

  3. Development of Bushing Compounds for Tracked Vehicles

    DTIC Science & Technology

    1990-10-01

    ...unwanted stepchild - part of the family (system), but devoid of needed... a patented NBR formulation indicated that service life could be... the Development and Engineering Center's Rubber and Coated Fabrics Research Group... Bushings currently used in the M1 track assembly typically fail... should be improved. Numerous selected candidate formulations based on natural rubber, propylene oxide, and silicone...

  4. Are Students Blind to Their Ethical Blind Spots? An Exploration of Why Ethics Education Should Focus on Self-Perception Biases

    ERIC Educational Resources Information Center

    Tomlin, Kathleen A.; Metzger, Matthew L.; Bradley-Geist, Jill; Gonzalez-Padron, Tracy

    2017-01-01

    Ethics blind spots, which have become a keystone of the emerging behavioral ethics literature, are essentially biases, heuristics, and psychological traps. Though students typically recognize that ethical challenges exist in the world at large, they often fail to see when they are personally prone to ethics blind spots. This creates an obstacle…

  5. Curriculum on the Edge of Survival: How Schools Fail to Prepare Students for Membership in a Democracy

    ERIC Educational Resources Information Center

    Heller, Daniel

    2007-01-01

    Typically, school curriculum has been viewed through the lens of preparation for the workplace or higher education--both worthy objectives. However, this is not the only lens, and perhaps not even the most powerful one to use, if the goal is to optimize the educational system. "Curriculum on the Edge of Survival" attempts to define basic aspects…

  6. High Reliability Engine Control Demonstrated for Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    1999-01-01

    For a dual redundant-control system, which is typical for short-haul aircraft, if a failure is detected in a control sensor, the engine control is transferred to a safety mode and an advisory is issued for immediate maintenance action to replace the failed sensor. The safety mode typically results in severely degraded engine performance. The goal of the High Reliability Engine Control (HREC) program was to demonstrate that the neural-network-based sensor validation technology can safely operate an engine by using the nominal closed-loop control during and after sensor failures. With this technology, engine performance could be maintained, and the sensor could be replaced as a conveniently scheduled maintenance action.
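    The idea of validating a sensor from its companions (analytical redundancy) can be sketched simply; the least-squares estimator, threshold, and data below are illustrative stand-ins, not the HREC program's neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: three healthy companion sensors and the sensor
# to be validated, related by an (unknown) linear map plus small noise.
others = rng.normal(size=(200, 3))
target = others @ np.array([0.8, -0.5, 0.3]) + 0.01 * rng.normal(size=200)
coef, *_ = np.linalg.lstsq(others, target, rcond=None)

def validate(measured, companions, threshold=0.1):
    """If the measured value strays too far from the estimate implied by the
    companion sensors, declare the sensor failed and substitute the estimate
    so the nominal closed-loop control can keep running."""
    estimate = companions @ coef
    failed = abs(measured - estimate) > threshold
    return (estimate if failed else measured), failed
```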

  7. Feeding behaviour in a 'basal' tortoise provides insights on the transitional feeding mode at the dawn of modern land turtle evolution.

    PubMed

    Natchev, Nikolay; Tzankov, Nikolay; Werneburg, Ingmar; Heiss, Egon

    2015-01-01

    Almost all extant testudinids are highly associated with terrestrial habitats, and the few tortoises with high affinity for aquatic environments are found within the genus Manouria. Manouria belongs to a clade which forms a sister taxon to all remaining tortoises and is suitable as a model for studying evolutionary transitions within modern turtles. We analysed the feeding behaviour of Manouria emys and, given its phylogenetic position, we hypothesise that the species might have retained some ancestral features associated with an aquatic lifestyle. We tested whether M. emys is able to feed in both aquatic and terrestrial environments. In fact, M. emys repeatedly tried to reach submerged food items in water but always failed to grasp them; no suction feeding mechanism was applied. When feeding on land, M. emys showed another peculiar behaviour: it grasped food items with its jaws, a behaviour typical of aquatic or semiaquatic turtles, and not with the tongue, as is generally accepted to be the typical feeding mode in all tortoises studied so far. In M. emys, the hyolingual complex remained retracted during all food uptake sequences, but food transport was entirely lingual based. The kinematic profiles differed significantly from those described for other tortoises and from those proposed by general models of the function of feeding systems in lower tetrapods. We conclude that the feeding behaviour of M. emys might reflect a remnant of the primordial condition expected in the aquatic ancestor of tortoises.

  8. Avian movements and wetland connectivity in landscape conservation

    USGS Publications Warehouse

    Haig, Susan M.; Mehlman, D.W.; Oring, L.W.

    1998-01-01

    The current conservation crisis calls for research and management to be carried out on a long-term, multi-species basis at large spatial scales. Unfortunately, scientists, managers, and agencies often are stymied in their effort to conduct these large-scale studies because of a lack of appropriate technology, methodology, and funding. This issue is of particular concern in wetland conservation, for which the standard landscape approach may include consideration of a large tract of land but fail to incorporate the suite of wetland sites frequently used by highly mobile organisms such as waterbirds (e.g., shorebirds, wading birds, waterfowl). Typically, these species have population dynamics that require use of multiple wetlands, but this aspect of their life history has often been ignored in planning for their conservation. We outline theoretical, empirical, modeling, and planning problems associated with this issue and suggest solutions to some current obstacles. These solutions represent a tradeoff between typical in-depth single-species studies and more generic multi-species studies. They include studying within- and among-season movements of waterbirds on a spatial scale appropriate to both widely dispersing and more stationary species; multi-species censuses at multiple sites; further development and use of technology such as satellite transmitters and population-specific molecular markers; development of spatially explicit population models that consider within-season movements of waterbirds; and recognition from funding agencies that landscape-level issues cannot adequately be addressed without support for these types of studies.

  9. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    USGS Publications Warehouse

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.

  10. A mass spectrometry proteomics data management platform.

    PubMed

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.

  11. Predicting on-road assessment pass and fail outcomes in older drivers with cognitive impairment using a battery of computerized sensory-motor and cognitive tests.

    PubMed

    Hoggarth, Petra A; Innes, Carrie R H; Dalrymple-Alford, John C; Jones, Richard D

    2013-12-01

    Objectives: To generate a robust model of computerized sensory-motor and cognitive test performance to predict on-road driving assessment outcomes in older persons with diagnosed or suspected cognitive impairment. Design: A logistic regression model classified pass–fail outcomes of a blinded on-road driving assessment; generalizability of the model was tested using leave-one-out cross-validation. Setting: Three specialist clinics in New Zealand. Participants: Drivers (n=279; mean age 78.4, 65% male) with diagnosed or suspected dementia, mild cognitive impairment, unspecified cognitive impairment, or memory problems referred for a medical driving assessment. Measurements: A computerized battery of sensory-motor and cognitive tests and an on-road medical driving assessment. Results: One hundred fifty-five participants (55.5%) received an on-road fail score. Binary logistic regression correctly classified 75.6% of the sample into on-road pass and fail groups. Cross-validation indicated model accuracy of 72.0%, with sensitivity for detecting on-road fails of 73.5%, specificity of 70.2%, positive predictive value of 75.5%, and negative predictive value of 68%. Conclusion: The off-road prediction model classified a substantial number of people as likely to fail who passed the on-road assessment, and vice versa. Thus, despite a large multicenter sample, the use of off-road tests previously found to be useful in other older populations, and a carefully constructed and tested prediction model, off-road measures have yet to be found that are sufficiently accurate to allow acceptable determination of the on-road driving safety of cognitively impaired older drivers. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
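    The study's procedure (logistic classification of pass/fail with leave-one-out cross-validation, then sensitivity and specificity from the held-out predictions) can be sketched on synthetic stand-in data. The features, the Newton/IRLS fitting routine, and all numbers below are illustrative, not the study's battery or coefficients.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25, ridge=1e-6):
    """Logistic regression fit by Newton's method (IRLS), intercept included."""
    A = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        H = A.T @ (A * (p * (1 - p))[:, None]) + ridge * np.eye(A.shape[1])
        w = w + np.linalg.solve(H, A.T @ (y - p))
    return w

def predict(w, X):
    A = np.hstack([np.ones((len(X), 1)), X])
    return (A @ w >= 0.0).astype(int)          # threshold at p = 0.5

rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 4))                    # stand-in off-road test scores
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))).astype(int)

pred = np.empty(n, dtype=int)                  # leave-one-out cross-validation
for i in range(n):
    mask = np.arange(n) != i
    pred[i] = predict(fit_logistic(X[mask], y[mask]), X[i:i + 1])[0]

sensitivity = (pred[y == 1] == 1).mean()       # held-out detection of "fail"
specificity = (pred[y == 0] == 0).mean()
```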

  12. Investigating Individual Differences in Toddler Search with Mixture Models

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…

  13. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used.
The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
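    The distributions named above are standard and easy to state concretely; the reliability and mission-count values below are illustrative only.

```python
from math import comb, exp, factorial

def binomial_at_least(k, n, p):
    """P(X >= k), X ~ Binomial(n, p): the greater-than-or-equal-to case."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def poisson_at_most(k, lam):
    """P(X <= k), X ~ Poisson(lam): the less-than-or-equal-to case."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

# Zero-fail case: probability that all n missions succeed when each has
# reliability R, i.e. the binomial P(X >= n) with p = R, which reduces to R**n.
R, n = 0.99, 10
p_zero_fail = binomial_at_least(n, n, R)
```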

  14. Cascading Failures in Bi-partite Graphs: Model for Systemic Risk Propagation

    PubMed Central

    Huang, Xuqing; Vodenska, Irena; Havlin, Shlomo; Stanley, H. Eugene

    2013-01-01

    As economic entities become increasingly interconnected, a shock in a financial network can provoke significant cascading failures throughout the system. To study the systemic risk of financial systems, we create a bi-partite banking network model composed of banks and bank assets and propose a cascading failure model to describe the risk propagation process during crises. We empirically test the model with 2007 balance-sheet data for US commercial banks and compare the banks the model predicts to fail with those that actually failed after 2007. We find that our model efficiently identifies a significant portion of the actual failed banks reported by the Federal Deposit Insurance Corporation. The results suggest that this model could be useful for systemic risk stress testing of financial systems. The model also identifies commercial rather than residential real estate assets as the major culprits in the failure of over 350 US commercial banks during 2008–2011. PMID:23386974
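    The propagation logic can be illustrated with a toy bank-asset network; the holdings, equity buffers, and write-down fractions are hypothetical, not the paper's calibrated model. An asset shock fails the weakest bank, whose fire-sale write-downs can push further banks below their equity buffers in later rounds.

```python
# Hypothetical bank -> {asset: exposure} holdings and equity buffers.
holdings = {
    "A": {"cre": 6.0, "bonds": 4.0},
    "B": {"cre": 1.0, "bonds": 9.0},
    "C": {"cre": 8.0, "bonds": 2.0},
}
equity = {"A": 2.0, "B": 2.0, "C": 1.0}

def cascade(shocked_asset, devaluation, fire=0.05):
    """Devalue one asset class; any bank whose loss exceeds its equity fails,
    and each failure writes down the failed bank's asset classes for the
    survivors (a crude fire-sale effect), possibly triggering further rounds."""
    failed = set()
    loss = {b: devaluation * holdings[b].get(shocked_asset, 0.0) for b in holdings}
    while True:
        newly = {b for b in holdings if b not in failed and loss[b] > equity[b]}
        if not newly:
            return failed
        failed |= newly
        for b in newly:
            for asset in holdings[b]:
                for other in holdings:
                    if other not in failed:
                        loss[other] += fire * holdings[other].get(asset, 0.0)
```

A small shock is absorbed, while a larger one fails bank C directly and bank A through the fire-sale round, mirroring the paper's round-by-round propagation.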

  15. Anti-backlash drive systems for multi-degree freedom devices

    DOEpatents

    Tsai, Lungwen; Chang, Sunlai

    1993-09-14

    A new and innovative concept is described for the control of backlash in gear-coupled transmission mechanisms. The concept utilizes redundant unidirectional drives to assure positive coupling of gear meshes at all times. Based on this concept, a methodology for the enumeration of admissible redundant-drive backlash-free robotic mechanisms has been established. Some typical two- and three-DOF mechanisms are disclosed. Furthermore, actuator torques have been derived as functions of either joint torques or end-effector dynamic performance requirements. A redundantly driven, gear-coupled transmission mechanism manipulator has a fail-safe advantage in that, except for the loss of backlash control, it can continue to function when one of its actuators fails. A two-DOF backlash-free arm has been reduced to practice to demonstrate the principle. 20 figures.

  16. Cloud life cycle investigated via high resolution and full microphysics simulations in the surroundings of Manaus, Central Amazonia

    NASA Astrophysics Data System (ADS)

    Pauliquevis, T.; Gomes, H. B.; Barbosa, H. M.

    2014-12-01

    In this study we evaluate the skill of the WRF model in simulating the observed diurnal cycle of convection in the Amazon basin. Models typically are not capable of simulating the well-documented cycle of (1) shallow cumulus in the morning, (2) a towering process around noon, and (3) shallow-to-deep convection and rain around 14:00 local time. This failure is explained by the typical size of shallow cumulus (~0.5-2.0 km) and the coarse resolution of models using convection parameterization (>20 km). In this study we employed high spatial resolution (Δx = 0.625 km) to reach the shallow cumulus scale. The simulations correspond to a dynamical downscaling of ERA-Interim from 25 to 28 February 2013 with 40 vertical levels, 30-minute outputs, and three nested grids (10 km, 2.5 km, 0.625 km). Improved vegetation (USGS + PROVEG), albedo, and green fraction (computed from MODIS-NDVI + the LEAF-2 land surface parameterization), as well as pseudo-analyses of soil moisture, were used as input data sets, resulting in more realistic precipitation fields when compared to observations in sensitivity tests. Convective parameterization was switched off for the 2.5/0.625 km grids, where cloud formation was resolved solely by the microphysics module (the WSM6 scheme, which provided better results). Results showed a significantly improved capability of the model to simulate the diurnal cycle. Shallow cumulus began to appear in the first hours of the morning, followed by a towering process that culminated in precipitation in the early afternoon, a behavior well described by observations but rarely obtained in models. Rain volumes were also realistic (~20 mm for single events) when compared to typical events during the period, which is in the core of the wet season. The evolution of the cloud fields also differed with respect to the Amazon River bank, clear evidence of the interaction between the river breeze and the large-scale circulation.

  17. Balance failure in single limb stance due to ankle sprain injury: an analysis of center of pressure using the fractal dimension method.

    PubMed

    Doherty, Cailbhe; Bleakley, Chris; Hertel, Jay; Caulfield, Brian; Ryan, John; Delahunt, Eamonn

    2014-01-01

    Instrumented postural control analysis plays an important role in evaluating the effects of injury on dynamic stability during balance tasks and is often conveyed with measures based on the displacement of the center of pressure (COP) assessed with a force platform. However, the desired outcome of the task is frequently characterized by a loss of dynamic stability secondary to injury. Typically, these failed trials are discarded during research investigations, with the potential loss of informative data pertaining to task success. The novelty of the present study is that COP characteristics of failed trials in injured participants are compared to successful-trial data in another injured group and in a control group, using the fractal dimension (FD) method. Three groups of participants attempted a task of eyes-closed single limb stance (SLS): twenty-nine participants with acute ankle sprain successfully completed the task on their non-injured limb (successful injury group); twenty-eight participants with acute ankle sprain failed their attempt on their injured limb (failed injury group); sixteen participants with no current injury successfully completed the task on their non-dominant limb (successful non-injured group). Between-trials analysis of these groups revealed significant differences in COP trajectory FD (successful injury group: 1.58±0.06; failed injury group: 1.54±0.07; successful non-injured group: 1.64±0.06) with a large effect size (0.27). These findings demonstrate that successful eyes-closed SLS is characterized by a larger FD of the COP path when compared to failed trials, and that injury causes a decrease in COP path FD. Copyright © 2014 Elsevier B.V. All rights reserved.
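    The abstract does not specify which FD estimator was used; box counting is one common choice and conveys the idea: cover the COP trajectory with boxes at several scales and fit the slope of log(occupied boxes) against log(1/box size).

```python
import numpy as np

def box_counting_dimension(xy, n_scales=6):
    """Estimate the fractal dimension of a planar path by box counting:
    count occupied boxes at dyadic box sizes and fit the slope of
    log(count) versus log(1/size). Illustrative estimator only."""
    xy = np.asarray(xy, dtype=float)
    xy = xy - xy.min(axis=0)
    span = xy.max() if xy.max() > 0 else 1.0
    inv_sizes, counts = [], []
    for k in range(1, n_scales + 1):
        nb = 2 ** k                                         # boxes per side
        idx = np.minimum((xy / span * nb).astype(int), nb - 1)
        counts.append(len(np.unique(idx, axis=0)))
        inv_sizes.append(nb / span)
    slope, _ = np.polyfit(np.log(inv_sizes), np.log(counts), 1)
    return slope
```

A straight segment gives an estimate near 1 and a filled planar region near 2, so a meandering COP path falls between, matching the 1.5-1.7 range reported above.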

  18. The Frank-Starling mechanism involves deceleration of cross-bridge kinetics and is preserved in failing human right ventricular myocardium.

    PubMed

    Milani-Nejad, Nima; Canan, Benjamin D; Elnakish, Mohammad T; Davis, Jonathan P; Chung, Jae-Hoon; Fedorov, Vadim V; Binkley, Philip F; Higgins, Robert S D; Kilic, Ahmet; Mohler, Peter J; Janssen, Paul M L

    2015-12-15

    Cross-bridge cycling rate is an important determinant of cardiac output, and its alteration can potentially contribute to reduced output in heart failure patients. Additionally, animal studies suggest that this rate can be regulated by muscle length. The purpose of this study was to investigate cross-bridge cycling rate and its regulation by muscle length under near-physiological conditions in intact right ventricular muscles of nonfailing and failing human hearts. We acquired freshly explanted nonfailing (n = 9) and failing (n = 10) human hearts. All experiments were performed on intact right ventricular cardiac trabeculae (n = 40) at physiological temperature and near the normal heart rate range. The failing myocardium showed the typical heart failure phenotype: a negative force-frequency relationship and β-adrenergic desensitization (P < 0.05), indicating the expected pathological myocardium in the right ventricles. We found that there exists a length-dependent regulation of cross-bridge cycling kinetics in human myocardium. Decreasing muscle length accelerated the rate of cross-bridge reattachment (ktr) in both nonfailing and failing myocardium (P < 0.05) equally; there were no major differences between nonfailing and failing myocardium at each respective length (P > 0.05), indicating that this regulatory mechanism is preserved in heart failure. Length-dependent assessment of twitch kinetics mirrored these findings; normalized dF/dt slowed down with increasing length of the muscle and was virtually identical in diseased tissue. This study shows for the first time that muscle length regulates cross-bridge kinetics in human myocardium under near-physiological conditions and that those kinetics are preserved in the right ventricular tissues of heart failure patients. Copyright © 2015 the American Physiological Society.

  19. Comparison of four methods for deriving hospital standardised mortality ratios from a single hierarchical logistic regression model.

    PubMed

    Mohammed, Mohammed A; Manktelow, Bradley N; Hofer, Timothy P

    2016-04-01

    There is interest in deriving case-mix adjusted standardised mortality ratios so that comparisons between healthcare providers, such as hospitals, can be undertaken in the controversial belief that variability in standardised mortality ratios reflects quality of care. Typically standardised mortality ratios are derived using a fixed effects logistic regression model, without a hospital term in the model. This fails to account for the hierarchical structure of the data - patients nested within hospitals - and so a hierarchical logistic regression model is more appropriate. However, four methods have been advocated for deriving standardised mortality ratios from a hierarchical logistic regression model, but their agreement is not known and neither do we know which is to be preferred. We found significant differences between the four types of standardised mortality ratios because they reflect a range of underlying conceptual issues. The most subtle issue is the distinction between asking how an average patient fares in different hospitals versus how patients at a given hospital fare at an average hospital. Since the answers to these questions are not the same and since the choice between these two approaches is not obvious, the extent to which profiling hospitals on mortality can be undertaken safely and reliably, without resolving these methodological issues, remains questionable. © The Author(s) 2012.
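
    The core SMR idea, observed deaths divided by case-mix-expected deaths, can be illustrated with a toy indirect standardization on simulated data. This sketch uses coarse risk strata instead of a fitted logistic model and does not implement any of the four hierarchical estimators the paper compares; all rates, hospital effects, and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated patients: each has a case-mix risk stratum and a hospital.
n = 5000
hospital = rng.integers(0, 5, n)
stratum = rng.integers(0, 3, n)                    # coarse case-mix strata
base_risk = np.array([0.05, 0.15, 0.40])[stratum]  # stratum mortality risk
# hospital 4 performs worse in this simulation (1.5x mortality)
effect = np.where(hospital == 4, 1.5, 1.0)
died = rng.random(n) < np.clip(base_risk * effect, 0.0, 1.0)

# Expected deaths per hospital under pooled (all-hospital) stratum rates
pooled_rate = np.array([died[stratum == s].mean() for s in range(3)])
smr = {}
for h in range(5):
    mask = hospital == h
    observed = died[mask].sum()
    expected = pooled_rate[stratum[mask]].sum()
    smr[h] = observed / expected                   # SMR = O / E
```

    A hierarchical version would additionally model hospital as a random effect, shrinking noisy SMRs toward 1; the paper's point is that there are several defensible ways to extract an SMR from that model and they disagree.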

  20. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
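
    One of the interval-estimation approaches mentioned, a nonparametric bootstrap around a least-squares Hill fit, can be sketched as follows. The three-parameter Hill form (bottom fixed at zero), the simulated data, and the bootstrap settings are illustrative assumptions, not the paper's actual protocol or ToxCast data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Three-parameter Hill model (bottom fixed at 0)."""
    return top * conc**n / (ac50**n + conc**n)

rng = np.random.default_rng(1)
conc = np.geomspace(0.01, 100, 10)                 # concentration series
y = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0, 4, conc.size)

p0 = [80, 2, 1]
popt, _ = curve_fit(hill, conc, y, p0=p0, bounds=(0, np.inf), maxfev=10000)

# Nonparametric bootstrap: refit on resampled (conc, y) pairs
boot = []
for _ in range(200):
    idx = rng.integers(0, conc.size, conc.size)
    try:
        b, _ = curve_fit(hill, conc[idx], y[idx], p0=p0,
                         bounds=(0, np.inf), maxfev=10000)
        boot.append(b)
    except RuntimeError:
        continue                       # skip non-converged resamples
boot = np.array(boot)
# percentile interval for (top, ac50, n)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
```

    The paper's comparison is between intervals like these, an asymptotic (Wald-type) interval, a second bootstrap variant, and a Bayesian posterior interval, judged by their actual coverage on simulated data.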

  1. Topology Counts: Force Distributions in Circular Spring Networks.

    PubMed

    Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max

    2018-02-09

    Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.

  2. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
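
    The collocation idea, discretize states and controls on a grid and enforce the dynamics as algebraic "defect" constraints inside an optimizer, can be shown on a deliberately tiny problem. This is a trapezoidal-collocation sketch for a double integrator (minimum-effort rest-to-rest maneuver), not SEPSPOT or the authors' tool; the grid size, solver, and problem are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Trapezoidal collocation for x'' = u on t in [0, T]:
# decision vector z = [x_0..x_N, v_0..v_N, u_0..u_N]
N = 20
T = 1.0
h = T / N

def unpack(z):
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def defects(z):
    """Dynamics defects plus boundary conditions (all must equal zero)."""
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h / 2 * (v[1:] + v[:-1])   # x' = v
    dv = v[1:] - v[:-1] - h / 2 * (u[1:] + u[:-1])   # v' = u
    return np.concatenate([dx, dv,
                           [x[0], v[0], x[-1] - 1.0, v[-1]]])

def effort(z):
    """Objective: integrated control effort."""
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)

z0 = np.zeros(3 * (N + 1))
res = minimize(effort, z0, constraints={"type": "eq", "fun": defects},
               method="SLSQP")
x, v, u = unpack(res.x)
```

    A trajectory tool replaces the double integrator with orbital dynamics and additional force models; because the dynamics enter only through the defect function, extending the force model is a local change, which is the robustness/extensibility argument the abstract makes.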

  3. Localization and characterization of X chromosome inversion breakpoints separating Drosophila mojavensis and Drosophila arizonae.

    PubMed

    Cirulli, Elizabeth T; Noor, Mohamed A F

    2007-01-01

    Ectopic exchange between transposable elements or other repetitive sequences along a chromosome can produce chromosomal inversions. As a result, genome sequence studies typically find sequence similarity between corresponding inversion breakpoint regions. Here, we identify and investigate the breakpoint regions of the X chromosome inversion distinguishing Drosophila mojavensis and Drosophila arizonae. We localize one inversion breakpoint to 13.7 kb and localize the other to a 1-Mb interval. Using this localization and assuming microsynteny between Drosophila melanogaster and D. arizonae, we pinpoint likely positions of the inversion breakpoints to windows of less than 3000 bp. These breakpoints define the size of the inversion to approximately 11 Mb. However, in contrast to many other studies, we fail to find significant sequence similarity between the 2 breakpoint regions. The localization of these inversion breakpoints will facilitate future genetic and molecular evolutionary studies in this species group, an emerging model system for ecological genetics.

  4. Effect of first-encounter pretest on pass/fail rates of a clinical skills medical licensure examination.

    PubMed

    Roberts, William L; McKinley, Danette W; Boulet, John R

    2010-05-01

    Due to the high-stakes nature of medical exams it is prudent for test agencies to critically evaluate test data and control for potential threats to validity. For the typical multiple station performance assessments used in medicine, it may take time for examinees to become comfortable with the test format and administrative protocol. Since each examinee in the rotational sequence starts with a different task (e.g., simulated clinical encounter), those who are administered non-scored pretest material on their first station may have an advantage compared to those who are not. The purpose of this study is to investigate whether pass/fail rates are different across the sequence of pretest encounters administered during the testing day. First-time takers were grouped by the sequential order in which they were administered the pretest encounter. No statistically significant difference in fail rates was found between examinees who started with the pretest encounter and those who encountered the pretest encounter later in the sequence. Results indicate that current examination administration protocols do not present a threat to the validity of test score interpretations.

  5. Prediction of particulate loading in exhaust from fabric filter baghouses with one or more failed bags.

    PubMed

    Qin, Wenjun; Dekermenjian, Manuel; Martin, Richard J

    2006-08-01

    Loss of filtration efficiency in a fabric filter baghouse is typically caused by bag failure, in one form or another. The degree of such failure can be as minor as a pinhole leak or as major as a fully involved baghouse fire. In some cases, local air pollution regulations or federal hazardous waste laws may require estimation of the total quantity of particulate matter released to the environment as a result of such failures. In this paper, a technique is presented for computing the dust loading in the baghouse exhaust when one or more bags have failed. The algorithm developed is shown to be an improvement over a previously published result, which requires empirical knowledge of the variation in baghouse pressure differential with bag failures. An example calculation is presented for a baghouse equipped with 200 bags. The prediction shows that a small percentage of failed bags can cause a relatively large proportion of the gas flow to bypass the active bags, which, in turn, leads to high outlet dust loading and low overall collection efficiency from the baghouse.
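
    The flow-bypass effect described above can be illustrated with a parallel-conductance toy model: bags share the same pressure drop, so each bag's flow is dP/R, and failed bags with much lower resistance carry a disproportionate share of gas. This is not the paper's algorithm, and every resistance, flow, and efficiency below is a made-up illustrative value.

```python
def baghouse_outlet_loading(n_total, n_failed, r_intact, r_failed,
                            q_total, c_inlet, eta_intact):
    """Outlet dust concentration for a baghouse with some failed bags.

    r_*        flow resistance of one bag (pressure drop per unit flow)
    q_total    total gas flow; c_inlet inlet dust concentration
    eta_intact fractional collection efficiency of an intact bag
    """
    n_intact = n_total - n_failed
    # conductances add in parallel: Q = dP * (n_i / R_i + n_f / R_f)
    g = n_intact / r_intact + n_failed / r_failed
    dp = q_total / g
    q_failed = n_failed * dp / r_failed      # flow bypassing filtration
    q_intact = q_total - q_failed
    dust_out = q_intact * c_inlet * (1 - eta_intact) + q_failed * c_inlet
    return dust_out / q_total                # outlet concentration

# 200 bags, 2 failed; failed bags ~100x less resistive than intact ones
c_out = baghouse_outlet_loading(200, 2, r_intact=100.0, r_failed=1.0,
                                q_total=1000.0, c_inlet=10.0,
                                eta_intact=0.999)
```

    With these numbers the 1% of failed bags carry roughly half the gas flow, so the outlet concentration approaches half the inlet value, mirroring the abstract's conclusion that a small percentage of failed bags produces high outlet loading.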

  6. Curriculum on the Edge of Survival: How Schools Fail to Prepare Students for Membership in a Democracy. 2nd Edition

    ERIC Educational Resources Information Center

    Heller, Daniel

    2012-01-01

    Typically, school curriculum has been viewed through the lens of preparation for the workplace or higher education, both worthy objectives. However, this is not the only lens, and perhaps not even the most powerful one to use, if the goal is to optimize the educational system. "Curriculum on the Edge of Survival, 2nd Edition," attempts to define…

  7. Chick loss from mixed broods reflects severe nestmate competition between an evictor brood parasite and its hosts.

    PubMed

    Moskát, Csaba; Hauber, Márk E

    2010-03-01

    Hatchlings of the obligate brood parasite common cuckoo Cuculus canorus typically evict eggs and nestmates but, rarely, host and parasite nestlings may grow up together. As part of previous experiments, we manipulated host clutches by inducing two great reed warbler Acrocephalus arundinaceus and one parasite young to share a nest from 4 days posthatch, when the cuckoo's eviction behaviour is thought to cease. We documented that in mixed broods typically at least one nestling eventually fell out of the nest during the period of 5-10 days posthatch. In 83% of nests one or two host chicks disappeared, and in 17% of nests parasite chicks were lost. All nestlings remained in control broods of three hosts or one parasite. These results imply strong physical competition for space in mixed broods. We suggest that continued foster care for parasitized broods may occasionally be beneficial because host nestlings have some chance to escape the costs of parasitism, even when their parents fail to reject the parasite's egg and the parasite hatchling fails to evict nestmates. Conversely, evictor parasite chicks benefit not only through improved growth, as reported before, but also through the elimination of nestmate competition for space and the risk of displacement from mixed broods. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  8. The Objective Borderline Method (OBM): A Probability-Based Model for Setting up an Objective Pass/Fail Cut-Off Score in Medical Programme Assessments

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-01-01

    The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…

  9. The Typical General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Turnbull, Andrew

    1999-01-01

    The reliability of General Aviation aircraft is unknown. In order to "assist the development of future GA reliability and safety requirements", a reliability study needs to be performed. Before any studies on General Aviation aircraft reliability begin, a definition of a typical aircraft that encompasses most of the general aviation characteristics needs to be defined. In this report, not only is the typical general aviation aircraft defined for the purpose of the follow-on reliability study, but it is also separated, or "sifted", into several different categories where individual analysis can be performed on the reasonably independent systems. In this study, the typical General Aviation aircraft is a four-place, single engine piston, all aluminum fixed-wing certified aircraft with a fixed tricycle landing gear and a cable operated flight control system. The system breakdown of a GA aircraft "sifts" the aircraft systems and components into five categories: Powerplant, Airframe, Aircraft Control Systems, Cockpit Instrumentation Systems, and the Electrical Systems. This breakdown was performed along the lines of a failure of the system. Any component that caused a system to fail was considered a part of that system.

  10. Visual perspective taking impairment in children with autistic spectrum disorder.

    PubMed

    Hamilton, Antonia F de C; Brindley, Rachel; Frith, Uta

    2009-10-01

    Evidence from typical development and neuroimaging studies suggests that level 2 visual perspective taking - the knowledge that different people may see the same thing differently at the same time - is a mentalising task. Thus, we would expect children with autism, who fail typical mentalising tasks like false belief, to perform poorly on level 2 visual perspective taking as well. However, prior data on this issue are inconclusive. We re-examined this question, testing a group of 23 young autistic children, aged around 8 years with a verbal mental age of around 4 years, and three groups of typical children (n=60) ranging in age from 4 to 8 years on a level 2 visual perspective task and a closely matched mental rotation task. The results demonstrate that autistic children have difficulty with visual perspective taking compared to a task requiring mental rotation, relative to typical children. Furthermore, performance on the level 2 visual perspective taking task correlated with theory of mind performance. These findings resolve discrepancies in previous studies of visual perspective taking in autism, and demonstrate that level 2 visual perspective taking is a mentalising task.

  11. Where and why do models fail? Perspectives from Oregon Hydrologic Landscape classification

    EPA Science Inventory

    A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...

  12. Managing future Gulf War Syndromes: international lessons and new models of care

    PubMed Central

    Engel, Charles C; Hyams, Kenneth C; Scott, Ken

    2006-01-01

    After the 1991 Gulf War, veterans of the conflict from the United States, United Kingdom, Canada, Australia and other nations described chronic idiopathic symptoms that became popularly known as ‘Gulf War Syndrome’. Nearly 15 years later, some 250 million dollars in United States medical research has failed to confirm a novel war-related syndrome and controversy over the existence and causes of idiopathic physical symptoms has persisted. Wartime exposures implicated as possible causes of subsequent symptoms include oil well fire smoke, infectious diseases, vaccines, chemical and biological warfare agents, depleted uranium munitions and post-traumatic stress disorder. Recent historical analyses have identified controversial idiopathic symptom syndromes associated with nearly every modern war, suggesting that war typically sets into motion interrelated physical, emotional and fiscal consequences for veterans and for society. We anticipate future controversial war syndromes and maintain that a population-based approach to care can mitigate their impact. This paper delineates essential features of the model, describes its public health and scientific underpinnings and details how several countries are trying to implement it. With troops returning from combat in Afghanistan, Iraq and elsewhere, the model is already getting put to the test. PMID:16687273

  13. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  14. Managing future Gulf War Syndromes: international lessons and new models of care.

    PubMed

    Engel, Charles C; Hyams, Kenneth C; Scott, Ken

    2006-04-29

    After the 1991 Gulf War, veterans of the conflict from the United States, United Kingdom, Canada, Australia and other nations described chronic idiopathic symptoms that became popularly known as 'Gulf War Syndrome'. Nearly 15 years later, some 250 million dollars in United States medical research has failed to confirm a novel war-related syndrome and controversy over the existence and causes of idiopathic physical symptoms has persisted. Wartime exposures implicated as possible causes of subsequent symptoms include oil well fire smoke, infectious diseases, vaccines, chemical and biological warfare agents, depleted uranium munitions and post-traumatic stress disorder. Recent historical analyses have identified controversial idiopathic symptom syndromes associated with nearly every modern war, suggesting that war typically sets into motion interrelated physical, emotional and fiscal consequences for veterans and for society. We anticipate future controversial war syndromes and maintain that a population-based approach to care can mitigate their impact. This paper delineates essential features of the model, describes its public health and scientific underpinnings and details how several countries are trying to implement it. With troops returning from combat in Afghanistan, Iraq and elsewhere, the model is already getting put to the test.

  15. Smoking, death, and Alzheimer disease: a case of competing risks.

    PubMed

    Chang, Chung-Chou H; Zhao, Yongyun; Lee, Ching-Wen; Ganguli, Mary

    2012-01-01

    If smoking is a risk factor for Alzheimer disease (AD) but a smoker dies of another cause before developing or manifesting AD, smoking-related mortality may mask the relationship between smoking and AD. This phenomenon, referred to as competing risk, complicates efforts to model the effect of smoking on AD. Typical survival regression models assume that censorship from analysis is unrelated to an individual's probability for developing AD (ie, censoring is noninformative). However, if individuals who die before developing AD are younger than those who survive long enough to develop AD, and if they include a higher percentage of smokers than nonsmokers, the incidence of AD will appear to be higher in older individuals and in nonsmokers. Further, age-specific mortality rates are higher in smokers because they die earlier than nonsmokers. Therefore, if we fail to take into account the competing risk of death when we estimate the effect of smoking on AD, we bias the results and are in fact only comparing the incidence of AD in nonsmokers with that in the healthiest smokers. In this study, we demonstrate that the effect of smoking on AD differs in models that are and are not adjusted for competing risks.
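
    The competing-risk distortion the abstract describes can be demonstrated with a small simulation: when death from other causes removes people before they can develop AD, an analysis that ignores death overstates AD incidence. The hazard rates and horizon below are arbitrary illustrative values, not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Competing exponential hazards: AD onset vs. death from other causes.
t_ad = rng.exponential(1 / 0.02, n)      # AD hazard 0.02 per year
t_death = rng.exponential(1 / 0.05, n)   # death hazard 0.05 per year

horizon = 20.0
# Cumulative incidence of AD by the horizon: AD must occur first
cif_ad = np.mean((t_ad < t_death) & (t_ad < horizon))

# Naive estimate that ignores the competing risk of death: treats everyone
# who died first as if they remained at risk of developing AD
naive = np.mean(t_ad < horizon)
```

    Here the naive figure exceeds the true cumulative incidence by roughly a third; if the competing death hazard additionally differs between smokers and nonsmokers, as in the abstract, the bias differs between the groups and distorts the estimated smoking effect.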

  16. The economic value added (EVA) resulting from medical care of functional amblyopia, strabismus, (pathologies of binocular vision) and asthma.

    PubMed

    Beauchamp, Cynthia L; Felius, Joost; Beauchamp, George R

    2010-01-01

    Value analysis in health care calculates the economic value added (EVA) that results from improvements in health and health care. Our purpose was to develop an EVA model and to apply the model to typical and hypothetical (instantaneous and perfect) cures for amblyopia, surgical strabismus and asthma, as another, but non-ophthalmological disease standard for comparison, in the United States. The model is based on changes in utility and longevity, the associated incremental costs, and an estimate of the value of life. Univariate sensitivity analyses were performed to arrive at a plausible range of outcomes. For the United States, the EVA for current practice amblyopia care is 12.9B dollars (billion) per year, corresponding to a return on investment (ROI) of 10.4% per yr. With substantial increases in investment aimed at maximal improvement ("perfect cure"), the EVA is 32.7B per yr, with ROI of 5.3% per yr. The EVA for typical surgical strabismus care is 10.3B per yr. A perfect cure may yield EVA of 9.6B per yr. The EVA for asthma is 1317B per yr (ROI 20.4% per yr), while a perfect cure may yield an EVA of 110B per yr. Sensitivity analysis demonstrated the relatively large effects of incidence, utility, and longevity, while incremental costs have a relatively minor effect on the EVA. The economic value added by improvements in patient-centered outcomes is very large. Failing to make the necessary investments in research, prevention, detection, prompt treatment and rehabilitation of these diseases, at virtually any conceivable cost, appears economically, medically, morally and ethically deficient and consequently wasteful at very least economically for our society.
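
    The model's core arithmetic, valuing changes in utility over remaining longevity against incremental cost, can be sketched as a simple discounted calculation. The function and every input below are hypothetical illustrations of the general EVA/ROI structure, not the paper's actual model or figures.

```python
def economic_value_added(delta_utility, years, value_of_life_year,
                         incremental_cost, discount=0.03):
    """Toy EVA: discounted value of utility gains minus incremental cost.

    delta_utility       per-year gain in health utility from treatment
    years               remaining longevity over which the gain applies
    value_of_life_year  monetary value of one year of perfect health
    """
    value = sum(delta_utility * value_of_life_year / (1 + discount) ** t
                for t in range(years))
    eva = value - incremental_cost
    roi = eva / incremental_cost        # return per unit invested
    return eva, roi

# Illustrative inputs: a 0.03 utility gain over 40 years, valued at
# $100,000 per quality-adjusted life year, costing $20,000 to deliver
eva, roi = economic_value_added(delta_utility=0.03, years=40,
                                value_of_life_year=100_000,
                                incremental_cost=20_000)
```

    The sensitivity pattern the abstract reports follows directly from this structure: utility, longevity, and incidence scale the value term multiplicatively, while cost enters only as a subtraction.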

  17. Contrasting effects of increased and decreased dopamine transmission on latent inhibition in ovariectomized rats and their modulation by 17beta-estradiol: an animal model of menopausal psychosis?

    PubMed

    Arad, Michal; Weiner, Ina

    2010-06-01

    Women with schizophrenia have later onset and better response to antipsychotic drugs (APDs) than men during reproductive years, but the menopausal period is associated with increased symptom severity and reduced treatment response. Estrogen replacement therapy has been suggested as beneficial but clinical data are inconsistent. Latent inhibition (LI), the capacity to ignore irrelevant stimuli, is a measure of selective attention that is disrupted in acute schizophrenia patients and in rats and humans treated with the psychosis-inducing drug amphetamine and can be reversed by typical and atypical APDs. Here we used amphetamine (1 mg/kg)-induced disrupted LI in ovariectomized rats to model low levels of estrogen along with hyperfunction of the dopaminergic system that may be occurring in menopausal psychosis, and tested the efficacy of APDs and estrogen in reversing disrupted LI. 17beta-Estradiol (50, 150 microg/kg), clozapine (atypical APD; 5, 10 mg/kg), and haloperidol (typical APD; 0.1, 0.3 mg/kg) effectively reversed amphetamine-induced LI disruption in sham rats, but were much less effective in ovariectomized rats; 17beta-estradiol and clozapine were effective only at high doses (150 microg/kg and 10 mg/kg, respectively), whereas haloperidol failed at both doses. Haloperidol and clozapine regained efficacy if coadministered with 17beta-estradiol (50 microg/kg, an ineffective dose). Reduced sensitivity to dopamine (DA) blockade coupled with spared/potentiated sensitivity to DA stimulation after ovariectomy may provide a novel model recapitulating the combination of increased vulnerability to psychosis with reduced response to APD treatment in female patients during menopause. In addition, our data show that 17beta-estradiol exerts antipsychotic activity.

  18. Collective motion of predictive swarms

    PubMed Central

    Rupprecht, Nathaniel; Vural, Dervis Can

    2017-01-01

    Theoretical models of populations and swarms typically start with the assumption that the motion of agents is governed by the local stimuli. However, an intelligent agent, with some understanding of the laws that govern its habitat, can anticipate the future, and make predictions to gather resources more efficiently. Here we study a specific model of this kind, where agents aim to maximize their consumption of a diffusing resource, by attempting to predict the future of a resource field and the actions of other agents. Once the agents make a prediction, they are attracted to move towards regions that have, and will have, denser resources. We find that the further the agents attempt to see into the future, the more their attempts at prediction fail, and the less resources they consume. We also study the case where predictive agents compete against non-predictive agents and find the predictors perform better than the non-predictors only when their relative numbers are very small. We conclude that predictivity pays off either when the predictors do not see too far into the future or the number of predictors is small. PMID:29065136

  19. Physics of Inference

    NASA Astrophysics Data System (ADS)

    Toroczkai, Zoltan

    Jaynes's maximum entropy method provides a family of principled models that allow the prediction of a system's properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. We illustrate this phenomenon on several examples, including from complex networks, combinatorics and classical spin systems (e.g., Blume-Emery-Griffiths lattice-spin models). Exploiting these nonlinear relationships we then propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on real-world network data. Finally, we discuss the implications of these findings on the relationship between the geometrical properties of the density of states function and phase transitions in spin systems. Supported in part by Grant No. FA9550-12-1-0405 from AFOSR/DARPA and by Grant No. HDTRA 1-09-1-0039 from DTRA.
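
    The log-concavity criterion for degeneracy can be checked numerically: a positive sequence g is log-concave iff g[k]² ≥ g[k-1]·g[k+1] for all interior k. The two test sequences below are illustrative stand-ins, binomial coefficients for the density of states of independent spins, and an arbitrary bimodal sequence for a degenerate case; they are not the paper's network data.

```python
import numpy as np
from math import comb

def is_log_concave(g):
    """True iff the positive sequence g satisfies g[k]^2 >= g[k-1]*g[k+1]."""
    g = np.asarray(g, dtype=float)
    return bool(np.all(g[1:-1] ** 2 >= g[:-2] * g[2:]))

# Binomial coefficients (density of states of 20 independent spins) are
# log-concave, so the maximum entropy model is well behaved...
binom = [comb(20, k) for k in range(21)]

# ...while a bimodal density of states is not, signalling the degeneracy
# (spontaneous symmetry breaking) discussed in the abstract
bimodal = [1, 10, 2, 10, 1]
```

    The paper's proposed fix is a transformation of the constraining observables that renders the density of states log-concave, restoring a unique, well-behaved maximum entropy prediction.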

  20. Collective motion of predictive swarms.

    PubMed

    Rupprecht, Nathaniel; Vural, Dervis Can

    2017-01-01

    Theoretical models of populations and swarms typically start with the assumption that the motion of agents is governed by the local stimuli. However, an intelligent agent, with some understanding of the laws that govern its habitat, can anticipate the future, and make predictions to gather resources more efficiently. Here we study a specific model of this kind, where agents aim to maximize their consumption of a diffusing resource, by attempting to predict the future of a resource field and the actions of other agents. Once the agents make a prediction, they are attracted to move towards regions that have, and will have, denser resources. We find that the further the agents attempt to see into the future, the more their attempts at prediction fail, and the less resources they consume. We also study the case where predictive agents compete against non-predictive agents and find the predictors perform better than the non-predictors only when their relative numbers are very small. We conclude that predictivity pays off either when the predictors do not see too far into the future or the number of predictors is small.

  1. Climate change, species distribution models, and physiological performance metrics: predicting when biogeographic models are likely to fail.

    PubMed

    Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S

    2013-09-01

    Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
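
    The TEM decision rule can be sketched as a comparison between the margin LTmax − CTmax and the size of environmental temperature excursions. The function, its quantile-based measure of fluctuation size, and all temperatures below are hypothetical illustrations of the abstract's criterion, not the authors' implementation or data.

```python
import numpy as np

def model_failure_likely(ct_max, lt_max, env_temps, quantile=0.99):
    """Toy decision rule based on the transient event margin (TEM).

    TEM = LTmax - CTmax. If TEM is large relative to environmental
    fluctuations, organisms routinely pass energetic failure (CTmax)
    without dying, so a biogeographic model fitted elsewhere is likely
    to fail in this habitat; small TEM relative to fluctuations makes
    the model more robust.
    """
    tem = lt_max - ct_max
    # typical size of extreme warm excursions above the median
    excursion = np.quantile(env_temps, quantile) - np.median(env_temps)
    return bool(excursion < tem)   # TEM large vs. fluctuations -> failure

rng = np.random.default_rng(3)
calm = 20 + rng.normal(0, 1, 10_000)       # small temperature fluctuations
volatile = 20 + rng.normal(0, 8, 10_000)   # large temperature fluctuations

risk_calm = model_failure_likely(ct_max=28, lt_max=38, env_temps=calm)
risk_volatile = model_failure_likely(ct_max=28, lt_max=38, env_temps=volatile)
```

    With the same species (TEM of 10 °C), the rule flags the calm habitat as risky for model transfer and the volatile one as comparatively safe, which is the qualitative prediction the abstract makes.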

  2. Characterizing the stellar population of a sample of star forming galaxies with high emission of both [OIV]25.9um and [NeII]12.8um

    NASA Astrophysics Data System (ADS)

    Martínez-Paredes, M.; Bruzual, G.; Meléndez, M.; González-Martín, O.

    2017-11-01

    The optical diagnostic diagrams (BPT81; VO87) allow us to discriminate between different excitation mechanisms, such as that produced by young stars and that produced by an AGN during the accretion of matter onto the supermassive black hole. These tools are important because they allow us to study the connection between starbursts and AGN. However, despite their great success, the identification of the most heavily dust-obscured systems remains a challenge for optical diagrams. Mid-infrared diagnostics are more suitable for studying dust-enshrouded systems, where dust obscuration can hamper the interpretation of traditional optical diagnostics, since in this spectral range we have access to low-ionization lines (such as [Ne II]12.8μm) typical of star-forming regions and high-ionization lines typical of active galaxies ([OIV]25.9μm), while intermediate-ionization lines ([Ne III]15.3μm) probe the regime where the AGN coexists with active star formation in the host galaxy. In previous work (Melendez14) we carried out extensive and detailed photoionization modeling to separate the different excitation mechanisms in the mid-infrared diagnostic diagrams proposed by Weaver10. We successfully modelled the [NeIII]/[NeII] vs. [OIV]/[NeIII] line ratios of AGN and starburst galaxies. However, we failed to model the observed line ratios of galaxies with normal star formation activity ([NeIII]/[NeII]<1 and [OIV]/[NeIII]<1). These results suggest the presence of a more complex excitation mechanism in these galaxies. In this project we are using the updated stellar population models of BC17, which include massive stars, and the updated CLOUDY photoionization models (Ferland17) to characterize the properties of the stellar population that produces the high ionization conditions in these galaxies.

  3. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
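The downward bias described here can be demonstrated with a small simulation (our illustration, not the authors' analysis; the variance components are invented): when neurons are clustered by animal, the naive standard error of the mean, which treats every neuron as independent, comes out smaller than the cluster-aware standard error computed from per-animal means.

```python
import numpy as np

rng = np.random.default_rng(42)
n_animals, neurons_per_animal = 8, 25

# Clustered data: a shared per-animal effect (source of intra-class
# correlation) plus neuron-level noise.
animal_effect = rng.normal(0, 1.0, n_animals)
y = animal_effect[:, None] + rng.normal(0, 0.5, (n_animals, neurons_per_animal))

# Naive SE treats all 200 neurons as independent, as a simple linear
# model on pooled neurons implicitly does.
naive_se = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-aware SE uses one summary per animal, in the spirit of a
# mixed effects model with animal as the grouping factor.
cluster_se = y.mean(axis=1).std(ddof=1) / np.sqrt(n_animals)

print(f"naive SE = {naive_se:.3f}, cluster-aware SE = {cluster_se:.3f}")
```

In practice one would fit the mixed model directly, e.g. `statsmodels.formula.api.mixedlm("intersections ~ radius", df, groups=df["animal"])`; the point of the sketch is only that ignoring the grouping shrinks the standard error and hence the p-values.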

  4. A dynamic magnetic tension force as the cause of failed solar eruptions

    DOE Data Explorer

    Myers, Clayton E. [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000345398406); Yamada, Masaaki [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000349961649); Ji, Hantao [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Laboratory for Space Environment and Physical Sciences, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China] (ORCID:0000000196009963); Yoo, Jongsoo [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000338811995); Fox, William [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:000000016289858X); Jara-Almonte, Jonathan [Princeton Univ., NJ (United States). Dept. of Astrophysical Sciences; Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)] (ORCID:0000000307606198); Savcheva, Antonia [Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138, USA] (ORCID:000000025598046X); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138, USA] (ORCID:0000000174162895)

    2015-12-11

    Coronal mass ejections are solar eruptions driven by a sudden release of magnetic energy stored in the Sun’s corona. In many cases, this magnetic energy is stored in long-lived, arched structures called magnetic flux ropes. When a flux rope destabilizes, it can either erupt and produce a coronal mass ejection or fail and collapse back towards the Sun. The prevailing belief is that the outcome of a given event is determined by a magnetohydrodynamic force imbalance called the torus instability. This belief is challenged, however, by observations indicating that torus-unstable flux ropes sometimes fail to erupt. This contradiction has not yet been resolved because of a lack of coronal magnetic field measurements and the limitations of idealized numerical modelling. Here we report the results of a laboratory experiment that reveal a previously unknown eruption criterion below which torus-unstable flux ropes fail to erupt. We find that such ‘failed torus’ events occur when the guide magnetic field (that is, the ambient field that runs toroidally along the flux rope) is strong enough to prevent the flux rope from kinking. Under these conditions, the guide field interacts with electric currents in the flux rope to produce a dynamic toroidal field tension force that halts the eruption. This magnetic tension force is missing from existing eruption models, which is why such models cannot explain or predict failed torus events.

  5. Cation reordering in natural titanomagnetites and implications for paleointensity studies

    NASA Astrophysics Data System (ADS)

    Bowles, J. A.; Jackson, M. J.; Gee, J. S.

    2013-05-01

    Successful paleointensity experiments hinge on the underlying assumption of reciprocity; the remanence acquired over a particular temperature range should be fully removed over the same temperature range, and vice versa. This means that the blocking (TB) and unblocking (TUB) temperature spectra are identical and do not change during the course of the experiment. We will present the results of recent work demonstrating that some natural titanomagnetites undergo cation reordering on laboratory timescales and at temperatures at or below the Curie temperature (TC). The bulk composition of the titanomagnetites (Fe3-xTixO4) varies between approximately 0.2 < x < 0.4, with moderate degrees of Mg and Al substitution. Although there is no attendant structural or chemical alteration, the re-distribution of ferric and ferrous iron cations results in reversible changes in Curie temperature of up to 150°C. This necessarily changes the blocking temperature spectrum as a function of prior thermal history. These changes in TC, TUB and TB clearly pose problems for all paleointensity experiments, but the effects may be most apparent during Thellier-type experiments where the sample is step-wise heated to increasingly higher temperatures. The blocking temperature distribution will be expected to change over the course of the experiment even in the absence of chemical alteration, and one can expect the experiment to fail. We will explore the effects of cation redistribution on paleointensity experiments through numerical models and by comparison with paleointensity data from pumice samples taken from the 1980 pyroclastic flows at Mt. St. Helens (MSH). In the MSH samples, two phases are typically present: a predominantly multi-domain, homogeneous titanomagnetite (associated with the cation reordering) and an oxyexsolved, single-domain to pseudo-single-domain phase with ilmenite lamellae in a magnetite-rich host. 
Samples that result in technically successful paleointensity experiments that give the correct field value are most likely to be dominated by the oxyexsolved phase. By contrast, samples with a considerable proportion of the homogeneous phase typically fail the paleointensity experiments and have unstable magnetization behavior at temperatures associated with cation reordering on laboratory time scales. In many samples with both phases, pTRM checks pass at both low (<300°C) and high (>500°C) temperatures, but fail in the intermediate temperature window. The compositions of titanomagnetites that exhibit this cation reordering effect are extremely common in rocks of andesitic, dacitic, and rhyolitic composition, as well as in some basalts. Cation reordering may therefore be a previously unrecognized cause of failure in paleointensity experiments.

  6. Combined In-Plane and Through-the-Thickness Analysis for Failure Prediction of Bolted Composite Joints

    NASA Technical Reports Server (NTRS)

    Kradinov, V.; Madenci, E.; Ambur, D. R.

    2004-01-01

    Although two-dimensional methods provide accurate predictions of contact stresses and bolt load distribution in bolted composite joints with multiple bolts, they fail to capture the effect of thickness on the strength prediction. Typically, the plies close to the interface of laminates are expected to be the most highly loaded, due to bolt deformation, and they are usually the first to fail. This study presents an analysis method to account for the variation of stresses in the thickness direction by augmenting a two-dimensional analysis with a one-dimensional through-the-thickness analysis. The two-dimensional in-plane solution method, based on a combined complex potential and variational formulation, satisfies the equilibrium equations exactly, and satisfies the boundary conditions and constraints by minimizing the total potential. Under general loading conditions, this method addresses multiple bolt configurations without requiring symmetry conditions while accounting for the contact phenomenon and the interaction among the bolts explicitly. The through-the-thickness analysis is based on a beam-on-elastic-foundation model. The bolt, represented as a short beam accounting for bending and shear deformations, rests on springs whose coefficients represent the resistance of the composite laminate to bolt deformation. The combined in-plane and through-the-thickness analysis produces the bolt/hole displacement in the thickness direction, as well as the stress state in each ply. The initial ply failure predicted by applying the average stress criterion is followed by a simple progressive failure analysis. Application of the model is demonstrated by considering single- and double-lap joints of metal plates bolted to composite laminates.

  7. Signals in the ionosphere generated by tsunami earthquakes: observations and modeling support

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Sladen, A.; Mikesell, D.; Larmat, C. S.; Rakoto, V.; Remillieux, M.; Lee, R.; Khelfi, K.; Lognonne, P. H.; Astafyeva, E.

    2017-12-01

    Forecasting systems failed to predict the magnitude of the 2011 great tsunami in Japan due to the difficulty and cost of instrumenting the ocean with high-quality and dense networks. Melgar et al. (2013) show that using all of the conventional data (inland seismic, geodetic, and tsunami gauges) with the best inversion method still fails to predict the correct height of the tsunami before it breaks onto a coast near the epicenter (< 500 km). On the other hand, in the last decade, scientists have gathered convincing evidence of transient signals in ionospheric Total Electron Content (TEC) observations that are associated with open-ocean tsunami waves. Even though typical tsunami waves are only a few centimeters high, they are powerful enough to create atmospheric vibrations extending all the way to the ionosphere, 300 kilometers up in the atmosphere. Therefore, we propose to incorporate the ionospheric signals into tsunami early-warning systems. We anticipate that the method could be decisive for mitigating "tsunami earthquakes", which trigger tsunamis larger than expected from their short-period magnitude. These events are challenging to characterize as they rupture the near-trench subduction interface, in a distant region less constrained by onshore data. As a couple of devastating tsunami earthquakes happen per decade, they represent a real threat for onshore populations and a challenge for tsunami early-warning systems. We will present the TEC observations of the recent Java 2006 and Mentawai 2010 tsunami earthquakes and base our analysis on acoustic ray tracing, normal-mode summation, and the simulation code SPECFEM, which solves the wave equation in coupled acoustic (ocean, atmosphere) and elastic (solid earth) domains. Rupture histories are entered as finite source models, which will allow us to evaluate the effect of a relatively slow rupture on the surrounding ocean and atmosphere.

  8. The economics (or lack thereof) of aerosol geoengineering

    NASA Astrophysics Data System (ADS)

    Goes, M.; Keller, K.; Tuana, N.

    2009-04-01

    Anthropogenic greenhouse gas emissions are changing the Earth's climate and impose substantial risks for current and future generations. What are scientifically sound, economically viable, and ethically defensible strategies to manage these climate risks? Ratified international agreements call for a reduction of greenhouse gas emissions to avoid dangerous anthropogenic interference with the climate system. Recent proposals, however, call for the deployment of a different approach: to geoengineer climate by injecting aerosol precursors into the stratosphere. Published economic studies typically suggest that substituting aerosol geoengineering for abatement of carbon dioxide emissions results in large net monetary benefits. However, these studies neglect the risks of aerosol geoengineering due to (i) the potential for future geoengineering failures and (ii) the negative impacts associated with the aerosol forcing. Here we use a simple integrated assessment model of climate change to analyze potential economic impacts of aerosol geoengineering strategies over a wide range of uncertain parameters such as climate sensitivity, the economic damages due to climate change, and the economic damages due to aerosol geoengineering forcing. The simplicity of the model provides the advantages of parsimony and transparency, but it also imposes severe caveats on the interpretation of the results. For example, the analysis is based on a globally aggregated model and is hence silent on the question of intragenerational distribution of costs and benefits. In addition, the analysis neglects the effects of endogenous learning about the climate system. We show that the risks associated with a future geoengineering failure and negative impacts of aerosol forcings can cause geoengineering strategies to fail an economic cost-benefit test. One key to this finding is that a geoengineering failure would lead to dramatic and abrupt climatic changes. 
The monetary damages due to this failure can dominate the cost-benefit analysis because the monetary damages of climate change are expected to increase with the rate of change. Substituting aerosol geoengineering for greenhouse gas emission abatement might fail not only an economic cost-benefit test but also an ethical test of distributional justice. Such a substitution constitutes a conscious risk transfer to future generations. Intergenerational justice demands distributional justice, namely that present generations may not create benefits for themselves in exchange for burdens on future generations. We use the economic model to quantify this risk transfer to better inform the judgment of whether substituting aerosol geoengineering for carbon dioxide emission abatement fails this ethical test.

  9. Metabolic support for the heart: complementary therapy for heart failure?

    PubMed

    Heggermont, Ward A; Papageorgiou, Anna-Pia; Heymans, Stephane; van Bilsen, Marc

    2016-12-01

    The failing heart has an increased metabolic demand and at the same time suffers from impaired energy efficiency, which is a detrimental combination. Therefore, therapies targeting the energy-deprived failing heart and rewiring cardiac metabolism hold great potential, but are lacking in daily clinical practice. Metabolic impairment in heart failure patients has been well characterized for patients with reduced ejection fraction, and is coming of age in patients with 'preserved' ejection fraction. Targeting cardiomyocyte metabolism in heart failure could complement current heart failure treatments that do improve cardiovascular haemodynamics, but not the energetic status of the heart. In this review, we discuss the hallmarks of normal cardiac metabolism, typical metabolic disturbances in heart failure, and past and present therapeutic targets that impact on cardiac metabolism. © 2016 The Authors. European Journal of Heart Failure © 2016 European Society of Cardiology.

  10. Compression failure mechanisms of single-ply, unidirectional, carbon-fiber composites

    NASA Technical Reports Server (NTRS)

    Ha, Jong-Bae; Nairn, John A.

    1992-01-01

    A single-ply composite compression test was used to study compression failure mechanisms as a function of fiber type, matrix type, and interfacial strength. Composites made with low- and intermediate-modulus fibers (Hercules AS4 and IM7) in either an epoxy (Hercules 3501-6) or a thermoplastic (ULTEM and LARC-TPI) matrix failed by kink banding and out-of-plane slip. The failures proceeded by rapid and catastrophic damage propagation across the specimen width. Composites made with high-modulus fibers (Hercules HMS4/3501-6) had a much lower compression strength. Their failures were characterized by kink banding and longitudinal splitting. The damage propagated slowly across the specimen width. Composites made with fibers treated to give low interfacial strength had low compression strength. These composites typically failed near the specimen ends and had long kink bands.

  11. Decision-Tree Analysis for Predicting First-Time Pass/Fail Rates for the NCLEX-RN® in Associate Degree Nursing Students.

    PubMed

    Chen, Hsiu-Chin; Bennett, Sean

    2016-08-01

    Little evidence shows the use of decision-tree algorithms in identifying predictors and analyzing their associations with pass rates for the NCLEX-RN® in associate degree nursing students. This longitudinal and retrospective cohort study investigated whether a decision-tree algorithm could be used to develop an accurate prediction model for the students' passing or failing the NCLEX-RN. This study used archived data from 453 associate degree nursing students in a selected program. The chi-squared automatic interaction detection analysis of the decision trees module was used to examine the effect of the collected predictors on passing/failing the NCLEX-RN. The actual percentage scores of Assessment Technologies Institute®'s RN Comprehensive Predictor® accurately identified students at risk of failing. The classification model correctly classified 92.7% of the students for passing. This study applied the decision-tree model to analyze a sequence database for developing a prediction model for early remediation in preparation for the NCLEX-RN. [J Nurs Educ. 2016;55(8):454-457.]. Copyright 2016, SLACK Incorporated.
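The chi-squared split at the heart of CHAID-style decision trees can be sketched with one split (our illustration with invented scores and outcomes, not the study's data or software): choose the predictor-score threshold that maximizes the Pearson chi-squared statistic of the resulting 2×2 pass/fail table.

```python
import numpy as np

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def best_chi2_split(scores, passed):
    """One CHAID-style split: the score threshold maximizing chi-squared."""
    best = (None, -1.0)
    for t in np.unique(scores)[:-1]:
        low, high = passed[scores <= t], passed[scores > t]
        stat = chi2_2x2((low == 0).sum(), (low == 1).sum(),
                        (high == 0).sum(), (high == 1).sum())
        if stat > best[1]:
            best = (t, stat)
    return best

# Hypothetical predictor percentage scores and NCLEX-RN outcomes (1 = pass)
scores = np.array([55, 58, 60, 62, 65, 68, 70, 72, 75, 80])
passed = np.array([ 0,  0,  0,  1,  0,  1,  1,  1,  1,  1])
threshold, stat = best_chi2_split(scores, passed)
print(threshold, round(stat, 2))  # -> 65 6.67
```

A full CHAID tree repeats this search recursively with merged categories and significance testing; the stump shows only the splitting criterion.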

  12. 77 FR 8722 - Airworthiness Directives; Eurocopter Deutschland Model EC135 Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... that in the past, the FADEC FAIL caution light illuminated on a few EC135 T1 helicopters. It states... metering unit and transmitted to the FADEC. This discrepancy led to the display of the FADEC FAIL caution... the FADEC to automatically meter fuel, indicated by a FADEC FAIL cockpit caution light, and subsequent...

  13. 76 FR 27956 - Airworthiness Directives; Eurocopter Deutschland Model EC135 Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... past, the FADEC FAIL caution light illuminated on a few EC135 T1 helicopters. They state that this was... transmitted to the FADEC. This discrepancy led to the display of the FADEC FAIL caution light and ``freezing... to automatically meter fuel, indicated by a FADEC FAIL cockpit caution light, and subsequent loss of...

  14. Allogeneic lymphocytes persist and traffic in feral MHC-matched Mauritian cynomolgus macaques.

    PubMed

    Greene, Justin M; Burwitz, Benjamin J; Blasky, Alex J; Mattila, Teresa L; Hong, Jung Joo; Rakasz, Eva G; Wiseman, Roger W; Hasenkrug, Kim J; Skinner, Pamela J; O'Connor, Shelby L; O'Connor, David H

    2008-06-11

    Thus far, live attenuated SIV has been the most successful method for vaccinating macaques against pathogenic SIV challenge; however, it is not clear what mechanisms are responsible for this protection. Adoptive transfer studies in mice have been integral to understanding live attenuated vaccine protection in models like Friend virus. Previous adoptive transfers in primates have failed as transferred cells are typically cleared within hours after transfer. Here we describe adoptive transfer studies in Mauritian origin cynomolgus macaques (MCM), a non-human primate model with limited MHC diversity. Cells transferred between unrelated MHC-matched macaques persist for at least fourteen days but are rejected within 36 hours in MHC-mismatched macaques. Cells trafficked from the blood to peripheral lymphoid tissues within 12 hours of transfer. MHC-matched MCM provide the first viable primate model for adoptive transfer studies. Because macaques infected with SIV are the best model for HIV/AIDS pathogenesis, we can now directly study the correlates of protective immune responses to AIDS viruses. For example, plasma viral loads following pathogenic SIV challenge are reduced by several orders of magnitude in macaques previously immunized with attenuated SIV. Adoptive transfer of lymphocyte subpopulations from vaccinated donors into SIV-naïve animals may define the immune mechanisms responsible for protection and guide future vaccine development.

  15. A Judgement Bias Test to Assess Affective State and Potential Therapeutics in a Rat Model of Chemotherapy-Induced Mucositis.

    PubMed

    George, Rebecca P; Barker, Timothy H; Lymn, Kerry A; Bigatton, Dylan A; Howarth, Gordon S; Whittaker, Alexandra L

    2018-05-29

    Chemotherapy-induced mucositis is an extremely painful condition that occurs in 40-60% of patients undergoing chemotherapy. As mucositis currently has no effective treatment, and due to the self-limiting nature of the condition, the major treatment aims are to manage symptoms and limit pain with significance placed on improving patient quality of life. Rodent models are frequently used in mucositis research. These investigations typically assess pathological outcomes, yet fail to include a measure of affective state; the key therapeutic goal. Assessment of cognitive biases is a novel approach to determining the affective state of animals. Consequently, this study aimed to validate a cognitive bias test through a judgement bias paradigm to measure affective state in a rat model of chemotherapy-induced intestinal mucositis. Rats with intestinal mucositis demonstrated a negative affective state, which was partially ameliorated by analgesic administration, whilst healthy rats showed an optimistic response. This study concluded that the judgement bias test was able to evaluate the emotional state of rats with chemotherapy-induced mucositis. These findings provide a foundation for future refinement to the experimental design associated with the animal model that will expedite successful transitioning of novel therapeutics to clinical practice, and also improve humane endpoint implementation.

  16. Corner Polyhedron and Intersection Cuts

    DTIC Science & Technology

    2011-03-01

    ...finding valid inequalities for the set (1) that are violated by the point x̄. Typically, x̄ is an optimal solution of the linear programming (LP...

  17. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  18. Population-expression models of immune response

    NASA Astrophysics Data System (ADS)

    Stromberg, Sean P.; Antia, Rustom; Nemenman, Ilya

    2013-06-01

    The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable.
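The non-conservative advection-diffusion equations can be illustrated with a small finite-difference sketch (our illustration with made-up coefficients, not the authors' model): a cell density advects along an expression coordinate (differentiation), diffuses (expression noise), and grows through a proliferation source term, which is what makes the equation non-conservative.

```python
import numpy as np

def step(u, dx, dt, v=1.0, D=0.05, r=0.5):
    """One explicit step of du/dt = -v du/dx + D d2u/dx2 + r*u on an
    expression coordinate x, with absorbing ends of the axis."""
    adv = -v * (u - np.roll(u, 1)) / dx                    # upwind advection
    diff = D * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u_new = u + dt * (adv + diff + r * u)                  # r*u: proliferation
    u_new[0] = u_new[-1] = 0.0
    return u_new

x = np.linspace(0, 1, 101)
dx, dt = x[1] - x[0], 1e-4
u = np.exp(-((x - 0.2) / 0.05) ** 2)     # cells start near a "naive" phenotype
initial_mass = u.sum()

for _ in range(2000):                    # integrate to t = 0.2
    u = step(u, dx, dt)

# The peak has advected toward larger x ("effector" phenotype) and the
# total cell number has grown because of the non-conservative r*u term.
print(x[np.argmax(u)], u.sum() / initial_mass)
```

Real population-expression models couple many expression dimensions and pathogen dynamics; this 1D scheme only shows why the governing PDE is advective, diffusive, and non-conservative at once.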

  20. A Mass Spectrometry Proteomics Data Management Platform*

    PubMed Central

    Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael

    2012-01-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296
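The core-plus-extension table design can be sketched with SQLite (a schematic of the described architecture, not MSDaPl's actual schema; all table and column names are invented): a unified core table holds results from any pipeline, and a pipeline-specific table extends it with that pipeline's extra scores, so queries can span experiments through the shared core.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Core table shared by all pipelines: every imported result row lives here,
# with experimental annotation attached to the data rather than to files.
cur.execute("""CREATE TABLE core_psm (
    id INTEGER PRIMARY KEY,
    experiment TEXT NOT NULL,
    peptide TEXT NOT NULL,
    charge INTEGER)""")

# Extension table for one hypothetical pipeline's extra score columns.
cur.execute("""CREATE TABLE sequest_psm (
    core_id INTEGER REFERENCES core_psm(id),
    xcorr REAL)""")

cur.execute("INSERT INTO core_psm VALUES (1, 'exp42', 'PEPTIDEK', 2)")
cur.execute("INSERT INTO sequest_psm VALUES (1, 3.14)")

# Because the core is unified, a single query compares results across
# pipelines and experiments instead of re-parsing scattered files.
row = cur.execute("""SELECT c.peptide, s.xcorr
                     FROM core_psm c JOIN sequest_psm s ON s.core_id = c.id
                     WHERE c.experiment = 'exp42'""").fetchone()
print(row)  # -> ('PEPTIDEK', 3.14)
```

The design choice this illustrates is the one the abstract argues for: format churn is absorbed by import code at the boundary, while downstream queries target stable core tables.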

  1. A null model for microbial diversification

    PubMed Central

    Straub, Timothy J.

    2017-01-01

    Whether prokaryotes (Bacteria and Archaea) are naturally organized into phenotypically and genetically cohesive units comparable to animal or plant species remains contested, frustrating attempts to estimate how many such units there might be, or to identify the ecological roles they play. Analyses of gene sequences in various closely related prokaryotic groups reveal that sequence diversity is typically organized into distinct clusters, and processes such as periodic selection and extensive recombination are understood to be drivers of cluster formation (“speciation”). However, observed patterns are rarely compared with those obtainable with simple null models of diversification under stochastic lineage birth and death and random genetic drift. Via a combination of simulations and analyses of core and phylogenetic marker genes, we show that patterns of diversity for the genera Escherichia, Neisseria, and Borrelia are generally indistinguishable from patterns arising under a null model. We suggest that caution should thus be taken in interpreting observed clustering as a result of selective evolutionary forces. Unknown forces do, however, appear to play a role in Helicobacter pylori, and some individual genes in all groups fail to conform to the null model. Taken together, we recommend the presented birth−death model as a null hypothesis in prokaryotic speciation studies. It is only when the real data are statistically different from the expectations under the null model that some speciation process should be invoked. PMID:28630293
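
    The null model described above can be sketched as a minimal stochastic birth-death simulation. The sketch below is a toy version under stated assumptions: equal birth and death rates, one event per step, and no selection; the paper's model additionally evolves sequence divergence along each lineage, which is omitted here.

```python
import random

def birth_death_lineages(n_steps=1000, birth=0.5, death=0.5, seed=1):
    """Minimal null model of diversification: at each step a random lineage
    either splits (birth) or goes extinct (death), with no selective forces.
    Returns the lineage count through time. Toy sketch only; the paper's
    model also tracks sequence divergence along lineages."""
    rng = random.Random(seed)
    lineages = 1
    history = []
    for _ in range(n_steps):
        if lineages == 0:
            break  # the whole clade went extinct
        if rng.random() < birth / (birth + death):
            lineages += 1
        else:
            lineages -= 1
        history.append(lineages)
    return history

hist = birth_death_lineages()
print(len(hist), hist[-1])
```

    Clusters of closely related lineages arise in such runs purely by chance, which is the point of the null hypothesis: observed clustering should be invoked as evidence of speciation processes only when it differs statistically from this expectation.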

  2. Behavioural and cognitive sex/gender differences in autism spectrum condition and typically developing males and females.

    PubMed

    Hull, Laura; Mandy, William; Petrides, K V

    2017-08-01

    Studies assessing sex/gender differences in autism spectrum conditions often fail to include typically developing control groups. It is, therefore, unclear whether observed sex/gender differences reflect those found in the general population or are particular to autism spectrum conditions. A systematic search identified articles comparing behavioural and cognitive characteristics in males and females with and without an autism spectrum condition diagnosis. A total of 13 studies were included in meta-analyses of sex/gender differences in core autism spectrum condition symptoms (social/communication impairments and restricted/repetitive behaviours and interests) and intelligence quotient. A total of 20 studies were included in a qualitative review of sex/gender differences in additional autism spectrum condition symptoms. For core traits and intelligence quotient, sex/gender differences were comparable in autism spectrum conditions and typical samples. Some additional autism spectrum condition symptoms displayed different patterns of sex/gender differences in autism spectrum conditions and typically developing groups, including measures of executive function, empathising and systemising traits, internalising and externalising problems and play behaviours. Individuals with autism spectrum conditions display typical sex/gender differences in core autism spectrum condition traits, suggesting that diagnostic criteria based on these symptoms should take into account typical sex/gender differences. However, awareness of associated autism spectrum condition symptoms should include the possibility of different male and female phenotypes, to ensure those who do not fit the 'typical' autism spectrum condition presentation are not missed.

  3. Resurgence as Choice.

    PubMed

    Shahan, Timothy A; Craig, Andrew R

    2017-08-01

    Resurgence is typically defined as an increase in a previously extinguished target behavior when a more recently reinforced alternative behavior is later extinguished. Some treatments of the phenomenon have suggested that it might also extend to circumstances where either the historic or more recently reinforced behavior is reduced by other non-extinction related means (e.g., punishment, decreases in reinforcement rate, satiation, etc.). Here we present a theory of resurgence suggesting that the phenomenon results from the same basic processes governing choice. In its most general form, the theory suggests that resurgence results from changes in the allocation of target behavior driven by changes in the values of the target and alternative options across time. Specifically, resurgence occurs when there is an increase in the relative value of an historically effective target option as a result of a subsequent devaluation of a more recently effective alternative option. We develop a more specific quantitative model of how extinction of the target and alternative responses in a typical resurgence paradigm might produce such changes in relative value across time using a temporal weighting rule. The example model does a good job in accounting for the effects of reinforcement rate and related manipulations on resurgence in simple schedules where Behavioral Momentum Theory has failed. We also discuss how the general theory might be extended to other parameters of reinforcement (e.g., magnitude, quality), other means to suppress target or alternative behavior (e.g., satiation, punishment, differential reinforcement of other behavior), and other factors (e.g., non-contingent versus contingent alternative reinforcement, serial alternative reinforcement, and multiple schedules). Copyright © 2016 Elsevier B.V. All rights reserved.
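
    The choice account can be illustrated numerically. The weighting below, in which each past reinforced session contributes in inverse proportion to its age, is a simplified stand-in for the paper's temporal weighting rule, not its actual equations, and the session layout is hypothetical.

```python
def value(reinforced_sessions, now):
    """Recency-weighted value of an option: each past reinforced session
    contributes in inverse proportion to its age. Hedged stand-in for a
    temporal weighting rule, not the paper's model."""
    return sum(1.0 / (now - s + 1) for s in reinforced_sessions)

# Sessions 0-4: target reinforced.  Sessions 5-9: target extinguished,
# alternative reinforced.  Sessions 10+: both extinguished.
target_sessions = range(0, 5)
alt_sessions = range(5, 10)

relative_target = {}
for now in (9, 14):  # end of alternative reinforcement vs. 5 sessions later
    vt = value(target_sessions, now)
    va = value(alt_sessions, now)
    relative_target[now] = vt / (vt + va)
    print(now, round(relative_target[now], 3))
```

    Once both options sit on extinction, the alternative's recent reinforcers age while the target's remote history gains relative weight, so the target's relative value (and hence its share of behavior) climbs back up: the signature of resurgence.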

  4. Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Richard Yorg

    2011-03-01

    The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in 2 phases with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less focus on scoping fire modeling. This was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario. Therefore, dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges which will be discussed in the full paper.

  5. Maternal choline supplementation in a sheep model of first trimester binge alcohol fails to protect against brain volume reductions in peripubertal lambs.

    PubMed

    Birch, Sharla M; Lenox, Mark W; Kornegay, Joe N; Paniagua, Beatriz; Styner, Martin A; Goodlett, Charles R; Cudd, Tim A; Washburn, Shannon E

    2016-09-01

    Fetal alcohol spectrum disorder (FASD) is a leading potentially preventable birth defect. Poor nutrition may contribute to adverse developmental outcomes of prenatal alcohol exposure, and supplementation of essential micronutrients such as choline has shown benefit in rodent models. The sheep model of first-trimester binge alcohol exposure was used in this study to model the dose of maternal choline supplementation used in an ongoing prospective clinical trial involving pregnancies at risk for FASD. Primary outcome measures included volumetrics of the whole brain, cerebellum, and pituitary derived from magnetic resonance imaging (MRI) in 6-month-old lambs, testing the hypothesis that alcohol-exposed lambs would have brain volume reductions that would be ameliorated by maternal choline supplementation. Pregnant sheep were randomly assigned to one of five groups - heavy binge alcohol (HBA; 2.5 g/kg/treatment ethanol), heavy binge alcohol plus choline supplementation (HBC; 2.5 g/kg/treatment ethanol and 10 mg/kg/day choline), saline control (SC), saline control plus choline supplementation (SCC; 10 mg/kg/day choline), and normal control (NC). Ewes were given intravenous alcohol (HBA, HBC; mean peak BACs of ∼280 mg/dL) or saline (SC, SCC) on three consecutive days per week from gestation day (GD) 4-41; choline was administered on GD 4-148. MRI scans of lamb brains were performed postnatally on day 182. Lambs from both alcohol groups (with or without choline) showed significant reductions in total brain volume; cerebellar and pituitary volumes were not significantly affected. This is the first report of MRI-derived volumetric brain reductions in a sheep model of FASD following binge-like alcohol exposure during the first trimester. These results also indicate that maternal choline supplementation comparable to doses in human studies fails to prevent brain volume reductions typically induced by first-trimester binge alcohol exposure.
Future analyses will assess behavioral outcomes along with regional brain and neurohistological measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Response to Intervention: Ready or Not? Or, from Wait-to-Fail to Watch-Them-Fail

    ERIC Educational Resources Information Center

    Reynolds, Cecil R.; Shaywitz, Sally E.

    2009-01-01

    Response to Intervention (RTI) models of diagnosis and intervention are being implemented rapidly throughout the schools. The purposes of invoking an RTI model for disabilities in the schools clearly are laudable, yet close examination reveals an unappreciated paucity of empirical support for RTI and an overly optimistic view of its practical,…

  7. Improving the Enterprise Requirements and Acquisition Model’s Developmental Test and Evaluation Process Fidelity

    DTIC Science & Technology

    2014-03-27

    and excluded from the model. The “Check SVR Loop” prevents programs from failing the SVR a second time. If a program has not previously failed the SVR...and Acquisition Management Plan Initiative. Briefing, Peterson AFB, CO: HQ AFSPC/A5X, 2011. Gilmore, Michael J., Key Issues Causing Program Delays

  8. Deficiency of Huntingtin Has Pleiotropic Effects in the Social Amoeba Dictyostelium discoideum

    PubMed Central

    Myre, Michael A.; Lumsden, Amanda L.; Thompson, Morgan N.; Wasco, Wilma; MacDonald, Marcy E.; Gusella, James F.

    2011-01-01

    Huntingtin is a large HEAT repeat protein first identified in humans, where a polyglutamine tract expansion near the amino terminus causes a gain-of-function mechanism that leads to selective neuronal loss in Huntington's disease (HD). Genetic evidence in humans and knock-in mouse models suggests that this gain-of-function involves an increase or deregulation of some aspect of huntingtin's normal function(s), which remains poorly understood. As huntingtin shows evolutionary conservation, a powerful approach to discovering its normal biochemical role(s) is to study the effects caused by its deficiency in a model organism with a short life-cycle that comprises both cellular and multicellular developmental stages. To facilitate studies aimed at detailed knowledge of huntingtin's normal function(s), we generated a null mutant of hd, the HD ortholog in Dictyostelium discoideum. Dictyostelium cells lacking endogenous huntingtin were viable but during development did not exhibit the typical polarized morphology of Dictyostelium cells, streamed poorly to form aggregates by accretion rather than chemotaxis, showed disorganized F-actin staining, exhibited extreme sensitivity to hypoosmotic stress, and failed to form EDTA-resistant cell–cell contacts. Surprisingly, chemotactic streaming could be rescued in the presence of the bivalent cations Ca2+ or Mg2+ but not pulses of cAMP. Although hd − cells completed development, it was delayed and proceeded asynchronously, producing small fruiting bodies with round, defective spores that germinated spontaneously within a glassy sorus. When developed as chimeras with wild-type cells, hd − cells failed to populate the pre-spore region of the slug. In Dictyostelium, huntingtin deficiency is compatible with survival of the organism but renders cells sensitive to low osmolarity, which produces pleiotropic cell autonomous defects that affect cAMP signaling and as a consequence development. 
Thus, Dictyostelium provides a novel haploid organism model for genetic, cell biological, and biochemical studies to delineate the functions of the HD protein. PMID:21552328

  9. 76 FR 10288 - Airworthiness Directives; The Boeing Company Model 767-200, -300, -300F, and -400ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-24

    ... and corrosion to an area within five inches of the fail-safe strap. Revision 2 of this service.... The existing AD currently requires inspections to detect cracking or corrosion of the fail-safe straps... corrective actions. Since we issued that AD, we have received additional reports of cracks in 51 fail-safe...

  10. Mind and body: concepts of human cognition, physiology and false belief in children with autism or typical development.

    PubMed

    Peterson, Candida C

    2005-08-01

    This study examined theory of mind (ToM) and concepts of human biology (eyes, heart, brain, lungs and mind) in a sample of 67 children, including 25 high functioning children with autism (age 6-13), plus age-matched and preschool comparison groups. Contrary to Baron-Cohen [1989, Journal of Autism and Developmental Disorders, 19(4), 579-600], most children with autism correctly understood the functions of the brain (84%) and the mind (64%). Their explanations were predominantly mentalistic. They outperformed typically developing preschoolers in understanding inner physiological (heart, lungs) and cognitive (brain, mind) systems, and scored as high as age-matched typical children. Yet, in line with much previous ToM research, most children with autism (60%) failed false belief, and their ToM performance was unrelated to their understanding of human biology. Results were discussed in relation to neurobiological and social-experiential accounts of the ToM deficit in autism.

  11. A dynamic magnetic tension force as the cause of failed solar eruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, Clayton E.; Yamada, Masaaki; Ji, Hantao

    Coronal mass ejections are solar eruptions driven by a sudden release of magnetic energy stored in the Sun's corona. In many cases, this magnetic energy is stored in long-lived, arched structures called magnetic flux ropes. When a flux rope destabilizes, it can either erupt and produce a coronal mass ejection or fail and collapse back towards the Sun. The prevailing belief is that the outcome of a given event is determined by a magnetohydrodynamic force imbalance called the torus instability. This belief is challenged, however, by observations indicating that torus-unstable flux ropes sometimes fail to erupt. This contradiction has not yet been resolved because of a lack of coronal magnetic field measurements and the limitations of idealized numerical modelling. In this paper, we report the results of a laboratory experiment that reveal a previously unknown eruption criterion below which torus-unstable flux ropes fail to erupt. We find that such 'failed torus' events occur when the guide magnetic field (that is, the ambient field that runs toroidally along the flux rope) is strong enough to prevent the flux rope from kinking. Under these conditions, the guide field interacts with electric currents in the flux rope to produce a dynamic toroidal field tension force that halts the eruption. Lastly, this magnetic tension force is missing from existing eruption models, which is why such models cannot explain or predict failed torus events.

  12. A dynamic magnetic tension force as the cause of failed solar eruptions

    DOE PAGES

    Myers, Clayton E.; Yamada, Masaaki; Ji, Hantao; ...

    2015-12-23

    Coronal mass ejections are solar eruptions driven by a sudden release of magnetic energy stored in the Sun's corona. In many cases, this magnetic energy is stored in long-lived, arched structures called magnetic flux ropes. When a flux rope destabilizes, it can either erupt and produce a coronal mass ejection or fail and collapse back towards the Sun. The prevailing belief is that the outcome of a given event is determined by a magnetohydrodynamic force imbalance called the torus instability. This belief is challenged, however, by observations indicating that torus-unstable flux ropes sometimes fail to erupt. This contradiction has not yet been resolved because of a lack of coronal magnetic field measurements and the limitations of idealized numerical modelling. In this paper, we report the results of a laboratory experiment that reveal a previously unknown eruption criterion below which torus-unstable flux ropes fail to erupt. We find that such 'failed torus' events occur when the guide magnetic field (that is, the ambient field that runs toroidally along the flux rope) is strong enough to prevent the flux rope from kinking. Under these conditions, the guide field interacts with electric currents in the flux rope to produce a dynamic toroidal field tension force that halts the eruption. Lastly, this magnetic tension force is missing from existing eruption models, which is why such models cannot explain or predict failed torus events.

  13. Prediction of particulate loading in exhaust from fabric filter baghouses with one or more failed bags

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenjun Qin; Manuel Dekermenjian; Richard J. Martin

    2006-08-15

    Loss of filtration efficiency in a fabric filter baghouse is typically caused by bag failure, in one form or another. The degree of such failure can be as minor as a pinhole leak or as major as a fully involved baghouse fire. In some cases, local air pollution regulations or federal hazardous waste laws may require estimation of the total quantity of particulate matter released to the environment as a result of such failures. In this paper, a technique is presented for computing the dust loading in the baghouse exhaust when one or more bags have failed. The algorithm developed is shown to be an improvement over a previously published result, which requires empirical knowledge of the variation in baghouse pressure differential with bag failures. An example calculation is presented for a baghouse equipped with 200 bags. The prediction shows that a small percentage of failed bags can cause a relatively large proportion of the gas flow to bypass the active bags, which, in turn, leads to high outlet dust loading and low overall collection efficiency from the baghouse. 10 refs., 5 figs., 3 tabs.
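
    The flow-split effect described above can be illustrated with a simple parallel-resistance sketch: all bags see the same pressure differential, so flow through each bag is inversely proportional to its resistance. The resistance values, efficiency, and leak model below are illustrative assumptions, not the paper's algorithm.

```python
def outlet_loading(n_bags, n_failed, r_active=1.0, r_failed=0.02,
                   inlet_loading=10.0, active_efficiency=0.999):
    """Toy parallel-flow model of a baghouse with failed bags. A failed bag
    is treated as a low-resistance leak that collects no dust; intact bags
    filter at a fixed efficiency. All parameter values are assumptions for
    illustration."""
    g_active = (n_bags - n_failed) / r_active  # total conductance, intact bags
    g_failed = n_failed / r_failed             # total conductance, failed bags
    bypass_fraction = g_failed / (g_active + g_failed)
    outlet = inlet_loading * (bypass_fraction
                              + (1 - bypass_fraction) * (1 - active_efficiency))
    return bypass_fraction, outlet

frac, out = outlet_loading(n_bags=200, n_failed=2)
print(round(frac, 3), round(out, 2))  # 1% failed bags divert roughly a third of the flow
```

    Even under these crude assumptions, the sketch reproduces the abstract's qualitative conclusion: a small percentage of failed bags diverts a disproportionately large share of the gas flow around the active bags, collapsing overall collection efficiency.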

  14. Posttransplant lymphoproliferative disorder presenting as a small bowel obstruction in a patient with pancreas transplantation alone

    PubMed Central

    Kruel, Cleber R.; Shiller, S. Michelle; Anthony, Tiffany L.; Goldstein, Robert M.; Kim, Peter T. W.; Levy, Marlon F.; McKenna, Gregory J.; Onaca, Nicholas; Testa, Giuliano; Klintmalm, Goran B.

    2014-01-01

    Posttransplant lymphoproliferative disorder (PTLD) is a well-known complication associated with the transplant recipient. We chronicle a case of PTLD in a failed graft presenting as a small bowel obstruction in a pancreas-only transplant patient. While typical symptoms may be elusive in the complex immunosuppressed patient, graft pain along with persistent graft pancreatitis and a positive Epstein-Barr viremia should raise suspicion for an underlying PTLD. PMID:25484508

  15. Force to Fail Reactions With Monoethanolamine: Application to the Explosive Destruction System

    DTIC Science & Technology

    2014-02-01

    include good solvent properties for agents, miscibility with water, noncorrosivity to stainless steel under typical EDS operating conditions, and low...50%. However, when the HD loading was ≥50%, noticeable amounts of heat were generated and white fumes were observed to form when the reagent was...heat and fumes were generated when the MEA was added to the HD. At loadings ≥ 80%, the neutralent became so viscous it could not be stirred

  16. Modulation of the Foreign Body Reaction for Implants in the Subcutaneous Space: Microdialysis Probes as Localized Drug Delivery/Sampling Devices

    PubMed Central

    Mou, Xiaodun; Lennartz, Michelle R; Loegering, Daniel J; Stenken, Julie A

    2011-01-01

    Modulation of the foreign body reaction is considered to be an important step toward creation of implanted sensors with reliable long-term performance. In this work, microdialysis probes were implanted into the subcutaneous space of Sprague-Dawley rats. The probe performance was evaluated by comparing collected endogenous glucose concentrations with internal standard calibration (2-deoxyglucose, antipyrine, and vitamin B12). Probes were tested until failure, which for this work was defined as loss of fluid flow. In order to determine the effect of fibrous capsule formation on probe function, monocyte chemoattractant protein-1/CC chemokine ligand 2 (MCP-1/CCL2) was delivered locally via the probe to increase capsule thickness and dexamethasone 21-phosphate was delivered to reduce capsule thickness. Probes delivering MCP-1 had a capsule that was twice the thickness (500–600 μm) of control probes (200–225 μm) and typically failed 2 days earlier than control probes. Probes delivering dexamethasone 21-phosphate had more fragile capsules and the probes typically failed 2 days later than controls. Unexpectedly, extraction efficiency and collected glucose concentrations exhibited minor differences between groups. This is an interesting result in that the foreign body capsule formation was related to the duration of probe function but did not consistently relate to probe calibration. PMID:21722577

  17. Towards a climate-dependent paradigm of ammonia emission and deposition

    PubMed Central

    Sutton, Mark A.; Reis, Stefan; Riddick, Stuart N.; Dragosits, Ulrike; Nemitz, Eiko; Theobald, Mark R.; Tang, Y. Sim; Braban, Christine F.; Vieno, Massimo; Dore, Anthony J.; Mitchell, Robert F.; Wanless, Sarah; Daunt, Francis; Fowler, David; Blackall, Trevor D.; Milford, Celia; Flechard, Chris R.; Loubet, Benjamin; Massad, Raia; Cellier, Pierre; Personne, Erwan; Coheur, Pierre F.; Clarisse, Lieven; Van Damme, Martin; Ngadi, Yasmine; Clerbaux, Cathy; Skjøth, Carsten Ambelas; Geels, Camilla; Hertel, Ole; Wichink Kruit, Roy J.; Pinder, Robert W.; Bash, Jesse O.; Walker, John T.; Simpson, David; Horváth, László; Misselbrook, Tom H.; Bleeker, Albert; Dentener, Frank; de Vries, Wim

    2013-01-01

    Existing descriptions of bi-directional ammonia (NH3) land–atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale the main NH3 emission terms. While this approach has successfully simulated the main spatial patterns on local to global scales, it fails to address the environment- and climate-dependence of emissions. To handle these issues, we outline the basis for a new modelling paradigm where both NH3 emissions and deposition are calculated online according to diurnal, seasonal and spatial differences in meteorology. We show how measurements reveal a strong, but complex pattern of climatic dependence, which is increasingly being characterized using ground-based NH3 monitoring and satellite observations, while advances in process-based modelling are illustrated for agricultural and natural sources, including a global application for seabird colonies. A future architecture for NH3 emission–deposition modelling is proposed that integrates the spatio-temporal interactions, and provides the necessary foundation to assess the consequences of climate change. Based on available measurements, a first empirical estimate suggests that 5°C warming would increase emissions by 42 per cent (28–67%). Together with increased anthropogenic activity, global NH3 emissions may increase from 65 (45–85) Tg N in 2008 to reach 132 (89–179) Tg by 2100. PMID:23713128

  18. Towards a climate-dependent paradigm of ammonia emission and deposition.

    PubMed

    Sutton, Mark A; Reis, Stefan; Riddick, Stuart N; Dragosits, Ulrike; Nemitz, Eiko; Theobald, Mark R; Tang, Y Sim; Braban, Christine F; Vieno, Massimo; Dore, Anthony J; Mitchell, Robert F; Wanless, Sarah; Daunt, Francis; Fowler, David; Blackall, Trevor D; Milford, Celia; Flechard, Chris R; Loubet, Benjamin; Massad, Raia; Cellier, Pierre; Personne, Erwan; Coheur, Pierre F; Clarisse, Lieven; Van Damme, Martin; Ngadi, Yasmine; Clerbaux, Cathy; Skjøth, Carsten Ambelas; Geels, Camilla; Hertel, Ole; Wichink Kruit, Roy J; Pinder, Robert W; Bash, Jesse O; Walker, John T; Simpson, David; Horváth, László; Misselbrook, Tom H; Bleeker, Albert; Dentener, Frank; de Vries, Wim

    2013-07-05

    Existing descriptions of bi-directional ammonia (NH3) land-atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale the main NH3 emission terms. While this approach has successfully simulated the main spatial patterns on local to global scales, it fails to address the environment- and climate-dependence of emissions. To handle these issues, we outline the basis for a new modelling paradigm where both NH3 emissions and deposition are calculated online according to diurnal, seasonal and spatial differences in meteorology. We show how measurements reveal a strong, but complex pattern of climatic dependence, which is increasingly being characterized using ground-based NH3 monitoring and satellite observations, while advances in process-based modelling are illustrated for agricultural and natural sources, including a global application for seabird colonies. A future architecture for NH3 emission-deposition modelling is proposed that integrates the spatio-temporal interactions, and provides the necessary foundation to assess the consequences of climate change. Based on available measurements, a first empirical estimate suggests that 5°C warming would increase emissions by 42 per cent (28-67%). Together with increased anthropogenic activity, global NH3 emissions may increase from 65 (45-85) Tg N in 2008 to reach 132 (89-179) Tg by 2100.
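
    The climate term quoted in the abstract (a 42 per cent emission increase per 5°C of warming) can be expressed as a simple scaling function. Compounding the factor exponentially across warming increments is an assumption here; the abstract gives only the single-step figure, and its 132 Tg projection for 2100 also folds in increased anthropogenic activity, which this sketch does not model.

```python
def climate_scaled_emission(baseline_tg, warming_c, increase_per_5c=0.42):
    """Scale an NH3 emission estimate (Tg N) by the empirical climate
    dependence quoted in the abstract: +42% per 5 degrees C, compounded
    exponentially (the compounding is an assumption)."""
    return baseline_tg * (1.0 + increase_per_5c) ** (warming_c / 5.0)

# 2008 baseline of 65 Tg N under 5 degrees C of warming: 65 * 1.42, about 92.3 Tg
print(round(climate_scaled_emission(65.0, 5.0), 1))
```
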

  19. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data.

    PubMed

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J

    2014-07-01

    High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.
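
    A toy example shows why censoring matters (this illustrates the problem only, not the paper's Gaussian process latent variable method): values below the detection limit come back as failed reactions, and naively substituting the limit for them biases summary statistics upward. The distribution parameters and detection limit below are hypothetical.

```python
import random

# Simulate true expression values, censor them at a hypothetical detection
# limit, and compare the true mean with the mean after naive imputation.
rng = random.Random(0)
true_values = [rng.gauss(10.0, 4.0) for _ in range(10000)]
limit = 8.0  # hypothetical detection limit

observed = [v if v > limit else limit for v in true_values]  # naive imputation
true_mean = sum(true_values) / len(true_values)
naive_mean = sum(observed) / len(observed)
print(round(true_mean, 2), round(naive_mean, 2))  # naive mean is biased upward
```

    A censoring-aware model instead treats each failed reaction as "somewhere below the limit", which is what the probabilistic PCA approach above does via an appropriate noise model.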

  20. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data

    PubMed Central

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J.

    2014-01-01

    Motivation: High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. Results: We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. Availability and implementation: The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. Contact: fbuettner.phys@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 
PMID:24618470

  1. 76 FR 38074 - Airworthiness Directives; The Boeing Company Model 747-100, 747-100B, 747-200B, 747-200C, 747...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ... withstand fail-safe loads. DATES: We must receive comments on this proposed AD by August 15, 2011. ADDRESSES... pressurization and the inability of the airplane fuselage to withstand fail-safe loads. Actions Since Existing AD... withstand fail-safe loads. Compliance (f) You are responsible for having the actions required by this AD...

  2. 77 FR 26663 - Airworthiness Directives; The Boeing Company Model 767-200, -300, -300F, and -400ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... removing corrosion from fail-safe straps. We have received Boeing Service Bulletin 767-53A0100, Revision 3.... That AD currently requires inspections to detect cracking or corrosion of the fail-safe straps between... cracks in 51 fail-safe straps on 41 airplanes; we have also received a report of a crack found in the ``T...

  3. Leadership Development: A Senior Leader Case Study

    DTIC Science & Technology

    2014-10-01

LIFE model Element Investigative Question Strategy How does (development program) posture (or fail to posture) leaders to meet organizational... Management How does (development program) adequately posture (or fail to posture) officer talent capable of filling talent gaps within the... LIFE model in figure 1 stems from conceptualizing and integrating elements of leadership development in the work of Stephen Cohen, Lisa Gabel

  4. Review of the damage mechanism in wind turbine gearbox bearings under rolling contact fatigue

    NASA Astrophysics Data System (ADS)

    Su, Yun-Shuai; Yu, Shu-Rong; Li, Shu-Xin; He, Yan-Ni

    2017-12-01

Wind turbine gearbox bearings fail with a service life much shorter than the designed life. Gearbox bearings are subjected to rolling contact fatigue (RCF) and they are observed to fail due to axial cracking, surface flaking, and the formation of white etching areas (WEAs). The current study reviewed these three typical failure modes. The underlying dominant mechanisms were discussed with emphasis on the formation mechanism of WEAs. Although numerous studies have been carried out, the formation of WEAs remains unclear. The prevailing mechanism of the rubbing of crack faces that generates WEAs was questioned by the authors. WEAs were compared with adiabatic shear bands (ASBs) generated in high strain rate deformation in terms of microstructural compositions, grain refinement, and formation mechanism. Results indicate that a number of similarities exist between them. However, substantial evidence is required to verify whether or not WEAs and ASBs are the same matter.

  5. Blind restoration of retinal images degraded by space-variant blur with adaptive blur estimation

    NASA Astrophysics Data System (ADS)

Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Šroubek, Filip

    2013-11-01

Retinal images are often degraded with a blur that varies across the field of view. Because traditional deblurring algorithms assume the blur to be space-invariant, they typically fail in the presence of space-variant blur. In this work we consider the blur to be both unknown and space-variant. To carry out the restoration, we assume that in small regions the space-variant blur can be approximated by a space-invariant point-spread function (PSF). However, instead of deblurring the image on a per-patch basis, we extend individual PSFs by linear interpolation and perform a global restoration. Because the blind estimation of local PSFs may fail, we propose a strategy for the identification of valid local PSFs and perform interpolation to obtain the space-variant PSF. The method was tested on artificial and real degraded retinal images. Results show significant improvement in the visibility of subtle details like small blood vessels.
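The interpolation idea can be sketched as follows. This is a minimal illustration, not the authors' implementation, and the PSF sizes and widths are made up: a space-variant blur is approximated as a weighted sum of space-invariant convolutions, with the weights varying linearly across the image width:

```python
import numpy as np
from scipy.ndimage import convolve

def space_variant_blur(img, psf_left, psf_right):
    """Approximate a space-variant blur as sum_k H_k(w_k * img), where the
    weights w_k interpolate linearly between two local PSFs."""
    h, w = img.shape
    w_right = np.tile(np.linspace(0.0, 1.0, w), (h, 1))  # grows left -> right
    w_left = 1.0 - w_right
    return (convolve(img * w_left, psf_left, mode="reflect")
            + convolve(img * w_right, psf_right, mode="reflect"))

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()   # normalized so the blur conserves intensity

# two impulses: the left one should stay sharper than the right one
img = np.zeros((64, 64))
img[32, 8] = 1.0
img[32, 56] = 1.0
out = space_variant_blur(img, gaussian_psf(9, 0.8), gaussian_psf(9, 2.5))
```

Deblurring then amounts to inverting this forward operator globally, rather than deconvolving each patch independently.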

  6. The failed state transition of the ATOLL source GRS 1724-308

    NASA Astrophysics Data System (ADS)

    Tarana, A.; Capitanio, F.; Cocchi, M.

    2018-07-01

    The 2004-2012 X-ray time history of the NS LMXB GRS 1724-308 shows, along with the episodic brightenings associated with the low-high state transitions typical of the ATOLL sources, a peculiar, long lasting (˜300 d) flaring event, observed in 2008. This rare episode, characterized by a high-flux hard state, has never been observed before for GRS 1724-308, and in any case is not common among ATOLL sources. We discuss here different hypotheses on the origin of this peculiar event that displayed the spectral signatures of a failed transition, similar in shape and duration to those rarely observed in black hole binaries. We also suggest the possibility that the atypical flare occurred in coincidence with a new rising phase of the 12-yr superorbital modulation that has been previously reported by other authors. The analysed data also confirm for GRS 1724-308 the already reported orbital period of ˜90 d.

  7. Application of Thermo-Mechanical Measurements of Plastic Packages for Reliability Evaluation of PEMS

    NASA Technical Reports Server (NTRS)

    Sharma, Ashok K.; Teverovsky, Alexander

    2004-01-01

Thermo-mechanical analysis (TMA) is typically employed for measurements of the glass transition temperature (Tg) and coefficients of thermal expansion (CTE) in molding compounds used in plastic encapsulated microcircuits (PEMs). Application of TMA measurements directly to PEMs allows anomalies to be revealed in deformation of packages with temperature, and thus indicates possible reliability concerns related to thermo-mechanical integrity and stability of the devices. In this work, temperature dependencies of package deformation were measured in several types of PEMs that failed environmental stress testing including temperature cycling, highly accelerated stress testing (HAST) in humid environments, and burn-in (BI) testing. Comparison of thermo-mechanical characteristics of packages and molding compounds in the failed parts allowed for explanation of the observed failures. The results indicate that TMA of plastic packages might be used for quality evaluation of PEMs intended for high-reliability applications.

  8. Failed less invasive lumbar spine surgery as a predictor of subsequent fusion outcomes.

    PubMed

    Gillard, Douglas M; Corenman, Donald S; Dornan, Grant J

    2014-04-01

It is not uncommon for patients to undergo less invasive spine surgery (LISS) prior to proceeding to lumbar fusion; however, the effect of failed LISS on subsequent fusion outcomes is relatively unknown. The aim of this study was to test the hypothesis that patients who suffered failed LISS would have inferior subsequent fusion outcomes when compared to patients who did not have prior LISS. After IRB approval, the registry of a spine surgeon was queried for consecutive patients who underwent fusion for intractable low back pain. The 47 qualifying patients were enrolled and split into two groups based upon a history of prior LISS: a prior surgery group (PSG) and a non-prior surgery group (nPSG). Typical postoperative outcome questionnaires, which were available for 80.9% of the patients (38/47) at an average time point of 40.4 months (range, 13.5-66.1 months), were comparatively analysed and failed to demonstrate a significant difference between the groups, e.g. PSG vs. nPSG: ODI 14.6 ± 10.9 vs. 17.2 ± 19.4 (p = 0.60); SF12-PCS 10.9 ± 11.0 vs. 8.7 ± 12.4 (p = 0.59); bNRS 3.0 (range −2 to 7) vs. 2.0 (range −3 to 8) (p = 0.91). Patient satisfaction, return-to-work rates, peri-operative complications, success of fusion, and rate of revision surgery were also not different. Although limited by size and retrospective design, the results of this rare investigation suggest that patients who experience a failed LISS prior to undergoing fusion will not suffer inferior fusion outcomes when compared to patients who did not undergo prior LISS.
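The ODI comparison quoted above can be checked against the reported summary statistics with Welch's t-test. Note the 19/19 group split below is a hypothetical assumption for illustration; the abstract states only that 38 patients had questionnaires:

```python
from scipy.stats import ttest_ind_from_stats

# ODI means and SDs as quoted in the abstract; group sizes are assumed.
t, p = ttest_ind_from_stats(mean1=14.6, std1=10.9, nobs1=19,
                            mean2=17.2, std2=19.4, nobs2=19,
                            equal_var=False)  # Welch's t-test
```

With this assumed split, the resulting p-value lands close to the reported p = 0.60, consistent with no significant group difference.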

  9. Reasons for failed ablation for idiopathic right ventricular outflow tract-like ventricular arrhythmias.

    PubMed

    Yokokawa, Miki; Good, Eric; Crawford, Thomas; Chugh, Aman; Pelosi, Frank; Latchamsetty, Rakesh; Jongnarangsin, Krit; Ghanbari, Hamid; Oral, Hakan; Morady, Fred; Bogun, Frank

    2013-08-01

The right ventricular outflow tract (RVOT) is the most common site of origin of ventricular arrhythmias (VAs) in patients with idiopathic VAs. A left bundle branch block, inferior axis morphology arrhythmia is the hallmark of RVOT arrhythmias. VAs from other sites of origin can mimic RVOT VAs, and ablation in the RVOT typically fails for these VAs. The purpose of this study was to analyze the reasons for failed ablation of RVOT-like VAs. Among a consecutive series of 197 patients with an RVOT-like electrocardiographic (ECG) morphology who were referred for ablation, 38 patients (13 men; age 46 ± 14 years; left ventricular ejection fraction 47% ± 14%) in whom a prior procedure had failed within the RVOT underwent a second ablation procedure. ECG characteristics of the VA were compared to a consecutive series of 50 patients with RVOT VAs. The origin of the VA was identified in 95% of the patients. In 28 of 38 (74%) patients, the arrhythmia origin was not in the RVOT. The VA originated from intramural sites (n = 8, 21%), the pulmonary arteries (n = 7, 18%), the aortic cusps (n = 6, 16%), and the epicardium (n = 5, 13%). The origin was within the RVOT in 10 (26%) patients. In 2 (5%) patients, the origin could not be identified despite biventricular, aortic, and epicardial mapping. The VA was eliminated in 34 of 38 (89%) patients with repeat procedures. The ECG features of patients with failed RVOT-like arrhythmias were different from the characteristics of RVOT arrhythmias. In patients in whom ablation of a VA with an RVOT-like appearance fails, mapping of the pulmonary artery, the aortic cusps, the epicardium, and the left ventricular outflow tract will help identify the correct site of origin. The 12-lead ECG is helpful in differentiating these VAs from RVOT VAs. Copyright © 2013 Heart Rhythm Society. All rights reserved.

  10. Using Machine Learning in Adversarial Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren Leon Davis

Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game-theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' responses and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reactions, with the goal of computing an optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red-teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.

  11. Experimental and mathematical modeling of flow in headboxes

    NASA Astrophysics Data System (ADS)

    Shariati, Mohammad Reza

The fluid flow patterns in a paper-machine headbox have a strong influence on the quality of the paper produced by the machine. Due to increasing demand for high quality paper there is a need to investigate the details of the fluid flow in the paper machine headbox. The objective of this thesis is to use experimental and computational methods of modeling the flow inside a typical headbox in order to evaluate and understand the mean flow patterns and turbulence created there. In particular, spatial variations of the mean flow and of the turbulence quantities and the turbulence generated secondary flows are studied. In addition to the flow inside the headbox, the flow leaving the slice is also modeled both experimentally and computationally. Comparison of the experimental and numerical results indicated that streamwise mean components of the velocities in the headbox are predicted well by all the turbulence models considered in this study. However, the standard k-epsilon model and the algebraic turbulence models fail to predict the turbulence quantities accurately. The standard k-epsilon model also fails to predict the direction and magnitude of the secondary flows. Significant improvements in the k-epsilon model predictions were achieved when the turbulence production term was artificially set to zero. This is justified by observations of the turbulent velocities from the experiments and by a consideration of the form of the kinetic energy equation. A better estimation of the Reynolds normal stress distribution and the degree of anisotropy of turbulence was achieved using the Reynolds stress turbulence model. Careful examination of the measured turbulence velocity results shows that after the initial decay of the turbulence in the headbox, there is a short region close to the exit, but inside the headbox, where the turbulent kinetic energy actually increases as a result of the distortion imposed by the contraction.
The turbulence energy quickly resumes its decay in the free jet after the headbox. The overall conclusion from this thesis, obtained by comparison of experimental and computational simulations of the flow in a headbox, is that numerical simulations show great promise for predictions of headbox flows. Mean velocities and turbulence characteristics can now be predicted with fair accuracy by careful use of specialized turbulence models. Standard engineering turbulence models, such as the k-epsilon model and its immediate relatives, should not be used to estimate the turbulence quantities essential for predicting pulp fiber dispersion within the contracting region and free jet of a headbox, particularly when the overall contraction ratio is greater than about five. (Abstract shortened by UMI.)
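For reference, the production term that was "artificially set to zero" appears in the transport equation for the turbulent kinetic energy k. The form below is the standard textbook statement of the k-epsilon model, not reproduced from the thesis:

```latex
\frac{Dk}{Dt}
  = \underbrace{\nu_t\!\left(\frac{\partial U_i}{\partial x_j}
      + \frac{\partial U_j}{\partial x_i}\right)
      \frac{\partial U_i}{\partial x_j}}_{P_k\ (\text{production})}
  \;-\; \varepsilon
  \;+\; \frac{\partial}{\partial x_j}
        \!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
        \frac{\partial k}{\partial x_j}\right],
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon}
```

In a strongly contracting, nearly irrotational flow, the eddy-viscosity form of P_k is known to overestimate production under rapid distortion, which is consistent with the improvement obtained by suppressing it.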

  12. A vapourized Δ(9)-tetrahydrocannabinol (Δ(9)-THC) delivery system part II: comparison of behavioural effects of pulmonary versus parenteral cannabinoid exposure in rodents.

    PubMed

    Manwell, Laurie A; Ford, Brittany; Matthews, Brittany A; Heipel, Heather; Mallet, Paul E

    2014-01-01

    Studies of the rewarding and addictive properties of cannabinoids using rodents as animal models of human behaviour often fail to replicate findings from human studies. Animal studies typically employ parenteral routes of administration, whereas humans typically smoke cannabis, thus discrepancies may be related to different pharmacokinetics of parenteral and pulmonary routes of administration. Accordingly, a novel delivery system of vapourized Δ(9)-tetrahydrocannabinol (Δ(9)-THC) was developed and assessed for its pharmacokinetic, pharmacodynamic, and behavioural effects in rodents. A commercially available vapourizer was used to assess the effects of pulmonary (vapourized) administration of Δ(9)-THC and directly compared to parenteral (intraperitoneal, IP) administration of Δ(9)-THC. Sprague-Dawley rats were exposed to pure Δ(9)-THC vapour (1, 2, 5, 10, and 20mg/pad), using a Volcano® vapourizing device (Storz and Bickel, Germany) or IP-administered Δ(9)-THC (0.1, 0.3, 0.5, 1.0mg/kg), and drug effects on locomotor activity, food and water consumption, and cross-sensitization to morphine (5mg/kg) were measured. Vapourized Δ(9)-THC significantly increased feeding during the first hour following exposure, whereas IP-administered Δ(9)-THC failed to produce a reliable increase in feeding at all doses tested. Acute administration of 10mg of vapourized Δ(9)-THC induced a short-lasting stimulation in locomotor activity compared to control in the first of four hours of testing over 7days of repeated exposure; this chronic exposure to 10mg of vapourized Δ(9)-THC did not induce behavioural sensitization to morphine. These results suggest vapourized Δ(9)-THC administration produces behavioural effects qualitatively different from those induced by IP administration in rodents. Furthermore, vapourized Δ(9)-THC delivery in rodents may produce behavioural effects more comparable to those observed in humans. 
We conclude that some of the conflicting findings in animal and human cannabinoid studies may be related to pharmacokinetic differences associated with route of administration. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Eco-engineered rock pools: a concrete solution to biodiversity loss and urban sprawl in the marine environment

    NASA Astrophysics Data System (ADS)

    Firth, Louise B.; Browne, Keith A.; Knights, Antony M.; Hawkins, Stephen J.; Nash, Róisín

    2016-09-01

In coastal habitats artificial structures typically support lower biodiversity and can support greater numbers of non-native and opportunistic species than natural rocky reefs. Eco-engineering experiments are typically trialed to succeed; but arguably as much is learnt from failure as from success. Our goal was to trial a generic, cost-effective eco-engineering technique that could be incorporated into rock armouring anywhere in the world. Artificial rock pools were created from manipulated concrete between boulders on the exposed and sheltered sides of a causeway. Experimental treatments were installed in locations where they were expected to fail and compared to controls installed in locations in which they were expected to succeed. Control pools were created lower on the structure, where they were immersed on every tidal cycle; experimental pools were created above mean high water spring tide and were only immersed on spring tides. We hypothesised that lower and exposed pools would support significantly higher taxon and functional diversity than upper and sheltered pools. The concrete pools survived the severe winter storms of 2013/14. After 12 months, non-destructive sampling revealed significantly higher mean taxon and functional richness in lower pools than upper pools on the exposed side only. After 24 months the sheltered pools had become inundated with sediments, thus failing to function as rock pools as intended. Destructive sampling on the exposed side revealed significantly higher mean functional richness in lower than upper pools. However, a surprisingly high number of taxa colonised the upper pools, leading to no significant difference in mean taxon richness among shore heights. A high number of rare taxa in the lower pools led to total taxon richness being almost twice that of upper pools.
These findings highlight that, even when expected to fail, concrete pools supported diverse assemblages, thus representing an affordable, replicable means of enhancing biodiversity on a variety of artificial structures.

  14. Poromechanics of compressible charged porous media using the theory of mixtures.

    PubMed

Huyghe, J M; Molenaar, M M; Baaijens, F P T

    2007-10-01

Osmotic, electrostatic, and/or hydrational swellings are essential mechanisms in the deformation behavior of porous media, such as biological tissues, synthetic hydrogels, and clay-rich rocks. Present theories are restricted to incompressible constituents. This assumption typically fails for bone, in which electrokinetic effects are closely coupled to deformation. An electrochemomechanical formulation of quasistatic finite deformation of compressible charged porous media is derived from the theory of mixtures. The model consists of a compressible charged porous solid saturated with a compressible ionic solution. Four constituents following different kinematic paths are identified: a charged solid and three streaming constituents carrying either a positive, negative, or no electrical charge, which are the cations, anions, and fluid, respectively. The finite deformation model is reduced to infinitesimal theory. In the limiting case without ionic effects, the presented model is consistent with Biot's theory. Viscous drag compression is computed under closed circuit and open circuit conditions. Viscous drag compression is shown to be independent of the storage modulus. A compressible version of the electrochemomechanical theory is formulated. Using material parameter values for bone, the theory predicts a substantial influence of density changes on a viscous drag compression simulation. In the context of quasistatic deformations, conflicts between poromechanics and mixture theory are only semantic in nature.
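The limiting case mentioned above can be stated explicitly. In linear isotropic poroelasticity, Biot's constitutive relations take the textbook form (not reproduced from the paper):

```latex
\sigma_{ij} = 2G\,\varepsilon_{ij} + \lambda\,\varepsilon_{kk}\,\delta_{ij} - \alpha\,p\,\delta_{ij},
\qquad
\zeta = \alpha\,\varepsilon_{kk} + \frac{p}{M},
```

where G and λ are the drained elastic moduli, p the pore pressure, α the Biot-Willis coefficient, M the Biot modulus, and ζ the increment of fluid content; the mixture model above reduces to this form when the ionic constituents are dropped.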

  15. THE BINARY BLACK HOLE MODEL FOR MRK 231 BITES THE DUST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leighly, Karen M.; Terndrup, Donald M.; Gallagher, Sarah C.

    2016-09-20

Mrk 231 is a nearby quasar with an unusually red near-UV-to-optical continuum, generally explained as heavy reddening by dust. Yan et al. proposed that Mrk 231 is a milliparsec black hole binary with little intrinsic reddening. We show that if the observed FUV continuum is intrinsic, as assumed by Yan et al., it fails by a factor of about 100 in powering the observed strength of the near-infrared emission lines and the thermal near- and mid-infrared continuum. In contrast, the line and continuum strengths are typical for a reddened AGN spectral energy distribution (SED). We find that the He I*/Pβ ratio is sensitive to the SED for a one-zone model. If this sensitivity is maintained in general broad-line region models, then this ratio may prove a useful diagnostic for heavily reddened quasars. Analysis of archival Hubble Space Telescope STIS and Faint Object Camera data revealed evidence that the far-UV continuum emission is resolved on size scales of ∼40 pc. The lack of broad absorption lines in the far-UV continuum might be explained if it were not coincident with the central engine. One possibility is that it is the central engine continuum reflected from the receding wind on the far side of the quasar.

  16. Decarbonation in an intracratonic setting: Insight from petrological-thermomechanical modeling

    NASA Astrophysics Data System (ADS)

    Gonzalez, Christopher M.; Gorczyk, Weronika

    2017-08-01

Cratons form the stable core roots of the continental crust. Despite long-term stability, cratons have failed in the past. Cratonic destruction (e.g., North Atlantic Craton) due to chemical rejuvenation at the base of the lithosphere remains poorly constrained numerically. We use 2-D petrological-thermomechanical models to assess cratonic rifting characteristics and mantle CO2 degassing in the presence of a carbonated subcontinental lithospheric mantle (SCLM). We test two tectonothermal SCLM compositions: Archon (depleted) and Tecton (fertilized), using 2 wt % CO2 in the bulk composition to represent a metasomatized SCLM. We parameterize cratonic breakup via extensional duration (7-12 Ma; full breakup), tectonothermal age, TMoho (300-600°C), and crustal rheology. The two compositions with metasomatized SCLMs share similar rifting features and decarbonation trends during initial extension. However, we show long-term (>67 Ma) stability differences due to lithospheric density contrasts between SCLM compositions. The Tecton model shows convective removal and thinning of the metasomatized SCLM during failed rifting. The Archon composition remained stable, highlighting the primary role of SCLM density even when metasomatized at its base. In the short term, three failed-rifting characteristics emerge: failed rifting without decarbonation, failed rifting with decarbonation, and semifailed rifting with dry asthenospheric melting and decarbonation. Decarbonation trends were greatest in the failed rifts, reaching peak fluxes of 94 × 10⁴ kg m⁻³. Increased TMoho did not alter the effects of rifting or decarbonation. Lastly, we show mantle regions where decarbonation, mantle melting in the presence of carbonate, and preservation of carbonated mantle occur during rifting.

  17. The influence of tectonic inheritance on crustal extension style following failed subduction of continental crust: applications to metamorphic core complexes in Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Ellis, S. M.; Little, T.; Mizera, M.; Wallace, L. M.; Lavier, L.

    2017-12-01

    The structural, mechanical and geometric evolution of rifted continental crust depends on the lithospheric conditions in the region prior to the onset of extension. In areas where tectonic activity preceded rift initiation, structural and physical properties of the previous tectonic regime may be inherited by the rift and influence its development. Many continental rifts form and exhume metamorphic core complexes (MCCs), coherent exposures of deep crustal rocks which typically surface as arched or domed structures. MCCs are exhumed in regions where the faulted upper crust is displaced laterally from upwelling ductile material along a weak detachment fault. Some MCCs form during extensional inversion of a subduction thrust following failed subduction of continental crust, but the degree to which lithospheric conditions inherited from the preceding subduction phase control the extensional style in these systems remains unclear. For example, the Dayman Dome in Southeastern Papua New Guinea exposes prehnite-pumpellyite to greenschist facies rocks in a smooth 3 km-high dome exhumed with at least 24 km of slip along one main detachment normal fault, the Mai'iu Fault, which dips 21° at the surface. The extension driving this exhumation is associated with the cessation of northward subduction of Australian continental crust beneath the oceanic lithosphere of the Woodlark Plate. We use geodynamic models to explore the effect of pre-existing crustal structures inherited from the preceding subduction phase on the style of rifting. We show that different geometries and strengths of inherited subduction shear zones predict three distinct modes of subsequent rift development: 1) symmetric rifting by newly formed high-angle normal faults; 2) asymmetric rifting along a weak low-angle detachment fault extending from the surface to the brittle-ductile transition; and 3) extension along a rolling-hinge structure which exhumes deep crustal rocks in coherent rounded exposures. 
We propose the latter mode as an exhumation model for Dayman Dome and compare the model predictions to regional geophysical and geological evidence. Our models find that tectonically inherited subduction structures may strongly control subsequent extension style when the subduction thrust is weak and well-oriented for reactivation.

  18. Erosion Control and Environment Restoration Plan Development, Matagorda County, Texas. Phase 1: Preliminary Investigation

    DTIC Science & Technology

    2012-07-01

Matagorda Peninsula east of MCR where a thicker cover of sand with vegetated dunes can be observed. 2.8 Typical beach profile: Beach profile shape is a... clay bluffs on the beach face; small tidal range, defined in Chapter 2, tends to focus wave action on the bluff toe; breaking waves propel shell... toward the bluff, abrading the bluff toe; abrasion undercuts the bluff, causing large sections to fail; slope failure: cyclical wave loading on

  19. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways to monitor the safety of running freight trains in China. However, TFDS is responsible only for capturing, transmitting, and storing images, and fails to recognize faults automatically due to difficulties such as the diversity and complexity of faults and some low-quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system achieves high detection performance for PFRs in TFDS images. An average detection recall of 98.95% and precision of 100% are obtained, demonstrating high detection ability and robustness against various poor imaging situations.
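The quoted detection figures follow from the standard precision/recall definitions. The counts below are hypothetical, chosen only so the arithmetic reproduces the reported percentages:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# hypothetical counts: 1884 of 1904 PFRs detected, with no false alarms
p, r = precision_recall(tp=1884, fp=0, fn=20)
# -> precision 100%, recall 98.95%
```

A precision of 100% means every detected region was a true PFR; the recall figure measures how many true PFRs were found at all.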

  20. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
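The core Monte Carlo result can be reproduced in miniature. This is a hedged sketch, not the authors' simulation code, and the sample size, reliability, and effect size are illustrative: an outcome driven only by a latent construct T still yields "significant" incremental effects for a second indicator X when the covariate C measures T with error:

```python
import numpy as np
from scipy import stats

def spurious_rate(n=300, reliability=0.7, b=0.5, n_sims=300, seed=1):
    """Fraction of simulations where X looks 'incrementally valid' over an
    unreliable covariate C, although Y depends only on the construct T."""
    rng = np.random.default_rng(seed)
    lam = np.sqrt(reliability)            # loading giving unit-variance indicators
    hits = 0
    for _ in range(n_sims):
        t = rng.normal(size=n)            # latent construct
        y = b * t + rng.normal(size=n)    # outcome driven by T only
        c = lam * t + np.sqrt(1 - reliability) * rng.normal(size=n)  # covariate
        x = lam * t + np.sqrt(1 - reliability) * rng.normal(size=n)  # predictor
        X = np.column_stack([np.ones(n), c, x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 3)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        tval = beta[2] / np.sqrt(cov[2, 2])
        pval = 2 * stats.t.sf(abs(tval), df=n - 3)
        hits += pval < 0.05
    return hits / n_sims

rate = spurious_rate()
```

With these settings the spurious "incremental validity" rate comes out far above the nominal 5% level, echoing the paper's point that imperfect reliability of the controlled covariate, not small samples, drives the Type I error inflation.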

  1. Degradation Mechanisms of an Advanced Jet Engine Service-Retired TBC Component

    NASA Astrophysics Data System (ADS)

    Wu, Rudder T.; Osawa, Makoto; Yokokawa, Tadaharu; Kawagishi, Kyoko; Harada, Hiroshi

Current use of TBCs is subject to premature spallation failure mainly due to the formation of thermally grown oxides (TGOs). Although extensive research has been carried out to gain better understanding of the thermo-mechanical and thermo-chemical characteristics of TBCs, laboratory-scale studies and simulation tests are often carried out in conditions significantly different from the complex and extreme environment typical of a modern gas-turbine engine, and thus fail to truly model service conditions. In particular, the difference in oxygen partial pressure and the effects of contaminants present in the engine compartment have often been neglected. In this respect, an investigation is carried out to study the in-service degradation of an EB-PVD TBC-coated nozzle-guide vane. Several modes of degradation were observed, due to three factors: 1) presence of residual stresses induced by thermal-expansion mismatches, 2) evolution of bond coat microstructure and subsequent formation of oxide spinels, and 3) deposition of CMAS on the surface of the TBC.

  2. Shipbuilding Docks as Experimental Systems for Realistic Assessments of Anthropogenic Stressors on Marine Organisms

    PubMed Central

    Harding, Harry R.; Bunce, Tom; Birch, Fiona; Lister, Jessica; Spiga, Ilaria; Benson, Tom; Rossington, Kate; Jones, Diane; Tyler, Charles R.; Simpson, Stephen D.

    2017-01-01

Empirical investigations of the impacts of anthropogenic stressors on marine organisms are typically performed under controlled laboratory conditions, onshore mesocosms, or via offshore experiments with realistic (but uncontrolled) environmental variation. These approaches have merits, but onshore setups are generally small sized and fail to recreate natural stressor fields, whereas offshore studies are often compromised by confounding factors. We suggest the use of flooded shipbuilding docks to allow studying realistic exposure to stressors and their impacts on the intra- and interspecific responses of animals. Shipbuilding docks permit the careful study of groups of known animals, including the evaluation of their behavioral interactions, while enabling full control of the stressor and many environmental conditions. We propose that this approach could be used for assessing the impacts of prominent anthropogenic stressors, including chemicals, ocean warming, and sound. Results from shipbuilding-dock studies could allow improved parameterization of predictive models relating to the environmental risks and population consequences of anthropogenic stressors. PMID:29599545

  3. Planning Under Continuous Time and Resource Uncertainty: A Challenge for AI

    NASA Technical Reports Server (NTRS)

    Bresina, John; Dearden, Richard; Meuleau, Nicolas; Smith, David; Washington, Rich; Clancy, Daniel (Technical Monitor)

    2002-01-01

    There has been considerable work in AI on decision-theoretic planning and planning under uncertainty. Unfortunately, all of this work suffers from one or more of the following limitations: 1) it relies on very simple models of actions and time, 2) it assumes that uncertainty is manifested in discrete action outcomes, and 3) it is only practical for very small problems. For many real-world problems, these assumptions fail to hold. A case in point is planning the activities for a Mars rover. For this domain none of the above assumptions are valid: 1) actions can be concurrent and have differing durations, 2) there is uncertainty concerning action durations and consumption of continuous resources like power, and 3) typical daily plans involve on the order of a hundred actions. We describe the rover problem, discuss previous work on planning under uncertainty, and present a detailed, but very small, example illustrating some of the difficulties of finding good plans.

  4. Shipbuilding Docks as Experimental Systems for Realistic Assessments of Anthropogenic Stressors on Marine Organisms.

    PubMed

    Bruintjes, Rick; Harding, Harry R; Bunce, Tom; Birch, Fiona; Lister, Jessica; Spiga, Ilaria; Benson, Tom; Rossington, Kate; Jones, Diane; Tyler, Charles R; Radford, Andrew N; Simpson, Stephen D

    2017-09-01

    Empirical investigations of the impacts of anthropogenic stressors on marine organisms are typically performed under controlled laboratory conditions, in onshore mesocosms, or via offshore experiments with realistic (but uncontrolled) environmental variation. These approaches have merits, but onshore setups are generally small in scale and fail to recreate natural stressor fields, whereas offshore studies are often compromised by confounding factors. We suggest the use of flooded shipbuilding docks to enable realistic study of exposure to stressors and of their impacts on the intra- and interspecific responses of animals. Shipbuilding docks permit the careful study of groups of known animals, including the evaluation of their behavioral interactions, while enabling full control of the stressor and many environmental conditions. We propose that this approach could be used for assessing the impacts of prominent anthropogenic stressors, including chemicals, ocean warming, and sound. Results from shipbuilding-dock studies could allow improved parameterization of predictive models relating to the environmental risks and population consequences of anthropogenic stressors.

  5. Microcanonical and resource-theoretic derivations of the thermal state of a quantum system with noncommuting charges

    PubMed Central

    Yunger Halpern, Nicole; Faist, Philippe; Oppenheim, Jonathan; Winter, Andreas

    2016-01-01

    The grand canonical ensemble lies at the core of quantum and classical statistical mechanics. A small system thermalizes to this ensemble while exchanging heat and particles with a bath. A quantum system may exchange quantities represented by operators that fail to commute. Whether such a system thermalizes and what form the thermal state has are questions about truly quantum thermodynamics. Here we investigate this thermal state from three perspectives. First, we introduce an approximate microcanonical ensemble. If this ensemble characterizes the system-and-bath composite, tracing out the bath yields the system's thermal state. Second, we argue that this state is the expected equilibrium point of typical dynamics. Finally, we define a resource-theory model for thermodynamic exchanges of noncommuting observables. Complete passivity—the inability to extract work from equilibrium states—implies the thermal state's form, too. Our work opens new avenues into equilibrium in the presence of quantum noncommutation. PMID:27384494
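    The thermal state that the three derivations converge on can be sketched in standard grand-canonical notation, with H the Hamiltonian, Q_j the (possibly noncommuting) charges, mu_j their chemical potentials, and beta the inverse temperature; this is the familiar generalized Gibbs form, written here as a hedged illustration rather than a restatement of the paper's derivation:

```latex
\gamma \;=\; \frac{e^{-\beta\left(H - \sum_j \mu_j Q_j\right)}}{Z},
\qquad
Z \;=\; \operatorname{Tr}\, e^{-\beta\left(H - \sum_j \mu_j Q_j\right)}
```

    When the Q_j fail to commute, the exponent cannot be diagonalized charge sector by charge sector, which is what makes deriving this form nontrivial.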

  6. Autonomous Component Health Management with Failed Component Detection, Identification, and Avoidance

    NASA Technical Reports Server (NTRS)

    Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.

    2004-01-01

    This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
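    As a minimal illustration of the prefilter idea described above, the sketch below reduces the scheme to a constant scalar state with dual-redundant sensors. The one-state plant, gain, and threshold are illustrative stand-ins, not the paper's 3rd-order design:

```python
def achm_step(x_est, z1, z2, gain=0.5, threshold=3.0):
    """One prefilter + fixed-gain filter update for a constant scalar state
    with dual-redundant sensors (a hypothetical 1-state stand-in for the
    paper's 3rd-order plant). A sensor whose residual against the
    model-based prediction exceeds `threshold` is flagged failed and
    excluded from the update."""
    prediction = x_est                      # model-based estimate: x stays constant
    healthy = [z for z in (z1, z2) if abs(z - prediction) <= threshold]
    if not healthy:                         # both sensors suspect: coast on the model
        return prediction, healthy
    z = sum(healthy) / len(healthy)         # preliminary estimate from the prefilter
    return prediction + gain * (z - prediction), healthy
```

    A hard-over failure (e.g. a reading of 55 against a prediction of 10) is excluded before the gain is applied, so it never corrupts the state estimate.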

  7. Geometric flow control of shear bands by suppression of viscous sliding

    PubMed Central

    Viswanathan, Koushik; Mahato, Anirban; Sundaram, Narayan K.; M'Saoubi, Rachid; Trumble, Kevin P.; Chandrasekar, Srinivasan

    2016-01-01

    Shear banding is a plastic flow instability with highly undesirable consequences for metals processing. While band characteristics have been well studied, general methods to control shear bands are presently lacking. Here, we use high-speed imaging and micro-marker analysis of flow in cutting to reveal the common fundamental mechanism underlying shear banding in metals. The flow unfolds in two distinct phases: an initiation phase followed by a viscous sliding phase in which most of the straining occurs. We show that the second sliding phase is well described by a simple model of two identical fluids being sheared across their interface. The equivalent shear band viscosity computed by fitting the model to experimental displacement profiles is very close in value to typical liquid metal viscosities. The observation of similar displacement profiles across different metals shows that specific microstructure details do not affect the second phase. This also suggests that the principal role of the initiation phase is to generate a weak interface that is susceptible to localized deformation. Importantly, by constraining the sliding phase, we demonstrate a material-agnostic method—passive geometric flow control—that effects complete band suppression in systems which otherwise fail via shear banding. PMID:27616920
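    The "two identical fluids sheared across their interface" model the authors fit has a classical closed form: momentum diffuses across the interface as in Stokes' first problem. The sketch below shows one standard realization of that profile; the function name and the parameters U (relative sliding speed) and nu (kinematic viscosity) are illustrative, not values from the paper:

```python
import math

def interface_velocity(y, t, U, nu):
    """Velocity across the interface between two identical fluids, one at
    rest (y < 0) and one moving at speed U (y > 0) at t = 0; viscous
    diffusion gives u(y, t) = (U/2) * (1 + erf(y / (2*sqrt(nu*t))))."""
    return 0.5 * U * (1.0 + math.erf(y / (2.0 * math.sqrt(nu * t))))
```

    Fitting a profile of this family to measured marker displacements is one way to back out an equivalent band viscosity, as the abstract describes.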

  8. Geometric flow control of shear bands by suppression of viscous sliding

    NASA Astrophysics Data System (ADS)

    Sagapuram, Dinakar; Viswanathan, Koushik; Mahato, Anirban; Sundaram, Narayan K.; M'Saoubi, Rachid; Trumble, Kevin P.; Chandrasekar, Srinivasan

    2016-08-01

    Shear banding is a plastic flow instability with highly undesirable consequences for metals processing. While band characteristics have been well studied, general methods to control shear bands are presently lacking. Here, we use high-speed imaging and micro-marker analysis of flow in cutting to reveal the common fundamental mechanism underlying shear banding in metals. The flow unfolds in two distinct phases: an initiation phase followed by a viscous sliding phase in which most of the straining occurs. We show that the second sliding phase is well described by a simple model of two identical fluids being sheared across their interface. The equivalent shear band viscosity computed by fitting the model to experimental displacement profiles is very close in value to typical liquid metal viscosities. The observation of similar displacement profiles across different metals shows that specific microstructure details do not affect the second phase. This also suggests that the principal role of the initiation phase is to generate a weak interface that is susceptible to localized deformation. Importantly, by constraining the sliding phase, we demonstrate a material-agnostic method, passive geometric flow control, that effects complete band suppression in systems which otherwise fail via shear banding.

  9. Connectivity disruption sparks explosive epidemic spreading.

    PubMed

    Böttcher, L; Woolley-Meza, O; Goles, E; Helbing, D; Herrmann, H J

    2016-04-01

    We investigate the spread of an infection or other malfunction of cascading nature when a system component can recover only if it remains reachable from a functioning central component. We consider the susceptible-infected-susceptible model, typical of mathematical epidemiology, on a network. Infection spreads from infected to healthy nodes, with the addition that infected nodes can only recover when they remain connected to a predefined central node, through a path that contains only healthy nodes. In this system, clusters of infected nodes will absorb their noninfected interior because no path exists between the central node and encapsulated nodes. This gives rise to the simultaneous infection of multiple nodes. Interestingly, the system converges to only one of two stationary states: either the whole population is healthy or it becomes completely infected. This simultaneous cluster infection can give rise to discontinuous jumps of different sizes in the number of failed nodes. Larger jumps emerge at lower infection rates. The network topology has an important effect on the nature of the transition: we observed hysteresis for networks with dominating local interactions. Our model shows how local spread can abruptly turn uncontrollable when it disrupts connectivity at a larger spatial scale.
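    A minimal synchronous sketch of the dynamics described above: infection spreads with probability beta, healthy nodes cut off from the central node are absorbed into the infected cluster, and an infected node may recover (with probability mu) only if it borders the healthy cluster still connected to the central node. The toy graph, rates, and synchronous update order are illustrative assumptions, not the paper's simulation setup:

```python
import random
from collections import deque

def reachable_healthy(adj, central, infected):
    """Healthy nodes reachable from the central node via healthy-only paths."""
    if central in infected:
        return set()
    seen, queue = {central}, deque([central])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen and v not in infected:
                seen.add(v)
                queue.append(v)
    return seen

def step(adj, infected, central, beta, mu, rng):
    """One synchronous update: spread, absorb cut-off healthy nodes, recover."""
    new_inf = set(infected)
    for u in infected:                      # infection with probability beta
        for v in adj[u]:
            if v not in new_inf and rng.random() < beta:
                new_inf.add(v)
    ok = reachable_healthy(adj, central, new_inf)
    new_inf.update(u for u in adj if u not in ok)   # encapsulated nodes fall at once
    ok = reachable_healthy(adj, central, new_inf)
    for u in sorted(new_inf):               # recovery only beside the healthy cluster
        if any(v in ok for v in adj[u]) and rng.random() < mu:
            new_inf.discard(u)
    return new_inf
```

    On a path 0-1-2-3 with central node 0, infecting node 1 alone cuts off nodes 2 and 3, which the absorption rule then infects simultaneously; this is the mechanism behind the discontinuous jumps in the number of failed nodes.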

  10. An Evaluation of a Phase-Lag Boundary Condition for Francis Hydroturbine Simulations Using a Pressure-Based Solver

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2014-11-01

    While the periodic boundary condition is useful for handling rotational symmetry in many axisymmetric geometries, its application fails for analysis of rotor-stator interaction (RSI) in multi-stage turbomachinery flow. The inadequacy arises from the underlying geometry where the blade counts per row differ, since the blade counts are crafted to deter the destructive harmonic forces of synchronous blade passing. Therefore, to achieve the computational advantage of modeling a single blade passage per row while preserving the integrity of the RSI, a phase-lag boundary condition is adapted to OpenFOAM® software's incompressible pressure-based solver. The phase-lag construct is accomplished through restating the implicit periodic boundary condition as a constant boundary condition that is updated at each time step with phase-shifted data from the coupled cells adjacent to the boundary. Its effectiveness is demonstrated using a typical Francis hydroturbine modeled as single- and double-passages with phase-lag boundary conditions. The evaluation of the phase-lag condition is based on the correspondence of the overall computational performance and the calculated flow parameters of the phase-lag simulations with those of a baseline full-wheel simulation. Funded in part by DOE Award Number: DE-EE0002667.
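    The core of the phase-lag construct, stripped of all CFD detail, is a boundary that imposes stored interior data shifted by the inter-blade passing period. The sketch below illustrates just that bookkeeping; `lag_steps` expressing the phase shift in whole time steps is a simplifying assumption (the actual solver implementation interpolates stored coupled-cell data):

```python
from collections import deque

class PhaseLagBoundary:
    """Minimal sketch of a phase-lag (time-shifted periodic) boundary:
    the value applied now is the coupled-cell value recorded `lag_steps`
    time steps ago, rather than the instantaneous periodic image."""

    def __init__(self, lag_steps, initial=0.0):
        self.history = deque([initial] * lag_steps, maxlen=lag_steps)

    def update(self, coupled_cell_value):
        """Record the current coupled-cell value; return the lagged value
        to impose on the boundary this time step."""
        lagged = self.history[0]
        self.history.append(coupled_cell_value)
        return lagged
```

    Updating the stored history every time step is what turns the implicit periodic condition into the "constant boundary condition that is updated at each time step" described above.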

  11. Approach and withdrawal motivation in schizophrenia: an examination of frontal brain asymmetric activity.

    PubMed

    Horan, William P; Wynn, Jonathan K; Mathis, Ian; Miller, Gregory A; Green, Michael F

    2014-01-01

    Although motivational disturbances are common in schizophrenia, their neurophysiological and psychological basis is poorly understood. This electroencephalography (EEG) study examined the well-established motivational direction model of asymmetric frontal brain activity in schizophrenia. According to this model, relative left frontal activity in the resting EEG reflects enhanced approach motivation tendencies, whereas relative right frontal activity reflects enhanced withdrawal motivation tendencies. Twenty-five schizophrenia outpatients and 25 healthy controls completed resting EEG assessments of frontal asymmetry in the alpha frequency band (8-12 Hz), as well as a self-report measure of behavioral activation and inhibition system (BIS/BAS) sensitivity. Patients showed an atypical pattern of differences from controls. On the EEG measure patients failed to show the left lateralized activity that was present in controls, suggesting diminished approach motivation. On the self-report measure, patients reported higher BIS sensitivity than controls, which is typically interpreted as heightened withdrawal motivation. EEG asymmetry scores did not significantly correlate with BIS/BAS scores or with clinical symptom ratings among patients. The overall pattern suggests a motivational disturbance in schizophrenia characterized by elements of both diminished approach and elevated withdrawal tendencies.

  12. Anomaly Detection Using an Ensemble of Feature Models

    PubMed Central

    Noto, Keith; Brodley, Carla; Slonim, Donna

    2011-01-01

    We present a new approach to semi-supervised anomaly detection. Given a set of training examples believed to come from the same distribution or class, the task is to learn a model that will be able to distinguish examples in the future that do not belong to the same class. Traditional approaches typically compare the position of a new data point to the set of “normal” training data points in a chosen representation of the feature space. For some data sets, the normal data may not have discernible positions in feature space, but do have consistent relationships among some features that fail to appear in the anomalous examples. Our approach learns to predict the values of training set features from the values of other features. After we have formed an ensemble of predictors, we apply this ensemble to new data points. To combine the contribution of each predictor in our ensemble, we have developed a novel, information-theoretic anomaly measure that our experimental results show selects against noisy and irrelevant features. Our results on 47 data sets show that for most data sets, this approach significantly improves performance over current state-of-the-art feature space distance and density-based approaches. PMID:22020249
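    The core idea, predicting each feature from the others and scoring new points by how badly those predictions fail, can be sketched with per-feature linear predictors. The linear models and the squared-error score below are simplified stand-ins (the paper combines its ensemble with an information-theoretic measure), and all names are illustrative:

```python
import numpy as np

def fit_feature_models(X):
    """For each feature j, fit a linear model predicting X[:, j]
    from the remaining features plus an intercept."""
    n, d = X.shape
    models = []
    for j in range(d):
        others = np.delete(X, j, axis=1)
        A = np.hstack([others, np.ones((n, 1))])
        w, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        models.append(w)
    return models

def anomaly_score(models, x):
    """Sum of squared errors of the per-feature predictors on one point;
    large when the point violates relationships the training data obeyed."""
    score = 0.0
    for j, w in enumerate(models):
        others = np.delete(x, j)
        pred = np.dot(np.append(others, 1.0), w)
        score += (x[j] - pred) ** 2
    return score
```

    A point that sits in a plausible region of feature space but breaks a learned relationship (here, feature 1 = 2 x feature 0) scores high, which is exactly the case position- and density-based detectors miss.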

  13. Analysis of Naval Facilities Engineering Command’s (NAVFAC) Contracting Processes Using the Contract Management Maturity Model (CMMM)

    DTIC Science & Technology

    2006-12-01

    Antecedents and Consequences of Failed Governance: the Enron Example. Corporate Governance, 5: 5. Garrett, G., & Rendon, R. 2005(a). Contract Management... applicable to NAVFAC even though NAVFAC is a government organization because NAVFAC faces competition analogous to the corporate world. If NAVFAC fails to provide adequate services, the ...

  14. 76 FR 19278 - Airworthiness Directives; The Boeing Company Model 747-100, 747-100B, 747-100B SUD, 747-200B, 747...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ... are proposing this AD to detect and correct cracking in the fail-safe interlayer of certain No. 2 and No. 3 glass windows, which could result in loss...

  15. Neural Correlates of Letter Reversal in Children and Adults

    PubMed Central

    Kalra, Priya; Yee, Debbie; Sinha, Pawan; Gabrieli, John D. E.

    2014-01-01

    Children often make letter reversal errors when first learning to read and write, even for letters whose reversed forms do not appear in normal print. However, the brain basis of such letter reversal in children learning to read is unknown. The present study compared the neuroanatomical correlates (via functional magnetic resonance imaging) and the electrophysiological correlates (via event-related potentials or ERPs) of this phenomenon in children, ages 5–12, relative to young adults. When viewing reversed letters relative to typically oriented letters, adults exhibited widespread occipital, parietal, and temporal lobe activations, including activation in the functionally localized visual word form area (VWFA) in left occipito-temporal cortex. Adults exhibited significantly greater activation than children in all of these regions; children only exhibited such activation in a limited frontal region. Similarly, on the P1 and N170 ERP components, adults exhibited significantly greater differences between typical and reversed letters than children, who failed to exhibit significant differences between typical and reversed letters. These findings indicate that adults distinguish typical and reversed letters in the early stages of specialized brain processing of print, but that children do not recognize this distinction during the early stages of processing. Specialized brain processes responsible for early stages of letter perception that distinguish between typical and reversed letters may develop slowly and remain immature even in older children who no longer produce letter reversals in their writing. PMID:24859328

  16. Modelling the tumour microenvironment in long-term microencapsulated 3D co-cultures recapitulates phenotypic features of disease progression.

    PubMed

    Estrada, Marta F; Rebelo, Sofia P; Davies, Emma J; Pinto, Marta T; Pereira, Hugo; Santo, Vítor E; Smalley, Matthew J; Barry, Simon T; Gualda, Emilio J; Alves, Paula M; Anderson, Elizabeth; Brito, Catarina

    2016-02-01

    3D cell tumour models are generated mainly in non-scalable culture systems, using bioactive scaffolds. Many of these models fail to reflect the complex tumour microenvironment and do not allow long-term monitoring of tumour progression. To overcome these limitations, we have combined alginate microencapsulation with agitation-based culture systems, to recapitulate and monitor key aspects of the tumour microenvironment and disease progression. Aggregates of MCF-7 breast cancer cells were microencapsulated in alginate, either alone or in combination with human fibroblasts, then cultured for 15 days. In co-cultures, the fibroblasts arranged themselves around the tumour aggregates creating distinct epithelial and stromal compartments. The presence of fibroblasts resulted in secretion of pro-inflammatory cytokines and deposition of collagen in the stromal compartment. Tumour cells established cell-cell contacts and polarised around small lumina in the interior of the aggregates. Over the culture period, there was a reduction in oestrogen receptor and membranous E-cadherin alongside loss of cell polarity, increased collective cell migration and enhanced angiogenic potential in co-cultures. These phenotypic alterations, typical of advanced stages of cancer, were not observed in the mono-cultures of MCF-7 cells. The proposed model system constitutes a new tool to study tumour-stroma crosstalk, disease progression and drug resistance mechanisms. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Corrective response times in a coordinated eye-head-arm countermanding task.

    PubMed

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
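    The race architecture described above can be sketched with LATER units: evidence rises linearly to a threshold at a normally distributed rate, so each unit's latency is threshold/rate. In the sketch below, a corrective response is launched only after the (expedited) stop process completes, following the best-ranked model; all rates and the stop-signal delay are illustrative numbers, not fitted values from the paper:

```python
import random

def later_sample(mu, sigma, theta, rng):
    """Latency of one LATER unit: linear rise to threshold `theta` at a
    rate drawn from N(mu, sigma); a non-positive rate means no response."""
    r = rng.gauss(mu, sigma)
    return theta / r if r > 0 else float("inf")

def countermanding_trial(ssd, rng, theta=1.0):
    """One countermanding trial: a go unit races a stop unit started at the
    stop-signal delay `ssd`. If go wins, an erroneous response occurs and
    the corrective response (CRT) is triggered only after the expedited
    stop process finishes."""
    go = later_sample(5.0, 1.0, theta, rng)            # go process
    stop = ssd + later_sample(8.0, 1.5, theta, rng)    # expedited stop process
    if stop <= go:
        return {"response": None, "crt": None}          # successful inhibition
    corrective = stop + later_sample(5.0, 1.0, theta, rng)
    return {"response": go, "crt": corrective}
```

    Because the corrective unit starts only at stop-process completion, every corrective response necessarily lags the erroneous one, reproducing the ordering RT < CRT the model is built to explain.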

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, Joey Shapiro; Cornish, Neil J.

    The Laser Interferometer Space Antenna (LISA) is designed to detect gravitational wave signals from astrophysical sources, including those from coalescing binary systems of compact objects such as black holes. Colliding galaxies have central black holes that sink to the center of the merged galaxy and begin to orbit one another and emit gravitational waves. Some galaxy evolution models predict that the binary black hole system will enter the LISA band with significant orbital eccentricity, while other models suggest that the orbits will already have circularized. Using a full 17-parameter waveform model that includes the effects of orbital eccentricity, spin precession, and higher harmonics, we investigate how well the source parameters can be inferred from simulated LISA data. Defining the reference eccentricity as the value one year before merger, we find that for typical LISA sources, it will be possible to measure the eccentricity to an accuracy of parts in a thousand. The accuracy with which the eccentricity can be measured depends only very weakly on the eccentricity, making it possible to distinguish circular orbits from those with very small eccentricities. LISA measurements of the orbital eccentricity can help constrain theories of galaxy mergers in the early universe. Failing to account for the eccentricity in the waveform modeling can lead to a loss of signal power and bias the estimation of parameters such as the black hole masses and spins.

  19. Material Modeling of Space Shuttle Leading Edge and External Tank Materials For Use in the Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    Carney, Kelly; Melis, Matthew; Fasanella, Edwin L.; Lyle, Karen H.; Gabrys, Jonathan

    2004-01-01

    Upon the commencement of the analytical effort to characterize the impact dynamics and damage of the Space Shuttle Columbia leading edge due to External Tank insulating foam, the necessity of creating analytical descriptions of these materials became evident. To that end, material models were developed of the leading edge thermal protection system, Reinforced Carbon-Carbon (RCC), and a low-density polyurethane foam, BX-250. Challenges in modeling the RCC include its extreme brittleness, the differing behavior in compression and tension, and the anisotropic fabric layup. These effects were successfully included in LS-DYNA Material Model 58, *MAT_LAMINATED_ COMPOSITE_ FABRIC. The differing compression and tension behavior was modeled using the available damage parameters. Each fabric layer was given an integration point in the shell element, and was allowed to fail independently. Comparisons were made to static test data and coupon ballistic impact tests before being utilized in the full scale analysis. The foam's properties were typical of elastic automotive foams, and LS-DYNA Material Model 83, *MAT_FU_CHANG_FOAM, was successfully used to model its behavior. Material parameters defined included strain rate dependent stress-strain curves for both loading and un-loading, and for both compression and tension. This model was formulated with static test data and strain rate dependent test data, and was compared to ballistic impact tests on load-cell instrumented aluminum plates. These models were subsequently utilized in analysis of the Shuttle leading edge full scale ballistic impact tests, and are currently being used in the Return to Flight Space Shuttle re-certification effort.

  20. Demonstration of Advanced EMI Models for Live-Site UXO Discrimination at Waikoloa, Hawaii

    DTIC Science & Technology

    2015-12-01

    magnetic source models; PNN: Probabilistic Neural Network; SERDP: Strategic Environmental Research and Development Program; SLO: San Luis Obispo...; SNR: signal-to-noise ratio; SVM: support vector machine; TD: Time Domain; TEMTADS: Time Domain Electromagnetic Towed Array Detection System; TOI... intrusive procedure, which was used by Parsons at WMA, failed to document accurately all intrusive results, or failed to detect and clear all UXO-like

  1. 75 FR 9809 - Airworthiness Directives; Airbus Model A330-243, -341, -342, and -343 Airplanes; and Model A340...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ..., as a consequence of the over-torque, fail and move away, it would lead to loss of the vertical load pins, which could result in loss of the primary and/or secondary load path of the forward and/or aft...

  2. Analysis of local scale tree-atmosphere interaction on pollutant concentration in idealized street canyons and application to a real urban junction

    NASA Astrophysics Data System (ADS)

    Buccolieri, Riccardo; Salim, Salim Mohamed; Leo, Laura Sandra; Di Sabatino, Silvana; Chan, Andrew; Ielpo, Pierina; de Gennaro, Gianluigi; Gromke, Christof

    2011-03-01

    This paper first discusses the aerodynamic effects of trees on local scale flow and pollutant concentration in idealized street canyon configurations by means of laboratory experiments and Computational Fluid Dynamics (CFD). These analyses are then used as a reference modelling study for the extension to the neighbourhood scale by investigating a real urban junction of a medium-size city in southern Italy. A comparison with previous investigations shows that street-level concentrations crucially depend on the wind direction and street canyon aspect ratio W/H (with W and H the width and the height of buildings, respectively) rather than on tree crown porosity and stand density. It is usually assumed in the literature that larger concentrations are associated with perpendicular approaching wind. In this study, we demonstrate that while for tree-free street canyons under inclined wind directions the larger the aspect ratio the lower the street-level concentration, in the presence of trees the expected reduction of street-level concentration with aspect ratio is less pronounced. Observations made for the idealized street canyons are re-interpreted in a real case scenario focusing on the neighbourhood scale in proximity of a complex urban junction formed by street canyons of similar aspect ratios as those investigated in the laboratory. The aim is to show the combined influence of building morphology and vegetation on flow and dispersion and to assess the effect of vegetation on local concentration levels. To this aim, CFD simulations for two typical winter/spring days show that trees contribute to alter the local flow and act to trap pollutants. This preliminary study indicates that failing to account for the presence of vegetation, as typically practiced in most operational dispersion models, would result in non-negligible errors in the predictions.

  3. Improvement of preclinical animal models for autoimmune-mediated disorders via reverse translation of failed therapies.

    PubMed

    't Hart, Bert A; Jagessar, S Anwar; Kap, Yolanda S; Haanstra, Krista G; Philippens, Ingrid H C H M; Serguera, Che; Langermans, Jan; Vierboom, Michel

    2014-09-01

    The poor translational validity of autoimmune-mediated inflammatory disease (AIMID) models in inbred and specific pathogen-free (SPF) rodents underlies the high attrition of new treatments for the corresponding human disease. Experimental autoimmune encephalomyelitis (EAE) is a frequently used preclinical AIMID model. We discuss here how crucial information needed for the innovation of current preclinical models can be obtained from postclinical analysis of the nonhuman primate EAE model, highlighting the mechanistic reasons why some therapies fail and others succeed. These new insights can also help identify new targets for treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. DOES GARP REALLY FAIL MISERABLY? A RESPONSE TO STOCKMAN ET AL. (2006)

    EPA Science Inventory

    Stockman et al. (2006) found that ecological niche models built using DesktopGARP 'failed miserably' to predict trapdoor spider (genus Promyrmekiaphila) distributions in California. This apparent failure of GARP (Genetic Algorithm for Rule-Set Production) was actually a failure ...

  5. Component Repair Times Obtained from MSPI Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eide, Steven A.; Cadwallader, Lee

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train unavailability (UA) and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a “white” (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant’s insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs.
Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of interest, only unplanned outages associated with failures of the component of interest, and only unplanned outages associated with PRA failures of the component of interest. The paper will describe how component repair times can be generated from each set and which approach is most applicable. Repair time information will be summarized for MSPI pumps and diesel generators using data over 2003 – 2007. Also, trend information over 2003 – 2012 will be presented to indicate whether the 2003 – 2007 repair time information is still considered applicable. For certain types of pumps, mean repair times are significantly higher than the typically assumed 24 h duration.« less
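    The non-restoration cumulative probability curve described here is just an empirical survival function over outage durations. A minimal sketch (the durations below are hypothetical, not MSPI data):

```python
import numpy as np

def restoration_stats(durations_h):
    """Summarize repair/restoration durations (hours) as in a PRA data study.

    Returns the median, mean, and an empirical non-restoration curve:
    P(T > t), the probability a failed component is still out of service
    at time t, evaluated at each observed duration.
    """
    d = np.sort(np.asarray(durations_h, dtype=float))
    n = d.size
    # Empirical survival function: fraction of outages longer than each t.
    p_not_restored = 1.0 - np.arange(1, n + 1) / n
    return np.median(d), d.mean(), list(zip(d, p_not_restored))

# Hypothetical unplanned-outage durations (hours) for one pump type.
median_h, mean_h, curve = restoration_stats([4, 8, 12, 20, 30, 55, 160])
print(median_h, round(mean_h, 1))  # → 20.0 41.3 (mean >> median: skewed tail)
```

The mean exceeding the median, as in this toy data, is exactly the pattern the paper reports for certain pump types versus the commonly assumed 24 h repair duration.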

  6. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to X for four different sets X ∈ {[0, 1], ℝ+, ℝ, ℂ}, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
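    The convex optimization in question is basis pursuit: minimize the ℓ1 norm subject to the measurement constraint. A minimal sketch, recast as a linear program and solved with SciPy (the problem sizes and random seed are illustrative choices, not the paper's experimental framework):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 40, 20, 3              # ambient dim, measurements, sparsity

# Gaussian sensing matrix and a k-sparse object x0.
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

# With (delta, rho) = (0.5, 0.15), well inside the Gaussian phase transition,
# recovery typically succeeds; outside it, this residual would be large.
print(np.linalg.norm(x_hat - x0))
```

Here δ = n/N = 0.5 and ρ = k/n = 0.15 sit comfortably below the known Gaussian transition curve, so exact recovery is the expected outcome.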

  7. Cycle life test and failure model of nickel-hydrogen cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1983-01-01

    Six ampere-hour individual pressure vessel nickel-hydrogen cells of two designs, COMSAT and Air Force/Hughes, were charge/discharge cycled to failure, where failure is defined as degradation of the end-of-discharge voltage to 0.9 volts. The cells were cycled under a low earth orbit regime to a deep depth of discharge (80 percent of rated ampere-hour capacity). Both cell designs were fabricated by the same manufacturer and represent the current state of the art. A failure model was advanced which suggests both cell designs have inadequate volume tolerance characteristics. The limited existing database at deep depth of discharge (DOD) was expanded. Two cells of each design were cycled. One COMSAT cell failed at cycle 1712 and the other at cycle 1875; of the Air Force/Hughes cells, one failed at cycle 2250 and the other at cycle 2638. All cells of both designs failed due to low end-of-discharge voltage (0.9 volts); no cell failed due to electrical shorts. After cell failure, three different reconditioning tests (deep discharge, physical reorientation, and open circuit voltage stand) were conducted on all cells of each design, and a fourth reconditioning test (electrolyte addition) was conducted on one cell of each design. In addition, post-cycle cell teardown and failure analysis were performed on the one cell of each design which did not have electrolyte added after failure.

  8. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    NASA Astrophysics Data System (ADS)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, from diel patterns to mid-term climatic modes such as seasonal and interannual variability. Because interactions between the biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g., hours, days, weeks, months, years) and the time domain (i.e., day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance at seven Mediterranean oak woodland sites comprising three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) against measured gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g., weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve a model's performance it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.
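    The idea of scoring a model band-by-band rather than on annual sums can be sketched with a plain FFT standing in for the wavelet transform used in the study (the series and band edges below are synthetic illustrations):

```python
import numpy as np

def band_power_gap(obs, mod, dt_days=1.0, bands=((7, 30), (30, 90), (90, 360))):
    """Frequency-domain model check: for each band of periods (in days),
    the model-minus-observation spectral power as a fraction of total
    observed power.  A crude FFT stand-in for a wavelet evaluation.
    """
    freqs = np.fft.rfftfreq(len(obs), d=dt_days)        # cycles per day
    p_obs = np.abs(np.fft.rfft(obs)) ** 2
    p_mod = np.abs(np.fft.rfft(mod)) ** 2
    total = p_obs.sum()
    gaps = {}
    for lo, hi in bands:
        sel = (freqs > 1.0 / hi) & (freqs <= 1.0 / lo)  # periods in [lo, hi)
        gaps[(lo, hi)] = (p_mod[sel].sum() - p_obs[sel].sum()) / total
    return gaps

# Synthetic daily GPP-like series: the model captures the slow seasonal cycle
# but misses a 15-day "water pulse" component present in the observations.
t = np.arange(360.0)
obs = np.sin(2 * np.pi * t / 180) + 0.5 * np.sin(2 * np.pi * t / 15)
mod = np.sin(2 * np.pi * t / 180)
gaps = band_power_gap(obs, mod)
print(min(gaps, key=lambda b: gaps[b]))  # → (7, 30): worst in the pulse band
```

The largest power deficit lands in the weeks-to-months band, mirroring the failure mode the abstract reports for the five models.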

  9. Modal smoothing for analysis of room reflections measured with spherical microphone and loudspeaker arrays.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz

    2018-02-01

    Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses, with an important objective being the estimation of the direction-of-arrival (DOA) of the direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification (MUSIC) algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
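    The rank deficiency that defeats subspace DOA estimators such as MUSIC is easy to reproduce numerically: when two arrivals are fully correlated, every snapshot lies in a one-dimensional subspace, so the cross-spectrum (covariance) matrix collapses to rank one. A toy illustration with a uniform line array rather than the paper's spherical arrays:

```python
import numpy as np

def ula_steering(theta_deg, m=8, spacing=0.5):
    """Steering vector of an m-element uniform line array (half-wavelength)."""
    k = np.arange(m)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

a1, a2 = ula_steering(10.0), ula_steering(40.0)

# Uncorrelated sources: the covariance has rank 2, so the noise subspace is
# well defined and MUSIC can resolve both DOAs.
R_uncorr = np.outer(a1, a1.conj()) + np.outer(a2, a2.conj())

# Fully correlated arrivals (e.g., a direct sound and its reflection within
# one RIR): snapshots are scalar multiples of (a1 + 0.8*a2), rank collapses.
s = a1 + 0.8 * a2
R_coh = np.outer(s, s.conj())

print(np.linalg.matrix_rank(R_uncorr), np.linalg.matrix_rank(R_coh))  # → 2 1
```

Restoring the missing rank before eigendecomposition is exactly what the smoothing and averaging methods discussed in the abstract are for.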

  10. Environmental stress-corrosion cracking of fiberglass: lessons learned from failures in the chemical industry.

    PubMed

    Myers, T J; Kytömaa, H K; Smith, T R

    2007-04-11

    Fiberglass reinforced plastic (FRP) composite materials are often used to construct tanks, piping, scrubbers, beams, grating, and other components for use in corrosive environments. While FRP typically offers superior and cost effective corrosion resistance relative to other construction materials, the glass fibers traditionally used to provide the structural strength of the FRP can be susceptible to attack by the corrosive environment. The structural integrity of traditional FRP components in corrosive environments is usually dependent on the integrity of a corrosion-resistant barrier, such as a resin-rich layer containing corrosion resistant glass fibers. Without adequate protection, FRP components can fail under loads well below their design by an environmental stress-corrosion cracking (ESCC) mechanism when simultaneously exposed to mechanical stress and a corrosive chemical environment. Failure of these components can result in significant releases of hazardous substances into plants and the environment. In this paper, we present two case studies where fiberglass components failed due to ESCC at small chemical manufacturing facilities. As is often typical, the small chemical manufacturing facilities relied largely on FRP component suppliers to determine materials appropriate for the specific process environment and to repair damaged in-service components. We discuss the lessons learned from these incidents and precautions companies should take when interfacing with suppliers and other parties during the specification, design, construction, and repair of FRP components in order to prevent similar failures and chemical releases from occurring in the future.

  11. Driving and off-road impairments underlying failure on road testing in Parkinson's disease.

    PubMed

    Devos, Hannes; Vandenberghe, Wim; Tant, Mark; Akinwuntan, Abiodun E; De Weerdt, Willy; Nieuwboer, Alice; Uc, Ergun Y

    2013-12-01

    Parkinson's disease (PD) affects driving ability. We aimed to determine the most critical impairments in specific road skills and in clinical characteristics leading to failure on a road test in PD. In this cross-sectional study, certified driving assessment experts evaluated specific driving skills in 104 active, licensed drivers with PD using a standardized, on-road checklist and issued a global decision of pass/fail. Participants also completed an off-road evaluation assessing demographic features, disease characteristics, motor function, vision, and cognition. The most important driving skills and off-road predictors of the pass/fail outcome were identified using multivariate stepwise regression analyses. Sixty-eight (65%) passed and 36 (35%) failed the on-road driving evaluation. Persons who failed performed worse on all on-road items. When adjusted for age and gender, poor performances on lateral positioning at low speed, speed adaptations at high speed, and left turning maneuvers yielded the best model that determined the pass/fail decision (R(2) = 0.56). The fail group performed poorer on all motor, visual, and cognitive tests. Measures of visual scanning, motor severity, PD subtype, visual acuity, executive functions, and divided attention were independent predictors of pass/fail decisions in the multivariate model (R(2) = 0.60). Our study demonstrated that failure on a road test in PD is determined by impairments in specific driving skills and associated with deficits in motor, visual, executive, and visuospatial functions. These findings point to specific driving and off-road impairments that can be targeted in multimodal rehabilitation programs for drivers with PD. © 2013 Movement Disorder Society.

  12. Active-standby servovalve/actuator development

    NASA Technical Reports Server (NTRS)

    Masm, R. K.

    1973-01-01

    A redundant, fail-operate/fail-fixed servoactuator was constructed and tested along with electronic models of a servovalve. It was found that a torque motor switch is satisfactory for the space shuttle main engine hydraulic actuation system, and that this system provides an effective failure monitoring technique.

  13. Language and False-Belief Task Performance in Children With Autism Spectrum Disorder.

    PubMed

    Jeffrey Farrar, M; Seung, Hye Kyeung; Lee, Hyeonjin

    2017-07-12

    Language is related to false-belief (FB) understanding in both typically developing children and children with autism spectrum disorder (ASD). The current study examined the role of complementation and general language in FB understanding. Of interest was whether language plays similar or different roles in the groups' FB performance. Participants were 16 typically developing children (mean age = 5.0 years; mental age = 6.7) and 18 with ASD (mean age = 7.3 years; mental age = 8.3). Children were administered FB and language tasks (say- and think-complements), receptive and expressive vocabulary tests, and relative clauses. When mental age and receptive and expressive vocabulary were used as separate covariates, the typical control group outperformed the children with ASD in FB task performance. Chi-square analyses indicated that passing both complementation tasks was linked to the FB understanding of children with ASD. Children with ASD who passed FB tasks all passed say- and think-complement tasks. However, some children in the control group were able to pass the FB tasks, even if they failed the say- and think-complement tasks. The results indicate that children with ASD relied more on complement understanding to pass FB than typically developing children. Results are discussed regarding the developmental pathways for FB understanding.

  14. Theory of mind understanding and empathic behavior in children with autism spectrum disorders.

    PubMed

    Peterson, Candida

    2014-12-01

    This paper begins with a review of past research on theory of mind and empathy in children with ASD. Using varied operational definitions of empathy ranging from physiological heart rate through story vignettes to reports by privileged observers (e.g., teachers) of children's empathic behavior, results of previous studies are limited and contradictory. Thus new evidence is needed to answer two key questions: Are children with ASD less empathic than typically developing children? Do individual differences in theory of mind (ToM) understanding among children with ASD predict differences in their behavioral empathy? An original empirical study of 76 children aged 3-12 years (37 with ASD; 39 with typical development) addressed these. Results showed that children with ASD were significantly less empathic, according to their teachers, than typically developing children. However, this was not because of their slower ToM development. Findings showed equally clearly that ToM understanding was unrelated to empathy in children with ASD. The same was true for typically developing children once age and verbal maturity were controlled. Indeed, even the subgroup of older children with ASD in the sample who passed false belief tests were significantly less empathic than younger preschoolers who failed them. Copyright © 2014 ISDN. Published by Elsevier Ltd. All rights reserved.

  15. Global MHD Simulations of the Earth's Bow Shock Shape and Motion Under Variable Solar Wind Conditions

    NASA Astrophysics Data System (ADS)

    Mejnertsen, L.; Eastwood, J. P.; Hietala, H.; Schwartz, S. J.; Chittenden, J. P.

    2018-01-01

    Empirical models of the Earth's bow shock are often used to place in situ measurements in context and to understand the global behavior of the foreshock/bow shock system. They are derived statistically from spacecraft bow shock crossings and typically treat the shock surface as a conic section parameterized according to a uniform solar wind ram pressure, although more complex models exist. Here a global magnetohydrodynamic simulation is used to analyze the variability of the Earth's bow shock under real solar wind conditions. The shape and location of the bow shock is found as a function of time, and this is used to calculate the shock velocity over the shock surface. The results are compared to existing empirical models. Good agreement is found in the variability of the subsolar shock location. However, empirical models fail to reproduce the two-dimensional shape of the shock in the simulation. This is because significant solar wind variability occurs on timescales less than the transit time of a single solar wind phase front over the curved shock surface. Empirical models must therefore be used with care when interpreting spacecraft data, especially when observations are made far from the Sun-Earth line. Further analysis reveals a bias to higher shock speeds when measured by virtual spacecraft. This is attributed to the fact that the spacecraft only observes the shock when it is in motion. This must be accounted for when studying bow shock motion and variability with spacecraft data.
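    An empirical bow shock model of the kind being compared against is typically a conic section whose overall scale follows the solar wind ram pressure. A minimal sketch (L and eps below are illustrative values, not the fitted coefficients of any published model; the 1/6 exponent is the standard pressure-balance scaling):

```python
import numpy as np

def bow_shock_r(theta, L=25.0, eps=0.8, p_dyn=2.0, p_ref=2.0):
    """Distance (Earth radii) from the conic focus to the bow shock at angle
    theta from the Sun-Earth line: r = L / (1 + eps*cos(theta)), with the
    whole surface scaled by the ram-pressure factor (p_ref/p_dyn)**(1/6).
    """
    scale = (p_ref / p_dyn) ** (1.0 / 6.0)
    return scale * L / (1.0 + eps * np.cos(theta))

r_sub = bow_shock_r(0.0)                 # subsolar standoff, ~14 R_E
r_sub_hi = bow_shock_r(0.0, p_dyn=4.0)   # doubled ram pressure
print(round(float(r_sub), 1), round(float(r_sub_hi), 1))  # → 13.9 12.4
```

Because the single parameter p_dyn rescales the whole surface at once, such a model cannot capture the case the simulation exposes: solar wind variations shorter than the transit time of a phase front across the curved shock, which deform the two-dimensional shape rather than uniformly scaling it.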

  16. NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.

    PubMed

    Qian, Xiangfei; Ye, Cang

    2014-12-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than existing RANSAC-based methods.
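    The normal coherence check at the heart of NCC-RANSAC can be sketched in a few lines: discard RANSAC inliers whose local surface normals disagree with the fitted plane's normal. The function name, angle threshold, and toy data below are illustrative, not the paper's implementation:

```python
import numpy as np

def normal_coherence_filter(points, normals, plane_normal, max_angle_deg=30.0):
    """Keep only inlier points whose local surface normal agrees with the
    fitted plane's normal (the normal-coherence check, sketched).

    points:  (n, 3) inlier coordinates from a RANSAC plane fit
    normals: (n, 3) unit normals estimated at each point
    """
    n_plane = plane_normal / np.linalg.norm(plane_normal)
    # Use |cos angle| so a flipped normal orientation still counts as coherent.
    cos_ang = np.abs(normals @ n_plane)
    keep = cos_ang >= np.cos(np.radians(max_angle_deg))
    return points[keep], points[~keep]

# Toy inliers: three points whose normals match a horizontal tread plane, plus
# one from a vertical stair wall that RANSAC lumped into the same slant fit.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1.0]])
nrm = np.array([[0, 0, 1], [0, 0, 1], [0, 0, -1], [1, 0, 0.0]])
kept, removed = normal_coherence_filter(pts, nrm, np.array([0, 0, 1.0]))
print(len(kept), len(removed))  # → 3 1
```

Splitting the inliers this way yields the separate candidate patches that the subsequent recursive plane clustering then grows into full planes.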

  17. Children's living arrangements following separation and divorce: insights from empirical and clinical research.

    PubMed

    Kelly, Joan B

    2007-03-01

    When parents separate, children typically enter into new living arrangements with each parent in a pattern determined most often by one or both parents or, failing private agreement, as a result of recommendations and decisions by lawyers, therapists, custody evaluators, or family courts. Most of these decisions have been based on cultural traditions and beliefs regarding postseparation parenting plans, visitation guidelines adopted within jurisdictions, unsubstantiated theory, and strongly held personal values and professional opinions, and have resulted since the 1960s in children spending most of their time with one residential parent and limited time with nonresident, or "visiting", parents. A large body of social science and child development research generated over the past three decades has identified factors associated with risk and resiliency of children after divorce. Such research remains largely unknown and untapped by parents and professionals making these crucial decisions about children's living arrangements. This article highlights empirical and clinical research that is relevant to the shape of children's living arrangements after separation, focusing first on what is known about living arrangements following divorce, what factors influence living arrangements for separated and divorced children, children's views about their living arrangements, and living arrangements associated with children's adjustment following divorce. Based on this research, it is argued that traditional visiting patterns and guidelines are, for the majority of children, outdated, unnecessarily rigid, and restrictive, and fail in both the short and long term to address their best interests. Research-based parenting plan models offering multiple options for living arrangements following separation and divorce more appropriately serve children's diverse developmental and psychological needs.

  18. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    PubMed Central

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; these connect together and form a single plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than existing RANSAC-based methods. PMID:24771605

  19. Numerical Modeling of a Vortex Stabilized Arcjet. Ph.D. Thesis, 1991 Final Report

    NASA Technical Reports Server (NTRS)

    Pawlas, Gary E.

    1992-01-01

    Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased. The technique was used to predict the flow through a typical arcjet thruster geometry. Results indicate swirl and viscosity play an important role in the complex geometry of an arcjet.
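    The Gauss-Seidel relaxation used to accelerate convergence can be illustrated on the simplest possible steady-state problem, the 1-D Laplace equation (a toy stand-in, not the thesis's line relaxation of the Thin Layer Navier-Stokes equations):

```python
import numpy as np

def gauss_seidel(u, n_sweeps=200):
    """Plain Gauss-Seidel sweeps for the 1-D Laplace equation u'' = 0 with
    fixed endpoints.  Each point is updated using the already-updated value
    of its left neighbor, which is what accelerates convergence over Jacobi.
    """
    for _ in range(n_sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1])  # in-place update
    return u

u = np.zeros(11)
u[-1] = 1.0                      # boundary values 0 and 1
u = gauss_seidel(u)
# Converges to the linear steady-state profile u(x) = x.
print(np.allclose(u, np.linspace(0, 1, 11), atol=1e-4))  # → True
```

In the arcjet solver the same idea is applied line by line across the implicit finite-volume grid, driving the flowfield toward its steady state with large time steps.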

  20. Numerical modeling of a vortex stabilized arcjet

    NASA Astrophysics Data System (ADS)

    Pawlas, Gary E.

    1992-11-01

    Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased.

  1. 40 CFR 86.098-30 - Certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas... with the selection criteria employed in selecting the failed vehicle, a new emission data vehicle which... selected in accordance with the selection criteria employed in selecting the failed vehicle, then two or...

  2. 40 CFR 86.098-30 - Certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas... with the selection criteria employed in selecting the failed vehicle, a new emission data vehicle which... selected in accordance with the selection criteria employed in selecting the failed vehicle, then two or...

  3. Evaluation of Multi-Level Support Structure Requirements for New Weapon Systems.

    DTIC Science & Technology

    1987-09-01

    ... transformer: total consumed manhours on this level, 19.45 hrs; average manhours within 4 weeks on this level, 0.38 hrs ... characteristic data and major unit data to provide conclusions about the logistics behavior of failing weapon systems. The modeling of system behavior with CAESAR has several ...

  4. Memory and the Moses illusion: failures to detect contradictions with stored knowledge yield negative memorial consequences.

    PubMed

    Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J

    2010-08-01

    Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.

  5. Analysis of defects in ProTaper hand-operated instruments after clinical use.

    PubMed

    Shen, Ya; Bian, Zhuan; Cheung, Gary Shun-pan; Peng, Bin

    2007-03-01

    The purpose of this study was to analyze the type and location of defects observed in ProTaper for Hand Use (PHU) instruments after routine clinical use. We analyzed a total of 401 PHUs discarded from an endodontic clinic over a 17-month period. The failed instruments were examined laterally and fractographically by scanning electron microscopy. Of the 86 PHUs that showed discernible defects, 28 were intact but partially unwound, and 58 were fractured (36 because of shear and 22 from fatigue failure). The primary characteristic of shear failure was the presence of a skewed dimple and/or tear ridge, a typical pattern developed under a combination of loads. Nearly 74% of the instruments with defects exhibited shear damage. About three-quarters of the instrument fractures occurred in the apical one-third of the canal, mostly in molars. The results of this study indicate that most PHU instruments fail because of either shear or fatigue.

  6. CMM Data Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Due to the increase in the use of Coordinate Measuring Machines (CMMs) to measure fine details and complex geometries in manufacturing, many programs have been made to compile and analyze the data. These programs typically require extensive setup to determine the expected results in order to not only track the pass/fail of a dimension, but also to use statistical process control (SPC). These extra steps and setup times have been addressed through the CMM Data Analysis Tool, which only requires the output of the CMM to provide both pass/fail analysis on all parts run to the same inspection program as well as graphs which help visualize where the part measures within the allowed tolerances. This provides feedback not only to the customer for approval of a part during development, but also to machining process engineers to identify when any dimension is drifting towards an out-of-tolerance condition during production. This program can handle hundreds of parts with complex dimensions and will provide an analysis within minutes.
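    The core pass/fail analysis such a tool performs is a tolerance-band check per dimension. A minimal sketch (the record format and dimension names are hypothetical, not the tool's actual input):

```python
def check_dimensions(measurements):
    """Pass/fail each measured dimension against its tolerance band.

    measurements: list of (name, measured, nominal, tol_minus, tol_plus)
    Returns (all_pass, per-dimension report), the kind of summary a CMM
    data-analysis tool derives from the machine's raw output.
    """
    report = {}
    for name, value, nominal, tol_minus, tol_plus in measurements:
        report[name] = nominal - tol_minus <= value <= nominal + tol_plus
    return all(report.values()), report

ok, report = check_dimensions([
    ("bore_dia",   10.003, 10.000, 0.010, 0.010),   # within tolerance
    ("hole_depth",  5.014,  5.000, 0.005, 0.010),   # drifted out: fail
])
print(ok, report)  # → False {'bore_dia': True, 'hole_depth': False}
```

Tracking how far inside the band each value sits over successive parts is what feeds the SPC-style drift warnings the abstract describes.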

  7. Checklists and Monitoring in the Cockpit: Why Crucial Defenses Sometimes Fail

    NASA Technical Reports Server (NTRS)

    Dismukes, R. Key; Berman, Ben

    2010-01-01

    Checklists and monitoring are two essential defenses against equipment failures and pilot errors. Problems with checklist use and pilots' failures to monitor adequately have a long history in aviation accidents. This study was conducted to explore why checklists and monitoring sometimes fail to catch errors and equipment malfunctions as intended. Flight crew procedures were observed from the cockpit jumpseat during normal airline operations in order to: 1) collect data on monitoring and checklist use in cockpit operations under typical flight conditions; 2) provide a plausible cognitive account of why deviations from formal checklist and monitoring procedures sometimes occur; 3) lay a foundation for identifying ways to reduce vulnerability to inadvertent checklist and monitoring errors; 4) compare checklist and monitoring execution in normal flights with performance issues uncovered in accident investigations; and 5) suggest ways to improve the effectiveness of checklists and monitoring. Cognitive explanations for deviations from prescribed procedures are provided, along with suggested countermeasures against vulnerability to error.

  8. Prediction of the Electromagnetic Field Distribution in a Typical Aircraft Using the Statistical Energy Analysis

    NASA Astrophysics Data System (ADS)

    Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane

    2016-05-01

    Due to the high cost of experimental EMI measurements, significant attention has focused on numerical simulation. Classical methods such as the Method of Moments or Finite-Difference Time-Domain are not well suited to this type of problem, as they require a fine spatial discretisation and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited to this application. SEA is a statistical approach for solving high-frequency problems in electromagnetically reverberant cavities at reduced computational cost. Its key aspects are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside each cavity by using the power-balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
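    The power-balance principle mentioned in the abstract sets each subsystem's input power against its dissipation and the net power exchanged with coupled subsystems, giving a small linear system for the subsystem energies. A minimal generic SEA sketch (loss-factor values below are invented for illustration, not taken from the paper):

    ```python
    import numpy as np

    def sea_energies(omega, dlf, clf, power_in):
        """Solve the SEA power balance for subsystem energies.

        omega: angular frequency; dlf[i]: damping loss factor of cavity i;
        clf[i][j]: coupling loss factor from cavity i to cavity j;
        power_in[i]: power injected into cavity i.
        Balance for cavity i:
          P_i = omega * (dlf_i * E_i + sum_j clf_ij * E_i - sum_j clf_ji * E_j)
        """
        n = len(dlf)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = dlf[i] + sum(clf[i][j] for j in range(n) if j != i)
            for j in range(n):
                if j != i:
                    A[i, j] = -clf[j][i]
        return np.linalg.solve(omega * A, np.asarray(power_in, dtype=float))
    ```

    Solving this small matrix equation replaces a full-wave solve of Maxwell's equations inside each cavity, which is where the method's cost saving comes from.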

  9. [Groups and sources of yeasts in house dust].

    PubMed

    Glushakova, A M; Zheltikova, T M; Chernov, I Iu

    2004-01-01

    House dust contains bacteria, mycelial fungi, microarthropods, and yeasts. The house dust samples collected in 25 apartments in Moscow and the Moscow region were found to contain yeasts belonging to the genera Candida, Cryptococcus, Debaryomyces, Rhodotorula, Sporobolomyces, and Trichosporon. The most frequently encountered microorganisms were typical epiphytic yeasts, such as Cryptococcus diffluens and Rhodotorula mucilaginosa, which are capable of long-term preservation in an inactive state. The direct source of epiphytic yeasts occurring in the house dust might be the indoor plants, which were contaminated with these yeasts, albeit to a lesser degree than outdoor plants. Along with the typical epiphytic yeasts, the house dust contained the opportunistic yeast pathogens Candida catenulata, C. guillermondii, C. haemulonii, C. rugosa, and C. tropicalis, which are known as the causal agents of candidiasis. We failed to reveal any correlation between the abundance of particular yeast species in the house dust, residential characteristics, and the atopic dermatitis of the inhabitants.

  10. The pivotal role of inflammation in scar/keloid formation after acne

    PubMed Central

    Shi, Chao; Zhu, Jianyu; Yang, Degang

    2017-01-01

    ABSTRACT Most keloids are clinically observed as solid nodules or claw-like extensions. However, they appear hypoechoic on ultrasound images and are therefore easily confused with liquid features such as blood or vessels. The pathological manifestations of typical keloids also include prominent, thick blood vessels. The existing classification of scars fails to reflect the natural history of keloids. The outer characteristics of a typical keloid include bright red hyperplasia with abundant vessels, suggesting the importance of vascular components in the process of scar formation and prompting consideration of the role of inflammation in the development of granular hyperplasia. Additionally, we further considered the potential effectiveness of oral isotretinoin for severe keloids secondary to severe acne. We also explored different principles and applications related to 5-fluorouracil (5-FU), pulsed dye laser (PDL), and CO2 laser treatments for scars. PMID:29707102

  11. Autoerotic asphyxial deaths: analysis of nineteen fatalities in Alberta, 1978 to 1989.

    PubMed

    Tough, S C; Butt, J C; Sanders, G L

    1994-04-01

    This paper presents an unusual form of sexual (masturbatory) activity and brings this unusual cause of death to wider medical attention and understanding. All 19 cases of autoerotic asphyxial death that occurred between 1978 and 1989 in the province of Alberta, Canada were reviewed. The fatal victim of autoerotic asphyxia is typically a single male aged 15 to 29 years. Autoerotic sexual activity is typically performed in isolation; often there is evidence of repetitive practice. The accidental death usually results when the "safety" mechanism designed to alleviate neck compression fails. Often the first sign of the activity (usually a surprise to family and friends) is death itself. Physicians who are alert to the practice may suggest counselling when patients present with sexual concerns, unusual marks around the neck or evidence of abrasions to limbs suggesting bondage or other masochistic practices.

  12. Comparison of choose-a-movie and approach-avoidance paradigms to measure social motivation.

    PubMed

    Dubey, Indu; Ropar, Danielle; Hamilton, Antonia

    2018-01-01

    Social motivation is a subjective state that is difficult to quantify. It has sometimes been conceptualised as "behavioural effort" to seek social contact. Two paradigms based on this conceptualisation, approach-avoidance (AA) and choose-a-movie (CAM), have been used to measure social motivation in people with and without autism. However, in the absence of a direct comparison, it is hard to know which paradigm is more sensitive in estimating preference for social over non-social stimuli. Here we compare the two tasks for their utility in (1) evaluating social seeking in typical people and (2) identifying the influence of autistic traits on social motivation. Our results suggest that CAM reveals a clear preference for social over non-social stimuli in typical adults, whereas AA fails to do so. Also, social seeking measured with CAM, but not AA, is negatively related to autistic traits.

  13. Utility of video-EEG monitoring in a tertiary care epilepsy center.

    PubMed

    Kumar-Pelayo, M; Oller-Cramsie, M; Mihu, N; Harden, C

    2013-09-01

    Our video-EEG monitoring (VEEG) unit is part of a typical metropolitan tertiary care center that services a diverse patient population. We aimed to determine if the specific clinical reason for inpatient VEEG was actually resolved. Our method was to retrospectively determine the stated goal of inpatient VEEG and to analyze the outcome of one hundred consecutive adult patients admitted for VEEG. The reason for admission fit into one of four categories: 1) to characterize paroxysmal events as either epileptic or nonepileptic, 2) to localize epileptic foci, 3) to characterize the epilepsy syndrome, and 4) to attempt safe antiepileptic drug adjustment. We found that VEEG was successful in accomplishing the goal of admission in 77% of cases. The remaining 23% failed primarily due to lack of typical events during monitoring. Furthermore, of the overall study cohort, VEEG outcomes altered medical management in 53% and surgery was pursued in 5%. © 2013.

  14. Beyond Sexual Orientation: Integrating Gender/Sex and Diverse Sexualities via Sexual Configurations Theory.

    PubMed

    van Anders, Sari M

    2015-07-01

    Sexual orientation typically describes people's sexual attractions or desires based on their sex relative to that of a target. Despite its utility, it has been critiqued in part because it fails to account for non-biological gender-related factors, partnered sexualities unrelated to gender or sex, or potential divergences between love and lust. In this article, I propose Sexual Configurations Theory (SCT) as a testable, empirically grounded framework for understanding diverse partnered sexualities, separate from solitary sexualities. I focus on and provide models of two parameters of partnered sexuality--gender/sex and partner number. SCT also delineates individual gender/sex. I discuss a sexual diversity lens as a way to study the particularities and generalities of diverse sexualities without privileging either. I also discuss how sexual identities, orientations, and statuses that are typically seen as misaligned or aligned are more meaningfully conceptualized as branched or co-incident. I map out some existing identities using SCT and detail its applied implications for health and counseling work. I highlight its importance for sexuality in terms of measurement and social neuroendocrinology, and the ways it may be useful for self-knowledge and feminist and queer empowerment and alliance building. I also make a case that SCT changes existing understandings and conceptualizations of sexuality in constructive and generative ways informed by both biology and culture, and that it is a potential starting point for sexual diversity studies and research.

  15. Reading and language intervention for children at risk of dyslexia: a randomised controlled trial.

    PubMed

    Duff, Fiona J; Hulme, Charles; Grainger, Katy; Hardwick, Samantha J; Miles, Jeremy N V; Snowling, Margaret J

    2014-11-01

    Intervention studies for children at risk of dyslexia have typically been delivered preschool, and show short-term effects on letter knowledge and phoneme awareness, with little transfer to literacy. This randomised controlled trial evaluated the effectiveness of a reading and language intervention for 6-year-old children identified by research criteria as being at risk of dyslexia (n = 56), and their school-identified peers (n = 89). An Experimental group received two 9-week blocks of daily intervention delivered by trained teaching assistants; the Control group received 9 weeks of typical classroom instruction, followed by 9 weeks of intervention. In mixed-effects regression models and path analyses, small-to-moderate effects were found on letter knowledge, phoneme awareness and taught vocabulary. However, these were fragile and short-lived, and there was no reliable effect on the primary outcome of word-level reading. This new intervention was theoretically motivated and based on previous successful interventions, yet failed to show reliable effects on language and literacy measures under a rigorous evaluation. We suggest that the intervention may have been too short to yield improvements in oral language, and that literacy instruction in and beyond the classroom may have weakened training effects. We argue that reporting of null results makes an important contribution in terms of raising standards both of trial reporting and of educational practice. © 2014 The Authors. Journal of Child Psychology and Psychiatry published by John Wiley & Sons Ltd on behalf of Association for Child and Adolescent Mental Health.

  16. Derivation of the Johnson-Samwer T2/3 temperature dependence of the yield strain in metallic glasses

    NASA Astrophysics Data System (ADS)

    Dasgupta, Ratul; Joy, Ashwin; Hentschel, H. G. E.; Procaccia, Itamar

    2013-01-01

    Metallic glasses are prone to fail mechanically via a shear-banding instability. In a remarkable paper, Johnson and Samwer demonstrated that this failure enjoys a high degree of universality, in the sense that a large group of metallic glasses appears to possess a yield strain that decreases with temperature following a T^(2/3) law up to logarithmic corrections. In this Rapid Communication we offer a theoretical derivation of this law. We show that our formula fits simulation data on typical amorphous solids very well.
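    The universal scaling referred to can be written, in the form reported by Johnson and Samwer (coefficient values approximate, as quoted in their 2005 analysis):

    ```latex
    \frac{\tau_y(T)}{G} \;=\; \gamma_{c0} - \gamma_{c1}\left(\frac{T}{T_g}\right)^{2/3},
    \qquad \gamma_{c0} \approx 0.036,\quad \gamma_{c1} \approx 0.016,
    ```

    where \(\tau_y\) is the yield stress, \(G\) the shear modulus, and \(T_g\) the glass transition temperature, so the yield strain \(\tau_y/G\) decreases from a roughly universal athermal value as \(T^{2/3}\).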

  17. [New assessment scale based on the type of person desired by an employer].

    PubMed

    Sasaki, Kenichi; Toyoda, Hideki

    2011-10-01

    In many cases, aptitude tests used in the hiring process fail to connect the measurement scale with the type of person desired by an employer. This experimental study introduced a new measuring method in which the measurement scale can be adjusted according to the type of person an employer is seeking. The effectiveness of the method was then verified by comparing the results of an aptitude test using the new method with the results of the typical hiring process.

  18. Putting on a clinic in Va. Carilion, a not-for-profit hospital system based in Roanoke, is taking a $100 million risk to become a physician-run venture.

    PubMed

    Evans, Melanie

    2006-06-26

    Carilion Health System needs to change or die, according to its leaders, so the Roanoke, Va., organization is converting from a typical not-for-profit system into a physician-run clinic. The switch is an extreme version of an industrywide push to employ doctors. James Thweatt Jr., left, of rival Lewis-Gale, says his hospital joined the trend when it hired 80 specialists from a failing local clinic.

  19. Tension and compression fatigue response of unnotched 3D braided composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.

    1992-01-01

    The unnotched compression and tension fatigue response of a 3-D braided composite was measured. Both gross compressive stress and tensile stress were plotted against cycles to failure to evaluate the fatigue life of these materials. Damage initiation and growth were monitored visually and by tracking compliance change during cyclic loading. The intent was to establish by what means the strength of a 3-D architecture starts to degrade, at what point it degrades beyond an acceptable level, and how this material typically fails.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaelos, Thoe K.; Shopov, Dimitar Y.; Sinha, Shashi Bhushan

    Here, water-oxidation catalysis is a critical bottleneck in the direct generation of solar fuels by artificial photosynthesis. Catalytic oxidation of difficult substrates such as water requires harsh conditions, so the ligand must be designed both to stabilize high oxidation states of the metal center and to strenuously resist ligand degradation. Typical ligand choices either lack sufficient electron-donor power or fail to stand up to the oxidizing conditions. This research on Ir-based water-oxidation catalysts (WOCs) has led us to identify a ligand, 2-(2'-pyridyl)-2-propanoate or "pyalk," that fulfills these requirements.

  1. [Proprioceptive sensitivity and orofacial functions].

    PubMed

    Auriol, M; Coutand, A; Crinetz, V; Chomette, G; Doumit, A; Lucht, M

    1985-01-01

    Proprioceptive sensitivity arising from stimulation of muscle, ligament, articular and vestibular receptors plays a determining role in the regulation of tone, the resting position of the mandible, head posture and the closure pathway of the mandible. Studies conducted on temporomandibular joints of fetuses and adult subjects failed to demonstrate, in the joint capsule, the specialized corpuscles typically described in other joints (a fact previously noted by Ramfjord). In contrast, however, histology showed a particularly rich population of muscle receptors adjacent to this joint, this being only one of several of its particular characteristics.

  2. Large telangiectatic focal nodular hyperplasia presenting with normal radionuclide studies: Case report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterfy, C.G.; Rosenthall, L.

    1990-12-01

    A 9-cm lesion of telangiectatic focal nodular hyperplasia was incidentally identified in a 31-yr-old female. Despite a typical appearance on X-ray computed tomography and ultrasonography, scintigraphy with technetium-99m ({sup 99m}Tc) colloid, {sup 99m}Tc-diethyliminodiacetic acid, and {sup 99m}Tc-labeled red cells failed to demonstrate any abnormalities. These findings are felt to reflect the relative lack of architectural disruption that histologically characterizes this particular lesion. The present report describes the imaging characteristics of the telangiectatic form of focal nodular hyperplasia.

  3. 75 FR 26883 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-13

    ... slat actuator structural failure (rupture) and its adjacent actuator torque limiter failing high... outboard slat actuator structural failure (rupture) and its adjacent actuator torque limiter failing high... requirements.'' Under that section, Congress charges the FAA with promoting safe flight of civil aircraft in...

  4. Solutions for Failing High Schools: Converging Visions and Promising Models.

    ERIC Educational Resources Information Center

    Legters, Nettie; Balfanz, Robert; McPartland, James

    Promising solutions to the failings of traditional comprehensive high schools were reviewed to identify basic principles and strategies for improving high schools nationwide. Selected research studies, policy documents, and promising high school programs were reviewed. The review revealed the following principles for helping high schools better…

  5. Productive Failure in STEM Education

    ERIC Educational Resources Information Center

    Trueman, Rebecca J.

    2014-01-01

    Science education is criticized because it often fails to support problem-solving skills in students. Instead, the instructional methods primarily emphasize didactic models that fail to engage students and reveal how the material can be applied to solve real problems. To overcome these limitations, this study asked participants in a general…

  6. SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA

    PubMed Central

    Fosdick, Bailey K.; Hoff, Peter D.

    2014-01-01

    Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
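    The separable covariance structure with factor-analytic per-mode covariances that the abstract describes can be sketched in a few lines: each dimension of the data array gets a covariance of the form Λ Λᵀ + D, and the covariance of the vectorised array is their Kronecker product. This is an illustrative reconstruction of the structure only, not the authors' estimation code (which uses maximum likelihood and Bayesian methods).

    ```python
    import numpy as np

    def factor_cov(loadings, noise_var):
        """Factor-analytic covariance for one mode: Lambda Lambda^T + diag(noise)."""
        L = np.asarray(loadings, dtype=float)
        return L @ L.T + np.diag(noise_var)

    def separable_cov(mode_covs):
        """Separable (Kronecker-product) covariance of the vectorised array.

        mode_covs: one covariance matrix per array dimension
        (e.g. age, sex, country, year for the mortality data).
        """
        cov = np.array([[1.0]])
        for C in mode_covs:
            cov = np.kron(cov, C)
        return cov
    ```

    Parameter count grows only with the sum of the per-mode factor parameters rather than with the product of the array dimensions, which is why this submodel remains estimable where the full separable covariance model is not.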

  7. English as a Foreign Language in Bilingual Language-minority Children, Children with Dyslexia and Monolingual Typical Readers.

    PubMed

    Bonifacci, Paola; Canducci, Elisa; Gravagna, Giulia; Palladino, Paola

    2017-05-01

    The present study was aimed at investigating literacy skills in English as a foreign language in three different groups of children: monolinguals with dyslexia (n = 19), typically developing bilinguals (language-minority) (n = 19) and a control group of monolinguals (Italian) (n = 76). Bilinguals were not expected to fail in English measures, and their gap with monolinguals would be expected to be limited to the instructional language, owing to underexposure. All participants were enrolled in Italian primary schools (fourth and fifth grades). A non-verbal reasoning task and Italian and English literacy tasks were administered. The Italian battery included word and non-word reading (speed and accuracy), word and non-word writing, and reading comprehension; the English battery included similar tasks, except for the non-word writing. Bilingual children performed similarly to typical readers in English tasks, whereas in Italian tasks, their performance was similar to that of typical readers in reading speed but not in reading accuracy and writing. Children with dyslexia underperformed compared with typically developing children in all English and Italian tasks, except for reading comprehension in Italian. Profile analysis and correlational analyses were further discussed. These results suggest that English as a foreign language might represent a challenge for students with dyslexia but a strength for bilingual language-minority children. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Space robots with flexible appendages: Dynamic modeling, coupling measurement, and vibration suppression

    NASA Astrophysics Data System (ADS)

    Meng, Deshan; Wang, Xueqian; Xu, Wenfu; Liang, Bin

    2017-05-01

    For a space robot with flexible appendages, vibrations of the flexible structure can be easily excited during orbit and/or attitude maneuvers of the base and during operation of the manipulators. Hence, the pose (position and attitude) of the manipulator's end-effector can greatly deviate from the desired values; furthermore, motion of the manipulator can trigger and exacerbate vibrations of the flexible appendages. Given the lack of atmospheric damping in orbit, the vibrations persist for a long time and can cause on-orbit tasks to fail. We derived the rigid-flexible coupling dynamics of a space robot system with flexible appendages and established a coupling model between the flexible base and the space manipulator. A specific index was defined to measure the coupling degree between the flexible motion of the appendages and the rigid motion of the end-effector. We then analyzed the dynamic coupling under different conditions, such as modal displacements, joint angles (manipulator configuration), and mass properties. Moreover, a coupling map was adopted and drawn to represent the coupling motion. Based on this map, a trajectory planning method was developed to suppress structural vibration. Finally, simulation studies of typical cases were performed, which verified the proposed models and method. This work provides a theoretical basis for the system design, performance evaluation, trajectory planning, and control of such space robots.

  9. Improved porcine model for Shiga toxin-producing Escherichia coli infection by deprivation of colostrum feeding in newborn piglets.

    PubMed

    Sato, Toshio; Hamabata, Takashi; Takita, Eiji; Matsui, Takeshi; Sawada, Kazutoshi; Imaoka, Taishi; Nakanishi, Nobuo; Nakayama, Keizo; Tsukahara, Takamitsu

    2017-05-01

    Porcine edema disease (ED) is a toxemia caused by enteric infection with Shiga toxin 2e (Stx2e)-producing Escherichia coli (STEC). ED occurs most frequently during the weaning period and is manifested as emaciation associated with high mortality. In our experimental infection with a specific STEC strain, we failed to cause the suppression of weight gain in piglets, which is a typical symptom of ED, in two consecutive experiments. Therefore, we examined the effects of deprivation of colostrum on the sensitivity of newborn piglets to STEC infection. Neonatal pigs were categorized into two groups: one fed artificial milk instead of colostrum in the first 24 h after birth and then returned to the care of their mother, the other breastfed by a surrogate mother until weaning. The oral challenge with 10^11 colony-forming units of virulent STEC strain on days 25, 26 and 27 caused suppression of weight gain and other ED symptoms in both groups, suggesting that colostrum deprivation from piglets was effective in enhancing susceptibility to STEC. Two successive STEC infection experiments using colostrum-deprived piglets reproduced this result, leading us to conclude that this improved ED piglet model is more sensitive to STEC infection than the previously established models. © 2017 Japanese Society of Animal Science.

  10. The isolation of spatial patterning modes in a mathematical model of juxtacrine cell signalling.

    PubMed

    O'Dea, R D; King, J R

    2013-06-01

    Juxtacrine signalling mechanisms are known to be crucial in tissue and organ development, leading to spatial patterns in gene expression. We investigate the patterning behaviour of a discrete model of juxtacrine cell signalling due to Owen & Sherratt (1998, Mathematical modelling of juxtacrine cell signalling. Math. Biosci., 153, 125-150) in which ligand molecules, unoccupied receptors and bound ligand-receptor complexes are modelled. Feedback between the ligand and receptor production and the level of bound receptors is incorporated. By isolating two parameters associated with the feedback strength and employing numerical simulation, linear stability and bifurcation analysis, the pattern-forming behaviour of the model is analysed under regimes corresponding to lateral inhibition and induction. Linear analysis of this model fails to capture the patterning behaviour exhibited in numerical simulations. Via bifurcation analysis, we show that since the majority of periodic patterns fold subcritically from the homogeneous steady state, a wide variety of stable patterns exists at a given parameter set, providing an explanation for this failure. The dominant pattern is isolated via numerical simulation. Additionally, by sampling patterns of non-integer wavelength on a discrete mesh, we highlight a disparity between the continuous and discrete representations of signalling mechanisms: in the continuous case, patterns of arbitrary wavelength are possible, while sampling such patterns on a discrete mesh leads to longer wavelength harmonics being selected where the wavelength is rational; in the irrational case, the resulting aperiodic patterns exhibit 'local periodicity', being constructed from distorted stable shorter wavelength patterns. This feature is consistent with experimentally observed patterns, which typically display approximate short-range periodicity with defects.

  11. A tissue adaptation model based on strain-dependent collagen degradation and contact-guided cell traction.

    PubMed

    Heck, T A M; Wilson, W; Foolen, J; Cilingir, A C; Ito, K; van Donkelaar, C C

    2015-03-18

    Soft biological tissues adapt their collagen network to the mechanical environment. Collagen remodeling and cell traction are both involved in this process. The present study presents a collagen adaptation model which includes strain-dependent collagen degradation and contact-guided cell traction. Cell traction is determined by the prevailing collagen structure and is assumed to strive for tensional homeostasis. In addition, collagen is assumed to mechanically fail if it is over-strained. Care is taken to use principally measurable and physiologically meaningful relationships. This model is implemented in a fibril-reinforced biphasic finite element model for soft hydrated tissues. The versatility and limitations of the model are demonstrated by corroborating the predicted transient and equilibrium collagen adaptation under distinct mechanical constraints against experimental observations from the literature. These experiments include overloading of pericardium explants until failure, static uniaxial and biaxial loading of cell-seeded gels in vitro and shortening of periosteum explants. In addition, remodeling under hypothetical conditions is explored to demonstrate how collagen might adapt to small differences in constraints. Typical aspects of all essentially different experimental conditions are captured quantitatively or qualitatively. Differences between predictions and experiments as well as new insights that emerge from the present simulations are discussed. This model is anticipated to evolve into a mechanistic description of collagen adaptation, which may assist in developing load-regimes for functional tissue engineered constructs, or may be employed to improve our understanding of the mechanisms behind physiological and pathological collagen remodeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
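    The core feedback the abstract describes — collagen degradation that slows as fibre strain rises, so the fibre pool equilibrates to a level set by its mechanical environment — can be illustrated with a one-variable toy model. The exponential rate law and every parameter value below are assumptions chosen for illustration, not the paper's constitutive relations (which also include cell traction and fibre failure).

    ```python
    import math

    def remodel(strain, steps=200, dt=0.1, k_syn=1.0, k0=1.0, alpha=8.0):
        """Euler integration of a toy strain-protected collagen pool:

            dC/dt = k_syn - k0 * exp(-alpha * strain) * C

        More strained fibres degrade more slowly, so the equilibrium
        content k_syn / k_deg(strain) grows with the applied strain.
        """
        C = 0.0
        k_deg = k0 * math.exp(-alpha * strain)
        for _ in range(steps):
            C += dt * (k_syn - k_deg * C)
        return C
    ```

    Under this rule an unloaded pool settles at k_syn/k0 while a strained one settles higher, capturing in miniature why collagen accumulates along load-bearing directions in the experiments the model is corroborated against.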

  12. 75 FR 14333 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... Secondary Power Distribution Assemblies (SPDAs) the message ``RECIRC SMK DET FAIL'' is displayed in the... the distribution of power and responsibilities among the various levels of government. For the reasons...] controller cards and both Secondary Power Distribution Assemblies (SPDAs) the message ``RECIRC SMK DET FAIL...

  13. Optimal and Nonoptimal Computer-Based Test Designs for Making Pass-Fail Decisions

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; Xing, Dehui

    2006-01-01

    Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. The purpose of this study was to investigate the…

  14. The EFQM Excellence Model[R]: Higher Education's Latest Management Fad?

    ERIC Educational Resources Information Center

    Temple, Paul

    2005-01-01

    Robert Birnbaum argues that higher education tends to adopt management fads -- newly conceived techniques enjoying brief popularity but which fail to live up to their promoters' claims at the point when the corporate sector and government are discarding them. Although fads may have failed in these sectors for various reasons, their failure…

  15. 76 FR 31453 - Special Conditions: Gulfstream Model GVI Airplane; Single-Occupant Side-Facing Seats

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-01

    .... SID TTI data must be processed as defined in Federal Motor Vehicle Safety Standard (FMVSS) part 571...). Pass/fail injury assessments: TTI and pelvic acceleration. 2. One longitudinal test with the Hybrid II... pelvic acceleration. 3. Vertical (14g) test with modified Hybrid II ATDs using existing pass/fail...

  16. A connectionist model of category learning by individuals with high-functioning autism spectrum disorder.

    PubMed

    Dovgopoly, Alexander; Mercado, Eduardo

    2013-06-01

    Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.

  17. Synoptic analysis and hindcast of an intense bow echo in Western Europe: The 09 June 2014 storm

    NASA Astrophysics Data System (ADS)

    Mathias, Luca; Ermert, Volker; Kelemen, Fanni D.; Ludwig, Patrick; Pinto, Joaquim G.

    2017-04-01

    On Pentecost Monday, 09 June 2014, a severe mesoscale convective system (MCS) hit Belgium and Western Germany. This storm was one of the most severe thunderstorms in Germany in decades. The synoptic-scale and mesoscale characteristics of this storm are analyzed based on remote sensing data and in-situ measurements. Moreover, the forecast potential of the storm is evaluated using sensitivity experiments with a regional climate model. The key ingredients for the development of the Pentecost storm were the concurrent presence of low-level moisture, atmospheric conditional instability, and wind shear. The synoptic and mesoscale analysis shows that the outflow of a decaying MCS over northern France triggered the storm, which exhibited the typical features of a bow echo, such as a mesovortex and a rear-inflow jet. This resulted in hurricane-force wind gusts (reaching 40 m/s) along a narrow swath in the Rhine-Ruhr region, leading to substantial damage. Operational numerical weather prediction models mostly failed to forecast the storm, but high-resolution regional model hindcasts enabled a realistic simulation of it. The model experiments reveal that the development of the bow echo is particularly sensitive to the initial wind field and the lower-tropospheric moisture content. Correct initial and boundary conditions are therefore necessary for realistic numerical forecasts of such a bow echo event. We conclude that the Pentecost storm exhibited a structure and intensity comparable to bow echo systems observed in the United States.

  18. Heat up and potential failure of BWR upper internals during a severe accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R

    2015-01-01

    In boiling water reactors, the steam dome, steam separators, and dryers above the core comprise approximately 100 tons of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. Historically, the upper internals have been modeled using severe accident codes with relatively simple approximations. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics, with no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for and the subsequent impact of the upper internals to heat up, oxidize, fail, and relocate during a severe accident was investigated. A higher fidelity representation of the shroud dome, steam separators, and steam dryers was developed in MELCOR v1.8.6 by extending the core region upwards. This modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. The results indicate that the upper internals can reach high temperatures during a severe accident; they are predicted to reach a high enough temperature that they lose their structural integrity and relocate. The additional 100 tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.

  19. Modeling dynamic interactions and coherence between marine zooplankton and fishes linked to environmental variability

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Fogarty, Michael J.; Hare, Jonathan A.; Hsieh, Chih-hao; Glaser, Sarah M.; Ye, Hao; Deyle, Ethan; Sugihara, George

    2014-03-01

    The dynamics of marine fishes are closely related to lower trophic levels and the environment. Quantitatively understanding ecosystem dynamics linking environmental variability and prey resources to exploited fishes is crucial for ecosystem-based management of marine living resources. However, standard statistical models, typically grounded in the concept of linear systems, may fail to capture the complexity of ecological processes. We have attempted to model ecosystem dynamics using a flexible, nonparametric class of nonlinear forecasting models. We analyzed annual time series of four environmental indices, 22 marine copepod taxa, and four ecologically and commercially important fish species from 1977 to 2009 on Georges Bank, a highly productive and intensively studied area of the northeast U.S. continental shelf ecosystem. We examined the underlying dynamic features of environmental indices and copepods, quantified the dynamic interactions and coherence with fishes, and explored the potential control mechanisms of ecosystem dynamics from a nonlinear perspective. We found: (1) the dynamics of marine copepods and environmental indices exhibiting clear nonlinearity; (2) little evidence of complex dynamics across taxonomic levels of copepods; (3) strong dynamic interactions and coherence between copepods and fishes; and (4) the bottom-up forcing of fishes and top-down control of copepods coexisting as target trophic levels vary. These findings highlight the nonlinear interactions among ecosystem components and the importance of marine zooplankton to fish populations, and point to two forcing mechanisms likely regulating the ecosystem dynamics of Georges Bank interactively under a changing environment.
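The nonlinear forecasting approach described above can be illustrated with a minimal simplex-projection sketch (our illustrative reconstruction in the spirit of empirical dynamic modeling, not the authors' code; a logistic-map series stands in for a real copepod or environmental index):

```python
import numpy as np

def simplex_skill(x, E=3, tp=1):
    """Leave-one-out simplex projection: forecast each point from its E+1
    nearest neighbours in an E-dimensional lag embedding, then return the
    correlation between forecasts and observations (forecast skill)."""
    idx = np.arange(E - 1, len(x) - tp)
    emb = np.array([x[i - E + 1 : i + 1] for i in idx])   # lagged state vectors
    targets = x[idx + tp]                                 # tp-step-ahead values
    preds = []
    for j in range(len(idx)):
        d = np.linalg.norm(emb - emb[j], axis=1)
        d[j] = np.inf                                     # exclude the point itself
        nn = np.argsort(d)[: E + 1]
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))         # exponentially weighted neighbours
        preds.append(np.sum(w * targets[nn]) / np.sum(w))
    return np.corrcoef(preds, targets)[0, 1]

# synthetic chaotic series (logistic map) as a stand-in for an annual index
vals, v = [], 0.4
for _ in range(500):
    vals.append(v)
    v = 3.9 * v * (1 - v)
x = np.array(vals)

skill = simplex_skill(x)
linear = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"simplex forecast skill: {skill:.3f}, linear lag-1 correlation: {linear:.3f}")
```

High nonlinear forecast skill combined with weak linear autocorrelation is the kind of signature used to diagnose nonlinearity in a time series.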

  20. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    As explosive blasts continue to cause severe damage and casualties in both civil and military environments, there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Due to the complexity of the typical blast pressure profile model, and in order to reduce the modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical model of blast loads. The closed-form solution of the equation of motion, with the blast load as a forcing term represented by either the typical or the simplified model, has been derived. The two approaches considered here have been compared using results obtained from a simulated response analysis of a building structure under an applied blast load. The error incurred by the simplified model relative to the typical one has been computed. In general, both the simplified and typical models can reproduce the dynamic blast-induced response of building structures. However, despite its simplicity and its use of only the positive phase to simulate the explosive load, the simplified model shows remarkably different response behavior compared to the typical one. The prediction of dynamic system responses using the simplified model is not satisfactory, owing to the larger errors obtained compared to the responses computed with the typical model.
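The contrast between the two load models can be sketched numerically. The fragment below (a minimal illustration with hypothetical parameter values, not the paper's actual structure or closed-form solution) drives an undamped single-degree-of-freedom oscillator with a Friedlander-type profile, which includes the negative/suction phase, and with the triangular, positive-phase-only simplification:

```python
import math

def friedlander(t, p0, td, b=1.0):
    """Typical blast profile: exponential decay with a negative (suction) phase."""
    if t < 0 or t > 5 * td:
        return 0.0
    return p0 * (1 - t / td) * math.exp(-b * t / td)

def triangular(t, p0, td):
    """Simplified profile: linear decay, positive phase only."""
    return p0 * (1 - t / td) if 0 <= t <= td else 0.0

def sdof_peak(load, m=1000.0, k=4.0e6, td=0.01, p0=5.0e4, dt=1e-5, t_end=0.2):
    """Integrate m*u'' + k*u = F(t) with central differences; return peak |u|."""
    u_prev, u, peak = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        f = load(i * dt, p0, td)
        u_next = 2 * u - u_prev + dt * dt * (f - k * u) / m
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

peak_typ = sdof_peak(friedlander)   # typical (Friedlander) profile
peak_tri = sdof_peak(triangular)    # simplified triangular profile
print(f"relative difference in peak response: {abs(peak_typ - peak_tri) / peak_typ:.1%}")
```

Because the triangular pulse omits the suction phase, its net impulse, and hence the predicted peak response, can differ markedly from the Friedlander profile; that discrepancy is what the comparison above quantifies.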

  1. High-dose neutron irradiation performance of dielectric mirrors

    DOE PAGES

    Nimishakavi Anantha Phani Kiran Kumar; Leonard, Keith J.; Jellison, Jr., Gerald Earle; ...

    2015-05-01

    The study presents the high-dose behavior of dielectric mirrors specifically engineered for radiation tolerance: alternating layers of Al2O3/SiO2 and HfO2/SiO2 were grown on sapphire substrates and exposed to neutron doses of 1 and 4 dpa at 458 ± 10 K in the High Flux Isotope Reactor (HFIR). In comparison to previously reported results, these higher doses of 1 and 4 dpa result in a drastic drop in optical reflectance, caused by a failure of the multilayer coating. HfO2/SiO2 mirrors failed completely when exposed to 1 dpa, whereas the reflectance of Al2O3/SiO2 mirrors was reduced to 44%, eventually failing at 4 dpa. Transmission electron microscopy (TEM) observation of the Al2O3/SiO2 specimens showed SiO2 layer defects, which increase in size with irradiation dose. The typical size of each defect was 8 nm in 1 dpa and 42 nm in 4 dpa specimens. Buckling-type delamination of the interface between the substrate and first layer was typically observed in both 1 and 4 dpa HfO2/SiO2 specimens. Composition changes across the layers were measured in high-resolution scanning-TEM mode using energy dispersive spectroscopy. Significant interdiffusion between the film layers was observed in the Al2O3/SiO2 mirror, though it was less evident in the HfO2/SiO2 system. The ultimate goal of this work is to provide insight into the radiation-induced failure mechanisms of these mirrors.

  2. Modeling 13.3nm Fe XXIII Flare Emissions Using the GOES-R EXIS Instrument

    NASA Astrophysics Data System (ADS)

    Rook, H.; Thiemann, E.

    2017-12-01

    The solar EUV spectrum is dominated by atomic transitions in ionized atoms in the solar atmosphere. As solar flares evolve, plasma temperatures and densities change, influencing abundances of various ions, changing intensities of different EUV wavelengths observed from the sun. Quantifying solar flare spectral irradiance is important for constraining models of Earth's atmosphere, improving communications quality, and controlling satellite navigation. However, high time cadence measurements of flare irradiance across the entire EUV spectrum were not available prior to the launch of SDO. The EVE MEGS-A instrument aboard SDO collected 0.1nm EUV spectrum data from 2010 until 2014, when the instrument failed. No current or future instrument is capable of similar high resolution and time cadence EUV observation. This necessitates a full EUV spectrum model to study EUV phenomena at Earth. It has been recently demonstrated that one hot flare EUV line, such as the 13.3nm Fe XXIII line, can be used to model cooler flare EUV line emissions, filling the role of MEGS-A. Since unblended measurements of Fe XXIII are typically unavailable, a proxy for the Fe XXIII line must be found. In this study, we construct two models of this line, first using the GOES 0.1-0.8nm soft x-ray (SXR) channel as the Fe XXIII proxy, and second using a physics-based model dependent on GOES emission measure and temperature data. We determine that the more sophisticated physics-based model shows better agreement with Fe XXIII measurements, although the simple proxy model also performs well. We also conclude that the high correlation between Fe XXIII emissions and the GOES 0.1-0.8nm band is because both emissions tend to peak near the GOES emission measure peak despite large differences in their contribution functions.
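The physics-based idea above, a line intensity proportional to emission measure weighted by a temperature-dependent contribution function, can be sketched as follows. This toy version is our illustration only: the peak temperature, width, and scale constants are placeholders, not calibrated atomic-database values.

```python
import numpy as np

def fe23_intensity(em, T, T_peak=1.4e7, sigma_logT=0.15, scale=1e-49):
    """Toy estimate of Fe XXIII 13.3 nm intensity: I ~ EM * G(T), with the
    contribution function G(T) approximated as a Gaussian in log10(T).
    All constants here are illustrative placeholders."""
    g = np.exp(-((np.log10(T) - np.log10(T_peak)) ** 2) / (2 * sigma_logT ** 2))
    return scale * em * g

# the line responds only near its formation temperature (~14 MK in this sketch)
I_cool = fe23_intensity(1e49, 5e6)    # plasma far below formation temperature
I_hot = fe23_intensity(1e49, 1.4e7)   # plasma at formation temperature
print(I_cool, I_hot)
```

Feeding GOES-derived EM(t) and T(t) through such a response curve yields a modeled Fe XXIII light curve that can be compared against the measured line, which is essentially the comparison the study performs against the simpler SXR-proxy model.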

  3. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.

  4. The complex links between governance and biodiversity.

    PubMed

    Barrett, Christopher B; Gibson, Clark C; Hoffman, Barak; McCubbins, Mathew D

    2006-10-01

    We argue that two problems weaken the claims of those who link corruption and the exploitation of natural resources. The first is conceptual and the second is methodological. Studies that use national-level indicators of corruption fail to note that corruption comes in many forms, at multiple levels, that may affect resource use quite differently: negatively, positively, or not at all. Without a clear causal model of the mechanism by which corruption affects resources, one should treat with caution any estimated relationship between corruption and the state of natural resources. Simple, atheoretical models linking corruption measures and natural resource use typically do not account for other important control variables pivotal to the relationship between humans and natural resources. By way of illustration of these two general concerns, we used statistical methods to demonstrate that the findings of a recent, well-known study that posits a link between corruption and decreases in forests and elephants are not robust to simple conceptual and methodological refinements. In particular, once we controlled for a few plausible anthropogenic and biophysical conditioning factors, estimated the effects in changes rather than levels so as not to confound cross-sectional and longitudinal variation, and incorporated additional observations from the same data sources, corruption levels no longer had any explanatory power.

  5. Adipose tissue serves as a reservoir for recrudescent Rickettsia prowazekii infection in a mouse model.

    PubMed

    Bechah, Yassina; Paddock, Christopher D; Capo, Christian; Mege, Jean-Louis; Raoult, Didier

    2010-01-01

    Brill-Zinsser disease, the relapsing form of epidemic typhus, typically occurs in a susceptible host years or decades after the primary infection; however, the mechanisms of reactivation and the cellular reservoir during latency are poorly understood. Herein we describe a murine model for Brill-Zinsser disease, and use PCR and cell culture to show transient rickettsemia in mice treated with dexamethasone >3 months after clinical recovery from the primary infection. Treatment of similarly infected mice with cyclosporine failed to produce recrudescent bacteremia. Therapy with doxycycline for the primary infection prevented recrudescent bacteremia in most of these mice following treatment with dexamethasone. Rickettsia prowazekii (the etiologic agent of epidemic typhus) was detected by PCR, cell culture, and immunostaining methods in murine adipose tissue, but not in liver, spleen, lung, or central nervous system tissues of mice 4 months after recovery from the primary infection. The lungs of dexamethasone-treated mice showed impaired expression of beta-defensin transcripts that may be involved in the pathogenesis of pulmonary lesions. In vitro, R. prowazekii rickettsiae infected and replicated in the murine adipocyte cell line 3T3-L1. Collectively these data suggest a role for adipose tissue as a potential reservoir for dormant infections with R. prowazekii.

  6. Statistical mechanics of influence maximization with thermal noise

    NASA Astrophysics Data System (ADS)

    Lynn, Christopher W.; Lee, Daniel D.

    2017-03-01

    The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
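A toy version of this formulation can be written down directly. The sketch below is our illustration, not the paper's algorithm: it uses a naive fixed-point mean-field solver, a finite-difference gradient in place of the susceptibility-based gradient, and a crude renormalization instead of an exact simplex projection, applied to a small star network:

```python
import numpy as np

def mean_field_magnetization(J, h, beta, iters=200):
    """Iterate the self-consistent mean-field equations m_i = tanh(beta*(J m + h)_i)."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m

def optimize_field(J, beta, budget, steps=100, lr=0.3, eps=1e-4):
    """Projected gradient ascent on total magnetization M(h) with sum(h) = budget,
    h >= 0.  Gradient by finite differences; budget enforced by renormalization
    (a crude stand-in for an exact simplex projection)."""
    n = len(J)
    h = np.full(n, budget / n)           # start from a uniform allocation
    for _ in range(steps):
        base = mean_field_magnetization(J, h, beta).sum()
        grad = np.array([
            (mean_field_magnetization(J, h + eps * np.eye(n)[i], beta).sum() - base) / eps
            for i in range(n)
        ])
        h = np.clip(h + lr * grad, 0.0, None)
        h *= budget / h.sum()            # renormalize onto the budget constraint
    return h

# star graph: node 0 is the hub, nodes 1-4 are leaves
J = np.zeros((5, 5))
J[0, 1:] = J[1:, 0] = 1.0
budget, beta = 1.0, 0.3                  # beta below the mean-field critical point
h_opt = optimize_field(J, beta, budget)
m_uniform = mean_field_magnetization(J, np.full(5, budget / 5), beta).sum()
m_opt = mean_field_magnetization(J, h_opt, beta).sum()
print("optimal field:", np.round(h_opt, 3), " M:", round(m_opt, 3))
```

At this high temperature (beta below the mean-field critical value for the star graph) the allocation shifts toward the hub, qualitatively matching the analytic result described above.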

  7. Modeling thermal infrared (2-14 micrometer) reflectance spectra of frost and snow

    NASA Technical Reports Server (NTRS)

    Wald, Andrew E.

    1994-01-01

    Existing theories of radiative transfer in close-packed media assume that each particle scatters independently of its neighbors. For opaque particles, such as are common in the thermal infrared, this assumption is not valid, and these radiative transfer theories will not be accurate. A new method is proposed, called 'diffraction subtraction', which modifies the scattering cross section of close-packed large, opaque spheres to account for the effect of close packing on the diffraction cross section of a scattering particle. This method predicts the thermal infrared reflectance of coarse (greater than 50 micrometers radius), disaggregated granular snow. However, such coarse snow is typically old and metamorphosed, with adjacent grains welded together. The reflectance of such a welded block can be described as partly Fresnel in nature and cannot be predicted using Mie inputs to radiative transfer theory. Owing to the high absorption coefficient of ice in the thermal infrared, a rough surface reflectance model can be used to calculate reflectance from such a block. For very small (less than 50 micrometers), disaggregated particles, it is incorrect in principle to treat diffraction independently of reflection and refraction, and the theory fails. However, for particles larger than 50 micrometers, independent scattering is a valid assumption, and standard radiative transfer theory works.

  8. Thermodynamic description of Hofmeister effects on the LCST of thermosensitive polymers.

    PubMed

    Heyda, Jan; Dzubiella, Joachim

    2014-09-18

    Cosolvent effects on protein or polymer collapse transitions are typically discussed in terms of a two-state free energy change that is strictly linear in cosolute concentration. Here we investigate in detail the nonlinear thermodynamic changes of the collapse transition occurring at the lower critical solution temperature (LCST) of the role-model polymer poly(N-isopropylacrylamide) [PNIPAM] induced by Hofmeister salts. First, we establish an equation, based on the second-order expansion of the two-state free energy in concentration and temperature space, which excellently fits the experimental LCST curves and enables us to directly extract the corresponding thermodynamic parameters. Linear free energy changes, grounded on generic excluded-volume mechanisms, are indeed found for strongly hydrated kosmotropes. In contrast, for weakly hydrated chaotropes, we find significant nonlinear changes related to higher order thermodynamic derivatives of the preferential interaction parameter between salts and polymer. The observed non-monotonic behavior of the LCST can then be understood from a not yet recognized sign change of the preferential interaction parameter with salt concentration. Finally, we find that solute partitioning models can possibly predict the linear free energy changes for the kosmotropes, but fail for chaotropes. Our findings cast strong doubt on their general applicability to protein unfolding transitions induced by chaotropes.
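The second-order expansion described above can be sketched in generic form (our notation, not the authors'):

```latex
\Delta G(c,T) \;\approx\; \Delta G_0
  \;+\; g_c\,c \;+\; g_T\,(T - T_0)
  \;+\; \tfrac{1}{2}\,g_{cc}\,c^2
  \;+\; g_{cT}\,c\,(T - T_0)
  \;+\; \tfrac{1}{2}\,g_{TT}\,(T - T_0)^2
```

The LCST curve $T_c(c)$ follows from $\Delta G(c, T_c) = 0$: a strictly linear, excluded-volume-type salt effect keeps only the $g_c$ term, while nonzero $g_{cc}$ and $g_{cT}$ terms produce the nonlinear, chaotrope-type shifts discussed above.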

  9. Endothelial chimerism and vascular sequestration protect pancreatic islet grafts from antibody-mediated rejection

    PubMed Central

    Chen, Chien-Chia; Pouliquen, Eric; Broisat, Alexis; Andreata, Francesco; Racapé, Maud; Bruneval, Patrick; Kessler, Laurence; Ahmadi, Mitra; Bacot, Sandrine; Saison-Delaplace, Carole; Marcaud, Marina; Van Huyen, Jean-Paul Duong; Loupy, Alexandre; Villard, Jean; Demuylder-Mischler, Sandrine; Morelon, Emmanuel; Tsai, Meng-Kun; Kolopp-Sarda, Marie-Nathalie; Koenig, Alice; Mathias, Virginie; Ghezzi, Catherine; Dubois, Valerie; Defrance, Thierry

    2017-01-01

    Humoral rejection is the most common cause of solid organ transplant failure. Here, we evaluated a cohort of 49 patients who were successfully grafted with allogenic islets and determined that the appearance of donor-specific anti-HLA antibodies (DSAs) did not accelerate the rate of islet graft attrition, suggesting resistance to humoral rejection. Murine DSAs bound to allogeneic targets expressed by islet cells and induced their destruction in vitro; however, passive transfer of the same DSAs did not affect islet graft survival in murine models. Live imaging revealed that DSAs were sequestrated in the circulation of the recipients and failed to reach the endocrine cells of grafted islets. We used murine heart transplantation models to confirm that endothelial cells were the only accessible targets for DSAs, which induced the development of typical microvascular lesions in allogeneic transplants. In contrast, the vasculature of DSA-exposed allogeneic islet grafts was devoid of lesions because sprouting of recipient capillaries reestablished blood flow in grafted islets. Thus, we conclude that endothelial chimerism combined with vascular sequestration of DSAs protects islet grafts from humoral rejection. The reduced immunoglobulin concentrations in the interstitial tissue, confirmed in patients, may have important implications for biotherapies such as vaccines and monoclonal antibodies. PMID:29202467

  10. Fail-safe transcription termination: Because one is never enough.

    PubMed

    Lemay, Jean-François; Bachand, François

    2015-01-01

    Termination of RNA polymerase II (RNAPII) transcription is a fundamental step of gene expression that involves the release of the nascent transcript and dissociation of RNAPII from the DNA template. As transcription termination is intimately linked to RNA 3' end processing, termination pathways have a key decisive influence on the fate of the transcribed RNA. Quite remarkably, when reaching the 3' end of genes, a substantial fraction of RNAPII fails to terminate transcription, requiring the contribution of alternative or "fail-safe" mechanisms of termination to release the polymerase. This point of view covers redundant mechanisms of transcription termination and how they relate to conventional termination models. In particular, we expand on recent findings that propose a reverse torpedo model of termination, in which the 3'→5' exonucleolytic activity of the RNA exosome targets transcription events associated with paused and backtracked RNAPII.

  11. The Objective Borderline method (OBM): a probability-based model for setting up an objective pass/fail cut-off score in medical programme assessments.

    PubMed

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-05-01

    The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school, we established the pass/fail cut-off scores by the abovementioned three methods. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, a robust empirical basis and, no less importantly, is simple to use.
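For context, the two established comparators are straightforward to compute. The sketch below uses hypothetical station data; the OBM itself is not reproduced because the abstract does not specify its algorithm. It implements the Borderline Group Method (median checklist score of borderline-rated candidates) and the borderline regression method (predicted checklist score at the borderline global rating):

```python
import statistics

def borderline_group_cutoff(scores, ratings, borderline=2):
    """Borderline Group Method: cut-off = median checklist score of
    candidates the examiner rated 'borderline'."""
    group = [s for s, r in zip(scores, ratings) if r == borderline]
    return statistics.median(group)

def regression_cutoff(scores, ratings, borderline=2):
    """Borderline regression method: regress checklist score on global
    rating and read off the predicted score at the borderline rating."""
    n = len(scores)
    mx, my = sum(ratings) / n, sum(scores) / n
    sxy = sum((r - mx) * (s - my) for s, r in zip(scores, ratings))
    sxx = sum((r - mx) ** 2 for r in ratings)
    slope = sxy / sxx
    return (my - slope * mx) + slope * borderline

# hypothetical OSCE station: global ratings 1 = fail, 2 = borderline, 3 = pass
ratings = [1, 1, 2, 2, 2, 3, 3, 3, 3]
scores = [40, 45, 52, 55, 58, 65, 70, 72, 75]
print(borderline_group_cutoff(scores, ratings))        # 55
print(round(regression_cutoff(scores, ratings), 1))    # 56.0
```

With well-behaved data the two methods agree closely, which is the kind of convergence the study reports between the OBM and these established approaches.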

  12. Children with autism can track others' beliefs in a competitive game.

    PubMed

    Peterson, Candida C; Slaughter, Virginia; Peterson, James; Premack, David

    2013-05-01

    Theory of mind (ToM) development, assessed via 'litmus' false belief tests, is severely delayed in autism, but the standard testing procedure may underestimate these children's genuine understanding. To explore this, we developed a novel test involving competition to win a reward as the motive for tracking other players' beliefs (the 'Dot-Midge task'). Ninety-six children, including 23 with autism (mean age: 10.36 years), 50 typically developing 4-year-olds (mean age: 4.40) and 23 typically developing 3-year-olds (mean age: 3.59) took a standard 'Sally-Ann' false belief test, the Dot-Midge task (which was closely matched to the Sally-Ann task procedure) and a norm-referenced verbal ability test. Results revealed that, of the children with autism, 74% passed the Dot-Midge task, yet only 13% passed the standard Sally-Ann procedure. A similar pattern of performance was observed in the older, but not the younger, typically developing control groups. This finding demonstrates that many children with autism who fail motivationally barren standard false belief tests can spontaneously use ToM to track their social partners' beliefs in the context of a competitive game. © 2013 Blackwell Publishing Ltd.

  13. Prevalence of extrapyramidal syndromes in psychiatric inpatients and the relationship of clozapine treatment to tardive dyskinesia.

    PubMed

    Modestin, J; Stephan, P L; Erni, T; Umari, T

    2000-05-05

    In 200 inpatients on regular neuroleptics, the point prevalence of extrapyramidal syndromes, including Parkinson syndrome, akathisia, and tardive dyskinesia (TD), was studied and found to be 20, 11, and 22%, respectively. A total of 46 patients had currently, and for some time (average about 3 years, median over 1 year), been treated with clozapine, and 127 with typical neuroleptics (NLs). Comparing the two groups, higher TD scores were found in the clozapine sample. Investigating the influence of a set of seven clinical variables on the TD score with multiple regression analysis, the influence of treatment modality disappeared, and age proved to be the only significant variable. Studying the role of past clozapine therapy in patients currently on typical NLs, and comparing 10 matched pairs of chronic patients with and without TD in whom a complete lifetime cumulative dose of NLs was identified, a relationship between TD and the length of current typical NL therapy and lifetime typical NL dosage could be demonstrated. On the whole, long-term, relatively extensive use of clozapine has not markedly reduced the prevalence of extrapyramidal syndromes in our psychiatric inpatient population. In particular, we failed to demonstrate a beneficial effect of clozapine on the prevalence of TD. There are certainly patients who suffer from TD in spite of long-term intensive clozapine treatment.

  14. Comparison of nutritional status between children with autism spectrum disorder and typically developing children in the Mediterranean Region (Valencia, Spain).

    PubMed

    Marí-Bauset, Salvador; Llopis-González, Agustín; Zazpe, Itziar; Marí-Sanchis, Amelia; Morales Suárez-Varela, Maria

    2017-04-01

    This case-control study investigated nutrient intake, a 10-item healthy eating index covering foods and nutrients (based on 3-day food diaries), and anthropometric measurements in 105 children with autism spectrum disorder and 495 typically developing children (6-9 years) in Valencia (Spain). Children with autism spectrum disorder were at a higher risk for underweight, eating more legumes, vegetables, fiber, and some micronutrients (traditional Mediterranean diet) but fewer dairy and cereal products, and less iodine, sodium, and calcium than their typically developing peers. Differences existed in total energy intake, but healthy eating index and food variety score differences were not significant. The autism spectrum disorder group failed to meet dietary recommendations for thiamin, riboflavin, vitamin C, or calcium. Risk of inadequate intake of fiber, vitamin E, and sodium was lower in children with autism spectrum disorder than in typically developing children. Results suggest that (1) there is a risk of inadequate intake of some micronutrients in children with autism spectrum disorder and (2) cultural patterns and environment may influence food intake and anthropometric characteristics in autism spectrum disorder. Primary care should include anthropometric and nutritional surveillance in this population to identify interventions on a case-by-case basis. Future research should explore dietary patterns and anthropometric characteristics in different autism spectrum disorder populations in other countries, enhancing our understanding of the disorder's impact.

  15. T-tubule disease: Relationship between t-tubule organization and regional contractile performance in human dilated cardiomyopathy.

    PubMed

    Crossman, David J; Young, Alistair A; Ruygrok, Peter N; Nason, Guy P; Baddelely, David; Soeller, Christian; Cannell, Mark B

    2015-07-01

Evidence from animal models suggests that t-tubule changes may play an important role in the contractile deficit associated with heart failure. However, samples are usually taken at random with no regard for the regional variability present in failing hearts, which leads to uncertainty in the relationship between contractile performance and possible t-tubule derangement. Regional contraction in human hearts was measured by tagged cine MRI and model fitting. At transplant, failing hearts were biopsy sampled in identified regions, and immunocytochemistry was used to label t-tubules and sarcomeric z-lines. Computer image analysis was used to assess 5 different unbiased measures of t-tubule structure/organization. In regions of failing hearts that showed good contractile performance, t-tubule organization was similar to that seen in normal hearts, with worsening structure correlating with the loss of regional contractile performance. Statistical analysis showed that t-tubule direction was most highly correlated with local contractile performance, followed by the amplitude of the sarcomeric peak in the Fourier transform of the t-tubule image. Other area-based measures were less well correlated. We conclude that regional contractile performance in failing human hearts is strongly correlated with the local t-tubule organization. Cluster tree analysis with a functional definition of failing contraction strength allowed a pathological definition of 't-tubule disease'. The regional variability in contractile performance and cellular structure is a confounding issue for the analysis of samples taken from failing human hearts, although this may be overcome with regional analysis using tagged cMRI and biopsy mapping. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Imbalanced multi-modal multi-label learning for subcellular localization prediction of human proteins with both single and multiple sites.

    PubMed

    He, Jianjun; Gu, Hong; Liu, Wenqi

    2012-01-01

It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most have focused on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all existing approaches fail to take into account the correlations among locations induced by proteins with multiple sites, which may be important information for improving prediction accuracy for such proteins. In this paper, a new algorithm that can effectively exploit the correlations among locations is proposed using a Gaussian process model. In addition, the algorithm can realize an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and achieves better performance than existing approaches.
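    The label-correlation idea can be illustrated with a much simpler heuristic than the paper's Gaussian process model: learn a label co-occurrence matrix from training data and let confident predictions boost correlated labels. A minimal sketch with toy data; the function names and the blending weight `alpha` are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def label_correlations(Y):
    """Empirical co-occurrence matrix from a binary label matrix Y
    (n_samples x n_labels); row i approximates P(label j | label i)."""
    counts = Y.T @ Y                         # pairwise co-occurrence counts
    diag = np.clip(np.diag(counts), 1, None)
    return counts / diag[:, None]

def correlated_predict(probs, corr, alpha=0.3, threshold=0.5):
    """Blend independent per-label probabilities with correlation-propagated
    scores: labels that co-occur with confidently predicted labels get a boost."""
    boosted = (1 - alpha) * probs + alpha * (probs @ corr)
    return (boosted >= threshold).astype(int)

# Toy data: labels 0 and 1 almost always co-occur (e.g. two subcellular sites)
Y_train = np.array([[1, 1, 0],
                    [1, 1, 0],
                    [1, 1, 0],
                    [0, 0, 1]])
corr = label_correlations(Y_train)

# An independent classifier is confident about label 0 but borderline on label 1
probs = np.array([[0.9, 0.45, 0.05]])
print(correlated_predict(probs, corr))
```

    Here the independent classifier is borderline on label 1, but its strong correlation with the confidently predicted label 0 pushes it over the decision threshold.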

  17. State analysis requirements database for engineering complex embedded systems

    NASA Technical Reports Server (NTRS)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of those requirements by software engineers. Software engineers must translate requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens the possibility of misinterpreting the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  18. Conditional deletion of WT1 in the septum transversum mesenchyme causes congenital diaphragmatic hernia in mice

    PubMed Central

    Carmona, Rita; Cañete, Ana; Cano, Elena; Ariza, Laura; Rojas, Anabel; Muñoz-Chápuli, Ramon

    2016-01-01

Congenital diaphragmatic hernia (CDH) is a severe birth defect. Wt1-null mouse embryos develop CDH, but the mechanisms regulated by WT1 are unknown. We have generated a murine model with conditional deletion of WT1 in the lateral plate mesoderm, using the G2 enhancer of the Gata4 gene as a driver. Eighty percent of G2-Gata4Cre;Wt1fl/fl embryos developed typical Bochdalek-type CDH. We show that the posthepatic mesenchymal plate coelomic epithelium gives rise to a mesenchyme that populates the pleuroperitoneal folds, isolating the pleural cavities before the migration of the somitic myoblasts. This process fails when Wt1 is deleted from this area. Mutant embryos show Raldh2 downregulation in the lateral mesoderm, but not in the intermediate mesoderm. The mutant phenotype was partially rescued by retinoic acid treatment of the pregnant females. Replacement of intermediate by lateral mesoderm recapitulates the evolutionary origin of the diaphragm in mammals. CDH might thus be viewed as an evolutionary atavism. DOI: http://dx.doi.org/10.7554/eLife.16009.001 PMID:27642710

  19. A systems engineering perspective on the human-centered design of health information systems.

    PubMed

    Samaras, George M; Horst, Richard L

    2005-02-01

The discipline of systems engineering has, over the past five decades, used a structured systematic approach to managing the "cradle to grave" development of products and processes. While elements of this approach are typically used to guide the development of information systems that instantiate a significant user interface, it appears to be rare for the entire process to be implemented. In fact, a number of authors have put forth development lifecycle models that are subsets of the classical systems engineering method but fail to include steps such as incremental hazard analysis and post-deployment corrective and preventive actions. Given that most health information systems have safety implications, we argue that the design and development of such systems would benefit from implementing this systems engineering approach in full. Particularly with regard to bringing a human-centered perspective to the formulation of system requirements and the configuration of effective user interfaces, this classical systems engineering method provides an excellent framework for incorporating human factors (ergonomics) knowledge and integrating ergonomists into the interdisciplinary development of health information systems.

  20. KIRMES: kernel-based identification of regulatory modules in euchromatic sequences.

    PubMed

    Schultheiss, Sebastian J; Busch, Wolfgang; Lohmann, Jan U; Kohlbacher, Oliver; Rätsch, Gunnar

    2009-08-15

Understanding transcriptional regulation is one of the main challenges in computational biology. An important problem is the identification of transcription factor (TF) binding sites in promoter regions of potential TF target genes. It is typically approached with position weight matrix-based motif identification algorithms using Gibbs sampling, or with heuristics that extend seed oligos. Such algorithms succeed in identifying single, relatively well-conserved binding sites, but tend to fail at identifying combinations of several degenerate binding sites, such as those often found in cis-regulatory modules. We propose a new algorithm that combines the benefits of existing motif finders with those of support vector machines (SVMs) to find degenerate motifs and thereby improve the modeling of regulatory modules. In experiments on microarray data from Arabidopsis thaliana, we were able to show that the newly developed strategy significantly improves the recognition of TF targets. The Python source code (open source, licensed under the GPL), the data for the experiments, and a Galaxy-based web service are available at http://www.fml.mpg.de/raetsch/suppl/kirmes/.
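    The position weight matrix (PWM) approach the abstract names as the typical baseline can be sketched in a few lines: estimate per-position base frequencies from aligned sites, convert them to log-odds scores against a uniform background, and slide the matrix along a sequence. This is a generic illustration with toy sites and hypothetical function names, not the KIRMES code:

```python
import math

def pwm_from_sites(sites, pseudocount=1.0):
    """Build a position weight matrix (log-odds vs. a uniform background)
    from a list of equal-length aligned binding sites."""
    alphabet = "ACGT"
    pwm = []
    for i in range(len(sites[0])):
        column = [s[i] for s in sites]
        scores = {}
        for base in alphabet:
            freq = (column.count(base) + pseudocount) / (len(sites) + 4 * pseudocount)
            scores[base] = math.log2(freq / 0.25)   # 0.25 = uniform background
        pwm.append(scores)
    return pwm

def best_hit(seq, pwm):
    """Slide the PWM along seq; return (best score, start position)."""
    w = len(pwm)
    hits = [(sum(pwm[i][seq[p + i]] for i in range(w)), p)
            for p in range(len(seq) - w + 1)]
    return max(hits)

sites = ["TGACTCA", "TGACTCA", "TGAGTCA", "TGACTCA"]  # toy AP-1-like sites
pwm = pwm_from_sites(sites)
score, pos = best_hit("GGGTGACTCAGGG", pwm)
print(score, pos)  # the consensus match starting at index 3 wins
```

    A single well-conserved site like this scores highly; the degenerate multi-site modules the paper targets are exactly where such single-matrix scans break down.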

  1. Application of advanced material systems to composite frame elements

    NASA Technical Reports Server (NTRS)

    Llorente, Steven; Minguet, Pierre; Fay, Russell; Medwin, Steven

    1992-01-01

A three-phase program was conducted to investigate DuPont's Long Discontinuous Fiber (LDF) composites. Additional tests compared LDF composites against toughened thermosets and a baseline thermoset system. Results showed that LDF AS4/PEKK offers improved interlaminar (flange bending) strength with little reduction in mechanical properties due to the discontinuous nature of the fibers. In the third phase, a series of AS4/PEKK LDF C-section curved frames (representing a typical rotorcraft light frame) were designed, manufactured, and tested. Specimen reconsolidation after 'stretch forming' and frame thickness were found to be key factors in this light frame's performance. A finite element model was constructed to correlate frame test results with expected strain levels determined from material property tests. Adequately reconsolidated frames performed well and failed at strain levels at or above baseline thermoset material test strains. Finally, a cost study showed that the use of LDF for this frame would yield significant cost savings for moderate to large lot sizes compared with hand lay-up of a thermoset frame.

  2. Cross-protection in nonhuman primates against Argentine hemorrhagic fever.

    PubMed Central

    Weissenbacher, M C; Coto, C E; Calello, M A; Rondinone, S N; Damonte, E B; Frigerio, M J

    1982-01-01

    The susceptibility of the marmoset Callithrix jacchus to Tacaribe virus infection was investigated to perform cross-protection studies between Junin and Tacaribe viruses. Five marmosets inoculated with Tacaribe virus failed to show any signs of disease, any alterations in erythrocyte, leukocyte, reticulocyte, and platelet counts or any changes in hematocrit or hemoglobin values. No Tacaribe virus could be recovered from blood at any time postinfection. Anti-Tacaribe neutralizing antibodies appeared 3 weeks postinfection. The five Tacaribe-infected marmosets and four noninfected controls were challenged with the pathogenic strain of Junin virus on day 60 post-Tacaribe infection. The former group showed no signs of disease, no viremia, and no challenge virus replication, whereas the control group exhibited the typical symptoms of Argentine hemorrhagic fever, high viremia, and viral titers in organs. Soon after challenge, the Tacaribe-protected marmosets synthesized neutralizing antibodies against Junin virus. These results indicate that the marmoset C. jacchus can be considered an experimental model for protection studies with arenaviruses and that the Tacaribe virus could be considered as a potential vaccine against Junin virus. PMID:6276301

  3. 75 FR 77524 - Special Conditions: Sikorsky Aircraft Corporation Model S-92A Helicopter; Installation of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-13

    ... must provide fail-safe operations during coupled maneuvers. The demonstration of fail-safe operations... receive your comments by February 11, 2011. ADDRESSES: You must mail or deliver two copies of your... your comments: Docket No. SW023. You can inspect comments in the Docket on weekdays, except Federal...

  4. The Consequences of Failing to Imitate.

    ERIC Educational Resources Information Center

    Richman, Charles L.; And Others

    This demonstration study examines the affective reactions of infants when they imitate or fail to imitate play behavior modeled by an adult. Subjects were twenty-four 18-month-old and twenty-four 24-month-old male and female infants. Each infant visited the laboratory twice with an inter-session interval of 48 hours. At each session, the infant…

  5. Predicting Student Success by Modeling Student Interaction in Asynchronous Online Courses

    ERIC Educational Resources Information Center

    Shelton, Brett E.; Hung, Jui-Long; Lowenthal, Patrick R.

    2017-01-01

    Early-warning intervention for students at risk of failing their online courses is increasingly important for higher education institutions. Students who show high levels of engagement appear less likely to be at risk of failing, and how engaged a student is in their online experience can be characterized as factors contributing to their social…

  6. Appreciative Inquiry of Texas Elementary Classroom Assessment: Action Research for a School-Wide Framework

    ERIC Educational Resources Information Center

    Clint, Frank Anthony

    2012-01-01

    This qualitative, action-research study used themes from appreciative interviews of Texas elementary teachers to recommend a framework for a school-wide assessment model for a Texas elementary school. The specific problem was that the Texas accountability system used a yearly measurement that failed to track progress over time and failed to…

  7. Disentangling Similarity Judgments from Pragmatic Judgments: Response to Sloutsky and Fisher (2012)

    ERIC Educational Resources Information Center

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    Sloutsky and Fisher (2012) attempt to reframe the results presented in Noles and Gelman (2012) as a pure replication of their original work validating the similarity, induction, naming, and categorization (SINC) model. However, their critique fails to engage with the central findings reported in Noles and Gelman, and their reanalysis fails to…

  8. Modeling tracer transport in randomly heterogeneous porous media by nonlocal moment equations: Anomalous transport

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Lezama-Campos, J. L.; Guadagnini, A.; Neuman, S. P.

    2013-05-01

Modeling tracer transport in geologic porous media suffers from imperfect characterization of the spatial distribution of the system's hydrogeologic properties and from incomplete knowledge of the processes governing transport at multiple scales. Representations of transport dynamics based on a Fickian model of the kind embodied in the advection-dispersion equation (ADE) fail to capture (a) the temporal variation in the rate of spreading of a tracer and (b) the distributions of early and late arrival times that are often observed in field and/or laboratory scenarios and are considered the signature of anomalous transport. Elsewhere we have presented exact stochastic moment equations for modeling tracer transport in randomly heterogeneous aquifers, together with a closure scheme that yields numerical solutions of these moment equations at various orders of approximation. The resulting (ensemble) mean and variance of the concentration fields were found to agree well with Monte Carlo-based simulation results for mildly heterogeneous (or well-conditioned strongly heterogeneous) media. Here we explore the ability of the moment-equation approach to describe the distribution of early arrival times and the late-time tailing observed in Monte Carlo-based breakthrough curves (BTCs) of the (ensemble) mean concentration. We show that BTCs of mean resident concentration calculated at a fixed location through higher-order approximations of the moment equations display the long tailing typically associated with anomalous transport, which is not captured by an ADE model with a constant dispersion parameter, such as the zero-order approximation.
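    The constant-dispersion Fickian behavior that the abstract contrasts with anomalous transport can be seen in the classical one-dimensional ADE solution for a continuous injection (the leading term of the Ogata-Banks solution). The sketch below uses illustrative parameters, not values from the study; the breakthrough curve rises symmetrically around t = x/v and saturates with no late-time tail:

```python
import math

def ade_btc(x, t, v, D, c0=1.0):
    """Resident concentration at location x and time t for a continuous
    injection under the classical 1D Fickian ADE (semi-infinite domain,
    dominant first term of the Ogata-Banks solution):
    C/C0 = 0.5 * erfc((x - v*t) / (2*sqrt(D*t)))."""
    return 0.5 * c0 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

x, v, D = 10.0, 1.0, 0.5        # illustrative distance, velocity, dispersion
times = [2.0, 6.0, 10.0, 14.0, 30.0]
btc = [ade_btc(x, t, v, D) for t in times]

# The curve reaches C/C0 = 0.5 at the mean arrival time t = x/v and then
# saturates quickly: no heavy early arrivals, no long late-time tail.
print([round(c, 3) for c in btc])
```

    Higher-order moment-equation approximations, by contrast, produce the skewed, long-tailed BTCs the abstract describes, which this constant-parameter form cannot reproduce.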

  9. Why Can’t Rodents Vomit? A Comparative Behavioral, Anatomical, and Physiological Study

    PubMed Central

    Horn, Charles C.; Kimball, Bruce A.; Wang, Hong; Kaus, James; Dienel, Samuel; Nagy, Allysa; Gathright, Gordon R.; Yates, Bill J.; Andrews, Paul L. R.

    2013-01-01

The vomiting (emetic) reflex is documented in numerous mammalian species, including primates and carnivores, yet laboratory rats and mice appear to lack this response. It is unclear whether these rodents do not vomit because of anatomical constraints (e.g., a relatively long abdominal esophagus) or a lack of key neural circuits. Moreover, it is unknown whether laboratory rodents are representative of Rodentia with regard to this reflex. Here we conducted behavioral testing of members of all three major groups of Rodentia: mouse-related (rat, mouse, vole, beaver), Ctenohystrica (guinea pig, nutria), and squirrel-related (mountain beaver) species. Prototypical emetic agents, apomorphine (sc), veratrine (sc), and copper sulfate (ig), failed to produce either retching or vomiting in these species (although other behavioral effects, e.g., locomotion, were noted). These rodents also had anatomical constraints that could limit the efficiency of vomiting should it be attempted, including reduced muscularity of the diaphragm and stomach geometry that is not well structured for moving contents toward the esophagus, compared with species that can vomit (cat, ferret, and musk shrew). Lastly, an in situ brainstem preparation was used to make sensitive measures of mouth, esophagus, and shoulder muscular movements, and of phrenic nerve activity, key features of emetic episodes. Laboratory mice and rats failed to display any of the common coordinated actions of these indices after typical emetic stimulation (resiniferatoxin and vagal afferent stimulation), in contrast to musk shrews. Overall, the results suggest that the inability to vomit is a general property of Rodentia and that an absent brainstem neurological component is the most likely cause. The implications of these findings for the utility of rodents as models in emesis research are discussed. PMID:23593236

  10. Failure of flight feathers under uniaxial compression.

    PubMed

    Schelestow, Kristina; Troncoso, Omar P; Torres, Fernando G

    2017-09-01

Flight feathers are lightweight engineering structures. They have a central shaft divided into two parts: the calamus and the rachis. The rachis is a thin-walled conical shell filled with foam, while the calamus is a hollow tube-like structure. Because bending loads arise during birds' flight, the resistance of feathers to bending has been examined in several studies. However, analysis of bent feathers has shown that compression can induce failure by buckling. Here, we have studied the compression of feathers in order to assess the failure mechanisms involved. Axial compression tests were carried out on the rachis and the calamus of dove and pelican feathers. The failure mechanisms and folding structures that resulted from the compression tests were observed in images obtained by scanning electron microscopy (SEM). The rachis and calamus fail through structural instability. In the case of the calamus, this instability leads to a progressive folding process. In contrast, the rachis undergoes a typical Euler column-type buckling failure. The study of failed specimens showed that delamination buckling, cell collapse, and cell densification are the primary failure mechanisms of the rachis structure. The role of the foam is also discussed with regard to the mechanical response of the samples and the energy dissipated during the compression tests. Critical stress values calculated with delamination buckling models were found to be in very good agreement with the measured experimental values. Failure analysis and mechanical testing have confirmed that flight feathers are complex thin-walled structures with mechanical adaptations that allow them to fulfil their functions. Copyright © 2017 Elsevier B.V. All rights reserved.
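    The Euler column-type buckling identified for the rachis follows the classical relation P_cr = pi^2 * E * I / (K * L)^2. The sketch below evaluates it for a thin-walled circular tube using assumed, illustrative dimensions and modulus, not measurements from the paper:

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler critical buckling load of an ideal column:
    P_cr = pi^2 * E * I / (K * L)^2, with K the effective-length factor."""
    return math.pi**2 * E * I / (K * L) ** 2

def thin_tube_properties(r, t):
    """Second moment of area and cross-sectional area of a thin-walled
    circular tube (valid for t << r)."""
    I = math.pi * r**3 * t
    A = 2.0 * math.pi * r * t
    return I, A

# Illustrative (not measured) values for a calamus-like keratin tube:
E = 2.5e9               # Pa, assumed elastic modulus
r, t = 2.0e-3, 0.2e-3   # m: mid-wall radius and wall thickness
L = 30.0e-3             # m, loaded length
I, A = thin_tube_properties(r, t)
P_cr = euler_critical_load(E, I, L)
sigma_cr = P_cr / A     # critical stress, Pa
print(round(P_cr, 2), round(sigma_cr / 1e6, 1))
```

    For local wall failure of the kind seen in the rachis, delamination buckling models (as used in the paper) replace this global column formula.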

  11. Study of a fail-safe abort system for an actively cooled hypersonic aircraft: Computer program documentation

    NASA Technical Reports Server (NTRS)

    Haas, L. A., Sr.

    1976-01-01

The user's manual for the Fail-Safe Abort System TEMPerature Analysis Program (FASTEMP) is presented. The program was used to analyze fail-safe abort systems for an actively cooled hypersonic aircraft. FASTEMP analyzes the steady-state or transient temperature response of a thermal model defined in rectangular, cylindrical, conical, and/or spherical coordinate systems. FASTEMP provides the user with a large selection of subroutines for heat transfer calculations. The modes of heat transfer available in these subroutines are heat storage, conduction, radiation, heat addition or generation, convection, and fluid flow.

  12. Engaging Students with Active Thinking

    NASA Astrophysics Data System (ADS)

    Wieman, Carl E.

This Peer Review issue focuses on science and engaged learning. As any advertising executive or politician can tell you, engaging people is all about attitudes and beliefs, not abstract facts. There is a lot we can learn from these professional communicators about how to effectively engage students. Far too often we, as educators, provide students with the content of science, often in the distilled formal representations that we have found to be the most concise and general, but fail to address students' own attitudes and beliefs. (Although heaven forbid that we should totally abandon reason and facts, as is typical in politics and advertising.)

  13. Collaboration in Global Software Engineering Based on Process Description Integration

    NASA Astrophysics Data System (ADS)

    Klein, Harald; Rausch, Andreas; Fischer, Edward

    Globalization is one of the big trends in software development. Development projects need a variety of different resources with appropriate expert knowledge to be successful. More and more of these resources are nowadays obtained from specialized organizations and countries all over the world, varying in development approaches, processes, and culture. As seen with early outsourcing attempts, collaboration may fail due to these differences. Hence, the major challenge in global software engineering is to streamline collaborating organizations towards a successful conjoint development. Based on typical collaboration scenarios, this paper presents a structured approach to integrate processes in a comprehensible way.

  14. The Capgras delusion: a critique of its psychodynamic theories.

    PubMed

    Sinkman, A M

    1983-07-01

    The psychodynamic explanations for the Capgras delusion are reviewed. A critique is offered, showing how these theories fail to account for several important clinical phenomena found in patients with the Capgras delusion. A new psychodynamic theory is suggested that attempts to encompass all of the significant clinical phenomena. This hypothesis is based on findings in a series of fourteen schizophrenic patients with the typical delusion. The focus is on the patient's loss of a stable sense of identity. By the process of projection the patient ascribes his identity diffusion to those around him who are then seen as unreal impostors.

  15. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

In this paper, we consider stochastic scheduling problems for a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine fails while processing a job, the job must restart some time later, once the machine has been repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs with constant due dates; (3) the weighted number of late jobs with exponentially distributed random due dates. These results generalize several previous ones.
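    For objective (1) in the classical breakdown-free setting, the optimal policy is the well-known WSEPT index rule: sequence jobs in decreasing order of weight divided by expected processing time (the paper's contribution lies in extending such policies to a machine subject to breakdowns and repairs). A minimal sketch with illustrative job data:

```python
def wsept_order(jobs):
    """Order jobs by the WSEPT index (weight / expected processing time),
    which minimizes the expected weighted sum of completion times on a
    single reliable machine."""
    return sorted(jobs, key=lambda j: j["weight"] / j["mean_time"], reverse=True)

def expected_weighted_completion(seq):
    """Expected weighted sum of completion times for a given sequence;
    expectations add because each completion time is a sum of processing times."""
    total, elapsed = 0.0, 0.0
    for j in seq:
        elapsed += j["mean_time"]
        total += j["weight"] * elapsed
    return total

jobs = [{"name": "A", "weight": 3.0, "mean_time": 2.0},
        {"name": "B", "weight": 1.0, "mean_time": 4.0},
        {"name": "C", "weight": 4.0, "mean_time": 1.0}]
seq = wsept_order(jobs)
print([j["name"] for j in seq], expected_weighted_completion(seq))
```

    The highest-index job (C, index 4.0) goes first; any swap of adjacent jobs against the index order can be checked to raise the expected objective.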

  16. The science is in the data.

    PubMed

    Helliwell, John R; McMahon, Brian; Guss, J Mitchell; Kroon-Batenburg, Loes M J

    2017-11-01

    Understanding published research results should be through one's own eyes and include the opportunity to work with raw diffraction data to check the various decisions made in the analyses by the original authors. Today, preserving raw diffraction data is technically and organizationally viable at a growing number of data archives, both centralized and distributed, which are empowered to register data sets and obtain a preservation descriptor, typically a 'digital object identifier'. This introduces an important role of preserving raw data, namely understanding where we fail in or could improve our analyses. Individual science area case studies in crystallography are provided.

  17. Haloperidol, a Novel Treatment for Cannabinoid Hyperemesis Syndrome.

    PubMed

    Witsil, Joanne C; Mycyk, Mark B

Cannabinoid hyperemesis syndrome (CHS) is typically unresponsive to conventional pharmacologic antiemetics, and patients often undergo excessive laboratory and radiographic testing and hospital admission. We report 4 cases of CHS that failed standard emergency department therapy but improved significantly after treatment with haloperidol. Although the exact mechanism of CHS remains unclear, dysregulation at the cannabinoid type 1 receptor seems to play a role. Recent animal data demonstrate complex interactions between dopamine and cannabinoid type 1 signaling, a potential mechanism for haloperidol's success in patients with CHS. Our success with haloperidol in these 4 patients warrants further investigation of haloperidol as an emergency department treatment for CHS.

  19. Integrated rate isolation sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye (Inventor); Henderson, Timothy (Inventor); Phillips, Richard (Inventor); Zimpfer, Doug (Inventor); Crain, Tim (Inventor)

    2012-01-01

    In one embodiment, a system for providing fault-tolerant inertial measurement data includes a sensor for measuring an inertial parameter and a processor. The sensor has less accuracy than a typical inertial measurement unit (IMU). The processor detects whether a difference exists between a first data stream received from a first inertial measurement unit and a second data stream received from a second inertial measurement unit. Upon detecting a difference, the processor determines whether at least one of the first or second inertial measurement units has failed by comparing each of the first and second data streams to the inertial parameter.
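    The arbitration logic described, using a coarse rate sensor as a referee between two disagreeing IMUs, can be sketched as follows. The function name, tolerances, and readings are illustrative assumptions, not taken from the patent:

```python
def isolate_failed_imu(imu_a, imu_b, coarse, disagree_tol, coarse_tol):
    """If the two IMU readings disagree beyond disagree_tol, use the coarse
    (less accurate) rate sensor as a referee: the IMU farther from the
    coarse reading is declared failed. Returns 'A', 'B', or None."""
    if abs(imu_a - imu_b) <= disagree_tol:
        return None                     # streams agree: no fault detected
    err_a = abs(imu_a - coarse)
    err_b = abs(imu_b - coarse)
    if abs(err_a - err_b) <= coarse_tol:
        return None                     # referee cannot discriminate
    return "A" if err_a > err_b else "B"

# Angular rates in deg/s: IMU B has drifted far from both IMU A and the
# coarse sensor, so B is isolated as the failed unit.
print(isolate_failed_imu(imu_a=1.02, imu_b=4.75, coarse=1.3,
                         disagree_tol=0.1, coarse_tol=0.5))  # -> B
```

    The key design point is that the referee sensor only needs enough accuracy to break the tie, which is why a sensor coarser than either IMU suffices.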

  20. A Case Study of a Combat Aircraft’s Single Hit Vulnerability

    DTIC Science & Technology

    1986-09-01

...effect severity. The FMECA procedure is performed in two steps: (1) a Failure Mode and Effects Analysis (FMEA) and (2) a Damage Mode and Effects Analysis (DMEA).

  1. Catatonia as presenting clinical feature of subacute sclerosing panencephalitis

    PubMed Central

    Dayal, Prabhoo; Balhara, Yatan Pal Singh

    2014-01-01

Catatonia is not a usual clinical presentation of subacute sclerosing panencephalitis (SSPE), especially in the initial stages of the illness; only one pediatric case of SSPE presenting as catatonia has been reported. In that report, there were SSPE-specific changes on EEG and the catatonia failed to respond to lorazepam. We describe a child with SSPE who presented with clinical features of catatonia but did not have typical EEG findings at first contact. He responded to lorazepam, and the characteristic EEG changes emerged during follow-up. PMID:24891908

  2. Fail-safe transcription termination: Because one is never enough

    PubMed Central

    Lemay, Jean-François; Bachand, François

    2015-01-01

Termination of RNA polymerase II (RNAPII) transcription is a fundamental step of gene expression that involves the release of the nascent transcript and dissociation of RNAPII from the DNA template. As transcription termination is intimately linked to RNA 3′ end processing, termination pathways have a decisive influence on the fate of the transcribed RNA. Quite remarkably, on reaching the 3′ end of genes, a substantial fraction of RNAPII complexes fail to terminate transcription, requiring the contribution of alternative, "fail-safe" mechanisms of termination to release the polymerase. This point of view covers redundant mechanisms of transcription termination and how they relate to conventional termination models. In particular, we expand on recent findings that propose a reverse torpedo model of termination, in which the 3′→5′ exonucleolytic activity of the RNA exosome targets transcription events associated with paused and backtracked RNAPII. PMID:26273910

  3. Higher Education Funding and Incentives: Evidence from the Norwegian Funding Reform

    ERIC Educational Resources Information Center

    Frolich, Nicoline; Strom, Bjarne

    2008-01-01

    In this article we examine how the introduction of an output-based funding scheme in Norwegian public higher education in 2003 affects pass requirement standards. Based on a survey of faculty at the institutions concerned, we find that the propensity to expect that the new funding model will affect the failing/non-failing decision in exams is…

  4. Process Time Refinement for Reusable Launch Vehicle Regeneration Modeling

    DTIC Science & Technology

    2008-03-01

...predicted to fail, or have failed. 3) Augmenting existing space systems with redundant or additional capability to enhance space system performance or...

  5. Salvage Gamma Knife Stereotactic Radiosurgery for Surgically Refractory Trigeminal Neuralgia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, Andrew S.; Shetter, Andrew G.; Shetter, Mary E.

    2009-06-01

Purpose: To evaluate the clinical outcome of patients with surgically refractory trigeminal neuralgia (TN) treated with rescue gamma knife radiosurgery (GKRS). Methods and Materials: Seventy-nine patients with typical TN received salvage GKRS between 1997 and 2002 at the Barrow Neurological Institute (BNI). All patients had recurrent pain following at least one prior surgical intervention. Prior surgical interventions included percutaneous destructive procedures, microvascular decompression (MVD), or GKRS. Thirty-one (39%) had undergone at least two prior procedures. The most common salvage dose was 80 Gy, although 40-50 Gy was typical in patients who had received prior radiosurgery. Pain outcome was assessed using the BNI Pain Intensity Score, and quality of life was assessed using the Brief Pain Inventory. Results: Median follow-up after salvage GKRS was 5.3 years. Actuarial analysis demonstrated that at 5 years, 20% of patients were pain-free and 50% had pain relief. Pain recurred in patients who had an initial response to GKRS at a median of 1.1 years. Twenty-eight (41%) required a subsequent surgical procedure for recurrence. A multivariate Cox proportional hazards model suggested that the strongest predictor of GKRS failure was a history of prior MVD (p=0.029). There were no instances of serious morbidity or mortality. Ten percent of patients developed worsening facial numbness and 8% described their numbness as 'very bothersome.' Conclusions: GKRS salvage for refractory TN is well tolerated and results in long-term pain relief in approximately half the patients treated. Clinicians may reconsider using GKRS to salvage patients who have failed prior MVD.

  6. Room-temperature fracture in V-(4-5)Cr-(4-5)Ti tensile specimens irradiated in Fusion-1 BOR-60 experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazda, J.; Meshii, M.; Tsai, H.

    Specimens of V-(4-5)Cr-(4-5)Ti alloys were irradiated to approximately 18 dpa at 320 C in the Fusion-1 capsule inserted into the BOR-60 reactor. Tensile tests at 23 C indicated a dramatic yield strength increase (>300%), lack of work hardening, and minimal (<1%) total elongations. SEM analyses of fracture and side surfaces were conducted to determine the reduction in area and the mode of fracture. The reduction of area was negligible. All but one specimen failed by a combination of ductile shear deformation and cleavage crack growth. Transgranular cleavage cracks were initiated by stress concentrations at the tips of the shear bands. In side-view observations, evidence was found of slip bands typically associated with dislocation channeling. No differences due to pre-irradiation heat treatment and heat-to-heat composition variations were detected. The only deviation from this behavior was found in the V-4Cr-4Ti-B alloy, which failed in the grip portion by complete cleavage cracking.

  7. Neutralizing antibody fails to impact the course of Ebola virus infection in monkeys.

    PubMed

    Oswald, Wendelien B; Geisbert, Thomas W; Davis, Kelly J; Geisbert, Joan B; Sullivan, Nancy J; Jahrling, Peter B; Parren, Paul W H I; Burton, Dennis R

    2007-01-01

    Prophylaxis with high doses of neutralizing antibody typically offers protection against challenge with viruses producing acute infections. In this study, we have investigated the ability of the neutralizing human monoclonal antibody, KZ52, to protect against Ebola virus in rhesus macaques. This antibody was previously shown to fully protect guinea pigs from infection. Four rhesus macaques were given 50 mg/kg of neutralizing human monoclonal antibody KZ52 intravenously 1 d before challenge with 1,000 plaque-forming units of Ebola virus, followed by a second dose of 50 mg/kg antibody 4 d after challenge. A control animal was exposed to virus in the absence of antibody treatment. Passive transfer of the neutralizing human monoclonal antibody not only failed to protect macaques against challenge with Ebola virus but also had a minimal effect on the explosive viral replication following infection. We show that the inability of antibody to impact infection was not due to neutralization escape. It appears that Ebola virus has a mechanism of infection propagation in vivo in macaques that is uniquely insensitive even to high concentrations of neutralizing antibody.

  8. Absence of spontaneous action anticipation by false belief attribution in children with autism spectrum disorder.

    PubMed

    Senju, Atsushi; Southgate, Victoria; Miura, Yui; Matsui, Tomoko; Hasegawa, Toshikazu; Tojo, Yoshikuni; Osanai, Hiroo; Csibra, Gergely

    2010-05-01

    Recently, a series of studies demonstrated false belief understanding in young children through completely nonverbal measures. These studies have revealed that children younger than 3 years of age, who consistently fail the standard verbal false belief test, can anticipate others' actions based on their attributed false beliefs. The current study examined whether children with autism spectrum disorder (ASD), who are known to have difficulties in the verbal false belief test, may also show such action anticipation in a nonverbal false belief test. We presented video stimuli of an actor watching an object being hidden in a box. The object was then displaced while the actor was looking away. We recorded children's eye movements and coded whether they spontaneously anticipated the actor's subsequent behavior, which could only have been predicted if they had attributed a false belief to her. Although typically developing children correctly anticipated the action, children with ASD failed to show such action anticipation. The results suggest that children with ASD have an impairment in false belief attribution, which is independent of their verbal ability.

  9. Corrosion Evaluation of Tank 40 Leak Detection Box

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mickalonis, J.I.

    1999-07-29

    Leak detection from the transfer lines in the tank farm has been a concern for many years because of the need to minimize exposure of personnel and contamination of the environment. The leak detection box (LDB) is one line of defense, which must be maintained to meet this objective. The evaluation of a failed LDB was one item from an action plan aimed at minimizing the degradation of LDBs. The Tank 40 LDB, which failed in service, was dug up and shipped to SRTC for evaluation. During a video inspection while in service, this LDB was found to have black tubercles on the interior, which suggested possible microbial involvement. The failure point, however, was believed to have occurred in the drain line from the transfer line jacket. Visual, metallurgical, and biological analyses were performed on the LDB. The analysis results showed that there was not any adverse microbiological growth or significant localized corrosion. The corrosion of the LDB was caused by exposure to aqueous environments and was typical of carbon steel pipes in soil environments.

  10. A prospect theory explanation of the disposition to trade losing investments for less than market price.

    PubMed

    Johnstone, D J

    2002-06-01

    Investors have a proven general reluctance to realize losses. The theory of "mental accounting" suggests that losses are easier to accept when mentally integrated with either preceding losses or with compensatory gains. Mental integration is made easier when a failed asset is exchanged against a new, apparently profitable, acquisition. The alternative is to sell the existing asset on the open market before re-investing the proceeds as desired. This is emotionally less appealing than "rolling over" a losing investment into a new venture by way of an asset trade. The psychological benefits of exchanging rather than selling a failed asset come at a cost. It is typical of trade-in arrangements, e.g., where one trades an old car against a new one, that the effective sale price of the existing asset is less than current market value. Acceptance of this low price adds to the investor's total monetary loss on the existing asset but is essential to an overall package deal apart from which that asset would often remain belatedly unsold.
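    The asymmetry described here follows from the prospect-theory value function: steeper for losses than for gains, and concave enough over losses that one combined loss "hurts" less than the same amount split into separate losses. A sketch using the commonly cited Tversky-Kahneman parameter estimates (illustrative values, not from this paper):

    ```python
    # Prospect-theory value function; alpha = 0.88 and lambda = 2.25 are the
    # commonly cited Tversky-Kahneman estimates (illustrative, not this paper's).
    ALPHA, LAMBDA = 0.88, 2.25

    def value(x):
        """Subjective value of a gain (x > 0) or loss (x < 0)."""
        return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

    # Two losses hurt less when mentally integrated into one:
    segregated = value(-50) + value(-50)   # two separate $50 losses
    integrated = value(-100)               # one combined $100 loss
    assert integrated > segregated         # the integrated loss is less painful
    ```

    This is why "rolling over" a losing asset into a new venture, even at a below-market trade-in price, can feel better than realizing the loss in a separate sale.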

  11. Hamiltonian dynamics of thermostated systems: two-temperature heat-conducting phi4 chains.

    PubMed

    Hoover, Wm G; Hoover, Carol G

    2007-04-28

    We consider and compare four Hamiltonian formulations of thermostated mechanics, three of them kinetic, and the other one configurational. Though all four approaches "work" at equilibrium, their application to many-body nonequilibrium simulations can fail to provide a proper flow of heat. All the Hamiltonian formulations considered here are applied to the same prototypical two-temperature "phi4" model of a heat-conducting chain. This model incorporates nearest-neighbor Hooke's-Law interactions plus a quartic tethering potential. Physically correct results, obtained with the isokinetic Gaussian and Nose-Hoover thermostats, are compared with two other Hamiltonian results. The latter results, based on constrained Hamiltonian thermostats, fail to model correctly the flow of heat.
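    As a rough illustration of the kinetic thermostats being compared, the Nose-Hoover equations for a single harmonic degree of freedom can be integrated directly. This is a toy sketch with illustrative parameters and a simple explicit Euler step, not the paper's phi4 chain:

    ```python
    # One explicit Euler step of the Nose-Hoover thermostatted oscillator:
    #   dx/dt = p,  dp/dt = -x - zeta*p,  dzeta/dt = (p*p - T)/Q
    # T is the target temperature, Q the thermostat "mass" (both illustrative).
    def nose_hoover_step(x, p, zeta, dt, T=1.0, Q=1.0):
        xn = x + dt * p
        pn = p + dt * (-x - zeta * p)
        zn = zeta + dt * (p * p - T) / Q
        return xn, pn, zn
    ```

    The friction variable zeta grows when the kinetic energy exceeds T and shrinks (or goes negative, pumping energy in) when it falls short, which is how the thermostat steers the time-averaged temperature.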

  12. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
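    The b-values underpinning such forecasts are conventionally estimated from microseismicity catalogues with Aki's maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), where Mc is the magnitude of completeness. A sketch with a hypothetical catalogue:

    ```python
    import math

    # Aki (1965) maximum-likelihood b-value estimator (hypothetical catalogue;
    # real applications also correct for magnitude binning).
    def b_value(magnitudes, mc):
        """mc: magnitude of completeness; uses only events with M >= mc."""
        mags = [m for m in magnitudes if m >= mc]
        mean_m = sum(mags) / len(mags)
        return math.log10(math.e) / (mean_m - mc)
    ```

    A lower b-value means relatively more large events, which is often read as higher differential stress, so spatial or temporal b-value changes are used as a stress proxy as in the study above.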

  13. Time-dependent deformation of titanium metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.; Bahei-El-din, Y. A.; Mirdamadi, M.

    1995-01-01

    A three-dimensional finite element program called VISCOPAC was developed and used to conduct a micromechanics analysis of titanium metal matrix composites. The VISCOPAC program uses a modified Eisenberg-Yen thermo-viscoplastic constitutive model to predict matrix behavior under thermomechanical fatigue loading. The analysis incorporated temperature-dependent elastic properties in the fiber and temperature-dependent viscoplastic properties in the matrix. The material model was described and the necessary material constants were determined experimentally. Fiber-matrix interfacial behavior was analyzed using a discrete fiber-matrix model. The thermal residual stresses due to the fabrication cycle were predicted with a failed interface. The failed interface resulted in lower thermal residual stresses in the matrix and fiber. Stresses due to a uniform transverse load were calculated at two temperatures, room temperature and an elevated temperature of 650 C. At both temperatures, a large stress concentration was calculated when the interface had failed. The results indicate the importance of accurately accounting for fiber-matrix interface failure and the need for a micromechanics-based analytical technique to understand and predict the behavior of titanium metal matrix composites.

  14. Identifying demographic variables related to failed dental appointments in a university hospital-based residency program.

    PubMed

    Mathu-Muju, Kavita R; Li, Hsin-Fang; Hicks, James; Nash, David A; Kaplan, Alan; Bush, Heather M

    2014-01-01

    The objective of this study was to identify characteristics of pediatric patients who failed to keep the majority of their scheduled dental appointments in a pediatric dental clinic staffed by pediatric dental residents and faculty members. The electronic records of all patients appointed over a continuous 54-month period were analyzed. Appointment history and demographic variables were collected. The rate of failed appointments was calculated by dividing the number of failed appointments by the total number of appointments scheduled for the patient. There were 7,591 patients in the analyzable dataset, scheduled for a total of 48,932 appointments. Factors associated with an increased rate of failed appointments included self-paying for dental care, having a resident versus a faculty member as the provider, rural residence, and adolescent-aged patients. Multivariable regression models indicated self-paying patients had higher odds and rates of failed appointments than patients with Medicaid and private insurance. Access to care for children may be improved by increasing the availability of private and public insurance. The establishment of a dental home and its relationship to a child receiving continuous care in an institutional setting depends upon establishing a relationship with a specific dentist.
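    The rate and odds comparisons described above reduce to simple ratio arithmetic. A sketch with hypothetical counts (not the study's data):

    ```python
    # Failed-appointment rate, and a crude odds ratio between two payer groups.
    # All counts below are hypothetical, for illustration only.
    def fail_rate(failed, total):
        return failed / total

    def odds_ratio(failed_a, total_a, failed_b, total_b):
        """Odds of failing in group A relative to group B."""
        odds_a = failed_a / (total_a - failed_a)
        odds_b = failed_b / (total_b - failed_b)
        return odds_a / odds_b

    # e.g. self-pay: 300 failed of 1000; insured: 150 failed of 1000
    # odds_ratio(300, 1000, 150, 1000)  -> about 2.43
    ```

    A multivariable logistic regression, as used in the study, estimates such odds ratios while adjusting for the other demographic variables simultaneously.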

  15. Advanced Duct Sealing Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherman, Max H.; Walker, Iain S.

    Duct leakage has been identified as a major source of energy loss in residential buildings. Most duct leakage occurs at the connections to registers, plenums or branches in the duct system. At each of these connections a method of sealing the duct system is required. Typical sealing methods include tapes or mastics applied around the joints in the system. Field examinations of duct systems have typically shown that these seals tend to fail over extended periods of time. The Lawrence Berkeley National Laboratory has been testing sealant durability for several years. Typical duct tape (i.e., fabric-backed tapes with natural rubber adhesives) was found to fail more rapidly than all other duct sealants. This report summarizes the results of duct sealant durability testing of five UL 181B-FX listed duct tapes (three cloth tapes, a foil tape and an Oriented Polypropylene (OPP) tape). One of the cloth tapes was specifically developed in collaboration with a tape manufacturer to perform better in our durability testing. The first test involved the aging of common "core-to-collar joints" of flexible duct to sheet metal collars, and sheet metal "collar-to-plenum joints" pressurized with 200 F (93 C) air. The second test consisted of baking duct tape specimens in a constant 212 F (100 C) oven following the UL 181B-FX "Temperature Test" requirements. Additional tests were also performed on only two tapes using sheet metal collar-to-plenum joints. Since an unsealed flexible duct joint can have variable leakage depending on the positioning of the flexible duct core, the durability of the flexible duct joints could not be based on the 10% of unsealed leakage criterion. Nevertheless, the leakage of the sealed specimens prior to testing could be considered as a basis for a failure criterion. Visual inspection was also documented throughout the tests.
    The flexible duct core-to-collar joints were inspected monthly, while the sheet metal collar-to-plenum joints were inspected weekly. The baking test specimens were visually inspected weekly, and the durability was judged by the observed deterioration in terms of brittleness, cracking, flaking and blistering (the terminology used in the UL 181B-FX test procedure).

  16. Cocklebur (Xanthium strumarium, L. var. strumarium) intoxication in swine: review and redefinition of the toxic principle.

    PubMed

    Stuart, B P; Cole, R J; Gosser, H S

    1981-05-01

    Cocklebur (Xanthium strumarium) fed to feeder pigs was associated with acute to subacute hepatotoxicosis. Cotyledonary seedlings fed at 0.75% to 3% of body weight or ground bur fed at 20% to 30% of the ration caused acute depression, convulsions, and death. Principal gross lesions were marked serofibrinous ascites, edema of the gallbladder wall, and lobular accentuation of the liver. Acute to subacute centrilobular hepatic necrosis was present microscopically. The previously reported toxic principle, hydroquinone, was not recovered from the plant or bur of X. strumarium. Authentic hydroquinone administered orally failed to produce lesions typical of cocklebur intoxication but did produce marked hyperglycemia. Carboxyatractyloside recovered from the aqueous extract of X. strumarium and authentic carboxyatractyloside, when fed to pigs, caused signs and lesions typical of cocklebur intoxication. Marked hypoglycemia and elevated serum glutamic oxaloacetic transaminase and serum isocitric dehydrogenase concentrations occurred in pigs with acute hepatic necrosis that had received either cocklebur seedlings, ground bur, or carboxyatractyloside.

  17. Acoustic emission source localization based on distance domain signal representation

    NASA Astrophysics Data System (ADS)

    Gawronski, M.; Grabowski, K.; Russek, P.; Staszewski, W. J.; Uhl, T.; Packo, P.

    2016-04-01

    Acoustic emission (AE) is a vital non-destructive testing technique and is widely used in industry for damage detection, localisation and characterization. The latter two aspects are particularly challenging, as AE data are typically noisy. What is more, elastic waves generated by an AE event propagate through a structural path and are significantly distorted. This effect is particularly prominent for thin elastic plates, in which the dispersion phenomenon results in severe localisation and characterization issues. Traditional Time Difference of Arrival (TDOA) localisation methods typically fail when signals are highly dispersive. Hence, algorithms capable of dispersion compensation are sought. This paper presents a method based on the Time-Distance Domain Transform for accurate AE event localisation. The source location is found through a minimization problem. The proposed technique focuses on transforming the time signal into the distance-domain response that would be recorded at the source. Only basic elastic material properties and plate thickness are used in the approach, avoiding arbitrary parameter tuning.
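    For contrast with the dispersion-compensating approach, the traditional Time Difference of Arrival baseline can be sketched as a grid-search fit of candidate source positions to measured arrival-time differences. The geometry and wave speed below are made up for illustration:

    ```python
    import math

    # Classic (non-dispersive) TDOA source localisation by brute-force grid
    # search -- the baseline that fails for highly dispersive plate waves.
    def locate(sensors, tdoas, c, grid, ref=0):
        """sensors: [(x, y), ...]; tdoas[i]: arrival time at sensor i minus
        arrival time at the reference sensor; c: assumed wave speed."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        best, best_err = None, float("inf")
        for p in grid:
            err = 0.0
            for i, s in enumerate(sensors):
                pred = (dist(p, s) - dist(p, sensors[ref])) / c
                err += (pred - tdoas[i]) ** 2
            if err < best_err:
                best, best_err = p, err
        return best
    ```

    With dispersive waves there is no single valid wave speed c, which is exactly why this least-squares picture breaks down and a distance-domain transform is needed.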

  18. What is a delusion? Epistemological dimensions.

    PubMed

    Leeser, J; O'Donohue, W

    1999-11-01

    Although the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 1994) clearly indicates delusions have an epistemic dimension, it fails to accurately identify the epistemic properties of delusions. The authors explicate the regulative causes of belief revision for rational agents and argue that delusions are unresponsive to these. They argue that delusions are (a) protected beliefs made unfalsifiable either in principle or because the agent refuses to admit anything as a potential falsifier; (b) the protected belief is not typically considered a "properly basic" belief; (c) the belief is not of the variety of protected scientific beliefs; (d) in response to an apparent falsification, the subject posits not a simple, testable explanation for the inconsistency but one that is more complicated, less testable, and provides no new corroborations; (e) the subject has a strong emotional attachment to the belief; and (f) the belief is typically supported by (or originates from) trivial occurrences that are interpreted by the subject as highly unusual, significant, having personal reference, or some combination of these.

  19. Removal of Perfluorinated Grease Components from NTO Oxidizer

    NASA Technical Reports Server (NTRS)

    McClure, Mark B.; Greene, Ben; Johnson, Harry T.

    2004-01-01

    Perfluorinated greases are typically used as a thread lubricant in the assembly of non-welded nitrogen tetroxide (NTO) oxidizer systems. These greases, typically a perfluoroalkylether with suspended polytetrafluoroethylene (PTFE) micro-powder, have attractive lubricating properties toward threaded components and are relatively chemically inert toward NTO oxidizers. A major drawback, however, is that perfluoroalkylether greases are soluble or dispersible in NTO oxidizers and can contaminate the propellant. The result is propellant that fails the non-volatile residue (NVR) specification analyses and that may have negative effects on test hardware performance and lifetime. Consequently, removal of the grease contaminants from NTO may be highly desirable. Methods for the removal of perfluorinated grease components from NTO oxidizers, including distillation, adsorption, filtration, and adjustment of temperature, are investigated and reported in this work. Solubility or dispersibility data for the perfluoroalkylether oil (Krytox™ 143 AC) component of a perfluorinated grease (Krytox 240 AC) and for Krytox 240 AC in NTO were determined and are reported.

  20. Accurate Temperature Feedback Control for MRI-Guided, Phased Array HICU Endocavitary Therapy

    NASA Astrophysics Data System (ADS)

    Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Lafon, Cyril; Chapelon, Jean Yves; Cotton, François; Bonmartin, Alain; Cathignol, Dominique

    2007-05-01

    Effective treatment of malignant tumours demands well-controlled energy deposition in the region of interest. Generally, two major steps must be fulfilled: 1. pre-operative optimal planning of the thermal dosimetry and 2. per-operative active spatial and temporal control of the delivered thermal dose. The second issue is made possible by using fast MR thermometry data and adjusting the sonication parameters on line. This approach is addressed here in the particular case of ultrasound therapy for endocavitary tumours (oesophagus, colon or rectum) with phased array cylindrical applicators of High Intensity Contact Ultrasound (HICU). Two specific methodological objectives were defined for this study: 1. to implement a robust and effective temperature controller for the specific geometry of endocavitary HICU and 2. to determine the stability (i.e., convergence) domain of the controller with respect to possible errors affecting the empirical parameters of the underlying physical model. The experimental setup included a Philips 1.5T clinical MR scanner and a cylindrical phased array transducer (64 elements) driven by a computer-controlled multi-channel generator. Performance of the temperature controller was tested ex vivo on fresh meat samples with planar and slightly focused beams, for a temperature elevation range from 10°C to 30°C. During the steady state regime, the typical error of the temperature mean value was less than 1%, while the typical standard deviation of the temperature was less than 2% (relative to the targeted temperature elevation). Further, the empirical parameters of the physical model were deliberately set to erroneous values and the impact on the controller stability was evaluated. Excellent tolerance of the controller was demonstrated, as it failed to maintain stable feedback only in the extreme case of a strong underestimation of the ultrasound absorption parameter by a factor of 4 or more.
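    The controller's job, driving the measured temperature elevation to a target by adjusting sonication power, can be caricatured with a proportional-integral loop acting on a first-order heating model. All gains and plant constants below are illustrative; the study's actual controller is built on MR thermometry feedback and a physical tissue model:

    ```python
    # Toy proportional-integral temperature controller on a first-order plant.
    # target is the desired temperature elevation (degrees C above baseline);
    # kp, ki, dt, and the plant constants (0.5, 0.1) are illustrative only.
    def simulate(target=20.0, kp=0.8, ki=0.3, dt=0.5, steps=200):
        temp, integral = 0.0, 0.0
        for _ in range(steps):
            error = target - temp
            integral += error * dt
            power = max(0.0, kp * error + ki * integral)   # sonication power >= 0
            # plant: heating proportional to power, cooling by diffusion/perfusion
            temp += dt * (0.5 * power - 0.1 * temp)
        return temp
    ```

    The integral term removes the steady-state error, which is why the mean-value error in such systems can be driven below 1% of the targeted elevation.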

  1. Flying the smoky skies: secondhand smoke exposure of flight attendants.

    PubMed

    Repace, J

    2004-03-01

    To assess the contribution of secondhand smoke (SHS) to aircraft cabin air pollution and flight attendants' SHS exposure relative to the general population. Published air quality measurements, modelling studies, and dosimetry studies were reviewed, analysed, and generalised. Flight attendants reported suffering greatly from SHS pollution on aircraft. Both government and airline sponsored studies concluded that SHS created an air pollution problem in aircraft cabins, while tobacco industry sponsored studies yielding similar data concluded that ventilation controlled SHS, and that SHS pollution levels were low. Between the time that non-smoking sections were established on US carriers in 1973, and the two hour US smoking ban in 1988, commercial aircraft ventilation rates had declined three times as fast as smoking prevalence. The aircraft cabin provided the least volume and lowest ventilation rate per smoker of any social venue, including stand up bars and smoking lounges, and afforded an abnormal respiratory environment. Personal monitors showed little difference in SHS exposures between flight attendants assigned to smoking sections and those assigned to non-smoking sections of aircraft cabins. In-flight air quality measurements in approximately 250 aircraft, generalised by models, indicate that when smoking was permitted aloft, 95% of the harmful respirable suspended particle (RSP) air pollution in the smoking sections and 85% of that in the non-smoking sections of aircraft cabins was caused by SHS. Typical levels of SHS-RSP on aircraft violated current (PM(2.5)) federal air quality standards approximately threefold for flight attendants, and exceeded SHS irritation thresholds by 10 to 100 times. From cotinine dosimetry, SHS exposure of typical flight attendants in aircraft cabins is estimated to have been >6-fold that of the average US worker and approximately 14-fold that of the average person. 
Thus, ventilation systems massively failed to control SHS air pollution in aircraft cabins. These results have implications for studies of the past and future health of flight attendants.

  2. The Herschel-PACS Legacy of Low-mass Protostars: The Properties of Warm and Hot Gas Components and Their Origin in Far-UV Illuminated Shocks

    NASA Astrophysics Data System (ADS)

    Karska, Agata; Kaufman, Michael J.; Kristensen, Lars E.; van Dishoeck, Ewine F.; Herczeg, Gregory J.; Mottram, Joseph C.; Tychoniec, Łukasz; Lindberg, Johan E.; Evans, Neal J., II; Green, Joel D.; Yang, Yao-Lun; Gusdorf, Antoine; Itrich, Dominika; Siódmiak, Natasza

    2018-04-01

    Recent observations from Herschel allow the identification of important mechanisms responsible both for the heating of the gas that surrounds low-mass protostars and for its subsequent cooling in the far-infrared. Shocks are routinely invoked to reproduce some properties of the far-IR spectra, but standard models fail to reproduce the emission from key molecules, e.g., H2O. Here, we present the Herschel Photodetector Array Camera and Spectrometer (PACS) far-IR spectroscopy of 90 embedded low-mass protostars (Class 0/I). The Herschel-PACS spectral maps, covering ∼55–210 μm with a field of view of ∼50″, are used to quantify the gas excitation conditions and spatial extent using rotational transitions of H2O, high-J CO, and OH, as well as [O I] and [C II]. We confirm that a warm (∼300 K) CO reservoir is ubiquitous and that a hotter component (760 ± 170 K) is frequently detected around protostars. The line emission is extended beyond ∼1000 au spatial scales in 40/90 objects, typically in molecular tracers in Class 0 and atomic tracers in Class I objects. High-velocity emission (≳90 km s‑1) is detected in only 10 sources in the [O I] line, suggesting that the bulk of [O I] arises from gas that is moving slower than typical jets. Line flux ratios show an excellent agreement with models of C-shocks illuminated by ultraviolet (UV) photons for pre-shock densities of ∼105 cm‑3 and UV fields 0.1–10 times the interstellar value. The far-IR molecular and atomic lines are a unique diagnostic of feedback from UV emission and shocks in envelopes of deeply embedded protostars.

  3. Flying the smoky skies: secondhand smoke exposure of flight attendants

    PubMed Central

    Repace, J

    2004-01-01

    Objective: To assess the contribution of secondhand smoke (SHS) to aircraft cabin air pollution and flight attendants' SHS exposure relative to the general population. Methods: Published air quality measurements, modelling studies, and dosimetry studies were reviewed, analysed, and generalised. Results: Flight attendants reported suffering greatly from SHS pollution on aircraft. Both government and airline sponsored studies concluded that SHS created an air pollution problem in aircraft cabins, while tobacco industry sponsored studies yielding similar data concluded that ventilation controlled SHS, and that SHS pollution levels were low. Between the time that non-smoking sections were established on US carriers in 1973, and the two hour US smoking ban in 1988, commercial aircraft ventilation rates had declined three times as fast as smoking prevalence. The aircraft cabin provided the least volume and lowest ventilation rate per smoker of any social venue, including stand up bars and smoking lounges, and afforded an abnormal respiratory environment. Personal monitors showed little difference in SHS exposures between flight attendants assigned to smoking sections and those assigned to non-smoking sections of aircraft cabins. Conclusions: In-flight air quality measurements in ~250 aircraft, generalised by models, indicate that when smoking was permitted aloft, 95% of the harmful respirable suspended particle (RSP) air pollution in the smoking sections and 85% of that in the non-smoking sections of aircraft cabins was caused by SHS. Typical levels of SHS-RSP on aircraft violated current (PM2.5) federal air quality standards ~threefold for flight attendants, and exceeded SHS irritation thresholds by 10 to 100 times. From cotinine dosimetry, SHS exposure of typical flight attendants in aircraft cabins is estimated to have been >6-fold that of the average US worker and ~14-fold that of the average person. 
Thus, ventilation systems massively failed to control SHS air pollution in aircraft cabins. These results have implications for studies of the past and future health of flight attendants. PMID:14985612

  4. Typical gray matter axons in mammalian brain fail to conduct action potentials faithfully at fever-like temperatures.

    PubMed

    Pekala, Dobromila; Szkudlarek, Hanna; Raastad, Morten

    2016-10-01

    We studied the ability of typical unmyelinated cortical axons to conduct action potentials at fever-like temperatures because fever often gives CNS symptoms. We investigated such axons in cerebellar and hippocampal slices from 10- to 25-day-old rats at temperatures between 30 and 43°C. By recording with two electrodes along axonal pathways, we confirmed that the axons were able to initiate action potentials, but at temperatures >39°C, the propagation of the action potentials to a more distal recording site was reduced. This temperature-sensitive conduction may be specific to the very thin unmyelinated axons because similar recordings from myelinated CNS axons did not show conduction failures. We found that conduction fidelity improved with 1 mmol/L TEA in the bath, probably due to block of voltage-sensitive potassium channels responsible for the fast repolarization of action potentials. Furthermore, by recording electrically activated antidromic action potentials from the soma of cerebellar granule cells, we showed that the axons failed less if they were triggered 10-30 msec after another action potential. This was because individual action potentials were followed by a depolarizing after-potential, of constant amplitude and shape, which facilitated conduction of the following action potentials. The temperature-sensitive conduction failures at temperatures above, but not below, normal body temperature, and the failure-reducing effect of the spike's depolarizing after-potential, are two intrinsic mechanisms in normal gray matter axons that may help us understand how the hyperthermic brain functions. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  5. An Apparatus for Monitoring the Health of Electrical Cables

    NASA Technical Reports Server (NTRS)

    Pai, Devdas M.; Tatum, Paul; Pace, Rachel

    2004-01-01

    As with most elements of infrastructure, electrical wiring is inconspicuous: usually hidden away and unnoticed until it fails. Failure of infrastructure, however, sometimes leads to serious health and safety hazards. Electrical wiring fails when the polymeric (usually rubber) insulation material that sheathes the conductor becomes embrittled with age from exposure to pressure, temperature or radiation cycling, or when the insulation is removed by the chafing of wires against each other. Miles of such wiring can be found in a typical aircraft, with significant lengths of the wiring immersed in aviation fuel, a recipe for an explosion if a spark were to occur. Diagnosing the health of wiring is thus an important aspect of monitoring the health of aging aircraft. Stress wave propagation through wiring affords a quick and non-invasive method for health monitoring. The extent to which a stress wave propagating through the cable core is attenuated depends on the condition of the surrounding insulation. When the insulation is in good condition (supple and pliable), there is more damping or attenuation of the waveform. As the insulation becomes embrittled and cracked, the attenuation is likely to decrease and the waveform of the propagating stress wave is likely to change. Monitoring these changes provides a potential tool to evaluate wiring or cabling in service that is not accessible for visual inspection. This experiment has been designed for use in an introductory mechanical or materials engineering instrumentation lab. Initial setup (after procuring all the materials) should take the lab instructor about 4 hours. A single measurement can be initiated and saved to disk in less than 3 minutes, allowing all the students in a typical lab section to take their own data rather than share a single set of data for the entire class.

  6. A model of nonparticipation in alcohol treatment programs.

    PubMed

    Burton, T L; Williamson, D L

    1997-01-01

    Why do the vast majority of those who suffer harm from drinking fail to obtain treatment? Based on a review of research literature and educational and treatment program materials, a model of nonparticipation in treatment is proposed whereby particular population groups are separated out according to whether or not they exhibit specified characteristics related to both harm from drinking and attitudes towards treatment. Eleven groups have been identified in the model, each of which has different reasons for failing to seek and/or obtain treatment. It is suggested that differing educational program messages should be sent to each group. While the model does not purport to be wholly inclusive of all nonparticipation, it offers a basis for addressing the variety of disparate groups that suffer harm from drinking but do not obtain treatment.

  7. OGLE-2014-SN-073 as a fallback accretion powered supernova

    NASA Astrophysics Data System (ADS)

    Moriya, Takashi J.; Terreran, Giacomo; Blinnikov, Sergei I.

    2018-03-01

    We investigate the possibility that the energetic Type II supernova OGLE-2014-SN-073 is powered by a fallback accretion following the failed explosion of a massive star. Taking massive hydrogen-rich supernova progenitor models, we estimate the fallback accretion rate and calculate the light-curve evolution of supernovae powered by the fallback accretion. We find that such fallback accretion powered models can reproduce the overall observational properties of OGLE-2014-SN-073. It may imply that some failed explosions could be observed as energetic supernovae like OGLE-2014-SN-073 instead of faint supernovae as previously proposed.
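
    Fallback accretion rates are commonly taken to decline as t^(-5/3) after a peak. A toy parameterization of such a power source (an assumed canonical form for illustration; the paper derives the rate from hydrodynamical progenitor models):

    ```python
    def fallback_accretion_rate(t, mdot_peak, t_peak):
        """Fallback accretion rate: roughly constant up to the peak time,
        then the canonical t**(-5/3) power-law decline. Units follow
        whatever mdot_peak and t_peak are given in."""
        if t <= t_peak:
            return mdot_peak
        return mdot_peak * (t / t_peak) ** (-5.0 / 3.0)
    ```

    The accretion power available to the supernova light curve would then scale with this rate times an efficiency factor.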

  8. Failure Analysis of Space Shuttle Orbiter Valve Poppet

    NASA Technical Reports Server (NTRS)

    Russell, Rick

    2010-01-01

    The poppet failed during STS-126 due to fatigue cracking that most likely was initiated during MDC ground-testing. This failure ultimately led to the discovery that the cracking problem was a generic issue affecting numerous poppets throughout the Shuttle program's history. This presentation has focused on the laboratory analysis of the failed hardware, but this analysis was only one aspect of a comprehensive failure investigation. One critical aspect of the overall investigation was modeling of the fluid flow through this valve to determine the possible sources of cyclic loading. This work has led to the conclusion that the poppets are failing due to flow-induced vibration.

  9. Numerical modeling of a vortex stabilized arcjet

    NASA Astrophysics Data System (ADS)

    Pawlas, Gary Edward

    Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Satellite station-keeping is an example of a maneuvering application requiring the low thrust and high specific impulse of an arcjet. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets, including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging-diverging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. The equations that govern the flow through a constricted arcjet thruster are described. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single-fluid plasma temperature model. An order of magnitude analysis reveals that the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is used in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split and Gauss-Seidel line relaxation is used to accelerate convergence.
Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Comparisons with experimental data and previous numerical results showed excellent agreement. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures.

  10. The economics and ethics of aerosol geoengineering strategies

    NASA Astrophysics Data System (ADS)

    Goes, Marlos; Keller, Klaus; Tuana, Nancy

    2010-05-01

    Anthropogenic greenhouse gas emissions are changing the Earth's climate and impose substantial risks for current and future generations. What are scientifically sound, economically viable, and ethically defensible strategies to manage these climate risks? Ratified international agreements call for a reduction of greenhouse gas emissions to avoid dangerous anthropogenic interference with the climate system. Recent proposals, however, call for a different approach: geoengineering climate by injecting aerosol precursors into the stratosphere. Published economic studies typically neglect the risks of aerosol geoengineering due to (i) a potential failure to sustain the aerosol forcing and (ii) potential negative impacts associated with aerosol forcings. Here we use a simple integrated assessment model of climate change to analyze potential economic impacts of aerosol geoengineering strategies over a wide range of uncertain parameters such as climate sensitivity, the economic damages due to climate change, and the economic damages due to aerosol geoengineering forcings. The simplicity of the model provides the advantages of parsimony and transparency, but it also imposes considerable caveats. For example, the analysis is based on a globally aggregated model and is hence silent on intragenerational distribution of costs and benefits. In addition, the analysis neglects the effects of future learning and is based on a simple representation of climate change impacts. We use this integrated assessment model to show three main points. First, substituting aerosol geoengineering for the reduction of greenhouse gas emissions can fail the test of economic efficiency. One key to this finding is that a failure to sustain the aerosol forcing can lead to sizeable and abrupt climatic changes. 
The monetary damages due to such a discontinuous aerosol geoengineering can dominate the cost-benefit analysis because the monetary damages of climate change are expected to increase with the rate of change. Second, the relative contribution of aerosol geoengineering to an economically optimal portfolio hinges critically on deeply uncertain estimates of the damages due to aerosol forcing. Even if we assume that aerosol forcing could be deployed continuously, aerosol geoengineering does not considerably displace the reduction of greenhouse gas emissions in the simple economic optimal growth model unless the damages due to the aerosol forcing are assumed to be rather low. Third, deploying aerosol geoengineering may also fail an ethical test regarding issues of intergenerational justice. Substituting aerosol geoengineering for reducing greenhouse gas emissions constitutes a conscious risk transfer to future generations, for example due to the increased risk of future abrupt climate change. This risk transfer is in tension with the requirement of intergenerational justice that present generations should not create benefits for themselves in exchange for burdens on future generations.

  11. Genetic podocyte lineage reveals progressive podocytopenia with parietal cell hyperplasia in a murine model of cellular/collapsing focal segmental glomerulosclerosis.

    PubMed

    Suzuki, Taisei; Matsusaka, Taiji; Nakayama, Makiko; Asano, Takako; Watanabe, Teruo; Ichikawa, Iekuni; Nagata, Michio

    2009-05-01

    Focal segmental glomerulosclerosis (FSGS) is a progressive renal disease, and the glomerular visceral cell hyperplasia typically observed in cellular/collapsing FSGS is an important pathological factor in disease progression. However, the cellular features that promote FSGS currently remain obscure. To determine both the origin and phenotypic alterations in hyperplastic cells in cellular/collapsing FSGS, the present study used a previously described FSGS model in p21-deficient mice with visceral cell hyperplasia and identified the podocyte lineage by genetic tagging. The p21-deficient mice with nephropathy showed significantly higher urinary protein levels, extracapillary hyperplastic indices on day 5, and glomerular sclerosis indices on day 14 than wild-type controls. X-gal staining and immunohistochemistry for podocyte and parietal epithelial cell (PEC) markers revealed progressive podocytopenia with capillary collapse accompanied by PEC hyperplasia leading to FSGS. In our investigation, non-tagged cells expressed neither WT1 nor nestin. Ki-67, a proliferation marker, was rarely associated with podocytes but was expressed at high levels in PECs. Both terminal deoxynucleotidyl transferase dUTP nick-end labeling staining and electron microscopy failed to show evidence of significant podocyte apoptosis on days 5 and 14. These findings suggest that extensive podocyte loss and simultaneous PEC hyperplasia is an actual pathology that may contribute to the progression of cellular/collapsing FSGS in this mouse model. Additionally, this is the first study to demonstrate the regulatory role of p21 in the PEC cell cycle.

  12. Did 250 years of forest management in Europe cool the climate?

    NASA Astrophysics Data System (ADS)

    Naudts, Kim; Chen, Yiying; McGrath, Matthew; Ryder, James; Valade, Aude; Otto, Juliane; Luyssaert, Sebastiaan

    2016-04-01

    Over the past two centuries European forest has evolved from being an over-exploited source of timber to a sustainably managed provider of diverse ecosystem services. Although this transition is often perceived as exemplary in resources management, the loss of unmanaged forest, the progressive shift from traditional coppice forestry to the current production-oriented management and the massive conversion of broadleaved to coniferous species are typically overlooked when assessing the impact of land-use change on climate. Here we present a study that addressed this gap by: (1) developing and reparameterizing the ORCHIDEE land surface model to simulate the biogeochemical and biophysical effects of forest management, (2) reconstructing the land-use history of Europe, accounting for changes in forest management and land cover. The model was coupled to the atmospheric model LMDz in a factorial simulation experiment to attribute climate change to global anthropogenic greenhouse gas emission and European land-use change since 1750 (i.e., afforestation, wood extraction and species conversion). We find that, despite considerable afforestation, Europe's forests failed to realize a net removal of CO2 from the atmosphere due to wood extraction. Moreover, biophysical changes due to the conversion of deciduous forest into coniferous forest have offset mitigation through the carbon cycle. Thus, two and a half centuries of forest management in Europe did not mitigate climate warming (Naudts et al., 2016). Naudts, K., Chen, Y., McGrath, M.J., Ryder, J., Valade, A., Otto, J., Luyssaert, S, Europe's forest management did not mitigate climate warming, Science, Accepted.

  13. High Ethanol Fuel Endurance: A Study of the Effects of Running Gasoline with 15% Ethanol Concentration in Current Production Outboard Four-Stroke Engines and Conventional Two-Stroke Outboard Marine Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilbert, D.

    2011-10-01

    Three Mercury Marine outboard marine engines were evaluated for durability using E15 fuel -- gasoline blended with 15% ethanol. Direct comparison was made to operation on E0 (ethanol-free gasoline) to determine the effects of increased ethanol on engine durability. Testing was conducted using a 300-hour wide-open throttle (WOT) test protocol, a typical durability cycle used by the outboard marine industry. Use of E15 resulted in reduced CO emissions, as expected for open-loop, non-feedback control engines. HC emissions effects were variable. Exhaust gas and engine operating temperatures increased as a consequence of leaner operation. Each E15 test engine exhibited some deterioration that may have been related to the test fuel. The 9.9 HP, four-stroke E15 engine exhibited variable hydrocarbon emissions at 300 hours -- an indication of lean misfire. The 300HP, four-stroke, supercharged Verado engine and the 200HP, two-stroke legacy engine tested with E15 fuel failed to complete the durability test. The Verado engine failed three exhaust valves at 285 endurance hours while the 200HP legacy engine failed a main crank bearing at 256 endurance hours. All E0-dedicated engines completed the durability cycle without incident. Additional testing is necessary to link the observed engine failures to ethanol in the test fuel.

  14. Chimpanzee ‘folk physics’: bringing failures into focus

    PubMed Central

    Seed, Amanda; Seddon, Eleanor; Greene, Bláthnaid; Call, Josep

    2012-01-01

    Differences between individuals are the raw material from which theories of the evolution and ontogeny of cognition are built. For example, when 4-year-old children pass a test requiring them to communicate the content of another's falsely held belief, while 3-year-olds fail, we know that something must change over the course of the third year of life. In the search for what develops or evolves, the typical route is to probe the extents and limits of successful individuals' ability. Another is to focus on those that failed, and find out what difference or lack prevented them from passing the task. Recent research in developmental psychology has harnessed individual differences to illuminate the cognitive mechanisms that emerge to enable success. We apply this approach to explaining some of the failures made by chimpanzees when using tools to solve problems. Twelve of 16 chimpanzees failed to discriminate between a complete and a broken tool when, after being set down, the ends of the broken one were aligned in front of them. There was a correlation between performance on this aligned task and another in which after being set down, the centre of both tools was covered, suggesting that the limiting factor was not the representation of connection, but memory or attention. Some chimpanzees that passed the aligned task passed a task in which the location of the broken tool was never visible but had to be inferred. PMID:22927573

  15. 76 FR 6536 - Airworthiness Directives; Bombardier, Inc. Model CL-215-1A10 (CL-215), CL-215-6B11 (CL-215T...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-07

    ... analysis of the systems and structure in the potential line of trajectory of a failed screw cap/end cap for... of aileron control [and consequent reduced controllability of the airplane]. * * * * * We are issuing... in the potential line of trajectory of a failed screw cap/end cap for each accumulator has been...

  16. 75 FR 68728 - Airworthiness Directives; Bombardier, Inc. Model CL-215-1A10 (CL-215), CL-215-6B11 (CL-215T...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... structure in the potential line of trajectory of a failed screw cap/end cap for each accumulator has been..., potentially resulting in fuel spillage, uncommanded flap movement, or loss of aileron control [and consequent... and structure in the potential line of trajectory of a failed screw cap/end cap for each accumulator...

  17. Predicting High School Completion Using Student Performance in High School Algebra: A Mixed Methods Research Study

    ERIC Educational Resources Information Center

    Chiado, Wendy S.

    2012-01-01

    Too many of our nation's youth have failed to complete high school. Determining why so many of our nation's students fail to graduate is a complex, multi-faceted problem and beyond the scope of any one study. The study presented herein utilized a thirteen-step mixed methods model developed by Leech and Onwuegbuzie (2007) to demonstrate within a…

  18. A Gathering Storm: How Palm Beach County Schools Fail Poor and Minority Children.

    ERIC Educational Resources Information Center

    Carmona, Lisa A.; Wheelock, Anne; First, Joan

    This report takes a hard look at the day-to-day workings of Palm Beach County (Florida) schools to explain why the systemic change model of Florida's current reform legislation is likely to fail the students in greatest need of improved schooling. The Palm Beach County School District is the 4th largest district in Florida, and the 15th largest in…

  19. In Search of Black Swans: Identifying Students at Risk of Failing Licensing Examinations.

    PubMed

    Barber, Cassandra; Hammond, Robert; Gula, Lorne; Tithecott, Gary; Chahine, Saad

    2018-03-01

    To determine which admissions variables and curricular outcomes are predictive of being at risk of failing the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1), how quickly student risk of failure can be predicted, and to what extent predictive modeling is possible and accurate in estimating future student risk. Data from five graduating cohorts (2011-2015), Schulich School of Medicine & Dentistry, Western University, were collected and analyzed using hierarchical generalized linear models (HGLMs). Area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy of predictive models and determine whether they could be used to predict future risk, using the 2016 graduating cohort. Four predictive models were developed to predict student risk of failure at admissions, year 1, year 2, and pre-MCCQE1. The HGLM analyses identified gender, MCAT verbal reasoning score, two preclerkship course mean grades, and the year 4 summative objective structured clinical examination score as significant predictors of student risk. The predictive accuracy of the models varied. The pre-MCCQE1 model was the most accurate at predicting a student's risk of failing (AUC 0.66-0.93), while the admissions model was not predictive (AUC 0.25-0.47). Key variables predictive of students at risk were found. The predictive models developed suggest, while it is not possible to identify student risk at admission, we can begin to identify and monitor students within the first year. Using such models, programs may be able to identify and monitor students at risk quantitatively and develop tailored intervention strategies.
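
    The AUC figures quoted above can be computed directly from model risk scores via the rank-sum (Mann-Whitney) identity. A minimal sketch (not the authors' code; names are illustrative):

    ```python
    def auc(labels, scores):
        """Area under the ROC curve: the probability that a randomly chosen
        at-risk student (label 1) receives a higher risk score than a
        randomly chosen not-at-risk student (label 0); ties count half."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    An AUC of 0.5 is chance level, which is why the admissions model's 0.25-0.47 range was judged non-predictive.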

  20. Heat up and failure of BWR upper internals during a severe accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.

    In boiling water reactors, the shroud dome, separators, and dryers above the core are made of approximately 100,000 kg of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. In this scenario, the upper internals can also be heated by thermal radiation from the hot degrading core. Historically, models of the upper internals have been relatively simple in severe accident codes. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics and no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for and the subsequent impact of the upper internals to heat up, oxidize, fail, and relocate during a severe accident was investigated. A higher fidelity representation of the shroud dome, steam separators, and steam driers was developed in MELCOR v1.8.6 by extending the core region upwards. The MELCOR modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. Consistent with a previous study, the results indicate that the upper internals can reach high temperatures during a severe accident sufficient to lose their structural integrity and relocate. Finally, the additional 100 metric tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.

  1. Heat up and failure of BWR upper internals during a severe accident

    DOE PAGES

    Robb, Kevin R.

    2017-02-21

    In boiling water reactors, the shroud dome, separators, and dryers above the core are made of approximately 100,000 kg of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. In this scenario, the upper internals can also be heated by thermal radiation from the hot degrading core. Historically, models of the upper internals have been relatively simple in severe accident codes. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics and no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for and the subsequent impact of the upper internals to heat up, oxidize, fail, and relocate during a severe accident was investigated. A higher fidelity representation of the shroud dome, steam separators, and steam driers was developed in MELCOR v1.8.6 by extending the core region upwards. The MELCOR modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. Consistent with a previous study, the results indicate that the upper internals can reach high temperatures during a severe accident sufficient to lose their structural integrity and relocate. Finally, the additional 100 metric tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leskovar, Matjaz; Koncar, Bostjan

    An ex-vessel steam explosion may occur when during a severe reactor accident the reactor vessel fails and the molten core pours into the water in the reactor cavity. A steam explosion is a fuel coolant interaction process where the heat transfer from the melt to water is so intense and rapid that the timescale for heat transfer is shorter than the timescale for pressure relief. This can lead to the formation of shock waves and production of missiles at later times, during the expansion of the highly pressurized water vapor, that may endanger surrounding structures. In contrast to specialized steam explosion CFD codes, where the steam explosion is modeled on micro-scale using fundamental averaged multiphase flow conservation equations, in the presented approach the steam explosion is modeled in a simplified manner as an expanding high-pressure pre-mixture of dispersed molten fuel, liquid water and vapor. Applying the developed steam explosion model, a comprehensive analysis of the ex-vessel steam explosion in a typical PWR reactor cavity was done using the CFD code CFX-10. At four selected locations, which are of importance for the assessment of the vulnerability of cavity structures, the pressure histories were recorded and the corresponding pressure impulses calculated. The pressure impulses determine the destructive potential of the steam explosion and represent the input for the structural mechanical analysis of the cavity structures. The simulation results show that the pressure impulses depend mainly on the steam explosion energy conversion ratio, whereas the influence of the pre-mixture vapor volume fraction, which is a parameter in our model and determines the maximum steam explosion pressure, is not significant. (authors)
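
    The pressure impulse referred to above is the time integral of the recorded pressure history at a location. A minimal trapezoidal-rule sketch (illustrative only, not the authors' CFX post-processing):

    ```python
    def pressure_impulse(times, pressures):
        """Pressure impulse (Pa*s) at a recording location: the integral of
        the pressure history over time, by the trapezoidal rule."""
        return sum(0.5 * (pressures[i] + pressures[i + 1]) * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    ```

    The resulting impulse, rather than the peak pressure alone, is what feeds the structural mechanical analysis of the cavity walls.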

  3. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process effectively generated the precipitation occurrence, amount, and spatial correlation. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
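
    The two-step logic separates a fitted occurrence model from a fitted amount model. A minimal sketch of how the two are combined at a grid point (the model callables and threshold are placeholders, not the paper's implementation):

    ```python
    def two_step_estimate(features, p_wet, wet_amount, threshold=0.5):
        """Two-step daily precipitation estimate at one grid point.

        p_wet      -- fitted occurrence model (e.g. logistic regression)
                      returning the probability the day is wet
        wet_amount -- fitted amount model (e.g. locally weighted polynomial
                      regression) trained on wet days only

        Classifying wet/dry first avoids smearing small nonzero amounts
        across days that should be dry.
        """
        if p_wet(features) < threshold:
            return 0.0
        return wet_amount(features)
    ```

    Keeping the occurrence and amount models separate is what lets the scheme reproduce both the intermittency and the magnitude of the daily precipitation field.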

  4. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  5. Modeling Organic Contaminant Desorption from Municipal Solid Waste Components

    NASA Astrophysics Data System (ADS)

    Knappe, D. R.; Wu, B.; Barlaz, M. A.

    2002-12-01

    Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. 
For the older MSW mixture, which had a low plastics content, the model predicted that 50% of the initially sorbed toluene desorbed over a period of 5.8 days. In contrast, the model predicted that 50% of the initially sorbed toluene desorbed over a period of 4 years for the newer MSW mixture. These results suggest that toluene desorption rates from old MSW mixtures exceed methanogenic toluene degradation rates (toluene half-lives of about 30 to 100 days have been reported for methanogenic systems) and thus imply that biodegradation kinetics control the rate at which sorbed toluene is mineralized in old landfills. For newer MSW mixtures with a larger plastics content, toluene desorption rates are substantially slower; therefore, toluene desorption kinetics likely control the rate at which sorbed toluene can be mineralized in new landfills.
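
    The biphasic behavior described above (rapid initial release followed by slow long-term release) can be illustrated with a two-compartment first-order approximation. This is a simplified stand-in with hypothetical parameter names, not the paper's three-parameter biphasic diffusion model:

    ```python
    import math

    def fraction_desorbed(t, f_fast, k_fast, k_slow):
        """Fraction of sorbed contaminant released by time t, assuming a
        fast-desorbing fraction f_fast with rate constant k_fast and a
        slow fraction (1 - f_fast) with rate constant k_slow."""
        remaining = (f_fast * math.exp(-k_fast * t)
                     + (1.0 - f_fast) * math.exp(-k_slow * t))
        return 1.0 - remaining
    ```

    With a small fast fraction and a very small slow rate constant, 50% release times stretch from days to years, qualitatively mirroring the contrast between the low-plastics 1960 mixture and the plastics-rich 1997 mixture.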

  6. Intraepidermal Merkel cell carcinoma: A case series of a rare entity with clinical follow up.

    PubMed

    Jour, George; Aung, Phyu P; Rozas-Muñoz, Eduardo; Curry, Johnathan L; Prieto, Victor; Ivan, Doina

    2017-08-01

    Merkel cell carcinoma (MCC) is a rare but aggressive cutaneous carcinoma. MCC typically involves the dermis and, although epidermotropism has been reported, MCC that is strictly intraepidermal or in situ (MCCIS) is exceedingly rare. Most of the cases of MCCIS described so far have other associated lesions, such as squamous or basal cell carcinoma, actinic keratosis and so on. Herein, we describe 3 patients with MCC strictly in situ, without a dermal component. Our patients were elderly; two of the lesions involved the head and neck area and one was on a finger. All tumors were strictly intraepidermal in the diagnostic biopsies, and had histomorphologic features and an immunohistochemical profile supporting the diagnosis of MCC. Excisional biopsies were performed in 2 cases and failed to reveal dermal involvement by MCC or other associated malignancies. Our findings raise the awareness that MCC strictly in situ does exist and it should be included in the differential diagnosis of Paget's or extramammary Paget's disease, pagetoid squamous cell carcinoma, melanoma and other neoplasms that typically show histologically pagetoid extension of neoplastic cells. Considering the limited number of cases reported to date, the diagnosis of isolated MCCIS should not warrant a change in management from the typical MCC. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Effective learning and retention of braille letter tactile discrimination skills in children with developmental dyslexia.

    PubMed

    Hayek, Maisam; Dorfberger, Shoshi; Karni, Avi

    2016-01-01

    Children with developmental dyslexia (DD) may differ from typical readers in aspects other than reading. The notion of a general deficit in the ability to acquire and retain procedural ('how to') knowledge as long-term procedural memory has been proposed. Here, we compared the ability of elementary school children, with and without reading difficulties (DD, typical readers), to improve their tactile discrimination with practice and tested the children's ability to retain the gains. Forty 10-11-year-olds practiced the tactile discrimination of four braille letters, presented as pairs, while blindfolded. In a trial, participants were asked to report whether the target stimuli were identical or different from each other. The structured training session consisted of six blocks of 16 trials each. Performance was re-tested at 24 hours and two weeks post-training. Both groups improved in speed and in accuracy. In session 1, children with DD started as significantly less accurate and were slower than the typical readers but showed rapid learning and successfully closed the gap. Only two children with DD failed to benefit from training and were not included in subsequent data analyses. At 24 hours post-training both groups showed effective retention of the gains in speed and accuracy. Importantly, children with DD were able to retain the gains in speed and accuracy, over a two-week interval as effectively as typical readers. Thus, children with DD were as effective in the acquisition and retention of tactile discrimination of braille letters as typical readers of the same age. The results do not support the notion of a general procedural learning disability in DD. © 2015 John Wiley & Sons Ltd.

  8. Comparative Normal/Failing Rat Myocardium Cell Membrane Chromatographic Analysis System for Screening Specific Components That Counteract Doxorubicin-Induced Heart Failure from Aconitum carmichaelii

    PubMed Central

    2015-01-01

    Cell membrane chromatography (CMC) derived from pathological tissues is ideal for screening, from complex medicines, specific components that act on specific diseases, owing to its maximum simulation of in vivo drug-receptor interactions. However, no pathological tissue-derived CMC model has been developed to date, nor has a side-by-side affinity comparison of potential active components between normal and pathological CMC columns been reported. In this study, a novel comparative normal/failing rat myocardium CMC analysis system, based on online column selection and comprehensive two-dimensional (2D) chromatography/monolithic column/time-of-flight mass spectrometry, was developed for parallel comparison of chromatographic behaviors on both normal and pathological CMC columns, as well as for rapid screening of specific therapeutic agents that counteract doxorubicin (DOX)-induced heart failure from Aconitum carmichaelii (Fuzi). In total, 16 potential active alkaloid components with similar structures in Fuzi were retained on both the normal and failing myocardium CMC models. Most of them showed obvious decreases in affinity on the failing myocardium CMC model compared with the normal CMC model, except for four components: talatizamine (TALA), 14-acetyl-TALA, hetisine, and 14-benzoylneoline. TALA, the compound with the highest affinity, was isolated for further in vitro pharmacodynamic validation and target identification to confirm the screening results. A voltage-dependent K+ channel was confirmed as a binding target of TALA and 14-acetyl-TALA, with high affinities. This online high-throughput comparative CMC analysis method is suitable for screening specific active components from herbal medicines, increasing the specificity of screening results, and can also be applied to other biological chromatography models. PMID:24731167

  9. Cardiac arrhythmia mechanisms in rats with heart failure induced by pulmonary hypertension

    PubMed Central

    Benoist, David; Stones, Rachel; Drinkhill, Mark J.; Benson, Alan P.; Yang, Zhaokang; Cassan, Cecile; Gilbert, Stephen H.; Saint, David A.; Cazorla, Olivier; Steele, Derek S.; Bernus, Olivier

    2012-01-01

    Pulmonary hypertension provokes right heart failure and arrhythmias. Better understanding of the mechanisms underlying these arrhythmias is needed to facilitate new therapeutic approaches for the hypertensive, failing right ventricle (RV). The aim of our study was to identify the mechanisms generating arrhythmias in a model of RV failure induced by pulmonary hypertension. Rats were injected with monocrotaline to induce either RV hypertrophy or failure or with saline (control). ECGs were measured in conscious, unrestrained animals by telemetry. In isolated hearts, electrical activity was measured by optical mapping and myofiber orientation by diffusion tensor-MRI. Sarcoplasmic reticular Ca2+ handling was studied in single myocytes. Compared with control animals, the T-wave of the ECG was prolonged, and in three of seven heart failure animals prominent T-wave alternans occurred. Discordant action potential (AP) alternans occurred in isolated failing hearts, and Ca2+ transient alternans in failing myocytes. In failing hearts, AP duration and dispersion were increased, and the restitution of both conduction velocity and AP duration was steeper; the steeper AP restitution was intrinsic to failing single myocytes. Failing hearts had greater fiber angle disarray, which correlated with AP duration. Failing myocytes had reduced sarco(endo)plasmic reticular Ca2+-ATPase activity, increased sarcoplasmic reticular Ca2+-release fraction, and increased Ca2+ spark leak. In hypertrophied hearts and myocytes, dysfunctional adaptation had begun, but alternans did not develop. We conclude that increased electrical and structural heterogeneity and dysfunctional sarcoplasmic reticular Ca2+ handling increased the probability of alternans, a proarrhythmic predictor of sudden cardiac death. These mechanisms are potential therapeutic targets for the correction of arrhythmias in hypertensive, failing RVs. PMID:22427523

  10. Learning from Evidence in a Complex World

    PubMed Central

    Sterman, John D.

    2006-01-01

    Policies to promote public health and welfare often fail or worsen the problems they are intended to solve. Evidence-based learning should prevent such policy resistance, but learning in complex systems is often weak and slow. Complexity hinders our ability to discover the delayed and distal impacts of interventions, generating unintended “side effects.” Yet learning often fails even when strong evidence is available: common mental models lead to erroneous but self-confirming inferences, allowing harmful beliefs and behaviors to persist and undermining implementation of beneficial policies. Here I show how systems thinking and simulation modeling can help expand the boundaries of our mental models, enhance our ability to generate and learn from evidence, and catalyze effective change in public health and beyond. PMID:16449579

  11. Visual Working Memory Cannot Trade Quantity for Quality.

    PubMed

    Ramaty, Ayelet; Luria, Roy

    2018-01-01

    Two main models have been proposed to describe how visual working memory (WM) allocates its capacity: the slot model and the continuous resource model. The purpose of the current study was to test a direct prediction of the resource model, namely that WM can trade off the quantity of encoded items against the quality of their representations. Previous research reported equivocal results, with some studies failing to find such a trade-off and others reporting one. Following the design of previous studies, in Experiment 1 we replicated this trade-off by presenting the memory array for 1200 ms. Experiment 2 failed to observe a trade-off between quantity and quality using a memory array interval of 300 ms (a standard interval for visual WM). Experiment 3 again failed to find this trade-off when reinstating the 1200 ms memory array interval but adding an articulatory suppression manipulation. We argue that while participants can trade quantity for quality, this pattern depends on verbal encoding and transfer to long-term memory, processes that were possible only during the long retention interval. When these processes were eliminated, the trade-off disappeared. Thus, we found no evidence that a trade-off between quantity and quality can occur within visual WM.

  12. Relationship between the transverse-field Ising model and the XY model via the rotating-wave approximation

    NASA Astrophysics Data System (ADS)

    Kiely, Thomas G.; Freericks, J. K.

    2018-02-01

    In a large transverse field, there is an energy cost associated with flipping spins along the axis of the field. This penalty can be employed to relate the transverse-field Ising model in a large field to the XY model in no field (when measurements are performed at the proper stroboscopic times). We describe the details of how this relationship works and, in particular, show under what circumstances it fails. We examine wave-function overlap between the two models as well as observables such as spin-spin Green's functions. In general, the mapping is quite robust at short times but will ultimately fail if the run time becomes too long. There is also a tradeoff, which must be balanced when planning to employ this mapping, between how long a simulation can be run and the time jitter of the stroboscopic measurements.
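The essence of the mapping can be sketched in a few lines of standard rotating-wave algebra (conventions such as the sign of the field term and factors of 2 are generic here and may differ from the paper's):

```latex
% Transverse-field Ising chain in the field-dominated regime (h \gg J):
H = -J\sum_i \sigma^x_i\,\sigma^x_{i+1} + h\sum_i \sigma^z_i .
% In the frame rotating with the field, \sigma^\pm_j \to \sigma^\pm_j e^{\pm 2iht}, so
\sigma^x_i\,\sigma^x_{i+1} \;\to\;
  \sigma^+_i\sigma^-_{i+1} + \sigma^-_i\sigma^+_{i+1}
  + \bigl(e^{4iht}\,\sigma^+_i\sigma^+_{i+1} + \mathrm{h.c.}\bigr).
% Dropping the rapidly oscillating terms (rotating-wave approximation) leaves
H_{\mathrm{eff}} = -\frac{J}{2}\sum_i
  \bigl(\sigma^x_i\sigma^x_{i+1} + \sigma^y_i\sigma^y_{i+1}\bigr),
% an XY (XX) chain in zero field. The two pictures agree at stroboscopic
% times t_n = \pi n / h, where the rotating-frame phases return to unity.
```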

  13. Exploring the Social Impact of Being a Typical Peer Model for Included Children with Autism Spectrum Disorder

    PubMed Central

    Locke, Jill; Fuller, Erin Rotheram; Kasari, Connie

    2014-01-01

    This study examined the social impact of being a typical peer model as part of a social skills intervention for children with autism spectrum disorder (ASD). Participants were drawn from a randomized-controlled-treatment trial that examined the effects of targeted interventions on the social networks of 60 elementary-aged children with ASD. Results demonstrated that typical peer models had higher social network centrality, more received friendship nominations, higher friendship quality, and less loneliness than non-peer models. Peer models were also more likely to be connected with children with ASD than non-peer models at baseline and exit. These results suggest that typical peers can be socially connected to children with ASD, as well as other classmates, and maintain a strong and positive role within the classroom. PMID:22215436

  14. Learning Factor Models of Students at Risk of Failing in the Early Stage of Tertiary Education

    ERIC Educational Resources Information Center

    Gray, Geraldine; McGuinness, Colm; Owende, Philip; Hofmann, Markus

    2016-01-01

    This paper reports on a study to predict students at risk of failing based on data available prior to commencement of first year. The study was conducted over three years, 2010 to 2012, on a student population from a range of academic disciplines, n=1,207. Data was gathered from both student enrollment data and an online, self-reporting,…

  15. Successful Internet Entrepreneurs Don't Have to Be College Dropouts: A Model for Nurturing College Students to Become Successful Internet Entrepreneurs

    ERIC Educational Resources Information Center

    Zhang, Sonya

    2014-01-01

    Some of today's most successful Internet entrepreneurs didn't graduate from college. Many young people today have followed the same path to pursue their dreams but ended up failing, which is not surprising given that 80% of startups fail within their first five years. As technology innovation and market competition on the Internet continue to accelerate, college students…

  16. A high-throughput screening approach to discovering good forms of biologically inspired visual representation.

    PubMed

    Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D

    2009-11-01

    While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
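The screening strategy described in the abstract (sample many parameter sets, evaluate each, keep the promising ones for further analysis) can be illustrated with a minimal random-search sketch. The parameter names and the toy scoring function below are invented for illustration; the models in the abstract are full vision architectures evaluated on GPUs and Cell processors.

```python
import random

# Toy stand-in for an expensive model evaluation: the score peaks at a
# mid-sized layer and a mid-sized pooling kernel. (Hypothetical surrogate,
# not the authors' benchmark.)
def evaluate(params):
    return -((params["n_units"] - 256) ** 2) / 1e4 - (params["pool"] - 5) ** 2

def screen(n_candidates, seed=0):
    """Randomly sample parameter sets and keep the top performers."""
    rng = random.Random(seed)
    space = {"n_units": [64, 128, 256, 512], "pool": [3, 5, 7, 9]}
    scored = []
    for _ in range(n_candidates):
        params = {k: rng.choice(v) for k, v in space.items()}
        scored.append((evaluate(params), params))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:3]  # promising candidates kept for further analysis

top = screen(200)
best_score, best_params = top[0]
```

The same loop parallelizes trivially across parameter sets, which is what makes stream-processing hardware attractive for this kind of screen.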

  17. Why the sacramento delta area differs from other parts of the great valley: numerical modeling of thermal structure and thermal subsidence of forearc basins

    USGS Publications Warehouse

    Mikhailov, V.O.; Parsons, T.; Simpson, R.W.; Timoshkina, E.P.; Williams, C.

    2007-01-01

    Data on present-day heat flow, subsidence history, and paleotemperature for the Sacramento Delta region, California, have been employed to constrain a numerical model of tectonic subsidence and thermal evolution of forearc basins. The model assumes an oceanic basement with an initial thermal profile dependent on its age subjected to refrigeration caused by a subducting slab. Subsidence in the Sacramento Delta region appears to be close to that expected for a forearc basin underlain by normal oceanic lithosphere of age 150 Ma, demonstrating that effects from both the initial thermal profile and the subduction process are necessary and sufficient. Subsidence at the eastern and northern borders of the Sacramento Valley is considerably less, approximating subsidence expected from the dynamics of the subduction zone alone. These results, together with other geophysical data, show that Sacramento Delta lithosphere, being thinner and having undergone deeper subsidence, must differ from lithosphere of the transitional type under other parts of the Sacramento Valley. Thermal modeling allows evaluation of the rheological properties of the lithosphere. Strength diagrams based on our thermal model show that, even under relatively slow deformation (10^-17 s^-1), the upper part of the delta crystalline crust (down to 20–22 km) can fail in brittle fashion, which is in agreement with deeper earthquake occurrence. Hypocentral depths of earthquakes under the Sacramento Delta region extend to nearly 20 km, whereas, in the Coast Ranges to the west, depths are typically less than 12–15 km. The greater width of the seismogenic zone in this area raises the possibility that, for fault segments of comparable length, earthquakes of somewhat greater magnitude might occur than in the Coast Ranges to the west.

  18. Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone

    PubMed Central

    Gerhard, Felipe; Kispersky, Tilman; Gutierrez, Gabrielle J.; Marder, Eve; Kramer, Mark; Eden, Uri

    2013-01-01

    Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. PMID:23874181
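As a toy illustration of the point-process idea (not the authors' model, which uses richer spike-history terms and full network structure), one can fit a Poisson GLM in which the log-rate of one neuron depends on another neuron's spiking in the previous time bin; a clearly positive fitted weight then signals an excitatory coupling. All simulation parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
pre = rng.random(T) < 0.1                     # "presynaptic" spike train
base, gain = -3.0, 2.0                        # true log-rate parameters
lam = np.exp(base + gain * np.roll(pre, 1))   # target rate rises after a pre spike
post = rng.random(T) < lam                    # "postsynaptic" spike train

# Fit log-rate(t) = b + w * pre(t-1) by gradient ascent on the Poisson
# log-likelihood; the problem is concave, so small steps converge.
x = np.roll(pre, 1).astype(float)
y = post.astype(float)
b, w = 0.0, 0.0
for _ in range(2000):
    rate = np.exp(b + w * x)
    b += 1e-4 * np.sum(y - rate)        # d logL / db
    w += 1e-4 * np.sum((y - rate) * x)  # d logL / dw
```

The fitted `w` recovers (approximately) the simulated gain, mirroring how relative connectivity estimates in the study are read off from fitted coupling terms rather than from raw correlations.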

  19. Measuring motivation using the transtheoretical (stages of change) model: A follow-up study of people who failed an online hearing screening.

    PubMed

    Ingo, Elisabeth; Brännström, K Jonas; Andersson, Gerhard; Lunner, Thomas; Laplante-Lévesque, Ariane

    2016-07-01

    Acceptance and readiness to seek professional help have been shown to be important factors for favourable audiological rehabilitation outcomes. Theories from health psychology, such as the transtheoretical (stages-of-change) model, could help explain behavioural change in people with hearing impairment. In recent studies, the University of Rhode Island change assessment (URICA) has been found to have good predictive validity. In a previous study, 224 Swedish adults who had failed an online hearing screening completed the URICA and two other measures of stages of change. This follow-up aimed to: (1) determine the prevalence of help-seeking at a hearing clinic and of hearing aid uptake, and (2) explore the predictive validity of the stages of change measures through a follow-up of the 224 participants who had failed a hearing screening 18 months previously. A total of 122 people (54%) completed the follow-up online questionnaire, comprising the three measures and questions about help-seeking experience and hearing aid uptake. Since failing the online hearing screening, 61% of participants had sought help. Good predictive validity was found for a one-item measure of stages of change. The Staging algorithm was the stages of change measure best able to predict help-seeking 18 months later.

  20. A New Model for Root Growth in Soil with Macropores

    NASA Astrophysics Data System (ADS)

    Landl, M.; Huber, K.; Schnepf, A.; Vanderborght, J.; Javaux, M.; Bengough, G.; Vereecken, H.

    2016-12-01

    In order to study soil-root interaction processes, dynamic root architecture models are needed which are linked to models that simulate water flow and nutrient transport in the soil-root system. Such models can be used to predict the impact of soil structural features, e.g. the presence of macropores in dense subsoil, on water and nutrient uptake by plants. In dynamic root architecture models, root growth is represented by moving root tips whose growth trajectories result in the creation of linear root segments. Typically, the direction of each new root segment is calculated as the vector sum of various direction-affecting components. The use of these established methods to simulate root growth in soil containing macropores, however, failed to reproduce experimentally observed root growth patterns. We therefore developed an alternative modelling approach in which we distinguish between, firstly, the driving force for root growth, which is determined by the orientation of the previous root segment together with the influence of gravitropism, and, secondly, soil mechanical resistance to root growth. The latter is expressed by root conductance, which represents the inverse of soil penetration resistance and is treated similarly to hydraulic conductivity in Darcy's law. In the presence of macropores, root conductance is anisotropic, which leads to a difference between the direction of the driving force and the direction of the root tip movement. The model was tested using data from the literature, at pot scale, at macropore scale, and in a series of simulations in which sensitivity to gravity and macropore orientation was evaluated. The model simulated root growth trajectories in structured soil at both single-root and whole-root-system scales, generating root systems that were similar to images from experiments. Its implementation in the three-dimensional soil and root water uptake model R-SWMS will enable the model to be used in the future to evaluate the effect of macropores on crop access to water and nutrients.
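A stripped-down sketch of the direction rule just described (hypothetical numbers; the actual R-SWMS implementation also handles elongation rate, branching, and spatially varying conductance): the driving force combines the previous segment's direction with a gravitropism term, and an anisotropic conductance tensor, playing the role of hydraulic conductivity in Darcy's law, bends the trajectory toward the macropore axis.

```python
import numpy as np

def tip_direction(prev_dir, gravity_weight, K):
    """New unit growth direction: conductance tensor K applied to the
    driving force (previous direction plus a downward gravitropism pull)."""
    g = np.array([0.0, 0.0, -1.0])           # gravitropism pulls downward
    force = prev_dir + gravity_weight * g    # driving force for growth
    moved = K @ force                        # anisotropic conductance bends it
    return moved / np.linalg.norm(moved)

# Conductance of a vertical macropore: easy growth along z, hard across it;
# bulk soil is isotropic. (Illustrative values only.)
K_macropore = np.diag([0.1, 0.1, 1.0])
K_bulk = np.eye(3)

prev = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)  # tip heading down at 45 deg
d_bulk = tip_direction(prev, 0.3, K_bulk)
d_pore = tip_direction(prev, 0.3, K_macropore)  # pulled toward the pore axis
```

In the macropore case the returned direction is more nearly vertical than in bulk soil, which is exactly the deflection-into-pores behavior the model was built to reproduce.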

  1. [Münchhausen syndrome].

    PubMed

    Robert, J C; Cremniter, D; Lejonc, J L

    1991-04-20

    Münchhausen's syndrome is characterized by factitious illnesses associated with hospital peregrination, pseudologia fantastica with a mythomanic discourse that includes strongly structured medical elements, passivity and dependence during examinations, and aggressiveness. The whole picture is so typical that the syndrome can easily be recognized. Cases of Münchhausen's syndrome by proxy (Meadow's syndrome) have been reported during the last few years; the condition concerns children suffering from diseases that are entirely induced by their parents and can be compared with the battered child syndrome. In terms of nosology, Münchhausen's syndrome figures among the pathomimias as a borderline state. Since it is impossible to establish positive relations with these patients, treatment fails in almost every case.

  2. The effects of message framing, involvement, and nicotine dependence on anti-smoking public service announcements.

    PubMed

    Jung, Wan S; Villegas, Jorge

    2011-01-01

    Anti-smoking Public Service Announcements (PSAs) typically emphasize the negative consequences of failing to quit smoking (negative frame), as opposed to emphasizing the benefits of quitting (positive frame). However, stressing the benefits of quitting sometimes produces better communication outcomes. Previous research on message framing has tried to identify factors affecting the impact of positive framing and negative framing. Data were collected on 188 undergraduates attending a southeastern university in the United States who were assigned randomly to view either positive or negative messages. Our study found that involvement and nicotine dependence moderated the impact of framed smoking-cessation messages on attitude toward the ad.

  3. The challenge of determining handedness in electron tomography and the use of DNA origami gold nanoparticle helices as molecular standards

    PubMed Central

    Briegel, Ariane; Pilhofer, Martin; Mastronarde, David N.; Jensen, Grant J.

    2013-01-01

    The apparent handedness of an EM-tomography reconstruction depends on a number of conventions and can be confused in many ways. As the number of different hardware and software combinations being used for electron tomography continues to climb, and the reconstructions being produced reach higher and higher resolutions, the need to verify the hand of the results has increased. Here we enumerate the various steps in a typical tomography experiment that affect handedness and show that DNA origami gold nanoparticle helices can be used as convenient and fail-safe handedness standards. PMID:23639902

  4. ["Long-branch Attraction" artifact in phylogenetic reconstruction].

    PubMed

    Li, Yi-Wei; Yu, Li; Zhang, Ya-Ping

    2007-06-01

    Phylogenetic reconstruction among various organisms not only helps us understand their evolutionary history but also addresses several fundamental evolutionary questions. Understanding the evolutionary relationships among organisms establishes the foundation for investigations in other biological disciplines. However, almost all of the widely used phylogenetic methods have limitations and fail to eliminate systematic errors effectively, preventing the reconstruction of true organismal relationships. The "Long-branch Attraction" (LBA) artifact is one of the most disturbing of these factors in phylogenetic reconstruction. In this review, the concept of LBA, methods for analyzing it, and strategies for avoiding and resolving it are summarized, and several typical examples are provided.

  5. The size and shape of Gum's nebula

    NASA Technical Reports Server (NTRS)

    Johnson, H. M.

    1971-01-01

    The ionizing light of the supernova which produced the Gum nebula is now fossilized in the still live, though failing, H II region. The main body of the nebula suggests a hollow center or shell form, with a characteristic radius of about half the distance to the outlying fragments. The edges of the main body patches are typically sharp and often bright. The structure of the Gum nebula appears to be dependent on the event of ionization and possibly on the details of heating. It is not now an unstructured ambient medium, as it may have been before the recent ionization. Several hypotheses are presented for a structured ambient medium.

  6. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  7. [Many faces of deinstitutionalization--sociological interpretation].

    PubMed

    Forster, R

    2000-09-01

    The article summarizes in an international perspective what kind of results psychiatric deinstitutionalization has brought so far: a profound change of size and functions of the psychiatric hospital; better services for people with less severe problems; and the failing of community services to compensate for some of the functions of the former asylums, resulting in trans-institutionalization and/or neglect for many chronic patients. Three different sociological versions to explain the background and typical outcomes of psychiatric deinstitutionalization have been brought forward so far: political economy, professional dominance and post-structuralism. They are confronted with an approach using the concept of medicalisation which offers a more comprehensive understanding of the process.

  8. A Pyridine Alkoxide Chelate Ligand That Promotes Both Unusually High Oxidation States and Water-Oxidation Catalysis

    DOE PAGES

    Michaelos, Thoe K.; Shopov, Dimitar Y.; Sinha, Shashi Bhushan; ...

    2017-03-08

    Water-oxidation catalysis is a critical bottleneck in the direct generation of solar fuels by artificial photosynthesis. Catalytic oxidation of difficult substrates such as water requires harsh conditions, so the ligand must be designed both to stabilize high oxidation states of the metal center and to strenuously resist ligand degradation. Typical ligand choices either lack sufficient electron-donor power or fail to stand up to the oxidizing conditions. Our research on Ir-based water-oxidation catalysts (WOCs) has led us to identify a ligand, 2-(2'-pyridyl)-2-propanoate or "pyalk", that fulfills these requirements.

  9. Chronic suppurative otitis media due to nontuberculous mycobacteria: A case of successful treatment with topical boric acid.

    PubMed

    Lefebvre, Marie-Astrid; Quach, Caroline; Daniel, Sam J

    2015-07-01

    Nontuberculous mycobacteria (NTM) are an increasingly recognized cause of chronic suppurative otitis media in children with tympanostomy tubes. Treatment of this condition is difficult and typically requires a combination of systemic antibiotics and surgical debridement. We present the first case of a 2-year-old male with chronic suppurative otitis media due to NTM who failed systemic antibiotic therapy and was successfully managed with topical boric acid powder. This report highlights the challenges involved in treating this infection, and introduces boric acid as a potentially valuable component of therapy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Optical detection of tracer species in strongly scattering media.

    PubMed

    Brauser, Eric M; Rose, Peter E; McLennan, John D; Bartl, Michael H

    2015-03-01

    A combination of optical absorption and scattering is used to detect tracer species in a strongly scattering medium. An optical setup was developed, consisting of a dual-beam scattering detection scheme in which the sample scattering beam overlaps with the characteristic absorption feature of the quantum dot tracer species, while the reference scattering beam lies outside any absorption features of the tracer. This scheme was successfully tested in engineered breakthrough tests typical of wastewater and subsurface fluid analysis, as well as in batch analysis of oil and gas reservoir fluids and biological samples. Tracers were detected even under highly scattering conditions, conditions under which conventional absorption or fluorescence methods failed.

  11. H-Theorem and Thermodynamic Efficiency of the Radiation Work Inducing a Chemically Nonequilibrium State of Matter

    NASA Astrophysics Data System (ADS)

    Seleznev, V. D.; Buchina, O.

    2015-06-01

    The Sun's radiation is a source of the origin and maintenance of life on Earth. The Sun-Earth system is a thermodynamic machine transforming radiation into useful work of living organisms. Despite the importance of efficiency for such a thermodynamic machine, the analyses of its efficiency coefficient (EC) available in the literature have considerable shortcomings. As noted by the author of the classical study on this subject (Oxenius in J Quant Spectrosc Radiat Transf 6:65-91, 1966), the second law of thermodynamics is violated for a radiation beam (without integration over directions). Moreover, the typical thermodynamic analysis of the interaction between radiation and matter assumes an equilibrium chemical composition, whereas the radiation work in the biosphere (photosynthesis) usually occurs under conditions where the composition of the active substance deviates significantly from its equilibrium values. Finally, the "black box" model (Aoki in J Phys Soc Jpn 52:1075-1078, 1983) traditionally used to analyze the work efficiency of the Sun-Earth thermodynamic machine fails to explain the influence of many internal characteristics of the radiation-matter interaction on the process's EC. The present paper overcomes the above shortcomings using a relatively simple model of interaction between anisotropic radiation and two-level molecules of a rarefied component in a buffer substance.

  12. A virtual robot to model the use of regenerated legs in a web-building spider.

    PubMed

    Krink; Vollrath

    1999-01-01

    The garden cross orb-spider, Araneus diadematus, shows behavioural responses to leg loss and regeneration that are reflected in the geometry of the web's capture spiral. We created a virtual spider robot that mimicked the web construction behaviour of thus handicapped real spiders. We used this approach to test the correctness and consistency of hypotheses about orb web construction. The behaviour of our virtual robot was implemented in a rule-based system supervising behaviour patterns that communicated with the robot's sensors and motors. Our first model failed, building the typical web of a nonhandicapped spider, and this failure led to new observations on real spiders: we realized that in addition to leg position, leg posture could also be of importance. The implementation of this new hypothesis greatly improved the results of our simulation of a handicapped spider. Now simulated webs, like the real webs of handicapped spiders, had significantly more gaps in successive spiral turns compared with webs of nonhandicapped spiders. Moreover, webs built by the improved virtual spiders intercepted prey as well as the digitized real webs. However, the main factors that affected web interception frequency were prey size, size of capture area and individual variance; having a regenerated leg, surprisingly, was relatively unimportant for this trait. Copyright 1999 The Association for the Study of Animal Behaviour.

  13. Potential sources of variability in mesocosm experiments on the response of phytoplankton to ocean acidification

    NASA Astrophysics Data System (ADS)

    Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai

    2017-04-01

    Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be hindered by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
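The statistical point above, that a sub-optimal number of replicates with high variance can mask a real treatment effect, can be illustrated with a minimal two-sample t-test. The biomass numbers below are hypothetical, not data from either mesocosm experiment; the sketch uses only the standard library and a pooled-variance t statistic.

```python
import math

def two_sample_t(a, b):
    """Two-sample t statistic with per-group sample variances
    (equal group sizes assumed for this sketch)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = math.sqrt(var_a / n + var_b / n)
    return (mean_b - mean_a) / se

# Hypothetical peak-biomass values from three replicates per treatment:
control = [1.0, 1.4, 0.9]   # ambient CO2
high_co2 = [1.3, 1.8, 1.1]  # elevated CO2

t = two_sample_t(control, high_co2)
# The two-sided 5% critical value for df = 4 is about 2.776, so this
# +27% shift in the mean goes statistically undetected: the replicate
# variance swamps the treatment signal at n = 3.
print(round(t, 2))  # → 1.16
```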

  14. Illusory conjunctions in the time domain and the resulting time-course of the attentional blink.

    PubMed

    Botella, Juan; Arend, Isabel; Suero, Manuel

    2004-05-01

    Illusory conjunctions in the time domain are errors made in binding stimulus features presented in the same spatial position under Rapid Serial Visual Presentation (RSVP) conditions. Botella, Barriopedro, and Suero (2001) devised a model to explain how the distribution of responses originating from stimuli around the target in the series is generated. They proposed two routes consisting of two sequential attempts to make a response; the second attempt (sophisticated guessing) is only employed if the first one (focal attention) fails to produce an integrated perception. This general outline enables specific predictions to be made and tested concerning the efficiency of focal attention in generating responses on the first attempt. Participants had to report the single letter in an RSVP stream of letters that was presented in a previously specified color (first target, T1) and then report whether an X (second target, T2) was or was not presented. Performance on T2 showed the typical U-shaped function across the T1-T2 lag that reflects the attentional blink phenomenon. However, as predicted by Botella, Barriopedro, and Suero's model, the time-course of the interference was shorter for trials with a correct response to T1 than for trials with a T1 error. Furthermore, the longer time-courses of interference associated with pre-target and post-target errors on the first target were indistinguishable.

  15. From the laboratory to the therapy room: National dissemination and implementation of evidence-based psychotherapies in the U.S. Department of Veterans Affairs Health Care System.

    PubMed

    Karlin, Bradley E; Cross, Gerald

    2014-01-01

    Despite their established efficacy and recommendation--often as first-line treatments--in clinical practice guidelines, evidence-based psychotherapies (EBPs) have largely failed to make their way into mainstream clinical settings. Numerous attempts over the years to promote the translation of EBPs from science to practice, typically relying on one-dimensional dissemination approaches, have yielded limited success. As part of the transformation of its mental health care system, the Veterans Health Administration (VHA) of the U.S. Department of Veterans Affairs (VA) is working to disseminate and implement a number of EBPs for various mental and behavioral health conditions throughout the VA health care system. This article examines VHA's multidimensional model and specific strategies, involving policy, provider, local systems, patient, and accountability levels, for promoting the national dissemination and implementation of EBPs in VHA. In addition, the article identifies key lessons learned and next steps for further promoting EBP delivery and sustainability in the VA health care system. Beyond promoting the availability of effective treatments for veterans returning from Iraq and Afghanistan and for veterans of previous combat eras, VHA's EBP dissemination and implementation model and key lessons learned may help to inform other private and public health care systems interested in disseminating and implementing EBPs. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Random Dopant Induced Threshold Voltage Lowering and Fluctuations in Sub-0.1 μm MOSFET's: A 3-D 'Atomistic' Simulation Study

    NASA Technical Reports Server (NTRS)

    Asenov, Asen

    1998-01-01

    A three-dimensional (3-D) "atomistic" simulation study of random dopant induced threshold voltage lowering and fluctuations in sub-0.1 μm MOSFET's is presented. For the first time a systematic analysis of random dopant effects down to an individual dopant level was carried out in 3-D on a scale sufficient to provide quantitative statistical predictions. Efficient algorithms based on a single multigrid solution of the Poisson equation followed by the solution of a simplified current continuity equation are used in the simulations. The effects of various MOSFET design parameters, including the channel length and width, oxide thickness and channel doping, on the threshold voltage lowering and fluctuations are studied using typical samples of 200 atomistically different MOSFET's. The atomistic results for the threshold voltage fluctuations were compared with two analytical models based on dopant number fluctuations. Although the analytical models predict the general trends in the threshold voltage fluctuations, they fail to describe quantitatively the magnitude of the fluctuations. The distribution of the atomistically calculated threshold voltage and its correlation with the number of dopants in the channel of the MOSFET's was analyzed based on a sample of 2500 microscopically different devices. The detailed analysis shows that the threshold voltage fluctuations are determined not only by the fluctuation in the dopant number, but also in the dopant position.
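The dopant-number-fluctuation picture the abstract refers to can be sketched with a small Monte Carlo experiment: if the channel dopant count is Poisson-distributed, its relative spread scales as 1/sqrt(N), and threshold-voltage spread follows the count spread. This is an illustrative toy, not the paper's 3-D simulator; the Gaussian approximation to Poisson statistics and all parameter values are assumptions.

```python
import math
import random

def relative_sigma_n(mean_dopants, trials=20000, seed=1):
    """Relative fluctuation in the channel dopant count, with Poisson
    statistics approximated by a Gaussian of equal mean and variance
    (reasonable for the hundreds of dopants in a sub-0.1 um channel)."""
    rng = random.Random(seed)
    counts = [rng.gauss(mean_dopants, math.sqrt(mean_dopants))
              for _ in range(trials)]
    m = sum(counts) / trials
    var = sum((c - m) ** 2 for c in counts) / (trials - 1)
    return math.sqrt(var) / m

# Quadrupling the mean dopant count roughly halves the relative spread
# (the 1/sqrt(N) law). Number-only models like this capture the trend,
# but - as the study shows - miss the extra variance contributed by
# where in the channel each dopant sits.
print(relative_sigma_n(100), relative_sigma_n(400))
```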

  17. General and food-specific parenting: measures and interplay.

    PubMed

    Kremers, Stef; Sleddens, Ester; Gerards, Sanne; Gubbels, Jessica; Rodenburg, Gerda; Gevers, Dorus; van Assema, Patricia

    2013-08-01

    Parental influence on child food intake is typically conceptualized at three levels: parenting practices, feeding style, and parenting style. General parenting style is modeled at the most distal level of influence, and food parenting practices are conceptualized as the most proximal level of influence. The goal of this article is to provide insights into the contents and explanatory value of instruments that have been applied to assess food parenting practices, feeding style, and parenting style. Measures of food parenting practices, feeding style, and parenting style were reviewed, compared, and contrasted with regard to contents, explanatory value, and interrelationships. Measures used in the field often fail to cover the full scope and complexity of food parenting. Healthy parenting dimensions have generally been found to be positively associated with child food intake (i.e., healthier dietary intake and less intake of energy-dense food products and sugar-sweetened beverages), but effect sizes are low. Evidence has been found for higher-order moderation, in which the impact of proximal parental influences is moderated by more distal levels of parenting. Operationalizing parenting at different levels, while applying a contextual higher-order moderation approach, is advocated as having surplus value in understanding the complex process of parent-child interactions in the area of food intake. A research paradigm is presented that may guide future work regarding the conceptualization and modeling of parental influences on child dietary behavior.

  18. A Physical Model for Mass Ejection in Failed Supernovae

    NASA Astrophysics Data System (ADS)

    Coughlin, Eric Robert; Quataert, Eliot; Fernandez, Rodrigo; Kasen, Daniel

    2018-01-01

    During the core collapse of a massive star, the formation of the protoneutron star is accompanied by the emission of a significant amount of mass-energy (a few tenths of a Solar mass) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova -- where the shock associated with the original core collapse cannot unbind the envelope in a successful explosion. We provide both rough estimates of the energy contained in the shock that powers the transient and a general formalism for analyzing the propagation and steepening of the pressure wave, and we apply this formalism to polytropic stellar models. We compare our results to simulations, and we find excellent agreement in both the early evolution of the pressure wave and in the energy contained in the shock. Our estimates provide important constraints on the observational implications of failed supernovae.

  19. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
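The tendency-combining idea can be shown with a deliberately simple toy: two imperfect models of exponential decay, whose tendencies are summed with weights to form a single "supermodel" trajectory. This is a linear sketch under assumed rates and weights, not SPEEDO or any real supermodel (which couples chaotic models and trains the weights); it only illustrates why a combined tendency can outperform either constituent model.

```python
def integrate(rate_weights, x0=1.0, dt=0.01, steps=500):
    """Euler-integrate dx/dt = -(sum_i w_i * r_i) * x, i.e. a trajectory
    whose tendency is a weighted sum of the constituent model tendencies."""
    x = x0
    rate = sum(w * r for w, r in rate_weights)
    for _ in range(steps):
        x += dt * (-rate * x)
    return x

TRUTH = [(1.0, 1.0)]              # "true" decay rate r = 1.0
MODEL_A = [(1.0, 0.7)]            # imperfect model: rate too low
MODEL_B = [(1.0, 1.3)]            # imperfect model: rate too high
SUPER = [(0.5, 0.7), (0.5, 1.3)]  # tendencies combined with equal weights

truth = integrate(TRUTH)
err_a = abs(integrate(MODEL_A) - truth)
err_b = abs(integrate(MODEL_B) - truth)
err_s = abs(integrate(SUPER) - truth)
print(err_s < min(err_a, err_b))  # → True
```

Because the combined trajectory is a single synchronized state rather than an after-the-fact average of outputs, its pointwise behavior (here, the decay curve) stays close to the truth throughout the run.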

  20. Does probability guided hysteroscopy reduce costs in women investigated for postmenopausal bleeding?

    PubMed

    Breijer, M C; van Hanegem, N; Visser, N C M; Verheijen, R H M; Mol, B W J; Pijnenborg, J M A; Opmeer, B C; Timmermans, A

    2015-01-01

    To evaluate whether a model to predict a failed endometrial biopsy in women with postmenopausal bleeding (PMB) and a thickened endometrium can reduce costs without compromising diagnostic accuracy. Model-based cost-minimization analysis. A decision analytic model was designed to compare two diagnostic strategies for women with PMB: (I) attempting office endometrial biopsy and performing outpatient hysteroscopy after a failed biopsy, and (II) using the predicted probability of a failed endometrial biopsy, based on patient characteristics, to guide the decision between endometrial biopsy and immediate hysteroscopy. Robustness of the cost assumptions was evaluated in sensitivity analyses. The main outcome measure was the cost of each strategy. At the different cut-offs for the predicted probability of a failed endometrial biopsy, strategy I was generally less expensive than strategy II. The costs for strategy I were always € 460; the costs for strategy II varied between € 457 and € 475. At a 65% cut-off, a possible saving of € 3 per woman could be achieved. Individualizing the decision to perform an endometrial biopsy or immediate hysteroscopy in women presenting with postmenopausal bleeding based on patient characteristics does not increase the efficiency of the diagnostic work-up.
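The two strategies compared above reduce to a small expected-cost calculation per patient. The sketch below uses hypothetical costs, probabilities, and a hypothetical cutoff, not the study's Dutch cost data; it only shows the shape of the decision-analytic comparison.

```python
def cost_biopsy_first(p_fail, c_biopsy, c_hyst):
    """Strategy I: always attempt an office endometrial biopsy, with
    outpatient hysteroscopy only after a failed biopsy."""
    return c_biopsy + p_fail * c_hyst

def cost_probability_guided(p_fail, cutoff, c_biopsy, c_hyst):
    """Strategy II: go straight to hysteroscopy when the predicted
    biopsy-failure probability exceeds the cutoff."""
    if p_fail > cutoff:
        return c_hyst
    return c_biopsy + p_fail * c_hyst

# Hypothetical inputs (illustrative only):
C_BIOPSY, C_HYST, CUTOFF = 60.0, 400.0, 0.65
patients = [0.1, 0.3, 0.7]  # predicted biopsy-failure probabilities

mean_I = sum(cost_biopsy_first(p, C_BIOPSY, C_HYST) for p in patients) / 3
mean_II = sum(cost_probability_guided(p, CUTOFF, C_BIOPSY, C_HYST)
              for p in patients) / 3
print(round(mean_I, 2), round(mean_II, 2))  # → 206.67 226.67
```

With these made-up numbers, skipping the cheap biopsy for high-risk patients costs more on average, mirroring the study's finding that the biopsy-first strategy was generally the less expensive one.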
