Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Given two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. The search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming, and stochastic global search.
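The info-gap idea in this abstract can be sketched in a few lines. The greedy heuristic and the toy occurrence data below are hypothetical illustrations, not the authors' algorithm: each candidate site is scored by its worst-case expected species coverage when every predicted occurrence probability may err downward by up to an uncertainty horizon alpha.

```python
# Illustrative sketch (not the authors' algorithm): greedy reserve selection
# scored by worst-case species coverage under an info-gap horizon alpha.
# All site/species data below are hypothetical.

def worst_case_coverage(selected, p_occ, alpha):
    """Worst-case expected number of species covered when each occurrence
    probability may err downward by up to alpha (info-gap horizon)."""
    n_species = len(next(iter(p_occ.values())))
    total = 0.0
    for s in range(n_species):
        # Probability species s is missed by every selected site,
        # using the pessimistic (lower) end of each probability.
        p_miss = 1.0
        for site in selected:
            p_low = max(0.0, p_occ[site][s] - alpha)
            p_miss *= (1.0 - p_low)
        total += 1.0 - p_miss
    return total

def greedy_robust_reserve(p_occ, n_sites, alpha):
    """Stepwise heuristic: repeatedly add the site that most improves
    worst-case coverage."""
    selected, remaining = [], set(p_occ)
    for _ in range(n_sites):
        best = max(remaining,
                   key=lambda c: worst_case_coverage(selected + [c], p_occ, alpha))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical predicted occurrence probabilities for 3 species at 4 sites.
p_occ = {"A": [0.9, 0.1, 0.2], "B": [0.2, 0.8, 0.1],
         "C": [0.1, 0.2, 0.9], "D": [0.5, 0.5, 0.5]}
print(greedy_robust_reserve(p_occ, 2, alpha=0.2))
```

Note that under alpha = 0.2 the hedged-bet site "D" beats any single specialist site, which is exactly the robustness-versus-nominal-value trade-off the abstract describes.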
ERIC Educational Resources Information Center
Recker, Margaret M.; Pirolli, Peter
Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…
Watershed and Economic Data InterOperability (WEDO) System
Hydrologic modeling is essential for environmental, economic, and human health decision-making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in p...
Watershed and Economic Data InterOperability (WEDO) System (presentation)
Hydrologic modeling is essential for environmental, economic, and human health decision- making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in ...
Bias in Prediction: A Test of Three Models with Elementary School Children
ERIC Educational Resources Information Center
Frazer, William G.; And Others
1975-01-01
Explores the differences among the traditional single-equation prediction model of test bias, the Cleary model, and the Thorndike model in a situation involving typical educational variables with young female and male children. (Author/DEP)
A Cognitive Diagnosis Model for Cognitively Based Multiple-Choice Options
ERIC Educational Resources Information Center
de la Torre, Jimmy
2009-01-01
Cognitive or skills diagnosis models are discrete latent variable models developed specifically for the purpose of identifying the presence or absence of multiple fine-grained skills. However, applications of these models typically involve dichotomous or dichotomized data, including data from multiple-choice (MC) assessments that are scored as…
The politics of participation in watershed modeling.
Korfmacher, K S
2001-02-01
While researchers and decision-makers increasingly recognize the importance of public participation in environmental decision-making, there is less agreement about how to involve the public. One of the most controversial issues is how to involve citizens in producing scientific information. Although this question is relevant to many areas of environmental policy, it has come to the fore in watershed management. Increasingly, the public is becoming involved in the sophisticated computer modeling efforts that have been developed to inform watershed management decisions. These models typically have been treated as technical inputs to the policy process. However, model-building itself involves numerous assumptions, judgments, and decisions that are relevant to the public. This paper examines the politics of public involvement in watershed modeling efforts and proposes five guidelines for good practice for such efforts. Using these guidelines, I analyze four cases in which different approaches to public involvement in the modeling process have been attempted and make recommendations for future efforts to involve communities in watershed modeling. Copyright 2001 Springer-Verlag
ERIC Educational Resources Information Center
Hopwood, Christopher J.
2007-01-01
Second-generation early intervention research typically involves the specification of multivariate relations between interventions, outcomes, and other variables. Moderation and mediation involve variables or sets of variables that influence relations between interventions and outcomes. Following the framework of Baron and Kenny's (1986) seminal…
Automated watershed subdivision for simulations using multi-objective optimization
USDA-ARS's Scientific Manuscript database
The development of watershed management plans to evaluate placement of conservation practices typically involves application of watershed models. Incorporating spatially variable watershed characteristics into a model often requires subdividing the watershed into small areas to accurately account f...
Estimating the effect of changes in water quality on non-market values for recreation involves estimating a change in aggregate consumer surplus. This aggregate value typically involves estimating both a per-person, per-trip change in willingness to pay, as well as defining the m...
The assessment of toxic exposure on wildlife populations involves the integration of organism level effects measured in toxicity tests (e.g., chronic life cycle) and population models. These modeling exercises typically ignore density dependence, primarily because information on ...
A Multitasking General Executive for Compound Continuous Tasks
ERIC Educational Resources Information Center
Salvucci, Dario D.
2005-01-01
As cognitive architectures move to account for increasingly complex real-world tasks, one of the most pressing challenges involves understanding and modeling human multitasking. Although a number of existing models now perform multitasking in real-world scenarios, these models typically employ customized executives that schedule tasks for the…
Impact of parental weight status on weight loss efforts in Hispanic children
USDA-ARS's Scientific Manuscript database
Parents have been shown to play an important role in weight loss for children. Parents are typically involved either as models for change or as supporters of children's weight loss efforts. It is likely that overweight/obese parents will need to be involved in changing the environment for themselv...
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
ERIC Educational Resources Information Center
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…
Using video modeling to teach reciprocal pretend play to children with autism.
MacDonald, Rebecca; Sacramone, Shelly; Mansfield, Renee; Wiltz, Kristine; Ahearn, William H
2009-01-01
The purpose of the present study was to use video modeling to teach children with autism to engage in reciprocal pretend play with typically developing peers. Scripted play scenarios involving various verbalizations and play actions with adults as models were videotaped. Two children with autism were each paired with a typically developing child, and a multiple-probe design across three play sets was used to evaluate the effects of the video modeling procedure. Results indicated that both children with autism and the typically developing peers acquired the sequences of scripted verbalizations and play actions quickly and maintained this performance during follow-up probes. In addition, probes indicated an increase in the mean number of unscripted verbalizations as well as reciprocal verbal interactions and cooperative play. These findings are discussed as they relate to the development of reciprocal pretend-play repertoires in young children with autism.
Global Water Cycle Agreement in the Climate Models Assessed in the IPCC AR4
NASA Technical Reports Server (NTRS)
Waliser, D.; Seo, K. -W.; Schubert, S.; Njoku, E.
2007-01-01
This study examines the fidelity of the global water cycle in the climate model simulations assessed in the IPCC Fourth Assessment Report. The results demonstrate good model agreement in quantities that have had a robust global observational basis and that are physically unambiguous. The worst agreement occurs for quantities that have both poor observational constraints and whose model representations can be physically ambiguous. In addition, components involving water vapor (frozen water) typically exhibit the best (worst) agreement, and fluxes typically exhibit better agreement than reservoirs. These results are discussed in relation to the importance of obtaining accurate model representation of the water cycle and its role in climate change. Recommendations are also given for facilitating the needed model improvements.
Development of a second order closure model for computation of turbulent diffusion flames
NASA Technical Reports Server (NTRS)
Varma, A. K.; Donaldson, C. D.
1974-01-01
A typical eddy box model for the second-order closure of turbulent, multispecies, reacting flows was developed. The model structure was quite general and was valid for an arbitrary number of species. For the case of a reaction involving three species, the nine model parameters were determined from equations for nine independent first- and second-order correlations. The model enabled calculation of any higher-order correlation involving mass fractions, temperatures, and reaction rates in terms of first- and second-order correlations. Model predictions for the reaction rate were in very good agreement with exact solutions of the reaction rate equations for a number of assumed flow distributions.
SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas
2016-01-10
We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.
Continuum-Kinetic Models and Numerical Methods for Multiphase Applications
NASA Astrophysics Data System (ADS)
Nault, Isaac Michael
This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible microscopic or mesoscopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.
Predictive modeling of developmental toxicity using EPA’s Virtual Embryo
Standard practice in prenatal developmental toxicology involves testing chemicals in pregnant laboratory animals of two species, typically rats and rabbits, exposed during organogenesis and evaluating for fetal growth retardation, structural malformations, and prenatal death just...
Reason, emotion and decision-making: risk and reward computation with feeling.
Quartz, Steven R
2009-05-01
Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.
2014-06-11
…down to a base pressure typically of a few 10⁻¹¹ torr using oil-free magnetically suspended turbomolecular pumps backed with dry scroll pumps. A cold finger assembled from… on line and in situ utilizing a Faraday cup mounted inside a differentially pumped chamber on an ultrahigh-vacuum-compatible translation stage.
ERIC Educational Resources Information Center
Lee, Scott Weng Fai
2013-01-01
The assessment of young children's thinking competence in task performances has typically followed the novice-to-expert regimen involving models of strategies that adults use when engaged in cognitive tasks such as problem-solving and decision-making. Socio-constructivists argue for a balanced pedagogical approach between the adult and child that…
ERIC Educational Resources Information Center
Huang, Xiaoxia; Cribbs, Jennifer
2017-01-01
This study examined mathematics and science teachers' perceptions and use of four types of examples, including typical textbook examples (standard worked examples) and erroneous worked examples in the written form as well as mastery modelling examples and peer modelling examples involving the verbalization of the problem-solving process. Data…
Brett G. Dickson; Thomas D. Sisk; Steven E. Sesnie; Richard T. Reynolds; Steven S. Rosenstock; Christina D. Vojta; Michael F. Ingraldi; Jill M. Rundall
2014-01-01
Conservation planners and land managers are often confronted with scale-associated challenges when assessing the relationship between land management objectives and species conservation. Conservation of individual species typically involves site-level analyses of habitat, whereas land management focuses on larger spatial extents. New models are needed to more...
ERIC Educational Resources Information Center
Servilio, Kathryn L.; Hollingshead, Aleksandra; Hott, Brittany L.
2017-01-01
In higher education, current teaching evaluation models typically involve senior faculty evaluating junior faculty. However, there is evidence that peer-to-peer junior faculty observations and feedback may be just as effective. This descriptive case study utilized an inductive analysis to examine experiences of six special education early career…
2015-09-30
Soundscapes
Michael B. Porter and Laurel J. Henderson
…hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on commercial… Modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically…
Ground robotic measurement of aeolian processes
USDA-ARS's Scientific Manuscript database
Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These d...
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
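The optimal-estimator machinery this abstract describes reduces, in one dimension, to estimating the conditional mean E[q | phi] and the residual variance around it. Below is a minimal synthetic sketch (all data invented) using the histogram/binning technique the paper critiques; here the true irreducible error is known by construction (the 0.1^2 noise variance), so the binned estimate can be checked against it.

```python
# Sketch of a 1-D optimal-estimator analysis on synthetic data: the optimal
# estimator of q given input phi is the conditional mean E[q|phi], and the
# irreducible error is the remaining variance E[(q - E[q|phi])^2], here
# approximated with a histogram (binning).

import random

random.seed(0)
n, n_bins = 100_000, 50
phi = [random.random() for _ in range(n)]
# True relation plus noise: the noise variance (0.1^2 = 0.01) is the exact
# irreducible error for this synthetic example.
q = [p * p + random.gauss(0.0, 0.1) for p in phi]

# Conditional mean per bin.
sums = [0.0] * n_bins
counts = [0] * n_bins
for p, v in zip(phi, q):
    b = min(int(p * n_bins), n_bins - 1)
    sums[b] += v
    counts[b] += 1
cond_mean = [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Irreducible-error estimate: mean squared deviation from E[q|phi].
irr = sum((v - cond_mean[min(int(p * n_bins), n_bins - 1)]) ** 2
          for p, v in zip(phi, q)) / n
print(f"estimated irreducible error: {irr:.4f}")
```

With one input parameter and ample samples per bin the histogram estimate lands close to 0.01; the paper's point is that this spurious binning contribution grows badly as input dimensions are added.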
Free and Open Source GIS Tools: Role and Relevance in the Environmental Assessment Community
The presence of an explicit geographical context in most environmental decisions can complicate assessment and selection of management options. These decisions typically involve numerous data sources, complex environmental and ecological processes and their associated models, ris...
Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka
2017-01-01
Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...
Optimizing simulated fertilizer additions using a genetic algorithm with a nutrient uptake model
Wendell P. Cropper; N.B. Comerford
2005-01-01
Intensive management of pine plantations in the southeastern coastal plain typically involves weed and pest control, and the addition of fertilizer to meet the high nutrient demand of rapidly growing pines. In this study we coupled a mechanistic nutrient uptake model (SSAND, soil supply and nutrient demand) with a genetic algorithm (GA) in order to estimate the minimum...
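The coupling described above can be illustrated with a toy stand-in: a saturating uptake curve (hypothetical parameters, not the SSAND model) embedded in a small genetic algorithm that searches for the minimum fertilizer dose meeting a nutrient demand.

```python
# Toy sketch of coupling a nutrient-uptake model to a genetic algorithm:
# find the smallest fertilizer dose whose modeled uptake meets demand.
# The saturating uptake curve and all parameters are hypothetical
# illustrations, not the SSAND model.

import random

DEMAND = 8.0           # required nutrient uptake (arbitrary units)
UMAX, KM = 10.0, 20.0  # hypothetical uptake-curve parameters

def uptake(dose):
    """Saturating (Michaelis-Menten-like) uptake response to dose."""
    return UMAX * dose / (KM + dose)

def fitness(dose):
    """Penalize unmet demand heavily, then minimize dose."""
    shortfall = max(0.0, DEMAND - uptake(dose))
    return -(1000.0 * shortfall + dose)

def ga_minimum_dose(pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 200.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 1.0)  # crossover + mutation
            children.append(max(0.0, child))
        pop = parents + children
    return max(pop, key=fitness)

dose = ga_minimum_dose()
print(f"dose {dose:.1f} -> uptake {uptake(dose):.2f} (demand {DEMAND})")
```

For this toy curve the exact answer is a dose of 80 (where uptake equals demand), so the GA's result can be sanity-checked analytically, something rarely possible with a full mechanistic uptake model.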
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2015-02-01
Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such methodology is computationally expensive when real life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
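The transfer matrix method invoked above can be sketched for its simplest case: a homogeneous fluid-equivalent layer of infinite lateral extent at normal incidence. A real poroelastic (Biot) layer needs a larger matrix, but the chaining principle is identical. All material values below are hypothetical.

```python
# Illustrative transfer-matrix sketch for fluid-equivalent layers at normal
# incidence (hypothetical materials). Poroelastic treatments use larger
# matrices, but layers still stack by matrix multiplication.

import cmath

def fluid_layer_matrix(rho, c, d, omega):
    """2x2 transfer matrix relating (pressure, normal velocity) across a
    fluid layer of density rho, sound speed c, thickness d."""
    k = omega / c    # wavenumber in the layer
    Z = rho * c      # characteristic impedance
    kd = k * d
    return [[cmath.cos(kd), 1j * Z * cmath.sin(kd)],
            [1j * cmath.sin(kd) / Z, cmath.cos(kd)]]

def chain(m1, m2):
    """Multiply 2x2 matrices: stacked treatments combine by matrix product."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

omega = 2 * cmath.pi * 1000.0   # 1 kHz
foam = fluid_layer_matrix(rho=30.0, c=120.0, d=0.05, omega=omega)
air = fluid_layer_matrix(rho=1.2, c=343.0, d=0.02, omega=omega)
T = chain(foam, air)
# Surface impedance seen at the front face for a rigid backing: Zs = T11/T21.
Zs = T[0][0] / T[1][0]
print(f"normal-incidence surface impedance: {Zs:.1f}")
```

Because the layers here are lossless, the rigid-backed surface impedance comes out purely reactive; adding damping to the layer model (complex c) is what a dissipative poroelastic treatment contributes.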
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
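The Pareto idea underpinning the PMOGO approach above can be shown in miniature (hypothetical numbers): instead of minimizing one weighted sum of data misfit and regularization, keep every model that no other model beats on both objectives at once.

```python
# Minimal sketch of Pareto dominance for a two-objective inversion
# (hypothetical candidate models): both objectives are minimized.

def pareto_front(models):
    """models: list of (name, data_misfit, regularization) tuples.
    Returns the non-dominated subset."""
    front = []
    for name, f1, f2 in models:
        dominated = any(g1 <= f1 and g2 <= f2 and (g1 < f1 or g2 < f2)
                        for _, g1, g2 in models)
        if not dominated:
            front.append((name, f1, f2))
    return front

# Hypothetical candidate models from a global search.
candidates = [("m1", 1.0, 9.0), ("m2", 2.0, 4.0),
              ("m3", 3.0, 5.0), ("m4", 5.0, 1.0)]
print(pareto_front(candidates))
# m1, m2, and m4 survive; m3 is dominated by m2 (worse on both objectives).
```

The returned suite of trade-off models is exactly what lets the interpreter assess possibilities without pre-committing to objective weights.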
ERIC Educational Resources Information Center
Priano, Christine
2013-01-01
This model-building activity provides a quick, visual, hands-on tool that allows students to examine more carefully the cloverleaf structure of a typical tRNA molecule. When used as a supplement to lessons that involve gene expression, this exercise reinforces several concepts in molecular genetics, including nucleotide base-pairing rules, the…
Computing diffuse fraction of global horizontal solar radiation: A model comparison.
Dervishi, Sokol; Mahdavi, Ardeshir
2012-06-01
For simulation-based prediction of buildings' energy use or expected gains from building-integrated solar energy systems, information on both the direct and diffuse components of solar radiation is necessary. Available measured data are, however, typically restricted to global horizontal irradiance. There have thus been many efforts in the past to develop algorithms for the derivation of the diffuse fraction of solar irradiance. In this context, the present paper compares eight models for estimating the diffuse fraction of irradiance based on a database of measured irradiance from Vienna, Austria. These models generally involve mathematical formulations with multiple coefficients whose values are typically valid for a specific location. Subsequent to a first comparison of these eight models, three better performing models were selected for a more detailed analysis. Thereby, the coefficients of the models were modified to account for Vienna data. The results suggest that some models can provide relatively reliable estimations of the diffuse fractions of the global irradiance. The calibration procedure could only slightly improve the models' performance.
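One widely used model of the kind compared in that paper is the Erbs et al. (1982) piecewise correlation, which maps the clearness index kt (global horizontal over extraterrestrial irradiance) to the diffuse fraction kd. The coefficients below are the commonly quoted published defaults and should be verified against the original reference; the paper's point is precisely that such coefficients are site-specific and may need recalibration (e.g., to Vienna data).

```python
# Erbs-type piecewise correlation: diffuse fraction kd as a function of the
# clearness index kt. Coefficients are the commonly quoted published
# defaults (verify against the original reference before relying on them).

def diffuse_fraction_erbs(kt):
    """Diffuse fraction of global horizontal irradiance vs clearness index."""
    if kt <= 0.22:      # overcast: almost all irradiance is diffuse
        return 1.0 - 0.09 * kt
    if kt <= 0.80:      # intermediate skies: quartic polynomial
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165        # very clear skies: small constant diffuse share

for kt in (0.1, 0.5, 0.9):
    print(f"kt = {kt:.1f} -> diffuse fraction = {diffuse_fraction_erbs(kt):.3f}")
```

Recalibrating such a model, as the paper does, amounts to refitting these polynomial coefficients against local measured direct/diffuse pairs.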
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
The construction of scientific software models is an integral part of doing science, both within NASA and within the scientific community at large. Typically, model-building is a time-intensive and painstaking process, involving the design of very large, complex computer programs. Despite the considerable expenditure of resources involved, completed scientific models cannot easily be distributed and shared with the larger scientific community due to the low-level, idiosyncratic nature of the implemented code. To address this problem, we have initiated a research project aimed at constructing a software tool called the Scientific Modeling Assistant. This tool provides automated assistance to the scientist in developing, using, and sharing software models. We describe the Scientific Modeling Assistant, and also touch on some human-machine interaction issues relevant to building a successful tool of this type.
Sela, Itamar; Izzetoglu, Meltem; Izzetoglu, Kurtulus; Onaral, Banu
2014-01-01
The dual route model (DRM) of reading suggests two routes of reading development: the phonological and the orthographic routes. It was proposed that although the two routes are active in the process of reading, the first is more involved at the initial stages of reading acquisition, whereas the latter needs more reading training to mature. A number of studies have shown that deficient phonological processing is a core deficit in developmental dyslexia. According to the DRM, when the Lexical Decision Task (LDT) is performed, the orthographic route should also be involved when decoding words, whereas it is clear that when decoding pseudowords the phonological route should be activated. Previous functional near-infrared spectroscopy (fNIR) studies have suggested that the upper left frontal lobe is involved in decision making in the LDT. The current study used fNIR to compare left frontal lobe activity during LDT performance among three reading-level groups: 12-year-old children, young adult dyslexic readers, and young adult typical readers. Compared to typical readers, the children demonstrated lower activity under the word condition only, whereas the dyslexic readers showed lower activity under the pseudoword condition only. The results provide evidence for upper left frontal lobe involvement in LDT and support the DRM and the phonological deficit theory of dyslexia.
ERIC Educational Resources Information Center
Edwards, Oliver W.; Ray, Shannon L.
2010-01-01
Those involved in circumstances in which children are raised by their grandparents often encounter serious problems that require assistance from counselors. Research suggests that grandparents and parents in these families typically experience heightened stress and psychosocial distress. Additionally, the children often encounter behavioral,…
A "Rainmaker" Process for Developing Internet-Based Retail Businesses
ERIC Educational Resources Information Center
Abrahams, Alan S.; Singh, Tirna
2011-01-01
Various systems development life cycles and business development models have been popularized by information systems researchers and practitioners over a number of decades. In the case of systems development life cycles, these have been targeted at software development projects within an organization, typically involving analysis, design,…
ERIC Educational Resources Information Center
Dew, Angela; Veitch, Craig; Lincoln, Michelle; Brentnall, Jennie; Bulkeley, Kim; Gallego, Gisselle; Bundy, Anita; Griffiths, Scott
2012-01-01
Therapy service delivery models to non-Indigenous and Indigenous people living in outer regional, remote, and very remote areas of Australia have typically involved irregular outreach from larger regional towns and capital cities. New South Wales (NSW) is the most populous Australian state with 7.23 million people of whom 4.58 million live in the…
Magnetic Reconnection in Different Environments: Similarities and Differences
NASA Technical Reports Server (NTRS)
Hesse, Michael; Aunai, Nicolas; Kuznetsova, Masha; Zenitani, Seiji; Birn, Joachim
2014-01-01
Depending on the specific situation, magnetic reconnection may involve symmetric or asymmetric inflow regions. Asymmetric reconnection applies, for example, to reconnection at the Earth's magnetopause, whereas reconnection in the nightside magnetotail tends to involve more symmetric geometries. A combination of review and new results pertaining to magnetic reconnection is presented. The focus is on three aspects: a basic, MHD-based analysis of the role magnetic reconnection plays in the transport of energy; an analysis of a kinetic model of time-dependent reconnection in a symmetric current sheet, similar to what is typically encountered in the magnetotail of the Earth; and a review of recent results pertaining to the orientation of the reconnection line in asymmetric geometries, which are typical for the magnetopause of the Earth and likely to occur at other planets as well.
NASA Astrophysics Data System (ADS)
McConnell, William J.
Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. 
Students perceived 3D printed models as better supporting scientific argumentation than paper drawing models. In fact, when given a choice, students rarely used paper drawings to assist in argument. There was also a difference in utility between the two model types. Participants explicitly used 3D printed models to complete gestural modeling, while they rarely looked at 2D models when involved in gestural modeling. This study's findings added to current theory dealing with the varied spatial challenges involved in different modes of expressed models. This study found that depth, symmetry and the manipulation of perspectives are spatial challenges students typically attend to when using CAD but typically ignore when drawing with paper and pencil. This study also revealed a major difference between model-based argument in a design-based instruction context and model-based argument in a typical science classroom context. In the context of design-based instruction, data revealed that the design process is an important part of model-based argument. Due to the importance of the design process in model-based argumentation in this context, trusted methods of argument analysis, like the coding system of the IASCA, were found lacking in many respects. Limitations and recommendations for further research were also presented.
Visual Modelling of Data Warehousing Flows with UML Profiles
NASA Astrophysics Data System (ADS)
Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan
Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.
Leadership in a Performative Context: A Framework for Decision-Making
ERIC Educational Resources Information Center
Chitpin, Stephanie; Jones, Ken
2015-01-01
This paper examines a model of decision-making within the context of current and emerging regimes of accountability being proposed and implemented for school systems in a number of jurisdictions. These approaches to accountability typically involve the use of various measurable student learning outcomes as well as other measures of performance to…
Undergraduate Student Perspectives on Electronic Portfolio Assessment in College Composition Courses
ERIC Educational Resources Information Center
Fullerton, Bridget Katherine Jean
2017-01-01
Though Linda Adler-Kassner and Peggy O'Neill claim that ethical writing assessment models "must be designed and built collaboratively, with careful attention to the values and passions of all involved, through a process that provides access to all," college students have not typically been included in scholarly conversations about…
Development and Implementation of a Collective Gaining Model in Teacher Negotiations.
ERIC Educational Resources Information Center
Brynildson, Gerald
The traditional approach to collective bargaining as a win/loss situation in the educational field adversely affects staff members' confidence, security, and morale. Typically, those involved in this form of negotiation see only two ways to negotiate: soft and hard. Neither approach proves satisfactory because the soft negotiator often ends up…
In-cell overlay metrology by using optical metrology tool
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Park, Hyowon; Liang, Waley; Choi, DongSub; Kim, Nakyoon; Lee, Jeongpyo; Pandev, Stilian; Jeon, Sanghuck; Robinson, John C.
2018-03-01
Overlay is one of the most critical process control steps of semiconductor manufacturing technology. A typical advanced scheme includes an overlay feedback loop based on after litho optical imaging overlay metrology on scribeline targets. The after litho control loop typically involves high frequency sampling: every lot or nearly every lot. An after etch overlay metrology step is often included, at a lower sampling frequency, in order to characterize and compensate for bias. The after etch metrology step often involves CD-SEM metrology, in this case in-cell and on-device. This work explores an alternative approach using spectroscopic ellipsometry (SE) metrology and a machine learning analysis technique. Advanced 1x nm DRAM wafers were prepared, including both nominal (POR) wafers with mean overlay offsets, as well as DOE wafers with intentional across-wafer overlay modulation. After litho metrology was measured using optical imaging metrology, and after etch metrology was performed using both SE and CD-SEM for comparison. We investigate two types of machine learning techniques with SE data: model-less and model-based, showing excellent performance for after etch in-cell on-device overlay metrology.
A Bayesian Model of the Memory Colour Effect.
Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R
2018-01-01
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
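The abstract does not give the model's equations, but a standard Bayesian cue-integration account with Gaussian prior and likelihood reproduces the qualitative effect it describes. The following sketch assumes (these assumptions are mine, not the paper's) a one-dimensional colour axis, a Gaussian memory-colour prior, a Gaussian sensory likelihood, and the posterior mean as the percept:

```python
# Minimal sketch of Bayesian cue integration for a memory colour effect.
# Assumptions (not from the abstract): the prior over an object's typical
# colour and the sensory likelihood are both 1-D Gaussians along a single
# colour axis (e.g., blue-yellow), and the percept is the posterior mean.

def posterior_mean(sensory, sigma_sensory, prior_mean, sigma_prior):
    """Precision-weighted combination of sensory evidence and prior."""
    w_s = 1.0 / sigma_sensory**2   # precision of the sensory measurement
    w_p = 1.0 / sigma_prior**2     # precision of the memory-colour prior
    return (w_s * sensory + w_p * prior_mean) / (w_s + w_p)

# A colourimetrically grey banana image (sensory = 0) combined with a
# yellowish prior (+10 on the hypothetical colour axis):
percept = posterior_mean(sensory=0.0, sigma_sensory=2.0,
                         prior_mean=10.0, sigma_prior=5.0)
# The percept is pulled slightly toward the typical colour, so an achromatic
# adjustment must overshoot grey in the opposite direction to null it.
print(round(percept, 3))
```

With a broad prior the pull toward the typical colour is small, matching the "slight" appearance shift measured with achromatic adjustment; no free parameters need fitting once the two variances are fixed by independent measurements.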
A tensor approach to modeling of nonhomogeneous nonlinear systems
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Sain, M.
1980-01-01
Model-following control methodology plays a key role in numerous application areas. Cases in point include flight control systems and gas turbine engine control systems. Typical uses of such a design strategy involve the determination of nonlinear models which generate requested control and response trajectories for various commands. Linear multivariable techniques provide trim about these motions; and protection logic is added to secure the hardware from excursions beyond the specification range. This paper reports upon experience in developing a general class of such nonlinear models based upon the idea of the algebraic tensor product.
Cognitive and neural foundations of discrete sequence skill: a TMS study.
Ruitenberg, Marit F L; Verwey, Willem B; Schutter, Dennis J L G; Abrahamse, Elger L
2014-04-01
Executing discrete movement sequences typically involves a shift with practice from a relatively slow, stimulus-based mode to a fast mode in which performance is based on retrieving and executing entire motor chunks. The dual processor model explains the performance of (skilled) discrete key-press sequences in terms of an interplay between a cognitive processor and a motor system. In the present study, we tested and confirmed the core assumptions of this model at the behavioral level. In addition, we explored the involvement of the pre-supplementary motor area (pre-SMA) in discrete sequence skill by applying inhibitory 20 min 1-Hz off-line repetitive transcranial magnetic stimulation (rTMS). Based on previous work, we predicted pre-SMA involvement in the selection/initiation of motor chunks, and this was confirmed by our results. The pre-SMA was further observed to be more involved in more complex than in simpler sequences, while no evidence was found for pre-SMA involvement in direct stimulus-response translations or associative learning processes. In conclusion, support is provided for the dual processor model, and for pre-SMA involvement in the initiation of motor chunks.
Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.
Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato
2018-01-01
CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.
NASA Astrophysics Data System (ADS)
Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao
2016-12-01
This study aims to expand the applications of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, CIELAB and CIECAM02, were used to develop algorithms that predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, extensive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception.
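The abstract's approach of averaging per-pixel lightness and chroma can be sketched with the simpler of the two models it names. The code below uses the standard CIELAB path (sRGB gamma expansion, linear RGB to XYZ under D65, XYZ to L*a*b*) and averages L* as a brightness proxy and C*ab = sqrt(a*² + b*²) as a colorfulness proxy; the paper's preferred CIECAM02 computation and its three pixel-handling methods are not reproduced here.

```python
import math

# Sketch of the CIELAB half of the approach described above: convert sRGB
# pixels to L*a*b* (D65 white), then average lightness L* as an image-
# brightness proxy and chroma C*ab = sqrt(a*^2 + b*^2) as a colorfulness
# proxy. The CIECAM02 path favoured by the study is omitted for brevity.

def srgb_to_lab(r, g, b):
    """r, g, b in [0, 1]; returns (L*, a*, b*)."""
    def lin(u):  # undo the sRGB gamma encoding
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB primaries, D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # CIELAB nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def image_attributes(pixels):
    """Mean L* (brightness proxy) and mean chroma (colorfulness proxy)."""
    labs = [srgb_to_lab(*p) for p in pixels]
    mean_L = sum(L for L, _, _ in labs) / len(labs)
    mean_C = sum(math.hypot(a, b) for _, a, b in labs) / len(labs)
    return mean_L, mean_C

grey_img = [(0.5, 0.5, 0.5)] * 4   # a neutral image: near-zero mean chroma
red_img = [(1.0, 0.0, 0.0)] * 4    # a saturated image: large mean chroma
```

A neutral grey image scores near zero on the colorfulness proxy while a saturated red image scores high, which is the behaviour the study's per-pixel averaging methods build on.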
The multiple time scales of sleep dynamics as a challenge for modelling the sleeping brain.
Olbrich, Eckehard; Claussen, Jens Christian; Achermann, Peter
2011-10-13
A particular property of the sleeping brain is that it exhibits dynamics on very different time scales ranging from the typical sleep oscillations such as sleep spindles and slow waves that can be observed in electroencephalogram (EEG) segments of several seconds duration over the transitions between the different sleep stages on a time scale of minutes to the dynamical processes involved in sleep regulation with typical time constants in the range of hours. There is an increasing body of work on mathematical and computational models addressing these different dynamics, however, usually considering only processes on a single time scale. In this paper, we review and present a new analysis of the dynamics of human sleep EEG at the different time scales and relate the findings to recent modelling efforts pointing out both the achievements and remaining challenges.
Kaiyala, Karl J
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit 'local linearity.' Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying 'latent' allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses.
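The abstract's central argument is directly computable: Taylor-expanding an allometric curve EE = a·M^b to first order around a reference mass M0 yields a linear model whose y-intercept a·M0^b·(1−b) is positive whenever b < 1, and the linear parameters can be mapped back to the latent allometric ones. A sketch with illustrative (hypothetical) parameter values:

```python
# Sketch of the linearization argument in the abstract: an allometric
# energy-expenditure curve EE = a * M**b, expanded to first order around a
# reference body mass M0, yields a linear model with a positive y-intercept
# whenever b < 1 -- and the linear parameters can be transformed back into
# the 'latent' allometric ones. Parameter values below are illustrative.

def linearize(a, b, M0):
    """First-order Taylor expansion of a*M**b at M0 -> (slope, intercept)."""
    slope = a * b * M0 ** (b - 1)
    intercept = a * M0 ** b * (1 - b)
    return slope, intercept

def recover_allometric(slope, intercept, M0):
    """Invert the linearization: (slope, intercept, M0) -> (a, b)."""
    b = slope * M0 / (slope * M0 + intercept)   # since slope*M0 + c = a*M0**b
    a = (slope * M0 + intercept) / M0 ** b
    return a, b

a, b, M0 = 70.0, 0.75, 30.0              # hypothetical allometric parameters
slope, intercept = linearize(a, b, M0)
assert intercept > 0                      # the non-zero y-intercept arises naturally
a2, b2 = recover_allometric(slope, intercept, M0)
# the round trip recovers the latent allometric parameters exactly
```

This is why the model predicts that organ-level linear REE analysis will not eliminate intercept terms: every allometric component with b < 1 contributes its own positive intercept under local linearization.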
2008-10-01
provide adequate means for thermal heat dissipation and cooling. Thus electronic packaging has four main functions [1]: • Signal distribution which… • …dissipation, involving structural and materials considerations. • Mechanical, chemical and electromagnetic protection of components and… …nature when compared to phenomenological models. The microelectronic packaging industry typically spends several months building and reliability…
ERIC Educational Resources Information Center
Champagne, Delight E.
Undergraduates on college campuses are one of the best resources for learning about college student development. Nonetheless, graduate programs which prepare student personnel professionals have typically neglected to involve undergraduates in courses which attempt to teach student development theory and research. Without input and feedback from…
Polymethylsilsesquioxanes through base-catalyzed redistribution of oligomethylhydridosiloxanes
DOE Office of Scientific and Technical Information (OSTI.GOV)
RAHIMIAN,KAMYAR; ASSINK,ROGER A.; LOY,DOUGLAS A.
2000-04-04
There has been an increasing amount of interest in silsesquioxanes and polysilsesquioxanes. They have been used as models for silica surfaces and have been shown to have great potential for several industrial applications. Typical synthesis of polysilsesquioxanes involves the hydrolysis of organotrichlorosilanes and/or organotrialkoxysilanes in the presence of acid or base catalysts, usually in organic solvents.
Surface tension phenomena in the xylem sap of three diffuse porous temperate tree species
K. K. Christensen-Dalsgaard; M. T. Tyree; P. G. Mussone
2011-01-01
In plant physiology models involving bubble nucleation, expansion or elimination, it is typically assumed that the surface tension of xylem sap is equal to that of pure water, though this has never been tested. In this study we collected xylem sap from branches of the tree species Populus tremuloides, Betula papyrifera and Sorbus...
Cladé, Thierry; Snyder, Joshua C.
2010-01-01
Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775
Olfaction in the autism spectrum.
Galle, Sara A; Courchesne, Valérie; Mottron, Laurent; Frasnelli, Johannes
2013-01-01
The autism spectrum (AS) is characterised by enhanced perception in vision and audition, described by the enhanced perceptual functioning (EPF) model. This model predicts enhanced low-level (discrimination of psychophysical dimensions), and mid- and high-level (pattern detection and identification) perception. The EPF model is here tested for olfaction by investigating olfactory function in autistic and Asperger participants. Experiment 1 targeted higher-order olfactory processing by assessing olfactory identification in nine Asperger, ten autistic, and eleven typically developed individuals. Experiment 2 focused on low-level olfactory processing; we assessed odour detection thresholds and odour discrimination in five Asperger, five autistic, and five typically developed males. Olfactory identification was impaired in autistic participants relative to control and Asperger participants. Typical performance in low-level olfactory processing suggests that neural mechanisms involved in the perceptual phenotype of AS do not affect structures implicated in olfactory processing. Reduced olfactory identification is limited to autistic participants who displayed speech delay and may be due to a reduced facility to use verbal labels. The apparent absence of enhanced olfactory perception of AS participants distinguishes the olfactory system from the other sensory modalities and might be caused by the absence of an obligatory thalamic relay.
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
ERIC Educational Resources Information Center
Koutsouris, George; Norwich, Brahm; Fujita, Taro; Ralph, Thomas; Adlam, Anna; Milton, Fraser
2017-01-01
This article presents an evaluation of distance technology used in a novel Lesson Study (LS) approach involving a dispersed LS team for inter-professional purposes. A typical LS model with only school teachers as team members was modified by including university-based lecturers with the school-based teachers, using video-conferencing and online…
1980-12-31
surfaces. Reactions involving the Pt(0)-triphenylphosphine complexes Pt(PPh3)n, where n = 2, 3, 4, have been shown to have precise analogues on Pt… [12], the triphenylphosphine (PPh3) group is modeled by the simpler but chemically similar phosphine (PH3) group. The appropriate Pt-P bond distances… typically refractory oxides) are of sufficient magnitude as to suggest significant chemical and electronic modifications of the metal at the metal-support…
Scrutinizing UML Activity Diagrams
NASA Astrophysics Data System (ADS)
Al-Fedaghi, Sabah
Building an information system involves two processes: conceptual modeling of the “real world domain” and designing the software system. Object-oriented methods and languages (e.g., UML) are typically used for describing the software system. For the system analysis process that produces the conceptual description, object-oriented techniques or semantics extensions are utilized. Specifically, UML activity diagrams are the “flow charts” of object-oriented conceptualization tools. This chapter proposes an alternative to UML activity diagrams through the development of a conceptual modeling methodology based on the notion of flow.
Representing Extremes in Agricultural Models
NASA Technical Reports Server (NTRS)
Ruane, Alex
2015-01-01
AgMIP and related projects are conducting several activities to understand and improve crop model response to extreme events. This involves crop model studies as well as the generation of climate datasets and scenarios more capable of capturing extremes. Models are typically less responsive to extreme events than we observe, and miss several forms of extreme events. Models can also capture interactive effects between climate change and climate extremes. Additional work is needed to understand the response of markets and economic systems to food shocks. AgMIP is planning a Coordinated Global and Regional Assessment of Climate Change Impacts on Agricultural Production and Food Security with an aim to inform the IPCC Sixth Assessment Report.
Effective behavioral modeling and prediction even when few exemplars are available
NASA Astrophysics Data System (ADS)
Goan, Terrance; Kartha, Neelakantan; Kaneshiro, Ryan
2006-05-01
While great progress has been made in the lowest levels of data fusion, practical advances in behavior modeling and prediction remain elusive. The most critical limitation of existing approaches is their inability to support the required knowledge modeling and continuing refinement under realistic constraints (e.g., few historic exemplars, the lack of knowledge engineering support, and the need for rapid system deployment). This paper reports on our ongoing efforts to develop Propheteer, a system which will address these shortcomings through two primary techniques. First, with Propheteer we abandon the typical consensus-driven modeling approaches that involve infrequent group decision making sessions in favor of an approach that solicits asynchronous knowledge contributions (in the form of alternative future scenarios and indicators) without burdening the user with endless certainty or probability estimates. Second, we enable knowledge contributions by personnel beyond the typical core decision making group, thereby casting light on blind spots, mitigating human biases, and helping maintain the currency of the developed behavior models. We conclude with a discussion of the many lessons learned in the development of our prototype Propheteer system.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
A fuzzy logic approach to modeling a vehicle crash test
NASA Astrophysics Data System (ADS)
Pawlus, Witold; Karimi, Hamid Reza; Robbersmyr, Kjell G.
2013-03-01
This paper presents an application of a fuzzy approach to vehicle crash modeling. A typical vehicle-to-pole collision is described and the kinematics of a car involved in this type of crash event are thoroughly characterized. The basics of fuzzy set theory and modeling principles based on the fuzzy logic approach are presented. In particular, exceptional attention is paid to explaining the methodology for creating a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared to the original vehicle's kinematics. It is concluded which factors influence the accuracy of the fuzzy model's output and how they can be adjusted to improve the model's fidelity.
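The abstract does not spell out the fuzzy machinery, but the standard ingredients it alludes to (membership functions, rules, defuzzification) can be shown in a few lines. The sketch below is a generic Mamdani-style system with triangular memberships and discrete centroid defuzzification; the variables (impact speed in, peak deceleration out), ranges, and rules are invented for illustration and are not taken from the crash test in the paper.

```python
# Minimal Mamdani-style fuzzy sketch in the spirit of the paper above:
# triangular membership functions, rules mapping impact speed [km/h] to
# peak deceleration [g], max aggregation, and centroid defuzzification.
# All variable ranges and rules here are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

speed_sets = {"low": (0, 10, 40), "medium": (20, 50, 80), "high": (60, 100, 140)}
decel_sets = {"mild": (0, 5, 10), "moderate": (5, 15, 25), "severe": (15, 30, 45)}
rules = {"low": "mild", "medium": "moderate", "high": "severe"}

def infer(speed):
    """Fire all rules, clip and aggregate output sets by max, defuzzify by centroid."""
    xs = [i * 0.5 for i in range(91)]             # output grid: 0..45 g
    agg = [0.0] * len(xs)
    for s_name, d_name in rules.items():
        w = tri(speed, *speed_sets[s_name])        # rule firing strength
        for i, x in enumerate(xs):
            agg[i] = max(agg[i], min(w, tri(x, *decel_sets[d_name])))
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else 0.0

print(round(infer(35.0), 2))   # a mid-range speed fires "low" and "medium" rules
```

Tuning the membership breakpoints and the rule base against measured kinematics is precisely the kind of adjustment the abstract says governs the fuzzy model's fidelity.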
Non-equilibrium phase transitions in a driven-dissipative system of interacting bosons
NASA Astrophysics Data System (ADS)
Young, Jeremy T.; Foss-Feig, Michael; Gorshkov, Alexey V.; Maghrebi, Mohammad F.
2017-04-01
Atomic, molecular, and optical systems provide unique opportunities to study simple models of driven-dissipative many-body quantum systems. Typically, one is interested in the resultant steady state, but the non-equilibrium nature of the physics involved presents several problems in understanding its behavior theoretically. Recently, it has been shown that in many of these models, it is possible to map the steady-state phase transitions onto classical equilibrium phase transitions. In the language of Keldysh field theory, this relation typically only becomes apparent after integrating out massive fields near the critical point, leaving behind a single massless field undergoing near-equilibrium dynamics. In this talk, we study a driven-dissipative XXZ bosonic model and discover critical points at which two fields become gapless. Each critical point separates three different possible phases: a uniform phase, an anti-ferromagnetic phase, and a limit cycle phase. Furthermore, a description in terms of an equilibrium phase transition does not seem possible, so the associated phase transitions appear to be inherently non-equilibrium.
Morphological Evolution of Pit-Patterned Si(001) Substrates Driven by Surface-Energy Reduction
NASA Astrophysics Data System (ADS)
Salvalaglio, Marco; Backofen, Rainer; Voigt, Axel; Montalenti, Francesco
2017-09-01
Lateral ordering of heteroepitaxial islands can be conveniently achieved by suitable pit-patterning of the substrate prior to deposition. Controlling the shape, orientation, and size of the pits is not trivial as, being metastable, they can evolve significantly during deposition/annealing. In this paper, we exploit a continuum model to explore the typical metastable pit morphologies that can be expected on Si(001), depending on the initial depth/shape. Evolution is predicted using a surface-diffusion model, formulated in a phase-field framework and accounting for surface-energy anisotropy. The results nicely reproduce typical metastable shapes reported in the literature. Moreover, the long-timescale evolution of pit profiles with different depths is found to follow a similar kinetic pathway. The model is also exploited to treat the case of heteroepitaxial growth involving two materials characterized by different facets in their equilibrium Wulff shapes. This can lead to significant changes in morphology, such as a rotation of the pit during deposition, as evidenced in Ge/Si experiments.
12 CFR 1070.22 - Fees for processing requests for CFPB records.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of grades typically involved may be established. This charge shall include transportation of...), an average rate for the range of grades typically involved may be established. Fees shall be charged... research. (iii) Non-commercial scientific institution refers to an institution that is not operated on a...
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties, such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals are large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNPs), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of variables selected using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable with considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, not just SCAD and the Adaptive Lasso.
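The partition-to-partition variability described here can be illustrated with a stdlib-only toy sketch. The selector below (correlation screening with a CV-tuned threshold) is a deliberately crude stand-in for SCAD or the Adaptive Lasso, and all data and parameter values are hypothetical:

```python
import random

def kfold(n, m, seed):
    """Random partition of indices 0..n-1 into m folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::m] for i in range(m)]

def fit(rows, targets, thresh):
    """Correlation screening: keep predictor j if |corr(x_j, y)| > thresh,
    using the univariate slope cov/var as its coefficient.  A crude,
    stdlib-only stand-in for a penalized selector such as SCAD or Lasso."""
    p = len(rows[0])
    ybar = sum(targets) / len(targets)
    model = {}
    for j in range(p):
        xj = [r[j] for r in rows]
        xbar = sum(xj) / len(xj)
        cov = sum((a - xbar) * (b - ybar) for a, b in zip(xj, targets))
        vx = sum((a - xbar) ** 2 for a in xj)
        vy = sum((b - ybar) ** 2 for b in targets)
        if vx > 0 and vy > 0 and abs(cov) / (vx * vy) ** 0.5 > thresh:
            model[j] = cov / vx
    return model

def cv_model_size(X, y, seed, grid=(0.05, 0.10, 0.15, 0.20)):
    """Pick the screening threshold by 10-fold CV, refit on all data,
    and return how many variables the final model keeps."""
    n = len(y)
    best_t, best_err = grid[0], float("inf")
    for t in grid:
        err = 0.0
        for fold in kfold(n, 10, seed):
            hold = set(fold)
            tr = [i for i in range(n) if i not in hold]
            m = fit([X[i] for i in tr], [y[i] for i in tr], t)
            err += sum((y[i] - sum(b * X[i][j] for j, b in m.items())) ** 2
                       for i in fold)
        if err < best_err:
            best_t, best_err = t, err
    return len(fit(X, y, best_t))

# Sparse truth with deliberately *small* signals: 3 of 30 predictors matter.
rng = random.Random(1)
n, p = 150, 30
X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [0.25 * (r[0] + r[1] + r[2]) + rng.gauss(0, 1) for r in X]

sizes = [cv_model_size(X, y, seed) for seed in range(10)]
print(sizes)   # the selected-model size depends on the CV partition
```

Re-running with different partition seeds changes which threshold wins the cross-validation, and hence how many variables the final model keeps, mirroring the variability the authors report for weak-signal settings.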
On the modulation of X ray fluxes in thunderstorms
NASA Technical Reports Server (NTRS)
Mccarthy, Michael P.; Parks, George K.
1992-01-01
The production of X-ray fluxes in thunderstorms has been attributed to bremsstrahlung. Assuming this, another question arises: how can a thunderstorm modulate the number density of electrons that are sufficiently energetic to produce X-rays? As a partial answer to this question, the effects of typical thunderstorm electric fields on a background population of energetic electrons, such as those produced by cosmic ray secondaries and their decays or the decay of airborne radionuclides, are considered. The observed variation of X-ray flux is shown to be accounted for by a simple model involving typical electric field strengths. The background electron number density required by the model is determined to be more than 2 orders of magnitude higher than that available from radon decay and a factor of 8 higher than that available from cosmic ray secondaries. The ionization enhancement due to energetic electrons and X-rays is discussed.
Emmetropisation and the aetiology of refractive errors
Flitcroft, D I
2014-01-01
The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (i.e. is leptokurtic). This distribution arises from two apparently conflicting tendencies: first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (i.e. emmetropisation) and, second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
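The life-course idea can be caricatured in a few lines of Monte Carlo code. This is not Flitcroft's model; the distributions and rates below are invented purely to show how an emmetropised population plus a myopia-prone subgroup produces a negatively skewed, leptokurtic refraction distribution:

```python
import random
import statistics

def simulate_refraction(n, myopia_rate=0.3, seed=42):
    """Toy life-course sketch: emmetropisation pulls refraction toward
    +0.5 D with a small spread; a myopia-prone subgroup then progresses
    by a variable negative amount, skewing the adult distribution left."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = rng.gauss(0.5, 0.75)           # post-emmetropisation refraction (D)
        if rng.random() < myopia_rate:     # subgroup develops myopia
            r -= rng.expovariate(1 / 2.5)  # variable myopic progression (D)
        out.append(r)
    return out

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def kurtosis(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * s ** 4)

sample = simulate_refraction(50_000)
print(round(skewness(sample), 2), round(kurtosis(sample), 2))
# negative skew, kurtosis above the Gaussian value of 3 (leptokurtic)
```

Even this two-component caricature departs from normality in the same direction as real adult refraction data: the myopic tail drags the skew negative and fattens the tails.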
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
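A minimal genetic algorithm over integer decision variables can be sketched as follows. The cost function is a hypothetical stand-in for the discrete-event simulation output (the paper's actual objective comes from a simulation model, not a closed-form expression):

```python
import random

def cost(levels):
    """Hypothetical stand-in for the discrete-event simulation output:
    a holding cost per resource unit plus a steep delay penalty when a
    resource pool is under-provisioned."""
    return sum(levels) + sum(50 / (l + 1) for l in levels)

def ga_minimize(n_vars=5, max_level=20, pop_size=30, gens=60, seed=7):
    """Plain integer-coded GA: truncation selection, one-point
    crossover, and small +/- integer mutations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, max_level) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vars)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # integer mutation, clamped
                j = rng.randrange(n_vars)
                child[j] = max(0, min(max_level,
                                      child[j] + rng.choice([-2, -1, 1, 2])))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = ga_minimize()
print(best, round(cost(best), 2))
```

For this toy cost the per-variable optimum is 6 units (6 + 50/7 ≈ 13.14), so the search tends to settle near [6, 6, 6, 6, 6]. In the paper's setting, evaluating `cost` would instead mean running the discrete-event simulation, where stochastic outputs and discrete variables make such gradient-free search attractive.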
Kaiyala, Karl J.
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit ‘local linearity.’ Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying ‘latent’ allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses. PMID:25068692
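The central claim, that linearizing an allometric curve EE(M) = a·M^b around a reference mass M0 necessarily produces a non-zero intercept, and that a fitted (slope, intercept) pair can be inverted to recover the latent allometric parameters, can be checked directly. The parameter values below are illustrative, not taken from the paper:

```python
# First-order Taylor linearization of an allometric REE curve,
# EE(M) = a * M**b, around a reference mass M0:
#   EE(M) ≈ a*M0**b + a*b*M0**(b-1) * (M - M0)
#         = [a*b*M0**(b-1)] * M + [a*M0**b * (1 - b)]
# so the linear model's slope is a*b*M0**(b-1) and its y-intercept is
# a*M0**b*(1 - b), which is strictly positive whenever 0 < b < 1.

a, b, M0 = 70.0, 0.75, 25.0        # illustrative allometric parameters

slope = a * b * M0 ** (b - 1)
intercept = a * M0 ** b * (1 - b)

# Invert: from (slope, intercept, M0) back to the latent (a, b).
b_hat = slope * M0 / (slope * M0 + intercept)
a_hat = slope / (b_hat * M0 ** (b_hat - 1))

print(round(intercept, 2), round(b_hat, 3), round(a_hat, 1))
# prints: 195.66 0.75 70.0
```

This mirrors the paper's two points in miniature: the intercept term is an algebraic consequence of local linearization (not a modeling artifact), and linear-model parameters can be transformed back into the parameters of the underlying allometric equation.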
Copper Corrosion and Biocorrosion Events in Premise Plumbing
Fischer, Diego A.; Alsina, Marco A.; Pastén, Pablo A.
2017-01-01
Corrosion of copper pipes may release high amounts of copper into the water, exceeding the maximum concentration of copper for drinking water standards. Typically, the events with the highest release of copper into drinking water are related to the presence of biofilms. This article reviews this phenomenon, focusing on copper ingestion and its health impacts, the physicochemical mechanisms and the microbial involvement on copper release, the techniques used to describe and understand this phenomenon, and the hydrodynamic effects. A conceptual model is proposed and the mathematical models are reviewed. PMID:28872628
Common omissions and misconceptions of wave propagation in turbulence: discussion.
Charnotskii, Mikhail
2012-05-01
This review paper addresses typical mistakes and omissions in theoretical research and modeling of optical propagation through atmospheric turbulence. We discuss the disregard of some general properties of narrow-angle propagation in refractive random media, the careless use of simplified models of turbulence, and omissions in the calculations of the second moment of the propagating wave. We also review some misconceptions regarding short-exposure imaging, propagation of polarized waves, and calculations of the scintillation index of beam waves. © 2012 Optical Society of America
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieve the aforementioned approximation through model representation and manipulations, and which have attracted the attention of the earth sciences community during the last decade, with several applications already presented for a variety of geophysical problems. In this paper, we examine the efficiency of the combination of the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively.
The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength, and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
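The LOM/GOM combination can be sketched on a toy multimodal misfit function. This is not the authors' tomography code: a small GA stands in for the global stage, greedy coordinate descent stands in for the linearized least-squares stage, and all parameters are invented:

```python
import math
import random

def misfit(m):
    """Hypothetical multimodal misfit: global minimum 0 at (1, 1),
    surrounded by local minima that can trap purely local searches."""
    x, y = m
    return ((x - 1) ** 2 + (y - 1) ** 2
            + 0.5 * (2 - math.cos(6 * (x - 1)) - math.cos(6 * (y - 1))))

def local_refine(m, step=0.05, iters=200):
    """Greedy coordinate descent with step halving, a stand-in for the
    linearized least-squares (local, LOM) stage."""
    m = list(m)
    best = misfit(m)
    for _ in range(iters):
        improved = False
        for j in (0, 1):
            for d in (step, -step):
                trial = list(m)
                trial[j] += d
                f = misfit(trial)
                if f < best:
                    m, best, improved = trial, f, True
        if not improved:
            step /= 2
    return m, best

def hybrid(seed=3, pop_size=20, gens=25):
    """Global (GOM) stage: a small GA with blend crossover explores
    [-4, 4]^2; its best individual then seeds the local stage."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-4, 4), rng.uniform(-4, 4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]
        pop = elite + [[(a[j] + b[j]) / 2 + rng.gauss(0, 0.3) for j in (0, 1)]
                       for a, b in (rng.sample(elite, 2)
                                    for _ in range(pop_size - len(elite)))]
    coarse = min(pop, key=misfit)
    return local_refine(coarse)

model, f = hybrid()
print([round(c, 2) for c in model], round(f, 4))
```

The division of labor matches the paper's motivation: the global stage is slow but hard to trap, the local stage converges quickly but only within the basin it starts in, so seeding the latter with the former combines their strengths.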
Rodgers, Joseph Lee
2016-01-01
The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.
Taylor, Mark J; Taylor, Natasha
2014-12-01
England and Wales are moving toward a model of 'opt out' for use of personal confidential data in health research. Existing research does not make clear how acceptable this move is to the public. While people are typically supportive of health research, when asked to describe the ideal level of control there is a marked lack of consensus over the preferred model of consent (e.g. explicit consent, opt out etc.). This study sought to investigate a relatively unexplored difference between the consent model that people prefer and that which they are willing to accept. It also sought to explore any reasons for such acceptance. A mixed methods approach was used to gather data, incorporating a structured questionnaire and in-depth focus group discussions led by an external facilitator. The sampling strategy was designed to recruit people with different involvement in the NHS but typically with experience of NHS services. Three separate focus groups were carried out over three consecutive days. The central finding is that people are typically willing to accept models of consent other than that which they would prefer. Such acceptance is typically conditional upon a number of factors, including: security and confidentiality, no inappropriate commercialisation or detrimental use, transparency, independent overview, and the ability to object to any processing considered to be inappropriate or particularly sensitive. This study suggests that most people would find research use without the possibility of objection to be unacceptable. However, the study also suggests that people who would prefer to be asked explicitly before data were used for purposes beyond direct care may be willing to accept an opt out model of consent if the reasons for not seeking explicit consent are accessible to them and they trust that data is only going to be used under conditions, and with safeguards, that they would consider to be acceptable even if not preferable.
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely applied even more in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.
Is “Maturing out” of Problematic Alcohol Involvement Related to Personality Change?
Littlefield, Andrew K.; Sher, Kenneth J.; Wood, Phillip K.
2009-01-01
Problematic alcohol involvement typically peaks in the early 20s and declines with age. This maturing out of alcohol involvement is usually attributed to individuals attaining adult statuses incompatible with heavy drinking. Nevertheless, little is known about how changes in problematic alcohol use during emerging/early adulthood relate to changes in etiologically relevant personality traits that also change during this period. This study examined the relation between changes in problematic alcohol involvement and personality (measures of impulsivity, neuroticism, and extraversion) from ages 18 to 35 in a cohort of college students (N = 489) at varying risk for alcohol use disorders. Latent growth models indicated that both normative and individual changes in alcohol involvement occur between ages 18 and 35 and that these changes are associated with changes in neuroticism and impulsivity. Moreover, marital and parental role statuses did not appear to be third-variable explanations of the associated changes in alcohol involvement and personality. Findings suggest that personality change may be an important mechanism in the maturing-out effect. PMID:19413410
ERIC Educational Resources Information Center
Solomon, Olga; Heritage, John; Yin, Larry; Maynard, Douglas W.; Bauman, Margaret L.
2016-01-01
Conversation and discourse analyses were used to examine medical problem presentation in pediatric care. Healthcare visits involving children with ASD and typically developing children were analyzed. We examined how children's communicative and epistemic capabilities, and their opportunities to be socialized into a competent patient role are…
Transforming community access to space science models
NASA Astrophysics Data System (ADS)
MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti
2012-04-01
Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.
Modelling Market Dynamics with a "Market Game"
NASA Astrophysics Data System (ADS)
Katahira, Kei; Chen, Yu
In the financial market, traders, especially speculators, typically behave so as to yield capital gains from the difference between selling and buying prices. Making use of the structure of the Minority Game, we build a novel market toy model that takes into account this speculative mindset, involving a round-trip trade, to analyze the market dynamics as a system. Even though the micro-level behavioral rules of players in this new model are quite simple, its macroscopic aggregate output reproduces the well-known stylized facts such as volatility clustering and heavy tails. The proposed model may become a new alternative bottom-up approach for studying the emerging mechanism of those stylized qualitative properties of asset returns.
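For readers unfamiliar with the underlying structure, a minimal standard Minority Game (without the authors' round-trip extension) can be sketched as follows; all parameter choices are illustrative:

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=500, seed=0):
    """Minimal Minority Game: each agent owns a few fixed random
    strategies (tables mapping the last `memory` winning sides to a
    +1/-1 action), always plays its currently best-scoring one, and
    the minority side wins each round.  Returns the attendance A(t)."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    agents = [[[rng.choice((-1, 1)) for _ in range(n_hist)]
               for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_hist)      # encoded recent winning sides
    attendance = []
    for _ in range(rounds):
        acts = [strats[max(range(n_strategies),
                           key=lambda k: scores[i][k])][history]
                for i, strats in enumerate(agents)]
        A = sum(acts)                    # net excess of +1 over -1 choices
        attendance.append(A)
        minority = -1 if A > 0 else 1    # the minority action wins
        for i, strats in enumerate(agents):
            for k in range(n_strategies):
                # virtual scoring: reward every strategy that would have won
                scores[i][k] += 1 if strats[k][history] == minority else -1
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist
    return attendance

A = minority_game()
print(max(abs(a) for a in A))   # fluctuations stay well inside +/- n_agents
```

The attendance series A(t) is the raw ingredient that market variants of the game (including the round-trip model described here) map onto price changes; the stylized facts emerge from how agents' strategy scores co-evolve with the crowd they create.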
Barberà, Miquel; Collantes-Alegre, Jorge Mariano; Martínez-Torres, David
2017-04-01
Aphids are typical photoperiodic insects that switch from the viviparous parthenogenetic reproduction characteristic of long-day seasons to oviparous sexual reproduction triggered by the shortening of the photoperiod in autumn, yielding an overwintering egg in which an embryonic diapause takes place. While the involvement of the circadian clock genes in photoperiodism in mammals is well established, there is still some controversy over their participation in insects. The availability of the genome of the pea aphid Acyrthosiphon pisum places this species as an excellent model to investigate the involvement of the circadian system in the aphid seasonal response. In the present report, we have advanced the characterisation of the circadian clock genes and shown that these genes display extensive alternative splicing. Moreover, the expression of circadian clock genes, analysed at different moments of the day, showed robust cycling of the central clock genes period and timeless. Furthermore, the rhythmic expression of these genes was rapidly dampened under DD (continuous darkness) conditions, thus supporting the model of a seasonal response based on a heavily dampened circadian oscillator. Additionally, the increased expression of some of the circadian clock genes under short-day conditions suggests their involvement in the induction of the aphid seasonal response. Finally, in situ localisation of transcripts of the genes period and timeless in the aphid brain revealed the site of clock neurons for the first time in aphids. Two groups of clock cells were identified: the Dorsal Neurons (DN) and the Lateral Neurons (LN), both in the protocerebrum. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reimers, Jeffrey R; McKemmish, Laura K; McKenzie, Ross H; Hush, Noel S
2015-10-14
While diabatic approaches are ubiquitous for the understanding of electron-transfer reactions and have been mooted as being of general relevance, alternate applications have not been able to unify the same wide range of observed spectroscopic and kinetic properties. The cause of this is identified as the fundamentally different orbital configurations involved: charge-transfer phenomena involve typically either 1 or 3 electrons in two orbitals whereas most reactions are typically closed shell. As a result, two vibrationally coupled electronic states depict charge-transfer scenarios whereas three coupled states arise for closed-shell reactions of non-degenerate molecules and seven states for the reactions implicated in the aromaticity of benzene. Previous diabatic treatments of closed-shell processes have considered only two arbitrarily chosen states as being critical, mapping these states to those for electron transfer. We show that such effective two-state diabatic models are feasible but involve renormalized electronic coupling and vibrational coupling parameters, with this renormalization being property dependent. With this caveat, diabatic models are shown to provide excellent descriptions of the spectroscopy and kinetics of the ammonia inversion reaction, proton transfer in N2H7(+), and aromaticity in benzene. This allows for the development of a single simple theory that can semi-quantitatively describe all of these chemical phenomena, as well as of course electron-transfer reactions. It forms a basis for understanding many technologically relevant aspects of chemical reactions, condensed-matter physics, chemical quantum entanglement, nanotechnology, and natural or artificial solar energy capture and conversion.
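The two-state diabatic picture invoked here has a compact algebraic core: diagonalizing the 2x2 diabatic Hamiltonian [[e1, V], [V, e2]] yields the adiabatic (avoided-crossing) surfaces. The sketch below uses invented harmonic diabats and an illustrative coupling, not values from the paper:

```python
import math

def adiabatic_energies(e1, e2, v):
    """Eigenvalues of the 2x2 diabatic Hamiltonian [[e1, v], [v, e2]]:
    the lower and upper adiabatic (avoided-crossing) surfaces."""
    mean = (e1 + e2) / 2
    half_gap = math.hypot((e1 - e2) / 2, v)
    return mean - half_gap, mean + half_gap

# Two harmonic diabatic surfaces displaced along a coupling coordinate q,
# mixed by a constant electronic coupling V (all values illustrative).
V = 0.05
for q in (-1.0, -0.5, 0.0, 0.5, 1.0):
    e1 = 0.5 * (q + 0.5) ** 2        # diabatic state 1
    e2 = 0.5 * (q - 0.5) ** 2        # diabatic state 2
    lo, hi = adiabatic_energies(e1, e2, V)
    print(f"q={q:+.1f}  lower={lo:.3f}  upper={hi:.3f}  gap={hi - lo:.3f}")
```

At the diabatic crossing (q = 0, where e1 = e2) the adiabatic gap equals 2V, which is why the electronic coupling is the key renormalized parameter when a many-state closed-shell problem is reduced to an effective two-state model.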
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that modeling workflows aren't deemed by practitioners to be a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models; an inability to keep pace with accelerating data production rates; and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources.
This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.
2012-01-01
Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
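The excess-zeros phenomenon is easy to reproduce. The sketch below is not the authors' hierarchical Bayes model; it simulates univariate zero-inflated Poisson counts and recovers (π, λ) via the standard method-of-moments identities E[X] = (1−π)λ and Var[X] = (1−π)λ(1+πλ):

```python
import math
import random
import statistics

def rpois(rng, lam):
    """Poisson draw by inversion of a product of uniforms (Knuth)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def zip_sample(rng, pi, lam, n):
    """Zero-inflated Poisson: a structural zero with probability pi,
    otherwise an ordinary Poisson(lam) count."""
    return [0 if rng.random() < pi else rpois(rng, lam) for _ in range(n)]

rng = random.Random(2024)
data = zip_sample(rng, pi=0.4, lam=3.0, n=50_000)

# Method-of-moments inversion:
#   mean = (1-pi)*lam,  var = (1-pi)*lam*(1 + pi*lam)
#   =>  var/mean - 1 = pi*lam,  lam = mean + pi*lam,  pi = (pi*lam)/lam
m = statistics.fmean(data)
v = statistics.pvariance(data)
pilam = v / m - 1
lam_hat = m + pilam
pi_hat = pilam / lam_hat

print(round(pi_hat, 2), round(lam_hat, 2))
print("zero fraction:", round(data.count(0) / len(data), 3))
```

With π = 0.4 and λ = 3, roughly 43% of counts are zero, far above the e^(−3) ≈ 5% a plain Poisson with the same λ would produce; ignoring that inflation is exactly the modeling failure the zero-inflated class is designed to avoid.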
Complement Coercion: The Joint Effects of Type and Typicality.
Zarcone, Alessandra; McRae, Ken; Lenci, Alessandro; Padó, Sebastian
2017-01-01
Complement coercion (begin a book → reading) involves a type clash between an event-selecting verb and an entity-denoting object, triggering a covert event (reading). Two main factors involved in complement coercion have been investigated: the semantic type of the object (event vs. entity), and the typicality of the covert event (the author began a book → writing). In previous research, reading times have been measured at the object. However, the influence of the typicality of the subject–object combination on processing an aspectual verb such as begin has not been studied. Using a self-paced reading study, we manipulated semantic type and subject–object typicality, exploiting German word order to measure reading times at the aspectual verb. These variables interacted at the target verb. We conclude that both type and typicality probabilistically guide expectations about upcoming input. These results are compatible with an expectation-based view of complement coercion and language comprehension more generally in which there is rapid interaction between what is typically viewed as linguistic knowledge, and what is typically viewed as domain general knowledge about how the world works.
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
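The modified-BIC idea can be illustrated in a toy least-squares setting. This is a sketch only: the paper fits linear time-invariant ARX models of the human controller, whereas here we simply compare a feedback-only regressor set against one with a spurious extra "feedforward" regressor; the data, weights, and model structure are all invented for illustration.

```python
import numpy as np

def modified_bic(y, X, weight=1.0):
    """BIC with an adjustable penalty weight on the parameter count.
    weight=1 recovers the classical BIC; weight>1 penalizes extra
    parameters more heavily (the paper's remedy for false positives)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + weight * k * np.log(n)

rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                              # disturbance signal
X_fb = u[:, None]                                   # feedback path only
X_ffwd = np.column_stack([u, rng.normal(size=n)])   # adds an irrelevant regressor
y = 0.8 * u + 0.1 * rng.normal(size=n)              # generated by pure feedback

# With the heavier penalty, the spurious "feedforward" term is rejected:
bic_fb = modified_bic(y, X_fb, weight=2.0)
bic_ffwd = modified_bic(y, X_ffwd, weight=2.0)
```

The classical criterion (weight = 1) sometimes prefers the larger model on pure-feedback data, which is precisely the false-positive behavior the paper's Monte Carlo simulations expose.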
Choice Rules and Accumulator Networks
2015-01-01
This article presents a preference accumulation model that can be used to implement a number of different multi-attribute heuristic choice rules, including the lexicographic rule, the majority of confirming dimensions (tallying) rule, and the equal weights rule. The proposed model differs from existing accumulators in terms of attribute representation: leakage and competition, typically applied only to preference accumulation, are also assumed to be involved in processing attribute values. This allows the model to perform a range of sophisticated attribute-wise comparisons, including comparisons that compute relative rank. The ability of a preference accumulation model composed of leaky competitive networks to mimic symbolic models of heuristic choice suggests that these two approaches are not incompatible, and that a unitary cognitive model of preferential choice, based on insights from both approaches, may be feasible. PMID:28670592
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context
Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan
2012-01-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
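The reported randomness of CV-based selection can be reproduced with a minimal stand-in. This sketch uses a plain Lasso solved by coordinate descent rather than SCAD or the Adaptive Lasso, synthetic data in place of SNP data, and an arbitrary small lambda grid; only the qualitative phenomenon (the selected-variable count varies with the random fold split) carries over.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=50):
    """Minimal Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def n_selected_by_cv(X, y, lams, rng, m=10):
    """Choose lambda by m-fold CV, then count the variables it selects."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, m)
    cv_err = []
    for lam in lams:
        err = 0.0
        for k in range(m):
            test = folds[k]
            train = np.concatenate([folds[i] for i in range(m) if i != k])
            b = lasso_cd(X[train], y[train], lam)
            err += float(np.sum((y[test] - X[test] @ b) ** 2))
        cv_err.append(err)
    best = lams[int(np.argmin(cv_err))]
    return int(np.sum(lasso_cd(X, y, best) != 0))

rng = np.random.default_rng(1)
n, p = 60, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = 0.3                    # sparse but weak signals, as in SNP studies
y = X @ beta_true + rng.normal(size=n)

# Only the random fold split changes between runs, yet the number of
# selected variables can differ from run to run:
counts = [n_selected_by_cv(X, y, [1.0, 4.0, 16.0], rng) for _ in range(4)]
```

Averaging over several CV replications, rather than trusting a single run, is the practical remedy the paper's conclusion points toward.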
The correlation of fragmentation and structure of a protein
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qinyuan; Cheng, Xueheng; Van Orden, S.
1995-12-31
Characterization of proteins of similar structures is important to understanding the biological function of the proteins and the processes in which they are involved. Cytochrome c variants typically have similar sequences, and have similar conformations in solution with almost identical absorption spectra and redox potentials. The authors chose cytochrome c's from bovine, tuna, rabbit and horse as a model system for studying large biomolecules using MS^n of multiply charged ions generated from electrospray ionization (ESI).
Design of a Pictorial Program Reference Language.
1984-08-01
Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.
Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.
Learning Setting-Generalized Activity Models for Smart Spaces
Cook, Diane J.
2011-01-01
The data mining and pervasive computing technologies found in smart homes offer unprecedented opportunities for providing context-aware services, including health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to provide these services, smart environment algorithms need to recognize and track activities that people normally perform as part of their daily routines. However, activity recognition has typically involved gathering and labeling large amounts of data in each setting to learn a model for activities in that setting. We hypothesize that generalized models can be learned for common activities that span multiple environment settings and resident types. We describe our approach to learning these models and demonstrate the approach using eleven CASAS datasets collected in seven environments. PMID:21461133
Break modeling for RELAP5 analyses of ISP-27 Bethsy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petelin, S.; Gortnar, O.; Mavko, B.
This paper presents pre- and posttest analyses of International Standard Problem (ISP) 27 on the Bethsy facility and separate RELAP5 break model tests considering the measured boundary condition at break inlet. This contribution also demonstrates modifications which have assured the significant improvement of model response in posttest simulations. Calculations were performed using the RELAP5/MOD2/36.05 and RELAP5/MOD3.5M5 codes on the MicroVAX, SUN, and CONVEX computers. Bethsy is an integral test facility that simulates a typical 900-MW (electric) Framatome pressurized water reactor. The ISP-27 scenario involves a 2-in. cold-leg break without HPSI and with delayed operator procedures for secondary system depressurization.
NASA Astrophysics Data System (ADS)
McCune, Matthew; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan
2014-03-01
Cellular Particle Dynamics (CPD) is an effective computational method for describing and predicting the time evolution of biomechanical relaxation processes in multicellular systems. A typical example is the fusion of spheroidal bioink particles during post-bioprinting structure formation. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short-range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through integration of their equations of motion. CPD was successfully applied to describe and predict the fusion of 3D tissue constructs involving identical spherical aggregates. Here, we demonstrate that CPD can also predict tissue formation involving uneven spherical aggregates whose volumes decrease during the fusion process. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
van Doorn, Andrea
2017-01-01
Generic red, green, and blue images can be regarded as data sources of coarse (three-bin) local spectra; typical data volumes are 10^4 to 10^7 spectra. Image databases often yield hundreds or thousands of images, yielding data sources of 10^9 to 10^10 spectra. There is usually no calibration, and there are often various nonlinear image transformations involved. However, we argue that sheer numbers make up for such ambiguity. We propose a model of spectral data mining that applies to the sublunar realm: spectra due to the scattering of daylight by objects from the generic terrestrial environment. The model involves colorimetry and ecological physics. Whereas the colorimetry is readily dealt with, one needs to handle the ecological physics with heuristic methods. The results suggest evolutionary causes of the human visual system. We also suggest effective methods to generate red, green, and blue color gamuts for various terrains. PMID:28989697
Molecular genetic models related to schizophrenia and psychotic illness: heuristics and challenges.
O'Tuathaigh, Colm M P; Desbonnet, Lieve; Moran, Paula M; Kirby, Brian P; Waddington, John L
2011-01-01
Schizophrenia is a heritable disorder that may involve several common genes of small effect and/or rare copy number variation, with phenotypic heterogeneity across patients. Furthermore, any boundaries vis-à-vis other psychotic disorders are far from clear. Consequently, identification of informative animal models for this disorder, which typically relate to pharmacological and putative pathophysiological processes of uncertain validity, faces considerable challenges. In juxtaposition, the majority of mutant models for schizophrenia relate to the functional roles of a diverse set of genes associated with risk for the disorder or with such putative pathophysiological processes. This chapter seeks to outline the evidence from phenotypic studies in mutant models related to schizophrenia. These have commonly assessed the degree to which mutation of a schizophrenia-related gene is associated with the expression of several aspects of the schizophrenia phenotype or more circumscribed, schizophrenia-related endophenotypes; typically, they place specific emphasis on positive and negative symptoms and cognitive deficits, and extend to structural and other pathological features. We first consider the primary technological approaches to the generation of such mutants, to include their relative merits and demerits, and then highlight the diverse phenotypic approaches that have been developed for their assessment. The chapter then considers the application of mutant phenotypes to study pathobiological and pharmacological mechanisms thought to be relevant for schizophrenia, particularly in terms of dopaminergic and glutamatergic dysfunction, and to an increasing range of candidate susceptibility genes and copy number variants. Finally, we discuss several pertinent issues and challenges within the field which relate to both phenotypic evaluation and a growing appreciation of the functional genomics of schizophrenia and the involvement of gene × environment interactions.
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable of simulating moist convection (e.g., a cloud-resolving model or a large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The latter typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigating the impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will describe a novel modeling methodology, piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of the cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep-convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and the separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
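The piggybacking idea can be caricatured in a few lines. This is a toy sketch only, not the talk's model: `scheme_a` and `scheme_b` stand in for two microphysics parameterizations, and the scalar `w` stands in for the resolved flow. The essential point is that both schemes are fed the identical flow while only one feeds back, so any difference between their outputs isolates the microphysical impact from cloud-field variability.

```python
def scheme_a(q, w):
    return q + 0.10 * w   # driving scheme: its output shapes the flow

def scheme_b(q, w):
    return q + 0.12 * w   # piggybacking scheme: diagnosed, never feeds back

q_a = q_b = 0.0
for _ in range(100):
    w = 1.0 - 0.05 * q_a     # "flow" responds to the driving scheme only
    q_a = scheme_a(q_a, w)
    q_b = scheme_b(q_b, w)
```

In a traditional parallel-simulation comparison each scheme would generate its own `w`, and the two runs would diverge for dynamical as well as microphysical reasons; piggybacking removes that confound by construction.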
Chan, Ho Yin; Lankevich, Vladimir; Vekilov, Peter G.; Lubchenko, Vassiliy
2012-01-01
Toward quantitative description of protein aggregation, we develop a computationally efficient method to evaluate the potential of mean force between two folded protein molecules that allows for complete sampling of their mutual orientation. Our model is valid at moderate ionic strengths and accounts for the actual charge distribution on the surface of the molecules, the dielectric discontinuity at the protein-solvent interface, and the possibility of protonation or deprotonation of surface residues induced by the electric field due to the other protein molecule. We apply the model to the protein lysozyme, whose solutions exhibit both mesoscopic clusters of protein-rich liquid and liquid-liquid separation; the former requires that protein form complexes with typical lifetimes of approximately milliseconds. We find the electrostatic repulsion is typically lower than the prediction of the Derjaguin-Landau-Verwey-Overbeek theory. The Coulomb interaction in the lowest-energy docking configuration is nonrepulsive, despite the high positive charge on the molecules. Typical docking configurations barely involve protonation or deprotonation of surface residues. The obtained potential of mean force between folded lysozyme molecules is consistent with the location of the liquid-liquid coexistence, but produces dimers that are too short-lived for clusters to exist, suggesting lysozyme undergoes conformational changes during cluster formation. PMID:22768950
Case for diagnosis. Systemic light chain amyloidosis with cutaneous involvement
Gontijo, João Renato Vianna; Pinto, Jackson Machado; de Paula, Maysa Carla
2017-01-01
Systemic light chain amyloidosis is a rare disease. Due to its typical cutaneous lesions, dermatologists play an essential role in its diagnosis. Clinical manifestations vary according to the affected organ and are often unspecific. Definitive diagnosis is achieved through biopsy. We report a patient with palpebral amyloidosis, typical bilateral ecchymoses and cardiac involvement, without plasma cell dyscrasia or lymphomas. The patient died shortly after the diagnosis. PMID:29166521
Diabatic models with transferrable parameters for generalized chemical reactions
NASA Astrophysics Data System (ADS)
Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.
2017-05-01
Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. 
Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical parameter space, and (iv), demonstrating the types of chemical reactions that may be suitable for exploitation as a chemical qubit in some quantum information processor.
Assessing cost-effectiveness of drug interventions for schizophrenia.
Magnus, Anne; Carr, Vaughan; Mihalopoulos, Cathrine; Carter, Rob; Vos, Theo
2005-01-01
To assess from a health sector perspective the incremental cost-effectiveness of eight drug treatment scenarios for established schizophrenia. Using a standardized methodology, costs and outcomes are modelled over the lifetime of prevalent cases of schizophrenia in Australia in 2000. A two-stage approach to assessment of health benefit is used. The first stage involves a quantitative analysis based on disability-adjusted life years (DALYs) averted, using best available evidence. The robustness of results is tested using probabilistic uncertainty analysis. The second stage involves application of 'second filter' criteria (equity, strength of evidence, feasibility and acceptability) to allow broader concepts of benefit to be considered. Replacing oral typicals with risperidone or olanzapine has an incremental cost-effectiveness ratio (ICER) of 48,000 and 92,000 Australian dollars/DALY, respectively. Switching from low-dose typicals to risperidone has an ICER of 80,000 Australian dollars. Giving risperidone to people experiencing side-effects on typicals is more cost-effective at 20,000 Australian dollars. Giving clozapine to people taking typicals, with the worst course of the disorder and either little or clear deterioration, is cost-effective at 42,000 Australian dollars or 23,000 Australian dollars/DALY respectively. The least cost-effective intervention is to replace risperidone with olanzapine at 160,000 Australian dollars/DALY. Based on a 50,000 Australian dollars/DALY threshold, low-dose typical neuroleptics are indicated as the treatment of choice for established schizophrenia, with risperidone being reserved for those experiencing moderate to severe side-effects on typicals. The more expensive olanzapine should only be prescribed when risperidone is not clinically indicated. The high cost of risperidone and olanzapine relative to modest health gains underlies this conclusion.
Earlier introduction of clozapine however, would be cost-effective. This work is limited by weaknesses in trials (lack of long-term efficacy data, quality of life and consumer satisfaction evidence) and the translation of effect size into a DALY change. Some stakeholders, including SANE Australia, argue the modest health gains reported in the literature do not adequately reflect perceptions by patients, clinicians and carers, of improved quality of life with these atypicals.
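The ICER arithmetic behind figures like those above is straightforward. The numbers in this sketch are hypothetical, chosen only to make the division come out cleanly; they are not figures from the study.

```python
def icer(cost_new, cost_old, dalys_new, dalys_old):
    """Incremental cost-effectiveness ratio: extra cost per DALY averted.
    DALYs are a burden measure, so averting DALYs means dalys_new < dalys_old."""
    return (cost_new - cost_old) / (dalys_old - dalys_new)

# Hypothetical inputs: the newer drug costs A$60,000 more per patient over
# the model horizon and averts 0.75 DALYs relative to the comparator.
ratio = icer(cost_new=160_000, cost_old=100_000, dalys_new=9.25, dalys_old=10.0)
# ratio is 80,000 A$/DALY, which would fall above a A$50,000/DALY threshold
```

Read against a threshold such as the A$50,000/DALY used in the abstract, an ICER above the line argues against adoption and one below it argues for.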
[Local involvement of the optic nerve by acute lymphoblastic leukemia].
Bernardczyk-Meller, Jadwiga; Stefańska, Katarzyna
2005-01-01
The leukemias quite commonly involve the eyes and adnexa, and in some cases cause visual complaints. Both the anterior chamber of the eye and the posterior portion of the globe may be sites of acute or chronic leukemia and of leukemic relapse. We report a unique case of a 14-year-old leukemic patient who suffered visual loss and papilloedema due to unilateral local involvement of the optic nerve during a second relapse of acute lymphocytic leukemia. Despite typical treatment of the main disease, the boy died. The authors also present the typical ophthalmic features of the leukemias.
Twelve tips for "flipping" the classroom.
Moffett, Jennifer
2015-04-01
The flipped classroom is a pedagogical model in which the typical lecture and homework elements of a course are reversed. The following tips outline the steps involved in making a successful transition to a flipped classroom approach. The tips are based on the available literature alongside the author's experience of using the approach in a medical education setting. Flipping a classroom has a number of potential benefits, for example increased educator-student interaction, but must be planned and implemented carefully to support effective learning.
Herpes zoster - typical and atypical presentations.
Dayan, Roy Rafael; Peleg, Roni
2017-08-01
Varicella-zoster virus infection is an intriguing medical entity that involves many medical specialties including infectious diseases, immunology, dermatology, and neurology. It can affect patients from early childhood to old age. Its treatment requires expertise in pain management and psychological support. While varicella is caused by acute viremia, herpes zoster occurs after the dormant viral infection, involving the cranial nerve or sensory root ganglia, is re-activated and spreads orthodromically from the ganglion, via the sensory nerve root, to the innervated target tissue (skin, cornea, auditory canal, etc.). Typically, a single dermatome is involved, although two or three adjacent dermatomes may be affected. The lesions usually do not cross the midline. Herpes zoster can also present with unique or atypical clinical manifestations, such as glioma, zoster sine herpete and bilateral herpes zoster, which can be a challenging diagnosis even for experienced physicians. We discuss the epidemiology, pathophysiology, diagnosis and management of herpes zoster, in both its typical and atypical presentations.
NASA Astrophysics Data System (ADS)
Raschke, E.; Kinne, S.
2013-05-01
Multi-year average radiative flux maps from three satellite data sets (CERES, ISCCP and GEWEX-SRB) are compared to each other and to typical values from global modeling (median values of results of 20 climate models of the 4th IPCC Assessment). Diversity assessments address radiative flux products at the top of the atmosphere (TOA) and at the surface, with particular attention to impacts by clouds. Using data from both the surface and the TOA, special attention is given to the vertical radiative flux divergence and to the infrared greenhouse effect, which are rarely shown in the literature.
Minimal but non-minimal inflation and electroweak symmetry breaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzola, Luca; Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu; Racioppi, Antonio
2016-10-07
We consider the most minimal scale-invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet that plays the role of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10^-3, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97, which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.
Social and Non-Social Cueing of Visuospatial Attention in Autism and Typical Development
ERIC Educational Resources Information Center
Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.
2011-01-01
Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n = 26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous,…
Separation of time scales in one-dimensional directed nucleation-growth processes
NASA Astrophysics Data System (ADS)
Pierobon, Paolo; Miné-Hattab, Judith; Cappello, Giovanni; Viovy, Jean-Louis; Lagomarsino, Marco Cosentino
2010-12-01
Proteins involved in homologous recombination such as RecA and hRad51 polymerize on single- and double-stranded DNA according to a nucleation-growth kinetics, which can be monitored by single-molecule in vitro assays. The basic models currently used to extract biochemical rates rely on ensemble averages and are typically based on an underlying process of bidirectional polymerization, in contrast with the often observed anisotropic polymerization of similar proteins. For these reasons, if one considers single-molecule experiments, the available models are useful to understand observations only in some regimes. In particular, recent experiments have highlighted a steplike polymerization kinetics. The classical model of one-dimensional nucleation growth, the Kolmogorov-Avrami-Mehl-Johnson (KAMJ) model, predicts the correct polymerization kinetics only in some regimes and fails to predict the steplike behavior. This work illustrates by simulations and analytical arguments the limitation of applicability of the KAMJ description and proposes a minimal model for the statistics of the steps based on the so-called stick-breaking stochastic process. We argue that this insight might be useful to extract information on the time and length scales involved in the polymerization kinetics.
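A minimal version of the stick-breaking process the authors propose for the step statistics can be sketched as follows. This is illustrative only: the Beta(1, alpha) break fractions and the value of `alpha` are conventional stick-breaking assumptions, not parameters taken from the paper.

```python
import numpy as np

def stick_breaking_segments(n, alpha, rng):
    """Break a unit 'stick' (the DNA substrate) into n segments by
    repeatedly snapping off a Beta(1, alpha) fraction of what remains,
    mimicking sequential nucleation of protein patches along the molecule."""
    remaining = 1.0
    segs = []
    for _ in range(n):
        frac = rng.beta(1.0, alpha)
        segs.append(remaining * frac)
        remaining *= 1.0 - frac
    return np.array(segs)

rng = np.random.default_rng(0)
segs = stick_breaking_segments(50, alpha=5.0, rng=rng)
```

The segment-length distribution produced this way is what one would fit against the observed step sizes in the single-molecule polymerization traces; larger `alpha` yields many small segments, smaller `alpha` a few large ones.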
Modelling Polymer Deformation during 3D Printing
NASA Astrophysics Data System (ADS)
McIlroy, Claire; Olmsted, Peter
Three-dimensional printing has the potential to transform manufacturing processes, yet improving the strength of printed parts, to equal that of traditionally-manufactured parts, remains an underlying issue. The fused deposition modelling technique involves melting a thermoplastic, followed by layer-by-layer extrusion to fabricate an object. The key to ensuring strength at the weld between layers is successful inter-diffusion. However, prior to welding, both the extrusion process and the cooling temperature profile can significantly deform the polymer micro-structure and, consequently, how well the polymers are able to "re-entangle" across the weld. In particular, polymer alignment in the flow can cause de-bonding of the layers and create defects. We have developed a simple model of the non-isothermal extrusion process to explore the effects that typical printing conditions and material rheology have on the conformation of a polymer melt. In particular, we incorporate both stretch and orientation using the Rolie-Poly constitutive equation to examine the melt structure as it flows through the nozzle, the subsequent alignment with the build plate and the resulting deformation due to the fixed nozzle height, which is typically less than the nozzle radius.
Transient analysis of a thermal storage unit involving a phase change material
NASA Technical Reports Server (NTRS)
Griggs, E. I.; Pitts, D. R.; Humphries, W. R.
1974-01-01
The transient response of a single cell of a typical phase change material type thermal capacitor has been modeled using numerical conductive heat transfer techniques. The cell consists of a base plate, an insulated top, and two vertical walls (fins) forming a two-dimensional cavity filled with a phase change material. Both explicit and implicit numerical formulations are outlined. A mixed explicit-implicit scheme, which treats the fin implicitly while treating the phase change material explicitly, is discussed. A banded-matrix algorithm is used to reduce computer storage requirements for the implicit approach while retaining a relatively fine grid. All formulations are presented in dimensionless form, thereby enabling application to geometrically similar problems. Typical parametric results are graphically presented for the case of melting with constant heat input to the base of the cell.
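The stability constraint that motivates mixed explicit-implicit schemes can be seen in a minimal sketch (a 1D dimensionless conduction problem, not the paper's two-dimensional phase-change cell; the explicit update is stable only for Fourier number Δt/Δx² ≤ 1/2):

```python
def explicit_heat_step(T, fo):
    """One explicit finite-difference step of the dimensionless 1D heat
    equation dT/dt = d2T/dx2 (interior nodes only; end temperatures fixed).
    Stable only for Fourier number fo = dt/dx**2 <= 0.5."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + fo * (T[i+1] - 2.0 * T[i] + T[i-1])
    return new

# Bar initially at 0 with both ends held at 1: interior relaxes toward 1
T = [1.0] + [0.0] * 9 + [1.0]
for _ in range(200):
    T = explicit_heat_step(T, fo=0.4)
print(T[5])  # midpoint approaches the steady value of 1.0
```

With fo > 0.5 the same loop oscillates and diverges, which is why implicit (unconditionally stable, but banded-matrix) treatment is attractive for the stiff part of a problem.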
Dark Matter "Collider" from Inelastic Boosted Dark Matter.
Kim, Doojin; Park, Jong-Chul; Shin, Seodong
2017-10-20
We propose a novel dark matter (DM) detection strategy for models with a nonminimal dark sector. The main ingredients in the underlying DM scenario are a boosted DM particle and a heavier dark sector state. The relativistic DM impinging on target material scatters inelastically into the heavier state, which subsequently decays into DM along with lighter states including visible (standard model) particles. The expected signal event therefore carries a visible signature from the secondary cascade process, in addition to the recoil of the target particle, distinguishing it from the typical neutrino signal, which lacks such a secondary signature. We then discuss various kinematic features, followed by DM detection prospects at large-volume neutrino detectors, within a model framework where a dark gauge boson mediates between the standard model particles and DM.
Dynamics of a durable commodity market involving trade at disequilibrium
NASA Astrophysics Data System (ADS)
Panchuk, A.; Puu, T.
2018-05-01
The present work considers a simple model of a durable commodity market involving two agents who trade stocks of two different types. Stock commodities, in contrast to flow commodities, remain on the market from period to period and, consequently, neither a unique demand function nor a unique supply function exists. We also set up exact conditions for trade at disequilibrium, an issue usually neglected though a fact of reality. The induced iterative system has an infinite number of fixed points and path-dependent dynamics. We show that a typical orbit is either attracted to one of the fixed points or eventually sticks at a no-trade point. In the latter case the stock distribution always remains the same while the price displays periodic or chaotic oscillations.
A Matter of Timing: Developmental Theories of Romantic Involvement and Psychosocial Adjustment
Furman, Wyndol; Collibee, Charlene
2014-01-01
The present study compared two theories of the association between romantic involvement and adjustment—a social timetable theory and a developmental task theory. We examined seven waves of longitudinal data on a community-based sample of 200 participants (M age Wave 1 = 15 years, 10 months). In each wave, multiple measures of substance use, externalizing symptoms, and internalizing symptoms were gathered, typically from multiple reporters. Multilevel modeling revealed that greater levels of romantic involvement in adolescence were associated with higher levels of substance use and externalizing symptoms, but became associated with lower levels in adulthood. Similarly, having a romantic partner was associated with greater levels of substance use, externalizing symptoms, and internalizing symptoms in adolescence, but was associated with lower levels in young adulthood. The findings were not consistent with a social timetable theory, which predicts that nonnormative involvement is associated with poor adjustment. Instead, the findings are consistent with a developmental task theory, which predicts that precocious romantic involvement undermines development and adaptation, but that when romantic involvement becomes a salient developmental task in adulthood, it is associated with positive adjustment. Discussion focuses on the processes that may underlie the changing nature of the association between romantic involvement and adjustment. PMID:24703413
Evaluating landfill aftercare strategies: A life cycle assessment approach.
Turner, David A; Beaven, Richard P; Woodman, Nick D
2017-05-01
This study investigates the potential impacts caused by the loss of active environmental control measures during the aftercare period of landfill management. A combined mechanistic solute flow model and life cycle assessment (LCA) approach was used to evaluate the potential impacts of leachate emissions over a 10,000-year time horizon. A continuum of control loss possibilities, occurring at different times and for different durations, was investigated for four different basic aftercare scenarios: a typical aftercare scenario involving a low permeability cap and three accelerated aftercare scenarios involving higher initial infiltration rates. Assuming a 'best case' where control is never lost, the largest potential impacts resulted from the typical aftercare scenario. The maximum difference between potential impacts from the 'best case' and the 'worst case', where control fails at the earliest possible point and is never reinstated, was only a fourfold increase. This highlights potential deficiencies in standard life cycle impact assessment practice, which are discussed. Nevertheless, the results show that the influence of active control loss on the potential impacts of landfilling varies considerably depending on the aftercare strategy used, and highlight the importance that leachate treatment efficiencies have upon impacts. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cardillo, Ramona; Menazza, Cristina; Mammarella, Irene C
2018-06-07
Visuospatial processing in autism spectrum disorder (ASD) without intellectual disability remains only partly understood. The aim of the present study was to investigate global versus local visuospatial processing in individuals with ASD, comparing them with typically developing (TD) controls in visuoconstructive and visuospatial memory tasks. Twenty-one participants with ASD without intellectual disability and 21 TD controls, matched for chronological age (M = 161.37 months, SD = 38.19), gender, and perceptual reasoning index, were tested. Participants were administered tasks assessing the visuoconstructive domain and involving fine motor skills, and visuospatial memory tasks in which visuospatial information had to be manipulated mentally. Using a mixed-effects model approach, our results showed different effects of local bias in the ASD group depending on the domain considered: a local approach emerged only for the visuoconstructive domain, in which fine motor skills were involved. These results suggest that the local bias typical of the cognitive profile of ASD without intellectual disability could be a property of specific cognitive domains rather than a central mechanism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Nickell, Joe
2000-03-01
Since the beginning of the modern UFO craze in 1947, an elaborate mythology has developed concerning alleged extraterrestrial visitations. ``Flying saucer" sightings (typically involving misperceptions of such mundane phenomena as meteors and research balloons) began to be accompanied in the 1950s by reports from ``contactees," persons who claimed to have had close encounters with, even to have been transported to distant planets by, UFO occupants. By the 1960s came reports of sporadic ``abductions" which have proliferated in correlation with media interest. (Indeed, by interaction between claimants and media the portrayal of aliens has evolved from a multiplicity of types into the rather standardized big-eyed humanoid model.) While evidence of alien contact has often been faked--as by spurious photos, ``crop circles," and the notorious ``Alien Autopsy" film--few alien abduction reports appear to be hoaxes. Most seem instead to come from sincere, sane individuals. Nevertheless, not one has been authenticated, and serious investigation shows that such claims can be explained as sleep-related phenomena (notably ``waking dreams"), hypnotic confabulation, and other psychological factors. As is typical of other mythologies, the alien myth involves supernormal beings that may interact with humans, and it purports to explain the workings of the universe and humanity's place within it.
LeBlanc, Shannon K; Taranath, Deepa; Morris, Scott; Barnett, Christopher P
2014-02-01
Colobomata are etiologically heterogeneous and may occur as an isolated defect or as a feature of a variety of single-gene disorders, chromosomal syndromes, or malformation syndromes. Although not classically associated with Marfan syndrome, colobomata have been described in several reports of Marfan syndrome, typically involving the lens and only rarely other ocular structures. We report a newborn boy presenting with coloboma of the iris, lens, retina, and optic disk who was subsequently diagnosed with Marfan syndrome. Marfan syndrome is a disorder of increased TGFβ signaling, and recent work in a mouse model indicates a role for TGFβ signaling in eye development and coloboma formation, suggesting a causal association between Marfan syndrome and coloboma. Crown Copyright © 2014. Published by Mosby, Inc. All rights reserved.
A Method for Direct Fabrication of a Lingual Splint for Management of Pediatric Mandibular Fractures
Davies, Sarah; Costello, Bernard J.
2013-01-01
Summary: Pediatric mandibular fractures have successfully been managed in various ways. The use of a lingual splint is one such option. The typical indirect method for acrylic lingual splint fabrication involves obtaining dental impressions. Dental models are produced from those impressions so that model surgery may be performed. The splint is then made on those models using resin powder and liquid monomer in a wet laboratory and transferred to the patient. Obvious limitations to this technique exist for both patient and operator. We present a technique for direct, intraoperative fabrication of a splint using commercially available light-cured material that avoids some of the shortcomings of the indirect method. Recommendations are made based on available material safety information. PMID:25289246
Strength tests for elite rowers: low- or high-repetition?
Lawton, Trent W; Cronin, John B; McGuigan, Michael R
2014-01-01
The purpose of this project was to evaluate the utility of low- and high-repetition maximum (RM) strength tests used to assess rowers. Twenty elite heavyweight males (age 23.7 ± 4.0 years) performed four tests (5 RM, 30 RM, 60 RM and 120 RM) using leg press and seated arm pulling exercises on a dynamometer. Each test was repeated on two further occasions, 3 and 7 days after the initial trial. Per cent typical error (within-participant variation) and intraclass correlation coefficients (ICCs) were calculated using log-transformed repeated-measures data. High-repetition tests (30 RM, 60 RM and 120 RM) involving seated arm pulling are not recommended for inclusion in an assessment battery, as they had unsatisfactory measurement precision (per cent typical error > 5% or ICC < 0.9). Conversely, low-repetition tests (5 RM) involving leg press and seated arm pulling could be used to assess elite rowers (per cent typical error ≤ 5% and ICC ≥ 0.9); however, only 5 RM leg pressing met the criteria (per cent typical error = 2.7%, ICC = 0.98) for research involving small samples (n = 20). In summary, low-repetition 5 RM strength testing offers greater utility for assessing rowers, as it can measure both upper- and lower-body strength; however, only the leg press exercise is recommended for research involving small squads of elite rowers.
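The per cent typical error statistic used here can be sketched from log-transformed repeated-measures data (the loads below are hypothetical; the computation follows the standard SD-of-differences/√2 definition, back-transformed to a percent):

```python
import math

def percent_typical_error(trial1, trial2):
    """Within-participant variation from two repeated trials: take
    differences of log-transformed scores, then TE = SD(diff)/sqrt(2),
    back-transformed to a percent (a coefficient of variation)."""
    diffs = [math.log(a) - math.log(b) for a, b in zip(trial1, trial2)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    te_log = sd / math.sqrt(2)
    return 100.0 * (math.exp(te_log) - 1.0)

# Hypothetical 5 RM leg-press loads (kg) from two test days
day1 = [240, 255, 230, 260, 245, 250]
day2 = [244, 252, 233, 258, 249, 247]
print(round(percent_typical_error(day1, day2), 1))
```

A value at or below the 5% threshold mentioned in the abstract would qualify a test for the assessment battery; the stricter research criterion also requires ICC ≥ 0.9.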
DOT National Transportation Integrated Search
1988-01-01
Operational monitoring situations, in contrast to typical laboratory vigilance tasks, generally involve more than just stimulus detection and recognition. They frequently involve complex multidimensional discriminations, interpretations of significan...
Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo
2009-01-01
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
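As a schematic illustration of how posterior predictive yield draws feed a premium rate (toy normal draws and a hypothetical 70% coverage level, not the paper's hierarchical spatio-temporal model):

```python
import random

def premium_rate(yield_draws, guarantee, price=1.0):
    """Actuarially fair premium per unit of liability: the expected
    indemnity (shortfall below the guaranteed yield) over posterior
    predictive draws, divided by the liability."""
    indemnities = [price * max(guarantee - y, 0.0) for y in yield_draws]
    liability = price * guarantee
    return sum(indemnities) / len(indemnities) / liability

# Hypothetical posterior predictive yield draws (kg/ha) for one county
random.seed(1)
draws = [random.gauss(3000, 600) for _ in range(20000)]
rate = premium_rate(draws, guarantee=0.7 * 3000)
print(rate)  # small, since the guarantee sits 1.5 SD below the mean yield
```

The paper's point is that the draws should come from a joint spatio-temporal posterior rather than an independently estimated yield distribution, so that parameter uncertainty propagates into the rate.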
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Comparison of Low-Thrust Control Laws for Application in Planetocentric Space
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.
2014-01-01
Recent interest at NASA for the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. Because such transfers involve flight times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near-optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.
Colorful Twisted Top Partners and Partnerium at the LHC
NASA Astrophysics Data System (ADS)
Kats, Yevgeny; McCullough, Matthew; Perez, Gilad; Soreq, Yotam; Thaler, Jesse
2017-06-01
In scenarios that stabilize the electroweak scale, the top quark is typically accompanied by partner particles. In this work, we demonstrate how extended stabilizing symmetries can yield scalar or fermionic top partners that transform as ordinary color triplets but carry exotic electric charges. We refer to these scenarios as "hypertwisted" since they involve modifications to hypercharge in the top sector. As proofs of principle, we construct two hypertwisted scenarios: a supersymmetric construction with spin-0 top partners, and a composite Higgs construction with spin-1/2 top partners. In both cases, the top partners are still phenomenologically compatible with the mass range motivated by weak-scale naturalness. The phenomenology of hypertwisted scenarios is diverse, since the lifetimes and decay modes of the top partners are model dependent. The novel coupling structure opens up search channels that do not typically arise in top-partner scenarios, such as pair production of top-plus-jet resonances. Furthermore, hypertwisted top partners are typically sufficiently long lived to form "top-partnerium" bound states that decay predominantly via annihilation, motivating searches for rare narrow resonances with diboson decay modes.
Unraveling ferulate role in suberin and periderm biology by reverse genetics
Serra, Olga; Figueras, Mercè; Franke, Rochus; Prat, Salome
2010-01-01
Plant cell walls are dramatically affected by suberin deposition, becoming an impermeable barrier to water and pathogens. Suberin is a complex layered heteropolymer that comprises both a poly(aliphatic) and a poly(aromatic) lignin-like domain. Current structural models for suberin attribute the crosslinking of aliphatic and aromatic domains within the typical lamellar ultrastructure of the polymer to esterified ferulate. BAHD feruloyl transferases involved in suberin biosynthesis have been recently characterized in Arabidopsis and potato (Solanum tuberosum). In defective mutants, suberin lacks most of the esterified ferulate but maintains the typical lamellar ultrastructure. However, suberized tissues display increased water permeability, despite exhibiting a lipid load similar to wild type. Therefore, the role of ferulate in suberin needs to be reconsidered. Moreover, silencing the feruloyl transferase in potato turns the typical smooth skin of cv. Desirée into a rough scabbed skin distinctive of Russet varieties and impairs the normal skin maturation that confers resistance to skinning. Concomitant with these changes, the skin of silenced potatoes shows an altered profile of soluble phenolics with the emergence of conjugated polyamines. PMID:20657184
A general regression framework for a secondary outcome in case-control studies.
Tchetgen Tchetgen, Eric J
2014-01-01
Modern case-control studies typically involve the collection of data on a large number of outcomes, often at considerable logistical and monetary expense. These data are of potentially great value to subsequent researchers, who, although not necessarily concerned with the disease that defined the case series in the original study, may want to use the available information for a regression analysis involving a secondary outcome. Because cases and controls are selected with unequal probability, regression analysis involving a secondary outcome generally must acknowledge the sampling design. In this paper, the author presents a new framework for the analysis of secondary outcomes in case-control studies. The approach is based on a careful re-parameterization of the conditional model for the secondary outcome given the case-control outcome and regression covariates, in terms of (a) the population regression of interest of the secondary outcome given covariates and (b) the population regression of the case-control outcome on covariates. The error distribution for the secondary outcome given covariates and case-control status is otherwise unrestricted. For a continuous outcome, the approach sometimes reduces to extending model (a) by including a residual of (b) as a covariate. However, the framework is general in the sense that models (a) and (b) can take any functional form, and the methodology allows for an identity, log or logit link function for model (a).
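The residual-inclusion idea for a continuous secondary outcome can be sketched as follows (a toy simulation under simple random sampling; it omits the unequal-probability case-control correction that the full framework provides, and the linear-probability fit for model (b) is a simplification of the allowed link functions):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a toy sample (all names and coefficients hypothetical) ---
n = 5000
x = rng.normal(size=n)                                      # covariate
d = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))    # case-control outcome
y = 1.0 + 0.5 * x + 0.7 * d + rng.normal(size=n)            # secondary outcome

# (b) regression of the case-control outcome on covariates
# (a linear-probability fit here; the framework allows e.g. a logit link)
Xb = np.column_stack([np.ones(n), x])
b_coef, *_ = np.linalg.lstsq(Xb, d.astype(float), rcond=None)
resid_b = d - Xb @ b_coef

# (a) extended by including the residual of (b) as a covariate; because the
# residual is orthogonal to the other regressors, the covariate coefficients
# are unchanged from the regression of y on the covariates alone
Xa = np.column_stack([np.ones(n), x, resid_b])
a_coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)
print(a_coef)
```

The sketch only shows the mechanics of residual inclusion; under true case-control sampling the fits for (a) and (b) must additionally acknowledge the biased sampling design, which is the paper's contribution.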
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot-fire test data and has been found to be in good agreement.
Comparative analysis of existing models for power-grid synchronization
NASA Astrophysics Data System (ADS)
Nishikawa, Takashi; Motter, Adilson E.
2015-01-01
The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
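The common second-order phase-oscillator form underlying the three models can be sketched numerically (a toy four-node network with illustrative parameters, not one of the paper's test systems):

```python
import math

def simulate(thetas, omegas, K, adj, D=0.5, M=1.0, dt=0.01, steps=5000):
    """Semi-implicit Euler integration of the damped, forced second-order
    phase-oscillator network:
        M * theta_i'' = omega_i - D * theta_i' + K * sum_j a_ij * sin(theta_j - theta_i)
    where omega_i plays the role of net injected power."""
    n = len(thetas)
    vel = [0.0] * n
    for _ in range(steps):
        acc = []
        for i in range(n):
            coupling = sum(adj[i][j] * math.sin(thetas[j] - thetas[i])
                           for j in range(n))
            acc.append((omegas[i] - D * vel[i] + K * coupling) / M)
        vel = [v + a * dt for v, a in zip(vel, acc)]
        thetas = [t + v * dt for t, v in zip(thetas, vel)]
    return thetas, vel

# Two generators and two loads on a ring: balanced power, strong coupling
adj = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
omegas = [1.0, -1.0, 1.0, -1.0]
thetas, vel = simulate([0.1, 0.0, -0.1, 0.0], omegas, K=2.0, adj=adj)
# After transients decay, all frequencies (vel) lock to a common value
print(vel)
```

Frequency synchronization here corresponds to all velocities converging to the same constant, the condition the abstract identifies as necessary for grid operation.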
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Father involvement: Identifying and predicting family members' shared and unique perceptions.
Dyer, W Justin; Day, Randal D; Harper, James M
2014-08-01
Father involvement research has typically not recognized that reports of involvement contain at least two components: one reflecting a view of father involvement that is broadly recognized in the family, and another reflecting each reporter's unique perceptions. Using a longitudinal sample of 302 families, this study provides a first examination of shared and unique views of father involvement (engagement and warmth) from the perspectives of fathers, children, and mothers. This study also identifies influences on these shared and unique perspectives. Father involvement reports were obtained when the child was 12 and 14 years old. Mother reports overlapped more with the shared view than father or child reports. This suggests the mother's view may be more in line with broadly recognized father involvement. Regarding antecedents, for fathers' unique view, a compensatory model partially explains results; that is, negative aspects of family life were positively associated with fathers' unique view. Children's unique view of engagement may partially reflect a sentiment override, with father antisocial behaviors being predictive. Mothers' unique view of engagement was predicted by father and mother work hours, and her unique view of warmth was predicted by depression and maternal gatekeeping. Taken together, the findings suggest that a far more nuanced view of father involvement should be considered.
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
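The predictive-performance comparison at the heart of cross-validation can be sketched generically (toy Gaussian models stand in for molecular clock and demographic models; the fold structure and held-out scoring carry over):

```python
import random, statistics, math

def cv_score(data, fit, loglik, k=5):
    """k-fold cross-validation: fit on k-1 folds, then sum the
    predictive log-likelihood over the held-out fold."""
    folds = [data[i::k] for i in range(k)]
    total = 0.0
    for i in range(k):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        params = fit(train)
        total += sum(loglik(x, params) for x in folds[i])
    return total

def fit_normal(xs):              # "relaxed" model: scale estimated
    return statistics.mean(xs), statistics.stdev(xs)

def fit_fixed(xs):               # "strict" model: scale fixed at 1
    return statistics.mean(xs), 1.0

def loglik_normal(x, p):
    mu, sd = p
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

random.seed(2)
data = [random.gauss(0, 3) for _ in range(200)]
# The model whose scale is estimated predicts held-out data better
print(cv_score(data, fit_normal, loglik_normal),
      cv_score(data, fit_fixed, loglik_normal))
```

Unlike marginal-likelihood estimation, this score involves no prior, which is why the abstract notes its robustness to improper priors.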
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lave, Matthew; Hayes, William; Pohl, Andrew
2015-02-02
We report an evaluation of the accuracy of combinations of models that estimate plane-of-array (POA) irradiance from measured global horizontal irradiance (GHI). This estimation involves two steps: 1) decomposition of GHI into direct and diffuse horizontal components and 2) transposition of direct and diffuse horizontal irradiance (DHI) to POA irradiance. Measured GHI and coincident measured POA irradiance from a variety of climates within the United States were used to evaluate combinations of decomposition and transposition models. A few locations also had DHI measurements, allowing for decoupled analysis of either the decomposition or the transposition models alone. Results suggest that decomposition models had mean bias differences (modeled versus measured) that vary with climate. Transposition model mean bias differences depended more on the model than the location. Lastly, when only GHI measurements were available and combinations of decomposition and transposition models were considered, the smallest mean bias differences were typically found for combinations which included the Hay/Davies transposition model.
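One widely used decomposition-transposition combination can be sketched as follows (an Erbs-type correlation and an isotropic-sky model with textbook coefficients; the study's model set may differ, and a Hay/Davies transposition would add a circumsolar term):

```python
import math

def erbs_diffuse_fraction(kt):
    """Diffuse fraction of GHI from the clearness index kt (Erbs-type correlation)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

def poa_isotropic(ghi, kt, zenith_deg, tilt_deg, aoi_deg, albedo=0.2):
    """Step 1: decompose GHI into diffuse (DHI) and direct normal (DNI).
       Step 2: transpose beam, isotropic sky, and ground-reflected
       components onto the tilted plane of array."""
    dhi = erbs_diffuse_fraction(kt) * ghi
    dni = (ghi - dhi) / math.cos(math.radians(zenith_deg))
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
    sky = dhi * (1 + math.cos(math.radians(tilt_deg))) / 2
    ground = ghi * albedo * (1 - math.cos(math.radians(tilt_deg))) / 2
    return beam + sky + ground

# Hypothetical clear mid-day sample: high sun, module tilted toward it
poa = poa_isotropic(ghi=800, kt=0.75, zenith_deg=30, tilt_deg=30, aoi_deg=10)
print(round(poa, 1))
```

Because each step has several competing published models, evaluating the combinations against measured POA data, as the study does, is what determines which pairing minimizes bias for a given climate.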
A hybrid life cycle inventory of nano-scale semiconductor manufacturing.
Krishnan, Nikhil; Boyd, Sarah; Somani, Ajay; Raoux, Sebastien; Clark, Daniel; Dornfeld, David
2008-04-15
The manufacturing of modern semiconductor devices involves a complex set of nanoscale fabrication processes that are energy and resource intensive, and generate significant waste. It is important to understand and reduce the environmental impacts of semiconductor manufacturing because these devices are ubiquitous components in electronics. Furthermore, the fabrication processes used in the semiconductor industry are finding increasing application in other products, such as microelectromechanical systems (MEMS), flat panel displays, and photovoltaics. In this work we develop a library of typical gate-to-gate materials and energy requirements, as well as emissions associated with a complete set of fabrication process models used in manufacturing a modern microprocessor. In addition, we evaluate upstream energy requirements associated with chemicals and materials using both existing process life cycle assessment (LCA) databases and an economic input-output (EIO) model. The result is a comprehensive data set and methodology that may be used to estimate and improve the environmental performance of a broad range of electronics and other emerging applications that involve nano and micro fabrication.
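The hybrid step of combining gate-to-gate process data with an EIO model can be sketched with a toy two-sector economy (all coefficients illustrative; the EIO part is the standard Leontief calculation x = (I - A)^-1 y):

```python
import numpy as np

# Hybrid LCI sketch: gate-to-gate process data supply the foreground,
# an economic input-output (EIO) model fills in upstream energy.
# All numbers below are illustrative, not measured values.

A = np.array([[0.1, 0.2],      # inter-sector requirements ($ per $ output),
              [0.3, 0.1]])     # e.g. a chemicals and an electricity sector
r = np.array([5.0, 12.0])      # direct energy intensity (MJ per $ output)

y = np.array([2.0, 1.0])       # the fab's purchases ($) as final demand
x = np.linalg.solve(np.eye(2) - A, y)    # total sector output required
upstream_energy = r @ x                  # MJ embodied in purchased inputs

process_energy = 40.0          # measured gate-to-gate fab energy (MJ)
total = process_energy + upstream_energy
print(total)
```

The hybrid approach thus captures supply-chain energy that a process-LCA boundary would truncate, at the cost of the coarser sector resolution of EIO data.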
Developmental heterochrony and the evolution of autistic perception, cognition and behavior.
Crespi, Bernard
2013-05-02
Autism is usually conceptualized as a disorder or disease that involves fundamentally abnormal neurodevelopment. In the present work, the hypothesis that a suite of core autism-related traits may commonly represent simple delays or non-completion of typical childhood developmental trajectories is evaluated. A comprehensive review of the literature indicates that, with regard to the four phenotypes of (1) restricted interests and repetitive behavior, (2) short-range and long-range structural and functional brain connectivity, (3) global and local visual perception and processing, and (4) the presence of absolute pitch, the differences between autistic individuals and typically developing individuals closely parallel the differences between younger and older children. The results of this study are concordant with a model of 'developmental heterochrony', and suggest that evolutionary extension of child development along the human lineage has potentiated and structured genetic risk for autism and the expression of autistic perception, cognition and behavior.
Simulation of Earthquake-Generated Sea-Surface Deformation
NASA Astrophysics Data System (ADS)
Vogl, Chris; Leveque, Randy
2016-11-01
Earthquake-generated tsunamis can carry a powerful, destructive force. One of the most well-known recent examples is the tsunami generated by the Tohoku earthquake, which was responsible for the nuclear disaster in Fukushima. Tsunami simulation and forecasting, a necessary element of emergency procedure planning and execution, is typically done using the shallow-water equations. A typical initial condition is obtained from the Okada solution for a homogeneous, elastic half-space. This work focuses on simulating earthquake-generated sea-surface deformations that are more true to the physics of the materials involved. In particular, a water layer is added on top of the half-space that models the seabed. Sea-surface deformations are then simulated using the Clawpack hyperbolic PDE package. Results from treating the water layer both as linearly elastic and as "nearly incompressible" are compared to the Okada solution.
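The conventional approach the paper improves on — transferring a static seafloor displacement directly to the sea surface and propagating it with the linear shallow-water equations — can be sketched in one dimension. The Gaussian uplift below stands in for an Okada-type coseismic solution; depth, domain size and amplitude are invented.

```python
import math

# 1D linear shallow-water sketch: a static, Okada-style seafloor uplift is
# transferred directly to the sea surface and propagated at speed sqrt(g*h).
# Depth, domain size and the Gaussian uplift are invented for illustration.
g, h = 9.81, 4000.0                # gravity (m/s^2), uniform ocean depth (m)
L, n = 400e3, 401                  # domain length (m), grid points
dx = L / (n - 1)
c = math.sqrt(g * h)               # long-wave speed, ~198 m/s
dt = 0.5 * dx / c                  # CFL-limited time step

# Initial sea-surface elevation: Gaussian hump standing in for the coseismic
# displacement an Okada solution would provide.
eta = [5.0 * math.exp(-(((i * dx - L / 2) / 20e3) ** 2)) for i in range(n)]
u = [0.0] * n                      # depth-averaged velocity

for _ in range(200):               # ~500 s of propagation
    for i in range(1, n - 1):      # momentum equation: du/dt = -g * deta/dx
        u[i] -= g * dt * (eta[i + 1] - eta[i - 1]) / (2 * dx)
    for i in range(1, n - 1):      # continuity: deta/dt = -h * du/dx
        eta[i] -= h * dt * (u[i + 1] - u[i - 1]) / (2 * dx)

print(f"initial 5.0 m hump splits into two travelling waves; max is now "
      f"{max(eta):.2f} m")
```

The hump splits into two outgoing waves of roughly half the initial amplitude, the behaviour the Okada-style initial condition assumes; the paper's point is that an elastic or nearly incompressible water layer modifies this transfer.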
Neural correlates of the implicit association test: evidence for semantic and emotional processing.
Williams, John K; Themanson, Jason R
2011-09-01
The implicit association test (IAT) has been widely used in social cognitive research over the past decade. Controversies have arisen over what cognitive processes are being tapped into using this task. While most models use behavioral (RT) results to support their claims, little research has examined neurocognitive correlates of these behavioral measures. The present study measured event-related brain potentials (ERPs) of participants while completing a gay-straight IAT in order to further understand the processes involved in a typical group bias IAT. Results indicated significantly smaller N400 amplitudes and significantly larger LPP amplitudes for compatible trials than for incompatible trials, suggesting that both the semantic and emotional congruence of stimuli paired together in an IAT trial contribute to the typical RT differences found, while no differences were present for earlier ERP components including the N2. These findings are discussed with respect to early and late processing in group bias IATs.
Mathematical models for the early detection and treatment of colorectal cancer.
Harper, P R; Jones, S K
2005-05-01
Colorectal cancer is a major cause of death for men and women in the Western world. When the cancer is detected through a patient's awareness of symptoms, it is typically at an advanced stage. It is possible to detect cancer at an early stage through screening, and the marked differences in survival between early and late stages provide the incentive for the primary prevention or early detection of colorectal cancer. This paper considers mathematical models for colorectal cancer screening together with models for the treatment of patients. Illustrative results demonstrate that detailed attention to the processes involved in disease, intervention and treatment enables us to combine data and expert knowledge from various sources. Thus a detailed operational model is a very useful tool in helping to make decisions about screening at national and local levels.
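A toy Monte Carlo version of such a screening model illustrates why earlier detection shifts the stage distribution. All rates here (stage progression, symptomatic presentation, test sensitivity) are invented for illustration and are not the paper's calibrated values.

```python
import random

# Toy Markov progression model of colorectal cancer screening. All rates are
# invented for illustration and are not the paper's calibrated values.
random.seed(0)
STAGES = ["A", "B", "C", "D"]           # Dukes stages, early -> advanced
P_PROGRESS = 0.4                        # yearly chance of advancing one stage
P_SYMPTOMS = {"A": 0.05, "B": 0.15, "C": 0.4, "D": 0.8}  # symptomatic detection
SENSITIVITY = 0.85                      # assumed annual screening sensitivity

def stage_at_detection(screened):
    """Simulate one patient from stage A until the cancer is detected."""
    s = 0
    while True:
        if screened and random.random() < SENSITIVITY:
            return STAGES[s]            # caught by the annual screen
        if random.random() < P_SYMPTOMS[STAGES[s]]:
            return STAGES[s]            # presents with symptoms
        if s < 3 and random.random() < P_PROGRESS:
            s += 1                      # undetected: may progress a stage

n = 20000
for flag, label in [(True, "screened"), (False, "unscreened")]:
    early = sum(stage_at_detection(flag) in "AB" for _ in range(n)) / n
    print(f"{label}: detected at an early stage (A/B) in {early:.0%} of runs")
```

Even with made-up numbers, the screened arm is detected overwhelmingly at early stages, which is the survival lever the paper's models quantify.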
Processing of energy materials in electromagnetic field
NASA Astrophysics Data System (ADS)
Rodzevich, A. P.; Kuzmina, L. V.; Gazenaur, E. G.; Krasheninin, V. I.
2015-09-01
This paper presents research results on the combined impact of mechanical stress and an electromagnetic field on the defect structure of energy materials. Silver azide, a model material in solid-state chemistry, was chosen as a typical energy material for the study. According to the experiments, the combined effect of a magnetic field and mechanical stress in silver azide crystals promotes dislocation multiplication, breakaway from stoppers, dislocation glide, and the generation of superlattice dislocations and micro-cracks. A method of mechanical and electric strengthening has been developed that involves changing the density of dislocations in whiskers.
Comment on 'General nonlocality in quantum fields' [J. Math. Phys. 49, 033513 (2008)]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Haijun
2010-05-15
In a recent paper [H.-J. Wang, J. Math. Phys. 49, 033513 (2008)] a complex-geometry model was proposed to interpret the electromagnetic interaction and the interaction between quarks when nonlocal effects are involved. In that theoretical frame, one can obtain from the metric matrix a determinant-form condition that qualitatively describes the typical characteristics of the aforementioned interactions. In this comment we attempt to extend this kind of qualitative description to the weak interaction by finding an appropriate metric tensor for it.
Left hemisphere regions are critical for language in the face of early left focal brain injury.
Raja Beharelle, Anjali; Dick, Anthony Steven; Josse, Goulven; Solodkin, Ana; Huttenlocher, Peter R; Levine, Susan C; Small, Steven L
2010-06-01
A predominant theory regarding early stroke and its effect on language development is that early left hemisphere lesions trigger compensatory processes that allow the right hemisphere to assume dominant language functions, and this is thought to underlie the near-normal language development observed after early stroke. To test this theory, we used functional magnetic resonance imaging to examine brain activity during category fluency in participants who had sustained pre- or perinatal left hemisphere stroke (n = 25) and in neurologically normal siblings (n = 27). In typically developing children, performance of a category fluency task elicits strong involvement of left frontal and lateral temporal regions and a lesser involvement of right hemisphere structures. In our cohort of atypically developing participants with early stroke, expressive and receptive language skills correlated with activity in the same left inferior frontal regions that support language processing in neurologically normal children. This was true independent of either the amount of brain injury or the extent to which the injury was located in classical cortical language processing areas. Participants with bilateral activation in left and right superior temporal-inferior parietal regions had better language function than those with either predominantly left- or right-sided unilateral activation. The advantage conferred by left inferior frontal and bilateral temporal involvement demonstrated in our study supports a strong predisposition for typical neural language organization, despite an intervening injury, and argues against models suggesting that the right hemisphere fully accommodates language function following early injury.
Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.
Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G
2016-05-01
In vehicle crashworthiness design optimization, detailed system evaluations capable of producing reliable results are typically achieved through high-order numerical computational (HNC) models such as dynamic finite element models, mesh-free models, etc. However, the application of these models, especially during optimization studies, is challenged by their high demand on computational resources, the conditional stability of the solution process, and a lack of knowledge of the viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in evaluating vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion, which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to a full-scale design-optimization problem involving a full vehicle model with greater structural detail requires more theoretical development.
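The 2DOF lumped-mass-spring idea can be sketched as two masses (front structure and passenger compartment) coupled by springs and integrated in time to read off the absorbable-energy split. All parameter values below are assumed for illustration; they are not taken from the paper.

```python
# 2DOF lumped-mass-spring frontal-impact sketch: m1 is the front structure
# against a rigid barrier, m2 the passenger compartment. All parameter values
# are assumed for illustration; they are not taken from the paper.
m1, m2 = 300.0, 1200.0            # masses (kg)
k1, k2 = 8.0e5, 2.0e6             # spring rates (N/m): crush zone, coupling
v0 = 15.0                         # initial impact speed (m/s)

dt, t_end = 1e-5, 0.2
x1 = x2 = 0.0                     # displacements toward the barrier (m)
v1 = v2 = v0
peak_d1 = peak_d2 = 0.0           # peak spring deflections

t = 0.0
while t < t_end:
    d1 = max(x1, 0.0)             # barrier spring carries compression only
    d2 = x2 - x1                  # relative deflection between the masses
    f1, f2 = k1 * d1, k2 * d2
    v1 += (-f1 + f2) / m1 * dt    # semi-implicit Euler integration
    v2 += -f2 / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
    peak_d1, peak_d2 = max(peak_d1, d1), max(peak_d2, abs(d2))
    t += dt

# Absorbable-energy split between crush zone and compartment coupling.
e1, e2 = 0.5 * k1 * peak_d1 ** 2, 0.5 * k2 * peak_d2 ** 2
print(f"peak energies: front {e1 / 1e3:.0f} kJ, coupling {e2 / 1e3:.0f} kJ")
```

Sweeping the stiffness (and hence the absorbable-energy) ratio in such an abstract model is cheap, which is what makes it useful for bracketing parameter ranges before any HNC run.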
Logical fallacies in animal model research.
Sjoberg, Espen A
2017-02-15
Animal models of human behavioural deficits involve conducting experiments on animals in the hope of gaining new knowledge that can be applied to humans. This paper aims to address risks, biases, and fallacies associated with drawing conclusions when conducting experiments on animals, with a focus on animal models of mental illness. Researchers using animal models are susceptible to a fallacy known as false analogy, where inferences based on assumptions of similarities between animals and humans can potentially lead to an incorrect conclusion. There is also a risk of false positive results when evaluating the validity of a putative animal model, particularly if the experiment is not conducted double-blind. It is further argued that animal model experiments are reconstructions of human experiments, and not replications per se, because the animals cannot follow instructions. This leads to an experimental setup that is altered to accommodate the animals, and typically involves a smaller sample size than a human experiment. Researchers on animal models of human behaviour should increase focus on mechanistic validity in order to ensure that the underlying causal mechanisms driving the behaviour are the same, as relying on face validity makes the model susceptible to logical fallacies and a higher risk of Type 1 errors. We discuss measures to reduce bias and risk of making logical fallacies in animal research, and provide a guideline that researchers can follow to increase the rigour of their experiments.
Roess, Deborah A.; Smith, Steven M. L.; Winter, Peter; Zhou, Jun; Dou, Ping; Baruah, Bharat; Trujillo, Alejandro M.; Levinger, Nancy E.; Yang, Xioda; Barisas, B. George; Crans, Debbie C.
2011-01-01
There is increasing evidence for the involvement of plasma membrane microdomains in insulin receptor function. Moreover, disruption of these structures, which are typically enriched in sphingomyelin and cholesterol, results in insulin resistance. Treatment strategies for insulin resistance include the use of vanadium compounds which have been shown in animal models to enhance insulin responsiveness. One possible mechanism for insulin-enhancing effects might involve direct effects of vanadium compounds on membrane lipid organization. These changes in lipid organization promote the partitioning of insulin receptors and other receptors into membrane microdomains where receptors are optimally functional. To explore this possibility, we have used several strategies involving vanadium complexes such as [VO2dipic]− (pyridin-2,6-dicarboxylatodioxovanadium(V)), decavanadate (V10O286−, V10), BMOV (bis(maltolato)oxovanadium(IV)) and [VO(saltris)]2 (2-salicylideniminato-2-(hydroxymethyl)-1,3-dihydroxypropane-oxovanadium(V)). Our strategies include an evaluation of interactions between vanadium-containing compounds and model lipid systems, an evaluation of the effects of vanadium compounds on lipid fluidity in erythrocyte membranes, and studies of the effects of vanadium-containing compounds on signaling events initiated by receptors known to use membrane microdomains as signaling platforms. PMID:18729092
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in gaining a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
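The core RSA mechanic — sample parameters, split the runs into behavioral and non-behavioral sets, and score each parameter by the distance between its two conditional distributions — can be sketched with a toy forward model. The model, threshold and sample size here are assumptions for illustration, not the paper's field case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for an expensive geoscientific simulator:
# the response depends strongly on a and weakly on b (assumed for illustration).
def forward(a, b):
    return a ** 2 + 0.1 * b

n = 2000
a = rng.uniform(0, 1, n)
b = rng.uniform(0, 1, n)
response = forward(a, b)

# RSA-style split: call the lowest 20% of responses "behavioral".
behavioral = response < np.quantile(response, 0.2)

def ks_distance(x, mask):
    """Max distance between empirical CDFs of x inside/outside the mask."""
    grid = np.sort(x)
    cdf_in = np.searchsorted(np.sort(x[mask]), grid, side="right") / mask.sum()
    cdf_out = np.searchsorted(np.sort(x[~mask]), grid, side="right") / (~mask).sum()
    return np.max(np.abs(cdf_in - cdf_out))

print(f"sensitivity of a: {ks_distance(a, behavioral):.2f}")
print(f"sensitivity of b: {ks_distance(b, behavioral):.2f}")
```

The influential parameter gets a large CDF separation and the weak one a small one; conditional effects, as the paper defines them, amount to repeating this split within slices of another parameter.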
Speciation in the Derrida-Higgs model with finite genomes and spatial populations
NASA Astrophysics Data System (ADS)
de Aguiar, Marcus A. M.
2017-02-01
The speciation model proposed by Derrida and Higgs demonstrated that a sexually reproducing population can split into different species in the absence of natural selection or any type of geographic isolation, provided that mating is assortative and the number of genes involved in the process is infinite. Here we revisit this model and simulate it for finite genomes, focusing on the question of how many genes it actually takes to trigger neutral sympatric speciation. We find that, for typical parameters used in the original model, it takes on the order of 10^5 genes. We compare the results with a similar spatially explicit model where about 100 genes suffice for speciation. We show that when the number of genes is small the species that emerge are strongly segregated in space. For a larger number of genes, on the other hand, the spatial structure of the population is less important and the species distributions overlap considerably.
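A minimal finite-genome version of the model's mating rule can be sketched as follows: individuals carry ±1 genomes, mating is allowed when the pairwise overlap exceeds a threshold, and species are the connected components of the resulting compatibility graph. Genome length, population size, mutation rate and threshold below are illustrative, far smaller than the 10^5-gene regime discussed.

```python
import numpy as np

# Minimal finite-genome Derrida-Higgs-style compatibility check. Genomes are
# +-1 strings; a pair can mate when the overlap q = (1/B) sum_k s_k t_k
# exceeds qmin. B, N, qmin and the mutation rate are illustrative only.
rng = np.random.default_rng(1)
B, N, qmin = 200, 60, 0.5

# Two founder genomes; each individual is a 5%-mutated copy of one founder.
ancestors = rng.choice([-1, 1], size=(2, B))
genomes = np.repeat(ancestors, N // 2, axis=0)
genomes = np.where(rng.random(genomes.shape) < 0.05, -genomes, genomes)

q = genomes @ genomes.T / B       # pairwise overlap matrix
compatible = q > qmin             # assortative-mating graph

def species_labels(adj):
    """Label connected components of the compatibility graph."""
    n = len(adj)
    label, current = [-1] * n, 0
    for start in range(n):
        if label[start] != -1:
            continue
        stack = [start]
        while stack:
            i = stack.pop()
            if label[i] != -1:
                continue
            label[i] = current
            stack.extend(j for j in range(n) if adj[i][j] and label[j] == -1)
        current += 1
    return label

labels = species_labels(compatible)
print(f"number of species: {len(set(labels))}")
```

Within-founder overlaps cluster near (1 − 2μ)² while cross-founder overlaps fluctuate around zero, so the threshold cleanly separates two species in this toy setting.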
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
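The QSTS idea itself — re-solving a load flow at every time step of a long series — can be sketched on a single feeder segment. The impedance, source voltage and load profile below are assumed for illustration; this shows only the repeated-load-flow loop, not the paper's segment-substitution method.

```python
import math

# Minimal QSTS sketch: one feeder segment with impedance R + jX serving a
# time-varying constant-power load, with the load flow re-solved every hour.
# Impedance, source voltage and the load shape are assumed for illustration.
R, X = 0.5, 0.3                    # segment impedance (ohms)
V_source = 2400.0                  # source voltage (V, line-to-neutral)

def bus_voltage(p_load, q_load, iters=20):
    """Fixed-point solve of V = Vs - I*Z for a constant-power load."""
    v = V_source
    for _ in range(iters):
        i = complex(p_load / v, -q_load / v)   # current drawn at voltage v
        v = abs(complex(V_source, 0.0) - i * complex(R, X))
    return v

# Hourly real-power profile in watts (illustrative daily shape).
profile = [200e3 + 150e3 * math.sin(math.pi * hour / 24) ** 2
           for hour in range(24)]
voltages = [bus_voltage(p, 0.3 * p) for p in profile]
low = min(voltages)
print(f"daily minimum voltage {low:.1f} V at hour {voltages.index(low)}")
```

With thousands of buses and sub-minute steps over a year, this inner solve runs millions of times, which is exactly the burden that motivates reducing the bus count before the time loop starts.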
A multitasking general executive for compound continuous tasks.
Salvucci, Dario D
2005-05-06
As cognitive architectures move to account for increasingly complex real-world tasks, one of the most pressing challenges involves understanding and modeling human multitasking. Although a number of existing models now perform multitasking in real-world scenarios, these models typically employ customized executives that schedule tasks for the particular domain but do not generalize easily to other domains. This article outlines a general executive for the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture that, given independent models of individual tasks, schedules and interleaves the models' behavior into integrated multitasking behavior. To demonstrate the power of the proposed approach, the article describes an application to the domain of driving, showing how the general executive can interleave component subtasks of the driving task (namely, control and monitoring) and interleave driving with in-vehicle secondary tasks (radio tuning and phone dialing). 2005 Lawrence Erlbaum Associates, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Zhang, Yichen; Djouadi, Seddik
In this paper, a model reference control based inertia emulation strategy is proposed. Desired inertia can be precisely emulated through this control strategy so that guaranteed performance is ensured. A typical frequency response model with parametric inertia is set as the reference model. A measurement at a specific location delivers the information of the disturbance acting on the diesel-wind system to the reference model. The objective is for the speed of the diesel-wind system to track the reference model. Since active power variation is dominantly governed by mechanical dynamics and modes, only mechanical dynamics and states, i.e., a swing-engine-governor system plus a reduced-order wind turbine generator, are involved in the feedback control design. The controller is implemented in a microgrid fed by a three-phase diesel-wind system. The results show that exact synthetic inertia is emulated, leading to guaranteed performance and safety bounds.
A Layered Decision Model for Cost-Effective System Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Huaqiang; Alves-Foss, James; Soule, Terry
System security involves decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defence strategies, and implementation of real-time defence tactics. Although choices made in each of these areas affect the others, existing decision models typically handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defence decisions result in cost-effective solutions. To address these problems, this paper introduces a Layered Decision Model (LDM) for use in deciding how to address defence decisions based on their cost-effectiveness. To validate the LDM and illustrate how it is used, we used simulation to test model rationality and applied the LDM to the design of system security for an e-commerce business case.
Spectra of conditionalization and typicality in the multiverse
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2016-02-01
An approach to testing theories describing a multiverse, which has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, would assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.
Dugas, Martin; Dugas-Breit, Susanne
2014-01-01
Design, execution and analysis of clinical studies involves several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a conversion tool for different representations of study data models was designed. It can convert between office formats, EDC and statistics formats. In addition, it supports semantic annotations, which enable precise definitions of data items. A reference implementation is available as the open source package ODMconverter at http://cran.r-project.org.
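The single-source-of-truth idea behind such a converter can be sketched as one item list rendered into three target representations. The item names, the ODM-like XML and the R read script below are simplified stand-ins for illustration, not ODMconverter's actual formats.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Single source of truth for CRF data items, rendered into three targets.
# Item names, the ODM-like XML and the R snippet are simplified stand-ins,
# not ODMconverter's actual formats.
items = [
    {"name": "sbp", "label": "Systolic blood pressure", "type": "integer"},
    {"name": "visit_date", "label": "Visit date", "type": "date"},
]

def to_office_csv(items):
    """Office view: a flat CSV table of the item metadata."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "label", "type"])
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

def to_edc_odm(items):
    """EDC view: a minimal ODM-like ItemDef list (not full CDISC ODM)."""
    root = ET.Element("ItemDefs")
    for item in items:
        ET.SubElement(root, "ItemDef", OID=f"I.{item['name']}",
                      Name=item["label"], DataType=item["type"])
    return ET.tostring(root, encoding="unicode")

def to_statistics_script(items):
    """Statistics view: an R line selecting the defined columns."""
    cols = ", ".join(f'"{item["name"]}"' for item in items)
    return f"study <- read.csv('study.csv')[, c({cols})]"

for render in (to_office_csv, to_edc_odm, to_statistics_script):
    print(render(items))
```

Because every representation is generated from the same item list, a CRF change made once propagates to all three stakeholder views, which is the manual-transcription error the abstract is guarding against.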
Port, Russell G; Gaetz, William; Bloy, Luke; Wang, Dah-Jyuu; Blaskey, Lisa; Kuschner, Emily S; Levy, Susan E; Brodkin, Edward S; Roberts, Timothy P L
2017-04-01
Autism spectrum disorder (ASD) is hypothesized to arise from imbalances between excitatory and inhibitory neurotransmission (E/I imbalance). Studies have demonstrated E/I imbalance in individuals with ASD and also in corresponding rodent models. One neural process thought to be reliant on E/I balance is gamma-band activity (Gamma), with support arising from observed correlations between motor, as well as visual, Gamma and underlying GABA concentrations in healthy adults. Additionally, decreased Gamma has been observed in ASD individuals and relevant animal models, though the direct relationship between Gamma and GABA concentrations in ASD remains unexplored. This study combined magnetoencephalography (MEG) and edited magnetic resonance spectroscopy (MRS) in 27 typically developing individuals (TD) and 30 individuals with ASD. Auditory cortex localized phase-locked Gamma was compared to resting Superior Temporal Gyrus relative cortical GABA concentrations for both children/adolescents and adults. Children/adolescents with ASD exhibited significantly decreased GABA+/Creatine (Cr) levels, though typical Gamma. Additionally, these children/adolescents lacked the typical maturation of GABA+/Cr concentrations and gamma-band coherence. Furthermore, children/adolescents with ASD failed to exhibit the typical GABA+/Cr to gamma-band coherence association. This altered coupling during childhood/adolescence may result in the Gamma decreases observed in adults with ASD. Therefore, individuals with ASD exhibit improper local neuronal circuitry maturation during a childhood/adolescence critical period, when GABA is involved in configuring such circuit functioning. Provocatively, a novel line of treatment with a critical time window is suggested: by increasing neural GABA levels in children/adolescents with ASD, proper local circuitry maturation may be restored, resulting in typical Gamma in adulthood. Autism Res 2017, 10: 593-607. 
© 2016 International Society for Autism Research, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Guidi, G.
2017-02-01
Conservation of Cultural Heritage is a key issue, and structural changes and damages can influence the mechanical behaviour of artefacts and buildings. Finite Element Methods (FEM) are widely used for mechanical analysis and for modelling stress behaviour. The typical workflow involves CAD 3D models made of Non-Uniform Rational B-Splines (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the mesh must first be converted to a volumetric one, and its density has to be reduced, since the computational complexity of a FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The proposed approach is based on a wise use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces, suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency made possible by the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, creating in the meantime a topology that is more favourable for the automatic NURBS conversion.
Mathematical Model of Solid Food Pasteurization by Ohmic Heating: Influence of Process Parameters
2014-01-01
Pasteurization of a solid food undergoing ohmic heating has been analysed by means of a mathematical model involving the simultaneous solution of: Laplace's equation, which describes the distribution of electrical potential within the food; the heat transfer equation, with a source term describing the dissipation of electrical energy; and the kinetics of inactivation of the microorganisms likely to be contaminating the product. In the model, thermophysical and electrical properties are used as functions of temperature. Previous work has shown the occurrence of heat loss from food products to the external environment during ohmic heating. The current model predicts that, when temperature gradients are established in the proximity of the outer ohmic cell surface, more cold areas are present at the junctions of the electrodes with the lateral sample surface. For these reasons, the colder external shells are the critical areas to be monitored, instead of internal points (typically the geometrical center) as in classical purely conductive heat transfer. The analysis is carried out in order to understand the influence of pasteurization process parameters on this temperature distribution. A successful model helps to improve understanding of these processing phenomena, which in turn will help to reduce the magnitude of the temperature differential within the product and ultimately provide a more uniformly pasteurized product. PMID:24574874
Mathematical model of solid food pasteurization by ohmic heating: influence of process parameters.
Marra, Francesco
2014-01-01
Pasteurization of a solid food undergoing ohmic heating has been analysed by means of a mathematical model involving the simultaneous solution of: Laplace's equation, which describes the distribution of electrical potential within the food; the heat transfer equation, with a source term describing the dissipation of electrical energy; and the kinetics of inactivation of the microorganisms likely to be contaminating the product. In the model, thermophysical and electrical properties are used as functions of temperature. Previous work has shown the occurrence of heat loss from food products to the external environment during ohmic heating. The current model predicts that, when temperature gradients are established in the proximity of the outer ohmic cell surface, more cold areas are present at the junctions of the electrodes with the lateral sample surface. For these reasons, the colder external shells are the critical areas to be monitored, instead of internal points (typically the geometrical center) as in classical purely conductive heat transfer. The analysis is carried out in order to understand the influence of pasteurization process parameters on this temperature distribution. A successful model helps to improve understanding of these processing phenomena, which in turn will help to reduce the magnitude of the temperature differential within the product and ultimately provide a more uniformly pasteurized product.
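The coupled structure of such a model — an electrical (Joule) source term feeding the heat equation, with convective loss at the outer surface — can be sketched in one dimension. Geometry, material properties and the conductivity law below are assumed for illustration; they are not the paper's values.

```python
import numpy as np

# 1D finite-difference sketch of ohmic heating: a food slab between two
# electrodes, Joule source sigma(T)*E^2, convective loss at the surfaces.
# Geometry, properties and the conductivity law are assumed for illustration.
L, n = 0.05, 51                    # slab thickness (m), grid points
dx = L / (n - 1)
V = 50.0                           # applied voltage (V)
rho, cp, k = 1050.0, 3600.0, 0.55  # density, heat capacity, conductivity
h, T_inf = 15.0, 20.0              # surface loss coefficient, ambient (C)

def sigma(T):
    # Electrical conductivity rising linearly with temperature (assumed law).
    return 0.5 * (1.0 + 0.02 * (T - 20.0))

T = np.full(n, 20.0)               # initial temperature field (C)
E = V / L                          # uniform field in the 1D approximation
dt = 0.4 * rho * cp * dx ** 2 / k  # stable explicit time step

for _ in range(int(300.0 / dt)):   # simulate 5 minutes
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    T_new = T + dt * (k * lap + sigma(T) * E ** 2) / (rho * cp)
    # Convective (Robin) boundaries via a ghost-node flux balance.
    T_new[0] = T_new[1] - dx * h * (T[0] - T_inf) / k
    T_new[-1] = T_new[-2] - dx * h * (T[-1] - T_inf) / k
    T = T_new

print(f"centre {T[n // 2]:.1f} C vs surface {T[0]:.1f} C: the cold spots "
      f"sit at the outer shell, not the geometric centre")
```

Because the Joule source heats the whole volume while only the surfaces lose heat, the coldest material ends up at the outer shell, mirroring the abstract's point that the critical monitoring points move from the centre to the boundary.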
Goodspeed, Kimberly; Newsom, Cassandra; Morris, Mary Ann; Powell, Craig; Evans, Patricia; Golla, Sailaja
2018-03-01
Pitt-Hopkins syndrome (PTHS) is a rare, genetic disorder caused by a molecular variant of TCF4 which is involved in embryologic neuronal differentiation. PTHS is characterized by syndromic facies, psychomotor delay, and intellectual disability. Other associated features include early-onset myopia, seizures, constipation, and hyperventilation-apneic spells. Many also meet criteria for autism spectrum disorder. Here the authors present a series of 23 PTHS patients with molecularly confirmed TCF4 variants and describe 3 unique individuals. The first carries a small deletion but does not exhibit the typical facial features nor the typical pattern of developmental delay. The second exhibits typical facial features, but has attained more advanced motor and verbal skills than other reported cases to date. The third displays typical features of PTHS, however inherited a large chromosomal duplication involving TCF4 from his unaffected father with somatic mosaicism. To the authors' knowledge, this is the first chromosomal duplication case reported to date.
Comparison between a typical and a simplified model for blast load-induced structural response
NASA Astrophysics Data System (ADS)
Abd-Elhamed, A.; Mahmoud, S.
2017-02-01
Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behaviour of structural elements under such extremely short-duration dynamic loads. Owing to the complexity of the typical blast pressure profile model, and in order to reduce modelling and computational effort, a simplified triangular model of the blast load profile is often used to analyse structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion under a blast load, modelled with either the typical or the simplified profile as the forcing term, has been derived. The two approaches considered herein have been compared using results from a simulation response analysis of a building structure under an applied blast load, and the error incurred by the simplified model relative to the typical one has been computed. In general, both the simplified and the typical models can reproduce the dynamic blast-induced response of building structures. However, the simplified model shows remarkably different response behaviour compared to the typical one, despite its simplicity and its use of only the positive phase to represent the explosive load. The prediction of dynamic system responses using the simplified model is not satisfactory, owing to the larger errors obtained compared to the responses computed with the typical model.
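The comparison can be sketched with a single-degree-of-freedom oscillator driven by the two load profiles: a Friedlander-type pulse that includes the suction phase versus a triangle that keeps only the positive phase. Structural and load parameters below are invented, and the response is integrated numerically rather than via the paper's closed-form solution.

```python
import math

# Single-degree-of-freedom structure under two blast-load profiles: a
# Friedlander-type pulse (with suction phase) versus the simplified triangle
# (positive phase only). All parameters are invented for illustration.
m, k = 1000.0, 4.0e6               # mass (kg), stiffness (N/m)
P0, td, b = 50.0e3, 0.02, 1.5      # peak force (N), positive phase (s), decay

def friedlander(t):
    # Decaying profile that goes negative for t > td (the suction phase).
    return P0 * (1 - t / td) * math.exp(-b * t / td) if t >= 0 else 0.0

def triangle(t):
    # Simplified profile: linear decay over the positive phase, then zero.
    return P0 * (1 - t / td) if 0.0 <= t <= td else 0.0

def peak_displacement(load, dt=1e-5, t_end=0.2):
    x = v = peak = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        v += (load(t) - k * x) / m * dt   # semi-implicit Euler
        x += v * dt
        peak = max(peak, abs(x))
    return peak

xf, xt = peak_displacement(friedlander), peak_displacement(triangle)
print(f"Friedlander {xf * 1e3:.2f} mm vs triangle {xt * 1e3:.2f} mm "
      f"({abs(xt - xf) / xf:.0%} discrepancy)")
```

In this impulsive regime the triangle carries a much larger net impulse than the Friedlander pulse, so it substantially overpredicts the peak displacement, consistent with the abstract's conclusion that the simplified model is unsatisfactory.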
From the trees to the forest: a review of radiative neutrino mass models
NASA Astrophysics Data System (ADS)
Cai, Yi; Herrero García, Juan; Schmidt, Michael A.; Vicente, Avelino; Volkas, Raymond R.
2017-12-01
A plausible explanation for the lightness of neutrino masses is that neutrinos are massless at tree level, with their mass (typically Majorana) being generated radiatively at one or more loops. The new couplings, together with the suppression coming from the loop factors, imply that the new degrees of freedom cannot be too heavy (they are typically at the TeV scale). Therefore, in these models there are no large mass hierarchies, and they can be tested using different searches, making their detailed phenomenological study very appealing. In particular, the new particles can be searched for at colliders and generically induce signals in lepton-flavor and lepton-number violating processes (in the case of Majorana neutrinos), which are not independent of correctly reproducing the neutrino masses and mixings. The main focus of the review is on Majorana neutrinos. We order the allowed theory space from three different perspectives: (i) using an effective operator approach to lepton number violation, (ii) by the number of loops at which the Weinberg operator is generated, and (iii) within a given loop order, by the possible irreducible topologies. We also discuss in more detail some popular radiative models which involve qualitatively different features, revisiting their most important phenomenological implications. Finally, we list some promising avenues to pursue.
Reducing the fine-tuning of gauge-mediated SUSY breaking
NASA Astrophysics Data System (ADS)
Casas, J. Alberto; Moreno, Jesús M.; Robles, Sandra; Rolbiecki, Krzysztof
2016-08-01
Despite their appealing features, models with gauge-mediated supersymmetry breaking (GMSB) typically present a high degree of fine-tuning, due to the initial absence of the top trilinear scalar coupling, A_t=0. In this paper, we carefully evaluate such a tuning, showing that it is worse than one per mil in the minimal model. Then, we examine some existing proposals to generate an A_t ≠ 0 term in this context. We find that, although the stops can be made lighter, usually the tuning does not improve (it may even be worse), with some exceptions, which involve the generation of A_t at one loop or at tree level. We examine both possibilities and propose a conceptually simplified version of the latter, which is arguably the optimal GMSB setup (with minimal matter content) as far as fine-tuning is concerned. The resulting fine-tuning is better than one per mil, still severe but similar to that of other minimal supersymmetric standard model constructions. We also explore the so-called "little A_t^2/m^2 problem", i.e. the fact that a large A_t-term is normally accompanied by a similar or larger sfermion mass, which typically implies an increase in the fine-tuning. Finally, we find the version of GMSB for which this ratio is optimized, which, nevertheless, does not minimize the fine-tuning.
Mathematical and Computational Modeling in Complex Biological Systems
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological processes and molecular functions involved in cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies push systems biology toward more precise models of complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze in depth their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. In conclusion, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558
An empirical study of race times in recreational endurance runners.
Vickers, Andrew J; Vertosick, Emily A
2016-01-01
Studies of endurance running have typically involved elite athletes, small sample sizes and measures that require special expertise or equipment. We examined factors associated with race performance and explored methods for race time prediction using information routinely available to a recreational runner. An Internet survey was used to collect data from recreational endurance runners (N = 2303). The cohort was split 2:1 into a training set and validation set to create models to predict race time. Sex, age, BMI and race training were associated with mean race velocity for all race distances. The difference in velocity between males and females decreased with increasing distance. Tempo runs were more strongly associated with velocity for shorter distances, while typical weekly training mileage and interval training had similar associations with velocity for all race distances. The commonly used Riegel formula for race time prediction was well-calibrated for races up to a half-marathon, but dramatically underestimated marathon time, giving times at least 10 min too fast for half of runners. We built two models to predict marathon time. The mean squared error for Riegel was 381 compared to 228 (model based on one prior race) and 208 (model based on two prior races). Our findings can be used to inform race training and to provide more accurate race time predictions for better pacing.
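The Riegel formula evaluated in this study predicts a time at distance D2 from a known time at D1 via a power law with exponent 1.06. A small sketch (times in minutes, distances in km; the 1:45 half-marathon example is invented):

```python
def riegel_time(t1, d1, d2, exponent=1.06):
    """Riegel's endurance formula: T2 = T1 * (D2 / D1) ** 1.06."""
    return t1 * (d2 / d1) ** exponent

# A 1:45 half-marathon (105 min) projected to the full marathon:
marathon_min = riegel_time(105.0, 21.0975, 42.195)
```

Consistent with the abstract, such a projection (about 3 h 39 min here) tends to be optimistic for the marathon, often by 10 minutes or more for recreational runners.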
Involving Teachers in Charter School Governance: A Guide for State Policymakers
ERIC Educational Resources Information Center
Sam, Cecilia
2008-01-01
This guide for state policymakers examines teacher involvement in charter school governance. Teacher involvement is defined to include the gamut of decision-making roles not typically afforded teachers in traditional public schools, including founding schools, serving on governing boards, and engaging in site-based collective bargaining. Different…
ERIC Educational Resources Information Center
Gilbert, George L., Ed.
1985-01-01
Background information, procedures, and typical results obtained are provided for two demonstrations. The first involves the colorful complexes of copper(II). The second involves reverse-phase separation of Food, Drug, and Cosmetic (FD & C) dyes using a solvent gradient. (JN)
Zoccolotti, Pierluigi; De Luca, Maria; Marinelli, Chiara V.; Spinelli, Donatella
2014-01-01
This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and the integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and the integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37 to 52% of the explained variance) when we included the discrete digit naming variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding among the dyslexic children. The analyses of data from both groups of children were replicated using patches of colors as stimuli (in both the RAN task and the discrete naming task), obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency using basic processes, such as orthographic decoding and the integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The validity of using proximal vs. distal causes to predict reading fluency is discussed.
PMID:25477856
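Commonality analysis for two predictors, as used in the study above, partitions the full model's R² into two unique components and a common component computed from the R² values of the one-predictor submodels; a negative common component is exactly the suppression effect the authors describe. A minimal sketch on synthetic data (the data-generating numbers are invented):

```python
import numpy as np

def r2(y, *xs):
    """R^2 of an OLS fit of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

def commonality(y, x1, x2):
    """Unique/common partition of R^2 for a two-predictor model."""
    r_full, r1, r2_ = r2(y, x1, x2), r2(y, x1), r2(y, x2)
    return {"unique_x1": r_full - r2_,
            "unique_x2": r_full - r1,
            "common": r1 + r2_ - r_full,
            "total": r_full}

rng = np.random.default_rng(0)
x1 = rng.standard_normal(500)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(500)   # correlated predictors
y = x1 + x2 + 0.5 * rng.standard_normal(500)
parts = commonality(y, x1, x2)
```

By construction the three components always sum to the full-model R²; the sign and size of the common component is what distinguishes shared variance from suppression.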
Estrogenic control of behavioral sex change in the bluehead wrasse, Thalassoma bifasciatum.
Marsh-Hunkin, K Erica; Heinz, Heather M; Hawkins, M Beth; Godwin, John
2013-12-01
Estrogens activate male-typical sexual behavior in several mammalian and avian models. Estrogen signaling also appears critical in the control of sex change in some fishes, in which it may instead be decreases in estradiol levels that permit the development of male-typical behaviors. The bluehead wrasse is a protogynous hermaphrodite that exhibits rapid increases in aggressive and male-typical courtship behavior as females undergo sex change. Removal of the ovaries does not prevent these changes. In two field experiments involving gonadally intact and gonadectomized females, estradiol (E2) implants prevented behavioral sex change in large females that were made the largest members of their social groups through removal of more dominant fish. In contrast, cholesterol-implanted control females showed full behavioral sex change, along with a higher frequency of both aggressive interactions and male-typical courtship displays than occurred in E2-implanted animals. To assess potential neural correlates of these behavioral effects of E2, we evaluated the abundance of aromatase mRNA using in situ hybridization. Aromatase mRNA was more abundant in the POA of E2-implanted females than in cholesterol-implanted controls among gonadally intact females. The lack of behavioral sex change coupled with increased levels of aromatase mRNA is consistent with an inhibitory role for E2, likely of neural origin, in regulating socially controlled sex change.
241Am Ingrowth and Its Effect on Internal Dose
Konzen, Kevin
2016-07-01
Generally, plutonium has been manufactured to support commercial and military applications involving heat sources, weapons and reactor fuel. This work focuses on three typical plutonium mixtures, while observing the potential of 241Am ingrowth and its effect on internal dose. The term “ingrowth” is used to describe 241Am production due solely to the decay of 241Pu as part of a plutonium mixture, where it is initially absent or present in a smaller quantity. Dose calculation models do not account for 241Am ingrowth unless the 241Pu quantity is specified. This work suggested that 241Am ingrowth be considered in bioassay analysis when there is a potential of a 10% increase to the individual’s committed effective dose. It was determined that plutonium fuel mixtures, initially absent of 241Am, would likely exceed 10% for typical reactor grade fuel aged less than 30 years; however, heat source grade and aged weapons grade fuel would normally fall below this threshold. In conclusion, although this work addresses typical plutonium mixtures following separation, it may be extended to irradiated commercial uranium fuel and is expected to be a concern in the recycling of spent fuel.
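The ingrowth itself follows the two-member Bateman equation: starting from pure 241Pu (half-life about 14.35 y), the 241Am (half-life about 432.2 y) population rises, peaks after roughly seven decades, and then decays away. A sketch with textbook half-lives, taking the 241Pu beta-decay branching fraction as 1:

```python
import numpy as np

T_HALF_PU241 = 14.35    # years
T_HALF_AM241 = 432.2    # years

def am241_ingrowth(t, n_pu0=1.0):
    """241Am atoms present at time t (years) from an initially pure
    241Pu stock of n_pu0 atoms (Bateman solution, 2-member chain)."""
    l1 = np.log(2.0) / T_HALF_PU241      # 241Pu decay constant
    l2 = np.log(2.0) / T_HALF_AM241      # 241Am decay constant
    return n_pu0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

t = np.linspace(0.0, 200.0, 4001)
t_peak = float(t[np.argmax(am241_ingrowth(t))])   # ingrowth maximum, ~73 y
```

The early, steep part of this curve is why mixtures aged less than about 30 years are the ones most likely to cross the 10% dose-increase threshold discussed above.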
Towards a Simple Constitutive Model for Bread Dough
NASA Astrophysics Data System (ADS)
Tanner, Roger I.
2008-07-01
Wheat flour dough is an example of a soft solid material consisting of a gluten (rubbery) network with starch particles as a filler. The volume fraction of the starch filler is high, typically 60%. A computer-friendly constitutive model has been lacking for this type of material, and here we report on progress towards finding such a model. The model must describe the response to small strains, simple shearing starting from rest, simple elongation, biaxial straining, recoil and various other transient flows. A viscoelastic Lodge-type model involving a damage function, which depends on strain from an initial reference state, fits the given data well, and it is also able to predict the thickness at exit from dough sheeting, which has been a long-standing unsolved puzzle. The model also shows an apparent rate-dependent yield stress, although no explicit yield stress is built into the model. This behaviour agrees with the early (1934) observations of Schofield and Scott Blair on dough recoil after unloading.
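A single-mode version of such a model illustrates the idea: in start-up of steady shear a Lodge (upper-convected Maxwell) liquid gives stress growth η·γ̇·(1 − e^(−t/λ)), and a damage function of accumulated strain scales the stress down. The exponential damage form and all parameter values below are invented for illustration; they are not the authors' fitted model.

```python
import numpy as np

def lodge_startup_shear(t, gamma_dot, G=1.0, lam=1.0, k_damage=0.0):
    """Shear stress in start-up of steady shear for a single-mode Lodge
    (upper-convected Maxwell) liquid, scaled by a hypothetical damage
    function f(strain) = exp(-k_damage * strain), strain = gamma_dot * t."""
    t = np.asarray(t, float)
    undamaged = G * lam * gamma_dot * (1.0 - np.exp(-t / lam))
    return np.exp(-k_damage * gamma_dot * t) * undamaged
```

With k_damage = 0 the stress climbs monotonically to the steady value G·λ·γ̇; a nonzero damage coefficient produces the stress overshoot and softening qualitatively associated with network rupture in dough.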
Schedule-induced polydipsia: a rat model of obsessive-compulsive disorder.
Platt, Brian; Beyer, Chad E; Schechter, Lee E; Rosenzweig-Lipson, Sharon
2008-04-01
Obsessive-compulsive disorder (OCD) is difficult to model in animals due to the involvement of both mental (obsessions) and physical (compulsions) symptoms. Due to limitations of using animals to evaluate obsessions, OCD models are limited to evaluation of the compulsive and repetitive behaviors of animals. Of these, models of adjunctive behaviors offer the most value in regard to predicting efficacy of anti-OCD drugs in the clinic. Adjunctive behaviors are those that are maintained indirectly by the variables that control another behavior, rather than directly by their own typical controlling variables. Schedule-induced polydipsia (SIP) is an adjunctive model in which rats exhibit exaggerated drinking behavior (polydipsia) when presented with food pellets under a fixed-time schedule. The polydipsic response is an excessive manifestation of a normal behavior (drinking), providing face validity to the model. Furthermore, clinically effective drugs for the treatment of OCD decrease SIP. This protocol describes a rat SIP model of OCD and provides preclinical data for drugs that decrease polydipsia and are clinically effective in the treatment of OCD.
Levine, Dani; Strother-Garcia, Kristina; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy
2016-02-01
Language development is a multifaceted, dynamic process involving the discovery of complex patterns and the refinement of native language competencies in the context of communicative interactions. This process is already advanced by the end of the first year of life for hearing children, but prelingually deaf children who initially lack a language model may miss critical experiences during this early window. The purpose of this review is twofold. First, we examine the published literature on language development during the first 12 months in typically developing children. Second, we use this literature to inform our understanding of the language outcomes of prelingually deaf children who receive cochlear implants (CIs), and therefore language input, either before or after the first year. During the first 12 months, typically developing infants exhibit advances in speech segmentation, word learning, syntax acquisition, and communication, both verbal and nonverbal. Infants and their caregivers co-construct a communication foundation during this time, supporting continued language growth. The language outcomes of hearing children are robustly predicted by their experiences and acquired competencies during the first year; yet these predictive links are absent among prelingually deaf infants lacking a language model (i.e., those without exposure to sign). For deaf infants who receive a CI, implantation timing is crucial. Children receiving CIs before 12 months frequently catch up with their typically developing peers, whereas those receiving CIs later do not. Explanations for the language difficulties of late-implanted children are discussed.
Zhai, Yi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; He, Yuanqing; Chang, Shengjiang
2014-01-01
An achromatic element eliminating only longitudinal chromatic aberration (LCA) while maintaining transverse chromatic aberration (TCA) is established for the eye model, which involves the angle formed by the visual and optical axes. To investigate the impacts of higher-order aberrations on vision, actual data on the higher-order aberrations of human eyes at three typical levels are introduced into the eye model along the visual axis. Moreover, three kinds of individual eye models are established to investigate the respective impacts on vision of higher-order aberrations, chromatic aberration (LCA+TCA), LCA and TCA under the photopic condition. Results show that for most human eyes, the impact of chromatic aberration on vision is much stronger than that of higher-order aberrations, and the impact of LCA within chromatic aberration dominates. The impact of TCA is approximately equal to that of normal-level higher-order aberrations and can be ignored when LCA exists.
Models for financing the regulation of pharmaceutical promotion.
Lexchin, Joel
2012-07-11
Pharmaceutical companies spend huge sums promoting their products whereas regulation of promotional activities is typically underfinanced. Any option for financing the monitoring and regulation of promotion should adhere to three basic principles: stability, predictability and lack of (perverse) ties between the level of financing and performance. This paper explores the strengths and weaknesses of six different models. All six models considered here have positive and negative features and none may necessarily be ideal in any particular country. Different countries may choose to utilize a combination of two or more of these models in order to raise sufficient revenue. Financing of regulation of drug promotion should more than pay for itself through the prevention of unnecessary drug costs and the avoidance of adverse health effects due to inappropriate prescribing. However, it involves an initial outlay of money that is currently not being spent, and many national governments, in both rich and poor countries, are unwilling to incur extra costs.
Measurements and models of CO2 and CH4 Flux in the Baltimore/Washington area.
NASA Astrophysics Data System (ADS)
Dickerson, R. R.; Ren, X.; Salawitch, R. J.; Ahn, D.; Karion, A.; Shepson, P. B.; Whetstone, J. R.; Martin, C.
2017-12-01
Direct measurements of concentrations of pollutants such as CO2 and CH4 can be combined with wind fields to determine the flux of these species and to evaluate emissions inventories or models. The mass balance approach, assuming linear flow into and out of a volume set over a city, works best where wind fields are simplest. Over typical American east coast cities, upwind sources and complex circulation (e.g., the sea breeze) complicate such analyses. We will present findings from a coupled measurement and modeling project involving a network of surface-based tower measurements, aircraft observations, and remote sensing that constrain model calculations. Summer and winter scenarios are contrasted, and the results help evaluate the emissions of short-lived pollutants. Determinations are compared to several emissions inventories and are being used to help States evaluate plans for pollution control.
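In its simplest box form, the mass-balance approach described here reduces to: emission rate = (downwind minus upwind concentration) × wind speed × area of the downwind flux plane. A sketch with illustrative round numbers, not values from this study:

```python
def box_emission_rate(enhancement, wind_speed, mixing_height, plume_width):
    """City-scale mass balance for a well-mixed box.
    enhancement: downwind-minus-upwind concentration (mol/m^3)
    wind_speed (m/s), mixing_height (m), plume_width (m) -> mol/s."""
    return enhancement * wind_speed * mixing_height * plume_width

# A hypothetical 20 ppb CH4 enhancement, with surface air at ~42.3 mol/m^3:
dc = 20e-9 * 42.3                               # mol CH4 per m^3
q = box_emission_rate(dc, 5.0, 1000.0, 50e3)    # roughly 210 mol/s
```

The abstract's point is that this clean picture breaks down when upwind sources or sea-breeze recirculation violate the linear-flow assumption, which is why the study couples the observations to model calculations.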
USDA-ARS?s Scientific Manuscript database
Introduction: Detection of foodborne pathogens typically involves microbiological enrichment with subsequent isolation and identification of a pure culture. This is typically followed by strain typing, which provides information critical to outbreak and source investigations. In the early 1990’s pul...
ERIC Educational Resources Information Center
Wand, Sean; Thermos, Adam C.
1998-01-01
Explains the issues to consider before a college decides to purchase a card-access system. The benefits of automation, questions involving implementation, the criteria for technology selection, what typical card technology involves, privacy concerns, and the placement of card readers are discussed. (GR)
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
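The two surrogate types can be contrasted on a one-dimensional multimodal function. This is a simplified stand-in: the kriging model is replaced by a plain Gaussian radial-basis interpolator (kriging with fixed hyperparameters and no trend function), and the test function is made up.

```python
import numpy as np

def rbf_interpolator(x, y, length=0.5):
    """Gaussian-kernel interpolator (a fixed-hyperparameter stand-in for
    kriging): passes exactly through the training data."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    w = np.linalg.solve(k(x, x) + 1e-10 * np.eye(len(x)), y)
    return lambda xq: k(np.atleast_1d(np.asarray(xq, float)), x) @ w

x = np.linspace(0.0, 3.0, 8)
y = np.sin(3.0 * x)                  # several local extrema on [0, 3]
poly = np.polyfit(x, y, 2)           # least-squares quadratic surrogate
krig = rbf_interpolator(x, y)
poly_err = float(np.max(np.abs(np.polyval(poly, x) - y)))
krig_err = float(np.max(np.abs(krig(x) - y)))
```

The interpolator reproduces the multimodal samples essentially exactly, while the quadratic cannot, illustrating the accuracy trade-off the study quantifies; the price, as noted above, is the linear solve and kernel tuning the polynomial model avoids.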
NASA Technical Reports Server (NTRS)
Wohlrab, R.
1983-01-01
Instabilities in turbine operation can be caused by forces which are produced in connection with motions involving the oil film in the bearings. An experimental investigation of the characteristics of such forces in the case of three typical steam turbine stages is conducted, taking into account the effect of various parameters. Supplementary kinetic tests are carried out to obtain an estimate of the flow forces which are proportional to the velocity. The measurements are based on a theoretical study of the damping characteristics of a vibrational model. A computational analysis of the effect of the measured fluid forces on the stability characteristics of a simple rotor model is also conducted.
An Organization's Extended (Soft) Competencies Model
NASA Astrophysics Data System (ADS)
Rosas, João; Macedo, Patrícia; Camarinha-Matos, Luis M.
One of the steps usually undertaken in partnership formation is the assessment of organizations’ competencies. Competencies of a functional or technical nature, which provide specific outcomes, can be considered hard competencies. Yet the very act of collaboration has its own specific requirements, for which the involved organizations must be able to exercise another type of competency, one that affects their own performance and the success of the partnership. These competencies are more behavioral in nature and can be named soft competencies. This research aims at addressing the effects of the soft competencies on the performance of the hard ones. An extended competencies model is thus proposed, allowing the construction of adjusted competencies profiles, in which the competency levels are adjusted dynamically according to the requirements of collaboration opportunities.
Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods
NASA Technical Reports Server (NTRS)
Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon
2010-01-01
A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution into a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
Quantifying T Lymphocyte Turnover
De Boer, Rob J.; Perelson, Alan S.
2013-01-01
Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men. PMID:23313150
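As one concrete example of the modeling involved, BrdU labeling data are often first summarized with a one-compartment sketch: a homogeneous population at steady state with per-cell turnover rate d accrues label as 1 − e^(−d·t) and loses it exponentially after label withdrawal. This is the simplest possible model, stated here only for illustration; the analyses reviewed above use richer formulations (kinetic heterogeneity, explicit division and death terms).

```python
import numpy as np

def brdu_labeled_fraction(t, d, t_stop):
    """Labeled fraction under a homogeneous steady-state model:
    uptake at turnover rate d (per day) while BrdU is given (t <= t_stop),
    exponential loss of labeled cells afterwards."""
    t = np.asarray(t, float)
    f_up = 1.0 - np.exp(-d * t)                 # labeling phase
    f_stop = 1.0 - np.exp(-d * t_stop)          # fraction at withdrawal
    f_down = f_stop * np.exp(-d * (t - t_stop)) # delabeling phase
    return np.where(t <= t_stop, f_up, f_down)

# Hypothetical experiment: d = 0.05/day, 14-day labeling window.
f = brdu_labeled_fraction(np.array([0.0, 14.0, 60.0]), 0.05, 14.0)
```

Fitting d to up- and down-labeling curves separately, and finding that the two estimates disagree, is precisely the kind of discrepancy that motivates the multi-compartment models discussed in the review.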
An fMRI Study of Parietal Cortex Involvement in the Visual Guidance of Locomotion
ERIC Educational Resources Information Center
Billington, Jac; Field, David T.; Wilkie, Richard M.; Wann, John P.
2010-01-01
Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the "far road" and "near road" mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants…
A rapid boundary integral equation technique for protein electrostatics
NASA Astrophysics Data System (ADS)
Grandison, Scott; Penfold, Robert; Vanden-Broeck, Jean-Marc
2007-06-01
A new boundary integral formulation is proposed for the solution of electrostatic field problems involving piecewise uniform dielectric continua. Direct Coulomb contributions to the total potential are treated exactly and Green's theorem is applied only to the residual reaction field generated by surface polarisation charge induced at dielectric boundaries. The implementation shows significantly improved numerical stability over alternative schemes involving the total field or its surface normal derivatives. Although strictly respecting the electrostatic boundary conditions, the partitioned scheme does introduce a jump artefact at the interface. Comparison against analytic results in canonical geometries, however, demonstrates that simple interpolation near the boundary is a cheap and effective way to circumvent this characteristic in typical applications. The new scheme is tested in a naive model to successfully predict the ground state orientation of biomolecular aggregates comprising the soybean storage protein, glycinin.
Lac, Andrew; Handren, Lindsay; Crano, William D.
2018-01-01
Culturally, people tend to abstain from alcohol intake during the weekdays and wait to consume in greater frequency and quantity during the weekends. The current research sought to empirically justify the days representing weekday versus weekend alcohol consumption. In study 1 (N = 419), item response theory was applied to a two-parameter (difficulty and discrimination) model that evaluated the days of drinking (frequency) during the typical 7-day week. Item characteristic curves were most similar for Monday, Tuesday, and Wednesday (prototypical weekday) and for Friday and Saturday (prototypical weekend). Thursday and Sunday, however, exhibited item characteristics that bordered the properties of weekday and weekend consumption. In study 2 (N = 403), confirmatory factor analysis was applied to test six hypothesized measurement structures representing drinks per day (quantity) during the typical week. The measurement model producing the strongest fit indices was a correlated two-factor structure involving separate weekday and weekend factors that permitted Thursday and Sunday to double load on both dimensions. The proper conceptualization and accurate measurement of the days demarcating the normative boundaries of “dry” weekdays and “wet” weekends are imperative to inform research and prevention efforts targeting temporal alcohol intake patterns. PMID:27488456
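The two-parameter model used in study 1 assigns each day an item characteristic curve with a difficulty b (how heavy the latent consumption level must be before that day is a drinking day) and a discrimination a. A minimal sketch with invented parameters, in which a "Saturday" item is easier to endorse than a "Tuesday" item:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve:
    P(drinking on this day | latent consumption level theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical items: weekend day endorsed even at low latent levels,
# weekday endorsed only by heavier drinkers.
p_sat = float(icc_2pl(0.0, a=1.5, b=-1.0))
p_tue = float(icc_2pl(0.0, a=1.5, b=1.2))
```

Days whose fitted curves nearly coincide (Monday through Wednesday in the study) behave as one "weekday" dimension, while curves like Thursday's and Sunday's sit between the weekday and weekend clusters.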
Modeling Longitudinal Data Containing Non-Normal Within Subject Errors
NASA Technical Reports Server (NTRS)
Feiveson, Alan; Glenn, Nancy L.
2013-01-01
The mission of the National Aeronautics and Space Administration's (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and come from relatively few subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including applications in the sciences, technology, engineering, and mathematics fields.
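Of the four methods compared, quantile regression is the easiest to illustrate: it minimizes the asymmetric "check" (pinball) loss rather than squared error, so the minimizing constant is the sample quantile rather than the mean. A minimal numpy sketch on synthetic data (not the NASA data):

```python
import numpy as np

def pinball_loss(y, c, q):
    """Check (pinball) loss of a constant fit c at quantile level q:
    residuals above c are weighted q, residuals below are weighted 1 - q."""
    r = y - c
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)

# The constant minimizing the pinball loss is the empirical q-quantile,
# which is what lets quantile methods target the tails instead of the mean.
grid = np.linspace(-3.0, 3.0, 1201)
for q in (0.25, 0.5, 0.9):
    best = grid[np.argmin([pinball_loss(y, c, q) for c in grid])]
    assert abs(best - np.quantile(y, q)) < 0.02
```

This tail-targeting property is why quantile-based methods remain informative when the outcome distribution is skewed or heavy-tailed, the situation the abstract describes.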
Wu, Bi; Shi, Yan; Gong, Xia; Yu, Lin; Chen, Qiuju; Wang, Jian; Sun, Zhaogui
2015-01-01
To evaluate the synchronization of multiple follicular development after estrogen stimulation in prepubertal mice, as well as follicular responsiveness to gonadotropin superovulation, prospective reproductive potential, and polycystic ovary syndrome (PCOS)-like symptoms at adulthood, prepubertal mice were injected intraperitoneally with estrogen to establish an animal model, with solvent as control. Examining synchronized tertiary follicles in the ovaries, in vitro oocyte maturation and fertilization rates, blastocyst formation rate, developmental potential into offspring by embryo transfer, adult fertility, PCOS-like symptoms, and the molecular mechanisms involved, the study found that estrogen stimulation (10 μg/g body weight) leads to follicular development synchronization at the early tertiary stage in prepubertal mice; that reproduction from oocytes to offspring could be achieved by assisted reproductive technology even though the model mice lost their natural fertility when reared to adulthood; and that typical PCOS symptoms, apart from changes in inflammatory pathways, did not persist into adulthood. In conclusion, estrogen can synchronize follicular development in prepubertal mice but does not affect the reproductive outcome of oocytes, and no typical PCOS symptoms remain at adulthood despite inflammation-related changes. PMID:26010950
AGN outflows and feedback twenty years on
NASA Astrophysics Data System (ADS)
Harrison, C. M.; Costa, T.; Tadhunter, C. N.; Flütsch, A.; Kakkad, D.; Perna, M.; Vietri, G.
2018-03-01
It is twenty years since the seminal works by Magorrian and co-authors and by Silk and Rees, which, along with other related work, ignited an explosion of publications connecting active galactic nucleus (AGN)-driven outflows to galaxy evolution. With a surge in observations of AGN outflows, studies are attempting to test AGN feedback models directly using the outflow properties. With a focus on outflows traced by optical and CO emission lines, we discuss significant challenges that greatly complicate this task, from both an observational and theoretical perspective. We highlight the observational uncertainties involved and the assumptions required when deriving kinetic coupling efficiencies (that is, outflow kinetic power as a fraction of AGN luminosity) from typical observations. Based on recent models we demonstrate that extreme caution should be taken when comparing observationally derived kinetic coupling efficiencies to coupling efficiencies from fiducial feedback models.
Macroeconomic effects on mortality revealed by panel analysis with nonlinear trends.
Ionides, Edward L; Wang, Zhen; Tapia Granados, José A
2013-10-03
Many investigations have used panel methods to study the relationships between fluctuations in economic activity and mortality. A broad consensus has emerged on the overall procyclical nature of mortality: perhaps counter-intuitively, mortality typically rises above its trend during expansions. This consensus has been tarnished by inconsistent reports on the specific age groups and mortality causes involved. We show that these inconsistencies result, in part, from the trend specifications used in previous panel models. Standard econometric panel analysis involves fitting regression models using ordinary least squares, employing standard errors which are robust to temporal autocorrelation. The model specifications include a fixed effect, and possibly a linear trend, for each time series in the panel. We propose alternative methodology based on nonlinear detrending. Applying our methodology on data for the 50 US states from 1980 to 2006, we obtain more precise and consistent results than previous studies. We find procyclical mortality in all age groups. We find clear procyclical mortality due to respiratory disease and traffic injuries. Predominantly procyclical cardiovascular disease mortality and countercyclical suicide are subject to substantial state-to-state variation. Neither cancer nor homicide have significant macroeconomic association. PMID:24587843
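The paper's central methodological point, that trend specification drives the results, can be illustrated on synthetic data: when the true trend is curved, a linear specification leaves trend residue in the "cyclical" residuals, while a more flexible fit (here a cubic polynomial; the paper uses its own nonlinear detrending) does not. All series below are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(27)                          # e.g., 27 annual observations, 1980-2006
true_trend = 0.02 * (t - 13.0) ** 2        # curved secular trend
cycle = 0.5 * np.sin(2.0 * np.pi * t / 6)  # "business-cycle" fluctuation of interest
y = true_trend + cycle + 0.1 * rng.standard_normal(t.size)

def poly_detrend(y, t, degree):
    """Residuals after removing a fitted polynomial trend of a given degree."""
    return y - np.polyval(np.polyfit(t, y, degree), t)

lin_resid = poly_detrend(y, t, 1)
cub_resid = poly_detrend(y, t, 3)

# The linear specification leaves most of the curvature in the residuals,
# inflating and distorting the cyclical signal a panel regression would see.
assert np.var(lin_resid) > np.var(cub_resid)
```

A regression of such contaminated residuals on an economic indicator can then pick up trend mismatch rather than a genuine cyclical association, which is the inconsistency the authors diagnose.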
ERIC Educational Resources Information Center
Potter, Carol
2016-01-01
Father involvement in education has been shown to result in a range of positive outcomes for typically developing children. However, the nature of paternal involvement in the education of children with disabilities and especially autism has been under-researched and is little understood. This study aimed to explore the nature of the involvement of…
Animal models of the non-motor features of Parkinson’s disease
McDowell, Kimberly; Chesselet, Marie-Françoise
2012-01-01
The non-motor symptoms (NMS) of Parkinson’s disease (PD) occur in roughly 90% of patients, have a profound negative impact on their quality of life, and often go undiagnosed. NMS typically involve many functional systems, and include sleep disturbances, neuropsychiatric and cognitive deficits, and autonomic and sensory dysfunction. The development and use of animal models have provided valuable insight into the classical motor symptoms of PD over the past few decades. Toxin-induced models provide a suitable approach to study aspects of the disease that derive from the loss of nigrostriatal dopaminergic neurons, a cardinal feature of PD. This also includes some NMS, primarily cognitive dysfunction. However, several NMS poorly respond to dopaminergic treatments, suggesting that they may be due to other pathologies. Recently developed genetic models of PD are providing new ways to model these NMS and identify their mechanisms. This review summarizes the current available literature on the ability of both toxin-induced and genetically-based animal models to reproduce the NMS of PD. PMID:22236386
A density-adaptive SPH method with kernel gradient correction for modeling explosive welding
NASA Astrophysics Data System (ADS)
Liu, M. B.; Zhang, Z. L.; Feng, D. L.
2017-09-01
Explosive welding involves processes like the detonation of explosive, impact of metal structures and strong fluid-structure interaction, while the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density adapting technique which can effectively treat large density ratio is also proposed. The developed SPH model is firstly validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture typical physics in explosive welding including explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
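Any SPH model rests on a compactly supported smoothing kernel; kernel gradient correction then restores consistency of the gradient estimate on irregular particle distributions. The sketch below shows only the standard 1-D cubic spline kernel and checks its unit normalization; the paper's specific correction and density-adaptive algorithms are not reproduced:

```python
import numpy as np

def cubic_spline_1d(r, h):
    """Standard 1-D cubic spline SPH kernel with compact support |r| < 2h."""
    q = np.abs(np.asarray(r, dtype=float)) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

h = 0.1
r = np.linspace(-2.0 * h, 2.0 * h, 100_001)
dx = r[1] - r[0]
# Unit normalization: the kernel must integrate to one over its support,
# so that SPH sums reproduce constant fields exactly in the continuum limit.
integral = np.sum(cubic_spline_1d(r, h)) * dx
assert abs(integral - 1.0) < 1e-6
```

On disordered particle sets the discrete sums only approximate this property, which is precisely what gradient-correction matrices compensate for.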
Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.
2014-01-01
All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281
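The Greenhouse-Geisser correction mentioned above is driven by an epsilon statistic computed from the covariance of orthonormal contrasts of the within-subject levels. A small sketch on synthetic data (this is the textbook epsilon, not the paper's voxel-wise implementation):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a (subjects x levels) repeated-measures
    array; ranges from 1/(k-1) (maximal violation) to 1 (perfect sphericity)."""
    k = data.shape[1]
    # Orthonormal basis of the contrast space (orthogonal to the grand mean).
    basis = np.vstack([np.ones(k), np.eye(k)[:-1]]).T
    C = np.linalg.qr(basis)[0][:, 1:].T        # (k-1) x k contrast matrix
    Sc = C @ np.cov(data, rowvar=False) @ C.T  # contrast covariance
    return np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc))

rng = np.random.default_rng(3)
spherical = rng.standard_normal((200, 4))  # i.i.d. levels: sphericity holds
nonspherical = spherical.copy()
nonspherical[:, 0] *= 5.0                  # one level far more variable

eps_s = gg_epsilon(spherical)
eps_ns = gg_epsilon(nonspherical)
# Violating sphericity pulls epsilon below 1, shrinking the effective
# degrees of freedom in the corrected univariate F-test.
assert eps_ns < eps_s
```

Multiplying the F-test's numerator and denominator degrees of freedom by this epsilon is what UVT-SC-style corrections do.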
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
Evaluation of a Commercial Tractor Safety Monitoring System Using a Reverse Engineering Procedure.
Casazza, Camilla; Martelli, Roberta; Rondelli, Valda
2016-10-17
There is a high rate of work-related deaths in agriculture. In Italy, despite the obligatory installation of ROPS, fatal accidents involving tractors represent more than 40% of work-related deaths in agriculture. As death is often due to an overturn that the driver is incapable of predicting, driver assistance devices that can signal critical stability conditions have been studied and marketed to prevent accidents. These devices measure the working parameters of the tractor through sensors and elaborate the values using an algorithm that, taking into account the geometric characteristics of the tractor, provides a risk index based on models elaborated on a theoretical basis. This research aimed to verify one of these stability indexes in the field, using a commercial driver assistance device to monitor five tractors on the University of Bologna experimental farm. The setup of the device involved determining the coordinates of the center of gravity of the tractor and the implement mounted on the tractor. The analysis of the stability index, limited to events with a significant risk level, revealed a clear separation into two groups: events with high values of roll or pitch and low speeds, typical of a tractor when working, and events with low values of roll and pitch and high steering angle and forward speed, typical of travel on the road. The equation for calculating the critical speed when turning provided a significant contribution only for events that were typical of travel rather than field work, suggesting a diversified calculation approach according to the work phase. Copyright © by the American Society of Agricultural Engineers.
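The critical turning speed referred to above can be illustrated with the textbook quasi-static rollover condition, in which overturning begins when lateral acceleration v²/R exceeds g·(track/2)/h_cog. The commercial device's actual algorithm and the tractor geometry below are assumptions, not taken from the paper:

```python
import math

def critical_turn_speed(radius_m, track_m, cog_height_m, g=9.81):
    """Quasi-static rollover threshold on a flat turn: overturning begins when
    lateral acceleration v^2 / R exceeds g * (track / 2) / h_cog. A textbook
    simplification standing in for the device's proprietary risk index."""
    return math.sqrt(g * radius_m * (track_m / 2.0) / cog_height_m)

# Hypothetical tractor geometry (m); returns the threshold speed in m/s.
v = critical_turn_speed(radius_m=8.0, track_m=1.8, cog_height_m=1.1)

# Raising the center of gravity (e.g., a mounted implement) lowers the
# safe turning speed, which is why the device needs the CoG coordinates.
assert critical_turn_speed(8.0, 1.8, 1.5) < v
```

The dependence on steering radius and forward speed explains why this term matters mainly for road-travel events, as the paper reports.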
The Peer-Related Social Competence of Young Children with Down Syndrome
Guralnick, Michael J.; Connor, Robert T.; Johnson, L. Clark
2014-01-01
The peer-related social competence of children with Down syndrome was examined in an observational study. Dyadic interactions with peers of children with Down syndrome were compared to the dyadic interactions of matched groups of typically developing children and with playmates differing in both familiarity and social skills. Results suggested that both risk and protective factors influenced the peer interactions of children with Down syndrome. Recommendations were made for applying contemporary models of peer-related social competence to etiologic subgroups to better understand the mechanisms involved and to provide direction for the design of intervention programs. PMID:21291310
Zero Gyro Kalman Filtering in the presence of a Reaction Wheel Failure
NASA Technical Reports Server (NTRS)
Hur-Diaz, Sun; Wirzburger, John; Smith, Dan; Myslinski, Mike
2007-01-01
Typical implementation of Kalman filters for spacecraft attitude estimation involves the use of gyros for three-axis rate measurements. When there are less than three axes of information available, the accuracy of the Kalman filter depends highly on the accuracy of the dynamics model. This is particularly significant during the transient period when a reaction wheel with a high momentum fails, is taken off-line, and spins down. This paper looks at how a reaction wheel failure can affect the zero-gyro Kalman filter performance for the Hubble Space Telescope and what steps are taken to minimize its impact.
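The dependence of a zero-gyro filter on its dynamics model can be shown with a toy one-axis filter: rate information comes only from the propagation model and attitude-only measurements, so an unmodeled wheel spin-down torque appears directly as rate-estimation error. This scalar sketch is illustrative, not the Hubble filter; all noise levels and torques are invented:

```python
import numpy as np

# Toy one-axis spacecraft; state x = [attitude (rad), rate (rad/s)].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
H = np.array([[1.0, 0.0]])             # attitude-only measurement (no gyro)
Q = np.diag([1e-8, 1e-8])              # process noise: strong trust in model
R = np.array([[1e-4]])                 # measurement noise

def kf_step(x, P, z, model_accel):
    # Propagate with the dynamics model (the filter's only rate source).
    x = F @ x + np.array([0.0, model_accel * dt])
    P = F @ P @ F.T + Q
    # Attitude measurement update.
    y = z - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(2)
x_true = np.zeros(2)
x_est, P = np.zeros(2), np.eye(2) * 1e-2
rate_err = []
for k in range(600):
    # Unmodeled disturbance: a failed wheel spins down for the first 20 s,
    # torquing the vehicle, while the filter's model assumes zero torque.
    true_accel = 5e-3 if k < 200 else 0.0
    x_true = F @ x_true + np.array([0.0, true_accel * dt])
    z = np.array([x_true[0] + 1e-2 * rng.standard_normal()])
    x_est, P = kf_step(x_est, P, z, model_accel=0.0)
    rate_err.append(abs(x_est[1] - x_true[1]))

# Rate error is large while the unmodeled torque acts, then shrinks once
# the attitude measurements pull the estimate back afterwards.
assert np.mean(rate_err[100:200]) > np.mean(rate_err[500:])
```

Modeling the spin-down torque (passing a nonzero `model_accel`) would remove most of this transient error, which is the kind of mitigation the paper discusses.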
Envisioning Cognitive Robots for Future Space Exploration
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Stoica, Adrian
2010-01-01
Cognitive robots in the context of space exploration are envisioned with advanced capabilities of model building, continuous planning/re-planning, self-diagnosis, as well as the ability to exhibit a level of 'understanding' of new situations. An overview of some JPL components (e.g. CASPER, CAMPOUT) and a description of the architecture CARACaS (Control Architecture for Robotic Agent Command and Sensing) that combines these in the context of a cognitive robotic system operating in various scenarios are presented. Finally, two examples of typical scenarios are given: a multi-robot construction mission and a human-robot mission involving direct collaboration with humans.
Investigation of high efficiency GaAs solar cells
NASA Technical Reports Server (NTRS)
Olsen, Larry C.; Dunham, Glen; Addis, F. W.; Huber, Dan; Linden, Kurt
1989-01-01
Investigations of basic mechanisms which limit the performance of high efficiency GaAs solar cells are discussed. P/N heteroface structures have been fabricated from MOCVD epiwafers. Typical AM1 efficiencies are in the 21 to 22 percent range, with a SERI measurement for one cell being 21.5 percent. The cells are nominally 1.5 x 1.5 cm in size. Studies have involved photoresponse, T-I-V analyses, and interpretation of data in terms of appropriate models to determine key cell parameters. Results of these studies are utilized to determine future approaches for increasing GaAs solar cell efficiencies.
Modelling non-steady-state isotope enrichment of leaf water in a gas-exchange cuvette environment.
Song, Xin; Simonin, Kevin A; Loucos, Karen E; Barbour, Margaret M
2015-12-01
The combined use of a gas-exchange system and laser-based isotope measurement is a tool of growing interest in plant ecophysiological studies, owing to its relevance for assessing isotopic variability in leaf water and/or transpiration under non-steady-state (NSS) conditions. However, the current Farquhar & Cernusak (F&C) NSS leaf water model, originally developed for open-field scenarios, is unsuited for use in a gas-exchange cuvette environment where the isotope composition of water vapour (δv) is intrinsically linked to that of transpiration (δE). Here, we modified the F&C model to make it directly compatible with the δv-δE dynamic characteristic of a typical cuvette setting. The resultant new model suggests a role of the 'net-flux'-based (rather than 'gross-flux'-based, as in the original F&C model) leaf water turnover rate in controlling the time constant (τ) for the approach to steady state. The validity of the new model was subsequently confirmed in a cuvette experiment involving cotton leaves, for which we demonstrated close agreement between τ values predicted from the model and those measured from NSS variations in isotope enrichment of transpiration. Hence, we recommend that our new model be incorporated into future isotope studies involving a cuvette condition where the transpiration flux directly influences δv. Gas-exchange systems coupled to laser-based isotope measurement are increasingly popular among plant ecophysiologists for investigating non-steady-state (NSS) isotopic variability in leaf water (and/or transpiration); however, the current Farquhar & Cernusak (F&C) NSS leaf water model is unsuited for use in a gas-exchange cuvette environment due to its implicit assumption that the isotope composition of water vapour (δv) is constant and independent of that of transpiration (δE).
In the present study, we modified the F&C model to make it compatible with the dynamic relationship between δv and δE as is typically associated with a cuvette setting. Using an experiment conducted on cotton leaves, we show that the modified NSS model performed well in predicting the time constant for the exponential approach of leaf water toward steady state under cuvette conditions. Such a result demonstrates the applicability of this new model to gas-exchange cuvette conditions where the transpiration flux directly influences δv , and therefore suggests the need to incorporate this model into future isotope studies that employ a laser-cuvette coupled system. © 2015 John Wiley & Sons Ltd.
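The quantity being predicted and measured here, the time constant τ, describes a simple exponential relaxation of leaf water enrichment toward steady state. A minimal sketch (the δ values and τ are hypothetical, not the cotton-leaf results):

```python
import math

def leaf_water_delta(t, delta0, delta_ss, tau):
    """Leaf water isotope enrichment relaxing exponentially from its initial
    value delta0 toward the steady-state value delta_ss with time constant
    tau (the quantity the modified model predicts from leaf water turnover)."""
    return delta_ss + (delta0 - delta_ss) * math.exp(-t / tau)

# Hypothetical values: tau = 25 min. After one time constant, the approach
# to steady state is ~63% complete.
tau = 25.0
d = leaf_water_delta(tau, delta0=5.0, delta_ss=20.0, tau=tau)
frac_complete = (d - 5.0) / (20.0 - 5.0)
assert abs(frac_complete - (1.0 - math.exp(-1.0))) < 1e-12
```

Fitting this exponential to measured NSS variations in transpiration enrichment yields the "measured" τ that the paper compares against the model prediction.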
ERIC Educational Resources Information Center
Egbert, Robert I.; Stone, Lorene H.; Adams, David L.
2011-01-01
Four-year cooperative engineering programs are becoming more common in the United States. Cooperative engineering programs typically involve a "parent" institution with an established engineering program and one or more "satellite" institutions which typically have few or no engineering programs and are located in an area where…
Distributed Group Design Process: Lessons Learned.
ERIC Educational Resources Information Center
Eseryel, Deniz; Ganesan, Radha
A typical Web-based training development team consists of a project manager, an instructional designer, a subject-matter expert, a graphic artist, and a Web programmer. The typical scenario involves team members working together in the same setting during the entire design and development process. What happens when the team is distributed, that is…
Visual Foraging With Fingers and Eye Gaze
Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni
2016-01-01
A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323
ERIC Educational Resources Information Center
Zou, Di
2017-01-01
This research inspects the allocation of involvement load to the evaluation component of the involvement load hypothesis, examining how three typical approaches to evaluation (cloze-exercises, sentence-writing, and composition-writing) promote word learning. The results of this research were partially consistent with the predictions of the…
Lozano, Miguel; Serrano, Miguel A; López-Colina, Carlos; Gayarre, Fernando L; Suárez, Jesús
2018-02-09
Eurocode 3 establishes the component method to analytically characterize structural joints between beams and columns. When one of the members involved in the joint is a hollow section (i.e., a tube), there is a lack of information for the specific components present in the joint. There are two ways to bridge the gap: experimental testing on actual beam-column joints involving tubular sections, or numerical modeling, typically by means of finite element analysis. For this second option, it is necessary to know the actual mechanical properties of the material. Because the joint involves a welding process, there is concern about how the mechanical properties in the heat-affected zone (HAZ) influence the behavior of the joint. In this work, coupons were extracted from the HAZ of the beam-column joint. The coupons were tested and the results were implemented in the numerical model of the joint, in an attempt to bring it closer to the experimental results of the tested joints.
Amestoy, Anouck; Guillaud, Etienne; Bouvard, Manuel P.; Cazalets, Jean-René
2015-01-01
Individuals with autism spectrum disorder (ASD) present reduced visual attention to faces. However, contradictory conclusions have been drawn about the strategies involved in visual face scanning due to the various methodologies implemented in the study of facial screening. Here, we used a data-driven approach to compare children and adults with ASD subjected to the same free viewing task and to address developmental aspects of face scanning, including its temporal patterning, in healthy children, and adults. Four groups (54 subjects) were included in the study: typical adults, typically developing children, and adults and children with ASD. Eye tracking was performed on subjects viewing unfamiliar faces. Fixations were analyzed using a data-driven approach that employed spatial statistics to provide an objective, unbiased definition of the areas of interest. Typical adults expressed a spatial and temporal strategy for visual scanning that differed from the three other groups, involving a sequential fixation of the right eye (RE), left eye (LE), and mouth. Typically developing children, adults and children with autism exhibited similar fixation patterns and they always started by looking at the RE. Children (typical or with ASD) subsequently looked at the LE or the mouth. Based on the present results, the patterns of fixation for static faces that mature from childhood to adulthood in typical subjects are not found in adults with ASD. The atypical patterns found after developmental progression and experience in ASD groups appear to remain blocked in an immature state that cannot be differentiated from typical developmental child patterns of fixation. PMID:26236264
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed the predominant indoor material is a homogeneous slab, initially free of contaminant, and the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double exponential decaying input, the model predicts the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals that for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
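A heavily lumped version of the spill scenario can be sketched by collapsing the diffusive slab into a single well-mixed sink compartment exchanging with room air through a transfer conductance. The paper solves the full diffusion equation in the slab; all parameter values below are invented for illustration:

```python
def simulate(spill_rate, hours, with_sink=True):
    """Euler integration of a two-compartment (air + lumped sink) balance."""
    V, Q = 30.0, 15.0      # room volume (m3) and ventilation rate (m3/h)
    A, L = 20.0, 0.005     # sink area (m2) and effective thickness (m)
    K, k = 2000.0, 0.5     # partition coefficient (-), transfer coeff (m/h)
    dt = 1e-3              # time step (h)
    C, Cs = 0.0, 0.0       # air and material-phase concentrations (mg/m3)
    emitted = vented = peak = 0.0
    for i in range(int(hours / dt)):
        S = spill_rate if i * dt < 1.0 else 0.0            # 1-h episodic spill
        flux = k * A * (C - Cs / K) if with_sink else 0.0  # air -> sink (mg/h)
        C += dt * (S - Q * C - flux) / V
        Cs += dt * flux / (A * L)
        emitted += S * dt
        vented += Q * C * dt
        peak = max(peak, C)
    return peak, emitted, vented + C * V + Cs * A * L

peak_sink, emitted, accounted = simulate(100.0, 10.0, with_sink=True)
peak_bare, _, _ = simulate(100.0, 10.0, with_sink=False)

# The reversible sink shaves the spill's peak concentration, and the mass
# balance closes: emitted = vented + remaining in air + stored in the sink.
assert peak_sink < peak_bare
assert abs(accounted - emitted) / emitted < 1e-2
```

After the spill ends, the sink term changes sign and the stored mass re-emits into the air, which is the "reversible" behavior that distinguishes this model from an irreversible deposition sink.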
Comments of statistical issue in numerical modeling for underground nuclear test monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, W.L.; Anderson, K.K.
1993-03-01
The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks. Statistical ideas may be helpful in resolving some numerical modeling issues. Specifically, we comment first on the role of statistical design/analysis in the quantification process to answer the question "what do we know about the numerical modeling of underground nuclear tests?" and second on the peculiar nature of uncertainty analysis for situations involving numerical modeling. The simulations described in the workshop, though associated with topic areas, were basically sets of examples. Each simulation was tuned towards agreeing with either empirical evidence or an expert's opinion of what empirical evidence would be. While the discussions were reasonable, whether the embellishments were correct or a forced fitting of reality is unclear and illustrates that "simulation is easy." We also suggest that these examples of simulation are typical and the questions concerning the legitimacy and the role of knowing the reality are fair, in general, with respect to simulation. The answers will help us understand why "prediction is difficult."
Exploring sensitivity of a multistate occupancy model to inform management decisions
Green, A.W.; Bailey, L.L.; Nichols, J.D.
2011-01-01
Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model involving occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). In many ways, multistate sensitivity metrics behave in similar ways as standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. 
In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate framework. Assuming the system is near equilibrium, our sensitivity analyses illustrate how to investigate the sensitivity of the system-specific equilibrium state(s) to changes in transition rates. Because management will typically act on these transition rates, sensitivity analyses can provide valuable information about the potential influence of different actions and when it may be prudent to shift the focus of management among the various transition rates. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.
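In the simplest two-state (unoccupied/occupied) case, the equilibrium and sensitivity expressions that the multistate metrics generalise can be written down directly. A minimal sketch with illustrative transition rates, not the paper's multistate formulas:

```python
# Standard two-state occupancy dynamics:
#   psi_{t+1} = psi_t * (1 - eps) + (1 - psi_t) * gamma,
# with gamma = colonisation and eps = local-extinction probability.
# Equilibrium: psi* = gamma / (gamma + eps).

def equilibrium_occupancy(gamma, eps):
    return gamma / (gamma + eps)

def sensitivities(gamma, eps):
    """Analytic partials of psi* w.r.t. gamma and eps."""
    d = (gamma + eps) ** 2
    return eps / d, -gamma / d

# Low equilibrium occupancy -> colonisation dominates the sensitivity;
# high equilibrium occupancy -> persistence (1 - eps) dominates.
low = equilibrium_occupancy(0.1, 0.5)            # psi* ~ 0.17
s_gamma_low, s_eps_low = sensitivities(0.1, 0.5)
high = equilibrium_occupancy(0.5, 0.1)           # psi* ~ 0.83
s_gamma_high, s_eps_high = sensitivities(0.5, 0.1)
print(low, s_gamma_low, high, s_eps_high)
```

This reproduces the qualitative pattern stated in the abstract: at low psi* the colonisation sensitivity exceeds the extinction sensitivity in magnitude, and vice versa at high psi*.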
A View on Future Building System Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael
This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).
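The mix of coupled differential equations and discrete events described above can be illustrated with a toy hybrid simulation: a one-node room model under an on/off hysteresis thermostat. All parameters are illustrative, not drawn from any real building model:

```python
# Toy hybrid building simulation: continuous room-temperature dynamics
# (an ODE integrated with explicit Euler) coupled to a discrete on/off
# thermostat (an "event"). Illustrative parameters only.
dt = 60.0                 # time step [s]
C = 1e6                   # thermal capacitance [J/K]
UA = 50.0                 # envelope conductance [W/K]
Q_heat = 2000.0           # heater output when on [W]
T_out = 0.0               # outdoor temperature [degC]
T = 15.0                  # initial room temperature [degC]
heater_on = True
history = []
for step in range(24 * 60):                  # one day, 1-minute steps
    # discrete part: hysteresis thermostat switches at 19 / 21 degC
    if T < 19.0:
        heater_on = True
    elif T > 21.0:
        heater_on = False
    # continuous part: explicit Euler on C * dT/dt = Q - UA * (T - T_out)
    Q = Q_heat if heater_on else 0.0
    T += dt / C * (Q - UA * (T - T_out))
    history.append(T)
print(min(history[300:]), max(history[300:]))   # band after warm-up
```

After the initial warm-up, the temperature cycles within the thermostat's hysteresis band, with the event logic and the differential equation interacting at every step.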
Modelling the petrogenesis of high Rb/Sr silicic magmas
Halliday, A.N.; Davidson, J.P.; Hildreth, W.; Holden, P.
1991-01-01
Rhyolites can be highly evolved with Sr contents as low as 0.1 ppm and Rb/Sr > 2,000. In contrast, granite batholiths are commonly comprised of rocks with Rb/Sr < 100. Mass-balance modelling of source compositions, differentiation and contamination using the trace-element geochemistry of granites is therefore commonly in error because of the failure to account for evolved differentiates that may have been erupted from the system. Rhyolitic magmas with very low Sr concentrations (≤1 ppm) cannot be explained by any partial melting models involving typical crustal source compositions. The only plausible mechanism for the production of such rhyolites is Rayleigh fractional crystallization involving substantial volumes of cumulates. A variety of methods for modelling the differentiation of magmas with extremely high Rb/Sr is discussed. In each case it is concluded that the bulk partition coefficients for Sr have to be large. In the simplest models, the bulk D(Sr) of the most evolved types is modelled as > 50. Evidence from phenocryst/glass/whole-rock concentrations supports high Sr partition coefficients in feldspars from high-silica rhyolites. However, the low modal abundance of plagioclase commonly observed in such rocks is difficult to reconcile with such simple fractionation models of the observed trace-element trends. In certain cases, this may be because the apparent trace-element trend defined by the suite of cogenetic rhyolites is the product of different batches of magma with separate differentiation histories accumulating in the magma chamber roof zone. © 1991.
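The standard Rayleigh fractional crystallization relation, C_L/C_0 = F^(D-1), shows why a bulk D(Sr) of order 50 drives Sr to sub-ppm levels after only modest crystallization. A quick numerical check with an assumed (arbitrary) parental concentration:

```python
# Rayleigh fractional crystallization: C_L / C_0 = F**(D - 1),
# where F is the melt fraction remaining and D the bulk partition
# coefficient. c0 is an assumed parental-melt value, for illustration.

def rayleigh(c0, F, D):
    """Residual-liquid concentration after fractional crystallization."""
    return c0 * F ** (D - 1)

c0_sr = 100.0  # ppm Sr in the parental melt (assumed)
for F in (0.9, 0.7, 0.5):
    print(F, rayleigh(c0_sr, F, 50.0))
```

With D = 50, removing only 10% of the melt as cumulates (F = 0.9) already depletes Sr by a factor of ~175, consistent with the sub-ppm rhyolite values discussed above.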
A Machine Learning Framework to Forecast Wave Conditions
NASA Astrophysics Data System (ADS)
Zhang, Y.; James, S. C.; O'Donncha, F.
2017-12-01
Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1, 2013 and May 31, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise, where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds.
This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
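The "mapping matrices" idea can be sketched with ordinary least squares on synthetic data: learn a single matrix M from input conditions to a field of wave heights, so that prediction is one matrix multiplication. This is a minimal stand-in with made-up dimensions, not the authors' actual algorithm or data:

```python
import numpy as np

# Minimal linear surrogate: learn a mapping matrix M from model inputs
# (boundary waves, currents, wind) to a field of wave heights, so that
# forecasting reduces to a single matrix multiplication. Synthetic data;
# dimensions and the linear "physics" are assumptions for illustration.
rng = np.random.default_rng(0)
n_runs, n_inputs, n_points = 200, 6, 50
X = rng.normal(size=(n_runs, n_inputs))          # input conditions per run
M_true = rng.normal(size=(n_inputs, n_points))   # stand-in "physics" mapping
Y = X @ M_true + 0.01 * rng.normal(size=(n_runs, n_points))  # wave heights

M, *_ = np.linalg.lstsq(X, Y, rcond=None)        # fit the mapping matrix
x_new = rng.normal(size=(1, n_inputs))           # a new forecast condition
y_hat = x_new @ M                                # surrogate output field
rmse = np.sqrt(np.mean((y_hat - x_new @ M_true) ** 2))
print(rmse)   # small: the surrogate closely reproduces the linear physics
```

The point is the cost structure: once M is fitted offline, each forecast is a matrix multiplication, which is the source of the multi-order-of-magnitude speed-up claimed above.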
Wind energy development: methods for assessing risks to birds and bats pre-construction
Katzner, Todd E.; Bennett, Victoria; Miller, Tricia A.; Duerr, Adam E.; Braham, Melissa A.; Hale, Amanda
2016-01-01
Wind power generation is rapidly expanding. Although wind power is a low-carbon source of energy, it can negatively impact birds and bats, either directly through fatality or indirectly by displacement or habitat loss. Pre-construction risk assessment at wind facilities within the United States is usually required only on public lands. When conducted, it generally involves a 3-tier process, with each step leading to more detailed and rigorous surveys. Preliminary site assessment (U.S. Fish and Wildlife Service, Tier 1) is usually conducted remotely and involves evaluation of existing databases and published materials. If potentially at-risk wildlife are present and the developer wishes to continue the development process, then on-site surveys are conducted (Tier 2) to verify the presence of those species and to assess site-specific features (e.g., topography, land cover) that may influence risk from turbines. The next step in the process (Tier 3) involves quantitative or scientific studies to assess the potential risk of the proposed project to wildlife. Typical Tier-3 research may involve acoustic, aural, observational, radar, capture, tracking, or modeling studies, all designed to understand details of risk to specific species or groups of species at the given site. Our review highlights several features lacking from many risk assessments, particularly the paucity of before-after-control-impact (BACI) studies involving modeling and a lack of understanding of the cumulative effects of wind facilities on wildlife. Both are essential to understand effective designs for pre-construction monitoring and both would help expand risk assessment beyond eagles.
Electric Power Distribution System Model Simplification Using Segment Substitution
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...
2017-09-20
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
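A QSTS loop in miniature: re-solve a load-flow problem at every time step of a load profile. Here the "system" is a single feeder segment feeding one constant-power load, with illustrative per-unit values, which is far smaller than any real distribution model:

```python
import numpy as np

# Toy quasi-static time-series (QSTS) loop: one load-flow solution per
# time step over a daily load profile. Single-segment feeder with an
# assumed impedance; all values are illustrative per-unit numbers.
z = 0.05                                    # feeder segment impedance [pu]
loads = 0.2 + 0.15 * np.sin(np.linspace(0, 2 * np.pi, 24))  # hourly P [pu]

def solve_load_flow(p, z, v0=1.0, iters=50):
    """Fixed-point solution of v = v0 - z * (p / v) for a constant-power load."""
    v = v0
    for _ in range(iters):
        v = v0 - z * p / v
    return v

voltages = [solve_load_flow(p, z) for p in loads]
print(min(voltages), max(voltages))         # end-of-feeder voltage over the day
```

Even in this toy, the cost is one nonlinear solve per time step; with minute-level steps over years, that loop is exactly the computational burden that model simplification targets.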
Inferring human mobility using communication patterns.
Palchykov, Vasyl; Mitrović, Marija; Jo, Hang-Hyun; Saramäki, Jari; Pan, Raj Kumar
2014-08-22
Understanding the patterns of mobility of individuals is crucial for a number of reasons, from city planning to disaster management. There are two common ways of quantifying the amount of travel between locations: by direct observations that often involve privacy issues, e.g., tracking mobile phone locations, or by estimations from models. Typically, such models build on accurate knowledge of the population size at each location. However, when this information is not readily available, their applicability is rather limited. As mobile phones are ubiquitous, our aim is to investigate if mobility patterns can be inferred from aggregated mobile phone call data alone. Using data released by Orange for Ivory Coast, we show that human mobility is well predicted by a simple model based on the frequency of mobile phone calls between two locations and their geographical distance. We argue that the strength of the model comes from directly incorporating the social dimension of mobility. Furthermore, as only aggregated call data is required, the model helps to avoid potential privacy problems.
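A model of the kind described, predicting mobility from call frequency and distance, can be sketched as a log-linear fit. The functional form and exponents below are assumptions baked into synthetic data, not the paper's fitted model:

```python
import numpy as np

# Sketch of a call-frequency/distance mobility model: assume
#   T_ij = A * f_ij**b / d_ij**c
# and recover (A, b, c) by least squares in log space. Synthetic,
# noise-free data with known exponents; illustrative only.
rng = np.random.default_rng(1)
n = 500
f = rng.uniform(1, 1000, n)               # calls between location pairs
d = rng.uniform(1, 300, n)                # distance between pairs [km]
T = 2.0 * f ** 0.8 / d ** 0.5             # "true" travel volume

X = np.column_stack([np.ones(n), np.log(f), np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
a, b, c = coef                            # c is the coefficient on log(d)
print(np.exp(a), b, -c)                   # recovers A=2.0, b=0.8, c=0.5
```

Because only aggregated pairwise call counts enter the regressors, this kind of model sidesteps individual-level tracking, which is the privacy point made in the abstract.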
Naive scoring of human sleep based on a hidden Markov model of the electroencephalogram.
Yaghouby, Farid; Modur, Pradeep; Sunderam, Sridhar
2014-01-01
Clinical sleep scoring involves tedious visual review of overnight polysomnograms by a human expert. Many attempts have been made to automate the process by training computer algorithms such as support vector machines and hidden Markov models (HMMs) to replicate human scoring. Such supervised classifiers are typically trained on scored data and then validated on scored out-of-sample data. Here we describe a methodology based on HMMs for scoring an overnight sleep recording without the benefit of a trained initial model. The number of states in the data is not known a priori and is optimized using a Bayes information criterion. When tested on a 22-subject database, this unsupervised classifier agreed well with human scores (mean of Cohen's kappa > 0.7). The HMM also outperformed other unsupervised classifiers (Gaussian mixture models, k-means, and linkage trees), which are capable of naive classification but do not model dynamics, by a significant margin (p < 0.05).
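Cohen's kappa, the agreement statistic quoted above, corrects raw agreement for chance: kappa = (p_o - p_e) / (1 - p_e). A minimal implementation with a toy three-category example (the stage labels are illustrative, not the study's data):

```python
import numpy as np

# Cohen's kappa: chance-corrected agreement between two scorers,
#   kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is the agreement expected
# from the two raters' marginal category frequencies.

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                              # observed agreement
    p_e = sum(np.mean(a == k) * np.mean(b == k) for k in cats)
    return (p_o - p_e) / (1 - p_e)

human = [0, 0, 1, 1, 2, 2]   # e.g. wake / NREM / REM epoch labels (toy)
model = [0, 0, 1, 1, 2, 0]
print(cohens_kappa(human, model))   # 0.75
```

Here five of six epochs agree (p_o = 5/6) and chance agreement is 1/3, giving kappa = 0.75, which is why a mean kappa above 0.7 is read as good agreement.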
A Space Weather Forecasting System with Multiple Satellites Based on a Self-Recognizing Network
Tokumitsu, Masahiro; Ishida, Yoshiteru
2014-01-01
This paper proposes a space weather forecasting system at geostationary orbit for high-energy electron flux (>2 MeV). The forecasting model involves multiple sensors on multiple satellites. The sensors interconnect and evaluate each other to predict future conditions at geostationary orbit. The proposed forecasting model is constructed using a dynamic relational network for sensor diagnosis and event monitoring. The sensors of the proposed model are located at different positions in space. The solar-monitoring satellites are equipped with devices for monitoring the interplanetary magnetic field and solar wind speed. Other satellites orbit near the Earth monitoring high-energy electron flux. We investigate forecasting for two typical examples by comparing the performance of two models with different numbers of sensors. We demonstrate prediction by the proposed model against coronal mass ejections and a coronal hole. This paper aims to investigate the possibility of space weather forecasting based on a satellite network with in-situ sensing.
Song, M; Ouyang, Z; Liu, Z L
2009-05-01
Composed of linear difference equations, a discrete dynamical system (DDS) model was designed to reconstruct transcriptional regulations in gene regulatory networks (GRNs) for ethanologenic yeast Saccharomyces cerevisiae in response to 5-hydroxymethylfurfural (HMF), a bioethanol conversion inhibitor. The modelling aims at identifying a system of linear difference equations that represents temporal interactions among significantly expressed genes. Power stability is imposed on a system model under the normal condition in the absence of the inhibitor. Non-uniform sampling, typical in a time-course experimental design, is addressed by a log-time domain interpolation. A statistically significant DDS model of the yeast GRN, derived from time-course gene expression measurements under HMF exposure, revealed several verified transcriptional regulation events. These events implicate Yap1 and Pdr3, transcription factors consistently known for their regulatory roles by other studies or postulated by independent sequence motif analysis, suggesting their involvement in yeast tolerance and detoxification of the inhibitor.
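A linear DDS of the form x_{t+1} = A x_t can be identified from a time course by least squares, with power stability then checked via the spectral radius of A. A toy three-gene sketch with an assumed interaction matrix, not the paper's model:

```python
import numpy as np

# Toy linear discrete dynamical system (DDS) x_{t+1} = A x_t:
# simulate a noise-free "expression" time course, recover A by least
# squares, and check power stability (spectral radius of A below 1).
# A_true and the initial state are illustrative assumptions.
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.0, 0.6, 0.1],
                   [0.1, 0.0, 0.4]])
x = np.array([1.0, 2.0, 1.5])
traj = [x]
for _ in range(20):                       # simulate the time course
    traj.append(A_true @ traj[-1])
traj = np.array(traj)

X_t, X_next = traj[:-1], traj[1:]         # (x_t, x_{t+1}) pairs
A_hat = np.linalg.lstsq(X_t, X_next, rcond=None)[0].T
rho = max(abs(np.linalg.eigvals(A_hat)))
print(rho)    # spectral radius < 1  =>  power-stable system
```

Power stability here means every trajectory decays rather than blowing up, which is the constraint imposed on the model under the normal (inhibitor-free) condition.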
Using Model-Based Reasoning for Autonomous Instrument Operation - Lessons Learned From IMAGE/LENA
NASA Technical Reports Server (NTRS)
Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Bailin, Sidney C.
2001-01-01
Model-based reasoning has been applied as an autonomous control strategy on the Low Energy Neutral Atom (LENA) instrument currently flying on board the Imager for Magnetosphere-to-Aurora Global Exploration (IMAGE) spacecraft. Explicit models of instrument subsystem responses have been constructed and are used to dynamically adapt the instrument to the spacecraft's environment. These functions are cast as part of a Virtual Principal Investigator (VPI) that autonomously monitors and controls the instrument. In the VPI's current implementation, LENA's command uplink volume has been decreased significantly from its previous volume; typically, no uplinks are required for operations. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. The components of LENA are common in space science instrumentation, and lessons learned by modeling this system may be applied to other instruments. Future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.
Macrophage polarization in virus-host interactions
USDA-ARS?s Scientific Manuscript database
Macrophage involvement in viral infections and antiviral states is common. However, this involvement has not been well-studied in the paradigm of macrophage polarization, which typically has been categorized by the dichotomy of classical (M1) and alternative (M2) statuses. Recent studies have reveal...
Sensemaking Handoffs: Why? How? and When?
ERIC Educational Resources Information Center
Sharma, Nikhil
2010-01-01
Sensemaking tasks are challenging and typically involve collecting, organizing and understanding information. Sensemaking often involves a handoff where a subsequent recipient picks up work done by a provider. Sensemaking handoffs are very challenging because handoffs introduce discontinuity in sensemaking. This dissertation attempts to explore…
Implicit level set algorithms for modelling hydraulic fracture propagation.
Peirce, A
2016-10-13
Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).
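The core idea of tracking a free boundary implicitly can be shown in one dimension with a first-order upwind level-set scheme, a drastic simplification of the coupled hydraulic-fracture equations reviewed above:

```python
import numpy as np

# Minimal implicit level-set sketch: a front located at phi = 0 is
# advected at speed V by solving phi_t + V * |phi_x| = 0 (V > 0) with a
# first-order upwind scheme. Grid, speed, and initial front position are
# illustrative; the front is never tracked explicitly.
nx, V = 201, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / V                          # CFL-stable time step
phi = x - 0.2                              # signed distance; front at x = 0.2
t = 0.0
while t < 0.3 - 1e-12:
    dphi = np.maximum((phi - np.roll(phi, 1)) / dx, 0.0)  # upwind gradient
    dphi[0] = (phi[1] - phi[0]) / dx                      # inflow boundary
    phi = phi - dt * V * dphi
    t += dt

# recover the front as the interpolated zero crossing of phi
i = np.argmax(phi > 0)
front = x[i - 1] - phi[i - 1] * dx / (phi[i] - phi[i - 1])
print(front)    # ~0.5 = 0.2 + V * 0.3
```

The free boundary is read off from the zero level set after the fact; in the hydraulic-fracture schemes above, the same implicit representation is combined with tip asymptotics to locate and advance the fracture front.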
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant chiller performance impact. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is the sum of hourly load (from a typical building in the climate of interest) and specific power for the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
Seremwe, Mutsa; Schnellmann, Rick G.
2015-01-01
Aldosterone is a steroid hormone important in the regulation of blood pressure. Aberrant production of aldosterone results in the development and progression of diseases including hypertension and congestive heart failure; therefore, a complete understanding of aldosterone production is important for developing more effective treatments. Angiotensin II (AngII) regulates steroidogenesis, in part through its ability to increase intracellular calcium levels. Calcium can activate calpains, proteases classified as typical or atypical based on the presence or absence of penta-EF-hands, which are involved in various cellular responses. We hypothesized that calpain, in particular calpain-10, is activated by AngII in adrenal glomerulosa cells and underlies aldosterone production. Our studies showed that pan-calpain inhibitors reduced AngII-induced aldosterone production in 2 adrenal glomerulosa cell models, primary bovine zona glomerulosa and human adrenocortical carcinoma (HAC15) cells, as well as CYP11B2 expression in the HAC15 cells. Although AngII induced calpain activation in these cells, typical calpain inhibitors had no effect on AngII-elicited aldosterone production, suggesting a lack of involvement of classical calpains in this process. However, an inhibitor of the atypical calpain, calpain-10, decreased AngII-induced aldosterone production. Consistent with this result, small interfering RNA (siRNA)-mediated knockdown of calpain-10 inhibited aldosterone production and CYP11B2 expression, whereas adenovirus-mediated overexpression of calpain-10 resulted in increased AngII-induced aldosterone production. Our results indicate that AngII-induced activation of calpain-10 in glomerulosa cells underlies aldosterone production and identify calpain-10 or its downstream pathways as potential targets for the development of drug therapies for the treatment of hypertension.
Shuryak, Igor; Loucas, Bradford D; Cornforth, Michael N
2017-01-01
The concept of curvature in dose-response relationships figures prominently in radiation biology, encompassing a wide range of interests including radiation protection, radiotherapy and fundamental models of radiation action. In this context, the ability to detect even small amounts of curvature becomes important. Standard (ST) statistical approaches used for this purpose typically involve least-squares regression, followed by a test on sums of squares. Because we have found that these methods are not particularly robust, we investigated an alternative information theoretic (IT) approach, which involves Poisson regression followed by information-theoretic model selection. Our first objective was to compare the performances of the ST and IT methods by using them to analyze mFISH data on gamma-ray-induced simple interchanges in human lymphocytes, and on Monte Carlo simulated data. Real and simulated data sets that contained small-to-moderate curvature were deliberately selected for this exercise. The IT method tended to detect curvature with higher confidence than the ST method. The finding of curvature in the dose response for true simple interchanges is discussed in the context of fundamental models of radiation action. Our second objective was to optimize the design of experiments aimed specifically at detecting curvature. We used Monte Carlo simulation to investigate the following design parameters, constrained by available resources (i.e., the total number of cells to be scored): the optimal number of dose points to use; the best way to apportion the total number of cells among these dose points; and the spacing of dose intervals. Counterintuitively, our simulation results suggest that 4-5 radiation doses were typically optimal, whereas adding more dose points may actually prove detrimental. Superior results were also obtained by implementing unequal dose spacing and unequal distributions in the number of cells scored at each dose.
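The IT approach described, Poisson regression followed by information-theoretic model selection, can be sketched by fitting linear and linear-quadratic dose responses by maximum likelihood and comparing AIC. The dose levels, coefficients and counts below are synthetic with curvature built in, not the authors' data:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the information-theoretic (IT) approach: Poisson regression
# of aberration counts on dose, comparing a linear and a linear-quadratic
# log-mean model by AIC. Synthetic counts with built-in curvature.
dose = np.repeat(np.array([0.0, 0.5, 1.0, 2.0, 4.0]), 40)  # cells per dose
rng = np.random.default_rng(2)
mu_true = np.exp(0.3 + 0.2 * dose + 0.15 * dose ** 2)      # curved response
y = rng.poisson(mu_true)

def neg_loglik(beta, X, y):
    eta = np.clip(X @ beta, -30.0, 30.0)       # guard against exp overflow
    return -np.sum(y * eta - np.exp(eta))      # Poisson log-lik (no y! term)

def fit_aic(X, y):
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y))
    return 2 * X.shape[1] + 2 * res.fun        # AIC up to the constant sum(ln y!)

X_lin = np.column_stack([np.ones_like(dose), dose])
X_quad = np.column_stack([X_lin, dose ** 2])
aic_lin, aic_quad = fit_aic(X_lin, y), fit_aic(X_quad, y)
print(aic_lin - aic_quad)   # positive: the quadratic (curvature) term is supported
```

Dropping the constant sum(ln y!) from the log-likelihood is harmless here because it cancels in any AIC comparison between models fitted to the same counts.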
Training Class Inclusion Responding in Typically Developing Children and Individuals with Autism
ERIC Educational Resources Information Center
Ming, Siri; Mulhern, Teresa; Stewart, Ian; Moran, Laura; Bynum, Kellie
2018-01-01
In a "class inclusion" task, a child must respond to stimuli as being involved in two different though hierarchically related categories. This study used a Relational Frame Theory (RFT) paradigm to assess and train this ability in three typically developing preschoolers and three individuals with autism spectrum disorder, all of whom had…
Applying ecological concepts to the management of widespread grass invasions [Chapter 7
Carla M. D' Antonio; Jeanne C. Chambers; Rhonda Loh; J. Tim Tunison
2009-01-01
The management of plant invasions has typically focused on the removal of invading populations or control of existing widespread species to unspecified but lower levels. Invasive plant management typically has not involved active restoration of background vegetation to reduce the likelihood of invader reestablishment. Here, we argue that land managers could benefit...
ERIC Educational Resources Information Center
Tobia, Valentina; Bonifacci, Paola; Ottaviani, Cristina; Borsato, Thomas; Marzocchi, Gian Marco
2016-01-01
The aim of this study was to investigate physiological activation during reading and control tasks in children with dyslexia and typical readers. Skin conductance response (SCR) recorded during four tasks involving reading aloud, reading silently, and describing illustrated stories aloud and silently was compared for children with dyslexia (n =…
APPLIED ORIGAMI. Origami of thick panels.
Chen, Yan; Peng, Rui; You, Zhong
2015-07-24
Origami patterns, including the rigid origami patterns in which flat inflexible sheets are joined by creases, are primarily created for zero-thickness sheets. In order to apply them to fold structures such as roofs, solar panels, and space mirrors, for which thickness cannot be disregarded, various methods have been suggested. However, they generally involve adding materials to or offsetting panels away from the idealized sheet without altering the kinematic model used to simulate folding. We develop a comprehensive kinematic synthesis for rigid origami of thick panels that differs from the existing kinematic model but is capable of reproducing motions identical to that of zero-thickness origami. The approach, proven to be effective for typical origami, can be readily applied to fold real engineering structures. Copyright © 2015, American Association for the Advancement of Science.
Current opinion in Alzheimer's disease therapy by nanotechnology-based approaches.
Ansari, Shakeel Ahmed; Satar, Rukhsana; Perveen, Asma; Ashraf, Ghulam Md
2017-03-01
Nanotechnology typically deals with the measuring and modeling of matter at the nanometer scale by incorporating the fields of engineering and technology. The most prominent feature of these engineered materials involves their manipulation/modification for imparting new functional properties. The current review covers the most recent findings of Alzheimer's disease (AD) therapeutics based on nanoscience and technology. Current studies involve the application of nanotechnology in developing novel diagnostic and therapeutic tools for neurological disorders. Nanotechnology-based approaches can be exploited for limiting/reversing these diseases and promoting functional regeneration of damaged neurons. These strategies offer neuroprotection by facilitating the delivery of drugs and small molecules more effectively across the blood-brain barrier. Nanotechnology-based approaches show promise in improving AD therapeutics. Further work on the synthesis and surface modification of nanoparticles, longer-term clinical trials, and attempts to increase their impact in treating AD are required.
Self-organizing periodicity in development: organ positioning in plants.
Bhatia, Neha; Heisler, Marcus G
2018-02-08
Periodic patterns during development often occur spontaneously through a process of self-organization. While reaction-diffusion mechanisms are often invoked, other types of mechanisms that involve cell-cell interactions and mechanical buckling have also been identified. Phyllotaxis, or the positioning of plant organs, has emerged as an excellent model system to study the self-organization of periodic patterns. At the macro scale, the regular spacing of organs on the growing plant shoot gives rise to the typical spiral and whorled arrangements of plant organs found in nature. In turn, this spacing relies on complex patterns of cell polarity that involve feedback between a signaling molecule - the plant hormone auxin - and its polar, cell-to-cell transport. Here, we review recent progress in understanding phyllotaxis and plant cell polarity and highlight the development of new tools that can help address the remaining gaps in our understanding. © 2018. Published by The Company of Biologists Ltd.
Quantum and Multidimensional Explanations in a Neurobiological Context of Mind.
Korf, Jakob
2015-08-01
This article examines the possible relevance of physical-mathematical multidimensional or quantum concepts aimed at understanding the (human) mind in a neurobiological context. Some typical features of the quantum and multidimensional concepts are briefly introduced, including entanglement, superposition, holonomic, and quantum field theories. Next, we consider neurobiological principles, such as the brain and its emerging (physical) mind, evolutionary and ontological origins, entropy, syntropy/neg-entropy, causation, and brain energy metabolism. In many biological processes, including biochemical conversions, protein folding, and sensory perception, the ubiquitous involvement of quantum mechanisms is well recognized. Quantum and multidimensional approaches might be expected to help describe and model both brain and mental processes, but an understanding of their direct involvement in mental activity, that is, without mediation by molecular processes, remains elusive. More work has to be done to bridge the gap between current neurobiological and physical-mathematical concepts with their associated quantum-mind theories. © The Author(s) 2014.
Selfishness as second-order altruism
Eldakar, Omar Tonsi; Wilson, David Sloan
2008-01-01
Selfishness is seldom considered a group-beneficial strategy. In the typical evolutionary formulation, altruism benefits the group, selfishness undermines altruism, and the purpose of the model is to identify mechanisms, such as kinship or reciprocity, that enable altruism to evolve. Recent models have explored punishment as an important mechanism favoring the evolution of altruism, but punishment can be costly to the punisher, making it a form of second-order altruism. This model identifies a strategy called “selfish punisher” that involves behaving selfishly in first-order interactions and altruistically in second-order interactions by punishing other selfish individuals. Selfish punishers cause selfishness to be a self-limiting strategy, enabling altruists to coexist in a stable equilibrium. This polymorphism can be regarded as a division of labor, or mutualism, in which the benefits obtained by first-order selfishness help to “pay” for second-order altruism. PMID:18448681
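The coexistence argument in this abstract can be illustrated with a toy replicator-dynamics simulation of three strategies: altruist, plain selfish, and "selfish punisher" (selfish in first-order interactions, punishing other selfish individuals in second-order interactions). The payoff parameters below are illustrative assumptions, and this mean-field sketch omits the population structure of the original model; it is not the authors' formulation.

```python
# Toy replicator dynamics for altruist / selfish / selfish-punisher strategies.
# Payoff values (b, c, f, p) are invented for illustration only.

def step(freqs, b=0.5, c=0.05, f=0.8, p=0.5, base=1.0):
    """One replicator update for frequencies (altruist, selfish, punisher)."""
    xa, xs, xp = freqs
    public_good = b * xa                 # everyone shares the altruists' benefit
    wa = base + public_good - c          # altruists pay the contribution cost
    ws = base + public_good - f * xp     # plain selfish are fined by punishers
    wp = base + public_good - p * xs     # punishers pay to punish the selfish
    wbar = xa * wa + xs * ws + xp * wp
    return (xa * wa / wbar, xs * ws / wbar, xp * wp / wbar)

freqs = (1 / 3, 1 / 3, 1 / 3)
for _ in range(500):
    freqs = step(freqs)
```

With these assumed payoffs, punishment makes plain selfishness self-limiting: the selfish frequency collapses while punishers persist, echoing the abstract's point that second-order altruism is "paid for" by first-order selfishness.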
High frequency sound propagation in a network of interconnecting streets
NASA Astrophysics Data System (ADS)
Hewett, D. P.
2012-12-01
We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.
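The predicted exponential decay in junction count can be sketched with a single-pathway power budget: each street segment retains a fixed fraction of the power (wall absorption) and each junction passes a fixed fraction into the chosen branch. The retention fractions below are invented placeholders, not parameters from Hewett's model.

```python
# Sketch of exponential acoustic power decay along one pathway through a
# street network. wall_retention and junction_split are illustrative values.

def path_power(p0, n_junctions, wall_retention=0.9, junction_split=0.3):
    """Power reaching the end of a pathway crossing n_junctions junctions,
    assuming a fixed per-segment wall-absorption loss and a fixed fraction
    of power redistributed into the chosen branch at each junction."""
    return p0 * (wall_retention * junction_split) ** n_junctions

powers = [path_power(1.0, n) for n in range(6)]
```

The constant per-junction factor reproduces the exponential decay in junction count described above; the algebraic decay along axial directions requires the full ray-integral model and is not captured by this sketch.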
Cosmology and accelerator tests of strongly interacting dark matter
Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...
2018-03-23
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes
Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A
2014-01-01
This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432
Murfee, Walter L.; Sweat, Richard S.; Tsubota, Ken-ichi; Gabhann, Feilim Mac; Khismatullin, Damir; Peirce, Shayn M.
2015-01-01
Microvascular network remodelling is a common denominator for multiple pathologies and involves both angiogenesis, defined as the sprouting of new capillaries, and network patterning associated with the organization and connectivity of existing vessels. Much of what we know about microvascular remodelling at the network, cellular and molecular scales has been derived from reductionist biological experiments, yet what happens when the experiments provide incomplete (or only qualitative) information? This review will emphasize the value of applying computational approaches to advance our understanding of the underlying mechanisms and effects of microvascular remodelling. Examples of individual computational models applied to each of the scales will highlight the potential of answering specific questions that cannot be answered using typical biological experimentation alone. Looking into the future, we will also identify the needs and challenges associated with integrating computational models across scales. PMID:25844149
[Model aeroplanes: a not to be ignored source of complex injuries].
Laback, C; Vasilyeva, A; Rappl, T; Lumenta, D; Giunta, R E; Kamolz, L
2013-12-01
With the incidence of work-related injuries decreasing, we continue to observe an unchanged trend in leisure-related accidents. As in any other hobby, model flying devices bear the risk for accidents among builders and flyers, ranging from skin lacerations to complicated and even life-threatening injuries. The fast-moving, razor-sharp propeller blades predominantly cause trauma to the hands and fingers, resulting in typical multiple parallel skin injuries also affecting structures deep to the dermis (e.g., tendons, vessels and nerves). The resultant clinical management involves complex reconstructive surgical procedures and prolonged rehabilitative follow-up. Improving the legal framework (e.g., warnings by the manufacturer) on the one hand, and providing informative action and sensitising those affected on the other, should form a basis for an altered prevention strategy to reduce model flying device-related injuries in the future. © Georg Thieme Verlag KG Stuttgart · New York.
Remote measurement of surface roughness, surface reflectance, and body reflectance with LiDAR.
Li, Xiaolu; Liang, Yu
2015-10-20
Light detection and ranging (LiDAR) intensity data are attracting increasing attention because of the great potential for use of such data in a variety of remote sensing applications. To fully investigate the data potential for target classification and identification, we carried out a series of experiments with typical urban building materials and employed our reconstructed built-in-lab LiDAR system. Received intensity data were analyzed on the basis of the derived bidirectional reflectance distribution function (BRDF) model and the established integration method. With an improved fitting algorithm, parameters involved in the BRDF model can be obtained to depict the surface characteristics. One of these parameters related to surface roughness was converted to a most used roughness parameter, the arithmetical mean deviation of the roughness profile (Ra), which can be used to validate the feasibility of the BRDF model in surface characterizations and performance evaluations.
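Fitting a roughness parameter to intensity-versus-angle data, as described above, can be illustrated with a brute-force least-squares fit of a generic one-parameter specular lobe. This is a toy stand-in: the paper's actual BRDF model, integration method, and improved fitting algorithm are not reproduced here, and the lobe shape and grid are assumptions.

```python
import math

# Toy fit of a single roughness parameter m to synthetic intensity data
# using a generic Beckmann-style specular lobe and a grid search.

def lobe(theta, m):
    """Beckmann-style specular lobe; m plays the role of an RMS-slope
    roughness parameter."""
    return math.exp(-math.tan(theta) ** 2 / (m * m))

angles = [i * 0.1 for i in range(12)]        # incidence angles in radians
measured = [lobe(t, 0.3) for t in angles]    # synthetic, noise-free "data"

def fit_roughness(angles, data, grid=None):
    """Return the grid value of m minimizing the sum of squared errors."""
    grid = grid or [0.05 * k for k in range(1, 21)]
    def sse(m):
        return sum((lobe(t, m) - d) ** 2 for t, d in zip(angles, data))
    return min(grid, key=sse)

best_m = fit_roughness(angles, measured)
```

In the paper, the recovered roughness parameter is further converted to the arithmetical mean deviation of the roughness profile (Ra); that conversion depends on the derived BRDF model and is omitted from this sketch.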
Newman, Andrew J; Hayes, Sarah H; Rao, Abhiram S; Allman, Brian L; Manohar, Senthilvelan; Ding, Dalian; Stolzberg, Daniel; Lobarinas, Edward; Mollendorf, Joseph C; Salvi, Richard
2015-03-15
Military personnel and civilians living in areas of armed conflict have increased risk of exposure to blast overpressures that can cause significant hearing loss and/or brain injury. The equipment used to simulate comparable blast overpressures in animal models within laboratory settings is typically very large and prohibitively expensive. To overcome the fiscal and space limitations introduced by previously reported blast wave generators, we developed a compact, low-cost blast wave generator to investigate the effects of blast exposures on the auditory system and brain. The blast wave generator was constructed largely from off-the-shelf components, and reliably produced blasts with peak sound pressures of up to 198 dB SPL (159.3 kPa) that were qualitatively similar to those produced from muzzle blasts or explosions. Exposure of adult rats to 3 blasts of 188 dB peak SPL (50.4 kPa) resulted in significant loss of cochlear hair cells, reduced outer hair cell function and a decrease in neurogenesis in the hippocampus. Existing blast wave generators are typically large, expensive, and not commercially available. The blast wave generator reported here provides a low-cost method of generating blast waves in a typical laboratory setting. This compact blast wave generator provides scientists with a low-cost device for investigating the biological mechanisms involved in blast wave injury to the rodent cochlea and brain that may model many of the damaging effects sustained by military personnel and civilians exposed to intense blasts. Copyright © 2015 Elsevier B.V. All rights reserved.
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-08-01
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies, whose follow-up duration is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation (projecting short-term evidence over a longer time horizon) is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice, and the degree of uncertainty surrounding the validity of any assumptions made, are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predicting long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. the mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
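The homogeneity assumption criticized above can be made concrete with a toy lifetime-cost extrapolation: a discounted, survival-weighted sum of per-cycle costs, computed once under a constant (homogeneous) annual cost and once under a time-varying one. The survival rate, discount rate, and cost trajectory are illustrative assumptions, not values from the paper.

```python
# Toy extrapolation of annual costs over a lifetime horizon, weighting each
# year's cost by survival and discounting it. All numbers are illustrative.

def lifetime_cost(annual_costs, survival_rate=0.95, discount=0.035):
    """Discounted, survival-weighted sum of per-cycle costs."""
    total, alive = 0.0, 1.0
    for year, cost in enumerate(annual_costs):
        total += alive * cost / (1 + discount) ** year
        alive *= survival_rate          # exponential survival assumption
    return total

horizon = 30
# Homogeneous assumption: the trial-period mean cost persists unchanged.
flat = lifetime_cost([1000.0] * horizon)
# Alternative: costs drift upward 3% a year beyond the trial period.
rising = lifetime_cost([1000.0 * 1.03 ** y for y in range(horizon)])
```

The gap between the two totals shows how sensitive a lifetime estimate is to the assumed temporal behaviour of costs, which is the paper's central concern.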
Adequacy of the Regular Early Education Classroom Environment for Students with Visual Impairment
ERIC Educational Resources Information Center
Brown, Cherylee M.; Packer, Tanya L.; Passmore, Anne
2013-01-01
This study describes the classroom environment that students with visual impairment typically experience in regular Australian early education. Adequacy of the classroom environment (teacher training and experience, teacher support, parent involvement, adult involvement, inclusive attitude, individualization of the curriculum, physical…
Treating Families of Bone Marrow Recipients and Donors
ERIC Educational Resources Information Center
Cohen, Marie; And Others
1977-01-01
Leukemia and aplastic anemia are beginning to be treated by bone marrow transplants, involving donors and recipients from the same family. Such intimate involvement in the patient's life-and-death struggles typically produces a family crisis and frequent maladaptive responses by various family members. (Author)
Bioremediation of oil-contaminated beaches typically involves fertilization with nutrients that are thought to limit the growth rate of hydrocarbon-degrading bacteria. Much of the available technology involves application of fertilizers that release nutrients in a water-soluble ...
Gupta, Vikas; Estrada, April D; Blakley, Ivory; Reid, Rob; Patel, Ketan; Meyer, Mason D; Andersen, Stig Uggerhøj; Brown, Allan F; Lila, Mary Ann; Loraine, Ann E
2015-01-01
Blueberries are a rich source of antioxidants and other beneficial compounds that can protect against disease. Identifying genes involved in synthesis of bioactive compounds could enable the breeding of berry varieties with enhanced health benefits. Toward this end, we annotated a previously sequenced draft blueberry genome assembly using RNA-Seq data from five stages of berry fruit development and ripening. Genome-guided assembly of RNA-Seq read alignments combined with output from ab initio gene finders produced around 60,000 gene models, of which more than half were similar to proteins from other species, typically the grape Vitis vinifera. Comparison of gene models to the PlantCyc database of metabolic pathway enzymes identified candidate genes involved in synthesis of bioactive compounds, including bixin, an apocarotenoid with potential disease-fighting properties, and defense-related cyanogenic glycosides, which are toxic. Cyanogenic glycoside (CG) biosynthetic enzymes were highly expressed in green fruit, and a candidate CG detoxification enzyme was up-regulated during fruit ripening. Candidate genes for ethylene, anthocyanin, and 400 other biosynthetic pathways were also identified. Homology-based annotation using Blast2GO and InterPro assigned Gene Ontology terms to around 15,000 genes. RNA-Seq expression profiling showed that blueberry growth, maturation, and ripening involve dynamic gene expression changes, including coordinated up- and down-regulation of metabolic pathway enzymes and transcriptional regulators. Analysis of RNA-seq alignments identified developmentally regulated alternative splicing, promoter use, and 3' end formation. We report genome sequence, gene models, functional annotations, and RNA-Seq expression data that provide an important new resource enabling high throughput studies in blueberry.
Combination of structural and sequence analyses provides hints on the evolution of the cytochrome c-mediated apoptosis.
Shalaeva, Daria N; Dibrova, Daria V; Galperin, Michael Y; Mulkidjanian, Armen Y
2015-05-27
Binding of cytochrome c, released from the damaged mitochondria, to the apoptotic protease activating factor 1 (Apaf-1) is a key event in the apoptotic signaling cascade. The binding triggers a major domain rearrangement in Apaf-1, which leads to oligomerization of Apaf-1/cytochrome c complexes into an apoptosome. Despite the availability of crystal structures of cytochrome c and Apaf-1 and cryo-electron microscopy models of the entire apoptosome, the binding mode of cytochrome c to Apaf-1, as well as the nature of the amino acid residues of Apaf-1 involved, remains obscure. We investigated the interaction between cytochrome c and Apaf-1 by combining several modeling approaches. We applied protein-protein docking and energy minimization, evaluated the resulting models of the Apaf-1/cytochrome c complex, and carried out a further analysis by means of molecular dynamics simulations. We ended up with a single model structure in which all the lysine residues of cytochrome c that are known to be functionally relevant were involved in forming salt bridges with acidic residues of Apaf-1. This model revealed three distinctive bifurcated salt bridges, each involving a single lysine residue of cytochrome c and two neighboring acidic residues of Apaf-1. Salt bridge-forming amino acids of Apaf-1 showed a clear evolutionary pattern within Metazoa, with pairs of acidic residues of Apaf-1, involved in bifurcated salt bridges, reaching their highest numbers in the sequences of vertebrates, in which the cytochrome c-mediated mechanism of apoptosome formation seems to be typical. The reported model of an Apaf-1/cytochrome c complex provides insights into the nature of protein-protein interactions that are hard to observe in crystallographic or electron microscopy studies. Bifurcated salt bridges can be expected to be stronger than simple salt bridges, and their formation might promote the conformational change of Apaf-1, leading to the formation of an apoptosome.
NASA Astrophysics Data System (ADS)
Mondal, Puspen; Manekar, Meghmalhar; Srivastava, A. K.; Roy, S. B.
2009-07-01
We present the results of magnetization measurements on an as-cast nanocrystalline Nb3Al superconductor embedded in a Nb-Al matrix. The typical grain size of Nb3Al ranges from about 2-8 nm, with the maximum number of grains at around 3.5 nm, as visualized using transmission electron microscopy. The isothermal magnetization hysteresis loops in the superconducting state can be reasonably fitted within the well-known Kim-Anderson critical-state model. By using the same fitting parameters, we calculate the variation in field with respect to distance inside the sample and show the existence of a critical state over length scales much larger than the typical size of the superconducting grains. Our results indicate that a bulk critical current is possible in a system comprising nanoparticles. The nonsuperconducting Nb-Al matrix thus appears to play a major role in the bulk current flow through the sample. The superconducting coherence length ξ is estimated to be around 3 nm, which is comparable to the typical grain size. The penetration depth λ is estimated to be about 94 nm, which is much larger than the largest of the superconducting grains. Our results could be useful for tuning the current-carrying capability of conductors made out of composite materials which involve superconducting nanoparticles.
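The Kim-Anderson picture invoked above can be sketched analytically. With the Kim form J_c(B) = k/(B0 + |B|), integrating the critical-state relation dB/dx = -mu0*J_c(B) from the surface gives (B0 + B)^2 = (B0 + Ba)^2 - 2*mu0*k*x, from which the field profile and full-penetration depth follow. The parameter values (B0, k) below are illustrative, not the fitting parameters from the paper.

```python
import math

# Field profile inside a slab under the Kim critical-state model,
# J_c(B) = k / (B0 + |B|). B0 (T) and k (T*A/m^2) are assumed values.

MU0 = 4 * math.pi * 1e-7

def field_profile(x, Ba, B0=0.1, k=1e9):
    """Local flux density (T) at depth x (m) below the surface, for an
    applied surface field Ba (T); zero beyond the flux front."""
    s = (B0 + Ba) ** 2 - 2 * MU0 * k * x
    return max(math.sqrt(s) - B0, 0.0) if s > 0 else 0.0

def penetration_depth(Ba, B0=0.1, k=1e9):
    """Depth (m) at which the field first falls to zero."""
    return ((B0 + Ba) ** 2 - B0 ** 2) / (2 * MU0 * k)
```

A critical state extending over depths far larger than the grain size, as the paper reports, corresponds here to a penetration depth much greater than the few-nanometre grains.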
Locke, Jill; Fuller, Erin Rotheram; Kasari, Connie
2014-01-01
This study examined the social impact of being a typical peer model as part of a social skills intervention for children with autism spectrum disorder (ASD). Participants were drawn from a randomized-controlled-treatment trial that examined the effects of targeted interventions on the social networks of 60 elementary-aged children with ASD. Results demonstrated that typical peer models had higher social network centrality, received friendships, friendship quality, and less loneliness than non-peer models. Peer models were also more likely to be connected with children with ASD than non-peer models at baseline and exit. These results suggest that typical peers can be socially connected to children with ASD, as well as other classmates, and maintain a strong and positive role within the classroom. PMID:22215436
Part-to-itself model inversion in process compensated resonance testing
NASA Astrophysics Data System (ADS)
Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Aldrin, John C.; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
Process Compensated Resonance Testing (PCRT) is a non-destructive evaluation (NDE) method involving the collection and analysis of a part's resonance spectrum to characterize its material or damage state. Prior work used the finite element method (FEM) to develop forward modeling and model inversion techniques. In many cases, the inversion problem can become confounded by multiple parameters having similar effects on a part's resonance frequencies. To reduce the influence of confounding parameters and isolate the change in a part (e.g., creep), a part-to-itself (PTI) approach can be taken. A PTI approach involves inverting only the change in resonance frequencies from the before and after states of a part. This approach reduces the possible inversion parameters to only those that change in response to in-service loads and damage mechanisms. To evaluate the effectiveness of using a PTI inversion approach, creep strain and material properties were estimated in virtual and real samples using FEM inversion. Virtual and real dog bone samples composed of nickel-based superalloy Mar-M-247 were examined. Virtual samples were modeled with typically observed variations in material properties and dimensions. Creep modeling was verified with the collected resonance spectra from an incrementally crept physical sample. All samples were inverted against a model space that allowed for change in the creep damage state and the material properties but was blind to initial part dimensions. Results quantified the capabilities of PTI inversion in evaluating creep strain and material properties, as well as its sensitivity to confounding initial dimensions.
SOCIAL AND NON-SOCIAL CUEING OF VISUOSPATIAL ATTENTION IN AUTISM AND TYPICAL DEVELOPMENT
Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.
2013-01-01
Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n=26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous, or unique; experiment 2 (total n=80: male and female children and adults) studied age and sex effects on gaze cueing. Gaze cueing appears endogenous and may strengthen in typical development. Experiment 3 tested exogenous, endogenous, and/or gaze-based orienting in 25 typical and 27 Autistic Spectrum Disorder (ASD) children. ASD children made more saccades, slowing their reaction times; however, exogenous and endogenous orienting, including gaze cueing, appear intact in ASD. PMID:20809377
Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation
Dayan, Peter; Berridge, Kent C.
2014-01-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
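The contrast drawn above between cached and prospective evaluation can be sketched in a few lines: a model-free value is trained by TD-style updates from experienced reward, while a model-based value is recomputed from a representation (here, just an outcome probability times current utility). The scenario and numbers are illustrative, not a model from the paper.

```python
# Toy contrast between a cached (model-free) Pavlovian cue value and a
# model-based value under outcome revaluation. Numbers are illustrative.

def td_train(utility, n_trials=200, alpha=0.1):
    """Cache a cue value via delta-rule updates from experienced utility."""
    v = 0.0
    for _ in range(n_trials):
        v += alpha * (utility - v)      # TD-style prediction-error update
    return v

p_reward = 1.0                           # cue reliably predicts the outcome
utility_before = 1.0
cached_value = td_train(utility_before)  # learned from retrospective experience

# Revaluation: the outcome's current utility changes (e.g. a new bodily state).
utility_after = 5.0
model_based_value = p_reward * utility_after   # recomputed prospectively
# cached_value stays stale until the cue-outcome pairing is re-experienced.
```

The instant jump in the model-based value, with the cached value unchanged, mirrors the Pavlovian revaluation evidence the review marshals against a purely model-free account.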
Inflation in the mixed Higgs-R2 model
NASA Astrophysics Data System (ADS)
He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi
2018-05-01
We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity, in addition to the Einstein R term. Detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by the R2 inflation and the Higgs inflation, implying that there is a close relation between this model and the R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in the parameter space involved is to modify the scalaron mass, so that the original mass parameter in the R2 inflation can deviate from its standard value when the non-minimal coupling between the Ricci scalar and the Higgs field is large enough.
Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.
Dayan, Peter; Berridge, Kent C
2014-06-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.
Climate Model Diagnostic Analyzer
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei
2015-01-01
The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. With an exploratory nature of climate data analyses and an explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.
At the forefront of thought: the effect of media exposure on airplane typicality.
Novick, Laura R
2003-12-01
The terrorist attacks of September 11, 2001 provided a unique opportunity to investigate the causal status of frequency on typicality for one exemplar of a common conceptual category--namely, the typicality of airplane as a member of the category of vehicles. The extensive media coverage following the attacks included numerous references to the hijacked airplanes and to the consequences of suspending air travel to and from the United States for several days. The present study, involving 152 undergraduates, assessed airplane typicality at three time points ranging from 5 h to 1 month after the attacks and then again at 4.5 months after the attacks. Airplane was judged to be a more typical vehicle for 1 month following the attacks, relative to a baseline calculated from data collected yearly for 5 years preceding the attacks. By 4.5 months, however, typicality was back to baseline.
Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.
2016-01-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
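The partitioned-survival bookkeeping described in this abstract (time in the progression state as the area between the OS and PFS curves) can be sketched numerically; the exponential hazards below are hypothetical, not the trial's data:

```python
import numpy as np

def trapezoid(y, x):
    """Area under y(x) by the trapezoid rule."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

def partitioned_survival_times(t, os_surv, pfs_surv):
    """Mean time in each health state from the two survival curves:
    progression-free time is the area under the PFS curve, and time in
    the (un-modeled) progression state is the area between the OS and
    PFS curves."""
    time_pf = trapezoid(pfs_surv, t)
    time_prog = trapezoid(os_surv - pfs_surv, t)
    return time_pf, time_prog

# Illustrative exponential curves (hypothetical hazards, not trial data)
t = np.linspace(0.0, 20.0, 2001)      # years
pfs = np.exp(-0.30 * t)               # progression-free survival
os_ = np.exp(-0.15 * t)               # overall survival
time_pf, time_prog = partitioned_survival_times(t, os_, pfs)
```

By construction the two state times sum to the area under the OS curve, which is the total (restricted) mean survival over the horizon.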
Synchronized Trajectories in a Climate "Supermodel"
NASA Astrophysics Data System (ADS)
Duane, Gregory; Schevenhoven, Francine; Selten, Frank
2017-04-01
Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
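The run-time coupling idea can be illustrated with two imperfect copies of a toy chaotic model; the Lorenz-63 system and the perturbed parameters below are stand-ins for the SPEEDO configuration, not part of the study:

```python
import numpy as np

def l63_tendency(s, sigma, rho, beta=8.0 / 3.0):
    """Lorenz-63 tendencies, used here as a toy 'climate model'."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run_supermodel(steps=20000, dt=0.005, k=10.0):
    """Two imperfect model copies (perturbed parameters) exchange
    inter-model nudging terms k*(other - self); with sufficient
    coupling their trajectories synchronize, so the pair behaves like
    a single model rather than an ensemble.  Returns the final
    inter-model distance."""
    rng = np.random.default_rng(0)
    a = rng.normal(size=3)
    b = rng.normal(size=3)
    for _ in range(steps):
        da = l63_tendency(a, sigma=9.5, rho=27.0)    # imperfect copy 1
        db = l63_tendency(b, sigma=10.5, rho=29.0)   # imperfect copy 2
        a = a + dt * (da + k * (b - a))              # inter-model nudging
        b = b + dt * (db + k * (a - b))
    return float(np.linalg.norm(a - b))
```

With the nudging switched off (k=0) the two chaotic copies diverge to attractor-scale separations; with it on, the residual distance stays small relative to the attractor size.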
Perspective: Evolutionary design of granular media and block copolymer patterns
NASA Astrophysics Data System (ADS)
Jaeger, Heinrich M.; de Pablo, Juan J.
2016-05-01
The creation of new materials "by design" is a process that starts from desired materials properties and proceeds to identify requirements for the constituent components. Such a process is challenging because it inverts the typical modeling approach, which starts from given micro-level components to predict macro-level properties. We describe how to tackle this inverse problem using concepts from evolutionary computation. These concepts have widespread applicability and open up new opportunities for design as well as discovery. Here we apply them to design tasks involving two very different classes of soft materials, shape-optimized granular media and nanopatterned block copolymer thin films.
A study of the mechanism of metal deposition by the laser-induced forward transfer process
NASA Astrophysics Data System (ADS)
Adrian, F. J.; Bohandy, J.; Kim, B. F.; Jette, A. N.; Thompson, P.
1987-10-01
The mechanism of the laser-induced forward transfer (LIFT) technique for transferring metal features from a film to a substrate is examined by using the one-dimensional thermal diffusion equation with a moving solid-melt boundary to model the heating, melting, and vaporization of the metal film by the laser. For typical LIFT conditions the calculations show that the back of the film (i.e., the part exposed to the laser) will reach the boiling point before the film melts through, which supports the qualitative picture that the LIFT process involves vapor-driven propulsion of metal from the film onto the target.
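A minimal sketch of the kind of 1-D thermal model described, with the laser flux applied at the back of the film; the material constants are illustrative, and the moving solid-melt boundary of the paper's model is omitted:

```python
import numpy as np

def heat_film(n=50, thickness=1e-6, alpha=1e-5, k_cond=70.0,
              flux=1e12, t_end=5e-9):
    """Explicit finite-difference solution of the 1-D heat equation in
    a metal film.  The laser flux enters at x=0 (the 'back' of the
    film); the far side is insulated.  Constants are illustrative, and
    melting/vaporization physics is omitted."""
    dx = thickness / n
    dt = 0.2 * dx**2 / alpha            # stability: dt <= dx^2 / (2 alpha)
    T = np.zeros(n + 1)                 # temperature rise above ambient (K)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        # interior diffusion update
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[0] = Tn[1] + flux * dx / k_cond   # laser flux boundary, -k dT/dx = q
        Tn[-1] = Tn[-2]                      # insulated far side
        T = Tn
    return T

T = heat_film()
```

Consistent with the qualitative picture in the abstract, the laser-side temperature T[0] runs far ahead of the far-side temperature on nanosecond timescales, since the thermal diffusion length is shorter than the film thickness.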
Amplified crossflow disturbances in the laminar boundary layer on swept wings with suction
NASA Technical Reports Server (NTRS)
Dagenhart, J. R.
1981-01-01
Solution charts of the Orr-Sommerfeld equation for stationary crossflow disturbances are presented for 10 typical velocity profiles on a swept laminar flow control wing. The critical crossflow Reynolds number is shown to be a function of a boundary layer shape factor. Amplification rates for crossflow disturbances are shown to be proportional to the maximum crossflow velocity. A computer stability program called MARIA, employing the amplification rate data for the 10 crossflow velocity profiles, is constructed. This code is shown to adequately approximate more involved computer stability codes using less than two percent as much computer time while retaining the essential physical disturbance growth model.
Díaz, Estrella; Vargas, Juan Pedro; Quintero, Esperanza; Gonzalo de la Casa, Luis; O'Donnell, Patricio; Lopez, Juan Carlos
2014-05-01
The dorsal striatum has been ascribed to different behavioral roles. While the lateral area (dls) is implicated in habitual actions, its medial part (dms) is linked to goal expectancy. According to this model, dls function includes representation of stimulus-response associations, but not of goals. Dls function has been typically analyzed with regard to movement, and there are no data indicating whether this region could process specific stimulus-outcome associations. To test this possibility, we analyzed the effects of dls and dms inactivation on the retrieval phase, and dms lesion on the acquisition phase of a latent inhibition procedure using two conditions, long and short presentations of the future conditioned stimulus. Contrary to current theories of basal ganglia function, we report evidence in favor of the dls involvement in cognitive processes of learning and retrieval. Moreover, we provide data about the sequential relationship between dms and dls, in which the dms could be involved, but it would not be critical, in new learning and the dls could be subsequently involved in consolidating cognitive routines. Copyright © 2014 Elsevier Inc. All rights reserved.
Photoactivated methods for enabling cartilage-to-cartilage tissue fixation
NASA Astrophysics Data System (ADS)
Sitterle, Valerie B.; Roberts, David W.
2003-06-01
The present study investigates whether photoactivated attachment of cartilage can provide a viable method for more effective repair of damaged articular surfaces by providing an alternative to sutures, barbs, or fibrin glues for initial fixation. Unlike artificial materials, biological constructs do not possess the initial strength for press-fitting and are instead sutured or pinned in place, typically inducing even more tissue trauma. A possible alternative involves the application of a photosensitive material, which is then photoactivated with a laser source to attach the implant and host tissues together in either a photothermal or photochemical process. The photothermal version of this method shows potential, but has been almost entirely applied to vascularized tissues. Cartilage, however, exhibits several characteristics that produce appreciable differences between applying and refining these techniques when compared to previous efforts involving vascularized tissues. Preliminary investigations involving photochemical photosensitizers based on singlet oxygen and electron transfer mechanisms are discussed, and characterization of the photodynamic effects on bulk collagen gels as a simplified model system using FTIR is performed. Previous efforts using photothermal welding applied to cartilaginous tissues are reviewed.
Relative Age Effects in a Cognitive Task: A Case Study of Youth Chess
ERIC Educational Resources Information Center
Helsen, Werner F.; Baker, Joseph; Schorer, Joerg; Steingröver, Christina; Wattie, Nick; Starkes, Janet L.
2016-01-01
The relative age effect (RAE) has been demonstrated in many youth and professional sports. In this study, we hypothesized that there would also be a RAE among youth chess players who are typically involved in a complex cognitive task without significant physical requirements. While typical RAEs have been observed in adult chess players, in this…
ERIC Educational Resources Information Center
Teixeira, Jennifer M.; Byers, Jessie Nedrow; Perez, Marilu G.; Holman, R. W.
2010-01-01
Experimental exercises within second-year-level organic laboratory manuals typically involve a statement of a principle that is then validated by student generation of data in a single experiment. These experiments are structured in the exact opposite order of the scientific method, in which data interpretation, typically from multiple related…
Muiño, Elena; Gallego-Fabrega, Cristina; Cullell, Natalia; Carrera, Caty; Torres, Nuria; Krupinski, Jurek; Roquer, Jaume; Montaner, Joan; Fernández-Cadenas, Israel
2017-09-13
CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) is caused by mutations in the NOTCH3 gene, affecting the number of cysteines in the extracellular domain of the receptor, causing protein misfolding and receptor aggregation. The pathogenic role of cysteine-sparing NOTCH3 missense mutations in patients with typical clinical CADASIL syndrome is unknown. The aim of this article is to describe these mutations to clarify if any could be potentially pathogenic. Articles on cysteine-sparing NOTCH3 missense mutations in patients with clinical suspicion of CADASIL were reviewed. Mutations were considered potentially pathogenic if patients had: (a) typical clinical CADASIL syndrome; (b) diffuse white matter hyperintensities; (c) the 33 NOTCH3 exons analyzed; (d) mutations that were not polymorphisms; and (e) granular osmiophilic material (GOM) deposits in the skin biopsy. Twenty-five different mutations were listed. Four fulfill the above criteria: p.R61W; p.R75P; p.D80G; and p.R213K. Patients carrying these mutations had typical clinical CADASIL syndrome and diffuse white matter hyperintensities, mostly without anterior temporal pole involvement. Cysteine-sparing NOTCH3 missense mutations are associated with typical clinical CADASIL syndrome and typical magnetic resonance imaging (MRI) findings, although with less involvement of the anterior temporal lobe. Hence, these mutations should be further studied to confirm their pathological role in CADASIL.
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations, and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
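The "equilibrium point" idea, solving a static force balance at each step instead of integrating a tiny lumped mass, can be sketched as follows; the linear-spring line forces and the anchor/stiffness values are illustrative assumptions, not the paper's line model:

```python
import numpy as np

def equilibrium_confluence(anchors, stiffness, tol=1e-10, max_iter=200):
    """Locate a massless confluence point as the static-equilibrium
    point where the line forces sum to zero.  For the linear springs
    assumed here (F_i = k_i (a_i - p)) the answer is simply the
    stiffness-weighted mean of the anchors; a fixed-point iteration is
    shown so the same structure extends to nonlinear line models."""
    p = np.mean(anchors, axis=0)                      # initial guess
    for _ in range(max_iter):
        forces = stiffness[:, None] * (anchors - p)   # per-line forces
        step = forces.sum(axis=0) / stiffness.sum()
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

# Three hypothetical line attachment points and stiffnesses
anchors = np.array([[0.0, 0.0, 0.0],
                    [2.0, 0.0, 0.0],
                    [1.0, 3.0, 0.0]])
k = np.array([1.0, 1.0, 2.0])
p_eq = equilibrium_confluence(anchors, k)
```

The residual force at the returned point is zero to solver tolerance, which is exactly the condition the abstract imposes at every integration step.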
Natale, Alessandra; Boeckmans, Joost; Desmae, Terry; De Boe, Veerle; De Kock, Joery; Vanhaecke, Tamara; Rogiers, Vera; Rodrigues, Robim M
2018-03-01
Phospholipidosis is a metabolic disorder characterized by intracellular accumulation of phospholipids. It can be caused by short-term or chronic exposure to cationic amphiphilic drugs (CADs). These compounds bind to phospholipids, leading to inhibition of their degradation and consequently to their accumulation in lysosomes. Drug-induced phospholipidosis (DIPL) is frequently at the basis of discontinuation of drug development and post-market drug withdrawal. Therefore, reliable human-relevant in vitro models must be developed to speed up the identification of compounds that are potential inducers of phospholipidosis. Here, hepatic cells derived from human skin (hSKP-HPC) were evaluated as an in vitro model for DIPL. These cells were exposed over time to amiodarone, a CAD known to induce phospholipidosis in humans. Transmission electron microscopy revealed the formation of the typical lamellar inclusions in the cell cytoplasm. Increase of phospholipids was already detected after 24 h exposure to amiodarone, whereas a significant increase of neutral lipid vesicles could be observed after 72 h. At the transcriptional level, the modulation of genes involved in DIPL was detected. These results provide a valuable indication of the applicability of hSKP-HPC for the quick assessment of drug-induced phospholipidosis in vitro, early in the drug development process. Copyright © 2017 Elsevier B.V. All rights reserved.
Tate, Cathy M.; Broshears, Robert E.; McKnight, Diane M.
1995-01-01
Acid mine drainage streams in the Rocky Mountains typically have few algal species and abundant iron oxide deposits which can sorb phosphate. An instream injection of radiolabeled phosphate (32PO4) into St. Kevin Gulch, an acid mine drainage stream, was used to test the ability of a dominant algal species, Ulothrix sp., to rapidly assimilate phosphate. Approximately 90% of the injected phosphate was removed from the water column in the 175-m stream reach. When shaded stream reaches were exposed to full sunlight after the injection ended, photoreductive dissolution of iron oxide released sorbed 32P, which was then also removed downstream. The removal from the stream was modeled as a first-order process by using a reactive solute transport transient storage model. Concentrations of 32P per unit mass of algae were typically 10-fold greater than concentrations in hydrous iron oxides. During the injection, concentrations of 32P increased in the cellular P pool containing soluble, low-molecular-weight compounds and confirmed direct algal uptake of 32PO4 from water. Mass balance calculations indicated that algal uptake and sorption on iron oxides were significant in removing phosphate. We conclude that in stream ecosystems, PO4 sorbed by iron oxides can act as a dynamic nutrient reservoir regulated by photoreduction.
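Modeling removal as a first-order process, as in the abstract, reduces in the simplest steady plug-flow view to an exponential decline with distance; the rate and velocity below are hypothetical, and the full analysis used a transient-storage solute transport model rather than this simplification:

```python
import math

def downstream_fraction(k, u, x):
    """Fraction of injected tracer remaining a distance x (m) downstream
    for a first-order uptake rate k (1/s) and stream velocity u (m/s),
    under steady plug flow: C(x)/C0 = exp(-k x / u)."""
    return math.exp(-k * x / u)

# ~90% removal over a 175-m reach corresponds to k*x/u near ln(10);
# k and u here are illustrative values chosen to show that regime.
frac = downstream_fraction(k=0.002, u=0.15, x=175.0)
```

The point of the sketch is the scaling: the observed ~90% removal pins down only the dimensionless group k*x/u, not k and u separately.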
Qu, Hong-En; Niu, Chuanxin M; Li, Si; Hao, Man-Zhao; Hu, Zi-Xiang; Xie, Qing; Lan, Ning
2017-12-01
Essential tremor, also referred to as familial tremor, is an autosomal dominant genetic disease and the most common movement disorder. It typically involves a postural and motor tremor of the hands, head or other part of the body. Essential tremor is driven by a central oscillation signal in the brain. However, the corticospinal mechanisms involved in the generation of essential tremor are unclear. Therefore, in this study, we used a neural computational model that includes both monosynaptic and multisynaptic corticospinal pathways interacting with a propriospinal neuronal network. A virtual arm model is driven by the central oscillation signal to simulate tremor activity behavior. Cortical descending commands are classified as alpha or gamma through monosynaptic or multisynaptic corticospinal pathways, which converge respectively on alpha or gamma motoneurons in the spinal cord. Several scenarios are evaluated based on the central oscillation signal passing down to the spinal motoneurons via each descending pathway. The simulated behaviors are compared with clinical essential tremor characteristics to identify the corticospinal pathways responsible for transmitting the central oscillation signal. A propriospinal neuron with strong cortical inhibition performs a gating function in the generation of essential tremor. Our results indicate that the propriospinal neuronal network is essential for relaying the central oscillation signal and the production of essential tremor.
Structural equation modeling for observational studies
Grace, J.B.
2008-01-01
Structural equation modeling (SEM) represents a framework for developing and evaluating complex hypotheses about systems. This method of data analysis differs from conventional univariate and multivariate approaches familiar to most biologists in several ways. First, SEMs are multiequational and capable of representing a wide array of complex hypotheses about how system components interrelate. Second, models are typically developed based on theoretical knowledge and designed to represent competing hypotheses about the processes responsible for data structure. Third, SEM is conceptually based on the analysis of covariance relations. Most commonly, solutions are obtained using maximum-likelihood solution procedures, although a variety of solution procedures are used, including Bayesian estimation. Numerous extensions give SEM a very high degree of flexibility in dealing with nonnormal data, categorical responses, latent variables, hierarchical structure, multigroup comparisons, nonlinearities, and other complicating factors. Structural equation modeling allows researchers to address a variety of questions about systems, such as how different processes work in concert, how the influences of perturbations cascade through systems, and about the relative importance of different influences. I present 2 example applications of SEM, one involving interactions among lynx (Lynx pardinus), mongooses (Herpestes ichneumon), and rabbits (Oryctolagus cuniculus), and the second involving anuran species richness. Many wildlife ecologists may find SEM useful for understanding how populations function within their environments. Along with the capability of the methodology comes a need for care in the proper application of SEM.
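The multiequational character of SEM can be conveyed with a minimal path-analysis sketch on simulated data; full SEM instead fits the model-implied covariance matrix, typically by maximum likelihood, so this is only a flavor of the structure, not the method itself:

```python
import numpy as np

def path_model_demo(n=5000, seed=1):
    """Simulate a simple causal chain x -> m -> y, then recover the two
    structural coefficients by least squares and form the indirect
    effect of x on y as their product.  True paths are 0.6 and 0.8."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    m = 0.6 * x + rng.normal(scale=0.5, size=n)   # first structural equation
    y = 0.8 * m + rng.normal(scale=0.5, size=n)   # second structural equation
    a = np.polyfit(x, m, 1)[0]                    # x -> m path estimate
    b = np.polyfit(m, y, 1)[0]                    # m -> y path estimate
    return a, b, a * b                            # indirect effect

a_hat, b_hat, indirect = path_model_demo()
```

Even this two-equation toy shows the kind of question SEM answers that single-equation regression does not: how an influence cascades through the system (here, the indirect effect a*b).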
Causality in Psychiatry: A Hybrid Symptom Network Construct Model
Young, Gerald
2015-01-01
Causality or etiology in psychiatry is marked by standard biomedical, reductionistic models (symptoms reflect the construct involved) that inform approaches to nosology, or classification, such as in the DSM-5 [Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; (1)]. However, network approaches to symptom interaction [i.e., symptoms are formative of the construct; e.g., (2), for posttraumatic stress disorder (PTSD)] are being developed that speak to bottom-up processes in mental disorder, in contrast to the typical top-down psychological construct approach. The present article presents a hybrid top-down, bottom-up model of the relationship between symptoms and mental disorder, viewing symptom expression and their causal complex as a reciprocally dynamic system with multiple levels, from lower-order symptoms in interaction to higher-order constructs affecting them. The hybrid model hinges on good understanding of systems theory in which it is embedded, so that the article reviews in depth non-linear dynamical systems theory (NLDST). The article applies the concept of emergent circular causality (3) to symptom development, as well. Conclusions consider that symptoms vary over several dimensions, including: subjectivity; objectivity; conscious motivation effort; and unconscious influences, and the degree to which individual (e.g., meaning) and universal (e.g., causal) processes are involved. The opposition between science and skepticism is a complex one that the article addresses in final comments. PMID:26635639
Efficient Modeling and Active Learning Discovery of Biological Responses
Naik, Armaghan W.; Kangas, Joshua D.; Langmead, Christopher J.; Murphy, Robert F.
2013-01-01
High throughput and high content screening involve determination of the effect of many compounds on a given target. As currently practiced, screening for each new target typically makes little use of information from screens of prior targets. Further, choices of compounds to advance to drug development are made without significant screening against off-target effects. The overall drug development process could be made more effective, as well as less expensive and time consuming, if potential effects of all compounds on all possible targets could be considered, yet the cost of such full experimentation would be prohibitive. In this paper, we describe a potential solution: probabilistic models that can be used to predict results for unmeasured combinations, and active learning algorithms for efficiently selecting which experiments to perform in order to build those models and determining when to stop. Using simulated and experimental data, we show that our approaches can produce powerful predictive models without exhaustive experimentation and can learn them much faster than by selecting experiments at random. PMID:24358322
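A generic uncertainty-sampling loop of the kind described can be sketched as follows; the 1-D threshold task and the hand-rolled logistic fit are illustrative stand-ins for the paper's compound-by-target models, not its actual algorithms:

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, iters=500):
    """Plain gradient-descent fit of a 1-D logistic model p(y=1|x)."""
    w, b = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        g = p - y
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

def active_learning_demo(n_pool=200, budget=15, seed=0):
    """Uncertainty sampling: refit the probabilistic model after each
    query and request the pool point whose predicted probability is
    closest to 0.5, rather than choosing experiments at random."""
    rng = np.random.default_rng(seed)
    X = np.sort(rng.uniform(-3.0, 3.0, n_pool))
    y = (X > 0.7).astype(float)                  # hidden ground truth
    labeled = list(rng.choice(n_pool, 2, replace=False))
    for _ in range(budget):
        w, b = fit_logistic(X[labeled], y[labeled])
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        unlabeled = [i for i in range(n_pool) if i not in labeled]
        labeled.append(min(unlabeled, key=lambda i: abs(p[i] - 0.5)))
    return fit_logistic(X[labeled], y[labeled])

w, b = active_learning_demo()
```

With a handful of queries concentrated near the decision boundary, the model localizes the hidden threshold far faster than the same budget spent on random experiments would.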
Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo
Dmitrieff, Serge; Rao, Madan; Sens, Pierre
2013-01-01
The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and large protein complexes––this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488
Models for financing the regulation of pharmaceutical promotion
2012-01-01
Pharmaceutical companies spend huge sums promoting their products whereas regulation of promotional activities is typically underfinanced. Any option for financing the monitoring and regulation of promotion should adhere to three basic principles: stability, predictability and lack of (perverse) ties between the level of financing and performance. This paper explores the strengths and weaknesses of six different models. All these six models considered here have positive and negative features and none may necessarily be ideal in any particular country. Different countries may choose to utilize a combination of two or more of these models in order to raise sufficient revenue. Financing of regulation of drug promotion should more than pay for itself through the prevention of unnecessary drug costs and the avoidance of adverse health effects due to inappropriate prescribing. However, it involves an initial outlay of money that is currently not being spent and many national governments, in both rich and poor countries, are unwilling to incur extra costs. PMID:22784944
Modeling of the Bosphorus exchange flow dynamics
NASA Astrophysics Data System (ADS)
Sözer, Adil; Özsoy, Emin
2017-04-01
The fundamental hydrodynamic behavior of the Bosphorus Strait is investigated through a numerical modeling study using alternative configurations of idealized or realistic geometry. Strait geometry and basin stratification conditions allow for hydraulic controls and are ideally suited to support the maximal-exchange regime, which determines the rate of exchange of waters originating from the adjacent Black and Mediterranean Seas for a given net transport. Steady-state hydraulic controls are demonstrated by densimetric Froude number calculations under layered flow approximations when corrections are applied to account for high velocity shears typically observed in the Bosphorus. Analyses of the model results reveal many observed features of the strait, including critical transitions at hydraulic controls and dissipation by turbulence and hydraulic jumps. It is found that the solution depends on initialization, especially with respect to the basin initial conditions. Significant differences between the controlled maximal-exchange and drowned solutions suggest that a detailed modeling implementation involving coupling with adjacent basins needs to take full account of the Bosphorus Strait in terms of the physical processes to be resolved.
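The layered-flow hydraulic criterion mentioned above can be computed directly; the layer speeds, depths, and densities below are illustrative Bosphorus-like values, not observations, and the shear correction discussed in the abstract is not included:

```python
def composite_froude_squared(u1, u2, h1, h2, rho1, rho2, g=9.81):
    """Composite densimetric Froude number G^2 = F1^2 + F2^2 of a
    two-layer exchange flow; G^2 = 1 marks a hydraulic control section
    in the layered approximation.  Layer 1 is the upper (lighter) one."""
    g_prime = g * (rho2 - rho1) / rho2       # reduced gravity
    f1_sq = u1**2 / (g_prime * h1)           # upper-layer Froude number^2
    f2_sq = u2**2 / (g_prime * h2)           # lower-layer Froude number^2
    return f1_sq + f2_sq

# Fresh Black Sea upper layer over a saline Mediterranean underflow
# (illustrative values): subcritical away from the control sections.
G2 = composite_froude_squared(u1=0.8, u2=-0.4, h1=25.0, h2=35.0,
                              rho1=1014.0, rho2=1028.0)
```

Raising the upper-layer speed (e.g. through a contraction) pushes G^2 past 1, the critical transition the model diagnostics look for.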
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goltz, M.N.; Oxley, M.E.
Aquifer cleanup efforts at contaminated sites frequently involve operation of a system of extraction wells. It has been found that contaminant load discharged by extraction wells typically declines with time, asymptotically approaching a residual level. Such behavior could be due to rate-limited desorption of an organic contaminant from aquifer solids. An analytical model is presented which accounts for rate-limited desorption of an organic solute during cleanup of a contaminated site. Model equations are presented which describe transport of a sorbing contaminant in a converging radial flow field, with sorption described by (1) equilibrium, (2) first-order rate, and (3) Fickian diffusion expressions. The model equations are solved in the Laplace domain and numerically inverted to simulate contaminant concentrations at an extraction well. A Laplace domain solution for the total contaminant mass remaining in the aquifer is also derived. It is shown that rate-limited sorption can have a significant impact upon aquifer remediation. Approximate equivalence among the various rate-limited models is also demonstrated.
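Laplace-domain solutions like these must be inverted numerically. The Gaver-Stehfest algorithm is one standard choice for smooth, non-oscillatory transforms; the abstract does not state which inversion scheme the authors used, so this is an assumed stand-in, verified here against a transform with a known inverse.

```python
import math

def stehfest_invert(F, t, N=12):
    """Invert a Laplace-domain function F(s) at time t > 0 using the
    Gaver-Stehfest algorithm (N must be even; N = 12 is a common choice
    in double precision)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2 / t)
    return total * ln2 / t

# Sanity check against a known pair: L{e^{-t}} = 1/(s + 1).
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The method only needs real-valued evaluations of F(s), which makes it convenient when the transform (as here) is available analytically but its inverse is not.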
MSC/NASTRAN Stress Analysis of Complete Models Subjected to Random and Quasi-Static Loads
NASA Technical Reports Server (NTRS)
Hampton, Roy W.
2000-01-01
Space payloads, such as those which fly on the Space Shuttle in Spacelab, are designed to withstand dynamic loads which consist of combined acoustic random loads and quasi-static acceleration loads. Methods for computing the payload stresses due to these loads are well known and appear in texts and NASA documents, but typically involve approximations such as the Miles' equation, as well as possible adjustments based on "modal participation factors." Alternatively, an existing capability in MSC/NASTRAN may be used to output exact root mean square [rms] stresses due to the random loads for any specified elements in the Finite Element Model. However, it is time consuming to use this methodology to obtain the rms stresses for the complete structural model and then combine them with the quasi-static loading induced stresses. Special processing was developed as described here to perform the stress analysis of all elements in the model using existing MSC/NASTRAN and MSC/PATRAN and UNIX utilities. Fail-safe and buckling analyses applications are also described.
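Miles' equation, the approximation named above, is short enough to sketch. It gives the rms response of a lightly damped single-degree-of-freedom system to a flat random input near its natural frequency. The 3-sigma combination rule in the second function is a common conservative convention assumed here for illustration, not necessarily the rule used in the special processing described.

```python
import math

def miles_rms_accel(fn_hz, q_factor, asd_g2_per_hz):
    """Miles' equation: rms acceleration (in g) of a single-DOF system
    with natural frequency fn (Hz) and amplification Q, driven by an
    acceleration spectral density ASD (g^2/Hz) that is flat near fn."""
    return math.sqrt(math.pi / 2.0 * fn_hz * q_factor * asd_g2_per_hz)

def combined_peak_stress(sigma_static, sigma_rms_random, n_sigma=3.0):
    """Quasi-static stress plus n-sigma random stress, a common
    conservative combination rule (assumed for illustration)."""
    return sigma_static + n_sigma * sigma_rms_random

# Illustrative numbers: fn = 100 Hz, Q = 10, ASD = 0.04 g^2/Hz
g_rms = miles_rms_accel(100.0, 10.0, 0.04)   # ~7.93 g rms
```

The appeal of the exact MSC/NASTRAN rms output described in the abstract is precisely that it avoids the single-mode, flat-spectrum assumptions baked into this formula.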
Engaging Undergraduates in Social Science Research: The Taking the Pulse of Saskatchewan Project
ERIC Educational Resources Information Center
Berdahl, Loleen
2014-01-01
Although student involvement in research and inquiry can advance undergraduate learning, there are limited opportunities for undergraduate students to be directly involved in social science research. Social science faculty members typically work outside of laboratory settings, with the limited research assistance work being completed by graduate…
43 CFR 10005.12 - Policy regarding the scope of measures to be included in the plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the site of the impact typically involves restoration or replacement. Off-site mitigation might involve protection, restoration, or enhancement of a similar resource value at a different location... responsibilities, the Commission sees an obligation to give priority to protection and restoration activities that...
Small School Ritual and Parent Involvement.
ERIC Educational Resources Information Center
Bushnell, Mary
This paper examines the ritual socialization of parents into a school community. Rituals may be mundane or sacred and typically involve actions that have transformative potential. In the context of groups, rituals may serve the purposes of identifying and constructing group identity, maintaining cohesion, and constructing and communicating values.…
A probabilistic NF2 relational algebra for integrated information retrieval and database systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuhr, N.; Roelleke, T.
The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect can also be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.
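A minimal sketch of two such redefined operators, assuming tuple independence as probabilistic relational models commonly do. The encoding of a relation as a list of (tuple, weight) pairs is an illustrative choice, not the paper's implementation.

```python
def p_join(rel1, rel2, key1, key2):
    """Probabilistic join: under independence, the weight of a joined
    tuple is the product of the input tuple weights."""
    out = []
    for t1, p1 in rel1:
        for t2, p2 in rel2:
            if t1[key1] == t2[key2]:
                out.append(({**t1, **t2}, p1 * p2))
    return out

def p_project(rel, attrs):
    """Probabilistic projection with duplicate elimination: duplicate
    tuples are merged with weight 1 - prod(1 - p_i), i.e. the
    probability that at least one source tuple is in the relation
    (independence assumed)."""
    merged = {}
    for t, p in rel:
        key = tuple(t[a] for a in attrs)
        merged[key] = 1.0 - (1.0 - merged.get(key, 0.0)) * (1.0 - p)
    return [(dict(zip(attrs, k)), p) for k, p in sorted(merged.items())]

# Weighted index terms of documents, as a probabilistic subrelation:
docs = [({'doc': 'd1', 'term': 'ir'}, 0.8),
        ({'doc': 'd2', 'term': 'ir'}, 0.4)]
query = [({'term': 'ir'}, 1.0)]
ranked = sorted(p_join(docs, query, 'term', 'term'),
                key=lambda tp: -tp[1])   # IR-style ranking by weight
```

Sorting the result by decreasing weight reproduces the ranking behaviour the abstract describes.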
Spatial self-organization in hybrid models of multicellular adhesion
NASA Astrophysics Data System (ADS)
Bonforti, Adriano; Duran-Nebreda, Salva; Montañez, Raúl; Solé, Ricard
2016-10-01
Spatial self-organization emerges in distributed systems exhibiting local interactions when nonlinearities and the appropriate propagation of signals are at work. These kinds of phenomena can be modeled with different frameworks, typically cellular automata or reaction-diffusion systems. A different class of dynamical processes involves the correlated movement of agents over space, which can be mediated through chemotactic movement or minimization of cell-cell interaction energy. A classic example of the latter is given by the formation of spatially segregated assemblies when cells display differential adhesion. Here, we consider a new class of dynamical models, involving cell adhesion between two stochastically exchangeable cell states as a minimal model capable of exhibiting well-defined, ordered spatial patterns. Our results suggest that a whole space of pattern-forming rules is hosted by the combination of physical differential adhesion and the value of probabilities modulating cell phenotypic switching, showing that Turing-like patterns can be obtained without resorting to reaction-diffusion processes. If the model is expanded by allowing cells to proliferate and die in an environment where diffusible nutrient and toxic waste are at play, different phases are observed, characterized by regularly spaced patterns. The analysis of the parameter space reveals that certain phases reach higher population levels than other modes of organization. A detailed exploration of the mean-field theory is also presented. Finally, we let populations of cells with different adhesion matrices compete for reproduction, showing that, in our model, structural organization can improve the fitness of a given cell population. The implications of these results for ecological and evolutionary models of pattern formation and the emergence of multicellularity are outlined.
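The differential-adhesion ingredient of such hybrid models can be sketched as a contact energy over neighbouring cells plus a stochastic update that either swaps neighbours (Metropolis rule) or switches a cell's phenotype. The adhesion matrix J, the switching probability, and the update rule below are illustrative assumptions, not the authors' exact model.

```python
import math
import random

# Contact energies between the two cell states A (0) and B (1):
# like-like contacts are cheaper, which favours segregation.
J = [[1.0, 3.0],
     [3.0, 1.0]]

def grid_energy(grid):
    """Total adhesion energy over horizontal/vertical neighbour pairs
    on a periodic square lattice (each bond counted once via the
    right/down neighbours)."""
    n = len(grid)
    e = 0.0
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = (i + di) % n, (j + dj) % n
                e += J[grid[i][j]][grid[ni][nj]]
    return e

def metropolis_step(grid, temp, p_switch, rng):
    """One update: with probability p_switch, flip a random cell's
    state (stochastic phenotype exchange); otherwise attempt a random
    neighbour swap, accepted if it lowers the energy or with Boltzmann
    probability exp(-dE/T).  Recomputing the full energy is O(n^2) but
    keeps the sketch simple."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    if rng.random() < p_switch:
        grid[i][j] = 1 - grid[i][j]
        return
    di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    ni, nj = (i + di) % n, (j + dj) % n
    before = grid_energy(grid)
    grid[i][j], grid[ni][nj] = grid[ni][nj], grid[i][j]
    delta = grid_energy(grid) - before
    if delta > 0 and rng.random() >= math.exp(-delta / temp):
        grid[i][j], grid[ni][nj] = grid[ni][nj], grid[i][j]  # reject
```

With p_switch = 0 this reduces to plain differential-adhesion sorting; nonzero switching probabilities open the larger rule space the abstract describes.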
Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian
2015-12-01
Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
Fridriksson, Julius; den Ouden, Dirk-Bart; Hillis, Argye E; Hickok, Gregory; Rorden, Chris; Basilakos, Alexandra; Yourganov, Grigori; Bonilha, Leonardo
2018-01-17
In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. 
This latter finding explains why patients with seemingly disparate lesion locations often experience similar impairments on given subtests. Namely, these individuals' cortical damage, although dissimilar, affects a broad cortical network that plays a role in carrying out a given speech or language task. The current data suggest this is a more accurate characterization than ascribing specific lesion locations as responsible for specific language deficits. © The Author(s) (2018). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Terband, H; Maassen, B; Guenther, F H; Brumberg, J
2014-01-01
Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
The Importance of Modelling in the Teaching and Popularization of Science.
ERIC Educational Resources Information Center
Giordan, Andre
1991-01-01
Discusses the epistemology and typical applications of learning models, focusing on practical methods to operationally introduce the distinctive, allosteric models into the educational environment. Allosteric learning models strive to minimize the characteristic resistance that learners typically exhibit when confronted with the need to reorganize or…
Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies
2006-07-01
and the use of lightweight portable robotic sensor platforms. 5 robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility
ERIC Educational Resources Information Center
Jarrold, Christopher; Thorn, Annabel S. C.; Stephens, Emma
2009-01-01
This study examined the correlates of new word learning in a sample of 64 typically developing children between 5 and 8 years of age and a group of 22 teenagers and young adults with Down syndrome. Verbal short-term memory and phonological awareness skills were assessed to determine whether learning new words involved accurately representing…
ERIC Educational Resources Information Center
Nowakowski, Matilda E.; Tasker, Susan L.; Cunningham, Charles E.; McHolm, Angela E.; Edison, Shannon; St. Pierre, Jeff; Boyle, Michael H.; Schmidt, Louis A.
2011-01-01
Although joint attention processes are known to play an important role in adaptive social behavior in typical development, we know little about these processes in clinical child populations. We compared early school age children with selective mutism (SM; n = 19) versus mixed anxiety (MA; n = 18) and community controls (CC; n = 26) on joint…
Intraepidermal Merkel cell carcinoma: A case series of a rare entity with clinical follow up.
Jour, George; Aung, Phyu P; Rozas-Muñoz, Eduardo; Curry, Johnathan L; Prieto, Victor; Ivan, Doina
2017-08-01
Merkel cell carcinoma (MCC) is a rare but aggressive cutaneous carcinoma. MCC typically involves dermis and although epidermotropism has been reported, MCC strictly intraepidermal or in situ (MCCIS) is exceedingly rare. Most of the cases of MCCIS described so far have other associated lesions, such as squamous or basal cell carcinoma, actinic keratosis and so on. Herein, we describe 3 patients with MCC strictly in situ, without a dermal component. Our patients were elderly; two of the lesions involved the head and neck area and one was on a finger. All tumors were strictly intraepidermal in the diagnostic biopsies, and had histomorphologic features and an immunohistochemical profile supporting the diagnosis of MCC. Excisional biopsies were performed in 2 cases and failed to reveal dermal involvement by MCC or other associated malignancies. Our findings raise awareness that MCC strictly in situ does exist and should be included in the differential diagnosis of Paget's or extramammary Paget's disease, pagetoid squamous cell carcinoma, melanoma and other neoplasms that typically show histologically pagetoid extension of neoplastic cells. Considering the limited number of cases reported to date, the diagnosis of isolated MCCIS should not warrant a change in management from the typical MCC. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Agent-Based Modeling in Molecular Systems Biology.
Soheilypour, Mohammad; Mofrad, Mohammad R K
2018-07-01
Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.
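A toy illustration of the ABM idea the review surveys: molecules as discrete agents that random-walk on a lattice and react when they meet. The A + B → C rule, the one-dimensional lattice, and the data layout are invented for illustration; real molecular ABMs track far richer state and geometry.

```python
import random

def abm_step(agents, size, rng):
    """One ABM time step: every agent random-walks one lattice site
    (periodic boundary); then any A and B sharing a site react
    irreversibly, A + B -> C."""
    for a in agents:
        a['x'] = (a['x'] + rng.choice((-1, 1))) % size
    # Group agents by site and apply the local reaction rule.
    by_site = {}
    for a in agents:
        by_site.setdefault(a['x'], []).append(a)
    for site_agents in by_site.values():
        As = [a for a in site_agents if a['kind'] == 'A']
        Bs = [a for a in site_agents if a['kind'] == 'B']
        for a, b in zip(As, Bs):       # pair up reactants on this site
            a['kind'] = 'C'            # one product molecule...
            b['kind'] = 'consumed'     # ...the partner is removed
    agents[:] = [a for a in agents if a['kind'] != 'consumed']
```

Because each reaction consumes exactly one A and one B, the difference in their counts is conserved, which is the kind of emergent bookkeeping check these models make easy.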
Exploring the southern ocean response to climate change
NASA Technical Reports Server (NTRS)
Martinson, Douglas G.; Rind, David; Parkinson, Claire
1993-01-01
The purpose of this project was to couple a regional (Southern Ocean) ocean/sea ice model to the existing Goddard Institute for Space Science (GISS) atmospheric general circulation model (GCM). This modification recognizes: the relative isolation of the Southern Ocean; the need to account, prognostically, for the significant air/sea/ice interaction through all involved components; and the advantage of translating the atmospheric lower boundary (typically the rapidly changing ocean surface) to a level that is consistent with the physical response times governing the system evolution (that is, to the base of the fast responding ocean surface layer). The deeper ocean beneath this layer varies on time scales several orders of magnitude slower than the atmosphere and surface ocean, and therefore the boundary between the upper and deep ocean represents a more reasonable fixed boundary condition.
Grip force coordination during bimanual tasks in unilateral cerebral palsy.
Islam, Mominul; Gordon, Andrew M; Sköld, Annika; Forssberg, Hans; Eliasson, Ann-Christin
2011-10-01
The aim of the study was to investigate coordination of fingertip forces during an asymmetrical bimanual task in children with unilateral cerebral palsy (CP). Twelve participants (six males, six females; mean age 14y 4mo, SD 3.3y; range 9-20y) with unilateral CP (eight right-sided, four left-sided) and 15 age-matched typically developing participants (five males, 10 females; mean age 14y 3mo, SD 2.9y; range 9-18y) were included. Participants were instructed to hold custom-made grip devices in each hand and place one device on top of the other. The grip force and load force were recorded simultaneously in both hands. Temporal coordination between the two hands was impaired in the participants with CP (compared with that in typically developing participants), that is, they initiated the task by decreasing grip force in the releasing hand before increasing the force in the holding hand. The grip force increase in the holding hand was also smaller in participants with CP (involved hand/non-dominant hand releasing, p<0.001; non-involved hand/dominant hand releasing, p=0.007), indicating deficient scaling of force amplitude. The impairment was greater when participants with CP used their non-involved hand as the holding hand. Temporal coordination and scaling of fingertip forces were impaired in both hands in participants with CP. The non-involved hand was strongly affected by activity in the involved hand, which may explain why children with unilateral CP prefer to use only one hand during tasks that are typically performed with both hands. © The Authors. Developmental Medicine & Child Neurology © 2011 Mac Keith Press.
Using a dynamic model to assess trends in land degradation by water erosion in Spanish Rangelands
NASA Astrophysics Data System (ADS)
Ibáñez, Javier; Francisco Lavado-Contador, Joaquín; Schnabel, Susanne; Pulido-Fernández, Manuel; Martínez Valderrama, Jaime
2014-05-01
This work presents a model aimed at evaluating land degradation by water erosion in dehesas and montados of the Iberian Peninsula, that constitute valuable rangelands in the area. A multidisciplinary dynamic model was built including weather, biophysical and economic variables that reflect the main causes and processes affecting sheet erosion on hillsides of the study areas. The model has two main and two derived purposes: Purpose 1: Assessing the risk of degradation that a land-use system is running. Derived purpose 1: Early warning about land-use systems that are particularly threatened by degradation. Purpose 2: Assessing the degree to which different factors would hasten degradation if they changed from the typical values they show at present. Derived purpose 2: Evaluating the role of human activities on degradation. Model variables and parameters have been calibrated for a typical open woodland rangeland (dehesa or montado) defined along 22 working units selected from 10 representative farms and distributed throughout the Spanish region of Extremadura. The model is the basis for a straightforward assessment methodology which is summarized by the three following points: i) The risk of losing a given amount of soil before a given number of years was specifically estimated as the percentage of 1000 simulations where such a loss occurs, being the simulations run under randomly-generated scenarios of rainfall amount and intensity and meat and supplemental feed market prices; ii) Statistics about the length of time that a given amount of soil takes to be lost were calculated over 1000 stochastic simulations run until year 1000, thereby ensuring that such amount of soil has been lost in all of the simulations, i.e. the total risk is 100%; iii) Exogenous factors potentially affecting degradation, mainly climatic and economic, were ranked in order of importance by means of a sensitivity analysis. 
Particularly remarkable in terms of model performance is the major role played in our case study by two positive feedback loops in which the erosion rate is involved. These loops cause erosion to accelerate over time, thereby outweighing the effect of the negative feedbacks also involved in the erosion rate. The estimated lengths of time to lose the upper 5, 10, 15 and 20 cm of the soil (with an initial depth of 23.4 cm) correspond to 138, 245, 317 and 360 years, respectively. The importance of climatic factors on soil removal considerably exceeds that of the economic ones, which showed low impacts on the final model results.
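The risk estimate described in point (i) — the fraction of stochastic simulations in which a given soil loss occurs before a given horizon — can be sketched as follows. The annual erosion model inside the loop is a hypothetical stand-in (a lognormal rainfall-driven rate amplified by a positive feedback on accumulated loss, echoing the loops described above), not the calibrated dehesa model; all parameter values are invented.

```python
import random

def risk_of_loss(threshold_cm, horizon_yr, n_sims=1000, seed=1):
    """Estimate the risk that cumulative soil loss exceeds
    `threshold_cm` within `horizon_yr` years, as the fraction of
    Monte Carlo simulations in which the loss occurs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        lost = 0.0
        for _ in range(horizon_yr):
            # Hypothetical annual loss (cm/yr): lognormal rainfall
            # driver, amplified by a positive feedback on prior loss.
            base = rng.lognormvariate(-3.5, 0.8)
            lost += base * (1.0 + 0.05 * lost)
            if lost >= threshold_cm:
                hits += 1
                break
    return hits / n_sims
```

Ranking exogenous drivers, as in point (iii), then amounts to re-running this estimate while perturbing one input distribution at a time.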
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). Mathematical and numerical complexity, both of the tool itself and of the specific conceptual model, can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench -Subsurface Environmental Simulation Benchmarking- workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different sub-problems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in each benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes.
Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
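A typical "inert solute transport" tier of such a benchmark checks codes against a closed-form solution. The Ogata-Banks solution for 1-D advection-dispersion with continuous injection at the inlet is a standard reference of this kind; the parameter values in the example are arbitrary.

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks analytical solution of 1-D advection-dispersion,
    c(x,0) = 0, c(0,t) = c0: the relative concentration at distance x
    and time t for seepage velocity v and dispersion coefficient D.
    (The exp term can overflow when v*x/D is large; fine for the
    moderate Peclet numbers used here.)"""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# At the centre of the front (x = v*t) the solution sits just above
# c0/2, and concentration decreases monotonically with distance.
c_mid = ogata_banks(x=1.0, t=10.0, v=0.1, D=0.01)
```

In a tiered benchmark, agreement of each participating code with this curve qualifies its transport solver before the coupled geochemistry is switched on.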
Fatal crash involvement and laws against alcohol-impaired driving.
Zador, P L; Lund, A K; Fields, M; Weinberg, K
1989-01-01
It is estimated that in 1985 about 1,560 fewer drivers were involved in fatal crashes because of three types of drinking-driving laws. The laws studied were per se laws that define driving under the influence using blood alcohol concentration (BAC) thresholds; laws that provide for administrative license suspension or revocation prior to conviction for driving under the influence (often referred to as "administrative per se" laws); and laws that mandate jail or community service for first convictions of driving under the influence. It is estimated that if all 48 of the contiguous states adopted laws similar to those studied here, and if these new laws had effects comparable to those reported here, another 2,600 fatal driver involvements could be prevented each year. During hours when typically at least half of all fatally injured drivers have a BAC over 0.10 percent, administrative suspension/revocation is estimated to reduce the involvement of drivers in fatal crashes by about 9 percent; during the same hours, first offense mandatory jail/community service laws are estimated to have reduced driver involvement by about 6 percent. The effect of per se laws was estimated to be a 6 percent reduction during hours when fatal crashes typically are less likely to involve alcohol. These results are based on analyses of drivers involved in fatal crashes in the 48 contiguous states of the United States during the years 1978 to 1985.
Pozza, Giandomenico; Borgo, Stefano; Oltramari, Alessandro; Contalbrigo, Laura; Marangon, Stefano
2016-09-08
Ontologies are widely used both in the life sciences and in the management of public and private companies. Typically, the different offices in an organization develop their own models and related ontologies to capture specific tasks and goals. Although there might be an overall coordination, the use of distinct ontologies can jeopardize the integration of data across the organization since data sharing and reusability are sensitive to modeling choices. The paper provides a study of the entities that are typically found at the reception, analysis and report phases in public institutes in the life science domain. Ontological considerations and techniques are introduced and their implementation exemplified by studying the Istituto Zooprofilattico Sperimentale delle Venezie (IZSVe), a public veterinarian institute with different geographical locations and several laboratories. Different modeling issues are discussed like the identification and characterization of the main entities in these phases; the classification of the (types of) data; the clarification of the contexts and the roles of the involved entities. The study is based on a foundational ontology and shows how it can be extended to a comprehensive and coherent framework comprising the institute's different roles, processes and data. In particular, it shows how to use notions lying at the borderline between ontology and applications, like that of knowledge object. The paper aims to help the modeler to understand the core viewpoint of the organization and to improve data transparency. The study shows that the entities at play can be analyzed within a single ontological perspective allowing us to isolate a single ontological framework for the whole organization. This facilitates the development of coherent representations of the entities and related data, and fosters the use of integrated software for data management and reasoning across the company.
NASA Astrophysics Data System (ADS)
Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.
2017-09-01
We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from minerals and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and commonly used geochemical software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
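The propagation of equilibrium-constant uncertainty to the CO2 output can be sketched with plain Monte Carlo sampling. The direct mapping from a sampled log K to pCO2 below is a schematic stand-in for the full CCR equilibrium chemistry, and the Gaussian model for log K is an assumption, used only to show the propagation mechanics.

```python
import math
import random

def pco2_uncertainty(logk_mean, logk_sd, n=5000, seed=42):
    """Propagate a Gaussian uncertainty on log10(K) to the CO2 partial
    pressure via the schematic relation pCO2 = 10**logK, returning the
    sample mean and standard deviation of pCO2 (in arbitrary units)."""
    rng = random.Random(seed)
    samples = [10.0 ** rng.gauss(logk_mean, logk_sd) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)
```

Because the mapping is exponential, even a modest spread in log K produces a strongly skewed pCO2 distribution, which is why the abstract frames the results probabilistically (orders of magnitude) rather than as point estimates.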
Reading in dyslexia across literacy development: A longitudinal study of effective connectivity.
Morken, Frøydis; Helland, Turid; Hugdahl, Kenneth; Specht, Karsten
2017-01-01
Dyslexia is a literacy disorder affecting the efficient acquisition of reading and writing skills. The disorder is neurobiological in origin. Due to its developmental nature, longitudinal studies of dyslexia are of the essence. They are, however, relatively scarce. The present study took a longitudinal approach to cortical connectivity of brain imaging data in reading tasks in children with dyslexia and children with typical reading development. The participants were followed with repeated measurements through Pre-literacy (6 years old), Emergent Literacy (8 years old) and Literacy (12 years old) stages, using Dynamic Causal Modelling (DCM) when analysing functional magnetic resonance imaging (fMRI) data. Even though there are a few longitudinal studies on effective connectivity in typical reading, to our knowledge, no studies have previously investigated these issues in relation to dyslexia. We set up a model of a brain reading network involving five cortical regions (inferior frontal gyrus, precentral gyrus, superior temporal gyrus, inferior parietal lobule, and occipito-temporal cortex). Using DCM, connectivity measures were calculated for each connection in the model. These measures were further analysed using factorial ANOVA. The results showed that the difference between groups centred on connections going to and from the inferior frontal gyrus (two connections) and the occipito-temporal cortex (three connections). For all five connections, the typical group showed stable or decreasing connectivity measures. The dyslexia group, on the other hand, showed a marked up-regulation (occipito-temporal connections) or down-regulation (inferior frontal gyrus connections) from 6 years to 8 years, followed by normalization from 8 years to 12 years. We interpret this as a developmental delay in the dyslexia group during the Pre-literacy and Emergent Literacy stages. This delay could possibly be detrimental to literacy development.
By age 12, there was no statistically significant difference in connectivity between the groups, but differences in literacy skills were still present, and were in fact larger than when measured at younger ages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Constrained exceptional supersymmetric standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athron, P.; King, S. F.; Miller, D. J.
2009-08-01
We propose and study a constrained version of the exceptional supersymmetric standard model (E{sub 6}SSM), which we call the cE{sub 6}SSM, based on a universal high energy scalar mass m{sub 0}, trilinear scalar coupling A{sub 0} and gaugino mass M{sub 1/2}. We derive the renormalization group (RG) equations for the cE{sub 6}SSM, including the extra U(1){sub N} gauge factor and the low-energy matter content involving three 27 representations of E{sub 6}. We perform a numerical RG analysis for the cE{sub 6}SSM, imposing the usual low-energy experimental constraints and successful electroweak symmetry breaking. Our analysis reveals that the sparticle spectrum of the cE{sub 6}SSM involves a light gluino, two light neutralinos, and a light chargino. Furthermore, although the squarks, sleptons, and Z{sup '} boson are typically heavy, the exotic quarks and squarks can also be relatively light. We finally specify a set of benchmark points, which correspond to particle spectra, production modes, and decay patterns peculiar to the cE{sub 6}SSM, altogether leading to spectacular new physics signals at the Large Hadron Collider.
Fluctuations of the gluon distribution from the small- x effective action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitru, Adrian; Skokov, Vladimir
The computation of observables in high-energy QCD involves an average over stochastic semiclassical small-x gluon fields. The weight of various configurations is determined by the effective action. We introduce a method to study fluctuations of observables, functionals of the small-x fields, which does not explicitly involve dipoles. We integrate out those fluctuations of the semiclassical gluon field under which a given observable is invariant. Thereby we obtain the effective potential for that observable describing its fluctuations about the average. Here, we determine explicitly the effective potential for the covariant gauge gluon distribution both for the McLerran-Venugopalan (MV) model and for a (nonlocal) Gaussian approximation for the small-x effective action. This provides insight into the correlation of fluctuations of the number of hard gluons versus their typical transverse momentum. We find that the spectral shape of the fluctuations of the gluon distribution is fundamentally different in the MV model, where there is a pileup of gluons near the saturation scale, versus the solution of the small-x JIMWLK renormalization group, which generates essentially scale-invariant fluctuations above the absorptive boundary set by the saturation scale.
Kausel, Wilfried; Chatziioannou, Vasileios; Moore, Thomas R; Gorman, Britta R; Rokni, Michelle
2015-06-01
Previous work has demonstrated that structural vibrations of brass wind instruments can audibly affect the radiated sound. Furthermore, these broadband effects are not explainable by assuming perfect coincidence of the frequency of elliptical structural modes with air column resonances. In this work a mechanism is proposed that has the potential to explain the broadband influences of structural vibrations on acoustical characteristics such as input impedance, transfer function, and radiated sound. The proposed mechanism involves the coupling of axial bell vibrations to the internal air column. The acoustical effects of such axial bell vibrations have been studied by extending an existing transmission line model to include the effects of a parasitic flow into vibrating walls, as well as distributed sound pressure sources due to periodic volume fluctuations in a duct with oscillating boundaries. The magnitude of these influences in typical trumpet bells, as well as in a complete instrument with an unbraced loop, has been studied theoretically. The model results in predictions of input impedance and acoustical transfer function differences that are approximately 1 dB for straight instruments and significantly higher when coiled tubes are involved or when very thin brass is used.
Initiating heavy-atom-based phasing by multi-dimensional molecular replacement.
Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul
2016-03-01
To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.
Evaluation of process errors in bed load sampling using a Dune Model
Gomez, Basil; Troutman, Brent M.
1997-01-01
Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.
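The core sampling-error argument can be caricatured in a few lines. The sketch below is an illustrative stand-in, not the authors' dune model: instantaneous transport rates are drawn from a skewed (gamma) distribution to mimic the passage of bed forms, and the error of the sample mean shrinks roughly as 1/sqrt(n) as more samples are collected.

```python
import numpy as np

# Toy demonstration that averaging more bed load samples reduces the error
# of the estimated mean transport rate. Distribution and parameters are
# assumptions for illustration only.
rng = np.random.default_rng(42)

true_rate = 10.0                           # arbitrary transport units

def sample_mean_error(n_samples, n_trials=2000):
    # gamma-distributed instantaneous rates mimic a skewed transport series
    draws = rng.gamma(shape=2.0, scale=true_rate / 2.0,
                      size=(n_trials, n_samples))
    # average absolute error of the sample mean across many repeated surveys
    return float(np.abs(draws.mean(axis=1) - true_rate).mean())

for n in (5, 20, 40):
    print(n, round(sample_mean_error(n), 3))
```

This is only the random-error half of the story; the abstract's point about spreading samples over several channel traverses addresses systematic error, which simple averaging does not remove.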
The Dynamics of "Market-Making" in Higher Education
ERIC Educational Resources Information Center
Komljenovic, Janja; Robertson, Susan L.
2016-01-01
This paper examines what to some is a well-worked furrow; the processes and outcomes involved in what is typically referred to as "marketization" in the higher education sector. We do this through a case study of Newton University, where we reveal a rapid proliferation of market exchanges involving the administrative division of the…
ERIC Educational Resources Information Center
Bugeja, Clare
2009-01-01
This article investigates parental involvement in the musical education of violin students and the changing role of the parents' across the learning process. Two contexts were compared, one emphasising the Suzuki methodology and the other a "traditional" approach. Students learning "traditionally" are typically taught note reading from the…
ERIC Educational Resources Information Center
Hall, Natalie; Durand, Marie-Anne; Mengoni, Silvana E.
2017-01-01
Background: Despite experiencing health inequalities, people with intellectual disabilities are under-represented in health research. Previous research has identified barriers but has typically focused on under-recruitment to specific studies. This study aimed to explore care staff's attitudes to health research involving people with intellectual…
Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng
2017-09-01
An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
Spin-Up and Tuning of the Global Carbon Cycle Model Inside the GISS ModelE2 GCM
NASA Technical Reports Server (NTRS)
Aleinov, Igor; Kiang, Nancy Y.; Romanou, Anastasia
2015-01-01
The planetary carbon cycle involves multiple phenomena acting at a variety of temporal and spatial scales. The typical times range from minutes for leaf stomata physiology to centuries for passive soil carbon pools and deep ocean layers. Finding a satisfactory equilibrium state therefore becomes a challenging and computationally expensive task. Here we present the spin-up processes for different configurations of the GISS Carbon Cycle model, ranging from the model forced with MODIS-observed Leaf Area Index (LAI) and a prescribed ocean, to prognostic LAI, to the model fully coupled to the dynamic ocean and ocean biology. We investigate the time it takes the model to reach equilibrium and discuss ways to speed up this process. The NASA Goddard Institute for Space Studies General Circulation Model (GISS ModelE2) is currently equipped with all major algorithms necessary for the simulation of the Global Carbon Cycle. The terrestrial part is presented by the Ent Terrestrial Biosphere Model (Ent TBM), which includes leaf biophysics, prognostic phenology and a soil biogeochemistry module (based on the Carnegie-Ames-Stanford model). The ocean part is based on the NASA Ocean Biogeochemistry Model (NOBM). The transport of atmospheric CO2 is performed by the atmospheric part of ModelE2, which employs a quadratic upstream algorithm for this purpose.
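Why spin-up is slow, and why it can sometimes be short-circuited, can be seen in a hypothetical one-pool soil-carbon example (not the GISS model): a pool with turnover time tau approaches its equilibrium stock only over several multiples of tau, whereas solving the steady-state balance directly skips the transient entirely.

```python
# Hypothetical one-pool carbon model: dC/dt = inflow - C / tau.
# Parameter values are illustrative, chosen to echo the century-scale
# passive pools mentioned in the abstract.
tau = 200.0          # turnover time, years
inflow = 1.0         # carbon input per year, arbitrary units
dt = 1.0             # time step, years

def spin_up(years):
    """Integrate the pool forward from empty with explicit Euler steps."""
    c = 0.0
    for _ in range(int(years / dt)):
        c += dt * (inflow - c / tau)
    return c

c_eq = inflow * tau   # analytic equilibrium: set dC/dt = 0

print(round(spin_up(200), 1))    # still far from equilibrium after one tau
print(round(spin_up(1000), 1))   # close to equilibrium after five tau
print(c_eq)
```

For a single linear pool the analytic shortcut is exact; in a coupled nonlinear system like ModelE2 no closed form exists, which is why accelerated spin-up strategies are a research question in their own right.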
Environmentally-induced discharge transient coupling to spacecraft
NASA Technical Reports Server (NTRS)
Viswanathan, R.; Barbay, G.; Stevens, N. J.
1985-01-01
The Hughes SCREENS (Space Craft Response to Environments of Space) technique was applied to generic spin and 3-axis stabilized spacecraft models. It involved NASCAP modeling for surface charging and lumped element modeling for transients coupling into a spacecraft. A differential voltage between the antenna and spun shelf of approx. 400 V and a current of 12 A resulted from a discharge at the antenna for the spinner, and approx. 3 kV and 0.3 A from a discharge at the solar panels for the 3-axis stabilized spacecraft. A typical interface circuit response was analyzed to show that the transients would couple into the spacecraft system through ground points, which are most vulnerable. A compilation and review was performed on 15 years of available data on electron and ion current collection phenomena. Empirical models were developed to match the data and compared with flight data from the Pix-1 and Pix-2 missions. It was found that large space power systems would float negative and discharge if operated at or above 300 V. Several recommendations are given to improve the models and to apply them to large space systems.
LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson Jr., WI; Vogelmann, AM
2015-09-01
This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s high-density observations. LASSO will create a powerful new capability for furthering ARM’s mission to advance understanding of cloud, radiation, aerosol, and land-surface processes. The combined observational and modeling elements will enable a new level of scientific inquiry by connecting processes and context to observations and providing needed statistics for details that cannot be measured. The result will be improved process understanding that facilitates concomitant improvements in climate model parameterizations. The initial LASSO implementation will be for ARM’s Southern Great Plains site in Oklahoma and will focus on shallow convection, which is poorly simulated by climate models due in part to clouds’ typically small spatial scale compared to model grid spacing, and because the convection involves complicated interactions of microphysical and boundary layer processes.
Edwards, Ryan W J; Doster, Florian; Celia, Michael A; Bandilla, Karl W
2017-12-05
Hydraulic fracturing in shale gas formations involves the injection of large volumes of aqueous fluid deep underground. Only a small proportion of the injected water volume is typically recovered, raising concerns that the remaining water may migrate upward and potentially contaminate groundwater aquifers. We implement a numerical model of two-phase water and gas flow in a shale gas formation to test the hypothesis that the remaining water is imbibed into the shale rock by capillary forces and retained there indefinitely. The model includes the essential physics of the system and uses the simplest justifiable geometrical structure. We apply the model to simulate wells from a specific well pad in the Horn River Basin, British Columbia, where there is sufficient available data to build and test the model. Our simulations match the water and gas production data from the wells remarkably closely and show that all the injected water can be accounted for within the shale system, with most imbibed into the shale rock matrix and retained there for the long term.
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the 'best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
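The balance between experimental data and restraints described above can be sketched in miniature. The example below is a schematic, not REFMAC5's actual maximum-likelihood target: a single "bond length" parameter is fit to hypothetical noisy observations while a weighted restraint pulls it toward an ideal stereochemical value. All numbers are invented for illustration.

```python
import numpy as np

# Restrained least squares in one dimension: minimize
#   sum_i (obs_i - x)^2 + w * (ideal - x)^2
# The restraint weight w plays the role of the data/prior balance
# discussed in the abstract.
obs = np.array([1.60, 1.58, 1.62])   # hypothetical observed bond lengths (A)
ideal, w = 1.53, 4.0                 # restraint target and weight (assumed)

# Closed-form minimizer: weighted average of data and restraint target.
x_hat = (obs.sum() + w * ideal) / (len(obs) + w)
print(round(x_hat, 3))
```

Increasing w drags the estimate toward the ideal value, which is exactly what one wants when the observation-to-parameter ratio is small (low-resolution data); decreasing w lets the data dominate.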
Luu-The, Van; Duche, Daniel; Ferraris, Corinne; Meunier, Jean-Roch; Leclaire, Jacques; Labrie, Fernand
2009-09-01
Episkin and the full-thickness model from Episkin (FTM) are human skin models obtained by in vitro growth of keratinocytes into the five typical layers of the epidermis. FTM is a full-thickness reconstructed skin model that also contains fibroblasts seeded in a collagen matrix. The aim was to assess whether enzymes involved in chemical detoxification are expressed in Episkin and FTM and how their levels compare with the human epidermis, dermis and total skin. The mRNA expression levels of phase 1 and phase 2 metabolizing enzymes were quantified in cultured Episkin and FTM, and in human epidermis, dermis and total skin, using real-time PCR. The data show that the expression profiles of 61 phase 1 and phase 2 metabolizing enzymes in Episkin, FTM and epidermis are generally similar, with some exceptions. Cytochrome P450-dependent enzymes and flavin monooxygenases are expressed at low levels, while phase 2 metabolizing enzymes are expressed at much higher levels, especially glutathione-S-transferase P1 (GSTP1), catechol-O-methyl transferase (COMT), steroid sulfotransferase (SULT2B1b), and N-acetyl transferase (NAT5). The present study also identifies the presence of many enzymes involved in cholesterol, arachidonic acid, leukotriene, prostaglandin, eicosatrienoic acid, and vitamin D3 metabolism. The present data strongly suggest that Episkin and FTM represent reliable and valuable in vitro human skin models for studying the function of phase 1 and phase 2 metabolizing enzymes in xenobiotic metabolism. They could be used to replace invasive methods or laboratory animals for skin experiments.
Toward Model Building for Visual Aesthetic Perception
Lughofer, Edwin; Zeng, Xianyi
2017-01-01
Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, and quartet models, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194
Lindsay, Sally; McDougall, Carolyn; Sanford, Robyn; Menna-Dack, Dolly; Kingsnorth, Shauna; Adams, Tracey
2015-01-01
To assess performance differences in a mock job interview and workplace role-play exercise for youth with disabilities compared to their typically developing peers. We evaluated a purposive sample of 31 youth (15 with a physical disability and 16 typically developing) on their performance (content and delivery) in employment readiness role-play exercises. Our findings show significant differences between youth with disabilities and typically developing peers in several areas of the mock interview content (i.e. responses to the questions: "tell me about yourself", "how would you provide feedback to someone not doing their share" and a problem-solving scenario question) and delivery (i.e. voice clarity and mean latency). We found no significant differences in the workplace role-play performances of youth with and without disabilities. Youth with physical disabilities performed more poorly in some areas of a job interview than their typically developing peers. They could benefit from further targeted employment readiness training. Clinicians should: coach youth with physical disability on how to "sell" their abilities to potential employers and encourage youth to get involved in volunteer activities and employment readiness training programs; consider using mock job interviews and other employment role-play exercises as assessment and training tools for youth with physical disabilities; and involve speech pathologists in the development of employment readiness programs that address voice clarity as a potential delivery issue.
Embodied practice: claiming the body's experience, agency, and knowledge for social work.
Tangenberg, Kathleen M; Kemp, Susan
2002-01-01
Although social work practice typically is concerned with physical conditions and experiences such as poverty, addiction, and violence, relatively little attention has been given to the body in professional literature. Emphasizing both physical and sociocultural dimensions of the body, this article argues for an invigorated, more complex understanding of the body in social work theory, practice, and research. Drawing from scholarship in the humanities, social sciences, and social work, a framework involving three dimensions of the body is proposed for integration with accepted ecological practice models. The nature and implications of three primary dimensions of the body for multiple domains of social work practice are explored, citing examples from narratives of mothers living with HIV disease: (1) the experiencing body, focused on the physicality of daily life; (2) the body of power, focused on the physicality of oppression and marginality, typically based on race or ethnicity, socioeconomic status, gender, sexual orientation, age, disability, physical appearance, and illness; and (3) the client body, reflecting the bodily experiences of those identified as clients who participate in relationships with social workers.
Metaphysics and medical education: taking holism seriously.
Wilson, Bruce
2013-06-01
Medical education is now suffused with concepts that have their source outside the traditional scientific and medical disciplines: concepts such as holism, connectedness and reflective practice. Teaching of these, and other problematic concepts such as medical uncertainty and error, has been defined more by the challenge they pose to the standard model rather than being informed by a strong positive understanding. This challenge typically involves a critical engagement with the idea of objectivity, which is rarely acknowledged as an inherently metaphysical critique. Consequently, these ideas prove to be difficult to teach well. I suggest that the lack of an integrating, positive narrative is the reason for teaching difficulty, and propose that what is needed is an explicit commitment to teach the metaphysics of medicine, with the concept of holism being the fulcrum on which the remaining concepts turn. An acknowledged metaphysical narrative will encompass the scientific realism that medical students typically bring to their tertiary education, and at the same time enable a bigger picture to be drawn that puts the newer and more problematic concepts into context. © 2013 John Wiley & Sons Ltd.
Image simulation for automatic license plate recognition
NASA Astrophysics Data System (ADS)
Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José
2012-01-01
Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
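The two-step framework (synthesize a clean plate, then model capture distortions) can be sketched minimally. The example below is a crude stand-in for the paper's approach: "characters" are faked as dark strokes on a white field, and the distortion model is just a box blur plus additive sensor noise, whose parameters would in practice be estimated from real plate images.

```python
import numpy as np

# Step 1 (synthesis): a toy 40x120 "plate" with block strokes as characters.
# A real pipeline would render jurisdiction-specific fonts and layouts.
rng = np.random.default_rng(1)
plate = np.ones((40, 120))                 # white background
plate[12:28, 10:110:20] = 0.0              # dark vertical strokes

def box_blur(img, r=1):
    # simple (2r+1)x(2r+1) box blur via shifted sums over an edge-padded copy
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def distort(img, r=1, noise_sd=0.05):
    # Step 2 (capture model): optical blur + Gaussian sensor noise, clipped
    # to the valid intensity range. Parameters are assumed, not measured.
    noisy = box_blur(img, r) + rng.normal(0.0, noise_sd, img.shape)
    return noisy.clip(0.0, 1.0)

degraded = distort(plate)
print(degraded.shape)
```

Generating many such (clean label, degraded image) pairs is what lets OCR training proceed without gathering vast amounts of real ground-truth data, which is the cost the paper aims to avoid.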
A cognitive perspective on medical expertise: theory and implication.
Schmidt, H G; Norman, G R; Boshuizen, H P
1990-10-01
A new theory of the development of expertise in medicine is outlined. Contrary to existing views, this theory assumes that expertise is not so much a matter of superior reasoning skills or in-depth knowledge of pathophysiological states as it is based on cognitive structures that describe the features of prototypical or even actual patients. These cognitive structures, referred to as "illness scripts," contain relatively little knowledge about pathophysiological causes of symptoms and complaints but a wealth of clinically relevant information about disease, its consequences, and the context under which illness develops. By contrast, intermediate-level students without clinical experience typically use pathophysiological, causal models of disease when solving problems. The authors review evidence supporting the theory and discuss its implications for the understanding of five phenomena extensively documented in the clinical-reasoning literature: (1) content specificity in diagnostic performance; (2) typical differences in data-gathering techniques between medical students and physicians; (3) difficulties involved in setting standards; (4) a decline in performance on certain measures of clinical reasoning with increasing expertise; and (5) a paradoxical association between errors and longer response times in visual diagnosis.
Bioelectric memory: modeling resting potential bistability in amphibian embryos and mammalian cells.
Law, Robert; Levin, Michael
2015-10-15
Bioelectric gradients among all cells, not just within excitable nerve and muscle, play instructive roles in developmental and regenerative pattern formation. Plasma membrane resting potential gradients regulate cell behaviors by regulating downstream transcriptional and epigenetic events. Unlike neurons, which fire rapidly and typically return to the same polarized state, developmental bioelectric signaling involves many cell types stably maintaining various levels of resting potential during morphogenetic events. It is important to begin to quantitatively model the stability of bioelectric states in cells, to understand computation and pattern maintenance during regeneration and remodeling. To facilitate the analysis of endogenous bioelectric signaling and the exploitation of voltage-based cellular controls in synthetic bioengineering applications, we sought to understand the conditions under which somatic cells can stably maintain distinct resting potential values (a type of state memory). Using the Channelpedia ion channel database, we generated an array of amphibian oocyte and mammalian membrane models for voltage evolution. These models were analyzed and searched, by simulation, for a simple dynamical property, multistability, which forms a type of voltage memory. We find that typical mammalian models and amphibian oocyte models exhibit bistability when expressing different ion channel subsets, with either persistent sodium or inward-rectifying potassium, respectively, playing a facilitative role in bistable memory formation. We illustrate this difference using fast sodium channel dynamics for which a comprehensive theory exists, where the same model exhibits bistability under mammalian conditions but not amphibian conditions. In amphibians, potassium channels from the Kv1.x and Kv2.x families tend to disrupt this bistable memory formation. 
We also identify some common principles under which physiological memory emerges, which suggest specific strategies for implementing memories in bioengineering contexts. Our results reveal conditions under which cells can stably maintain one of several resting potential values. These models suggest testable predictions for experiments in developmental bioelectricity, and illustrate how cells can be used as versatile physiological memory elements in synthetic biology and in unconventional computation contexts.
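The kind of bistability described here can be illustrated with a minimal one-compartment model. The sketch below uses illustrative hand-picked parameters, not values fitted from Channelpedia: a leak current plus a persistent sodium current whose N-shaped steady-state I-V curve yields two stable resting potentials, so the voltage the membrane settles to depends on where it starts:

```python
import math

def simulate(v0, t_end=200.0, dt=0.1):
    """Forward-Euler integration of a single-compartment membrane with a
    leak current and a persistent sodium current (illustrative units)."""
    g_leak, e_leak = 1.0, -70.0   # leak conductance and reversal (mV)
    g_nap, e_na = 0.5, 50.0       # persistent Na+ conductance and reversal
    v = v0
    for _ in range(int(t_end / dt)):
        # Instantaneous sigmoidal activation of the persistent Na+ current.
        m_inf = 1.0 / (1.0 + math.exp(-(v + 50.0) / 5.0))
        dv = g_leak * (e_leak - v) + g_nap * m_inf * (e_na - v)
        v += dt * dv
    return v

v_down = simulate(-75.0)  # settles near a hyperpolarized resting state
v_up = simulate(-45.0)    # settles near a depolarized resting state
```

Starting below the unstable middle fixed point the model relaxes to a hyperpolarized state (near -69 mV with these parameters); starting above it, to a depolarized state (near -30 mV), a two-state voltage memory.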
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
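The coarse-grained component concurrency idea can be sketched in a few lines. In this toy version (component names and tendency formulas are invented, and real FMS components exchange fields through the coupler rather than Python threads), radiation runs in parallel with a composite "everything else" component, and their tendencies are combined only after both finish, so each component sees the same time-lagged input state:

```python
import threading

def radiation_step(state, out):
    # Stand-in for the radiative transfer component: a heating tendency.
    out["radiative_heating"] = [0.01 * t for t in state["temperature"]]

def dynamics_step(state, out):
    # Stand-in for dynamics plus all other physics: an advective tendency
    # from differences between neighbouring (periodic) grid points.
    temps = state["temperature"]
    shifted = temps[1:] + temps[:1]
    out["dynamic_tendency"] = [0.1 * (b - a) for a, b in zip(temps, shifted)]

def coupled_step(state):
    """One model time step with radiation running concurrently with a
    composite of all other components (the CCC idea). Writes to distinct
    dict keys from two threads are safe here."""
    tendencies = {}
    threads = [threading.Thread(target=radiation_step, args=(state, tendencies)),
               threading.Thread(target=dynamics_step, args=(state, tendencies))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Combine tendencies only after both components have completed.
    state["temperature"] = [
        t0 + tendencies["radiative_heating"][i] + tendencies["dynamic_tendency"][i]
        for i, t0 in enumerate(state["temperature"])]
    return state

state = coupled_step({"temperature": [250.0, 260.0, 270.0, 280.0]})
```

The design choice mirrored here is the time-lagged coupling: because each component reads the state from the start of the step, the two can execute on disjoint processor sets without synchronizing mid-step.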
ERIC Educational Resources Information Center
Burack, Jacob A.; Russo, Natalie; Kovshoff, Hannah; Palma Fernandes, Tania; Ringo, Jason; Landry, Oriane; Iarocci, Grace
2016-01-01
Evidence from the study of attention among persons with autism spectrum disorder (ASD) and typically developing (TD) children suggests a rethinking of the notion that performance inherently reflects disability, ability, or capacity in favor of a more nuanced story that involves an emphasis on styles and biases that reflect real-world attending. We…
Leonard, J L
2000-05-01
Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, and CNS control of the Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro.
Analyzing Cyber Security Threats on Cyber-Physical Systems Using Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Kerzhner, Aleksandr; Pomerantz, Marc; Tan, Kymie; Campuzano, Brian; Dinkel, Kevin; Pecharich, Jeremy; Nguyen, Viet; Steele, Robert; Johnson, Bryan
2015-01-01
The spectre of cyber attacks on aerospace systems can no longer be ignored given that many of the components and vulnerabilities that have been successfully exploited by the adversary on other infrastructures are the same as those deployed and used within the aerospace environment. An important consideration with respect to the mission/safety critical infrastructure supporting space operations is that an appropriate defensive response to an attack invariably involves the need for high precision and accuracy, because an incorrect response can trigger unacceptable losses involving lives and/or significant financial damage. A highly precise defensive response, considering the typical complexity of aerospace environments, requires a detailed and well-founded understanding of the underlying system where the goal of the defensive response is to preserve critical mission objectives in the presence of adversarial activity. In this paper, a structured approach for modeling aerospace systems is described. The approach includes physical elements, network topology, software applications, system functions, and usage scenarios. We leverage Model-Based Systems Engineering methodology by utilizing the Object Management Group's Systems Modeling Language to represent the system being analyzed and also utilize model transformations to change relevant aspects of the model into specialized analyses. A novel visualization approach is utilized to visualize the entire model as a three-dimensional graph, allowing easier interaction with subject matter experts. The model provides a unifying structure for analyzing the impact of a particular attack or a particular type of attack. Two different example analysis types are demonstrated in this paper: a graph-based propagation analysis based on edge labels, and a graph-based propagation analysis based on node labels.
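A minimal version of the edge-label propagation analysis might look as follows. This is a sketch only, with hypothetical node and label names; the actual tooling described operates on SysML models and model transformations, not Python tuples. From a compromised start node, the attack reaches exactly the nodes connected by edges whose labels the adversary can traverse:

```python
from collections import deque

def propagate(edges, start, allowed_labels):
    """Breadth-first attack propagation over a labeled system graph:
    follow only edges whose label is in `allowed_labels`."""
    adjacency = {}
    for src, dst, label in edges:
        adjacency.setdefault(src, []).append((dst, label))
    reached, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for dst, label in adjacency.get(node, ()):
            if label in allowed_labels and dst not in reached:
                reached.add(dst)
                frontier.append(dst)
    return reached

# Hypothetical system model: ground station -> LAN -> flight computer,
# with a backup unit reachable only over a separate serial link.
edges = [("ground", "lan", "ethernet"), ("lan", "fc", "ethernet"),
         ("fc", "backup", "serial")]
impacted = propagate(edges, "ground", {"ethernet"})
```

The same traversal with node labels instead of edge labels (the second analysis type mentioned) would gate propagation on properties of the destination node rather than of the link.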
Optimization of Ballast Design: A Case Study of the Physics Entrepreneurship Program
NASA Astrophysics Data System (ADS)
Ding, Jun; Cheng, Norman; Lamouri, Abbas; Sulcs, Juris; Brown, Robert; Taylor, Cyrus
2001-10-01
This talk presents a typical internship project for students in the Physics Entrepreneurship Program at Case Western Reserve University. As part of its overall strategy, Advanced Lighting International (ADLT) is involved in the production of magnetic ballasts for metal halide lamps. The systems in which these ballasts function are undergoing rapid evolution, raising the question of how the design of the ballasts can be optimized to deliver superior performance at lower cost. Addressing this question requires a full understanding of a variety of issues, ranging from basic modeling of the physics of the magnetic ballasts to questions of overall market strategy, manufacturing considerations, and the competitive environment.
Aerodynamic preliminary analysis system 2. Part 1: Theory
NASA Technical Reports Server (NTRS)
Bonner, E.; Clever, W.; Dunn, K.
1981-01-01
A subsonic/supersonic/hypersonic aerodynamic analysis capability was developed by integrating the Aerodynamic Preliminary Analysis System (APAS) and the inviscid force calculation modules of the Hypersonic Arbitrary Body Program. The APAS analysis was extended to nonlinear vortex forces using a generalization of the Polhamus analogy. The interactive system provides appropriate aerodynamic models for a single input geometry database and has a run/output format similar to a wind tunnel test program. The user's manual is organized to cover the principal system activities of a typical application: geometric input/editing, aerodynamic evaluation, and post-analysis review/display. Sample sessions are included to illustrate the specific tasks involved and are followed by a comprehensive command/subcommand dictionary used to operate the system.
Loncke, Filip T; Campbell, Jamie; England, Amanda M; Haley, Tanya
2006-02-15
Message generation is a complex act involving a number of processes, including the selection of which modes to use. When expressing a message, human communicators typically use a combination of modes, a phenomenon often termed multimodality. This article explores models of multimodality as an explanatory framework for augmentative and alternative communication (AAC). Multimodality is analysed from communication, psycholinguistic, and cognitive perspectives. Theoretical and applied topics within AAC can be explained or described within the multimodality framework, considering iconicity, simultaneous communication, lexical organization, and compatibility of communication modes. Consideration of multimodality is critical to understanding the underlying processes both in individuals who use AAC and in individuals who interact with them.
Evidence from mixed hydrate nucleation for a funnel model of crystallization.
Hall, Kyle Wm; Carpendale, Sheelagh; Kusalik, Peter G
2016-10-25
The molecular-level details of crystallization remain unclear for many systems. Previous work has speculated on the phenomenological similarities between molecular crystallization and protein folding. Here we demonstrate that molecular crystallization can involve funnel-shaped potential energy landscapes through a detailed analysis of mixed gas hydrate nucleation, a prototypical multicomponent crystallization process. Through this, we contribute both: (i) a powerful conceptual framework for exploring and rationalizing molecular crystallization, and (ii) an explanation of phenomenological similarities between protein folding and crystallization. Such funnel-shaped potential energy landscapes may be typical of broad classes of molecular ordering processes, and can provide a new perspective for both studying and understanding these processes.
A note on adding viscoelasticity to earthquake simulators
Pollitz, Fred
2017-01-01
Here, I describe how time‐dependent quasi‐static stress transfer can be implemented in an earthquake simulator code that is used to generate long synthetic seismicity catalogs. Most existing seismicity simulators use precomputed static stress interaction coefficients to rapidly implement static stress transfer in fault networks with typically tens of thousands of fault patches. The extension to quasi‐static deformation, which accounts for viscoelasticity of Earth’s ductile lower crust and mantle, involves the precomputation of additional interaction coefficients that represent time‐dependent stress transfer among the model fault patches, combined with defining and evolving additional state variables that track this stress transfer. The new approach is illustrated with application to a California‐wide synthetic fault network.
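The extension can be sketched compactly. In the toy version below (made-up interaction coefficients, and a single relaxation time standing in for the full viscoelastic response of the lower crust and mantle), each source-receiver pair carries a state variable holding the viscoelastic stress that has not yet arrived at the receiver, and that stress transfers in with timescale tau on top of the instantaneous elastic transfer:

```python
import math

def evolve_stress(static_k, visco_k, tau, slips, dt, steps):
    """Stress on each patch: instantaneous elastic transfer plus a
    viscoelastic contribution that relaxes in with timescale tau.
    state[i][j] tracks the not-yet-transferred viscoelastic stress
    destined for patch i from slip on patch j."""
    n = len(slips)
    stress = [sum(static_k[i][j] * slips[j] for j in range(n)) for i in range(n)]
    state = [[visco_k[i][j] * slips[j] for j in range(n)] for i in range(n)]
    history = []
    for _ in range(steps):
        decay = math.exp(-dt / tau)
        for i in range(n):
            for j in range(n):
                stress[i] += state[i][j] * (1.0 - decay)  # portion arriving now
                state[i][j] *= decay                      # remainder still pending
        history.append(list(stress))
    return history

static_k = [[0.0, 1.0], [1.0, 0.0]]   # elastic interaction coefficients (made up)
visco_k = [[0.0, 0.5], [0.5, 0.0]]    # viscoelastic coefficients (made up)
history = evolve_stress(static_k, visco_k, tau=10.0, slips=[1.0, 0.0], dt=1.0, steps=50)
```

After many relaxation times, the receiving patch carries the elastic stress plus the full viscoelastic increment, which is the long-run limit the precomputed coefficients encode.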
NASA Astrophysics Data System (ADS)
Baker, G. N.
This paper examines the constraints upon a typical manufacturer of gyros and strapdown systems. It describes why, while remaining responsive to change and keeping abreast of 'state of the art' technology, the manufacturer must for many reasons satisfy the market using existing technology and production equipment. The Single-Degree-of-Freedom Rate Integrating Gyro is a well-established product, yet it is capable of achieving far higher performance than originally envisaged, owing to modelling and characterization within digital strapdown systems. The parameters involved are discussed, and a description is given of the calibration process undertaken on a strapdown system manufactured in a production environment in batch quantities.
NASA Technical Reports Server (NTRS)
Gupta, U. K.; Ali, M.
1988-01-01
The theoretical basis and operation of LEBEX, a machine-learning system for jet-engine performance monitoring, are described. The behavior of the engine is modeled in terms of four parameters (the rotational speeds of the high- and low-speed sections and the exhaust and combustion temperatures), and parameter variations indicating malfunction are transformed into structural representations involving instances and events. LEBEX extracts descriptors from a set of training data on normal and faulty engines, represents them hierarchically in a knowledge base, and uses them to diagnose and predict faults on a real-time basis. Diagrams of the system architecture and printouts of typical results are shown.
ERIC Educational Resources Information Center
Schoonenboom, Judith
2016-01-01
Educational innovations often involve intact subgroups, such as school classes or university departments. In small-scale educational evaluation research, typically involving 1 to 20 subgroups, differences among these subgroups are often neglected. This article presents a mixed method from a qualitative perspective, in which differences among…
[Secondary bladder lymphoma in a patient with AIDS].
Vendrell, J R; Alcaraz, A; Gutíerrez, R; Rodríguez, A; Barranco, M A; Carretero, P
1996-10-01
We report a case of non-Hodgkin lymphoma (NHL) with bladder involvement that presented clinically with urological symptoms. Vesical involvement is typical of NHL and is becoming more frequent with the growing number of AIDS patients under immunosuppressive therapy. This currently unusual entity should therefore be expected to become more common in the future.
Discussion of David Thissen's Bad Questions: An Essay Involving Item Response Theory
ERIC Educational Resources Information Center
Wainer, Howard
2016-01-01
The usual role of a discussant is to clarify and correct the paper being discussed, but in this case, the author, Howard Wainer, generally agrees with everything David Thissen says in his essay, "Bad Questions: An Essay Involving Item Response Theory." This essay expands on David Thissen's statement that there are typically two principal…
Involving Your Child or Teen with ASD in Integrated Community Activities
ERIC Educational Resources Information Center
McKee, Rebecca
2011-01-01
Participating in outside activities and community-based endeavors can be tricky for people with special needs, like Autism Spectrum Disorder (ASD). Families meet more than a few obstacles attempting to integrate their children or teens who have special needs like ASD. Most typical children are highly involved in sports, clubs and camps. If a…
Foster Care Involvement among Medicaid-Enrolled Children with Autism
ERIC Educational Resources Information Center
Cidav, Zuleyha; Xie, Ming; Mandell, David S.
2018-01-01
The prevalence and risk of foster care involvement among children with autism spectrum disorder (ASD) relative to children with intellectual disability (ID), children with ASD and ID, and typically developing children were examined using 2001-2007 Medicaid data. Children were followed up to the first foster care placement or until the end of 2007;…
Wilczynski, Bartek; Furlong, Eileen E M
2010-04-15
Development is regulated by dynamic patterns of gene expression, which are orchestrated through the action of complex gene regulatory networks (GRNs). Substantial progress has been made in modeling transcriptional regulation in recent years, ranging from qualitative "coarse-grain" models operating at the gene level to very "fine-grain" quantitative models operating at the biophysical transcription factor-DNA level. Recent advances in genome-wide studies have revealed an enormous increase in the size and complexity of GRNs. Even relatively simple developmental processes can involve hundreds of regulatory molecules, with extensive interconnectivity and cooperative regulation. This leads to an explosion in the number of regulatory functions, effectively impeding Boolean-based qualitative modeling approaches. At the same time, the lack of information on the biophysical properties of the majority of transcription factors within a global network restricts quantitative approaches. In this review, we explore the current challenges in moving from modeling medium-scale, well-characterized networks to more poorly characterized global networks. We suggest integrating coarse- and fine-grain approaches to model gene regulatory networks in cis. We focus on two very well-studied examples from Drosophila, which likely represent typical developmental regulatory modules across metazoans.
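A coarse-grain qualitative model of the kind discussed here can be as small as a synchronous Boolean network. The toy module below (gene names and rules invented purely for illustration) shows the flavor: each gene's next state is a Boolean function of the current state, and even three genes with a negative feedback loop generate a sustained oscillation:

```python
def step(state, rules):
    """Synchronous update of a Boolean gene regulatory network."""
    return {gene: rule(state) for gene, rule in rules.items()}

# Toy 3-gene module (hypothetical): a self-sustaining activator A,
# a target B requiring A, and a repressor C induced by B.
rules = {
    "A": lambda s: s["A"],                  # A maintains its own expression
    "B": lambda s: s["A"] and not s["C"],   # B needs A and absence of C
    "C": lambda s: s["B"],                  # B activates its repressor C
}

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(4):
    state = step(state, rules)
    trajectory.append(state)
```

The combinatorial explosion the review describes is visible even here: with n genes there are 2^(2^n) possible update rules per gene, which is why purely Boolean enumeration breaks down for global networks.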
Integral modeling of human eyes: from anatomy to visual response
NASA Astrophysics Data System (ADS)
Navarro, Rafael
2006-02-01
Three basic stages towards the global modeling of the eye are presented. In the first stage, an adequate choice of the basic geometrical model, a general ellipsoid in this case, permits fitting, in a natural way, the typical "melon" shape of the cornea with minimum complexity. In addition, it facilitates the extraction of most of its optically relevant parameters, such as the position and orientation of its optical axis in 3D space, the paraxial and overall refractive power, the amount and axis of astigmatism, etc. In the second stage, this geometrical model, along with optical design and optimization tools, is applied to build customized optical models of individual eyes, able to reproduce the measured wave aberration with high fidelity. Finally, we put together a sequence of schematic, but functionally realistic, models of the different stages of image acquisition, coding, and analysis in the visual system, along with a probabilistic Bayesian maximum a posteriori identification approach. This permitted us to build a realistic simulation of all the essential processes involved in a visual acuity clinical exam. It is remarkable that at all three levels the models predict the experimental data with high accuracy.
Modeling Adhesive Anchors in a Discrete Element Framework
Marcon, Marco; Vorel, Jan; Ninčević, Krešimir; Wan-Wendner, Roman
2017-01-01
In recent years, post-installed anchors are widely used to connect structural members and to fix appliances to load-bearing elements. A bonded anchor typically denotes a threaded bar placed into a borehole filled with adhesive mortar. The high complexity of the problem, owing to the multiple materials and failure mechanisms involved, requires a numerical support for the experimental investigation. A reliable model able to reproduce a system’s short-term behavior is needed before the development of a more complex framework for the subsequent investigation of the lifetime of fasteners subjected to various deterioration processes can commence. The focus of this contribution is the development and validation of such a model for bonded anchors under pure tension load. Compression, modulus, fracture and splitting tests are performed on standard concrete specimens. These serve for the calibration and validation of the concrete constitutive model. The behavior of the adhesive mortar layer is modeled with a stress-slip law, calibrated on a set of confined pull-out tests. The model validation is performed on tests with different configurations comparing load-displacement curves, crack patterns and concrete cone shapes. A model sensitivity analysis and the evaluation of the bond stress and slippage along the anchor complete the study.
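Two ingredients of such a model are easy to state compactly. The sketch below uses illustrative numbers, not the calibrated values from this study: a uniform-bond-stress estimate of pull-out capacity (bond stress acting on the lateral surface of the embedded bar) and a piecewise-linear bond stress-slip law of the general type calibrated on confined pull-out tests:

```python
import math

def bond_capacity(tau_bond, d, h_ef):
    """Uniform-bond-stress pull-out capacity of a bonded anchor:
    tau_bond [N/mm^2] acting over the bar's embedded lateral surface,
    with diameter d and embedment depth h_ef in mm. Returns N."""
    return tau_bond * math.pi * d * h_ef

def stress_slip(s, tau_max=15.0, s1=0.5):
    """Piecewise-linear bond stress-slip law: linear ascending branch
    up to slip s1 [mm], then a constant plateau (illustrative shape)."""
    return tau_max * min(s, s1) / s1

# Example: 12 mm bar, 96 mm embedment, 15 MPa bond stress -> about 54.3 kN.
load = bond_capacity(tau_bond=15.0, d=12.0, h_ef=96.0)
```

A full discrete element model replaces the uniform-stress assumption with the simulated distribution of bond stress and slip along the anchor, which is exactly what the sensitivity analysis in the study evaluates.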
Low-order modelling of a drop on a highly-hydrophobic substrate: statics and dynamics
NASA Astrophysics Data System (ADS)
Wray, Alexander W.; Matar, Omar K.; Davis, Stephen H.
2017-11-01
We analyse the behaviour of droplets resting on highly-hydrophobic substrates. This problem is of practical interest due to its appearance in many physical contexts involving the spreading, wetting, and dewetting of fluids on solid substrates. In mathematical terms, it exhibits an interesting challenge as the interface is multi-valued as a function of the natural Cartesian co-ordinates, presenting a stumbling block to typical low-order modelling techniques. Nonetheless, we show that in the static case, the interfacial shape is governed by the Young-Laplace equation, which may be solved explicitly in terms of elliptic functions. We present simple low-order expressions that faithfully reproduce the shapes. We then consider the dynamic case, showing that the predictions of our low-order model compare favourably with those obtained from direct numerical simulations. We also examine the characteristic flow regimes of interest. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
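For reference, one standard arclength parameterization of the axisymmetric Young-Laplace problem (a common form; the authors' explicit elliptic-function solution is not reproduced here) reads:

```latex
\frac{d\theta}{ds} = \frac{\Delta p - \rho g z}{\sigma} - \frac{\sin\theta}{r},
\qquad
\frac{dr}{ds} = \cos\theta,
\qquad
\frac{dz}{ds} = \sin\theta,
```

where $\theta(s)$ is the interface inclination, $r(s)$ and $z(s)$ are the radial and vertical coordinates, $\sigma$ is the surface tension, and $\Delta p$ is the pressure jump at the apex. Parameterizing by arclength avoids the difficulty noted above: on a highly-hydrophobic substrate the contact angle approaches 180°, the interface overhangs, and $z$ ceases to be a single-valued function of $r$.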
Surgical Models of Roux-en-Y Gastric Bypass Surgery and Sleeve Gastrectomy in Rats and Mice
Bruinsma, Bote G.; Uygun, Korkut; Yarmush, Martin L.; Saeidi, Nima
2015-01-01
Bariatric surgery is the only definitive solution currently available for the present obesity pandemic. These operations typically involve reconfiguration of gastrointestinal tract anatomy and impose profound metabolic and physiological benefits, such as substantially reducing body weight and ameliorating type II diabetes. Therefore, animal models of these surgeries offer unique and exciting opportunities to delineate the underlying mechanisms that contribute to the resolution of obesity and diabetes. Here we describe a standardized procedure for mouse and rat models of Roux-en-Y gastric bypass (80–90 minutes operative time) and sleeve gastrectomy (30–45 minutes operative time), which, to a high degree, resemble the operations in humans. We also provide detailed protocols for both pre- and post-operative techniques that ensure a high success rate in the operations. These protocols provide the opportunity to mechanistically investigate the systemic effects of the surgical interventions, such as regulation of body weight, glucose homeostasis, and gut microbiome.
IB2d: a Python and MATLAB implementation of the immersed boundary method.
Battista, Nicholas A; Strickland, W Christopher; Miller, Laura A
2017-03-29
The development of fluid-structure interaction (FSI) software involves trade-offs between ease of use, generality, performance, and cost. Typically there are large learning curves when using low-level software to model the interaction of an elastic structure immersed in a uniform density fluid. Many existing codes are not publicly available, and the commercial software that exists usually requires expensive licenses and may not be as robust or allow the necessary flexibility that in house codes can provide. We present an open source immersed boundary software package, IB2d, with full implementations in both MATLAB and Python, that is capable of running a vast range of biomechanics models and is accessible to scientists who have experience in high-level programming environments. IB2d contains multiple options for constructing material properties of the fiber structure, as well as the advection-diffusion of a chemical gradient, muscle mechanics models, and artificial forcing to drive boundaries with a preferred motion.
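The core coupling operation in any immersed boundary code is spreading Lagrangian forces onto the Eulerian fluid grid through a regularized delta function. The 1D sketch below illustrates the idea only and is not IB2d's actual API; it uses Peskin's classical 4-point delta function:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (1D, grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(x_marker, f_marker, n_grid, h=1.0):
    """Spread a Lagrangian point force onto an Eulerian grid: the
    structure-to-fluid half of the immersed boundary coupling."""
    grid_force = np.zeros(n_grid)
    for i in range(n_grid):
        grid_force[i] = f_marker * peskin_delta((i * h - x_marker) / h) / h
    return grid_force

grid_force = spread_force(x_marker=5.3, f_marker=2.0, n_grid=12)
```

The 4-point delta is constructed so that it sums to one over the grid for any marker position, so the spread force conserves the total force exerted by the immersed structure.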
Role of Feline Immunodeficiency Virus in Lymphomagenesis--Going Alone or Colluding?
Kaye, Sarah; Wang, Wenqi; Miller, Craig; McLuckie, Alicia; Beatty, Julia A; Grant, Chris K; VandeWoude, Sue; Bielefeldt-Ohmann, Helle
2016-01-01
Feline immunodeficiency virus (FIV) is a naturally occurring lentivirus of domestic and nondomestic feline species. Infection in domestic cats leads to immune dysfunction via mechanisms similar to those caused by human immunodeficiency virus (HIV) and, as such, is a valuable natural animal model for acquired immunodeficiency syndrome (AIDS) in humans. An association between FIV and an increased incidence of neoplasia has long been recognized, with frequencies of up to 20% in FIV-positive cats recorded in some studies. This is similar to the rate of neoplasia seen in HIV-positive individuals, and in both species neoplasia typically requires several years to arise. The most frequently reported type of neoplasia associated with FIV infection is lymphoma. Here we review the possible mechanisms involved in FIV lymphomagenesis, including the possible involvement of coinfections, notably those with gamma-herpesviruses.
Longitudinal relationships between resources, motivation, and functioning.
Hess, Thomas M; Emery, Lisa; Neupert, Shevaun D
2012-05-01
We investigated how fluctuations and linear changes in health and cognitive resources influence the motivation to engage in complex cognitive activity and the extent to which motivation mediated the relationship between changing resources and cognitively demanding activities. Longitudinal data from 332 adults aged 20-85 years were examined. Motivation was assessed using a composite of Need for Cognition and Personal Need for Structure and additional measures of health, sensory functioning, cognitive ability, and self-reported activity engagement. Multilevel modeling revealed that age-typical changes in health, sensory functions, and ability were associated with changes in motivation, with the impact of declining health on motivation being particularly strong in older adulthood. Changes in motivation, in turn, predicted involvement in cognitive and social activities as well as changes in cognitive ability. Finally, motivation was observed to partially mediate the relationship between changes in resources and cognitively demanding activities. Our results suggest that motivation may play an important role in determining the course of cognitive change and involvement in cognitively demanding everyday activities in adulthood.
Shi, Min; Bradner, Joshua; Bammler, Theo K.; Eaton, David L.; Zhang, JianPeng; Ye, ZuCheng; Wilson, Angela M.; Montine, Thomas J.; Pan, Catherine; Zhang, Jing
2009-01-01
Parkinson disease (PD) typically affects the cortical regions during the later stages of disease, with neuronal loss, gliosis, and formation of diffuse cortical Lewy bodies in a significant portion of patients with dementia. To identify novel proteins involved in PD progression, we prepared synaptosomal fractions from the frontal cortices of pathologically verified PD patients at different stages along with age-matched controls. Protein expression profiles were compared using a robust quantitative proteomic technique. Approximately 100 proteins displayed significant differences in their relative abundances between PD patients at various stages and controls; three of these proteins were validated using independent techniques. One of the confirmed proteins, glutathione S-transferase Pi, was further investigated in cellular models of PD, demonstrating that its level was intimately associated with several critical cellular processes that are directly related to neurodegeneration in PD. These results have, for the first time, suggested that the levels of glutathione S-transferase Pi may play an important role in modulating the progression of PD.
Relative Power in Sibling Relationships Across Adolescence.
Lindell, Anna K; Campione-Barr, Nicole
2017-06-01
During childhood, older siblings typically hold a more powerful position in their relationship with their younger siblings, but these relationships are thought to become more egalitarian during adolescence as siblings begin to prepare for their relationships as adults and as younger siblings become more socially and cognitively competent. Little is known about relationship factors that may explain this shift in power dynamics, however. The present study therefore examined longitudinal changes in adolescents' and their siblings' perceptions of sibling relative power from age 12 to 18 (n = 145 dyads), and examined whether different levels of sibling relationship positivity and negativity, as well as sibling structural variables, indicated different over-time changes in relative power. Multilevel models indicated that adolescents reported significant declines in their siblings' relative power across adolescence, with older siblings relinquishing the most power over time. However, only siblings with less positively involved relationships reported declines in relative power, suggesting that siblings who maintain highly involved relationships may not become more egalitarian during adolescence.
CSMA Versus Prioritized CSMA for Air-Traffic-Control Improvement
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2001-01-01
OPNET version 7.0 simulations are presented involving an important application of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC) over the Very High Frequency Data Link, Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air-traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. There are 32 airports in the simulation, 29 of which are either sources or destinations for the air-traffic of the aforementioned three airports. The simulation involves 111 Air Traffic Control (ATC) ground stations and 1,235 equally equipped aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collisionless, Prioritized Carrier Sense Multiple Access (CSMA) is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, Prioritized CSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of Prioritized CSMA for implementing low latency, high throughput, and efficient connectivity.
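The latency advantage of priority-ordered channel access can be illustrated with a toy single-channel scheduler. This is a minimal sketch, not the OPNET/VDL-2 model of the paper: packets are hypothetical (priority, id) pairs with 0 the highest priority, all queued at time 0, and the channel serves one packet per unit time.

```python
import heapq

def high_priority_delay(packets, prioritized: bool) -> float:
    """Mean service-completion time of the priority-0 packets
    under plain FIFO vs priority-ordered channel access.
    Packets are (priority, id) tuples; 0 is the highest priority."""
    if prioritized:
        queue = list(packets)
        heapq.heapify(queue)  # tuples sort by priority first
        order = [heapq.heappop(queue) for _ in range(len(queue))]
    else:
        order = list(packets)  # plain FIFO: serve in arrival order
    # One packet per unit time: packet at position t finishes at t + 1.
    finish = {pkt: t + 1 for t, pkt in enumerate(order)}
    urgent = [finish[p] for p in packets if p[0] == 0]
    return sum(urgent) / len(urgent)

# Eight routine packets ahead of two urgent ones in the queue.
traffic = [(1, i) for i in range(8)] + [(0, i) for i in range(8, 10)]
print(high_priority_delay(traffic, prioritized=False))  # 9.5
print(high_priority_delay(traffic, prioritized=True))   # 1.5
```

The urgent packets' mean completion time drops from 9.5 to 1.5 time units, the qualitative effect the simulations report for Prioritized CSMA.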
Decision making generalized by a cumulative probability weighting function
NASA Astrophysics Data System (ADS)
dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto
2018-01-01
Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking, and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays and is supported by phenomenological models.
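One of the established parametric forms that a generalized weighting function must recover as a limiting case is the two-parameter Prelec function, w(p) = exp(-delta * (-ln p)^gamma). A minimal sketch (the paper's own generalized form is not given in the abstract, so the Prelec form here stands in as an illustration):

```python
import math

def prelec_weight(p: float, gamma: float = 0.65, delta: float = 1.0) -> float:
    """Two-parameter Prelec probability weighting function:
    w(p) = exp(-delta * (-ln p)**gamma).
    With gamma < 1 it has the characteristic inverse-S shape:
    small probabilities are overweighted, large ones underweighted."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-delta * (-math.log(p)) ** gamma)

print(prelec_weight(0.01) > 0.01)   # True: small p overweighted
print(prelec_weight(0.99) < 0.99)   # True: large p underweighted
print(prelec_weight(1 / math.e))    # fixed point w(1/e) = 1/e when delta = 1
```

The fixed point at p = 1/e (for delta = 1) is a well-known property of this form, which is one way such nested parametric families are checked against their limiting cases.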
McCrea, Simon M.; Robinson, Thomas P.
2011-01-01
In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale—Fourth Editions along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests engage predominantly right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction involved in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807
The kinetics and acoustics of fingering and note transitions on the flute.
Almeida, André; Chow, Renee; Smith, John; Wolfe, Joe
2009-09-01
Motion of the keys was measured in a transverse flute while beginner, amateur, and professional flutists played a range of exercises. The time taken for a key to open or close was typically 10 ms when pushed by a finger or 16 ms when moved by a spring. Because the opening and closing of keys will never be exactly simultaneous, transitions between notes that involve the movement of multiple fingers can occur via several possible pathways with different intermediate fingerings. A transition is classified as "safe" if it is possible to be slurred from the initial to final note with little perceptible change in pitch or volume. Some transitions are "unsafe" and possibly involve a transient change in pitch or a decrease in volume. Players, on average, used safe transitions more frequently than unsafe transitions. Delays between the motion of the fingers were typically tens of milliseconds, with longer delays as more fingers become involved. Professionals exhibited smaller average delays between the motion of their fingers than did amateurs.
Functional Neuroimaging of Spike-Wave Seizures
Motelow, Joshua E.; Blumenfeld, Hal
2013-01-01
Generalized spike-wave seizures are typically brief events associated with dynamic changes in brain physiology, metabolism, and behavior. Functional magnetic resonance imaging (fMRI) provides a relatively high spatio-temporal resolution method for imaging cortical-subcortical network activity during spike-wave seizures. Patients with spike-wave seizures often have episodes of staring and unresponsiveness which interfere with normal behavior. Results from human fMRI studies suggest that spike-wave seizures disrupt specific networks in the thalamus and fronto-parietal association cortex which are critical for normal attentive consciousness. However, the neuronal activity underlying imaging changes seen during fMRI is not well understood, particularly in abnormal conditions such as seizures. Animal models have begun to provide important fundamental insights into the neuronal basis for fMRI changes during spike-wave activity. Work from these models, including both fMRI and direct neuronal recordings, suggests that, as in humans, specific cortical-subcortical networks are involved in spike-wave activity, while other regions are spared. Regions showing fMRI increases demonstrate correlated increases in neuronal activity in animal models. The mechanisms of fMRI decreases in spike-wave seizures will require further investigation. A better understanding of the specific brain regions involved in generating spike-wave seizures may help guide efforts to develop targeted therapies aimed at preventing or reversing abnormal excitability in these brain regions, ultimately leading to a cure for this disorder. PMID:18839093
Wind Plant Power Optimization and Control under Uncertainty
NASA Astrophysics Data System (ADS)
Jha, Pankaj; Ulker, Demet; Hutchings, Kyle; Oxley, Gregory
2017-11-01
The development of optimized cooperative wind plant control involves the coordinated operation of individual turbines co-located within a wind plant to improve the overall power production. This is typically achieved by manipulating the trajectory and intensity of wake interactions between nearby turbines, thereby reducing wake losses. However, there are various types of uncertainties involved, such as turbulent inflow and microscale and turbine model input parameters. In a recent NREL-Envision collaboration, a controller that performs wake steering was designed and implemented for the Longyuan Rudong offshore wind plant in Jiangsu, China. The Rudong site contains 25 Envision EN136-4 MW turbines, of which a subset was selected for the field test campaign consisting of the front two rows for the northeasterly wind direction. In the first row, a turbine was selected as the reference turbine, providing comparison power data, while another was selected as the controlled turbine. This controlled turbine wakes three different turbines in the second row depending on the wind direction. A yaw misalignment strategy was designed using Envision's GWCFD, a multi-fidelity plant-scale CFD tool based on SOWFA with a generalized actuator disc (GAD) turbine model, which, in turn, was used to tune NREL's FLORIS model used for wake steering and yaw control optimization. The presentation will account for some associated uncertainties, such as those in atmospheric turbulence and wake profile.
Antón-Fernández, Alejandro; Merchán-Rubira, Jesús; Avila, Jesús; Hernández, Félix; DeFelipe, Javier; Muñoz, Alberto
2017-01-01
The Golgi apparatus (GA) is a highly dynamic organelle involved in the processing and sorting of cellular proteins. In Alzheimer’s disease (AD), it has been shown to decrease in size and become fragmented in neocortical and hippocampal neuronal subpopulations. This fragmentation and decrease in size of the GA in AD has been related to the accumulation of hyperphosphorylated tau. However, the involvement of other pathological factors associated with the course of the disease, such as the extracellular accumulation of amyloid-β (Aβ) aggregates, cannot be ruled out, since both pathologies are present in AD patients. Here we use the P301S tauopathy mouse model to examine possible alterations of the GA in neurons that overexpress human tau (P301S mutated gene) in neocortical and hippocampal neurons, using double immunofluorescence techniques and confocal microscopy. Quantitative analysis revealed that neurofibrillary tangle (NFT)-bearing neurons had important morphological alterations and reductions in the surface area and volume of the GA compared with NFT-free neurons. Since in this mouse model there are no Aβ aggregates typical of AD, the present findings support the idea that the progressive accumulation of phospho-tau is associated with structural alterations of the GA, and that these changes may occur in the absence of Aβ pathology. PMID:28922155
Stability of subsystem solutions in agent-based models
NASA Astrophysics Data System (ADS)
Perc, Matjaž
2018-01-01
The fact that relatively simple entities, such as particles or neurons, or even ants or bees or humans, give rise to fascinatingly complex behaviour when interacting in large numbers is the hallmark of complex systems science. Agent-based models are frequently employed for modelling and obtaining a predictive understanding of complex systems. Since the sheer number of equations that describe the behaviour of an entire agent-based model often makes it impossible to solve such models exactly, Monte Carlo simulation methods must be used for the analysis. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among agents that describe systems in biology, sociology or the humanities often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. This raises the question: when can we be certain that an observed simulation outcome of an agent-based model is actually stable and valid in the large system-size limit? The latter is key for the correct determination of phase transitions between different stable solutions, and for the understanding of the underlying microscopic processes that led to these phase transitions. We show that a satisfactory answer can only be obtained by means of a complete stability analysis of subsystem solutions. A subsystem solution can be formed by any subset of all possible agent states. The winner between two subsystem solutions can be determined by the average moving direction of the invasion front that separates them, yet it is crucial that the competing subsystem solutions are characterised by a proper composition and spatiotemporal structure before the competition starts. We use the spatial public goods game with diverse tolerance as an example, but the approach has relevance for a wide variety of agent-based models.
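The competition protocol described above can be sketched in a drastically simplified form: prepare each subsystem solution on one half of a lattice and watch which way the invasion front between them moves. The following toy uses a 1D ring with constant payoffs and Fermi-like imitation as a stand-in for the paper's spatial public goods game; it is an illustration of the protocol, not the model itself.

```python
import math
import random

def compete(payoff_a: float, payoff_b: float, size: int = 100,
            steps: int = 20000, seed: int = 0) -> float:
    """Prepare solution A on the left half of a ring and solution B
    on the right, then run Monte Carlo imitation dynamics and return
    the final fraction of A agents. The front between the two
    prepared domains drifts toward the less fit solution."""
    rng = random.Random(seed)
    lattice = ['A'] * (size // 2) + ['B'] * (size // 2)
    payoff = {'A': payoff_a, 'B': payoff_b}
    for _ in range(steps):
        i = rng.randrange(size)
        j = (i + rng.choice((-1, 1))) % size  # random neighbour on the ring
        # Fermi rule: adopt the neighbour's state with a probability
        # that grows with its payoff advantage.
        diff = payoff[lattice[j]] - payoff[lattice[i]]
        if rng.random() < 1.0 / (1.0 + math.exp(-10.0 * diff)):
            lattice[i] = lattice[j]
    return lattice.count('A') / size

# When A earns more, the invasion front moves into B territory and
# A's domain grows until B goes extinct.
print(compete(payoff_a=1.0, payoff_b=0.5))
```

Because the two solutions are given proper domains before the competition starts, the front's average direction, rather than a lucky fluctuation in a mixed initial state, decides the winner.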
Predicting network modules of cell cycle regulators using relative protein abundance statistics.
Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J
2017-02-28
Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data is not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs. Predictive performance is assessed by the areas under receiver operating characteristics curves (AUC). Our models generate an AUC range of 0.83-0.87 as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
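The AUC figure of merit used above has a simple rank-based definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counting one half. A minimal self-contained sketch (the classifier itself is omitted; any scoring model plugs in):

```python
def auc(labels, scores) -> float:
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs in which the
    positive example receives the higher score, ties counted 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separating score gives 1.0; an uninformative constant
# score gives 0.5, the "randomized model" baseline of the abstract.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
print(auc([1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5
```

The 0.50 baseline quoted for the randomized models is exactly this tie-dominated case, so the reported 0.83-0.87 range measures genuine module-specific signal in the abundance statistics.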
The genetic architecture of resistance to virus infection in Drosophila.
Cogni, Rodrigo; Cao, Chuan; Day, Jonathan P; Bridson, Calum; Jiggins, Francis M
2016-10-01
Variation in susceptibility to infection has a substantial genetic component in natural populations, and it has been argued that selection by pathogens may result in it having a simpler genetic architecture than many other quantitative traits. This is important as models of host-pathogen co-evolution typically assume resistance is controlled by a small number of genes. Using the Drosophila melanogaster multiparent advanced intercross, we investigated the genetic architecture of resistance to two naturally occurring viruses, the sigma virus and DCV (Drosophila C virus). We found extensive genetic variation in resistance to both viruses. For DCV resistance, this variation is largely caused by two major-effect loci. Sigma virus resistance involves more genes - we mapped five loci, and together these explained less than half the genetic variance. Nonetheless, several of these had a large effect on resistance. Models of co-evolution typically assume strong epistatic interactions between polymorphisms controlling resistance, but we were only able to detect one locus that altered the effect of the main effect loci we had mapped. Most of the loci we mapped were probably at an intermediate frequency in natural populations. Overall, our results are consistent with major-effect genes commonly affecting susceptibility to infectious diseases, with DCV resistance being a near-Mendelian trait. © 2016 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
Information management in DNA replication modeled by directional, stochastic chains with memory
NASA Astrophysics Data System (ADS)
Arias-Gonzalez, J. Ricardo
2016-11-01
Stochastic chains represent a key variety of phenomena in many branches of science within the context of information theory and thermodynamics. They are typically approached by a sequence of independent events or by a memoryless Markov process. Stochastic chains are of special significance to molecular biology, where genes are conveyed by linear polymers made up of molecular subunits and transferred from DNA to proteins by specialized molecular motors in the presence of errors. Here, we demonstrate that when memory is introduced, the statistics of the chain depends on the mechanism by which objects or symbols are assembled, even in the slow dynamics limit wherein friction can be neglected. To analyze these systems, we introduce a sequence-dependent partition function, investigate its properties, and compare it to the standard normalization defined by the statistical physics of ensembles. We then apply this theory to characterize the enzyme-mediated information transfer involved in DNA replication under real, non-equilibrium conditions, reproducing measured error rates and explaining the typical 100-fold increase in fidelity that is experimentally found when proofreading and editing take place. Our model further predicts that approximately 1 kT has to be consumed to elevate fidelity by one order of magnitude. We anticipate that our results are necessary to interpret configurational order and information management in many molecular systems within biophysics, materials science, communication, and engineering.
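The kind of fidelity gain referred to above can be illustrated with the classic kinetic proofreading picture, in which each independent discrimination stage multiplies the error rate by the same Boltzmann factor. This Hopfield-style sketch is an illustration of proofreading in general, not the memory-chain model of the paper:

```python
import math

def error_rate(delta_over_kT: float, proofreading_stages: int = 0) -> float:
    """A single incorporation step misincorporates with probability
    ~ exp(-delta/kT), where delta is the free-energy discrimination
    between right and wrong substrates. Each independent proofreading
    stage re-applies the same discrimination, multiplying the error
    rate by exp(-delta/kT) again."""
    eps = math.exp(-delta_over_kT)
    return eps ** (1 + proofreading_stages)

base = error_rate(4.6)        # ~1e-2 for a 4.6 kT discrimination step
checked = error_rate(4.6, 1)  # one proofreading stage: ~1e-4
print(checked / base)         # ~1e-2, i.e. a 100-fold gain in fidelity
```

In this simplified picture a 100-fold gain corresponds to spending one extra discrimination cycle; the abstract's thermodynamic accounting (about 1 kT per order of magnitude of fidelity) comes from the paper's own sequence-dependent partition function, not from this sketch.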
Myeloid-derived suppressor cells in breast cancer.
Markowitz, Joseph; Wesolowski, Robert; Papenfuss, Tracey; Brooks, Taylor R; Carson, William E
2013-07-01
Myeloid-derived suppressor cells (MDSCs) are a population of immature myeloid cells defined by their suppressive actions on immune cells such as T cells, dendritic cells, and natural killer cells. MDSCs typically are positive for the markers CD33 and CD11b but express low levels of HLA-DR in humans. In mice, MDSCs are typically positive for both CD11b and Gr1. These cells exert their suppressive activity on the immune system via the production of reactive oxygen species, arginase, and cytokines. These factors subsequently inhibit the activity of multiple protein targets such as the T cell receptor, STAT1, and indoleamine-pyrrole 2,3-dioxygenase. The numbers of MDSCs tend to increase with cancer burden, while inhibiting MDSCs improves disease outcome in murine models. MDSCs also inhibit immune cancer therapeutics. In light of the poor prognosis of metastatic breast cancer in women and the correlation of increasing levels of MDSCs with increasing disease burden, the purposes of this review are to (1) discuss why MDSCs may be important in breast cancer, (2) describe model systems used to study MDSCs in vitro and in vivo, (3) discuss mechanisms involved in MDSC induction/function in breast cancer, and (4) present pre-clinical and clinical studies that explore modulation of the MDSC-immune system interaction in breast cancer. MDSCs inhibit the host immune response in breast cancer patients, and diminishing MDSC actions may improve therapeutic outcomes.
An experimental nonlinear low dynamic stiffness device for shock isolation
NASA Astrophysics Data System (ADS)
Francisco Ledezma-Ramirez, Diego; Ferguson, Neil S.; Brennan, Michael J.; Tang, Bin
2015-07-01
The problem of shock-generated vibration is very common in practice and difficult to isolate due to the high levels of excitation involved and its transient nature. If not properly isolated, it can lead to large transmitted forces and displacements. Typically, classical shock isolation relies on the use of passive stiffness elements to absorb energy by deformation and some damping mechanism to dissipate residual vibration. The approach of using nonlinear stiffness elements is explored in this paper, focusing on providing an isolation system with low dynamic stiffness. The possibilities of using such a configuration for a shock mount are studied experimentally following previous theoretical models. The model studied considers electromagnets and permanent magnets in order to obtain nonlinear stiffness forces using different voltage configurations. It is found that the stiffness nonlinearities could be advantageous in improving shock isolation in terms of absolute displacement and acceleration response when compared with linear elastic elements.
NASA Astrophysics Data System (ADS)
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide similar predictions as a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling made it possible to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range, and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented, and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. The comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Factoring consumers' perspectives into policy decisions for nursing competence.
Lazarus, Jean B; Lee, N Genell
2006-08-01
Health care delivery competence and accountability have typically been defined from providers' perspectives, rather than those of consumers as purchasers of services. In 1999, in the face of broad public concern about nursing competence, the Alabama Board of Nursing developed an accountability model that established consumers at the center of the model and placed accountability for competent nursing practice at all levels of providers, including regulatory agencies, health care organizations, educators, and licensees. The Board then authorized two research projects involving, first, consumers' perceptions of nursing competence and regulation and, second, a comparison of their perceptions with those of licensees, nurse educators, and organizational leaders (N = 1,127). Comparative data evidenced significant differences between consumers' and other participants' perceptions. This article highlights how policy implications derived from research resulted in regulatory changes for nursing competence. Five years of progress in policy changes made in the interest of public safety are summarized.
Role of Auxin in Orchid Development
Novak, Stacey D.; Luna, Lila J.; Gamage, Roshan N.
2014-01-01
Auxin's capacity to regulate aspects of plant development has been well characterized in model plant systems. In contrast, orchids have received considerably less attention, but the realization that many orchid species are endangered has led to culture-based propagation studies which have unveiled some functions for auxin in this system. This mini-review summarizes the many auxin-mediated developmental responses in orchids that are consistent with model systems; however, it also brings to the forefront auxin responses that are unique to orchid development, namely protocorm formation and ovary/ovule maturation. With regard to shoot establishment, we also assess auxin's involvement in orchid germination, PLB formation, and somatic embryogenesis. Further, the review makes evident that auxin flow during germination of the undifferentiated, but mature, orchid embryo mirrors late embryogenesis of typical angiosperms. Also discussed is the use of orchid protocorms in future phytohormone studies to better understand the mechanisms behind meristem formation and organogenesis. PMID:25482818
The arcuate nucleus and NPY contribute to the antitumorigenic effect of calorie restriction
Minor, Robin K.; López, Miguel; Younts, Caitlin M.; Jones, Bruce; Pearson, Kevin J.; Anson, R. Michael; Diéguez, Carlos; de Cabo, Rafael
2011-01-01
Calorie restriction (CR) is known to have profound effects on tumor incidence. A typical consequence of CR is hunger, and we hypothesized that the neuroendocrine response to CR might in part mediate CR's antitumor effects. We tested CR under appetite suppression using two models: neuropeptide Y (NPY) knockout mice and monosodium glutamate (MSG)-injected mice. While CR was protective in control mice challenged with a two-stage skin carcinogenesis model, papilloma development was neither delayed nor reduced by CR in the MSG-treated and NPY knockout mice. Adiponectin levels were also not increased by CR in the appetite-suppressed mice. We propose that some of CR's beneficial effects cannot be separated from those imposed on appetite, and that NPY neurons in the arcuate nucleus of the hypothalamus (ARC) are involved in the translation of reduced intake to downstream physiological and functional benefits. PMID:21385308
Gabbard, Carl; Bobbio, Tatiana
2011-03-01
Several research studies indicate that children with developmental coordination disorder (DCD) show delays with an array of perceptual-motor skills. One of the explanations, based on limited research, is that these children have problems generating and/or monitoring a mental (action) representation of intended actions, termed the "internal modeling deficit" (IMD) hypothesis. According to the hypothesis, children with DCD have significant limitations in their ability to accurately generate and utilize internal models of motor planning and control. The focus of this review is on one of the methods used to examine action representation: motor imagery, which theorists argue provides a window into the process of action representation (e.g., Jeannerod, 2001. Neural simulation of action: A unifying mechanism for motor cognition. Neuroimage, 14, 103-109). Included in the review are performance studies of typically developing and DCD children, and possible brain structures involved.
Composing problem solvers for simulation experimentation: a case study on steady state estimation.
Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M
2014-01-01
Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling involved in re-size and rotation transformations is an essential building block in a typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. This process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that had undergone complex transformations. Obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
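A kernel parameterized by amplitude, angular frequency, standard deviation, and duration can be sketched as a Gaussian-windowed cosine. The exact functional form of the paper's kernel is not given in the abstract, so the Gabor-style form below (with a hypothetical phase parameter as the fifth degree of freedom) is purely an illustrative stand-in:

```python
import math

def parametric_kernel(x: float, amplitude: float = 1.0,
                      omega: float = math.pi, sigma: float = 0.6,
                      duration: float = 2.0, phase: float = 0.0) -> float:
    """Illustrative five-parameter interpolation kernel: a cosine of
    angular frequency omega and the given phase, modulated by a
    Gaussian envelope of spread sigma, truncated to |x| <= duration,
    and scaled by amplitude. Not the paper's published form."""
    if abs(x) > duration:
        return 0.0
    envelope = math.exp(-x * x / (2.0 * sigma ** 2))
    return amplitude * envelope * math.cos(omega * x + phase)

# Like common interpolation kernels, it peaks at x = 0, decays with
# |x|, and has compact support.
print(parametric_kernel(0.0))                          # 1.0
print(parametric_kernel(3.0))                          # 0.0 (outside support)
print(parametric_kernel(0.5) < parametric_kernel(0.0))  # True
```

Fitting such a kernel to an image's re-sampling correlations would then amount to minimizing an error over these five parameters, e.g. by gradient descent as the abstract describes.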
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
NASA Astrophysics Data System (ADS)
Ern, Manfred; Trinh, Quang Thai; Preusse, Peter; Gille, John C.; Mlynczak, Martin G.; Russell, James M., III; Riese, Martin
2018-04-01
Gravity waves are one of the main drivers of atmospheric dynamics. The spatial resolution of most global atmospheric models, however, is too coarse to properly resolve the small scales of gravity waves, which range from tens to a few thousand kilometers horizontally, and from below 1 km to tens of kilometers vertically. Gravity wave source processes involve even smaller scales. Therefore, general circulation models (GCMs) and chemistry climate models (CCMs) usually parametrize the effect of gravity waves on the global circulation. These parametrizations are very simplified. For this reason, comparisons with global observations of gravity waves are needed for an improvement of parametrizations and an alleviation of model biases. We present a gravity wave climatology based on atmospheric infrared limb emissions observed by satellite (GRACILE). GRACILE is a global data set of gravity wave distributions observed in the stratosphere and the mesosphere by the infrared limb sounding satellite instruments High Resolution Dynamics Limb Sounder (HIRDLS) and Sounding of the Atmosphere using Broadband Emission Radiometry (SABER). Typical distributions (zonal averages and global maps) of gravity wave vertical wavelengths and along-track horizontal wavenumbers are provided, as well as gravity wave temperature variances, potential energies and absolute momentum fluxes. This global data set captures the typical seasonal variations of these parameters, as well as their spatial variations. The GRACILE data set is suitable for scientific studies, and it can serve for comparison with other instruments (ground-based, airborne, or other satellite instruments) and for comparison with gravity wave distributions, both resolved and parametrized, in GCMs and CCMs. The GRACILE data set is available as supplementary data at https://doi.org/10.1594/PANGAEA.879658.
Dual Processing Model for Medical Decision-Making: An Extension to Diagnostic Testing
Tsalatsanis, Athanasios; Hozo, Iztok; Kumar, Ambuj; Djulbegovic, Benjamin
2015-01-01
Dual Processing Theories (DPT) assume that human cognition is governed by two distinct types of processes typically referred to as type 1 (intuitive) and type 2 (deliberative). Based on DPT we have derived a Dual Processing Model (DPM) to describe and explain therapeutic medical decision-making. The DPM model indicates that doctors decide to treat when treatment benefits outweigh treatment harms, which occurs when the probability of the disease is greater than the so-called “threshold probability” at which treatment benefits are equal to treatment harms. Here we extend our work to include a wider class of decision problems that involve diagnostic testing. We illustrate applicability of the proposed model in a typical clinical scenario considering the management of a patient with prostate cancer. To that end, we calculate and compare two types of decision-thresholds: one that adheres to expected utility theory (EUT) and the second according to DPM. Our results showed that the decisions to administer a diagnostic test could be better explained using the DPM threshold. This is because such decisions depend on objective evidence of test/treatment benefits and harms as well as type 1 cognition of benefits and harms, which are not considered under EUT. Given that type 1 processes are unique to each decision-maker, this means that the DPM threshold will vary among different individuals. We also showed that when type 1 processes exclusively dominate decisions, ordering a diagnostic test does not affect a decision; the decision is based on the assessment of benefits and harms of treatment. These findings could explain variations in the treatment and diagnostic patterns documented in today’s clinical practice. PMID:26244571
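The EUT side of the comparison rests on the classical treatment threshold: treat when the disease probability exceeds H/(B+H), where B is the net benefit of treating the diseased and H the net harm of treating the healthy. A minimal sketch with illustrative numbers (the DPM threshold adds type 1 weighting terms not shown here):

```python
def eut_treatment_threshold(benefit, harm):
    """Classical EUT threshold: treat when P(disease) > H / (B + H)."""
    return harm / (benefit + harm)

# Illustrative utilities (not from the paper): net benefit 0.4 for
# treating the diseased, net harm 0.1 for treating the healthy.
p_star = eut_treatment_threshold(0.4, 0.1)
print(p_star)  # 0.2: treat once the disease probability exceeds 20%
```

At p = p_star the expected utilities of treating and not treating are equal, which is exactly the "treatment benefits are equal to treatment harms" condition in the abstract.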
Loeber, Rolf; Hinshaw, Stephen P.; Pardini, Dustin A.
2018-01-01
Coercive parent–child interaction models posit that an escalating cycle of negative, bidirectional interchanges influences the development of boys’ externalizing problems and caregivers’ maladaptive parenting over time. However, longitudinal studies examining this hypothesis have been unable to rule out the possibility that between-individual factors account for bidirectional associations between child externalizing problems and maladaptive parenting. Using a longitudinal sample of boys (N = 503) repeatedly assessed eight times across 6-month intervals in childhood (in a range between 6 and 13 years), the current study is the first to use novel within-individual change (fixed effects) models to examine whether parents tend to increase their use of maladaptive parenting strategies following an increase in their son’s externalizing problems, or vice versa. These bidirectional associations were examined using multiple facets of externalizing problems (i.e., interpersonal callousness, conduct and oppositional defiant problems, hyperactivity/impulsivity) and parenting behaviors (i.e., physical punishment, involvement, parent–child communication). Analyses failed to support the notion that when boys increase their typical level of problem behaviors, their parents show an increase in their typical level of maladaptive parenting across the subsequent 6-month period, and vice versa. Instead, across 6-month intervals, within parent-son dyads, changes in maladaptive parenting and child externalizing problems waxed and waned in concert. Fixed-effects models addressing bidirectional relations between parent and child behavior remain severely underrepresented. We recommend that other researchers who have found significant bidirectional parent–child associations using rank-order change models reexamine their data to determine whether these findings hold when examining changes within parent–child dyads. PMID:26780209
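The within-individual (fixed-effects) logic can be illustrated by demeaning: subtracting each dyad's own mean from its repeated measures removes all stable between-dyad differences, leaving only within-dyad change. A toy sketch with hypothetical data, not the study's:

```python
def within_demean(panel):
    """Subtract each unit's own mean from its series, removing all stable
    between-unit differences (the fixed-effects transformation)."""
    out = {}
    for unit, series in panel.items():
        m = sum(series) / len(series)
        out[unit] = [x - m for x in series]
    return out

# Two hypothetical dyads with very different overall levels of harsh
# parenting but identical within-dyad change over three waves:
panel = {"dyad_A": [4.0, 5.0, 6.0], "dyad_B": [1.0, 2.0, 3.0]}
demeaned = within_demean(panel)
print(demeaned["dyad_A"] == demeaned["dyad_B"])  # True: levels drop out
```

Regressing demeaned outcomes on lagged demeaned predictors is what lets such models ask whether changes within a dyad precede changes in the other variable, rather than comparing high-level dyads to low-level ones.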
Computation at a coordinate singularity
NASA Astrophysics Data System (ADS)
Prusa, Joseph M.
2018-05-01
Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold, that is, to generate a grid for the computational spherical-shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities, and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study, reparameterization invariance of geometric objects (scalars, vectors and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured-grid, nonhydrostatic global atmospheric model that is simulating idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions are made, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼20% faster in wall-clock time than the controls, a result of decreased noise at the poles in all cases. In the control simulations, adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, the test simulations demonstrate robust polar behavior independent of grid resolution.
Validation of Ground-based Optical Estimates of Auroral Electron Precipitation Energy Deposition
NASA Astrophysics Data System (ADS)
Hampton, D. L.; Grubbs, G. A., II; Conde, M.; Lynch, K. A.; Michell, R.; Zettergren, M. D.; Samara, M.; Ahrns, M. J.
2017-12-01
One of the major energy inputs into the high-latitude ionosphere and mesosphere is auroral electron precipitation. Not only is the kinetic energy deposited, but the ensuing ionization in the E- and F-region ionosphere modulates parallel and horizontal currents that can dissipate in the form of Joule heating. Global models simulating these interactions typically use electron precipitation models that poorly represent the spatial and temporal complexity of auroral activity as observed from the ground, largely because those precipitation models are based on averages of multiple satellite overpasses separated by periods much longer than typical auroral feature durations. With the development of regional and continental observing networks (e.g., THEMIS ASI), ground-based optical observations that produce quantitative estimates of energy deposition at temporal and spatial scales comparable to those exhibited by auroral activity become a real possibility. Like empirical precipitation models based on satellite overpasses, such optics-based estimates are subject to assumptions and uncertainties, and therefore require validation. Three recent sounding rocket missions offer such an opportunity. The MICA (2012), GREECE (2014) and Isinglass (2017) missions involved detailed ground-based observations of auroral arcs simultaneously with extensive on-board instrumentation. These have afforded an opportunity to examine the results of three optical methods of determining auroral electron energy flux, namely 1) ratios of auroral emissions, 2) green-line temperature vs. emission altitude, and 3) parametric estimates using white-light images. We present comparisons from all three methods for all three missions and summarize the temporal and spatial scales and coverage over which each is valid.
Design search and optimization in aerospace engineering.
Keane, A J; Scanlan, J P
2007-10-15
In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
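The probability-of-improvement criterion driving the surrogate-based search can be sketched for a single objective (the paper uses a multi-objective formulation): given a Kriging model's predictive mean and standard deviation at each candidate design, pick the candidate most likely to beat the best objective value observed so far. All design names and numbers below are illustrative.

```python
import math

def prob_improvement(mu, sigma, best):
    """P(candidate beats `best`) for minimisation under a Gaussian surrogate."""
    if sigma <= 0:
        return 0.0
    z = (best - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Kriging-style predictions (mean, std) of drag for candidate designs:
candidates = {"wing_a": (0.30, 0.01),
              "wing_b": (0.28, 0.05),
              "wing_c": (0.33, 0.10)}
best_observed = 0.29

next_design = max(candidates,
                  key=lambda c: prob_improvement(*candidates[c], best_observed))
print(next_design)  # wing_b: best trade-off of predicted mean and uncertainty
```

The point of the criterion is visible here: wing_c has the largest uncertainty but a poor mean, wing_a a good mean but almost no chance of improving, and wing_b balances the two, so it is the design whose expensive CFD evaluation is run next.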
Distributions of observed death tolls govern sensitivity to human fatalities
Olivola, Christopher Y.; Sagara, Namika
2009-01-01
How we react to humanitarian crises, epidemics, and other tragic events involving the loss of human lives depends largely on the extent to which we are moved by the size of their associated death tolls. Many studies have demonstrated that people generally exhibit a diminishing sensitivity to the number of human fatalities and, equivalently, a preference for risky (vs. sure) alternatives in decisions under risk involving human losses. However, the reason for this tendency remains unknown. Here we show that the distributions of event-related death tolls that people observe govern their evaluations of, and risk preferences concerning, human fatalities. In particular, we show that our diminishing sensitivity to human fatalities follows from the fact that these death tolls are approximately power-law distributed. We further show that, by manipulating the distribution of mortality-related events that people observe, we can alter their risk preferences in decisions involving fatalities. Finally, we show that the tendency to be risk-seeking in mortality-related decisions is lower in countries in which high-mortality events are more frequently observed. Our results support a model of magnitude evaluation based on memory sampling and relative judgment. This model departs from the utility-based approaches typically encountered in psychology and economics in that it does not rely on stable, underlying value representations to explain valuation and choice, or on choice behavior to derive value functions. Instead, preferences concerning human fatalities emerge spontaneously from the distributions of sampled events and the relative nature of the evaluation process. PMID:20018778
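The memory-sampling account can be sketched directly: draw "remembered" death tolls from a power-law (Pareto) distribution via inverse-CDF sampling, and value a magnitude by its rank among the sample. Equal absolute increments then matter less at larger magnitudes, reproducing diminishing sensitivity. All parameters are illustrative.

```python
import random

def pareto_sample(n, alpha=1.5, xmin=1.0, seed=0):
    """Inverse-CDF draws from a Pareto (power-law) distribution."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def subjective_value(x, sample):
    """Relative judgment: fraction of remembered events smaller than x."""
    return sum(s < x for s in sample) / len(sample)

events = pareto_sample(10_000)
# The same +10 deaths looms large at small tolls, negligibly at large ones:
low_gain = subjective_value(20, events) - subjective_value(10, events)
high_gain = subjective_value(110, events) - subjective_value(100, events)
print(low_gain > high_gain)  # True: diminishing sensitivity to fatalities
```

Because the Pareto CDF is concave, the rank-based value function is concave too, which is exactly the risk-seeking-for-losses pattern the abstract describes; reshaping the sampled distribution reshapes the preferences.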
Cysteine desulfurase IscS2 plays a role in oxygen resistance in Clostridium difficile.
Giordano, Nicole; Hastie, Jessica L; Smith, Ashley D; Foss, Elissa D; Gutierrez-Munoz, Daniela F; Carlson, Paul E
2018-06-04
Clostridium difficile is an anaerobic, spore-forming bacterium capable of colonizing the gastrointestinal tract of humans following disruption of the normal microbiota, typically from antibiotic therapy for an unrelated infection. With approximately 500,000 confirmed infections leading to 29,000 deaths per year in the United States, C. difficile infection (CDI) is an urgent public health threat. We previously determined that C. difficile survives in up to 3% oxygen. Low levels of oxygen are present in the intestinal tract, with the higher concentrations associated with the epithelial cell surface. Additionally, antibiotic treatment, the greatest risk factor for CDI, increases intestinal oxygen concentration. Therefore, we hypothesized that the C. difficile genome encodes mechanisms for survival during oxidative stress. Previous data have shown that cysteine desulfurases involved in iron-sulfur cluster assembly help protect bacteria from oxidative stress. In this study, deletion of a putative cysteine desulfurase (CD630_12790/IscS2) involved in the iron-sulfur cluster (Isc) system caused a severe growth defect in the presence of 2% oxygen. Additionally, this mutant showed delayed colonization in a conventional mouse model of CDI, and failed to colonize in a germ-free model, which has higher intestinal oxygen levels. These data imply an undefined role for this cysteine desulfurase in protecting C. difficile from low levels of oxygen in the gut. This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.
Baron-Epel, Orna; Obid, Samira; Fertig, Shahar; Gitelman, Victoria
2016-01-01
Involvement in car crashes is higher among Israeli Arabs compared to Jews. This study characterized perceived descriptive driving norms (PDDNs) within and outside Arab towns/villages and estimated their association with involvement in car crashes. Arab drivers (594) living in 19 towns and villages were interviewed in face-to-face interviews. The questionnaire included questions about involvement in car crashes, PDDNs within and outside the towns/villages, attitudes toward traffic safety laws, traffic law violations, and socioeconomic and demographic variables. PDDNs represent individuals' perceptions on how safe other people typically drive. The low scores indicate a low percentage of drivers performing unsafe behaviors (safer driving-related norms). A structural equation modeling analysis was applied to identify factors associated with PDDNs and involvement in car crashes. A large difference was found in PDDNs within and outside the towns/villages. Mostly, the respondents reported higher rates of unsafe PDDNs within the towns/villages (mean = 3.76, SD = 0.63) and lower rates of PDDNs outside the towns/villages (mean = 2.12, SD = 0.60). PDDNs outside the towns/villages were associated with involvement in a car crash (r = -0.12, P <.01), but those within the towns/villages were not. Within the towns/villages, attitudes toward traffic laws and PDDNs were positively associated with traffic law violations (r = 0.56, P <.001; r = 0.11, P <.001 respectively), where traffic law violations were directly associated with involvement in a car crash (r = -0.14, P <.001). Unsafe PDDNs may add directly and indirectly to unsafe driving and involvement in car crashes in Arab Israelis. Because PDDNs outside towns/villages were better, increased law enforcement within towns/villages may improve these norms and decrease involvement in car crashes.
... usually involves taking prescription hormones. This can include hydrocortisone, prednisone, or cortisone acetate. If your body is ... treatment typically consists of intravenous (IV) injections of hydrocortisone, saline (salt water), and dextrose (sugar). These injections ...
Atypical Pityriasis rosea in a black child: a case report
2009-01-01
Introduction Pityriasis rosea is a self-limited inflammatory condition of the skin that mostly affects healthy children and adolescents. Atypical cases of Pityriasis rosea are fairly common and less readily recognized than typical eruptions, and may pose a diagnostic challenge. Case presentation We report the case of a 12-year-old black child who developed an intensely pruritic papular eruption with marked facial involvement, diagnosed as Pityriasis rosea, which resolved after five weeks leaving slight hyperpigmentation. Conclusion Facial and scalp involvement, post-inflammatory disorders of pigmentation, and papular lesions are characteristics typically associated with Pityriasis rosea in black patients. Knowledge of the features found more frequently in dark-skinned populations may help physicians diagnose atypical Pityriasis rosea in these patients. PMID:20181179
High-Power, High-Thrust Ion Thruster (HPHTion)
NASA Technical Reports Server (NTRS)
Peterson, Peter Y.
2015-01-01
Advances in high-power photovoltaic technology have enabled the possibility of reasonably sized, high-specific power solar arrays. At high specific powers, power levels ranging from 50 to several hundred kilowatts are feasible. Ion thrusters offer long life and overall high efficiency (typically greater than 70 percent efficiency). In Phase I, the team at ElectroDynamic Applications, Inc., built a 25-kW, 50-cm ion thruster discharge chamber and fabricated a laboratory model. This was in response to the need for a single, high-powered engine to fill the gulf between the 7-kW NASA's Evolutionary Xenon Thruster (NEXT) system and a notional 25-kW engine. The Phase II project matured the laboratory model into a protoengineering model ion thruster. This involved the evolution of the discharge chamber to a high-performance thruster by performance testing and characterization via simulated and full beam extraction testing. Through such testing, the team optimized the design and built a protoengineering model thruster. Coupled with gridded ion thruster technology, this technology can enable a wide range of missions, including ambitious near-Earth NASA missions, Department of Defense missions, and commercial satellite activities.
The logical primitives of thought: Empirical foundations for compositional cognitive models.
Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D
2016-07-01
The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
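The core inference can be sketched as Bayesian scoring of an enumerated hypothesis space of Boolean rules under a noisy-label likelihood. The primitive set, features, and noise rate below are illustrative, not the paper's actual grammar.

```python
# Enumerated hypothesis space of Boolean rules over two features
# (an illustrative primitive set, not the paper's actual grammar):
hypotheses = {
    "red":         lambda red, big: red,
    "big":         lambda red, big: big,
    "red and big": lambda red, big: red and big,
    "red or big":  lambda red, big: red or big,
    "not red":     lambda red, big: not red,
}

def posterior(data, eps=0.05):
    """Uniform prior; each observed label is wrong with probability eps."""
    scores = {}
    for name, h in hypotheses.items():
        lik = 1.0
        for (red, big), label in data:
            lik *= (1.0 - eps) if bool(h(red, big)) == label else eps
        scores[name] = lik
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Three labeled example objects: (is_red, is_big) -> in_concept?
data = [((True, True), True), ((True, False), True), ((False, True), False)]
post = posterior(data)
best = max(post, key=post.get)
print(best)  # "red" is the only rule consistent with all three examples
```

Different primitive sets change which hypotheses exist and hence the predicted learning curves; that dependence is what the paper exploits to identify the primitives empirically.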
NASA Astrophysics Data System (ADS)
Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.
Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
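The inverse problem can be sketched with a toy Gaussian plume and a brute-force scan over wind directions standing in for the GA: the direction minimizing the mismatch between predicted and observed receptor concentrations recovers the true wind. Geometry and parameters below are illustrative.

```python
import math

def plume(x, y, theta, q=1.0, sigma=0.3):
    """Toy Gaussian plume from a source at the origin, wind toward theta."""
    d = x * math.cos(theta) + y * math.sin(theta)   # downwind distance
    c = -x * math.sin(theta) + y * math.cos(theta)  # crosswind offset
    if d <= 0:
        return 0.0  # receptor is upwind of the source
    return q / d * math.exp(-c * c / (2.0 * (sigma * d) ** 2))

receptors = [(1.0, 0.2), (2.0, -0.1), (1.5, 0.4)]
true_theta = 0.3
obs = [plume(x, y, true_theta) for x, y in receptors]

def mismatch(theta):
    return sum((plume(x, y, theta) - o) ** 2 for (x, y), o in zip(receptors, obs))

# Brute-force stand-in for the GA: scan candidate wind directions.
best = min((i * 0.01 for i in range(-157, 158)), key=mismatch)
print(abs(best - true_theta) < 0.02)  # True: wind direction recovered
```

A GA would explore the same mismatch surface with a population of candidate directions instead of an exhaustive scan, which is what makes it tractable when the meteorological state has many unknowns rather than one.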
Choice Shift in Opinion Network Dynamics
NASA Astrophysics Data System (ADS)
Gabbay, Michael
Choice shift is a phenomenon associated with small group dynamics whereby group discussion causes group members to shift their opinions in a more extreme direction, so that the mean post-discussion opinion exceeds the mean pre-discussion opinion. Also known as group polarization, choice shift is a robust experimental phenomenon and has been well studied within social psychology. In opinion network models, shifts toward extremism are typically produced by the presence of stubborn agents at the extremes of the opinion axis, whose opinions are much more resistant to change than those of moderate agents. However, we present a model in which choice shift can arise without the assumption of stubborn agents; the model evolves member opinions and uncertainties using coupled nonlinear differential equations. In addition, we briefly describe the results of a recent experiment involving online group discussion of the outcomes of National Football League games. The model predictions concerning the effects of network structure, disagreement level, and team choice (favorite or underdog) are in accord with the experimental results. This research was funded by the Office of Naval Research and the Defense Threat Reduction Agency.
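One way choice shift can arise without stubborn agents is if more extreme members are also more certain, and certainty damps social influence. The toy model below (illustrative only, not Gabbay's actual equations) couples opinions through such an uncertainty weight and produces a post-discussion mean above the pre-discussion mean.

```python
def simulate(opinions, k=0.1, dt=0.05, steps=400):
    """Euler integration of a toy opinion model in which more extreme
    agents are more certain and therefore less movable."""
    x = list(opinions)
    for _ in range(steps):
        u = [1.0 / (1.0 + xi * xi) for xi in x]  # uncertainty, largest near 0
        dx = [k * u[i] * sum(xj - x[i] for xj in x) for i in range(len(x))]
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

pre = [0.2, 0.5, 1.0]   # pre-discussion opinions, all on one side
post = simulate(pre)
shift = sum(post) / len(post) - sum(pre) / len(pre)
print(shift > 0)  # True: the group mean moves toward the extreme
```

Because moderates carry more uncertainty, they move further toward the extreme member than that member moves back, so the mean drifts outward even though no agent is infinitely stubborn.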
Benefit and cost curves for typical pollination mutualisms.
Morris, William F; Vázquez, Diego P; Chacoff, Natacha P
2010-05-01
Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
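The yucca-type case can be sketched with a saturating benefit curve and a linear cost curve; their difference peaks at an intermediate number of visits. All parameter values below are illustrative.

```python
def net_benefit(v, b_max=10.0, half_sat=2.0, cost_per_visit=0.5):
    """Saturating pollination benefit minus a linear per-visit cost."""
    return b_max * v / (v + half_sat) - cost_per_visit * v

best_v = max(range(51), key=net_benefit)
print(best_v)  # 4: net benefit peaks at an intermediate visit number
```

Past the peak, each extra visit adds little pollen but a full increment of cost, so the interaction slides from mutualism toward antagonism; fitting curves of this family by maximum likelihood is the approach the abstract describes.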
Diagnosis by integrating model-based reasoning with knowledge-based reasoning
NASA Technical Reports Server (NTRS)
Bylander, Tom
1988-01-01
Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.
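The composite-hypothesis step can be sketched as greedy assembly: repeatedly add the hypothesis that explains the most still-unexplained observations, breaking ties by plausibility from the knowledge-based side. The gait findings, hypotheses, and scores below are hypothetical, not the system's actual knowledge base.

```python
def compose(observations, explains, plausibility):
    """Greedily assemble a composite hypothesis: add the hypothesis that
    explains the most uncovered observations, ties broken by plausibility."""
    uncovered, composite = set(observations), []
    while uncovered:
        best = max(explains,
                   key=lambda h: (len(explains[h] & uncovered), plausibility[h]))
        if not explains[best] & uncovered:
            break  # nothing left can explain the remainder
        composite.append(best)
        uncovered -= explains[best]
    return composite, uncovered

# Hypothetical gait findings and candidate malfunctions:
explains = {
    "weak_soleus": {"drop_foot", "push_off_loss"},
    "tight_hip":   {"short_stride"},
    "weak_quad":   {"knee_buckle"},
}
plausibility = {"weak_soleus": 0.8, "tight_hip": 0.6, "weak_quad": 0.4}

composite, unexplained = compose(
    ["drop_foot", "short_stride", "knee_buckle"], explains, plausibility)
print(composite, unexplained)  # all three observations get explained
```

In the actual system the "explains" relation comes from the qualitative physical model and the plausibility scores from experiential knowledge; the sketch shows only how the two functionalities combine into one composite answer.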
Brignoli, Riccardo; Brown, J Steven; Skye, H; Domanski, Piotr A
2017-08-01
Preliminary refrigerant screenings typically rely on using cycle simulation models involving thermodynamic properties alone. This approach has two shortcomings. First, it neglects transport properties, whose influence on system performance is particularly strong through their impact on the performance of the heat exchangers. Second, the refrigerant temperatures in the evaporator and condenser are specified as input, while real-life equipment operates at imposed heat sink and heat source temperatures; the temperatures in the evaporator and condenser are established based on the overall heat transfer resistances of these heat exchangers and the balance of the system. The paper discusses a simulation methodology and model that addresses the above shortcomings. This model simulates the thermodynamic cycle operating at specified heat sink and heat source temperature profiles, and includes the ability to account for the effects of thermophysical properties and refrigerant mass flux on refrigerant heat transfer and pressure drop in the air-to-refrigerant evaporator and condenser. Additionally, the model can optimize the refrigerant mass flux in the heat exchangers to maximize the Coefficient of Performance. The new model is validated with experimental data and its predictions are contrasted with those of a model based on thermodynamic properties alone.
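Why an optimal mass flux exists can be sketched with a toy trade-off (not the actual NIST model): heat transfer improves roughly as G^0.8, while pressure-drop losses grow faster, so a schematic COP peaks at an intermediate flux. Exponents and constants below are hypothetical:

```python
# Toy illustration of mass-flux optimization: higher refrigerant mass
# flux G improves heat-transfer coefficients (~G^0.8) but increases
# pressure-drop penalties (~G^1.8), so the schematic COP has an
# interior maximum. All constants are invented for illustration.

def cop(g, a=1.0, b=0.05):
    """Schematic COP: heat-transfer benefit minus pressure-drop penalty."""
    return a * g ** 0.8 - b * g ** 1.8

def optimal_mass_flux(g_values):
    """Grid search for the mass flux maximizing the schematic COP."""
    return max(g_values, key=cop)
```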
Modelling Oil Droplet Breakup in a Turbulent Jet
NASA Astrophysics Data System (ADS)
Philip, Rachel; Hewitt, Ian; Howell, Peter
2017-11-01
In a deep-sea oil spill, a broken pipe near the seabed can result in the release of a turbulent oil jet into the surrounding ocean. The jet's shearing motion will typically cause the oil to break up into smaller droplets, which are then more readily dispersed and decomposed by sea microbes. In order to understand this natural clean-up process, we develop analytical and numerical models for the drop size distribution at different locations in the jet. This involves examining and unifying disparate scales, from the macroscopic jet to the microscopic droplets. We first examine the turbulent jet and we can use its self-similarity to simplify our models. We then turn to the droplet scale, considering the rate at which drops are deformed and broken up. Droplet deformation is precipitated by the jet's turbulent mixing and shearing and thus depends on the macroscopic jet models. We combine these large and small scale models to determine the droplet size distribution, as it varies with jet location. By varying the initial conditions and parameters in these models, we obtain insights into the factors affecting this droplet breakup process and how it may be optimised.
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
NASA Astrophysics Data System (ADS)
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
NASA Astrophysics Data System (ADS)
Gross, N. A.; Hughes, W.
2011-12-01
This talk will outline the organization of a summer school designed to introduce young professionals to a sub-discipline of geophysics. Throughout the 10-year lifetime of the Center for Integrated Space Weather Modeling (CISM), the CISM team has offered a two-week summer school that introduces new graduate students and other interested professionals to the fundamentals of space weather. The curriculum covers basic concepts in space physics, the hazards of space weather, and the utility of computer models of the space environment. Graduate students attend from both inside and outside CISM, from all the sub-disciplines involved in space weather (solar, heliosphere, geomagnetic, and aeronomy), and from across the nation and around the world. In addition, between 1/4 and 1/3 of the participants each year are professionals involved in space weather in some way, such as forecasters from NOAA and the Air Force, Air Force satellite program directors, NASA specialists involved in astronaut radiation safety, and representatives from industries affected by space weather. The summer school has adopted modern pedagogy that has been used successfully at the undergraduate level. A typical daily schedule involves three morning lectures followed by an afternoon lab session. During the morning lectures, student interaction is encouraged using "Timeout to Think" questions and peer instruction, along with question cards for students to ask follow-up questions. During the afternoon labs, students, working in groups of four, answer thought-provoking questions using results from simulations and observational data from a variety of sources. Through their interactions with each other and the instructors, as well as social interactions during the two weeks, students network and form bonds that will last throughout their careers. We believe that this summer school can be used as a model for summer schools in a wide variety of disciplines.
Multiple Cranial Nerve Palsies in Giant Cell Arteritis.
Ross, Michael; Bursztyn, Lulu; Superstein, Rosanne; Gans, Mark
2017-12-01
Giant cell arteritis (GCA) is a systemic vasculitis of medium and large arteries often with ophthalmic involvement, including ischemic optic neuropathy, retinal artery occlusion, and ocular motor cranial nerve palsies. This last complication occurs in 2%-15% of patients, but typically involves only 1 cranial nerve. We present 2 patients with biopsy-proven GCA associated with multiple cranial nerve palsies.
Jeffrey, Jennifer; Whelan, Jodie; Pirouz, Dante M; Snowdon, Anne W
2016-07-01
Campaigns advocating behavioural changes often employ social norms as a motivating technique, favouring injunctive norms (what is typically approved or disapproved) over descriptive norms (what is typically done). Here, we investigate an upside to including descriptive norms in health and safety appeals. Because descriptive norms are easy to process and understand, they should provide a heuristic to guide behaviour in those individuals who lack the interest or motivation to reflect on the advocated behaviour more deeply. When those descriptive norms are positive - suggesting that what is done is consistent with what ought to be done - including them in campaigns should be particularly beneficial at influencing this low-involvement segment. We test this proposition via research examining booster seat use amongst parents with children of booster seat age, and find that incorporating positive descriptive norms into a related campaign is particularly impactful for parents who report low involvement in the topic of booster seat safety. Descriptive norms are easy to state and easy to understand, and our research suggests that these norms resonate with low involvement individuals. As a result, we recommend incorporating descriptive norms when possible into health and safety campaigns. Copyright © 2016. Published by Elsevier Ltd.
The economic impact of project MARS (motivating adolescents to reduce sexual risk).
Dealy, Bern C; Horn, Brady P; Callahan, Tiffany J; Bryan, Angela D
2013-09-01
The purpose of this study was to economically evaluate Project MARS (Motivating Adolescents to Reduce Sexual Risk; T. J. Callahan, E. A. Montanaro, R. E. Magnan, & A. D. Bryan, 2013, "Project MARS: Design of a multi-behavior intervention trial for justice-involved youth," Translational Behavioral Medicine, Vol. 3, pp. 122-130), an ongoing, randomized, sexual-risk-reduction intervention for justice-involved youth. We consider the effect of including viral STIs in the economic analysis, and explore the impact of the MARS intervention on the perceived cost of acquiring STIs to justice-involved youth. 206 participants, ages 14 to 18, participated in a sexual-risk-reduction intervention that included screening and treatment for chlamydia and gonorrhea. A Bernoulli probability model was used to estimate averted STIs attributable to the MARS intervention. The economic benefit of averted STIs was monetized using the direct medical cost of treatment. In addition, we used a contingent valuation (willingness-to-pay) model to investigate the impact of the Project MARS on participants' perceived cost of acquiring an STI. Using the standard outcome domains typically used to evaluate STI interventions, Project MARS resulted in a reduction of $2.08 in direct medical costs for every $1 spent. When viral STIs were added to the economic model, a considerable increase in averted direct medical costs ($2.68 for every $1 spent) was found. Preliminary contingent valuation estimates suggest that participants' willingness-to-pay for averted STIs significantly increased after receiving the MARS intervention. From an economic perspective, Project MARS is a worthwhile program to adopt. Future attention should be given to the impact of behavioral interventions on viral infections. PsycINFO Database Record (c) 2013 APA, all rights reserved.
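The Bernoulli-style accounting the study describes can be sketched as follows: averted infections are the difference in expected per-person infection counts between arms, monetized by direct medical treatment cost and divided by program cost. All numbers here are hypothetical, not the MARS study's parameters:

```python
# Minimal sketch of a Bernoulli probability model for averted STIs.
# Each participant independently acquires an STI with some probability;
# the intervention lowers that probability. Values are illustrative only.

def expected_infections(n, p_infect):
    """Expected cases among n people, each infected with prob. p_infect."""
    return n * p_infect

def benefit_cost_ratio(n, p_control, p_intervention,
                       cost_per_case, program_cost):
    """Averted direct medical costs per dollar of program spending."""
    averted = (expected_infections(n, p_control)
               - expected_infections(n, p_intervention))
    return (averted * cost_per_case) / program_cost
```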
Rheumatoid pseudocyst (geode) of the femoral neck without apparent joint involvement.
Morrey, B F
1987-05-01
Typically, rheumatoid cysts are associated with obvious joint involvement and are located in the subchondral portion of the adjacent joint. Giant pseudocysts (geodes) are uncommon and are characteristically associated with extensive joint destruction. The patient described in this report had a giant pseudocyst of the femoral neck but no joint involvement. To the best of my knowledge, this is the first report of such a manifestation of a giant pseudocyst. As such, it posed a somewhat difficult diagnostic problem.
Langille, Megan M.; Desai, Jay
2015-01-01
Encephalitis due to antibodies to voltage gated potassium channel (VGKC) typically presents with limbic encephalitis and medial temporal lobe involvement on neuroimaging. We describe a case of a 13-year-old girl with encephalitis due to antibodies to VGKC, with signal changes in the cerebellar dentate nuclei bilaterally and clinical features suggesting predominant cerebellar involvement. These findings have never been reported previously in the literature. Our case expands the phenotypic spectrum of this rare condition. PMID:26019428
Stephensen, C B; Welter, J; Thaker, S R; Taylor, J; Tartaglia, J; Paoletti, E
1997-01-01
Canine distemper virus (CDV) infection of ferrets causes an acute systemic disease involving multiple organ systems, including the respiratory tract, lymphoid system, and central nervous system (CNS). We have tested candidate CDV vaccines incorporating the fusion (F) and hemagglutinin (HA) proteins in the highly attenuated NYVAC strain of vaccinia virus and in the ALVAC strain of canarypox virus, which does not productively replicate in mammalian hosts. Juvenile ferrets were vaccinated twice with these constructs, or with an attenuated live-virus vaccine, while controls received saline or the NYVAC and ALVAC vectors expressing rabies virus glycoprotein. Control animals did not develop neutralizing antibody and succumbed to distemper after developing fever, weight loss, leukocytopenia, decreased activity, conjunctivitis, an erythematous rash typical of distemper, CNS signs, and viremia in peripheral blood mononuclear cells (as measured by reverse transcription-PCR). All three CDV vaccines elicited neutralizing titers of at least 1:96. All vaccinated ferrets survived, and none developed viremia. Both recombinant vaccines also protected against the development of symptomatic distemper. However, ferrets receiving the live-virus vaccine lost weight, became lymphocytopenic, and developed the erythematous rash typical of CDV. These data show that ferrets are an excellent model for evaluating the ability of CDV vaccines to protect against symptomatic infection. Because the pathogenesis and clinical course of CDV infection of ferrets is quite similar to that of other Morbillivirus infections, including measles, this model will be useful in testing new candidate Morbillivirus vaccines. PMID:8995676
Abar, Caitlin C.; Turrisi, Robert J.; Mallett, Kimberly A.
2015-01-01
This study examined the extent to which profiles of perceived parenting are associated with trajectories of alcohol-related behaviors across the first year of college. Method: Participants were surveyed five times from the summer prior to college to the fall of the second year. A total of 285 college students were enrolled from the incoming classes of consecutive cohorts of students at a large, public university in the Northeastern U.S. At baseline, participants provided information on their parents' alcohol-related behaviors (e.g., parental modeling of use; perceived approval of underage use) and parenting characteristics (e.g., parental monitoring; parent-child relationship quality). Students also reported on their personal alcohol-related behaviors at each time point. Results: Latent profile analysis was used to identify four subgroups based on the set of parenting characteristics: High Quality (14%), highest parent-teen relationship quality; High Monitoring (31%), highest parental monitoring and knowledge; Low Involvement (30%), poor relationship quality, little monitoring and communication; and Pro-Alcohol (21%), highest parental modeling and approval. Students were then assigned to profiles, and their alcohol-related behaviors were examined longitudinally using latent growth curve modeling. In general, students in the Pro-Alcohol profile displayed the highest baseline levels of typical weekend drinking, heavy episodic drinking, and peak BAC, in addition to showing steeper increases in typical weekend drinking across the first year of college. Discussion: Results support the notion that parental behaviors remain relevant across the first year of college. Differential alcohol-related behaviors across parenting profiles highlight the potential for tailored college interventions. PMID:23915366
Budiman, Cahyo; Koga, Yuichi; Takano, Kazufumi; Kanaya, Shigenori
2011-01-01
Adaptation of microorganisms to low temperatures remains to be fully elucidated. It has been previously reported that peptidyl prolyl cis-trans isomerases (PPIases) are involved in the cold adaptation of various microorganisms, whether hyperthermophiles, mesophiles, or psychrophiles. The rate of cis-trans isomerization at low temperatures is much slower than that at higher temperatures and may cause problems in protein folding. However, the mechanisms by which PPIases are involved in cold adaptation remain unclear. Here we used FK506-binding protein 22, a cold shock protein from the psychrophilic bacterium Shewanella sp. SIB1 (SIB1 FKBP22), as a model protein to decipher the involvement of PPIases in cold adaptation. SIB1 FKBP22 is a homodimer that assumes a V-shaped structure based on a tertiary model. Each monomer consists of an N-domain responsible for dimerization and a C-catalytic domain. SIB1 FKBP22 is a typical cold-adapted enzyme, as indicated by the increase of catalytic efficiency at low temperatures, the downward shift in optimal temperature of activity, and the reduction in conformational stability. SIB1 FKBP22 is considered a foldase and chaperone based on its ability to catalyze refolding of a cis-proline-containing protein and to bind to a folding-intermediate protein, respectively. The foldase and chaperone activities of SIB1 FKBP22 are thought to be important for the cold adaptation of Shewanella sp. SIB1. These activities are also employed by other PPIases involved in the cold adaptation of various microorganisms. Despite the other biological roles of PPIases, we propose that the foldase and chaperone activities of PPIases are the main requirement for overcoming the protein-folding problems that cold stress imposes on microorganisms. PMID:21954357
Bridging the PSI Knowledge Gap: A Multi-Scale Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wirth, Brian D.
2015-01-08
Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that 4 of the top 5 fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit, and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance plasma and material sciences, while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science; an approach that both exploits access to state-of-the-art PSI experiments and modeling, as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions.
The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de-coupled extrapolation to a multi-scale, coupled approach. The PSI Plasma Center consisted of three equal co-centers; one located at the MIT Plasma Science and Fusion Center, one at UC San Diego Center for Energy Research and one at the UC Berkeley Department of Nuclear Engineering, which moved to the University of Tennessee, Knoxville (UTK) with Professor Brian Wirth in July 2010. The Center had three co-directors: Prof. Dennis Whyte led the MIT co-center, the UCSD co-center was led by Dr. Russell Doerner, and Prof. Brian Wirth led the UCB/UTK center. The directors have extensive experience in PSI and material research, and have been internationally recognized in the magnetic fusion, materials and plasma research fields. The co-centers feature keystone PSI experimental and modeling facilities dedicated to PSI science: the DIONISOS/CLASS facility at MIT, the PISCES facility at UCSD, and the state-of-the-art numerical modeling capabilities at UCB/UTK. A collaborative partner in the center is Sandia National Laboratory at Livermore (SNL/CA), which has extensive capabilities with low energy ion beams and surface diagnostics, as well as supporting plasma facilities, including the Tritium Plasma Experiment, all of which significantly augment the Center. Interpretive, continuum material models are available through SNL/CA, UCSD and MIT. The participating institutions of MIT, UCSD, UCB/UTK, SNL/CA and LLNL brought a formidable array of experimental tools and personnel abilities into the PSI Plasma Center. Our work has focused on modeling activities associated with plasma surface interactions that are involved in effects of He and H plasma bombardment on tungsten surfaces. This involved performing computational material modeling of the surface evolution during plasma bombardment using molecular dynamics modeling. 
The principal outcomes of the research efforts within the combined experimental-modeling PSI center are to provide a knowledge base of the mechanisms of surface degradation, and of the influence of the surface on plasma conditions.
Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas
2018-04-13
Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, results of analyses involving statistical and machine-learning techniques to predict solar irradiation for different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if models are trained with the forecast cloud cover data.
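The general shape of a lagged moving average model with an exogenous cloud-cover input can be sketched as below. The window length and coefficient are hypothetical; the paper fits its own LMX variants:

```python
# Rough sketch of an LMX-style forecast: a moving average of recent
# irradiation lags, scaled down linearly by forecast cloud cover.
# Window and beta are invented for illustration, not fitted values.

def lmx_forecast(history, cloud_cover, window=3, beta=-0.5):
    """Forecast next-hour irradiation from the mean of the last
    `window` observations, adjusted by cloud cover (fraction in [0, 1]);
    beta < 0 since cloudier skies reduce irradiation."""
    lagged_mean = sum(history[-window:]) / window
    return lagged_mean * (1.0 + beta * cloud_cover)
```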
Fecal microbiota transplantation and its potential therapeutic uses in gastrointestinal disorders.
Heath, Ryan D; Cockerell, Courtney; Mankoo, Ravinder; Ibdah, Jamal A; Tahan, Veysel
2018-01-01
Typical human gut flora has been well characterized in previous studies and has been noted to have significant differences when compared with the typical microbiome of various disease states involving the gastrointestinal tract. Such diseases include Clostridium difficile colitis, inflammatory bowel disease, functional bowel syndromes, and various states of liver disease. A growing number of studies have investigated the use of a fecal microbiota transplant as a potential therapy for these disease states.
Neuroradiological findings in maple syrup urine disease
Indiran, Venkatraman; Gunaseelan, R. Emmanuel
2013-01-01
Maple syrup urine disease is a rare inborn error of amino acid metabolism involving the catabolic pathway of the branched-chain amino acids. This disease, if left untreated, may cause damage to the brain and may even cause death. These patients typically present with a distinctive maple syrup odour of the sweat and urine. Here we describe a case with relevant magnetic resonance imaging findings and confirmatory biochemical findings. PMID:23772241
Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones
NASA Astrophysics Data System (ADS)
Mao, X.; Gerhard, J. I.; Barry, D. A.
2005-12-01
The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. As well, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds.
Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant for a range of system conditions (e.g, bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
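A standard Michaelis-Menten rate expression of the kind mentioned above, extended with competitive inhibition so that aqueous TCE and cis-DCE slow the dechlorination of less-chlorinated compounds, can be sketched as follows. Parameter values are illustrative, not the study's fitted constants:

```python
# Hedged sketch of a Michaelis-Menten-type dechlorination rate with
# competitive inhibition: rate = v_max*s / (K_s*(1 + sum(I_i/K_I,i)) + s).
# Inhibitor concentrations raise the effective half-saturation constant,
# lowering the rate. All numbers in the test are hypothetical.

def dechlorination_rate(s, v_max, k_s, inhibitors=(), k_i=()):
    """Rate for substrate concentration s with optional competitive
    inhibitors (concentrations) and their inhibition constants."""
    inhibition = sum(conc / k for conc, k in zip(inhibitors, k_i))
    return v_max * s / (k_s * (1.0 + inhibition) + s)
```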
Simplified Modeling of Oxidation of Hydrocarbons
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2008-01-01
A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. The combination of detailed reaction-rate models with the fundamental flow equations therefore yields flow models that are computationally prohibitive, and a reduction of at least an order of magnitude in the dimension of the reaction kinetics is one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base.
The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.
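The light/heavy split at the heart of the method can be sketched as a simple bookkeeping rule. The formula parser below is an illustrative stand-in; the actual base of 13 constituent radicals and 26 light species is not reproduced here:

```python
import re

def carbon_count(formula):
    """Count carbon atoms in a simple formula string such as 'C7H16' or 'CH4'.
    Illustrative parser: handles a leading C with an optional count only."""
    m = re.match(r"C(\d*)", formula)
    if not m:
        return 0
    return int(m.group(1)) if m.group(1) else 1

def classify(formula):
    """Heavy species (3 or more carbons) decompose into constituent radicals;
    light species are followed directly."""
    return "heavy" if carbon_count(formula) >= 3 else "light"
```

Under this rule, heptane (C7H16) is tracked only through its constituent radicals, while species like CH4 or O2 are followed directly in the reduced base.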
An inverse modeling strategy and a computer program to model garnet growth and resorption
NASA Astrophysics Data System (ADS)
Lanari, Pierre; Giuntoli, Francesco
2017-04-01
GrtMod is a computer program for numerical simulation of the pressure-temperature (P-T) evolution of garnet porphyroblasts based on the composition of successive growth zones preserved in natural samples. For each garnet growth stage, a new reactive bulk composition is optimized, allowing for resorption and/or fractionation of the previously crystallized garnet. The successive minimizations are performed using a heuristic search method and an objective function that quantifies the amount by which the predicted garnet composition deviates from the measured values. The automated strategy of GrtMod includes two optimization stages and one refinement stage. In this contribution, we will present several application examples. The new strategy provides quantitative estimates of the optimal P-T conditions, which previously were generally derived only qualitatively from garnet isopleth intersections in equilibrium phase diagrams. GrtMod can also be used to model the evolution of the reactive bulk composition along any P-T trajectory. The results for typical MORB and metapelite compositions demonstrate that fractional crystallization models are required to derive accurate P-T information from garnet compositional zoning. GrtMod can also be used to retrieve complex garnet histories involving several stages of resorption. For instance, it has been used to model the P-T conditions of garnet growth in grains from the Sesia Zone (Western Alps). The compositional variability of successive growth zones is characterized using standardized X-ray maps and the program XMapTools. Permian garnet cores crystallized under granulite facies conditions (T > 800°C and P = 6 kbar), whereas Alpine garnet rims grew at eclogite facies conditions (650°C and 16 kbar) involving several successive episodes of resorption. The model predicts that up to 50 vol% of garnet was dissolved before a new episode of garnet growth.
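The inversion loop at the core of such a strategy can be sketched with a toy forward model: minimize a least-squares misfit between predicted and observed garnet end-member fractions over a P-T grid. The linear "forward model" and all coefficients below are hypothetical stand-ins for the real thermodynamic calculation; only the search structure mirrors the approach, and a plain grid search replaces GrtMod's heuristic search for determinism:

```python
# Toy sketch of a GrtMod-style inversion. The linear forward model is a
# hypothetical placeholder for the actual thermodynamic forward model.

def forward(T, P):
    """Hypothetical garnet end-member fractions (prp, grs, alm) at T (C), P (kbar)."""
    prp = 0.05 + 0.0004 * (T - 400.0)
    grs = 0.05 + 0.0100 * P
    alm = 1.0 - prp - grs
    return (prp, grs, alm)

def misfit(pred, obs):
    """Objective function: squared deviation of predicted from measured composition."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

def invert(obs, T_grid=(400, 900, 10), P_grid=(2.0, 20.0, 0.5)):
    """Exhaustive grid search for the best-fit P-T point."""
    best = None
    T = T_grid[0]
    while T <= T_grid[1]:
        P = P_grid[0]
        while P <= P_grid[1]:
            f = misfit(forward(T, P), obs)
            if best is None or f < best[0]:
                best = (f, T, P)
            P += P_grid[2]
        T += T_grid[2]
    return best[1], best[2]

# Synthetic "measured" rim composition generated at eclogite-facies
# conditions (650 C, 16 kbar), then recovered by the search.
obs = forward(650.0, 16.0)
T_fit, P_fit = invert(obs)
```

The same structure extends naturally to optimizing the reactive bulk composition alongside P and T, which is where the heuristic search becomes necessary.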
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
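The core idea, a capacity-limited working-memory module mixed with incremental RL, can be sketched as follows. The parameter values, the mixture rule, and the perfect one-shot WM store are simplifying assumptions for illustration, not the fitted model of the study:

```python
import math

class RLWM:
    """Mixture of incremental RL (softmax over Q-values) and a capacity-limited,
    one-shot working-memory store. Sketch only; parameters are illustrative."""

    def __init__(self, n_actions, set_size, capacity=3, rho=0.8,
                 alpha=0.1, beta=5.0):
        self.n = n_actions
        self.alpha, self.beta = alpha, beta
        # WM contributes fully only when the set size fits within capacity.
        self.w = rho * min(1.0, capacity / set_size)
        self.Q = {}    # stimulus -> list of action values
        self.wm = {}   # stimulus -> last rewarded action (one-shot memory)

    def policy(self, s):
        q = self.Q.get(s, [0.0] * self.n)
        ex = [math.exp(self.beta * v) for v in q]
        z = sum(ex)
        p_rl = [e / z for e in ex]
        if s in self.wm:
            p_wm = [1.0 if a == self.wm[s] else 0.0 for a in range(self.n)]
        else:
            p_wm = [1.0 / self.n] * self.n
        return [self.w * pw + (1 - self.w) * pr for pw, pr in zip(p_wm, p_rl)]

    def update(self, s, a, r):
        q = self.Q.setdefault(s, [0.0] * self.n)
        q[a] += self.alpha * (r - q[a])       # incremental RL update
        if r > 0:
            self.wm[s] = a                    # one-shot WM storage
```

After a single rewarded trial, the mixed policy selects the correct action far more often than the slow RL update alone supports, and this advantage shrinks as set size exceeds WM capacity, reproducing the set-size effect the task was designed to isolate.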
Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.
Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel
2014-07-01
Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. The objective was to investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing: theory of mind (ToM) and the mirror neuron system (MNS). This was a cross-sectional study using resting-state functional magnetic resonance imaging, involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Outcome measures were statistical parametric maps testing the degree of whole-brain functional connectivity, together with measures of social functioning. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomatology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusively overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD.
This excess ToM-MNS connectivity may reflect immature or aberrant developmental processes in 2 brain networks involved in understanding of others, a domain of impairment in ASD. Further, robust links with sociocommunicative symptoms of ASD implicate atypically increased ToM-MNS connectivity in social deficits observed in ASD.
Similarity Metrics for Closed Loop Dynamic Systems
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.
2008-01-01
To what extent and in what ways can two closed-loop dynamic systems be said to be "similar"? This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or another. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and the attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than robust control analogies in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification, which is nonetheless focused on obtaining open-loop plant models from closed-loop data).
Moreover, the objectives of system identification are not the same as those of a flight test, and hence system identification error metrics are not directly relevant. In applications such as launch vehicles, where the open-loop plant is unstable, it is the similarity of the closed-loop system dynamics of a flight test that is relevant.
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
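For the special case of N Monte Carlo events carrying a common weight w, one such marginalized count distribution takes a negative-binomial form, obtained by integrating the Poisson mean over a gamma distribution describing the MC prediction. The sketch below uses only this well-known equal-weight limit; the general unequal-weight expressions involving the Lauricella function are beyond this snippet:

```python
import math

def marginal_pmf(k, n_mc, w):
    """P(k observed | n_mc MC events of equal weight w): a negative-binomial
    marginal over a Gamma(n_mc, scale=w) distributed prediction.
    Equal-weight special case only; illustrative sketch."""
    log_p = (math.lgamma(k + n_mc) - math.lgamma(k + 1) - math.lgamma(n_mc)
             + k * math.log(w / (1.0 + w)) + n_mc * math.log(1.0 / (1.0 + w)))
    return math.exp(log_p)

def poisson_pmf(k, lam):
    """Standard Poisson likelihood for comparison."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
```

For small n_mc the marginal distribution is noticeably broader than a Poisson with the same mean n_mc * w, which is exactly the extra variance a plain Poisson likelihood ignores; as n_mc grows at fixed mean it converges to the Poisson, mirroring the infinite-statistics limit described above.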
Klin, Ami; Jones, Warren
2006-06-01
The weak central coherence (WCC) account of autism characterizes the learning style of individuals with this condition as favoring localized and fragmented (to the detriment of global and integrative) processing of information. This pattern of learning is thought to lead to deficits in aspects of perception (e.g., face processing), cognition, and communication (e.g., focus on disjointed details rather than "gist" or context), ultimately leading to social impairments. This study was carried out to examine whether WCC applies to social and non-social aspects of learning alike, or, alternatively, whether some areas of learning (e.g., physical reasoning) are spared in autism. A classic social animation, quantified in the Social Attribution Task (SAT), and a novel animation involving physical reasoning (the Physical Attribution Task; PAT) were used to test the domain specificity of the WCC hypothesis. A pilot study involving a reference group of typically developing young adults and a group of individuals with higher-functioning autism spectrum disorders (ASDs) revealed gender differences in the reference group with regard to performance on the PAT (males outperformed females). In a follow-up case-control comparison involving only males, in which the ASD group was matched on age and IQ to a typically developing (TD) group of children, adolescents, and adults, the ASD group showed lower SAT scores and comparable PAT scores relative to the TD group. The interaction of diagnostic group by task was highly significant, with little overlap between the groups in the distributions of SAT minus PAT scores. These results indicated preserved integrative skills in the area of physical attribution in the ASD group, thus failing to support the WCC account as a domain-independent (or more general) model of learning in autism, while highlighting the centrality of the social deficits in the characterization of learning style in the autism spectrum disorders.
A Canine Model of Chronic Graft-versus-Host Disease.
Graves, Scott S; Rezvani, Andrew; Sale, George; Stone, Diane; Parker, Maura; Rosinski, Steven; Spector, Michele; Swearingen, Bruce; Kean, Leslie; Storb, Rainer
2017-03-01
In long-term survivors of allogeneic hematopoietic cell transplantation (HCT), chronic graft-versus-host disease (GVHD) is the major cause of morbidity and mortality and a major determinant of quality of life. Chronic GVHD responds poorly to current immunosuppressive drugs, and while T cell depletion may be preventive, this gain is offset by increased relapse rates. A significant impediment to progress in treating chronic GVHD has been the limitations of existing animal models. The goal of this study was to develop a reproducible comprehensive model of chronic GVHD in the dog. Ten recipient dogs received 920 cGy total body irradiation, infusion of marrow, and an infusion of buffy coat cells from a dog leukocyte antigen (DLA)-mismatched unrelated donor. Postgrafting immunosuppression consisted of methotrexate (days 1, 3, 6, 11) and cyclosporine. The duration of cyclosporine administration was limited to 80 days instead of the clinically used 180 days. This was done to contain costs, as chronic GVHD was expected to develop at earlier time points. All recipients were given ursodiol for liver protection. One dog had graft failure and 9 dogs showed stable engraftment. Eight of the 9 developed de novo chronic GVHD. Dogs progressed with clinical signs of chronic GVHD over a period of 43 to 164 (median, 88) days after discontinuation of cyclosporine. Target organs showed the spectrum of chronic GVHD manifestations that are typically seen clinically. These included lichenoid changes of the skin, fasciitis, ocular involvement (xerophthalmia), conjunctivitis, bronchiolitis obliterans, salivary gland involvement, gingivitis, esophageal involvement, and hepatic involvement. Peripheral blood lymphocyte surface antigen expression of CD28 and inducible costimulator was elevated in dogs with GVHD compared with that in normal dogs, but not significantly so.
Serum levels of IL-8 and monocyte chemotactic protein-1 in GVHD-affected dogs at time of euthanasia were elevated, whereas levels of IL-15 were depressed compared with those in normal dogs. Results indicate that the canine model is well suited for future studies aimed at preventing or treating chronic GVHD. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
DehydroalanylGly, a new post translational modification resulting from the breakdown of glutathione.
Friedrich, Michael G; Wang, Zhen; Schey, Kevin L; Truscott, Roger J W
2018-04-01
The human body contains numerous long-lived proteins which deteriorate with age, typically by racemisation, deamidation, crosslinking and truncation. Previously we elucidated one reaction responsible for age-related crosslinking, the spontaneous formation of dehydroalanine (DHA) intermediates from phosphoserine and cysteine. This resulted in non-disulphide covalent crosslinks. The current paper outlines a novel posttranslational modification (PTM) in human proteins, which involves the addition of dehydroalanylglycine (DHAGly) to Lys residues. Human lens digests were examined by mass spectrometry for the presence of (DHA)Gly (+144.0535 Da) adducts to Lys residues. Peptide model studies were undertaken to elucidate the mechanism of formation. In the lens, this PTM was detected at 18 lysine sites in 7 proteins. Using model peptides, a pathway for its formation was found to involve initial formation of the glutathione degradation product, γ-Glu(DHA)Gly from oxidised glutathione (GSSG). Once the Lys adduct formed, the Glu residue was lost in a hydrolytic mechanism apparently catalysed by the ε-amino group of the Lys. This discovery suggests that within cells, the functional groups of amino acids in proteins may be susceptible to modification by reactive metabolites derived from GSSG. Our finding demonstrates a novel +144.0535 Da PTM arising from the breakdown of oxidised glutathione. Copyright © 2018. Published by Elsevier B.V.
Ohta, Yukari; Nishi, Shinro; Hasegawa, Ryoichi; Hatada, Yuji
2015-01-01
Lignin, an aromatic polymer of phenylpropane units joined predominantly by β-O-4 linkages, is the second most abundant biomass component on Earth. Despite the continuous discharge of terrestrially produced lignin into marine environments, few studies have examined lignin degradation by marine microorganisms. Here, we screened marine isolates for β-O-4 cleavage activity and determined the genes responsible for this enzymatic activity in one positive isolate. Novosphingobium sp. strain MBES04 converted all four stereoisomers of guaiacylglycerol-β-guaiacyl ether (GGGE), a structural mimic of lignin, to guaiacylhydroxypropanone as an end metabolite in three steps involving six enzymes, including a newly identified Nu-class glutathione-S-transferase (GST). In silico searches of the strain MBES04 genome revealed that four GGGE-metabolizing GST genes were arranged in a cluster. Transcriptome analysis demonstrated that the lignin model compounds GGGE and (2-methoxyphenoxy)hydroxypropiovanillone (MPHPV) enhanced the expression of genes involved in energy metabolism, including aromatic-monomer assimilation, and evoked defense responses typically expressed upon exposure to toxic compounds. The findings from this study provide insight into previously unidentified bacterial enzymatic systems and the physiological acclimation of microbes associated with the biological transformation of lignin-containing materials in marine environments. PMID:26477321
Physics Applied to Oil and Gas Exploration
NASA Astrophysics Data System (ADS)
Schwartz, Larry
2002-03-01
Problems involving transport in porous media are of interest throughout the fields of petroleum exploration and environmental monitoring and remediation. The systems being studied can vary in size from centimeter-scale rock or soil samples to kilometer-scale reservoirs and aquifers. Clearly, the smaller the sample, the more easily the medium's structure and composition can be characterized, and the better defined are the associated experimental and theoretical modeling problems. The study of transport in such geological systems is then similar to corresponding problems in the study of other heterogeneous systems such as polymer gels, catalytic beds and cementitious materials. The defining characteristic of porous media is that they are composed of two interconnected percolating networks: the solid network and the pore network. Transport processes of interest in such systems typically involve the flow of electrical current, viscous fluids or fine-grained particles. A closely related phenomenon, nuclear magnetic resonance (NMR), is controlled by diffusion in the pore network. Also of interest is the highly non-linear character of the stress-strain response of granular porous media. We will review the development of two- and three-dimensional model porous media, and will outline the calculation of their physical properties. We will also discuss the direct measurement of the pore structure by synchrotron X-ray microtomography.
Rare earth element scavenging in seawater
NASA Astrophysics Data System (ADS)
Byrne, Robert H.; Kim, Ki-Hyun
1990-10-01
Examinations of rare earth element (REE) adsorption in seawater, using a variety of surface types, indicated that, for most surfaces, light rare earth elements (LREEs) are preferentially adsorbed compared to the heavy rare earths (HREEs). Exceptions to this behavior were observed only for silica phases (glass surfaces, acid-cleaned diatomaceous earth, and synthetic SiO2). The affinity of the rare earths for surfaces can be strongly affected by thin organic coatings. Glass surfaces which acquired an organic coating through immersion in Tampa Bay exhibited adsorptive behavior typical of organic-rich, rather than glass, surfaces. Models of rare earth distributions between seawater and carboxylate-rich surfaces indicate that scavenging processes which involve such surfaces should exhibit a strong dependence on pH and carbonate complexation. Scavenging models involving carboxylate surfaces produce relative REE abundance patterns in good general agreement with observed shale-normalized REE abundances in seawater. Scavenging by carboxylate-rich surfaces should produce HREE enrichments in seawater relative to the LREEs and may produce enrichments of lanthanum relative to its immediate trivalent neighbors. Because distribution coefficients arise as a difference between REE solution complexation (which increases strongly with atomic number) and surface complexation (which apparently also increases with atomic number), the relative solution abundance patterns of the REEs produced by scavenging reactions can be quite complex.
Predicting the risk of sudden cardiac death.
Lerma, Claudia; Glass, Leon
2016-05-01
Sudden cardiac death (SCD) is the result of a change of cardiac activity from normal (typically sinus) rhythm to a rhythm that does not pump adequate blood to the brain. The most common rhythms leading to SCD are ventricular tachycardia (VT) or ventricular fibrillation (VF). These result from an accelerated ventricular pacemaker or ventricular reentrant waves. Despite significant efforts to develop accurate predictors for the risk of SCD, current methods for risk stratification still need to be improved. In this article we briefly review current approaches to risk stratification. Then we discuss the mathematical basis for dynamical transitions (called bifurcations) that may lead to VT and VF. One mechanism for transition to VT or VF involves a perturbation by a premature ventricular complex (PVC) during sinus rhythm. We describe the main mechanisms of PVCs (reentry, independent pacemakers and abnormal depolarizations). An emerging approach to risk stratification for SCD involves the development of individualized dynamical models of a patient based on measured anatomy and physiology. Careful analysis and modelling of dynamics of ventricular arrhythmia on an individual basis will be essential in order to improve risk stratification for SCD and to lay a foundation for personalized (precision) medicine in cardiology. © 2015 The Authors. The Journal of Physiology © 2015 The Physiological Society.
[The consolidation of memory, one century on].
Prado-Alcala, R A; Quirarte, G L
The theory of memory consolidation, based on the work published by Georg Elias Müller and Alfons Pilzecker over a century ago, continues to guide research into the neurobiology of memory, either directly or indirectly. In their classic monographic work, they concluded that fixing memory requires the passage of time (consolidation) and that memory is vulnerable during this period of consolidation, as symptoms of amnesia appear when brain functioning is interfered with before the consolidation process is completed. Most of the experimental data concerning this phenomenon strongly support the theory. In this article we present a review of experiments that have made it possible to put forward a model that explains the amnesia produced under conventional learning conditions, as well as another model related to the protection of memory when the same instances of learning are submitted to a situation involving intensive training. Findings from relatively recent studies have shown that treatments that typically produce amnesia when administered immediately after a learning experience (during the period in which the memory would be consolidating) no longer have any effect when the instances of learning involve a relatively large number of trials or training sessions, or relatively high-intensity aversive events. These results are not congruent with the prevailing theories of consolidation.
The influence of resident involvement on surgical outcomes.
Raval, Mehul V; Wang, Xue; Cohen, Mark E; Ingraham, Angela M; Bentrem, David J; Dimick, Justin B; Flynn, Timothy; Hall, Bruce L; Ko, Clifford Y
2011-05-01
Although the training of surgical residents is often considered in national policy addressing complications and safety, the influence of resident intraoperative involvement on surgical outcomes has not been well studied. We identified 607,683 surgical cases from 234 hospitals from the 2006 to 2009 American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). Outcomes were compared by resident involvement for all general and vascular cases as well as for specific general surgical procedures. After typical ACS NSQIP comorbidity risk adjustment and further adjustment for hospital teaching status and operative time in modeling, resident intraoperative involvement was associated with slightly increased morbidity when assessing overall general or vascular procedures (odds ratio [OR] 1.06; 95% CI 1.04 to 1.09), pancreatectomy or esophagectomy (OR 1.26; 95% CI 1.08 to 1.45), and colorectal resections (OR 1.15; 95% CI 1.09 to 1.22). In contrast, for mortality, resident intraoperative involvement was associated with reductions for overall general and vascular procedures (OR 0.91; 95% CI 0.84 to 0.99), colorectal resections (OR 0.88; 95% CI 0.78 to 0.99), and abdominal aortic aneurysm repair (OR 0.71; 95% CI 0.53 to 0.95). Results were moderated somewhat after hierarchical modeling was performed to account for hospital-level variation, with mortality results no longer reaching significance (overall morbidity OR 1.07; 95% CI 1.03 to 1.10, overall mortality OR 0.97; 95% CI 0.90 to 1.05). Based on risk-adjusted event rates, resident intraoperative involvement is associated with approximately 6.1 additional morbidity events but 1.4 fewer deaths per 1,000 general and vascular surgery procedures. Resident intraoperative participation is associated with slightly higher morbidity rates but slightly decreased mortality rates across a variety of procedures and is minimized further after taking into account hospital-level variation. 
These clinically small effects may serve to reassure patients and others that resident involvement in surgical care is safe and possibly protective with regard to mortality. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
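The conversion from adjusted odds ratios to events per 1,000 procedures depends on the baseline event rate. A sketch of the arithmetic follows; the baseline rates used below are assumed for illustration only, not taken from the NSQIP data:

```python
def excess_events_per_1000(odds_ratio, baseline_risk):
    """Risk difference per 1,000 implied by an odds ratio at a given baseline risk."""
    odds0 = baseline_risk / (1.0 - baseline_risk)
    odds1 = odds_ratio * odds0
    risk1 = odds1 / (1.0 + odds1)
    return 1000.0 * (risk1 - baseline_risk)

# With assumed baselines of ~11.5% morbidity and ~1.6% mortality, the
# reported ORs of 1.06 and 0.91 translate to roughly +6 morbidity events
# and -1.4 deaths per 1,000 procedures, comparable to the figures above.
morbidity = excess_events_per_1000(1.06, 0.115)
mortality = excess_events_per_1000(0.91, 0.016)
```

The same odds ratio implies a larger absolute event difference at a higher baseline rate, which is why the paper reports risk-adjusted event rates alongside the ORs.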
... shop employees People who work in poultry processing plants Veterinarians Typical birds involved are parrots, parakeets, and budgerigars, although other birds have also caused the disease. Psittacosis is a rare disease. Very few cases are reported each year ...
Click It or Ticket Evaluation, 2011
DOT National Transportation Integrated Search
2013-05-01
The 2011 Click It or Ticket (CIOT) mobilization followed a typical selective traffic enforcement program (STEP) sequence, involving paid media, earned media, and enforcement. A nationally representative telephone survey indicated that the mobilizatio...
Genetics Home Reference: Swyer syndrome
... they help determine whether a person will develop male or female sex characteristics. Girls and women typically ... Y protein starts processes that are involved in male sexual development. These processes cause a fetus to ...
Acceleration feedback improves balancing against reflex delay
Insperger, Tamás; Milton, John; Stépán, Gábor
2013-01-01
A model for human postural balance is considered in which the time-delayed feedback depends on position, velocity and acceleration (proportional–derivative–acceleration (PDA) feedback). It is shown that a PDA controller is equivalent to a predictive controller in which the prediction is based on the most recent information about the state, but the control input is not involved in the prediction. A PDA controller is superior to the corresponding proportional–derivative controller in the sense that the PDA controller can stabilize systems with approximately 40 per cent larger feedback delays. The addition of a sensory dead zone to account for the finite detection thresholds of sensory receptors results in highly intermittent, complex oscillations that are a typical feature of human postural sway. PMID:23173196
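A minimal simulation of the delayed PDA loop on a linearized inverted pendulum illustrates the idea. The gains, delay, and plant parameters below are arbitrary illustrative choices, not the paper's values, and no claim about the 40 per cent figure is tested here:

```python
# Euler simulation of x'' = x + u (linearized inverted pendulum) with
# delayed PDA feedback u(t) = -(P x + D x' + A x'')(t - tau).
# All gains and the delay are illustrative.

def simulate(p=3.0, d=2.0, a=0.1, tau=0.1, dt=0.001, t_end=10.0, x0=0.1):
    k = int(round(tau / dt))             # delay expressed in time steps
    n = int(round(t_end / dt))
    x, v = [x0], [0.0]
    acc = []                             # stored accelerations for the A term
    for i in range(n):
        if i < k:
            u = 0.0                      # no feedback before the first delayed sample
        else:
            u = -(p * x[i - k] + d * v[i - k] + a * acc[i - k])
        acc.append(x[i] + u)             # x'' = x + u
        x.append(x[i] + dt * v[i])
        v.append(v[i] + dt * acc[i])
    return x

traj = simulate()
```

Setting a=0 recovers a delayed PD controller; the acceleration term is what extends the range of feedback delays for which the upright position remains stable.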
NASA Astrophysics Data System (ADS)
Yahiaoui, Riad; Manjappa, Manukumara; Srivastava, Yogesh Kumar; Singh, Ranjan
2017-07-01
Electromagnetically induced transparency (EIT) arises from coupling between bright and dark mode resonances that typically involve subwavelength structures with broken symmetry, resulting in an extremely sharp transparency band. Here, we demonstrate a tunable broadband EIT effect in a symmetry-preserved metamaterial structure at terahertz frequencies. In addition, we envisage a photo-active EIT effect in a hybrid metal-semiconductor metamaterial, where the transparency window can be dynamically switched by shining a near-infrared light beam. A robust coupled oscillator model explains the coupling mechanism in the proposed design, showing good agreement with the observed results on the tunable broadband transparency effect. Such active, switchable, broadband metadevices could have applications in delay-bandwidth management, terahertz filtering, and slow light effects.
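The coupled-oscillator description can be sketched directly: a lossy "bright" mode driven by the field, coupled to a low-loss "dark" mode, with the imaginary part of the bright-mode response playing the role of absorption. The frequencies, damping rates, and coupling strength below are arbitrary illustrative numbers, not fitted metamaterial parameters:

```python
# Two coupled harmonic oscillators as a classical EIT analogue.
# All parameter values are illustrative (normalized units).

def im_chi(omega, w_b=1.0, g_b=0.1, w_d=1.0, g_d=0.001, kappa=0.1):
    """Imaginary part of the driven bright-mode response (absorption proxy)."""
    d_b = w_b**2 - omega**2 - 1j * g_b * omega   # lossy bright-mode denominator
    d_d = w_d**2 - omega**2 - 1j * g_d * omega   # low-loss dark-mode denominator
    chi = d_d / (d_b * d_d - kappa**2)           # coupled bright-mode response
    return chi.imag

# Absorption is strongly suppressed on resonance (the transparency window)
# relative to the flanking hybridized resonances and to the uncoupled case.
dip = im_chi(1.0)
flank = im_chi(1.03)
uncoupled = im_chi(1.0, kappa=0.0)
```

Increasing kappa widens the window and pushes the hybridized absorption peaks apart, which is the handle a tunable or photo-active coupling provides for switching the transparency on and off.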
Obsessive Compulsive Disorder: Beyond Segregated Cortico-striatal Pathways
Milad, Mohammed R.; Rauch, Scott L.
2016-01-01
Obsessive-compulsive disorder (OCD) affects ∼2-3% of the population and is characterized by recurrent intrusive thoughts (obsessions) and repetitive behaviors or mental acts (compulsions), typically performed in response to obsessions or related anxiety. In the past few decades, the prevailing models of OCD pathophysiology have focused on cortico-striatal circuitry. More recent neuroimaging evidence, however, points to critical involvement of the lateral and medial orbitofrontal cortices, the dorsal anterior cingulate cortex and amygdalo-cortical circuitry, in addition to cortico-striatal circuitry, in the pathophysiology of the disorder. In this review, we elaborate proposed features of OCD pathophysiology beyond the classic parallel cortico-striatal pathways and argue that this evidence suggests that fear extinction, in addition to behavioral inhibition, may be impaired in OCD. PMID:22138231
Neutral and Non-Neutral Evolution of Duplicated Genes with Gene Conversion
Fawcett, Jeffrey A.; Innan, Hideki
2011-01-01
Gene conversion is one of the major mutational mechanisms involved in the DNA sequence evolution of duplicated genes. It contributes to creating unique patterns of DNA polymorphism within species and divergence between species. A typical pattern is so-called concerted evolution, in which the divergence between duplicates is maintained low for a long time because of frequent exchanges of DNA fragments. In addition, gene conversion affects the DNA evolution of duplicates in various ways especially when selection operates. Here, we review theoretical models to understand the evolution of duplicates in both neutral and non-neutral cases. We also explain how these theories contribute to interpreting real polymorphism and divergence data by using some intriguing examples. PMID:24710144
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1979-01-01
Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winklehner, D.; Leitner, D., E-mail: leitnerd@nscl.msu.edu; Cole, D.
2014-02-15
In this paper we describe the first systematic measurement of beam neutralization (space charge compensation) in the ECR low energy transport line with a retarding field analyzer, which can be used to measure the potential of the beam. Expected trends for the space charge compensation levels such as increase with residual gas pressure, beam current, and beam density could be observed. However, the overall levels of neutralization are consistently low (<60%). The results and the processes involved for neutralizing ion beams are discussed for conditions typical for ECR injector beam lines. The results are compared to a simple theoretical beam plasma model as well as simulations.
In search of how people change. Applications to addictive behaviors.
Prochaska, J O; DiClemente, C C; Norcross, J C
1992-09-01
How people intentionally change addictive behaviors with and without treatment is not well understood by behavioral scientists. This article summarizes research on self-initiated and professionally facilitated change of addictive behaviors using the key trans-theoretical constructs of stages and processes of change. Modification of addictive behaviors involves progression through five stages--pre-contemplation, contemplation, preparation, action, and maintenance--and individuals typically recycle through these stages several times before termination of the addiction. Multiple studies provide strong support for these stages as well as for a finite and common set of change processes used to progress through the stages. Research to date supports a trans-theoretical model of change that systematically integrates the stages with processes of change from diverse theories of psychotherapy.
NASA Astrophysics Data System (ADS)
Drury, Luke O.'C.; Strong, Andrew W.
2017-01-01
We make quantitative estimates of the power supplied to the Galactic cosmic ray population by second-order Fermi acceleration in the interstellar medium, or as it is usually termed in cosmic ray propagation studies, diffusive reacceleration. Using recent results on the local interstellar spectrum, following Voyager 1's crossing of the heliopause, we show that for parameter values, in particular the Alfvén speed, typically used in propagation codes such as GALPROP to fit the B/C ratio, the power contributed by diffusive reacceleration is significant and can be of order 50% of the total Galactic cosmic ray power. The implications for the damping of interstellar turbulence are briefly considered.
Low Wages as Occupational Health Hazards.
Leigh, J Paul; De Vogli, Roberto
2016-05-01
The history of occupational medicine has been characterized by ever-widening recognition of hazards, from fires in 1911 to asbestos in the 1960s, to job strain in the 1990s. In this essay, we argue for broadening the recognition further to include low wages. We first review possible mechanisms explaining the effects of wages on health or health behaviors. Mechanisms involve self-esteem, job satisfaction, deprivation, social rank, the "full" price of bad health, patience, and the ability to purchase health-producing goods and services. Second, we discuss empirical studies that rely on large, typically national, data sets and statistical models that use either instrumental variables or natural experiments and also account for other family income. Finally, we draw implications for laws governing minimum wages and labor unions.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Iijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the interested observation system. Other observation systems besides those based on GNSS are also possible to analyze. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
LANL*V2.0: global modeling and validation
NASA Astrophysics Data System (ADS)
Koller, J.; Zaharia, S.
2011-08-01
We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit and the model can now be used for a much larger region. (2) The new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP type orbit. We find that the neural network performs very well for all these orbits with an error typically ΔL* < 0.2 which corresponds to an error of 3 % at geosynchronous orbit. This new LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.
Burst fractures of the lumbar spine in frontal crashes.
Kaufman, Robert P; Ching, Randal P; Willis, Margaret M; Mack, Christopher D; Gross, Joel A; Bulger, Eileen M
2013-10-01
In the United States, major compression and burst type fractures (>20% height loss) of the lumbar spine occur as a result of motor vehicle crashes, despite the improvements in restraint technologies. Lumbar burst fractures typically require an axial compressive load and have been known to occur during a non-horizontal crash event that involve high vertical components of loading. Recently these fracture patterns have also been observed in pure horizontal frontal crashes. This study sought to examine the contributing factors that would induce an axial compressive force to the lumbar spine in frontal motor vehicle crashes. We searched the National Automotive Sampling System (NASS, 1993-2011) and Crash Injury Research and Engineering Network (CIREN, 1996-2012) databases to identify all patients with major compression lumbar spine (MCLS) fractures and then specifically examined those involved in frontal crashes. National trends were assessed based on weighted NASS estimates. Using a case-control study design, NASS and CIREN cases were utilized and a conditional logistic regression was performed to assess driver and vehicle characteristics. CIREN case studies and biomechanical data were used to illustrate the kinematics and define the mechanism of injury. During the study period 132 NASS cases involved major compression lumbar spine fractures for all crash directions. Nationally weighted, this accounted for 800 cases annually with 44% of these in horizontal frontal crashes. The proportion of frontal crashes resulting in MCLS fractures was 2.5 times greater in late model vehicles (since 2000) as compared to 1990s models. Belted occupants in frontal crashes had a 5 times greater odds of a MCLS fracture than those not belted, and an increase in age also greatly increased the odds. In CIREN, 19 cases were isolated as horizontal frontal crashes and 12 of these involved a major compression lumbar burst fracture primarily at L1. 
All were belted and almost all occurred in late model vehicles with belt pretensioners and bucket seats. Major compression burst fractures of the lumbar spine in frontal crashes were induced via a dynamic axial force transmitted to the pelvis/buttocks into the seat cushion/pan involving belted occupants in late model vehicles with increasing age as a significant factor. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ozgun, Ozkan; Honig, Alice Sterling
2005-01-01
In this low-income Turkish sample, parents reported on father and mother division of childcare labor and satisfaction with division. Regardless of whether they were rearing typical or atypical children, mothers reported a higher level of involvement than fathers in every domain of childcare. In general, both mothers and fathers reported slight…
ERIC Educational Resources Information Center
Smith, Herbert A.
This study involved examining an instructional unit with regard to its concept content and appropriateness for its target audience. The study attempted to determine (1) what concepts are treated explicitly or implicitly, (2) whether there is a hierarchical conceptual structure within the unit, (3) what level of sophistication is required to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowitt, G.T.; Hanzlick, R.L.
1992-06-01
So-called 'typical' autoerotic fatalities are the result of asphyxia due to mechanical compression of the neck, chest, or abdomen, whereas 'atypical' autoeroticism involves sexual self-stimulation by other means. The authors present five atypical autoerotic fatalities that involved the use of dichlorodifluoromethane, nitrous oxide, isobutyl nitrite, cocaine, or compounds containing 1,1,1-trichloroethane. Mechanisms of death are discussed in each case and the pertinent literature is reviewed.
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
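The paper's linear error-model idea can be sketched on synthetic data (the bias parameters, noise level, and calibration setup below are invented for illustration; the actual model is fit to real UWB measurements): fit e = a·d + b to calibration ranges by least squares, then invert it to correct new range estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration campaign: true anchor-tag distances (metres) and
# UWB range estimates afflicted by a linear bias plus small noise.
d_true = np.linspace(1.0, 10.0, 50)
a, b = 0.05, 0.20                     # assumed bias parameters (illustrative)
d_meas = d_true + a * d_true + b + rng.normal(0.0, 0.02, d_true.size)

# Least-squares fit of the error model  e = a*d + b
a_hat, b_hat = np.polyfit(d_true, d_meas - d_true, 1)

# Invert the fitted model to correct range estimates
d_corr = (d_meas - b_hat) / (1.0 + a_hat)

raw_err = np.mean(np.abs(d_meas - d_true))
corr_err = np.mean(np.abs(d_corr - d_true))
```

The corrected ranges would then feed whatever localization algorithm (e.g. trilateration) consumes the inter-node distances.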
On discrete control of nonlinear systems with applications to robotics
NASA Technical Reports Server (NTRS)
Eslami, Mansour
1989-01-01
Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable for interfacing with digital computers. But developing discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task; a robot is also inherently a variable-inertia dynamic system, which involves additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve the fundamental physical characteristics of continuous-time systems such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general, and discrete models of a typical industrial robot developed with full consideration of the principle of conservation of energy, are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods and how to integrate these issues in order to complete the task of discrete control of a robot manipulator are also reviewed.
Assessing the direct effects of deep brain stimulation using embedded axon models
NASA Astrophysics Data System (ADS)
Sotiropoulos, Stamatios N.; Steinmetz, Peter N.
2007-06-01
To better understand the spatial extent of the direct effects of deep brain stimulation (DBS) on neurons, we implemented a geometrically realistic finite element electrical model incorporating anisotropic and inhomogenous conductivities. The model included the subthalamic nucleus (STN), substantia nigra (SN), zona incerta (ZI), fields of Forel H2 (FF), internal capsule (IC) and Medtronic 3387/3389 electrode. To quantify the effects of stimulation, we extended previous studies by using multi-compartment axon models with geometry and orientation consistent with anatomical features of the brain regions of interest. Simulation of axonal firing produced a map of relative changes in axonal activation. Voltage-controlled stimulation, with clinically typical parameters at the dorso-lateral STN, caused axon activation up to 4 mm from the target. This activation occurred within the FF, IC, SN and ZI with current intensities close to the average injected during DBS (3 mA). A sensitivity analysis of model parameters (fiber size, fiber orientation, degree of inhomogeneity, degree of anisotropy, electrode configuration) revealed that the FF and IC were consistently activated. Direct activation of axons outside the STN suggests that other brain regions may be involved in the beneficial effects of DBS when treating Parkinsonian symptoms.
Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J
2014-01-01
In biomedical applications, an experimenter encounters different potential sources of variation in data such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template--used for registering populations across samples, and classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in an R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.
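JCM's full random-effects machinery is beyond a snippet, but the matching step — registering each sample's populations to an overall template — can be sketched with a greedy nearest-centroid assignment (the template and sample means below are invented; JCM itself matches fitted probability distributions, not bare centroids):

```python
import numpy as np

# Hypothetical batch-template population means in a 2-marker space.
template = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])

def match_to_template(sample_means, template):
    """Assign each sample population to its nearest template population
    (greedy, smallest distances first) -- a stand-in for JCM's registration."""
    d = np.linalg.norm(sample_means[:, None, :] - template[None, :, :], axis=2)
    assignment, used = {}, set()
    # visit (sample, template) pairs in order of increasing distance
    for i, j in sorted(((i, j) for i in range(d.shape[0])
                        for j in range(d.shape[1])),
                       key=lambda ij: d[ij]):
        if i not in assignment and j not in used:
            assignment[i] = j
            used.add(j)
    return assignment

# a sample whose populations are the template's, permuted and slightly shifted
sample = np.array([[6.2, 0.1], [0.1, -0.2], [2.8, 3.1]])
mapping = match_to_template(sample, template)
```

With matched labels in hand, populations can be compared across samples or used to classify a new sample, which is the practical payoff the abstract describes.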
Simplified particulate model for coarse-grained hemodynamics simulations
NASA Astrophysics Data System (ADS)
Janoschek, F.; Toschi, F.; Harting, J.
2010-11-01
Human blood flow is a multiscale problem: in first approximation, blood is a dense suspension of plasma and deformable red cells. Physiological vessel diameters range from about one to thousands of cell radii. Current computational models either involve a homogeneous fluid and cannot track particulate effects or describe a relatively small number of cells with high resolution but are incapable of reaching relevant time and length scales. Our approach is to simplify much further than existing particulate models. We combine well-established methods from other areas of physics in order to find the essential ingredients for a minimalist description that still recovers hemorheology. These ingredients are a lattice Boltzmann method describing rigid particle suspensions to account for hydrodynamic long-range interactions and—in order to describe the more complex short-range behavior of cells—anisotropic model potentials known from molecular-dynamics simulations. By trading away detail, we achieve an efficient and scalable implementation which is crucial for our ultimate goal: establishing a link between the collective behavior of millions of cells and the macroscopic properties of blood in realistic flow situations. In this paper we present our model and demonstrate its applicability to conditions typical for the microvasculature.
NASA Astrophysics Data System (ADS)
Tabourot, Laurent; Charleux, Ludovic; Balland, Pascale; Sène, Ndèye Awa; Andreasson, Eskil
2018-05-01
This paper is based on the hypothesis that introducing a distribution of mechanical properties is beneficial for modeling all kinds of mechanical behavior, even in ordinary metallic materials. To support this hypothesis, it must first be shown that modeling based on it can efficiently describe the standard mechanical behavior of materials. As a typical study case, yield stresses at a low scale can be strongly distributed in the ultrathin aluminum foils used in the packaging industry, offering an opportunity to identify their distribution and to show its role in the mechanical properties. An initially reduced model establishes a valuable connection between the hardening curve and the distribution of local yield stresses, which serves to find initial values of the distribution parameters for a more sophisticated identification procedure. With a limited number of representative classes of local yield stresses (three is enough), it is shown that a 3D finite element simulation involving a limited number of elements reproduces the behavior of an ultrathin aluminum foil under tensile testing, in agreement with experimental results. This opens up broad possibilities for modeling complex experimental evidence.
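The connection between a distribution of local yield stresses and the hardening curve can be sketched with a minimal parallel-element model (the weights, modulus, and yield values below are illustrative, not the paper's identified parameters): each class behaves elastically up to its own yield stress, and the composite stress is the weighted sum.

```python
# Minimal sketch (not the authors' finite-element model): a material point as
# parallel elastic/perfectly-plastic elements, one per class of local yield
# stress. Three classes, as in the paper's reduced identification.
E = 70e3                                            # MPa, aluminium-like modulus
classes = [(0.3, 20.0), (0.5, 40.0), (0.2, 80.0)]   # (weight, yield stress MPa)

def stress(strain):
    """Composite stress: each class contributes min(E*eps, sigma_y)."""
    return sum(w * min(E * strain, sy) for w, sy in classes)

eps = [i * 1e-5 for i in range(200)]
sig = [stress(e) for e in eps]
```

The result is a piecewise-linear hardening curve that saturates at the weighted sum of the class yield stresses (42 MPa here), which is the kind of hardening-curve/distribution link the identification procedure exploits.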
Independent validation of Swarm Level 2 magnetic field products and `Quick Look' for Level 1b data
NASA Astrophysics Data System (ADS)
Beggan, Ciarán D.; Macmillan, Susan; Hamilton, Brian; Thomson, Alan W. P.
2013-11-01
Magnetic field models are produced on behalf of the European Space Agency (ESA) by an independent scientific consortium known as the Swarm Satellite Constellation Application and Research Facility (SCARF), through the Level 2 Processor (L2PS). The consortium primarily produces magnetic field models for the core, lithosphere, ionosphere and magnetosphere. Typically, for each magnetic product, two magnetic field models are produced in separate chains using complementary data selection and processing techniques. Hence, the magnetic field models from the complementary processing chains will be similar but not identical. The final step in the overall L2PS therefore involves inspection and validation of the magnetic field models against each other and against data from (semi-) independent sources (e.g. ground observatories). We describe the validation steps for each magnetic field product and the comparison against independent datasets, and we show examples of the output of the validation. In addition, the L2PS also produces a daily set of `Quick Look' output graphics and statistics to monitor the overall quality of Level 1b data issued by ESA. We describe the outputs of the `Quick Look' chain.
NASA Astrophysics Data System (ADS)
Yang, Qian; Sing-Long, Carlos; Chen, Enze; Reed, Evan
2017-06-01
Complex chemical processes, such as the decomposition of energetic materials and the chemistry of planetary interiors, are typically studied using large-scale molecular dynamics simulations that run for weeks on high performance parallel machines. These computations may involve thousands of atoms forming hundreds of molecular species and undergoing thousands of reactions. It is natural to wonder whether this wealth of data can be utilized to build more efficient, interpretable, and predictive models. In this talk, we will use techniques from statistical learning to develop a framework for constructing Kinetic Monte Carlo (KMC) models from molecular dynamics data. We will show that our KMC models can not only extrapolate the behavior of the chemical system by as much as an order of magnitude in time, but can also be used to study the dynamics of entirely different chemical trajectories with a high degree of fidelity. Then, we will discuss three different methods for reducing our learned KMC models, including a new and efficient data-driven algorithm using L1-regularization. We demonstrate our framework throughout on a system of high-temperature high-pressure liquid methane, thought to be a major component of gas giant planetary interiors.
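A minimal version of the KMC side of such a framework is a Gillespie simulation of a toy network whose first-order rates stand in for those estimated from MD data (the network A → B → C and its rate constants below are invented for illustration):

```python
import random

def gillespie(n_a=1000, k1=2.0, k2=1.0, t_end=5.0, seed=0):
    """Gillespie stochastic simulation of A -> B -> C with first-order rates
    k1, k2 (hypothetical stand-ins for rates learned from MD trajectories)."""
    rng = random.Random(seed)
    na, nb, nc, t = n_a, 0, 0, 0.0
    while t < t_end:
        r1, r2 = k1 * na, k2 * nb        # reaction propensities
        rtot = r1 + r2
        if rtot == 0.0:
            break                        # no reactions left to fire
        t += rng.expovariate(rtot)       # exponential waiting time to next event
        if rng.random() * rtot < r1:
            na, nb = na - 1, nb + 1      # A -> B fires
        else:
            nb, nc = nb - 1, nc + 1      # B -> C fires
    return na, nb, nc

na, nb, nc = gillespie()
```

Once rates are learned, such a KMC model is vastly cheaper per step than MD, which is what allows the extrapolation in time that the abstract describes; the L1-regularized reduction would then prune reactions with negligible learned rates.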
Wong, Wang I; Pasterski, Vickie; Hindmarsh, Peter C; Geffner, Mitchell E; Hines, Melissa
2013-04-01
Influences of prenatal androgen exposure on human sex-typical behavior have been established largely through studies of individuals with congenital adrenal hyperplasia (CAH). However, evidence that addresses the potential confounding influence of parental socialization is limited. Parental socialization and its relationship to sex-typical toy play and spatial ability were investigated in two samples involving 137 individuals with CAH and 107 healthy controls. Females with CAH showed more boy-typical toy play and better targeting performance than control females, but did not differ in mental rotations performance. Males with CAH showed worse mental rotations performance than control males, but did not differ in sex-typical toy play or targeting. Reported parental encouragement of girl-typical toy play correlated with girl-typical toy play in all four groups. Moreover, parents reported encouraging less girl-typical, and more boy-typical, toy play in females with CAH than in control females and this reported encouragement partially mediated the relationship between CAH status and sex-typical toy play. Other evidence suggests that the reported parental encouragement of sex-atypical toy play in girls with CAH may be a response to the girls' preferences for boys' toys. Nevertheless, this encouragement could further increase boy-typical behavior in girls with CAH. In contrast to the results for toy play, we found no differential parental socialization for spatial activities and little evidence linking parental socialization to spatial ability. Overall, evidence suggests that prenatal androgen exposure and parental socialization both contribute to sex-typical toy play.
Click It or Ticket Evaluation, 2010
DOT National Transportation Integrated Search
2013-05-01
The 2010 Click It or Ticket (CIOT) mobilization followed a typical selective traffic enforcement program (sTEP) sequence, involving paid media, earned media, and enforcement. A nationally representative telephone survey indicated that the mobilization wa...
SMARTE'S SITE CHARACTERIZATION TOOL
Site Characterization involves collecting environmental data to evaluate the nature and extent of contamination. Environmental data could consist of chemical analyses of soil, sediment, water or air samples. Typically site characterization data are statistically evaluated for thr...
RFID Reader Antenna with Multi-Linear Polarization Diversity
NASA Technical Reports Server (NTRS)
Fink, Patrick; Lin, Greg; Ngo, Phong; Kennedy, Timothy; Rodriguez, Danny; Chu, Andrew; Broyan, James; Schmalholz, Donald
2018-01-01
This paper describes an RFID reader antenna that offers reduced polarization loss compared to that typically associated with reader-tag communications involving arbitrary relative orientation of the reader antenna and the tag.
The White Adolescent's Drug Odyssey.
ERIC Educational Resources Information Center
Lipton, Douglas S.; Marel, Rozanne
1980-01-01
Presents a "typical" case history of a White middle-class teenager who becomes involved with marihuana and subsequently begins to abuse other drugs. Sociological findings from other research are interspersed in the anecdotal account. (GC)
... through the stages than do adults. Stages are: Experimental use. Typically involves peers, done for recreational use; ... Hostility when confronted about drug dependence Lack of control ... Secretive behavior to hide drug use Using drugs even when alone
Brownfields Environmental Insurance and Risk Management Tools Glossary of Terms
This document provides a list of terms that are typically used by the environmental insurance industry, transactional specialists, and other parties involved in using environmental insurance or risk management tools.
1972-08-01
of public health hazards and may alter reuse approaches to de-emphasize the fertilizer uses of these sludges because of the heavy metals involved...materials are removed with organic sludges, or lime sludges where that process is used. Toxic solids would typically include phenols and heavy metals, 80 percent and 40 percent respectively being removable with the organic sludges.
Solomon, Olga; Heritage, John; Yin, Larry; Maynard, Douglas; Bauman, Margaret
2015-01-01
Conversation and discourse analyses were used to examine medical problem presentation in pediatric care. Healthcare visits involving children with ASD and typically developing children were analyzed. We examined how children’s communicative and epistemic capabilities and their opportunities to be socialized into a competent patient role are interactionally achieved. We found that medical problem presentation is designed to contain a ‘pre-visit’ account of the interactional and epistemic work that children and caregivers carry out at home to identify the child’s health problems; and that the intersubjective accessibility of children’s experiences that becomes disrupted by ASD presents a dilemma to all participants in the visit. The article examines interactional roots of unmet healthcare needs and foregone medical care of people with ASD. PMID:26463739
NASA Astrophysics Data System (ADS)
Hanan, E. J.; Tague, C.; Choate, J.; Liu, M.; Adam, J. C.
2016-12-01
Disturbance is a major force regulating C dynamics in terrestrial ecosystems. Evaluating future C balance in disturbance-prone systems requires understanding the underlying mechanisms that drive ecosystem processes over multiple scales of space and time. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by large uncertainties in the initial state of vegetation C and N stores. Watershed models typically use one of two methods to initialize these stores. Spin up involves running a model until vegetation reaches steady state based on climate. This "potential" state, however, assumes the vegetation across the entire watershed has reached maturity and has a homogeneous age distribution. Yet to reliably represent C and N dynamics in disturbance-prone systems, models should be initialized to reflect their non-equilibrium conditions. Alternatively, remote sensing of a single vegetation parameter (typically leaf area index; LAI) can be combined with allometric relationships to allocate C and N to model stores and can reflect non-steady-state conditions. However, allometric relationships are species and region specific and do not account for environmental variation, thus resulting in C and N stores that may be unstable. To address this problem, we developed a new approach for initializing C and N pools using the watershed-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spinup with the spatial fidelity of remote sensing. Unlike traditional spin up, this approach supports non-homogeneous stand ages. We tested our approach in a pine-dominated watershed in central Idaho, which partially burned in July of 2000. We used LANDSAT and MODIS data to calculate LAI across the watershed following the 2000 fire.
We then ran three sets of simulations using spin up, direct measurements, and the combined approach to initialize vegetation C and N stores, and compared our results to remotely sensed LAI following the simulation period. Model estimates of C, N, and water fluxes varied depending on which approach was used. The combined approach provided the best LAI estimates after 10 years of simulation. This method shows promise for improving projections of C, N, and water fluxes in disturbance-prone watersheds.
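The core idea of such a combined initialization can be sketched in a few lines. This is a simplified illustration, not RHESSys code: the pool names, values, and the single multiplicative LAI rescaling are assumptions made for the example.

```python
# Hypothetical sketch: spin-up supplies mechanistically stable C stores per
# patch; remote sensing supplies the spatial pattern. Each patch's spun-up
# stores are rescaled so modeled LAI matches observed (e.g., post-fire) LAI.
def initialize_stores(spinup_stores, spinup_lai, observed_lai):
    """spinup_stores: dict of pool name -> steady-state value (kg C / m^2).
    Returns stores rescaled by the observed/spun-up LAI ratio."""
    ratio = observed_lai / spinup_lai
    return {pool: value * ratio for pool, value in spinup_stores.items()}

# Illustrative values only: a patch that burned in 2000, LAI from LANDSAT.
patch = initialize_stores(
    {"leaf_C": 0.30, "stem_C": 4.0, "root_C": 1.2},
    spinup_lai=4.5,
    observed_lai=1.5,
)
print(patch["leaf_C"])  # 0.30 * (1.5 / 4.5) = 0.1
```

A real implementation would allocate each pool with its own allometric rule rather than one shared ratio; the sketch only shows how the two information sources combine.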
Cortical Thickness Change in Autism during Early Childhood
Smith, Elizabeth; Thurm, Audrey; Greenstein, Deanna; Farmer, Cristan; Swedo, Susan; Giedd, Jay; Raznahan, Armin
2016-01-01
Structural magnetic resonance imaging (MRI) scans at high spatial resolution can detect potential foci of early brain dysmaturation in children with autism spectrum disorders (ASD). In addition, comparison between MRI and behavior measures over time can identify patterns of brain change accompanying specific outcomes. We report structural MRI data from two time points for a total of 84 scans in children with ASD and 30 scans in typical controls (mean age at time one=4.1 years, mean age at time two=6.6 years). Surface-based cortical morphometry and linear mixed effects models were used to link changes in cortical anatomy to both diagnostic status and individual differences in changes in language and autism severity. Compared to controls, children with ASD showed accelerated gray matter volume gain with age, which was driven by a lack of typical age-related cortical thickness (CT) decrease within ten cortical regions involved in language, social cognition and behavioral control. Greater expressive communication gains with age in children with ASD were associated with greater CT gains in a set of right hemisphere homologues to dominant language cortices, potentially identifying a compensatory system for closer translational study. PMID:27061356
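A linear mixed effects analysis of this kind can be sketched with the statsmodels formula API. The data below are simulated; the variable names, effect sizes, sample size, and two-time-point design are illustrative assumptions, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated longitudinal design: cortical thickness (CT) with fixed effects
# for age, diagnosis, and their interaction, and a random intercept per
# subject to account for repeated scans. Effect sizes are invented.
rng = np.random.default_rng(0)
rows = []
for s in range(40):
    asd = int(s < 20)
    intercept = 2.7 + rng.normal(0, 0.05)     # subject-specific baseline CT
    slope = -0.005 if asd else -0.02          # attenuated thinning in ASD
    for age in (4.1, 6.6):                    # two scan time points
        rows.append(dict(subject=s, asd=asd, age=age,
                         ct=intercept + slope * age + rng.normal(0, 0.01)))
df = pd.DataFrame(rows)

model = smf.mixedlm("ct ~ age * asd", df, groups=df["subject"])
result = model.fit()
# A positive age:asd coefficient means less age-related CT decrease in the
# ASD group, mirroring the simulated slopes.
print(result.params["age:asd"])
```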
Non-radioactive TRF assay modifications to improve telomeric DNA detection efficiency in plants
Nigmatullina, Liliia R.; Sharipova, Margarita R.; Shakirov, Eugene V.
2016-01-01
The length of telomeric DNA is often considered a cellular biomarker of aging and general health status. Several telomere length measuring assays have been developed, of which the most common is the Telomere Restriction Fragment (TRF) analysis, which typically involves the use of radioactively labeled oligonucleotide probes. While highly effective, this method potentially poses substantial health concerns and generates radioactive waste. Digoxigenin (DIG) alternatives to radioactive probes have been developed and used successfully in a number of assays. Here we optimize the DIG protocol to measure telomere length in the model plant Arabidopsis thaliana and present evidence that this approach can be used to efficiently and accurately measure telomere length in plants. Specifically, a hybridization temperature of 42 °C instead of the typical 55 °C appears to generate stronger signals. In addition, DIG incorporation at the 5′ end instead of the 3′ end of the labeled oligonucleotide greatly enhances the signal. We conclude that non-radioactive TRF assays can be as efficient as radioactive methods in detecting and measuring telomere length in plants, making this assay suitable for medical and research laboratories unable to utilize radioactivity due to hazardous waste disposal and safety concerns. PMID:28133587
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
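The replacement scheme described above, substituting partial differential terms with their numerical solution equations, can be illustrated by hand for a simple case. The sketch below applies a central-difference replacement for the diffusion term in a FitzHugh-Nagumo (FHN) cable model; the parameter values, boundary handling, and explicit Euler time stepping are assumptions for illustration, not output of the authors' generator.

```python
import numpy as np

# Replacement-scheme illustration: in the cable equation
#     dV/dt = D * d2V/dx2 + f(V, w),
# the partial derivative d2V/dx2 is replaced by its central-difference
# stencil (V[i-1] - 2*V[i] + V[i+1]) / dx**2 before code is written out.
# FHN kinetics with assumed parameters; no-flux (Neumann) boundaries are
# handled by edge padding.
def fhn_step(V, w, dt, dx, D=1.0, a=0.13, b=0.013, c1=0.26, c2=0.1):
    Vp = np.pad(V, 1, mode="edge")
    lap = (Vp[:-2] - 2.0 * V + Vp[2:]) / dx**2   # replaced d2V/dx2 term
    dV = D * lap + c1 * V * (V - a) * (1.0 - V) - c2 * w
    dw = b * (V - w)
    return V + dt * dV, w + dt * dw

V = np.zeros(100)
w = np.zeros(100)
V[:5] = 1.0                       # stimulate the left end of the fiber
for _ in range(2000):             # explicit Euler; D*dt/dx**2 = 0.2 < 0.5
    V, w = fhn_step(V, w, dt=0.05, dx=0.5)
```

The generator's job is then mechanical: emit one instance of the discretized equation per cell, run dependency analysis over the instances, and order them into Java or C statements.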
Zhang, Hongshen; Chen, Ming
2013-11-01
In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and sustainable development in China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and to face good development opportunities. A cross-linked strategy analysis for the typical exterior parts of China's passenger car industry was then conducted based on the SWOT strategies and the established SWOT matrix. Finally, based on this research, a recycling industry model led by automobile manufacturers was recommended. Copyright © 2013 Elsevier Ltd. All rights reserved.
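An internal factor evaluation (IFE) matrix reduces to a weighted sum of factor ratings. A minimal sketch follows; the factors, weights, and ratings are invented for illustration and are not the study's data. By convention, weights sum to 1, ratings run from 1 (major weakness) to 4 (major strength), and a total above 2.5 indicates a strong internal position.

```python
# Hypothetical IFE matrix for a vehicle-plastics recycling industry.
factors = [
    # (factor, weight, rating) -- illustrative values only
    ("Established dismantling network",  0.30, 4),
    ("Low recycled-material quality",    0.25, 2),
    ("Manufacturer take-back programs",  0.25, 3),
    ("High manual sorting cost",         0.20, 1),
]
assert abs(sum(w for _, w, _ in factors) - 1.0) < 1e-9  # weights sum to 1
score = sum(w * r for _, w, r in factors)
print(f"IFE weighted score: {score:.2f}")  # 2.65 -> above the 2.5 benchmark
```

An external factor evaluation (EFE) matrix is computed the same way over opportunities and threats.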
Synthetic, multi-layer, self-oscillating vocal fold model fabrication.
Murray, Preston R; Thomson, Scott L
2011-12-02
Sound for the human voice is produced via flow-induced vocal fold vibration. The vocal folds consist of several layers of tissue, each with differing material properties. Normal voice production relies on healthy tissue and vocal folds, and occurs as a result of complex coupling between aerodynamic, structural dynamic, and acoustic physical phenomena. Voice disorders affect up to 7.5 million people annually in the United States alone and often result in significant financial, social, and other quality-of-life difficulties. Understanding the physics of voice production has the potential to significantly benefit voice care, including clinical prevention, diagnosis, and treatment of voice disorders. Existing methods for studying voice production include in vivo experimentation using human and animal subjects, in vitro experimentation using excised larynges and synthetic models, and computational modeling. Owing to hazardous and difficult instrument access, in vivo experiments are severely limited in scope. Excised larynx experiments have the benefit of anatomical and some physiological realism, but parametric studies involving geometric and material property variables are limited. Further, excised larynges can typically be vibrated only for relatively short periods of time (on the order of minutes). Overcoming some of the limitations of excised larynx experiments, synthetic vocal fold models are emerging as a complementary tool for studying voice production. Synthetic models can be fabricated with systematic changes to geometry and material properties, allowing for the study of healthy and unhealthy human phonatory aerodynamics, structural dynamics, and acoustics. For example, they have been used to study left-right vocal fold asymmetry, clinical instrument development, laryngeal aerodynamics, vocal fold contact pressure, and subglottal acoustics (a more comprehensive list can be found in Kniesburges et al.)
Existing synthetic vocal fold models, however, have either been homogeneous (one-layer models) or have been fabricated using two materials of differing stiffness (two-layer models). This approach does not allow for representation of the actual multi-layer structure of the human vocal folds that plays a central role in governing vocal fold flow-induced vibratory response. Consequently, one- and two-layer synthetic vocal fold models have exhibited disadvantages such as higher onset pressures than are typical for human phonation (onset pressure is the minimum lung pressure required to initiate vibration), unnaturally large inferior-superior motion, and lack of a "mucosal wave" (a vertically-traveling wave that is characteristic of healthy human vocal fold vibration). In this paper, fabrication of a model with multiple layers of differing material properties is described. The model layers simulate the multi-layer structure of the human vocal folds, including epithelium, superficial lamina propria (SLP), intermediate and deep lamina propria (i.e., ligament; a fiber is included for anterior-posterior stiffness), and muscle (i.e., body) layers. Results are included that show that the model exhibits improved vibratory characteristics over prior one- and two-layer synthetic models, including onset pressure closer to human onset pressure, reduced inferior-superior motion, and evidence of a mucosal wave.
Speed of fast and slow rupture fronts along frictional interfaces
NASA Astrophysics Data System (ADS)
Trømborg, Jørgen Kjoshagen; Sveinsson, Henrik Andersen; Thøgersen, Kjetil; Scheibert, Julien; Malthe-Sørenssen, Anders
2015-07-01
The transition from stick to slip at a dry frictional interface occurs through the breaking of microjunctions between the two contacting surfaces. Typically, interactions between junctions through the bulk lead to rupture fronts propagating from weak and/or highly stressed regions, whose junctions break first. Experiments find rupture fronts ranging from quasistatic fronts, via fronts much slower than elastic wave speeds, to fronts faster than the shear wave speed. The mechanisms behind and selection between these fronts are still imperfectly understood. Here we perform simulations in an elastic two-dimensional spring-block model where the frictional interaction between each interfacial block and the substrate arises from a set of junctions modeled explicitly. We find that material slip speed and rupture front speed are proportional across the full range of front speeds we observe. We revisit a mechanism for slow slip in the model and demonstrate that fast slip and fast fronts have a different, inertial origin. We highlight the long transients in front speed even along homogeneous interfaces, and we study how both the local shear to normal stress ratio and the local strength are involved in the selection of front type and front speed. Last, we introduce an experimentally accessible integrated measure of block slip history, the Gini coefficient, and demonstrate that in the model it is a good predictor of the history-dependent local static friction coefficient of the interface. These results will contribute both to building a physically based classification of the various types of fronts and to identifying the important mechanisms involved in the selection of their propagation speed.
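The Gini coefficient of block slip history mentioned above is straightforward to compute. Below is a minimal sketch of the standard formula over a set of per-block slip values; the link to the interface's history-dependent static friction coefficient is the paper's result and is not reproduced here.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array of slip amounts.

    0 means all blocks slipped equally; values approaching 1 mean slip
    was concentrated on a few blocks.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    # G = (2 * sum_i i * x_(i)) / (n * sum x) - (n + 1) / n,
    # with x_(i) the sorted values and i = 1..n.
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n

print(gini([1.0, 1.0, 1.0, 1.0]))  # 0.0  (uniform slip)
print(gini([0.0, 0.0, 0.0, 1.0]))  # 0.75 (all slip on one of four blocks)
```

For n blocks with all slip on one block, the maximum value is 1 - 1/n, which approaches 1 as n grows.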
A Competency Framework for the Practice of Psychology: Procedures and Implications.
Hunsley, John; Spivak, Howard; Schaffer, Jack; Cox, Darcy; Caro, Carla; Rodolfa, Emil; Greenberg, Sandra
2016-09-01
Several competency models for training and practice in professional psychology have been proposed in the United States and Canada. Typically, the procedures used in developing and finalizing these models have involved both expert working groups and opportunities for input from interested parties. What has been missing, however, are empirical data to determine the degree to which the model reflects the views of members of the profession as a whole. Using survey data from 466 licensed or registered psychologists (approximately half of whom completed one of two versions of the survey), we examined the degree to which psychologists, both those engaged primarily in practice and those involved in doctoral training, agreed with the competency framework developed by the Association of State and Provincial Psychology Boards' Practice Analysis Task Force (Rodolfa et al., 2013). When distinct time points in training and licensure or registration were considered (i.e., entry-level supervised practice in practicum settings, advanced-level supervised practice during internship, entry level independent practice, and advanced practice), there was limited agreement by survey respondents with the competency framework's proposal about when specific competencies should be attained. In contrast, greater agreement was evident by respondents with the competency framework when the reference point was focused on entry to independent practice (i.e., the competencies necessary for licensure or registration). We discuss the implications of these findings for the development of competency models, as well as for the implementation of competency requirements in both licensure or registration and training contexts. © 2016 Wiley Periodicals, Inc.
Stratiform clouds and their interaction with atmospheric motion
NASA Technical Reports Server (NTRS)
Clark, John H. E.; Shirer, Hampton N.
1990-01-01
During 1989 and 1990, the researchers saw the publication of two papers and the submission of a third for review on work supported primarily by the previous contract, NAS8-36150; the delivery of an invited talk at the SIAM Conference on Dynamical Systems in Orlando, Florida; and the start of two new projects on the radiative effects of stratocumulus on the large-scale flow. The published papers discuss aspects of stratocumulus circulations (Laufersweiler and Shirer, 1989) and the Hadley to Rossby regime transition in rotating spherical systems (Higgins and Shirer, 1990). The submitted paper (Haack and Shirer, 1990) discusses a new nonlinear model of roll circulations that are forced both dynamically and thermally. The invited paper by H. N. Shirer and R. Wells presented an objective means for determining appropriate truncation levels for low-order models of flows involving two incommensurate periods; this work has application to the Hadley to Rossby transition problem in quasi-geostrophic flows (Moroz and Holmes, 1984). The new projects involve the development of a multi-layered quasi-geostrophic channel model for study of the modulation of the large-scale flow by stratocumulus clouds that typically develop off the coasts of continents. In this model the diabatic forcing in the lowest layer will change in response to the (parameterized) development of extensive fields of stratocumulus clouds. To guide creation of this parameterization scheme, the researchers are producing climatologies of stratocumulus frequency and correlating these frequencies with the phasing and amplitude of the large-scale flow pattern. These topics are discussed in greater detail.
Springer-Wanner, C; Brauns, T
2017-06-01
Ocular manifestation of sarcoidosis occurs in up to 60% of patients with confirmed systemic sarcoidosis and represents one of the most common forms of noninfectious uveitis. In known pulmonary sarcoidosis, ocular involvement can occur in up to 80% of cases. Sarcoidosis can also present only in the eye, without a systemic manifestation (ocular sarcoidosis). Typically, ocular sarcoidosis shows bilateral granulomatous uveitis and can involve all parts of the eye. Apart from an acute anterior uveitis, chronic intermediate or posterior uveitis can be found. In order to prevent a severe reduction of visual acuity leading to blindness, early diagnosis and treatment are essential. For diagnosis, specific clinical signs involving the eye (bilateral granulomatous changes in all parts of the eye) and typical laboratory investigations (angiotensin-converting enzyme, ACE; lysozyme; soluble interleukin 2 receptor, sIL2R; chest X‑ray; chest CT) have to be taken into account, since a biopsy to prove noncaseating granulomas is not performed when changes are restricted to the eye, due to the high risk of vision loss. Ocular sarcoidosis mostly responds well to local or systemic steroid treatment. If the therapeutic effect is insufficient, immunosuppressive agents and biologics can be applied.