Theoretical models of parental HIV disclosure: a critical review.
Qiao, Shan; Li, Xiaoming; Stanton, Bonita
2013-01-01
This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.
Modeling Lolium perenne L. roots in the presence of empirical black holes
USDA-ARS?s Scientific Manuscript database
Plant root models are designed for understanding structural or functional aspects of root systems. When a process is not thoroughly understood, a black box object is used. However, when a process exists but empirical data do not indicate its existence, you have a black hole. The object of this re...
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
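A minimal sketch of how such a voltage-dependent short-circuit probability could be fit; the logistic form, the parameter names, and the data points below are illustrative assumptions, not the paper's actual model or measurements:

```python
# Hypothetical sketch: fitting an empirical probability-of-short model P(V).
# The logistic form and the sample data are assumptions, not the paper's fit.
import numpy as np
from scipy.optimize import curve_fit

def p_short(v, v50, k):
    """Logistic probability that a whisker bridge sustains a short at voltage v."""
    return 1.0 / (1.0 + np.exp(-k * (v - v50)))

# Illustrative (voltage, observed short fraction) pairs -- not measured data.
volts = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
frac = np.array([0.02, 0.10, 0.35, 0.60, 0.85, 0.95])

(v50, k), _ = curve_fit(p_short, volts, frac, p0=[15.0, 0.3])
print(f"Estimated V50 = {v50:.1f} V, slope k = {k:.2f} 1/V")
```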
Empirical intrinsic geometry for nonlinear modeling and time series filtering.
Talmon, Ronen; Coifman, Ronald R
2013-07-30
In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.
NASA Astrophysics Data System (ADS)
Khazaee, I.
2015-05-01
In this study, the performance of a proton exchange membrane fuel cell in mobile applications is investigated analytically. At present the main use and advantages of fuel cells impact particularly strongly on mobile applications such as vehicles, mobile computers and mobile telephones. External parameters such as the cell temperature (T_cell), the operating pressure of the gases (P), and the air stoichiometry (λ_air) affect the performance and voltage losses of the PEM fuel cell. Because many theoretical, empirical and semi-empirical models of the PEM fuel cell exist, it is necessary to compare their accuracy. Theoretical models, obtained from thermodynamic and electrochemical approaches, are exact but complex, so it is easier to use empirical and semi-empirical models to forecast fuel cell system performance in applications such as mobile devices. The main purpose of this study is to obtain the semi-empirical relation of a PEM fuel cell with the least voltage losses. Also, the results are compared with existing experimental results in the literature, and good agreement is observed.
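As an illustration of the kind of semi-empirical polarization relation described above, here is a minimal sketch in the spirit of Amphlett-type models; the functional forms and every coefficient are generic placeholders, not the relation obtained in this study:

```python
# Minimal sketch of a generic semi-empirical PEM polarization curve; all
# coefficients below are illustrative assumptions, not fitted values.
import numpy as np

def cell_voltage(i, E_nernst=1.19, b=0.06, R_ohm=2.5e-4, m=3e-5, n=8e-3):
    """Voltage (V) at current density i (mA/cm^2): open-circuit potential
    minus activation (Tafel), ohmic, and concentration losses."""
    i = np.asarray(i, dtype=float)
    v_act = b * np.log(np.maximum(i, 1e-6))   # Tafel activation loss
    v_ohm = R_ohm * i                         # ohmic loss
    v_conc = m * np.exp(n * i)                # empirical concentration loss
    return E_nernst - v_act - v_ohm - v_conc

print(cell_voltage([100.0, 500.0, 1000.0]))   # falling polarization curve
```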
NASA Astrophysics Data System (ADS)
Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.
2018-03-01
With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, either for building integrity or for personal security, increases in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a new promising formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.
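TP-path models of this kind are expressed in Hopkinson-Cranz scaled coordinates, which is why the study reports scaled heights of burst in cm/kg^{1/3}; a trivial sketch of that scaling (the function name is ours):

```python
# Sketch of the Hopkinson-Cranz scaling underlying the TP-path models:
# heights and ranges are compared in scaled units, HOB / W^(1/3).
def scaled_height_of_burst(hob_cm: float, charge_kg: float) -> float:
    """Scaled HOB in cm/kg^(1/3), the abscissa of TP-path charts."""
    return hob_cm / charge_kg ** (1.0 / 3.0)

# Example: a 1.5 kg charge detonated 150 cm above ground.
print(f"{scaled_height_of_burst(150.0, 1.5):.1f} cm/kg^(1/3)")  # ~131.0
```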
Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
The effect of finite span on the jet-surface interaction noise source and the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed that all surfaces are semi-infinite. This extended abstract briefly describes the experimental setup and data, leaving the empirical modeling aspects for the final paper.
The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model
Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim
2013-01-01
There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Empirical microeconomics action functionals
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Du, Xin; Tanputraman, Winson
2015-06-01
A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal-time correlation of the market commodity prices as well as their cubic and quartic moments, via a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied, and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).
Armour, Cherie; O'Connor, Maja; Elklit, Ask; Elhai, Jon D
2013-10-01
The three-factor structure of posttraumatic stress disorder (PTSD) specified by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, is not supported in the empirical literature. Two alternative four-factor models have received a wealth of empirical support. However, a consensus regarding which is superior has not been reached. A recent five-factor model has been shown to provide superior fit over the existing four-factor models. The present study investigated the fit of the five-factor model against the existing four-factor models and assessed the resultant factors' association with depression in a bereaved European trauma sample (N = 325). The participants were assessed for PTSD via the Harvard Trauma Questionnaire and depression via the Beck Depression Inventory. The five-factor model provided superior fit to the data compared with the existing four-factor models. In the dysphoric arousal model, depression was equally related to both dysphoric arousal and emotional numbing, whereas depression was more related to dysphoric arousal than to anxious arousal.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
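A minimal sketch of the stated model structure: at one observer angle, fit a response that is linear in non-dimensional surface position and logarithmic in Strouhal number. The synthetic data and coefficients are invented for illustration, not the NASA dataset:

```python
# Sketch of the stated empirical form at a fixed observer angle:
# dSPL = c0 + c1*(x/D) + c2*(h/D) + c3*log10(St). Placeholder data only.
import numpy as np

def fit_shielding(xD, hD, St, delta_spl):
    """Least-squares fit of the linear/logarithmic model coefficients."""
    A = np.column_stack([np.ones_like(xD), xD, hD, np.log10(St)])
    coef, *_ = np.linalg.lstsq(A, delta_spl, rcond=None)
    return coef

rng = np.random.default_rng(0)
xD, hD = rng.uniform(2, 20, 50), rng.uniform(0.5, 4, 50)
St = rng.uniform(0.1, 10, 50)
dspl = 1.0 - 0.3 * xD + 0.8 * hD - 2.0 * np.log10(St) + rng.normal(0, 0.2, 50)
print(fit_shielding(xD, hD, St, dspl))  # recovers ~[1.0, -0.3, 0.8, -2.0]
```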
ERIC Educational Resources Information Center
Wang, Jianjun; Oliver, Steve; Garcia, Augustine
2004-01-01
Positive self-concept and good understanding of science are important indicators of scientific literacy endorsed by professional organizations. The existing research literature suggests that these two indicators are reciprocally related and mutually reinforcing. Generalization of the reciprocal model demands empirical studies in different…
Attachment-Based Family Therapy: A Review of the Empirical Support.
Diamond, Guy; Russon, Jody; Levy, Suzanne
2016-09-01
Attachment-based family therapy (ABFT) is an empirically supported treatment designed to capitalize on the innate, biological desire for meaningful and secure relationships. The therapy is grounded in attachment theory and provides an interpersonal, process-oriented, trauma-focused approach to treating adolescent depression, suicidality, and trauma. Although a process-oriented therapy, ABFT offers a clear structure and road map to help therapists quickly address attachment ruptures that lie at the core of family conflict. Several clinical trials and process studies have demonstrated empirical support for the model and its proposed mechanism of change. This article provides an overview of the clinical model and the existing empirical support for ABFT. © 2016 Family Process Institute.
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
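For readers unfamiliar with weights-of-evidence, the core computation reduces to log-ratios of conditional probabilities of the evidence pattern given ignition; a toy sketch with invented cell counts (not the San Diego data):

```python
# Minimal weights-of-evidence sketch for one binary evidence layer
# (e.g., "within 100 m of a road"); the cell counts below are invented.
import math

def weights_of_evidence(n_fire_in, n_fire_out, n_cells_in, n_cells_out):
    """W+ and W- for presence/absence of the evidence pattern."""
    p_b_d = n_fire_in / (n_fire_in + n_fire_out)        # P(B | fire)
    p_b_nd = (n_cells_in - n_fire_in) / (
        (n_cells_in - n_fire_in) + (n_cells_out - n_fire_out))  # P(B | no fire)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus

print(weights_of_evidence(n_fire_in=80, n_fire_out=20,
                          n_cells_in=1000, n_cells_out=9000))  # (~2.15, ~-1.51)
```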
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recordings of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and the hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and higher durations thereafter, compared with the existing relationships.
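A hedged sketch of the kind of duration regression described; the functional form shown is one common choice and the coefficients are synthetic, not the relationships developed in the paper:

```python
# Illustrative duration model: ln(D) = c1 + c2*M + c3*ln(R) + c4*S,
# with S = 1 for soil and 0 for rock. Synthetic data, assumed form.
import numpy as np
from scipy.optimize import curve_fit

def ln_duration(X, c1, c2, c3, c4):
    M, R, S = X
    return c1 + c2 * M + c3 * np.log(R) + c4 * S

rng = np.random.default_rng(1)
M = rng.uniform(3.0, 6.5, 200)            # moment magnitude
R = rng.uniform(4.0, 1000.0, 200)         # hypocentral distance, km
S = rng.integers(0, 2, 200).astype(float) # site flag
lnD = -1.0 + 0.7 * M + 0.3 * np.log(R) + 0.2 * S + rng.normal(0, 0.3, 200)

coef, _ = curve_fit(ln_duration, (M, R, S), lnD)
print(np.round(coef, 2))  # ~[-1.0, 0.7, 0.3, 0.2]
```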
Statistical microeconomics and commodity prices: theory and empirical results.
Baaquie, Belal E
2016-01-13
A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal-time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal-time correlation functions of the market commodity prices via a perturbation expansion (Baaquie et al. 2015). Nine commodities drawn from the energy, metal and grain sectors are empirically studied, and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90, using only six parameters. © 2015 The Author(s).
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Process-based soil erodibility estimation for empirical water erosion models
USDA-ARS?s Scientific Manuscript database
A variety of modeling technologies exist for water erosion prediction each with specific parameters. It is of interest to scrutinize parameters of a particular model from the point of their compatibility with dataset of other models. In this research, functional relationships between soil erodibilit...
Studies of the effects of curvature on dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Srinivasan, Ram; Reynolds, Robert S.; White, Craig D.
1992-01-01
An analytical program was conducted using both three-dimensional numerical and empirical models to investigate the effects of transition liner curvature on the mixing of jets injected into a confined crossflow. The numerical code is of the TEACH type with hybrid numerics; it uses the power-law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. From the results of the numerical calculations, an existing empirical model for the temperature field downstream of single and multiple rows of jets injected into a straight rectangular duct was extended to model the effects of curvature. Temperature distributions, calculated with both the numerical and empirical models, are presented to show the effects of radius of curvature and inner and outer wall injection for single and opposed rows of cool dilution jets injected into a hot mainstream flow.
NASA Astrophysics Data System (ADS)
Mejnertsen, L.; Eastwood, J. P.; Hietala, H.; Schwartz, S. J.; Chittenden, J. P.
2018-01-01
Empirical models of the Earth's bow shock are often used to place in situ measurements in context and to understand the global behavior of the foreshock/bow shock system. They are derived statistically from spacecraft bow shock crossings and typically treat the shock surface as a conic section parameterized according to a uniform solar wind ram pressure, although more complex models exist. Here a global magnetohydrodynamic simulation is used to analyze the variability of the Earth's bow shock under real solar wind conditions. The shape and location of the bow shock is found as a function of time, and this is used to calculate the shock velocity over the shock surface. The results are compared to existing empirical models. Good agreement is found in the variability of the subsolar shock location. However, empirical models fail to reproduce the two-dimensional shape of the shock in the simulation. This is because significant solar wind variability occurs on timescales less than the transit time of a single solar wind phase front over the curved shock surface. Empirical models must therefore be used with care when interpreting spacecraft data, especially when observations are made far from the Sun-Earth line. Further analysis reveals a bias to higher shock speeds when measured by virtual spacecraft. This is attributed to the fact that the spacecraft only observes the shock when it is in motion. This must be accounted for when studying bow shock motion and variability with spacecraft data.
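Empirical bow shock models of the kind being tested typically combine a conic-section shape with a ram-pressure scaling of the standoff distance (commonly a P^(-1/6) dependence, from pressure balance). A generic sketch with illustrative constants, not any specific published model:

```python
# Generic conic-section bow shock parameterized by solar wind ram pressure.
# R0, eps, and p_ref are illustrative placeholders, not a published fit.
import numpy as np

def bow_shock_r(theta, p_dyn_nPa, R0=13.5, eps=1.0, p_ref=2.0):
    """Radial shock distance (Earth radii) vs angle from the Sun-Earth line."""
    standoff = R0 * (p_dyn_nPa / p_ref) ** (-1.0 / 6.0)  # pressure scaling
    L = standoff * (1.0 + eps)                           # conic semi-latus rectum
    return L / (1.0 + eps * np.cos(theta))

print(bow_shock_r(theta=0.0, p_dyn_nPa=4.0))  # compressed subsolar shock, ~12 RE
```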
Review essay: empires, ancient and modern.
Hall, John A
2011-09-01
This essay draws attention to two books on empires by historians which deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth-century empires, noting that their demise was caused most of all by the failure of balance-of-power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.
Developing Rational-Empirical Views of Intelligent Adaptive Behavior
2004-08-01
Berg-Cross, Gary (Knowledge Strategies, Potomac, Maryland); PERMIS 2004. Fragmentary abstract: "...biological frame to the information processing model and outline our understanding of intentions and beliefs that co-exist with rational and... notion that the evolution of cognition has produced memory/knowledge systems that specialize in the processing of particular types of information..."
[Impact of small-area context on health: proposing a conceptual model].
Voigtländer, S; Mielck, A; Razum, O
2012-11-01
Recent empirical studies stress the impact of features related to the small-area context on individual health. However, so far there exists no standard explanatory model that integrates the different kinds of such features and that conceptualises their relation to individual characteristics of social inequality. A review of theoretical publications on the relationship between social position and health as well as existing conceptual models for the impact of features related to the small-area context on health was undertaken. In the present article we propose a conceptual model for the health impact of the small-area context. This model conceptualises the location of residence as one dimension of social inequality that affects health through the resources as well as stressors which are inherent in the small-area context. The proposed conceptual model offers an orientation for future empirical studies and can serve as a basis for further discussions concerning the health relevance of the small-area context. © Georg Thieme Verlag KG Stuttgart · New York.
Sheehan, D V; Sheehan, K H
1982-08-01
The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.
ERIC Educational Resources Information Center
Sheepway, Lyndal; Lincoln, Michelle; McAllister, Sue
2014-01-01
Background: Speech-language pathology students gain experience and clinical competency through clinical education placements. However, currently little empirical information exists regarding how competency develops. Existing research about the effectiveness of placement types and models in developing competency is generally descriptive and based…
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
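The paper's two estimating functions are not reproduced here; as a rough illustration of what "decoupling" treatment-effect estimation from working outcome regressions looks like, here is a standard augmented, regression-adjusted ATE sketch (a different, conventional estimator, shown only for orientation):

```python
# Not the paper's estimator: a standard augmented regression-adjusted ATE
# that stays consistent even if the working regressions are misspecified.
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_ate(y, t, X):
    """ATE with treatment-specific working regressions m1, m0."""
    m1 = LinearRegression().fit(X[t == 1], y[t == 1])
    m0 = LinearRegression().fit(X[t == 0], y[t == 0])
    mu1, mu0 = m1.predict(X), m0.predict(X)
    p = t.mean()  # randomization probability
    # Model predictions plus residual corrections ("augmentation").
    return np.mean(mu1 - mu0 + t * (y - mu1) / p - (1 - t) * (y - mu0) / (1 - p))

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
t = rng.integers(0, 2, 500).astype(float)
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * t + rng.normal(size=500)
print(round(adjusted_ate(y, t, X), 2))  # ~2.0, the true effect
```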
Long, Kimberly; Wodarski, John S
2010-05-01
Over the past three decades, existing literature has demanded, and continues to demand, accountability in the delivery of social services through empirically based research and implementation of established norms: this is, in and of itself, the true basis of social work. It is through these norms and empirically established models and theories of treatment that a social worker can really do what he/she wants to do: help the client. This article will describe the nuts and bolts of social work, i.e., those theories, models, and established norms of practice. It is the authors' desire that all social workers be educated in the nuts and bolts (basics) and that this education be based on empirical evidence that supports behavioral change through intervention and modification.
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1975-01-01
An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.
Brown, Patrick T; Li, Wenhong; Cordero, Eugene C; Mauget, Steven A
2015-04-21
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal.
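A toy sketch of constructing such an empirical envelope from an unforced series: collect all N-year trends and take percentile bounds. The AR(1) stand-in series below replaces the instrumental and reconstructed records used in the paper:

```python
# Illustrative envelope of unforced variability: the percentile range of
# N-year linear trends across a long "unforced" series (AR(1) stand-in).
import numpy as np

def trend_envelope(series, window_years, pctiles=(2.5, 97.5)):
    """Percentile bounds of linear trends over all windows of given length."""
    t = np.arange(window_years, dtype=float)
    trends = [np.polyfit(t, series[i:i + window_years], 1)[0]
              for i in range(len(series) - window_years + 1)]
    return np.percentile(trends, pctiles)

rng = np.random.default_rng(3)
noise = np.zeros(1000)
for i in range(1, 1000):                       # AR(1) pseudo-GMT anomalies
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
print(trend_envelope(noise, window_years=15))  # deg/yr bounds of unforced trends
```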
Brown, Patrick T.; Li, Wenhong; Cordero, Eugene C.; Mauget, Steven A.
2015-01-01
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much public and scientific attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible unforced states of the climate system (the Envelope of Unforced Noise; EUN). Typically, the EUN is derived from climate models themselves, but climate models might not accurately simulate the correct characteristics of unforced GMT variability. Here, we simulate a new, empirical, EUN that is based on instrumental and reconstructed surface temperature records. We compare the forced GMT signal produced by climate models to observations while noting the range of GMT values provided by the empirical EUN. We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal. The empirical EUN also indicates that the reduced GMT warming over the past decade or so is still consistent with a middle emission scenario's forced signal, but is likely inconsistent with the steepest emission scenario's forced signal. PMID:25898351
Arefin, Md Shamsul
2012-01-01
This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plot. PMID:28348319
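A hedged sketch of the RBM-based candidate-matching step: the calibration d = 248/ω_RBM (nm, cm⁻¹) is one commonly used empirical relation and the tolerance is arbitrary, whereas the paper's method additionally solves empirical equations in the optical transition energies to pick a unique (n, m):

```python
# Enumerate semiconducting (n, m) candidates whose diameter matches an
# RBM frequency. d = 248/omega is an assumed common calibration.
import math

def diameter_nm(n, m, a_cc=0.142):
    """SWNT diameter from chiral indices (a_cc = C-C bond length, nm)."""
    return a_cc * math.sqrt(3.0 * (n * n + n * m + m * m)) / math.pi

def candidates(omega_rbm, tol=0.03):
    d_target = 248.0 / omega_rbm
    out = []
    for n in range(5, 30):
        for m in range(0, n + 1):
            if (n - m) % 3 == 0:       # skip metallic/semimetallic tubes
                continue
            if abs(diameter_nm(n, m) - d_target) < tol:
                out.append((n, m))
    return out

print(candidates(omega_rbm=248.0))  # semiconducting tubes with d near 1.00 nm
```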
Empirical flow parameters : a tool for hydraulic model validity
Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.
2013-01-01
The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
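The ancillary quantities named in objective (3) are standard hydraulic definitions; a minimal sketch assuming SI units:

```python
# Standard hydraulic quantities used as sanity checks on model output.
import math

def froude_number(v_mps, depth_m, g=9.81):
    """Fr = V / sqrt(g*D); < 1 subcritical, > 1 supercritical flow."""
    return v_mps / math.sqrt(g * depth_m)

def stream_power_w_per_m(discharge_m3s, slope, rho=1000.0, g=9.81):
    """Cross-section stream power per unit channel length (W/m)."""
    return rho * g * discharge_m3s * slope

print(froude_number(1.5, 2.0))            # ~0.34, subcritical
print(stream_power_w_per_m(50.0, 0.002))  # ~981 W/m
```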
Developing the next generation of forest ecosystem models
Christopher R. Schwalm; Alan R. Ek
2002-01-01
Forest ecology and management are model-rich areas for research. Models are often cast as either empirical or mechanistic. With evolving climate change, hybrid models gain new relevance because of their ability to integrate existing mechanistic knowledge with empiricism based on causal thinking. The utility of hybrid platforms results in the combination of...
Sensory Impairments and Autism: A Re-Examination of Causal Modelling
ERIC Educational Resources Information Center
Gerrard, Sue; Rugg, Gordon
2009-01-01
Sensory impairments are widely reported in autism, but remain largely unexplained by existing models. This article examines Kanner's causal reasoning and identifies unsupported assumptions implicit in later empirical work. Our analysis supports a heterogeneous causal model for autistic characteristics. We propose that the development of a…
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
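A rough illustration of the setup: network parameters estimated by nonnegativity-constrained least squares, with naive unconstrained-theory standard errors attached. The paper's theoretical and empirical standard errors for constrained estimates are more careful than this sketch:

```python
# Constrained (nonnegative) least squares for edge weights, plus naive
# standard errors from the unconstrained OLS formula. Toy design matrix.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (60, 4))              # feature incidence design
beta_true = np.array([0.8, 0.0, 1.2, 0.4])  # nonnegative edge weights
y = X @ beta_true + rng.normal(0, 0.1, 60)

beta_hat, _ = nnls(X, y)                    # positivity-restricted fit
resid = y - X @ beta_hat
sigma2 = resid @ resid / (60 - 4)
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print(np.round(beta_hat, 2), np.round(se, 2))
```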
Scaffolding in Complex Modelling Situations
ERIC Educational Resources Information Center
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
NASA Technical Reports Server (NTRS)
Campbell, J. W. (Editor)
1981-01-01
The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozone can be detected by existing data sources; and (2) the empirical evidence available for predicting depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data affect trend detectability are discussed.
Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.
2013-01-01
The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.
USDA-ARS?s Scientific Manuscript database
The comparison of observed global mean surface air temperature (GMT) change to the mean change simulated by climate models has received much attention. For a given global warming signal produced by a climate model ensemble, there exists an envelope of GMT values representing the range of possible un...
2016-06-01
...site customization of existing models. The author performed an empirical study centered around a survey of United States Marine Corps (USMC) and United States Navy (USN) personnel... and recommends that more studies be performed to determine the best way forward for AM within the USMC and USN. Subject terms: 3D printing, additive manufacturing...
Application of natural analog studies to exploration for ore deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, D.L.
1995-09-01
Natural analogs are viewed as similarities in nature and are routinely utilized by exploration geologists in their search for economic mineral deposits. Ore deposit modeling is undertaken by geologists to direct their exploration activities toward favorable geologic environments and, therefore, successful programs. Two types of modeling are presented: (i) empirical model development based on the study of known ore deposit characteristics, and (ii) concept model development based on theoretical considerations and field observations that suggest a new deposit type, not known to exist in nature, may exist and justifies an exploration program. Key elements that are important in empirical model development are described, and examples of successful applications of these natural analogs to exploration are presented. A classical example of successful concept model development, the discovery of the McLaughlin gold mine in California, is presented. The utilization of natural analogs is an important facet of mineral exploration. Natural analogs guide explorationists in their search for new discoveries, increase the probability of success, and may decrease overall exploration expenditure.
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. W.; Hood, Raleigh R.; Long, Wen
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic-empirical approach, whereby real-time output from the coupled physical-biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic-empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
ERIC Educational Resources Information Center
Paton, David
2006-01-01
Rational choice models of teenage sexual behaviour lead to radically different predictions than do models that assume such behaviour is random. Existing empirical evidence has not been able to distinguish conclusively between these competing models. I use regional data from England between 1998 and 2001 to examine the impact of recent increases in…
Development of a rotor wake-vortex model, volume 1
NASA Technical Reports Server (NTRS)
Majjigi, R. K.; Gliebe, P. R.
1984-01-01
Certain empirical rotor wake and turbulence relationships were developed using existing low-speed rotor wake data. A tip vortex model was developed by replacing the annulus wall with a row of image vortices. An axisymmetric turbulence spectrum model, developed in the context of rotor inflow turbulence, was adapted to predict the turbulence spectrum of the stator gust upwash.
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Data Retention and Anonymity Services
NASA Astrophysics Data System (ADS)
Berthold, Stefan; Böhme, Rainer; Köpsell, Stefan
The recently introduced legislation on data retention to aid prosecuting cyber-related crime in Europe also affects the achievable security of systems for anonymous communication on the Internet. We argue that data retention requires a review of existing security evaluations against a new class of realistic adversary models. In particular, we present theoretical results and first empirical evidence for intersection attacks by law enforcement authorities. The reference architecture for our study is the anonymity service AN.ON, from which we also collect empirical data. Our adversary model reflects an interpretation of the current implementation of the EC Directive on Data Retention in Germany.
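A toy version of the intersection attack discussed: with retained connection records, the adversary intersects the sets of users active during each observed action of the target pseudonym, shrinking the anonymity set with every observation:

```python
# Toy intersection attack against an anonymity service: candidate senders
# are users present in every retention window tied to the target's activity.
def intersection_attack(observations: list[set[str]]) -> set[str]:
    """Intersect the per-observation sets of online users."""
    suspects = set(observations[0])
    for online_users in observations[1:]:
        suspects &= online_users
    return suspects

logs = [{"alice", "bob", "carol"}, {"alice", "carol"}, {"alice", "dave"}]
print(intersection_attack(logs))  # {'alice'} -- anonymity set collapses
```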
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
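A hedged sketch of such a three-compartment model with a participation-rate control u; the specific flows and all rates below are assumptions for illustration, not the paper's system:

```python
# Illustrative weight-dynamics ODE: normal (N), overweight (S), obese (O),
# with an intervention moving treated individuals back down at rate r*u.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, u, a=0.05, b=0.03, r=0.4):
    N, S, O = y                       # population fractions, N + S + O = 1
    dN = -a * N + r * u * S           # weight gain out, treated recovery in
    dS = a * N - b * S - r * u * S + r * u * O
    dO = b * S - r * u * O
    return [dN, dS, dO]

u = 0.2  # constant participation rate in the intervention
sol = solve_ivp(rhs, (0.0, 50.0), [0.5, 0.35, 0.15], args=(u,))
print(np.round(sol.y[:, -1], 3))      # long-run weight distribution
```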
Hulvershorn, Leslie A; Quinn, Patrick D; Scott, Eric L
2015-01-01
The past several decades have seen dramatic growth in empirically supported treatments for adolescent substance use disorders (SUDs), yet even the most well-established approaches struggle to produce large or long-lasting improvements. These difficulties may stem, in part, from the high rates of comorbidity between SUDs and other psychiatric disorders. We critically reviewed the treatment outcome literature for adolescents with co-occurring SUDs and internalizing disorders. Our review identified components of existing treatments that might be included in an integrated, evidence-based approach to the treatment of SUDs and internalizing disorders. An effective program may involve careful assessment, inclusion of parents or guardians, and tailoring of interventions via a modular strategy. The existing literature guides the development of a conceptual evidence-based, modular treatment model targeting adolescents with co-occurring internalizing and SUDs. With empirical study, such a model may better address treatment outcomes for both disorder types in adolescents.
Hulvershorn, Leslie A.; Quinn, Patrick D.; Scott, Eric L.
2016-01-01
Background The past several decades have seen dramatic growth in empirically supported treatments for adolescent substance use disorders (SUDs), yet even the most well-established approaches struggle to produce large or long-lasting improvements. These difficulties may stem, in part, from the high rates of comorbidity between SUDs and other psychiatric disorders. Method We critically reviewed the treatment outcome literature for adolescents with co-occurring SUDs and internalizing disorders. Results Our review identified components of existing treatments that might be included in an integrated, evidence-based approach to the treatment of SUDs and internalizing disorders. An effective program may involve careful assessment, inclusion of parents or guardians, and tailoring of interventions via a modular strategy. Conclusions The existing literature guides the development of a conceptual evidence-based, modular treatment model targeting adolescents with co-occurring internalizing and SUDs. With empirical study, such a model may better address treatment outcomes for both disorder types in adolescents. PMID:25973718
A methodology for selecting optimum organizations for space communities
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1978-01-01
This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.
Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.
Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; Greca, Annette M La; Silverman, Wendy K; Prinstein, Mitchell J
2005-12-01
This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed.
Multistate modelling extended by behavioural rules: An application to migration.
Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias
2017-10-01
We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available. Parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while sticking to the general framework of multistate modelling. Our simulations show that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.
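A toy sketch of one behavioural-rule step in the spirit of the Theory of Planned Behaviour: a baseline migration hazard scaled by an intention score built from attitude, norms, and perceived control. Weights and rates are invented, not the calibrated Senegal-France parameters:

```python
# Toy microsimulation step: baseline hazard modulated by a TPB-style
# intention score. All weights and rates are illustrative placeholders.
import random

def intention(attitude, norms, control, w=(0.4, 0.3, 0.3)):
    """Weighted intention score in [0, 1] from the three TPB components."""
    return w[0] * attitude + w[1] * norms + w[2] * control

def migrates_this_year(base_rate, attitude, norms, control):
    rate = base_rate * intention(attitude, norms, control)
    return random.random() < rate

random.seed(0)
moves = sum(migrates_this_year(0.05, 0.8, 0.6, 0.7) for _ in range(10_000))
print(moves / 10_000)  # ~0.036 = 0.05 * intention(0.8, 0.6, 0.7)
```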
Near transferable phenomenological n-body potentials for noble metals
NASA Astrophysics Data System (ADS)
Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David
2017-09-01
We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. All together the results suggest that this semi-empirical model is nearly transferable.
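A sketch of the second-moment-approximation (Gupta-type) band term plus a short-range pairwise repulsion, showing the general shape of such n-body potentials; the parameters are generic placeholders rather than the fitted noble-metal values, and this sketch omits the model's long-range pair term and its kinetic-energy-based repulsion:

```python
# Generic second-moment-approximation (SMA) energy:
# E_i = sum_j A*exp(-p(rij/r0 - 1)) - sqrt( sum_j xi^2*exp(-2q(rij/r0 - 1)) ).
# Placeholder parameters; not the potentials fitted in this work.
import numpy as np

def sma_energy(positions, A=0.10, xi=1.3, p=10.0, q=3.0, r0=2.88):
    E = 0.0
    n = len(positions)
    for i in range(n):
        rep, band = 0.0, 0.0
        for j in range(n):
            if i == j:
                continue
            rij = np.linalg.norm(positions[i] - positions[j])
            rep += A * np.exp(-p * (rij / r0 - 1.0))        # pair repulsion
            band += xi**2 * np.exp(-2.0 * q * (rij / r0 - 1.0))
        E += rep - np.sqrt(band)                            # n-body band term
    return E

dimer = np.array([[0.0, 0.0, 0.0], [2.88, 0.0, 0.0]])       # Angstroms
print(sma_energy(dimer))  # eV, for the placeholder parameter set
```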
Near transferable phenomenological n-body potentials for noble metals.
Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David
2017-09-06
We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts quite accurately thermodynamic properties in noble metals. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. All together the results suggest that this semi-empirical model is nearly transferable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathaye, Jayant A.
2000-04-01
Integrated assessment (IA) modeling of climate policy is increasingly global in nature, with models incorporating regional disaggregation. The existing empirical basis for IA modeling, however, largely arises from research on industrialized economies. Given the growing importance of developing countries in determining long-term global energy and carbon emissions trends, filling this gap with improved statistical information on developing countries' energy and carbon-emissions characteristics is an important priority for enhancing IA modeling. Earlier research at LBNL on this topic has focused on assembling and analyzing statistical data on productivity trends and technological change in the energy-intensive manufacturing sectors of five developing countries: India, Brazil, Mexico, Indonesia, and South Korea. The proposed work will extend this analysis to the agriculture and electric power sectors in India, South Korea, and two other developing countries. They will also examine the impact of alternative model specifications on estimates of productivity growth and technological change for each of the three sectors, and estimate the contribution of various capital inputs--imported vs. indigenous, rigid vs. malleable--to productivity growth and technological change. The project has already produced a data resource on the manufacturing sector which is being shared with IA modelers. This will be extended to the agriculture and electric power sectors, which would also be made accessible to IA modeling groups seeking to enhance the empirical descriptions of developing country characteristics. The project will entail basic statistical and econometric analysis of productivity and energy trends in these developing country sectors, with parameter estimates also made available to modeling groups. The parameter estimates will be developed using alternative model specifications that could be directly utilized by the existing IAMs for the manufacturing, agriculture, and electric power sectors.
Whither Causal Models in the Neuroscience of ADHD?
ERIC Educational Resources Information Center
Coghill, Dave; Nigg, Joel; Rothenberger, Aribert; Sonuga-Barke, Edmund; Tannock, Rosemary
2005-01-01
In this paper we examine the current status of the science of ADHD from a theoretical point of view. While the field has reached the point at which a number of causal models have been proposed, it remains some distance away from demonstrating the viability of such models empirically. We identify a number of existing barriers and make proposals as…
Geoffrey J. Cary; Robert E. Keane; Robert H. Gardner; Sandra Lavorel; Mike D. Flannigan; Ian D. Davies; Chao Li; James M. Lenihan; T. Scott Rupp; Florent Mouillot
2006-01-01
The relative importance of variables in determining area burned is an important management consideration although gaining insights from existing empirical data has proven difficult. The purpose of this study was to compare the sensitivity of modeled area burned to environmental factors across a range of independently-developed landscape-fire-succession models. The...
Modeling and prediction of ionospheric scintillation
NASA Technical Reports Server (NTRS)
Fremouw, E. J.
1974-01-01
Scintillation modeling performed thus far is based on the theory of diffraction by a weakly modulating phase screen developed by Briggs and Parkin (1963). Shortcomings of the existing empirical model for the scintillation index are discussed together with questions of channel modeling, giving attention to the needs of the communication engineers. It is pointed out that much improved scintillation index models may be available in a matter of a year or so.
Frequency-dependent selection predicts patterns of radiations and biodiversity.
Melián, Carlos J; Alonso, David; Vázquez, Diego P; Regetz, James; Allesina, Stefano
2010-08-26
Most empirical studies support a decline in speciation rates through time, although evidence for constant speciation rates also exists. Declining rates have been explained by invoking pre-existing niches, whereas constant rates have been attributed to non-adaptive processes such as sexual selection and mutation. Trends in speciation rate and the processes underlying it remain unclear, representing a critical information gap in understanding patterns of global diversity. Here we show that the temporal trend in the speciation rate can also be explained by frequency-dependent selection. We construct a frequency-dependent and DNA sequence-based model of speciation. We compare our model to empirical diversity patterns observed for cichlid fish and Darwin's finches, two classic systems for which speciation rates and richness data exist. Negative frequency-dependent selection predicts well the declining speciation rate found in cichlid fish and explains their species richness. For groups like Darwin's finches, in which speciation rates are constant and diversity is lower, speciation rate is better explained by a model without frequency-dependent selection. Our analysis shows that differences in diversity may be driven by incipient species abundance with frequency-dependent selection. Our results demonstrate that genetic-distance-based speciation and frequency-dependent selection are sufficient to explain the high diversity observed in natural systems and, importantly, predict decay through time in speciation rate in the absence of pre-existing niches.
Data: The Common Thread & Tie That Binds Exposure Science
While a number of ongoing efforts exist aimed at empirically measuring or modeling exposure data, problems persist regarding availability and access to this data. Innovations in managing proprietary data, establishing data quality, standardization of data sets, and sharing of exi...
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states is reflected in changes in their territories, populations and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
Wilson, Kaitlyn P
2013-01-01
Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language pathologists (SLPs) serving students with ASDs. This tutorial draws from the many reviews and meta-analyses of the video modeling literature that have been conducted over the past decade, presenting empirically supported considerations for school-based SLPs who are planning to incorporate video modeling into their service delivery for students with ASD. The 5 overarching procedural phases presented in this tutorial are (a) preparation, (b) recording of the video model, (c) implementation of the video modeling intervention, (d) monitoring of the student's response to the intervention, and (e) planning of the next steps. Video modeling is not only a promising intervention strategy for students with ASD, but it is also a practical and efficient tool that is well-suited to the school setting. This tutorial will facilitate school-based SLPs' incorporation of this empirically supported intervention into their existing strategies for intervention for students with ASD.
Toward the Development and Validation of a Career Coach Competency Model
ERIC Educational Resources Information Center
Hatala, John-Paul; Hisey, Lee
2011-01-01
The career coaching profession is a dynamic field that has grown over the last decade. However, there exists a limitation to this field's development, as there is no universally accepted definition or empirically based competencies. There were three phases to the study. In the first phase, a conceptual model was developed that highlights four…
ERIC Educational Resources Information Center
Moustafa, Ahmed A.; Gilbertson, Mark W.; Orr, Scott P.; Herzallah, Mohammad M.; Servatius, Richard J.; Myers, Catherine E.
2013-01-01
Empirical research has shown that the amygdala, hippocampus, and ventromedial prefrontal cortex (vmPFC) are involved in fear conditioning. However, the functional contribution of each brain area and the nature of their interactions are not clearly understood. Here, we extend existing neural network models of the functional roles of the hippocampus…
The Dubious Benefits of Multi-Level Modeling
ERIC Educational Resources Information Center
Gorard, Stephen
2007-01-01
This paper presents an argument against the wider adoption of complex forms of data analysis, using multi-level modeling (MLM) as an extended case study. MLM was devised to overcome some deficiencies in existing datasets, such as the bias caused by clustering. The paper suggests that MLM has an unclear theoretical and empirical basis, has not led…
Ćulibrk, Jelena; Delić, Milan; Mitrović, Slavica; Ćulibrk, Dubravko
2018-01-01
We conducted an empirical study aimed at identifying and quantifying the relationship between work characteristics, organizational commitment, job satisfaction, job involvement and organizational policies and procedures in the transition economy of Serbia, South Eastern Europe. The study, which included 566 persons, employed by 8 companies, revealed that existing models of work motivation need to be adapted to fit the empirical data, resulting in a revised research model elaborated in the paper. In the proposed model, job involvement partially mediates the effect of job satisfaction on organizational commitment. Job satisfaction in Serbia is affected by work characteristics but, contrary to many studies conducted in developed economies, organizational policies and procedures do not seem to significantly affect employee satisfaction.
VMT Mix Modeling for Mobile Source Emissions Forecasting: Formulation and Empirical Application
DOT National Transportation Integrated Search
2000-05-01
The purpose of the current report is to propose and implement a methodology for obtaining improved link-specific vehicle miles of travel (VMT) mix values compared to those obtained from existent methods. Specifically, the research is developing a fra...
Malthusian dynamics in a diverging Europe: Northern Italy, 1650-1881.
Fernihough, Alan
2013-02-01
Recent empirical research questions the validity of using Malthusian theory in preindustrial England. Using real wage and vital rate data for the years 1650-1881, I provide empirical estimates for a different region: Northern Italy. The empirical methodology is theoretically underpinned by a simple Malthusian model, in which population, real wages, and vital rates are determined endogenously. My findings strongly support the existence of a Malthusian economy wherein population growth decreased living standards, which in turn influenced vital rates. However, these results also demonstrate how the system is best characterized as one of weak homeostasis. Furthermore, there is no evidence of Boserupian effects given that increases in population failed to spur any sustained technological progress.
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries.
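The parameter-free baseline that the paper extends can be sketched directly from its closed form; the scaling parameter α introduced by the authors is not implemented here:

```python
import numpy as np

def radiation_flows(pop, outflux, s):
    """Parameter-free radiation model: expected trips T[i, j] given origin
    and destination populations and s[i, j], the population within a circle
    of radius d_ij around i (excluding origin and destination)."""
    m, n = pop[:, None], pop[None, :]
    T = outflux[:, None] * (m * n) / ((m + s) * (m + n + s))
    np.fill_diagonal(T, 0.0)
    return T

pop = np.array([1000.0, 500.0, 2000.0])       # toy zone populations
outflux = np.array([300.0, 150.0, 600.0])     # total commuters leaving each zone
s = np.array([[0.0, 500.0, 0.0],
              [1000.0, 0.0, 0.0],
              [0.0, 1500.0, 0.0]])            # toy intervening populations
print(radiation_flows(pop, outflux, s).round(1))
```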
Experimental Evaluation of Equivalent-Fluid Models for Melamine Foam
NASA Technical Reports Server (NTRS)
Allen, Albert R.; Schiller, Noah H.
2016-01-01
Melamine foam is a soft porous material commonly used in noise control applications. Many models exist to represent porous materials at various levels of fidelity. This work focuses on rigid frame equivalent fluid models, which represent the foam as a fluid with a complex speed of sound and density. There are several empirical models available to determine these frequency dependent parameters based on an estimate of the material flow resistivity. Alternatively, these properties can be experimentally educed using an impedance tube setup. Since vibroacoustic models are generally sensitive to these properties, this paper assesses the accuracy of several empirical models relative to impedance tube measurements collected with melamine foam samples. Diffuse field sound absorption measurements collected using large test articles in a laboratory are also compared with absorption predictions determined using model-based and measured foam properties. Melamine foam slabs of various thicknesses are considered.
Zhu, Yenan; Hsieh, Yee-Hsee; Dhingra, Rishi R; Dick, Thomas E; Jacono, Frank J; Galán, Roberto F
2013-02-01
Interactions between oscillators can be investigated with standard tools of time series analysis. However, these methods are insensitive to the directionality of the coupling, i.e., the asymmetry of the interactions. An elegant alternative was proposed by Rosenblum and collaborators [M. G. Rosenblum, L. Cimponeriu, A. Bezerianos, A. Patzak, and R. Mrowka, Phys. Rev. E 65, 041909 (2002); M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. E 64, 045202 (2001)] which consists in fitting the empirical phases to a generic model of two weakly coupled phase oscillators. This allows one to obtain the interaction functions defining the coupling and its directionality. A limitation of this approach is that a solution always exists in the least-squares sense, even in the absence of coupling. To preclude spurious results, we propose a three-step protocol: (1) Determine if a statistical dependency exists in the data by evaluating the mutual information of the phases; (2) if so, compute the interaction functions of the oscillators; and (3) validate the empirical oscillator model by comparing the joint probability of the phases obtained from simulating the model with that of the empirical phases. We apply this protocol to a model of two coupled Stuart-Landau oscillators and show that it reliably detects genuine coupling. We also apply this protocol to investigate cardiorespiratory coupling in anesthetized rats. We observe reciprocal coupling between respiration and heartbeat and that the influence of respiration on the heartbeat is generally much stronger than vice versa. In addition, we find that the vagus nerve mediates coupling in both directions.
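Step 1 of the protocol is straightforward to sketch with a histogram estimator of mutual information between the two phase series (bin count and data here are illustrative):

```python
import numpy as np

def phase_mutual_information(phi1, phi2, bins=16):
    """Histogram-based mutual information (nats) between two phase series
    wrapped to [0, 2*pi); values near zero suggest no statistical dependency."""
    p12, _, _ = np.histogram2d(phi1 % (2 * np.pi), phi2 % (2 * np.pi), bins=bins)
    p12 /= p12.sum()
    p1 = p12.sum(axis=1, keepdims=True)
    p2 = p12.sum(axis=0, keepdims=True)
    mask = p12 > 0
    return float(np.sum(p12[mask] * np.log(p12[mask] / (p1 @ p2)[mask])))

rng = np.random.default_rng(0)
t = np.arange(0, 200, 0.05)
phi_a = 2 * np.pi * t + 0.1 * rng.standard_normal(t.size)
phi_b = phi_a + 0.5 + 0.1 * rng.standard_normal(t.size)  # strongly dependent phases
print(f"MI = {phase_mutual_information(phi_a, phi_b):.3f} nats")
```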
NASA Astrophysics Data System (ADS)
West, Damien; West, Bruce J.
2012-07-01
There are a substantial number of empirical relations that began with the identification of a pattern in data; were shown to have a terse power-law description; were interpreted using existing theory; reached the level of "law" and were given a name; only to subsequently fade away when it proved impossible to connect the "law" with a larger body of theory and/or data. Various forms of allometry relations (ARs) have followed this path. The ARs in biology are nearly two hundred years old and those in ecology, geophysics, physiology and other areas of investigation are not that much younger. In general, if X is a measure of the size of a complex host network and Y is a property of a complex subnetwork embedded within the host network, a theoretical AR exists between the two when Y = aX^b. We emphasize that the reductionistic models of AR interpret X and Y as dynamic variables, albeit the ARs themselves are explicitly time independent even though in some cases the parameter values change over time. On the other hand, the phenomenological models of AR are based on the statistical analysis of data and interpret X and Y as averages to yield the empirical AR ⟨Y⟩ = a⟨X⟩^b.
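In practice the empirical AR is fitted on log-log axes; a minimal sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0])               # host-network size
Y = 0.8 * X**0.75 * np.exp(0.05 * rng.standard_normal(X.size))  # noisy power law

# Fit log(Y) = log(a) + b * log(X) by ordinary least squares.
b, log_a = np.polyfit(np.log(X), np.log(Y), 1)
print(f"estimated exponent b = {b:.3f}, prefactor a = {np.exp(log_a):.3f}")
```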
Ge Sun; Peter Caldwell; Asko Noormets; Steven G. McNulty; Erika Cohen; et al.
2011-01-01
We developed a water-centric monthly scale simulation model (WaSSI-C) by integrating empirical water and carbon flux measurements from the FLUXNET network and an existing water supply and demand accounting model (WaSSI). The WaSSI-C model was evaluated with basin-scale evapotranspiration (ET), gross ecosystem productivity (GEP), and net ecosystem exchange (NEE)...
Modeling the risk of water pollution by pesticides from imbalanced data.
Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko
2018-04-30
The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
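A minimal sketch of the cost-sensitive step with scikit-learn, assuming toy feature data in place of the La Jaillière trials:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Toy stand-in for the field-trial data: few "risky" events among many safe ones.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))  # application/soil/weather features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 2.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" is one form of cost-sensitive learning: errors on
# the rare "risky" class are weighted inversely to its frequency.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("balanced accuracy:", round(balanced_accuracy_score(y_te, pred), 3))
print(confusion_matrix(y_te, pred))
```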
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing.
ERIC Educational Resources Information Center
Mittal, Surabhi; Mehar, Mamta
2016-01-01
Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmers use, with the empirical model to analyze the factors that affect the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yunfei; Wood, Eric; Burton, Evan
A shift towards increased levels of driving automation is generally expected to result in improved safety and traffic congestion outcomes. However, little empirical data exists to estimate the impact that automated driving could have on energy consumption and greenhouse gas emissions. In the absence of empirical data on differences between drive cycles from present day vehicles (primarily operated by humans) and future vehicles (partially or fully operated by computers) one approach is to model both situations over identical traffic conditions. Such an exercise requires traffic micro-simulation to not only accurately model vehicle operation under high levels of automation, but also (and potentially more challenging) vehicle operation under present day human drivers. This work seeks to quantify the ability of a commercial traffic micro-simulation program to accurately model real-world drive cycles in vehicles operated primarily by humans in terms of driving speed, acceleration, and simulated fuel economy. Synthetic profiles from models of freeway and arterial facilities near Atlanta, Georgia, are compared to empirical data collected from real-world drivers on the same facilities. Empirical and synthetic drive cycles are then simulated in a powertrain efficiency model to enable comparison on the basis of fuel economy. Synthetic profiles from traffic micro-simulation were found to exhibit low levels of transient behavior relative to the empirical data. Even with these differences, the synthetic and empirical data in this study agree well in terms of driving speed and simulated fuel economy. The differences in transient behavior between simulated and empirical data suggest that larger stochastic contributions in traffic micro-simulation (relative to those present in the traffic micro-simulation tool used in this study) are required to fully capture the arbitrary elements of human driving. Interestingly, the lack of stochastic contributions from models of human drivers in this study did not result in a significant discrepancy between fuel economy simulations based on synthetic and empirical data; a finding with implications on the potential energy efficiency gains of automated vehicle technology.
Nahum-Shani, Inbal; Hekler, Eric B.; Spruijt-Metz, Donna
2016-01-01
Advances in wireless devices and mobile technology offer many opportunities for delivering just-in-time adaptive interventions (JITAIs)--suites of interventions that adapt over time to an individual’s changing status and circumstances with the goal to address the individual’s need for support, whenever this need arises. A major challenge confronting behavioral scientists aiming to develop a JITAI concerns the selection and integration of existing empirical, theoretical and practical evidence into a scientific model that can inform the construction of a JITAI and help identify scientific gaps. The purpose of this paper is to establish a pragmatic framework that can be used to organize existing evidence into a useful model for JITAI construction. This framework involves clarifying the conceptual purpose of a JITAI, namely the provision of just-in-time support via adaptation, as well as describing the components of a JITAI and articulating a list of concrete questions to guide the establishment of a useful model for JITAI construction. The proposed framework includes an organizing scheme for translating the relatively static scientific models underlying many health behavior interventions into a more dynamic model that better incorporates the element of time. This framework will help to guide the next generation of empirical work to support the creation of effective JITAIs.
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants.
Empirical spatial econometric modelling of small scale neighbourhood
NASA Astrophysics Data System (ADS)
Gerkman, Linda
2012-07-01
The aim of the paper is to model small-scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables. Variables capturing the small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application, using new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. The analysis shows that this model structure makes it possible to capture small-scale neighbourhood effects that are known to exist but for which proper measurement variables are lacking.
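A compact way to see the model structure is to simulate its reduced form and compute a LeSage-Pace style impact summary (toy weights and coefficients, not the Helsinki estimates):

```python
import numpy as np

# Spatial Durbin model: y = rho*W*y + X*beta + W*X*theta + eps, with reduced
# form y = (I - rho*W)^(-1) (X*beta + W*X*theta + eps).
rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(size=(n, 2))                   # house locations
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = ((d > 0) & (d < 0.12)).astype(float)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)  # row-standardize
X = rng.normal(size=(n, 2))                         # observed house attributes
beta, theta, rho = np.array([1.0, -0.5]), np.array([0.3, 0.2]), 0.4
eps = rng.normal(scale=0.2, size=n)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + W @ X @ theta + eps)

# Average total impact of attribute k, including spillovers through W.
k = 0
S = np.linalg.solve(np.eye(n) - rho * W, np.eye(n) * beta[k] + W * theta[k])
print(f"average total impact of x{k}: {S.sum() / n:.3f}")
```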
DOT National Transportation Integrated Search
1998-04-01
The study reported here was conducted to assess how well some of the existing asphalt pavement mechanistic-empirical distress prediction models performed when used in conjunction with the data being collected as part of the national Long Term Pavemen...
Towards a feminist empowerment model of forgiveness psychotherapy.
McKay, Kevin M; Hill, Melanie S; Freedman, Suzanne R; Enright, Robert D
2007-03-01
In recent years Enright and Fitzgibbon's (2000) process model of forgiveness therapy has received substantial theoretical and empirical attention. However, both the process model of forgiveness therapy and the social-cognitive developmental model on which it is based have received criticism from feminist theorists. The current paper considers feminist criticisms of forgiveness therapy and uses a feminist lens to identify potential areas for growth. Specifically, Worell and Remer's (2003) model of synthesizing feminist ideals into existing theory was consulted, areas of bias within the forgiveness model of psychotherapy were identified, and strategies for restructuring areas of potential bias were introduced. Further, the authors consider unique aspects of forgiveness therapy that can potentially strengthen existing models of feminist therapy.
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, quasi-random band model, exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.
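The exponential sum fit method mentioned above can be sketched as a non-negative least-squares fit of a transmittance curve to a fixed grid of absorption coefficients (toy curve, arbitrary units):

```python
import numpy as np
from scipy.optimize import nnls

# Approximate tau(u) ~= sum_i w_i * exp(-k_i * u) with weights w_i >= 0.
u = np.linspace(0.0, 10.0, 200)       # absorber amount
tau = (1.0 + u) ** -0.7               # toy "measured" band transmittance
k_grid = np.logspace(-2, 1, 12)       # fixed absorption-coefficient grid
A = np.exp(-np.outer(u, k_grid))      # design matrix of exponentials
w, resid = nnls(A, tau)
print("fit residual norm:", round(resid, 4))
print("reconstructed tau at u=5:", round(float(A[u.searchsorted(5.0)] @ w), 4))
```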
Effects of Inventory Bias on Landslide Susceptibility Calculations
NASA Technical Reports Server (NTRS)
Stanley, T. A.; Kirschbaum, D. B.
2017-01-01
Many landslide inventories are known to be biased, especially inventories for large regions such as Oregon's SLIDO or NASA's Global Landslide Catalog. These biases must affect the results of empirically derived susceptibility models to some degree. We evaluated the strength of the susceptibility model distortion from postulated biases by truncating an unbiased inventory. We generated a synthetic inventory from an existing landslide susceptibility map of Oregon, then removed landslides from this inventory to simulate the effects of reporting biases likely to affect inventories in this region, namely population and infrastructure effects. Logistic regression models were fitted to the modified inventories. Then the process of biasing a susceptibility model was repeated with SLIDO data. We evaluated each susceptibility model with qualitative and quantitative methods. Results suggest that the effects of landslide inventory bias on empirical models should not be ignored, even if those models are, in some cases, useful. We suggest fitting models in well-documented areas and extrapolating across the study region as a possible approach to modeling landslide susceptibility with heavily biased inventories.
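The truncation experiment is easy to mimic: fit the same logistic regression to an unbiased synthetic inventory and to one censored by a reporting process, and compare coefficients (all numbers illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
slope = rng.uniform(0, 45, n)                         # terrain slope, degrees
p_true = 1.0 / (1.0 + np.exp(-(0.15 * slope - 4.0)))  # "true" susceptibility
landslide = rng.uniform(size=n) < p_true              # unbiased synthetic inventory

# Reporting bias: events in steep, sparsely populated terrain go unreported.
p_report = np.clip(1.2 - slope / 45.0, 0.0, 1.0)
reported = landslide & (rng.uniform(size=n) < p_report)

for label, y in [("unbiased", landslide), ("biased", reported)]:
    coef = LogisticRegression().fit(slope.reshape(-1, 1), y).coef_[0, 0]
    print(f"{label} inventory: fitted slope coefficient = {coef:.3f}")
```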
The analysis of magnesium oxide hydration in three-phase reaction system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Xiaojia; Guo, Lin; Chen, Chen
In order to investigate the magnesium oxide hydration process in a gas–liquid–solid (three-phase) reaction system, magnesium hydroxide was prepared by magnesium oxide hydration in liquid–solid (two-phase) and three-phase reaction systems. A semi-empirical model and the classical shrinking core model were used to fit the experimental data. The fitting result shows that both models describe the hydration process of the three-phase system well, while only the semi-empirical model is suitable for the hydration process of the two-phase system. The hydration product was characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM). The XRD and SEM results show that the hydration process in the two-phase system follows the common dissolution/precipitation mechanism, while in the three-phase system the hydration process undergoes MgO dissolution, Mg(OH)2 precipitation, and Mg(OH)2 peeling off from the MgO particle, leaving behind a fresh MgO surface. - Graphical abstract: A peeling-off process exists in the gas–liquid–solid (three-phase) MgO hydration system. - Highlights: • Magnesium oxide hydration in a gas–liquid–solid system was investigated. • The experimental data in the three-phase system could be fitted well by both models. • The morphology analysis suggested the existence of a peeling-off process.
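The shrinking-core fit reduces to one-parameter curve fitting; a sketch under surface-reaction control with toy conversion data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical shrinking-core model (surface-reaction control):
#   1 - (1 - X)^(1/3) = k * t   =>   X(t) = 1 - (1 - k*t)^3 for k*t <= 1.
def shrinking_core(t, k):
    return 1.0 - np.clip(1.0 - k * t, 0.0, None) ** 3

t = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])  # minutes
X = np.array([0.0, 0.18, 0.33, 0.57, 0.72, 0.87, 0.95])   # toy hydration conversion
(k_fit,), _ = curve_fit(shrinking_core, t, X, p0=[0.005])
print(f"fitted rate constant k = {k_fit:.4f} 1/min")
```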
Comparing mechanistic and empirical approaches to modeling the thermal niche of almond
NASA Astrophysics Data System (ADS)
Parker, Lauren E.; Abatzoglou, John T.
2017-09-01
Delineating locations that are thermally viable for cultivating high-value crops can help to guide land use planning, agronomics, and water management. Three modeling approaches were used to identify the potential distribution and key thermal constraints on almond cultivation across the southwestern United States (US), including two empirical species distribution models (SDMs)—one using commonly used bioclimatic variables (traditional SDM) and the other using more physiologically relevant climate variables (nontraditional SDM)—and a mechanistic model (MM) developed using published thermal limitations from field studies. While the models showed comparable results over the majority of the domain, including over existing croplands with high almond density, the MM suggested the greatest potential for the geographic expansion of almond cultivation, with frost susceptibility and insufficient heat accumulation being the primary thermal constraints in the southwestern US. The traditional SDM over-predicted almond suitability in locations shown by the MM to be limited by frost, whereas the nontraditional SDM showed greater agreement with the MM in these locations, indicating that incorporating physiologically relevant variables in SDMs can improve predictions. Finally, opportunities for geographic expansion of almond cultivation under current climatic conditions in the region may be limited, suggesting that increasing production may rely on agronomical advances and densifying almond plantations in existing locations.
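A mechanistic screen of this kind boils down to boolean threshold logic on climate grids; the thresholds below are placeholders, not the published limits:

```python
import numpy as np

# Toy 2x2 climate grid: chill accumulation, growing degree days, bloom frost days.
chill_hours = np.array([[900, 400], [1200, 700]])
gdd = np.array([[2600, 1800], [3100, 2400]])
frost_days = np.array([[1, 9], [0, 4]])

# A cell is thermally suitable if chill and heat accumulation are adequate
# and spring frost risk is low (illustrative thresholds).
suitable = (chill_hours >= 500) & (gdd >= 2200) & (frost_days <= 3)
print(suitable)
```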
Existing pavement input information for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this study is to systematically evaluate the Iowa Department of Transportation's (DOT's) existing Pavement Management Information System (PMIS) with respect to the input information required for Mechanistic-Empirical Pavement Des...
Multiple Roles: The Conflicted Realities of Community College Mission Statements
ERIC Educational Resources Information Center
Mrozinski, Mark D.
2010-01-01
Questions of efficacy have always plagued the use of mission statement as a strategic planning tool. In most planning models, the mission statement serves to clarify goals and guide the formation of strategies. However, little empirical evidence exists validating that mission statements actually improve the performance of organizations, even…
ERIC Educational Resources Information Center
Pavicic, Jurica; Alfirevic, Niksa; Mihanovic, Zoran
2009-01-01
In this paper, market orientation in Croatian higher education (HE) is discussed within the context of stakeholder-oriented management. Drawing on existing studies, the "classical" empirical model, describing the market orientation of generic nonprofit organisations, has been adapted to the contingencies of the Croatian HE sector.…
ERIC Educational Resources Information Center
Lillis, Deirdre
2012-01-01
Higher education institutions worldwide invest significant resources in their quality assurance systems. Little empirical evidence exists that demonstrates the effectiveness (or otherwise) of these systems. Methodological approaches for determining effectiveness are also underdeveloped. Self-study-with-peer-review is a widely used model for…
ERIC Educational Resources Information Center
Odegard-Koester, Melissa A.; Watkins, Paul
2016-01-01
The working relationship between principals and school counselors have received some attention in the literature, however, little empirical research exists that examines specifically the components that facilitate a collaborative working relationship between the principal and school counselor. This qualitative case study examined the unique…
ERIC Educational Resources Information Center
Tyndorf, Darryl; Glass, Chris R.
2016-01-01
Numerous microeconomic studies demonstrate the significant individual returns to tertiary education; however, little empirical evidence exists regarding the effects of higher education massification and diversification agendas on long-term macroeconomic growth. The researchers used the Uzawa-Lucas endogenous growth model to tertiary education…
Recent progress in empirical modeling of ion composition in the topside ionosphere
NASA Astrophysics Data System (ADS)
Truhlik, Vladimir; Triskova, Ludmila; Bilitza, Dieter; Kotov, Dmytro; Bogomaz, Oleksandr; Domnin, Igor
2016-07-01
The last deep and prolonged solar minimum revealed shortcomings of existing empirical models, especially of parameter models that depend strongly on solar activity, such as the IRI (International Reference Ionosphere) ion composition model, and that are based on data sets from previous solar cycles. We have improved the TTS-03 ion composition model (Triskova et al., 2003), which has been included in IRI since version 2007. The new model, called AEIKion-13, employs an improved description of the dependence of ion composition on solar activity. We have also developed new global models of the upper transition height based on large data sets of vertical electron density profiles from ISIS, Alouette and COSMIC. The upper transition height is used as an anchor point for adjustment of the AEIKion-13 ion composition model. Additionally, we also show progress on improving the altitudinal dependence of the ion composition in the AEIKion-13 model. Results of the improved model are compared with data from other types of measurements, including data from the Atmosphere Explorer C and E and C/NOFS satellites, and the Kharkiv and Arecibo incoherent scatter radars. Possible real-time updating of the model with the upper transition height from real-time COSMIC vertical profiles is discussed. Reference: Triskova, L., Truhlik, V., Smilauer, J., 2003. An empirical model of ion composition in the outer ionosphere. Adv. Space Res. 31(3), 653-663.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
In this paper, nonlinear model reduction for power systems is performed by balancing the empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those obtained by directly simulating the whole system or by partitioning the system without performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
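A stripped-down sketch of the two ingredients, empirical covariances from nonlinear simulations and square-root balancing, on a toy two-state system (the scheme here is one-sided and far simpler than the paper's):

```python
import numpy as np

def empirical_controllability_cov(f, x0, inputs, dt, T):
    """Integrate the nonlinear system under input perturbations around the
    operating point x0 and average the state-deviation outer products
    (forward Euler; a simplified one-sided scheme)."""
    Wc = np.zeros((x0.size, x0.size))
    for u in inputs:
        x = x0.copy()
        for _ in range(int(T / dt)):
            x = x + dt * f(x, u)
            dx = x - x0
            Wc += np.outer(dx, dx) * dt
    return Wc / len(inputs)

def balancing_transform(Wc, Wo):
    """Square-root balancing: directions of T with small singular values s
    contribute little to input-output behaviour and can be truncated."""
    Lc, Lo = np.linalg.cholesky(Wc), np.linalg.cholesky(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)
    return Lc @ Vt.T @ np.diag(s ** -0.5), s

# Toy 2-state nonlinear "external system" (placeholder dynamics).
f = lambda x, u: np.array([-x[0] + x[1] ** 2 + u[0], -2.0 * x[1] + 0.5 * u[0]])
Wc = empirical_controllability_cov(f, np.zeros(2),
                                   [np.array([0.1]), np.array([-0.1])], 0.01, 5.0)
Wo = 0.5 * np.eye(2)  # observability covariance, sketched analogously
T, hankel = balancing_transform(Wc, Wo)
print("approximate Hankel singular values:", hankel.round(4))
```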
NASA Technical Reports Server (NTRS)
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike;
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with that of the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long term, such an approach will further aid algorithm development for ocean-colour studies.
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for the analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data, which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt, where the El Niño Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs over the traditionally used empirical orthogonal function decomposition is demonstrated for these data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with other existing ENSO models.
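For context, the traditional empirical orthogonal function (EOF) baseline that the LDMs are compared against is a few lines of linear algebra (toy anomaly field; the LDM algorithm itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
field = rng.normal(size=(240, 500))  # toy SST anomalies: 240 months x 500 cells
anom = field - field.mean(axis=0)    # remove the time mean (climatology)

# EOF decomposition via SVD: spatial patterns (EOFs) and principal components.
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs, eofs = U * s, Vt
explained = s**2 / np.sum(s**2)
print("variance explained by the first 3 EOFs:", explained[:3].round(3))
```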
Hattori, Masasi
2016-12-01
This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition.
NASA Astrophysics Data System (ADS)
Harper, Graham
2017-08-01
Unravelling the poorly understood processes that drive mass loss from red giant stars requires that we empirically constrain the intimately coupled momentum and energy balance. Hubble high spectral resolution observations of wind-scattered line profiles, from neutral and singly ionized species, have provided measures of wind acceleration, turbulence, terminal speeds, and mass-loss rates. These wind properties inform us about the force-momentum balance; however, the spectra have not yielded measures of the much needed wind temperatures, which constrain the energy balance. We proposed to remedy this omission with STIS E140H observations of the Si III 1206 Ang. resonance emission line for two of the best studied red giants: Arcturus (alpha Boo: K2 III) and Aldebaran (alpha Tau: K5 III), both of which have detailed semi-empirical wind velocity models. The relative optical depths of wind-scattered absorption in Si III 1206 Ang., the O I 1303 Ang. triplet, C II 1335 Ang., and existing Mg II h & k and Fe II profiles give the wind temperatures through the thermally controlled ionization balance. The new temperature constraints will be used to test existing semi-empirical models by comparison with multi-frequency JVLA radio fluxes, and also to constrain the flux-tube geometry and wave energy spectrum of magnetic wave-driven winds.
Warm glow, free-riding and vehicle neutrality in a health-related contingent valuation study.
Hackl, Franz; Pruckner, Gerald J
2005-03-01
Criticism of contingent valuation (CV) stresses warm glow and free-riding as possible causes for biased willingness to pay figures. We present an empirical framework to study the existence of warm glow and free-riding in hypothetical WTP answers based on a CV survey for the measurement of health-related Red Cross services. Both in conventional double-bounded and spike models we do not find indication of warm glow phenomena and free-riding behaviour. The results are very robust and insensitive to the applied payment vehicles. Theoretical objections against CV do not find sufficient empirical support.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
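For the scalar mean, the empirical likelihood computation reduces to a one-dimensional root-finding problem; a sketch of that basic machinery (not the paper's combined estimating-equation estimator):

```python
import numpy as np
from scipy.optimize import brentq

def log_el_ratio(x, mu):
    """Empirical log-likelihood ratio for a candidate mean mu: solve the
    Lagrange condition sum d_i / (1 + lam * d_i) = 0 with d_i = x_i - mu,
    then return -sum log(1 + lam * d_i). Maximized (= 0) at the sample mean;
    -2 times the ratio is asymptotically chi-squared with 1 df."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return -np.inf  # mu lies outside the convex hull of the data
    eps = 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)),
                 -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    return -np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, size=100)
for mu in (1.7, 2.0, 2.3):
    print(f"mu = {mu}: log EL ratio = {log_el_ratio(x, mu):.3f}")
```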
An Investigation of the Electrical Short Circuit Characteristics of Tin Whiskers
NASA Technical Reports Server (NTRS)
Courey, Karim J.
2008-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
NASA Astrophysics Data System (ADS)
Xu, Qiang; Ding, Shuai; An, Jingwen
2017-12-01
This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and seeks to identify its trend in order to improve the quality of the region's economic development. Based on the Malmquist index and a window analysis model, this paper empirically estimates total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data from 1991 to 2014, and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the Beijing-Tianjin-Hebei region increased from 1991 to 2014, mainly owing to advances in energy technology and innovation, and that obvious regional differences in energy efficiency exist. Over the 24-year window period, the regional differences in energy efficiency in the Beijing-Tianjin-Hebei region shrank. A significant convergence in energy efficiency has occurred since 2000, driven mainly by the diffusion and spillover of energy technologies.
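As a rough illustration of the machinery behind such an analysis, the sketch below computes input-oriented, constant-returns DEA efficiency scores by linear programming and combines own- and cross-period scores into a Malmquist index. The panel data, input/output choices, and constants are illustrative assumptions, not the paper's.

```python
# Building block of a Malmquist index: input-oriented CRS DEA efficiency
# solved as a linear program with scipy.
import numpy as np
from scipy.optimize import linprog

def dea_score(X_ref, Y_ref, x0, y0):
    """Farrell efficiency of unit (x0, y0) against the frontier spanned by
    X_ref (n_units x n_inputs) and Y_ref (n_units x n_outputs)."""
    n, m = X_ref.shape
    s = Y_ref.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.c_[-x0.reshape(-1, 1), X_ref.T]    # sum(lam*x) <= theta * x0
    A_out = np.c_[np.zeros((s, 1)), -Y_ref.T]    # sum(lam*y) >= y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -y0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Hypothetical two-period panel: inputs (energy, capital), one output (GDP).
X_t = np.array([[10.0, 5.0], [8.0, 6.0], [12.0, 4.0]])
Y_t = np.array([[20.0], [18.0], [22.0]])
X_t1, Y_t1 = X_t * 0.95, Y_t * 1.05              # less input, more output

for j in range(len(X_t)):
    d_tt   = dea_score(X_t,  Y_t,  X_t[j],  Y_t[j])
    d_t1t1 = dea_score(X_t1, Y_t1, X_t1[j], Y_t1[j])
    d_tt1  = dea_score(X_t,  Y_t,  X_t1[j], Y_t1[j])  # cross-period scores
    d_t1t  = dea_score(X_t1, Y_t1, X_t[j],  Y_t[j])
    # Shephard-distance form: values above 1 indicate productivity growth.
    M = np.sqrt((d_tt / d_tt1) * (d_t1t / d_t1t1))
    print(f"unit {j}: Malmquist index = {M:.3f}")
```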
Weekly Cycles in Daily Report Data: An Overlooked Issue.
Liu, Yu; West, Stephen G
2016-10-01
Daily diaries and other everyday experience methods are increasingly used to study relationships between two time-varying variables X and Y. Although daily data often have weekly cyclical patterns (e.g., stress may be higher on weekdays and lower on weekends), the majority of daily diary studies have ignored this possibility. In this study, we investigated the effect of ignoring existing weekly cycles. We reanalyzed an empirical dataset (stress and alcohol consumption) and performed Monte Carlo simulations to investigate the impact of omitting weekly cycles. In the empirical dataset, ignoring cycles led to the inference of a significant within-person X-Y relation, whereas modeling cycles suggested that this relationship did not exist. Simulation results indicated that ignoring cycles that existed in both X and Y led to bias in the estimated within-person X-Y relationship. The amount and direction of bias depended on the magnitude of the cycles, the magnitude of the true within-person X-Y relation, and the synchronization of the cycles. We encourage researchers conducting daily diary studies to address potential weekly cycles in their data. We provide guidelines for detecting and modeling cycles to remove their influence and discuss challenges of causal inference in daily experience studies.
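The simulation logic is easy to reproduce. The sketch below (effect sizes and the stress/alcohol framing are made-up stand-ins, not the paper's design) generates X and Y that share a weekly cycle but have no true within-person relation, then shows the spurious slope that appears when the cycle is omitted.

```python
# Shared weekly cycles in X and Y masquerade as an X-Y relation
# when the cycle terms are omitted from the regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = np.arange(28 * 50)                      # 1400 daily observations
dow = days % 7                                 # day of week, 0..6
cycle = np.where(dow < 5, 1.0, -1.0)           # weekday high, weekend low

x = 0.8 * cycle + rng.normal(size=days.size)   # e.g., daily stress
y = 0.5 * cycle + rng.normal(size=days.size)   # e.g., alcohol; true X->Y = 0

# Model 1: ignore the weekly cycle entirely.
m1 = sm.OLS(y, sm.add_constant(x)).fit()

# Model 2: absorb the cycle with day-of-week indicators.
dummies = np.eye(7)[dow][:, 1:]                # 6 indicators, one day as baseline
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, dummies]))).fit()

print(f"X slope ignoring cycles : {m1.params[1]:+.3f}")   # spuriously nonzero
print(f"X slope modeling cycles : {m2.params[1]:+.3f}")   # near zero
```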
NASA Astrophysics Data System (ADS)
Bertagnolio, Franck; Madsen, Helge Aa.; Fischer, Andreas; Bak, Christian
2018-06-01
In the above-mentioned paper, two model formulae were tuned to fit experimental data of surface pressure spectra measured in various wind tunnels. They correspond to high and low Reynolds number flow scalings, respectively. It turns out that typographical errors exist in both formulae, numbered (9) and (10) in the original paper; the corrected expressions are provided in this erratum.
Essays on pricing electricity and electricity derivatives in deregulated markets
NASA Astrophysics Data System (ADS)
Popova, Julia
2008-10-01
This dissertation is composed of four essays on the behavior of wholesale electricity prices and their derivatives. The first essay provides an empirical model that takes into account the spatial features of a transmission network on the electricity market. The spatial structure of the transmission grid plays a key role in determining electricity prices, but it has not been incorporated into previous empirical models. The econometric model in this essay incorporates a simple representation of the transmission system into a spatial panel data model of electricity prices, and also accounts for the effect of dynamic transmission system constraints on electricity market integration. Empirical results using PJM data confirm the existence of spatial patterns in electricity prices and show that spatial correlation diminishes as transmission lines become more congested. The second essay develops and empirically tests a model of the influence of natural gas storage inventories on the electricity forward premium. I link a model of the effect of gas storage constraints on the higher moments of the distribution of electricity prices to a model of the effect of those moments on the forward premium. Empirical results using PJM data support the model's predictions that gas storage inventories sharply reduce the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. The third essay examines the efficiency of PJM electricity markets. A market is efficient if prices reflect all relevant information, so that prices follow a random walk. The random walk hypothesis is examined using empirical tests, including the Portmanteau, Augmented Dickey-Fuller, KPSS, and multiple variance ratio tests. The results are mixed, though evidence of some level of market efficiency is found. The last essay investigates the possibility that previous researchers have drawn spurious conclusions based on classical unit root tests incorrectly applied to wholesale electricity prices. It is well known that electricity prices exhibit both cyclicity and high volatility that varies through time. Results indicate that heterogeneity in unconditional variance, which is not detected by classical unit root tests, may contribute to the appearance of non-stationarity.
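Of the random-walk tests listed, the variance ratio test is the most self-contained to sketch. A minimal homoskedastic Lo-MacKinlay-style implementation, applied here to a simulated random walk rather than PJM prices, might look like this.

```python
# Variance ratio test of the random-walk hypothesis (simple homoskedastic
# form with overlapping q-period returns). Under a random walk, VR(q) ~ 1.
import numpy as np

def variance_ratio(prices, q):
    r = np.diff(np.log(prices))                  # one-period log returns
    T = r.size
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / T
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    varq = np.sum((rq - q * mu) ** 2) / (T * q)
    vr = varq / var1
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * T))  # asymptotic s.e.
    return vr, (vr - 1) / se                     # VR and z-statistic

rng = np.random.default_rng(2)
prices = np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))  # a true random walk
for q in (2, 4, 8):
    vr, z = variance_ratio(prices, q)
    print(f"q={q}: VR={vr:.3f}, z={z:+.2f}")     # VR near 1, |z| small
```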
NASA Astrophysics Data System (ADS)
Frey, Elaine F.
Even though environmental policy can greatly affect the path of technology diffusion, the economics literature contains limited empirical evidence of this relationship. My research will contribute to the available evidence by providing insight into the technology adoption decisions of electric generating firms. Since policies are often evaluated based on the incentives they provide to promote adoption of new technologies, it is important that policy makers understand the relationship between technological diffusion and regulation structure to make informed decisions. Lessons learned from this study can be used to guide future policies such as those directed to mitigate climate change. I first explore the diffusion of scrubbers, a sulfur dioxide (SO2) abatement technology, in response to federal market-based regulations and state command-and-control regulations. I develop a simple theoretical model to describe the adoption decisions of scrubbers and use a survival model to empirically test the theoretical model. I find that power plants with strict command-and-control regulations have a high probability of installing a scrubber. These findings suggest that although market-based regulations have encouraged diffusion, many scrubbers have been installed because of state regulatory pressure. Although tradable permit systems are thought to give firms more flexibility in choosing abatement technologies, I show that interactions between a permit system and pre-existing command-and-control regulations can limit that flexibility. In a separate analysis, I explore the diffusion of combined cycle (CC) generating units, which are natural gas-fired generating units that are cleaner and more efficient than alternative generating units. I model the decision to consider adoption of a CC generating unit and the extent to which the technology is adopted in response to environmental regulations imposed on new sources of pollutants. To accomplish this, I use a zero-inflated Poisson model and focus on both the decision to adopt a CC unit at an existing power plant as well as the firm-level decision to adopt a CC unit in either a new or an existing power plant. Evidence from this empirical investigation shows that environmental regulation has a significant effect on both the decision to consider adoption as well as the extent of adoption.
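A zero-inflated Poisson model of the kind described can be fit directly with statsmodels. The sketch below uses a fabricated adoption dataset; the covariates, coefficients, and the choice of a logit inflation equation are assumptions for illustration, not the dissertation's specification.

```python
# Zero-inflated Poisson: a logit part for "never considers adopting"
# (structural zeros) plus a Poisson part for the number of units adopted.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 2000
regulated = rng.integers(0, 2, n)             # hypothetical regulation dummy
size = rng.normal(0, 1, n)                     # hypothetical firm size

# Simulate: regulation lowers the structural-zero probability and raises
# the expected adoption count.
p_never = 1 / (1 + np.exp(-(0.5 - 1.0 * regulated)))
lam = np.exp(0.2 + 0.4 * regulated + 0.3 * size)
y = np.where(rng.random(n) < p_never, 0, rng.poisson(lam))

X = sm.add_constant(np.column_stack([regulated, size]))   # count equation
X_infl = sm.add_constant(regulated.astype(float))          # inflation equation
model = ZeroInflatedPoisson(y, X, exog_infl=X_infl, inflation="logit")
res = model.fit(method="bfgs", maxiter=500, disp=False)
print(res.summary())
```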
Ethics in Neuroscience Graduate Training Programs: Views and Models from Canada
ERIC Educational Resources Information Center
Lombera, Sofia; Fine, Alan; Grunau, Ruth E.; Illes, Judy
2010-01-01
Consideration of the ethical, social, and policy implications of research has become increasingly important to scientists and scholars whose work focuses on brain and mind, but limited empirical data exist on the education in ethics available to them. We examined the current landscape of ethics training in neuroscience programs, beginning with the…
ERIC Educational Resources Information Center
Ardoin, Scott P.; Williams, Jessica C.; Christ, Theodore J.; Klubnik, Cynthia; Wellborn, Claire
2010-01-01
Beyond reliability and validity, measures used to model student growth must consist of multiple probes that are equivalent in level of difficulty to establish consistent measurement conditions across time. Although existing evidence supports the reliability of curriculum-based measurement in reading (CBMR), few studies have empirically evaluated…
ERIC Educational Resources Information Center
Sanatullova-Allison, Elvira
2014-01-01
This article reviews some essential theoretical and empirical research literature that discusses the role of memory in second language acquisition and instruction. Two models of literature review--thematic and study-by-study--were used to analyze and synthesize the existing research. First, issues of memory retention in second language acquisition…
ERIC Educational Resources Information Center
Winter, Paul A.
This paper draws on the literatures of educational administration, management, and marketing to address, empirically, two issues related to community college faculty recruitment: (a) factors influencing faculty application decisions, and (b) the utility of an existing model for recruiting community college faculty. It examines factors influencing…
ERIC Educational Resources Information Center
Mattern, Krista D.; Marini, Jessica P.; Shaw, Emily J.
2015-01-01
Throughout the college retention literature, there is a recurring theme that students leave college for a variety of reasons making retention a difficult phenomenon to model. In the current study, cluster analysis techniques were employed to investigate whether multiple empirically based profiles of nonreturning students existed to more fully…
ERIC Educational Resources Information Center
Confer, Jacob Russell
2013-01-01
The symptoms, assessment, and treatments of Post Traumatic Stress Disorder (PTSD) have been empirically investigated to the extent that there is a breadth of valid and reliable instruments investigating this psychopathological syndrome. There, too, exists a substantial evidence base for various treatment models demonstrating effectiveness in…
ERIC Educational Resources Information Center
Baker, Marshall A.; Robinson, J. Shane
2016-01-01
Experiential learning is an important pedagogical approach used in secondary agricultural education. Though anecdotal evidence supports the use of experiential learning, a paucity of empirical research exists supporting the effects of this approach when compared to a more conventional teaching method, such as direct instruction. Therefore, the…
Human Capital Augmentation versus the Signaling Value of MBA Education
ERIC Educational Resources Information Center
Hussey, Andrew
2012-01-01
Panel data on MBA graduates is used in an attempt to empirically distinguish between human capital and signaling models of education. The existence of employment observations prior to MBA enrollment allows for the control of unobserved ability or selection into MBA programs (through the use of individual fixed effects). In addition, variation in…
ERIC Educational Resources Information Center
Cooper, Amanda Mae
2013-01-01
This paper explores the increasingly prominent role of research brokering organizations (RBOs) in strengthening connections between education research, policy and practice across Canada. This paper is organized in three sections. First, it provides a literature review of research mediation--exploring terminology, models and empirical work (albeit…
A Research Synthesis of the Evaluation Capacity Building Literature
ERIC Educational Resources Information Center
Labin, Susan N.; Duffy, Jennifer L.; Meyers, Duncan C.; Wandersman, Abraham; Lesesne, Catherine A.
2012-01-01
The continuously growing demand for program results has produced an increased need for evaluation capacity building (ECB). The "Integrative ECB Model" was developed to integrate concepts from existing ECB theory literature and to structure a synthesis of the empirical ECB literature. The study used a broad-based research synthesis method with…
Skriabikova, Olga; Pavlova, Milena; Groot, Wim
2010-06-01
This paper reviews the existing empirical micro-level models of demand for out-patient physician services in which the size of the patient payment is included either directly as an independent variable (in the case of a flat-rate co-payment fee) or indirectly as a level of deductibles and/or co-insurance defined by the insurance coverage. The paper also discusses the relevance of these models for the assessment of patient payment policies. For this purpose, a systematic literature review is carried out. In total, 46 relevant publications were identified. These publications are classified into categories based on their general approach to demand modeling, specifications of data collection, data analysis, and main empirical findings. The analysis indicates a rising research interest in the empirical micro-level models of demand for out-patient physician services that incorporate the size of patient payment. Overall, the size of patient payments, consumer socio-economic and demographic features, and quality of services provided emerge as important determinants of demand for out-patient physician services. However, there is a great variety in the modeling approaches and inconsistencies in the findings regarding the impact of price on demand for out-patient physician services. Hitherto, the empirical research fails to offer policy-makers a clear strategy on how to develop a country-specific model of demand for out-patient physician services suitable for the assessment of patient payment policies in their countries. In particular, theoretically important factors, such as provider behavior, consumer attitudes, experience and culture, and informal patient payments, are not considered. Although we recognize that it is difficult to measure these factors and to incorporate them in the demand models, it is apparent that there is a gap in research for the construction of effective patient payment schemes.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases.
Covariations in ecological scaling laws fostered by community dynamics.
Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea
2017-10-03
Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and as the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify predicted theoretical covariations by contrasting empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.
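For a concrete sense of how such scaling exponents are estimated empirically, the sketch below fits a continuous power-law exponent by maximum likelihood (a Clauset-style estimator) to synthetic abundance-like data; the xmin cutoff and the sample itself are assumptions for illustration.

```python
# MLE for a continuous power law p(x) ~ x^(-alpha), x >= xmin:
#   alpha_hat = 1 + n / sum(log(x_i / xmin)), s.e. ~ (alpha_hat - 1)/sqrt(n).
import numpy as np

def fit_power_law(x, xmin):
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.log(x / xmin).sum()
    return alpha, (alpha - 1.0) / np.sqrt(n)

rng = np.random.default_rng(4)
true_alpha, xmin = 2.0, 1.0
u = rng.random(10_000)
x = xmin * (1 - u) ** (-1 / (true_alpha - 1))   # inverse-CDF sampling
alpha_hat, se = fit_power_law(x, xmin)
print(f"alpha = {alpha_hat:.3f} +/- {se:.3f} (true {true_alpha})")
```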
Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow
NASA Astrophysics Data System (ADS)
Narsipur, Shreyas
Unsteady aerodynamics has been a topic of research since the late 1930s and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and flow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of the unsteady flow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970s, the push to develop low-order models to solve unsteady flow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, several empirical constants and curve fits derived from existing experimental and computational results are required by the semi-empirical models to be an effective analysis tool. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall-type flow problems, with a focus on accurately modeling the unsteady flow physics with the aim of reducing empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at including LEV shedding capabilities in the LVE algorithm and combining them with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to theory were focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time lag parameter, tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and flow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Inclusion of LEV shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the flow morphology of the LEV to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers and relate it to the critical leading-edge suction parameter (LESP) to control the initiation and termination of LEV shedding in the low-order model.
The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compared excellently with CFD and experimental solutions, both in terms of aerodynamic loads and vortex flow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research was successful in accurately modeling the physics of unsteady flow, thereby helping restrict the number of empirical coefficients to just two variables while successfully modeling the aerodynamic forces and flow patterns in a simple and precise manner.
NASA Astrophysics Data System (ADS)
Levitan, Nathaniel; Gross, Barry
2016-10-01
New, high-resolution aerosol products are required in urban areas to improve the spatial coverage of the products, in terms of both resolution and retrieval frequency. These new products will improve our understanding of the spatial variability of aerosols in urban areas and will be useful in the detection of localized aerosol emissions. Urban aerosol retrieval is challenging for existing algorithms because of the high spatial variability of the surface reflectance, indicating the need for improved urban surface reflectance models. This problem can be stated in the language of novelty detection as the problem of selecting aerosol parameters whose effective surface reflectance spectrum is not an outlier in some space. In this paper, empirical orthogonal functions, a reconstruction-based novelty detection technique, are used to perform single-pixel aerosol retrieval using the single angular and temporal sample provided by the MODIS sensor. The empirical orthogonal basis functions are trained for different land classes using the MODIS BRDF MCD43 product. Existing land classification products are used in training and aerosol retrieval. The retrieval is compared against the existing operational MODIS 3 KM Dark Target (DT) aerosol product and co-located AERONET data. Based on this comparison, our method allows for a significant increase in retrieval frequency and a moderate decrease in the known biases of MODIS urban aerosol retrievals.
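The reconstruction-based novelty score at the heart of this approach can be sketched with ordinary PCA standing in for the EOF decomposition. Everything below (training spectra, band count, noise levels) is synthetic; it is not the paper's MODIS pipeline.

```python
# Learn an EOF/PCA basis of plausible surface-reflectance spectra for a land
# class, then score candidate spectra by their reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_train, n_bands, k = 500, 7, 3

# Training spectra for one land class: a few shared modes plus noise.
modes = rng.normal(size=(k, n_bands))
train = rng.normal(size=(n_train, k)) @ modes \
        + 0.01 * rng.normal(size=(n_train, n_bands))
pca = PCA(n_components=k).fit(train)

def novelty(spectrum):
    # Distance between a spectrum and its projection onto the EOF subspace.
    recon = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
    return np.linalg.norm(spectrum - recon)

inlier = rng.normal(size=k) @ modes                 # lies in the subspace
outlier = inlier + 0.5 * rng.normal(size=n_bands)   # wrong aerosol correction
print("inlier score :", novelty(inlier))            # small
print("outlier score:", novelty(outlier))           # large
```

In a retrieval loop, one would sweep candidate aerosol parameters and keep the candidate whose atmospherically corrected surface spectrum yields the smallest novelty score.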
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-06-01
Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast for 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
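A stripped-down version of the EOF-plus-regression idea could look like the sketch below, with a single train/test split standing in for the paper's double cross-validation and a synthetic field in place of NCEP/NCAR reanalysis.

```python
# Empirical-statistical downscaling sketch: compress a gridded predictor
# field with EOFs (PCA), then regress a station series on the leading PCs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_time, n_grid = 400, 200                   # time steps x grid points
field = rng.normal(size=(n_time, n_grid))   # stand-in reanalysis field
station = field[:, :20].mean(axis=1) + 0.3 * rng.normal(size=n_time)

Xtr, Xte, ytr, yte = train_test_split(field, station, test_size=0.25,
                                      random_state=0)
pca = PCA(n_components=10).fit(Xtr)         # EOFs from the training period only
reg = LinearRegression().fit(pca.transform(Xtr), ytr)

pred = reg.predict(pca.transform(Xte))
skill = np.corrcoef(pred, yte)[0, 1] ** 2
print(f"out-of-sample skill (r^2): {skill:.2f}")
```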
Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules
NASA Astrophysics Data System (ADS)
Garcia, A., III; Minning, C. P.; Cuddihy, E. F.
A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.
High Reynolds number turbulence model of rotating shear flows
NASA Astrophysics Data System (ADS)
Masuda, S.; Ariga, I.; Koyama, H. S.
1983-09-01
A Reynolds stress closure model for rotating turbulent shear flows is developed. Special attention is paid to keeping the model constants independent of rotation. First, general forms of the model of a Reynolds stress equation and a dissipation rate equation are derived, the only restrictions of which are high Reynolds number and incompressibility. The model equations are then applied to two-dimensional equilibrium boundary layers and the effects of Coriolis acceleration on turbulence structures are discussed. Comparisons with the experimental data and with previous results in other external force fields show that there exists a very close analogy between centrifugal, buoyancy and Coriolis force fields. Finally, the model is applied to predict the two-dimensional boundary layers on rotating plane walls. Comparisons with existing data confirmed its capability of predicting mean and turbulent quantities without employing any empirical relations in rotating fields.
Trust in automation: integrating empirical evidence on factors that influence trust.
Hoff, Kevin Anthony; Bashir, Masooda
2015-05-01
We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators' trust. We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers containing 127 eligible studies were included in the review. Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).
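For reference, the arithmetic that converts a long-run event rate into a 30-yr probability under a Poisson (memoryless) assumption is a one-liner; the crude count-based rate below is only an illustration, not the WGCEP source models or the empirical model itself.

```python
# One-or-more-event probability over a horizon from a constant annual rate.
# The rate crudely counts the four M >= 6.7 SFBR events of 1838-2002; the
# WGCEP and empirical models discussed above are far more elaborate.
import math

rate = 4 / 164                        # events per year (illustrative)
p30 = 1 - math.exp(-rate * 30)        # P(at least one event in 30 years)
print(f"30-yr probability: {p30:.2f}")   # ~0.5, in the range of quoted values
```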
Empirical Profiling of Cold Hydrogen Plumes Formed from Venting Of LH2 Storage Vessels: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttner, William J; Rivkin, Carl H; Schmidt, Kara
Liquid hydrogen (LH2) storage is a viable approach to assuring sufficient hydrogen capacity at commercial fuelling stations. Presently, LH2 is produced at remote facilities and then transported to the end-use site by road vehicles (i.e., LH2 tanker trucks). Venting of hydrogen to depressurize the transport storage tank is a routine part of the LH2 delivery process. The behaviour of cold hydrogen plumes has not been well-characterized because empirical field data are essentially non-existent. The NFPA 2 Hydrogen Storage Safety Task Group, which consists of hydrogen producers, safety experts, and CFD modellers, has identified the lack of understanding of hydrogen dispersion during LH2 venting of storage vessels as a critical gap for establishing safety distances at LH2 facilities, especially commercial hydrogen fuelling stations. To address this need, the NREL sensor laboratory, in collaboration with the NFPA 2 Safety Task Group, developed the Cold Hydrogen Plume Analyzer to empirically characterize the hydrogen plume formed during LH2 storage tank venting. A prototype Analyzer was developed and field-deployed at an actual LH2 venting operation, with critical findings that included:
- H2 being detected as much as 2 m lower than the release point, which is not predicted by existing models
- A small and inconsistent correlation between oxygen depletion and the hydrogen concentration
- A negligible to non-existent correlation between in-situ temperature and the hydrogen concentration
The Analyzer is currently being upgraded for enhanced metrological capabilities, including improved real-time spatial and temporal profiling of the plume and tracking of prevailing weather conditions. Additional deployments are planned to monitor plume behaviour under different wind, humidity, and temperature conditions. This data will be shared with the NFPA 2 Safety Task Group and ultimately will be used to support theoretical models and code requirements prescribed in NFPA 2.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always followed by certain destructive effects, especially on the ground surface; ground settlement is the most important of these effects, and different empirical, analytical, and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
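The empirical (Peck-type) estimate referenced above reduces to a Gaussian settlement trough across the tunnel axis. The sketch below uses the paper's 1.86 cm maximum settlement, but the trough-width parameter is an assumed placeholder, so the profile does not reproduce the Qom case study.

```python
# Peck (1969) transverse settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2)).
import numpy as np

S_max = 0.0186     # maximum settlement above tunnel axis, m (paper's Peck value)
i = 12.0           # trough-width parameter, m (assumed, site-dependent)

x = np.linspace(-40, 40, 9)                   # transverse distance from axis, m
S = S_max * np.exp(-x**2 / (2 * i**2))        # settlement profile
for xi, si in zip(x, S):
    print(f"x = {xi:+6.1f} m  ->  settlement = {si * 1000:5.2f} mm")
```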
Firm productivity, pollution, and output: theory and empirical evidence from China.
Tang, Erzi; Zhang, Jingjing; Haider, Zulfiqar
2015-11-01
Using a theoretical model, this paper argues that as firm productivity increases, there is a decrease in firm-level pollution intensity. However, as productivity increases, firms tend to increase their aggregate output, which requires the use of additional resources that increase pollution. Hence, an increase in productivity results in two opposing effects where increased productivity may in fact increase pollution created by a firm. We describe the joint effect of these two mechanisms on pollution emissions as the "productivity dilemma" of pollution emission. Based on firm-level data from China, we also empirically test this productivity dilemma hypothesis. Our empirical results suggest that, in general, firm productivity has a positive and statistically significant impact on pollution emission in China. However, the impact of productivity on pollution becomes negative when we control for increases in firm output. The empirical evidence also confirms the positive influence of productivity on output, which suggests that the main determinant of pollution is the firm's output. The empirical results provide evidence of the existence of, what we describe as, the productivity dilemma of pollution emission.
Returners and explorers dichotomy in human mobility
Pappalardo, Luca; Simini, Filippo; Rinzivillo, Salvatore; Pedreschi, Dino; Giannotti, Fosca; Barabási, Albert-László
2015-01-01
The availability of massive digital traces of human whereabouts has offered a series of novel insights on the quantitative patterns characterizing human mobility. In particular, numerous recent studies have led to an unexpected consensus: the considerable variability in the characteristic travelled distance of individuals coexists with a high degree of predictability of their future locations. Here we shed light on this surprising coexistence by systematically investigating the impact of recurrent mobility on the characteristic distance travelled by individuals. Using both mobile phone and GPS data, we discover the existence of two distinct classes of individuals: returners and explorers. As existing models of human mobility cannot explain the existence of these two classes, we develop more realistic models able to capture the empirical findings. Finally, we show that returners and explorers play a distinct quantifiable role in spreading phenomena and that a correlation exists between their mobility patterns and social interactions. PMID:26349016
NASA Astrophysics Data System (ADS)
Wharmby, Andrew William
Existing fractional calculus models with a non-empirical basis, used to describe constitutive relationships between stress and strain in viscoelastic materials, are modified to employ all orders of fractional derivatives between zero and one. Parallels between viscoelastic and dielectric theory are drawn so that these modified fractional calculus based models for viscoelastic materials may be used to describe relationships between electric flux density and electric field intensity in dielectric materials. The resulting fractional calculus based dielectric relaxation model is tested using existing complex permittivity data in the radio-frequency bandwidth of a wide variety of homogeneous materials. The consequences that the application of this newly developed fractional calculus based dielectric relaxation model has on Maxwell's equations are also examined through the effects of dielectric dissipation and dispersion.
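Numerically, a fractional derivative of order between zero and one, the building block of these models, can be approximated with Grünwald-Letnikov weights. A short sketch follows; the test signal and step size are arbitrary choices, not tied to the dissertation's materials data.

```python
# Grunwald-Letnikov approximation of a fractional derivative of order alpha:
#   D^alpha f(t_j) ~ dt^(-alpha) * sum_k w_k f(t_{j-k}),
# with binomial weights w_k = (-1)^k C(alpha, k) built recursively.
import numpy as np

def gl_fractional_derivative(f, alpha, dt):
    n = f.size
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                       # w_k = w_{k-1}*(k-1-alpha)/k
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n)])
    return out / dt**alpha

t = np.linspace(0, 2 * np.pi, 400)
f = np.sin(t)
half = gl_fractional_derivative(f, 0.5, t[1] - t[0])
print("half-derivative of sin at t = pi:", half[len(t) // 2])
```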
Developing an Empirical Model for Jet-Surface Interaction Noise
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2014-01-01
The process of developing an empirical model for jet-surface interaction noise is described and the resulting model evaluated. Jet-surface interaction noise is generated when the high-speed engine exhaust from modern tightly integrated or conventional high-bypass ratio engine aircraft strikes or flows over the airframe surfaces. An empirical model based on an existing experimental database is developed for use in preliminary design system level studies where computation speed and range of configurations is valued over absolute accuracy to select the most promising (or eliminate the worst) possible designs. The model developed assumes that the jet-surface interaction noise spectra can be separated from the jet mixing noise and described as a parabolic function with three coefficients: peak amplitude, spectral width, and peak frequency. These coefficients are fit to functions of surface length and distance from the jet lipline to form a characteristic spectrum which is then adjusted for changes in jet velocity and/or observer angle using scaling laws from published theoretical and experimental work. The resulting model is then evaluated for its ability to reproduce the characteristic spectrum and then for reproducing spectra measured at other jet velocities and observer angles; successes and limitations are discussed considering the complexity of the jet-surface interaction noise versus the desire for a model that is simple to implement and quick to execute.
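The three-coefficient parabolic form described above is simple to sketch. The code below assumes placeholder coefficients and a generic velocity power law for illustration; these are not the NASA-fit values or the published scaling exponents.

```python
# Parabolic (in log frequency) jet-surface interaction noise spectrum with
# three coefficients, shifted by an assumed velocity power law.
import numpy as np

def jsi_spectrum(freq, peak_db, width, f_peak):
    # Parabola in log10(frequency), centered on the spectral peak.
    return peak_db - width * np.log10(freq / f_peak) ** 2

def scale_velocity(peak_db, v, v_ref, n=6.0):
    # Amplitude scaling ~ 10*n*log10(V/V_ref); the exponent n is assumed.
    return peak_db + 10 * n * np.log10(v / v_ref)

freq = np.logspace(2, 4, 5)                       # 100 Hz to 10 kHz
base = jsi_spectrum(freq, peak_db=110.0, width=20.0, f_peak=1000.0)
fast = jsi_spectrum(freq, scale_velocity(110.0, 300.0, 250.0), 20.0, 1000.0)
for f, b, s in zip(freq, base, fast):
    print(f"{f:8.0f} Hz   {b:6.1f} dB   {s:6.1f} dB")
```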
Seasonal forecast of St. Louis encephalitis virus transmission, Florida.
Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark
2004-05-01
Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill-verification analyses may be applied to test the predictability of an empirical disease forecast model.
Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model
Zhu, Qing; Zou, Yingchao; Lai, Kin Keung
2014-01-01
As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economic viable interpretations. PMID:25061614
An Analytical-Numerical Model for Two-Phase Slug Flow through a Sudden Area Change in Microchannels
Momen, A. Mehdizadeh; Sherif, S. A.; Lear, W. E.
2016-01-01
In this article, two new analytical models have been developed to calculate two-phase slug flow pressure drop in microchannels through a sudden contraction. Even though many studies have been reported on two-phase flow in microchannels, considerable discrepancies still exist, mainly due to the difficulties in experimental setup and measurements. Numerical simulations were performed to support the new analytical models and to explore in more detail the physics of the flow in microchannels with a sudden contraction. Both analytical and numerical results were compared to the available experimental data and other empirical correlations. Results show that the models, which were developed based on the slug and semi-slug assumptions, agree well with experiments in microchannels. Moreover, in contrast to the previous empirical correlations which were tuned for a specific geometry, the new analytical models are capable of taking geometrical parameters as well as flow conditions into account.
How to reach linguistic consensus: a proof of convergence for the naming game.
De Vylder, Bart; Tuyls, Karl
2006-10-21
In this paper we introduce a mathematical model of naming games. Naming games have been widely used within research on the origins and evolution of language. Despite the many interesting empirical results these studies have produced, most of this research lacks a formal elucidating theory. In this paper we show how a population of agents can reach linguistic consensus, i.e., learn to use one common language to communicate with one another. Our approach differs from existing formal work in two important ways: first, we relax the overly strong assumption that an agent samples infinitely often during each time interval, an assumption usually made to guarantee convergence of an empirical learning process to a deterministic dynamical system. Second, we provide a proof that under these new, realistic conditions our model converges to a common language for the entire population of agents. Finally, the model is experimentally validated.
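A minimal simulation of the game family analyzed here (the paper's proof concerns a particular sampling regime; the inventory-collapse rule below is the standard minimal naming-game variant, chosen for brevity) shows a population reaching consensus.

```python
# Minimal naming game: random speaker/hearer pairs negotiate names until
# every agent holds exactly one, shared, word.
import random

random.seed(7)
N = 100
vocab = [set() for _ in range(N)]        # each agent's inventory of names
next_word = 0                             # counter for inventing fresh names

for step in range(200_000):
    s, h = random.sample(range(N), 2)     # speaker and hearer
    if not vocab[s]:                      # empty inventory: invent a name
        vocab[s].add(next_word)
        next_word += 1
    word = random.choice(tuple(vocab[s]))
    if word in vocab[h]:                  # success: both collapse to the word
        vocab[s] = {word}
        vocab[h] = {word}
    else:                                 # failure: hearer learns the word
        vocab[h].add(word)
    if all(len(v) == 1 for v in vocab) and len(set().union(*vocab)) == 1:
        print(f"consensus reached after {step + 1} interactions")
        break
```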
The 2 × 2 Standpoints Model of Achievement Goals
Korn, Rachel M.; Elliot, Andrew J.
2016-01-01
In the present research, we proposed and tested a 2 × 2 standpoints model of achievement goals grounded in the development-demonstration and approach-avoidance distinctions. Three empirical studies are presented. Study 1 provided evidence supporting the structure and psychometric properties of a newly developed measure of the goals of the 2 × 2 standpoints model. Study 2 documented the predictive utility of these goal constructs for intrinsic motivation: development-approach and development-avoidance goals were positive predictors, and demonstration-avoidance goals were a negative predictor of intrinsic motivation. Study 3 documented the predictive utility of these goal constructs for performance attainment: Demonstration-approach goals were a positive predictor and demonstration-avoidance goals were a negative predictor of exam performance. The conceptual and empirical contributions of the present research were discussed within the broader context of existing achievement goal theory and research. PMID:27242641
NASA Astrophysics Data System (ADS)
Blake, T.; Egede, U.; Owen, P.; Petridis, K. A.; Pomery, G.
2018-06-01
A method for analysing the hadronic resonance contributions in $\overline{B}^0 \to \overline{K}^{*0} \mu^+\mu^-$ decays is presented. This method uses an empirical model that relies on measurements of the branching fractions and polarisation amplitudes of final states involving $J^{PC}=1^{--}$ resonances, relative to the short-distance component, across the full dimuon mass spectrum of $\overline{B}^0 \to \overline{K}^{*0} \mu^+\mu^-$ transitions. The model is in good agreement with existing calculations of hadronic non-local effects. The effect of this contribution on the angular observables is presented, and it is demonstrated how the narrow resonances in the $q^2$ spectrum provide a dramatic enhancement of CP-violating effects in the short-distance amplitude. Finally, a study of the hadronic resonance effects on lepton universality ratios, $R_{K^{(*)}}$, in the presence of new physics is presented.
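Structurally, the empirical model amounts to a short-distance amplitude plus relativistic Breit-Wigner terms with fitted magnitudes and phases. The sketch below uses PDG-scale masses and widths, but the magnitudes, phases, and normalization are placeholders, not the paper's fitted values.

```python
# Toy dimuon-mass amplitude: short-distance term plus Breit-Wigners for
# two 1^-- charmonium resonances with free magnitudes and phases.
import numpy as np

def breit_wigner(q2, m, gamma):
    return (m * gamma) / (m**2 - q2 - 1j * m * gamma)

resonances = [                          # mass (GeV), width (GeV), |eta|, phase
    (3.0969, 9.3e-5, 7.0, 0.0),         # J/psi   (toy magnitude/phase)
    (3.6861, 3.0e-4, 2.5, np.pi / 2),   # psi(2S) (toy magnitude/phase)
]

def total_amplitude(q2, short_distance=1.0):
    amp = short_distance + 0j
    for m, gamma, eta, delta in resonances:
        amp = amp + eta * np.exp(1j * delta) * breit_wigner(q2, m, gamma)
    return amp

q2 = np.array([3.0969**2, 8.0, 15.0])   # GeV^2 sample points
for q2i, a in zip(q2, total_amplitude(q2)):
    print(f"q2 = {q2i:6.2f}: |A|^2 / |A_SD|^2 = {abs(a)**2:8.2f}")
```

On the narrow resonance peaks the interference term dominates, which is the mechanism behind the enhancement of CP-violating effects described in the abstract.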
Modelled drift patterns of fish larvae link coastal morphology to seabird colony distribution.
Sandvik, Hanno; Barrett, Robert T; Erikstad, Kjell Einar; Myksvoll, Mari S; Vikebø, Frode; Yoccoz, Nigel G; Anker-Nilssen, Tycho; Lorentsen, Svein-Håkon; Reiertsen, Tone K; Skarðhamar, Jofrid; Skern-Mauritzen, Mette; Systad, Geir Helge
2016-05-13
Colonial breeding is an evolutionary puzzle, as the benefits of breeding in high densities are still not fully explained. Although the dynamics of existing colonies are increasingly understood, few studies have addressed the initial formation of colonies, and empirical tests are rare. Using a high-resolution larval drift model, we here document that the distribution of seabird colonies along the Norwegian coast can be explained by variations in the availability and predictability of fish larvae. The modelled variability in concentration of fish larvae is, in turn, predicted by the topography of the continental shelf and coastline. The advection of fish larvae along the coast translates small-scale topographic characteristics into a macroecological pattern, viz. the spatial distribution of top-predator breeding sites. Our findings provide empirical corroboration of the hypothesis that seabird colonies are founded in locations that minimize travel distances between breeding and foraging locations, thereby enabling optimal foraging by central-place foragers.
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u, respectively. A 1-D position-sensitive Parallel Grid Avalanche Counter (PGAC) detector was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exists for equilibrium charge state distributions of heavy ions with 19 ≲ Z_p, Z_t ≲ 54 (Z_p and Z_t are the projectile's and target's atomic numbers, respectively). Hence the success of the semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in the energy region of interest. A number of semi-empirical models from the literature were evaluated in this study regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width and the distribution shape, to come up with a more reliable model. We discuss this new "combinatorial" prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
Altruism and Indirect Reciprocity: The Interaction of Person and Situation in Prosocial Behavior
ERIC Educational Resources Information Center
Simpson, Brent; Willer, Robb
2008-01-01
A persistent puzzle in the social and biological sciences is the existence of prosocial behavior, actions that benefit others, often at a cost to oneself. Recent theoretical models and empirical studies of indirect reciprocity show that actors behave prosocially in order to develop an altruistic reputation and receive future benefits from third…
ERIC Educational Resources Information Center
González-Brenes, José P.; Huang, Yun
2015-01-01
Classification evaluation metrics are often used to evaluate adaptive tutoring systems-- programs that teach and adapt to humans. Unfortunately, it is not clear how intuitive these metrics are for practitioners with little machine learning background. Moreover, our experiments suggest that existing convention for evaluating tutoring systems may…
Understanding hind limb lameness signs in horses using simple rigid body mechanics.
Starke, S D; May, S A; Pfau, T
2015-09-18
Hind limb lameness detection in horses relies on the identification of movement asymmetry, which can be based on multiple pelvic landmarks. This study explains the poorly understood relationship between hind limb lameness pointers related to the tubera coxae and sacrum, based on experimental data in the context of a simple rigid body model. Vertical displacement of the tubera coxae and sacrum was quantified experimentally in 107 horses with varying degrees of lameness. A geometrical rigid-body model of pelvis movement during lameness was created in Matlab. Several asymmetry measures were calculated and contrasted. Results showed that model predictions for tubera coxae asymmetry during lameness matched experimental observations closely. Asymmetry for sacrum and comparative tubera coxae movement showed a strong association both empirically (R² ≥ 0.92) and theoretically. We did not find empirical or theoretical evidence for a systematic, pronounced adaptation in the pelvic rotation pattern with increasing lameness. The model showed that the overall range of movement between tubera coxae does not allow the appreciation of asymmetry changes beyond mild lameness. When evaluating movement relative to the stride cycle we did find empirical evidence for asymmetry being slightly more visible when comparing tubera coxae amplitudes rather than sacrum amplitudes, although variation exists for mild lameness. In conclusion, the rigidity of the equine pelvis results in tightly linked movement trajectories of different pelvic landmarks. The model allows the explanation of empirical observations in the context of the underlying mechanics, helping to identify potentially limited assessment choices when evaluating gait. Copyright © 2015 Elsevier Ltd. All rights reserved.
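As a rough illustration of the rigid-body reasoning, the sketch below derives tubera coxae trajectories from a sacrum trajectory plus a roll-induced term, assuming the pelvis is rigid and the sacrum marker sits midway between the tubera; the geometry, amplitudes, and asymmetry measure are invented for illustration and are not the authors' Matlab model.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)           # one stride cycle
half_width = 0.12                            # half distance between tubera (m), invented
z_sacrum = 0.03 * np.sin(2 * t) - 0.004 * np.sin(t)   # slightly asymmetric motion
roll = np.deg2rad(3.0) * np.sin(t)           # pelvic roll about the spine axis

# Rigid-body link: each tuber moves with the sacrum plus a roll term.
z_left = z_sacrum + half_width * np.sin(roll)
z_right = z_sacrum - half_width * np.sin(roll)

def min_diff(z):
    """Difference between the displacement minima of the two half-cycles,
    one simple asymmetry measure."""
    half = len(z) // 2
    return z[:half].min() - z[half:].min()

print("sacrum asymmetry:", min_diff(z_sacrum))
print("left/right tubera ranges:", np.ptp(z_left), np.ptp(z_right))
```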
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
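A minimal sketch of the excess-velocity component with an empirical erodibility term, assuming (hypothetically) that erodibility decays exponentially with tree cover and grows with bank height; all coefficient names and values are placeholders, not the calibrated ones from the study.

```python
import numpy as np

def bank_erosion_rate(v, v_crit, tree_cover, bank_height,
                      k0=0.02, c_tree=1.5, c_height=0.3):
    """Excess-velocity erosion rate (m/yr). The erodibility k_d uses
    tree cover and bank height as root-density proxies; the functional
    form and coefficients here are hypothetical placeholders."""
    k_d = k0 * np.exp(-c_tree * tree_cover + c_height * bank_height)
    return k_d * np.maximum(v - v_crit, 0.0)

# Example: near-bank velocity 0.9 m/s, threshold 0.4 m/s,
# 60% tree cover, 2 m high banks.
print(bank_erosion_rate(v=0.9, v_crit=0.4, tree_cover=0.6, bank_height=2.0))
```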
Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test
Jiang, Chuanjin; Zhang, Jing; Song, Fugen
2014-01-01
Combination forecasting takes all characters of each single forecasting method into consideration, and combines them to form a composite, which increases forecasting accuracy. The existing researches on combination forecasting select single model randomly, neglecting the internal characters of the forecasting object. After discussing the function of cointegration test and encompassing test in the selection of single model, supplemented by empirical analysis, the paper gives the single model selection guidance: no more than five suitable single models can be selected from many alternative single models for a certain forecasting target, which increases accuracy and stability. PMID:24892061
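For readers unfamiliar with the two tests, the sketch below runs a cointegration screen and one common form of forecast-encompassing regression on synthetic series using statsmodels; it illustrates the idea rather than the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))          # forecasting target (nonstationary)
f1 = y + rng.normal(scale=0.5, size=200)     # forecast from candidate model 1
f2 = y + rng.normal(scale=1.5, size=200)     # forecast from candidate model 2

# Cointegration screen: a suitable single model should share a common
# stochastic trend with the target series.
for name, f in [("f1", f1), ("f2", f2)]:
    tstat, pvalue, _ = coint(y, f)
    print(name, "cointegration p-value:", round(pvalue, 3))

# Encompassing test: if the weight on f2 is insignificant in the combined
# regression, f1 encompasses f2 and f2 adds no useful information.
ols = sm.OLS(y, sm.add_constant(np.column_stack([f1, f2]))).fit()
print("weights:", ols.params, "p-values:", ols.pvalues)
```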
NASA Astrophysics Data System (ADS)
Bell, M. D.; Walker, J. T.
2017-12-01
Atmospheric deposition of nitrogen compounds is determined using a variety of measurement and modeling methods. These values are then used to calculate fluxes to the ecosystem, which can in turn be linked to ecological responses. However, for these data to be used outside the system in which they were developed, it is necessary to understand how the deposition estimates relate to one another. Therefore, we first identified sources of "bulk" deposition data and compared methods, reliability of data, and consistency of results with one another. Then we looked at the variation within photochemical models that are used by Federal Agencies to evaluate national trends. Finally, we identified some best practices for researchers to consider if their assessment is intended for use at broader scales. Empirical measurements used in this assessment include passive collection of atmospheric molecules, throughfall deposition of precipitation, snowpack measurements, and biomonitors such as lichens. The three most common photochemical models used to model deposition within the United States are CMAQ, CAMx, and TDep (which uses empirical data to refine modeled values). These models all use meteorological and emission data to estimate deposition at local, regional, or national scales. We identified the range of uncertainty that exists within the types of deposition measurements and how these vary over space and time. Uncertainty is assessed by comparing deposition estimates from differing collection methods and comparing modeled estimates to empirical deposition data. Each collection method has benefits and drawbacks that need to be taken into account if the results are to be extrapolated outside of the research area. Comparing field-measured values to modeled values highlights the importance of each in the greater goals of understanding current conditions and trends within deposition patterns in the US. While models work well on a larger scale, they cannot replicate the local heterogeneity that exists at a site. Often, each researcher has a favorite method of analysis, but if the data cannot be related to other efforts, it becomes harder to apply them to broader policy considerations.
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
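The Bayes factor arithmetic itself is simple once marginal likelihoods are in hand; below is a minimal sketch with invented log marginal likelihoods (as would be produced by stepping-stone or path sampling), interpreted on the 2 ln BF scale of Kass and Raftery.

```python
# Hypothetical log marginal likelihoods for two competing species
# delimitation models, e.g. from stepping-stone sampling.
log_ml_split = -10502.3   # placeholder value
log_ml_lump = -10511.8    # placeholder value

log_bf = log_ml_split - log_ml_lump   # log Bayes factor in favor of splitting
two_ln_bf = 2.0 * log_bf              # Kass & Raftery (1995) scale

print("2 ln BF =", round(two_ln_bf, 1))
if two_ln_bf > 10:
    print("very strong support for the split model")
```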
Development of a machine learning potential for graphene
NASA Astrophysics Data System (ADS)
Rowe, Patrick; Csányi, Gábor; Alfè, Dario; Michaelides, Angelos
2018-02-01
We present an accurate interatomic potential for graphene, constructed using the Gaussian approximation potential (GAP) machine learning methodology. This GAP model obtains a faithful representation of a density functional theory (DFT) potential energy surface, facilitating highly accurate (approaching the accuracy of ab initio methods) molecular dynamics simulations. This is achieved at a computational cost which is orders of magnitude lower than that of comparable calculations which directly invoke electronic structure methods. We evaluate the accuracy of our machine learning model alongside that of a number of popular empirical and bond-order potentials, using both experimental and ab initio data as references. We find that whilst significant discrepancies exist between the empirical interatomic potentials and the reference data—and amongst the empirical potentials themselves—the machine learning model introduced here provides exemplary performance in all of the tested areas. The calculated properties include: graphene phonon dispersion curves at 0 K (which we predict with sub-meV accuracy), phonon spectra at finite temperature, in-plane thermal expansion up to 2500 K as compared to NPT ab initio molecular dynamics simulations and a comparison of the thermally induced dispersion of graphene Raman bands to experimental observations. We have made our potential freely available online at [http://www.libatoms.org].
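As a toy analogue of the GAP approach (which uses Gaussian process regression on SOAP descriptors), the sketch below fits a Gaussian-kernel ridge regression from synthetic descriptor vectors to synthetic energies; every ingredient here is a stand-in, not the published potential.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Synthetic stand-ins: descriptor vectors per structure and "DFT" energies.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                 # descriptor vectors (invented)
E = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2      # synthetic reference energies

# Gaussian (RBF) kernel regression, a simplified analogue of GP regression.
model = KernelRidge(kernel="rbf", gamma=0.3, alpha=1e-3).fit(X[:150], E[:150])
pred = model.predict(X[150:])
print("RMSE on toy hold-out set:", np.sqrt(np.mean((pred - E[150:]) ** 2)))
```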
Models of Solar Wind Structures and Their Interaction with the Earth's Space Environment
NASA Astrophysics Data System (ADS)
Watermann, J.; Wintoft, P.; Sanahuja, B.; Saiz, E.; Poedts, S.; Palmroth, M.; Milillo, A.; Metallinou, F.-A.; Jacobs, C.; Ganushkina, N. Y.; Daglis, I. A.; Cid, C.; Cerrato, Y.; Balasis, G.; Aylward, A. D.; Aran, A.
2009-11-01
The discipline of “Space Weather” is built on the scientific foundation of solar-terrestrial physics but with a strong orientation toward applied research. Models describing the solar-terrestrial environment are therefore at the heart of this discipline, for both physical understanding of the processes involved and establishing predictive capabilities of the consequences of these processes. Depending on the requirements, purely physical models, semi-empirical or empirical models are considered to be the most appropriate. This review focuses on the interaction of solar wind disturbances with geospace. We cover interplanetary space, the Earth’s magnetosphere (with the exception of radiation belt physics), the ionosphere (with the exception of radio science), the neutral atmosphere and the ground (via electromagnetic induction fields). Space weather relevant state-of-the-art physical and semi-empirical models of the various regions are reviewed. They include models for interplanetary space, its quiet state and the evolution of recurrent and transient solar perturbations (corotating interaction regions, coronal mass ejections, their interplanetary remnants, and solar energetic particle fluxes). Models of coupled large-scale solar wind-magnetosphere-ionosphere processes (global magnetohydrodynamic descriptions) and of inner magnetosphere processes (ring current dynamics) are discussed. Achievements in modeling the coupling between magnetospheric processes and the neutral and ionized upper and middle atmospheres are described. Finally we mention efforts to compile comprehensive and flexible models from selections of existing modules applicable to particular regions and conditions in interplanetary space and geospace.
Zhang, Tao; Yang, Xiaojun
2013-01-01
Watershed-wide land-cover proportions can be used to predict the in-stream non-point source pollutant loadings through regression modeling. However, the model performance can vary greatly across different study sites and among various watersheds. Existing literature has shown that this type of regression modeling tends to perform better for large watersheds than for small ones, and that such a performance variation has been largely linked with different interwatershed landscape heterogeneity levels. The purpose of this study is to further examine the previously mentioned empirical observation based on a set of watersheds in the northern part of Georgia (USA) to explore the underlying causes of the variation in model performance. Through the combined use of the neutral landscape modeling approach and a spatially explicit nutrient loading model, we tested whether the regression model performance variation over the watershed groups ranging in size is due to the different watershed landscape heterogeneity levels. We adopted three neutral landscape modeling criteria that were tied with different similarity levels in watershed landscape properties and used the nutrient loading model to estimate the nitrogen loads for these neutral watersheds. Then we compared the regression model performance for the real and neutral landscape scenarios, respectively. We found that watershed size can affect the regression model performance both directly and indirectly. Along with the indirect effect through interwatershed heterogeneity, watershed size can directly affect the model performance over the watersheds varying in size. We also found that the regression model performance can be more significantly affected by other physiographic properties shaping nitrogen delivery effectiveness than the watershed land-cover heterogeneity. This study contrasts with many existing studies because it goes beyond hypothesis formulation based on empirical observations and into hypothesis testing to explore the fundamental mechanism.
Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach
Jaiswal, Kishor; Wald, David J.; Hearne, Mike
2009-01-01
We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
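Empirical loss models of this kind often express the fatality rate as a lognormal function of shaking intensity; the sketch below shows that general form with invented country parameters and exposure, not the calibrated PAGER values.

```python
import numpy as np
from scipy.stats import norm

def fatality_rate(mmi, theta, beta):
    """Lognormal fatality-rate function of shaking intensity, the general
    functional form used in empirical loss models of this kind."""
    return norm.cdf(np.log(mmi / theta) / beta)

# Hypothetical country-specific parameters and exposure per intensity bin.
theta, beta = 14.0, 0.25                          # placeholder values
mmi_bins = np.array([6.0, 7.0, 8.0, 9.0])         # shaking intensity bins
exposed_pop = np.array([2e6, 8e5, 1.5e5, 2e4])    # people exposed per bin

estimated = np.sum(exposed_pop * fatality_rate(mmi_bins, theta, beta))
print(f"estimated fatalities: {estimated:.0f}")
```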
NASA Technical Reports Server (NTRS)
Tan, C. M.; Carr, L. W.
1996-01-01
A variety of empirical and computational fluid dynamics two-dimensional (2-D) dynamic stall models were compared to recently obtained three-dimensional (3-D) dynamic stall data in a workshop on modeling of 3-D dynamic stall of an unswept, rectangular wing, of aspect ratio 10. Dynamic stall test data both below and above the static stall angle-of-attack were supplied to the participants, along with a 'blind' case where only the test conditions were supplied in advance, with results being compared to experimental data at the workshop itself. Detailed graphical comparisons are presented in the report, which also includes discussion of the methods and the results. The primary conclusion of the workshop was that the 3-D effects of dynamic stall on the oscillating wing studied in the workshop can be reasonably reproduced by existing semi-empirical models once 2-D dynamic stall data have been obtained. The participants also emphasized the need for improved quantification of 2-D dynamic stall.
Estimating the volume of Alpine glacial lakes
NASA Astrophysics Data System (ADS)
Cook, S. J.; Quincey, D. J.
2015-12-01
Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r² = 0.38) and that although lake volume and area are well correlated (r² = 0.91), and indeed are auto-correlated, there are distinct outliers in the data set. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled data set to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.
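A minimal sketch of the kind of empirical scaling relationship being evaluated, fitting V = c·A^b by least squares in log-log space; the depth-area-volume data here are invented, not the global database described above.

```python
import numpy as np

# Hypothetical lake areas (km^2) and volumes (10^6 m^3).
area = np.array([0.05, 0.12, 0.4, 0.9, 2.3, 5.1])
volume = np.array([0.4, 1.5, 8.0, 25.0, 110.0, 380.0])

# Fit V = c * A^b by least squares in log-log space.
b, log_c = np.polyfit(np.log(area), np.log(volume), 1)
c = np.exp(log_c)
print(f"V ≈ {c:.1f} * A^{b:.2f}")

# Caution: V and A are auto-correlated (depth enters both), so a high
# correlation does not by itself validate predictions for outlier lake types.
```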
Estimating the volume of Alpine glacial lakes
NASA Astrophysics Data System (ADS)
Cook, S. J.; Quincey, D. J.
2015-09-01
Supraglacial, moraine-dammed and ice-dammed lakes represent a potential glacial lake outburst flood (GLOF) threat to downstream communities in many mountain regions. This has motivated the development of empirical relationships to predict lake volume given a measurement of lake surface area obtained from satellite imagery. Such relationships are based on the notion that lake depth, area and volume scale predictably. We critically evaluate the performance of these existing empirical relationships by examining a global database of measured glacial lake depths, areas and volumes. Results show that lake area and depth are not always well correlated (r² = 0.38), and that although lake volume and area are well correlated (r² = 0.91), there are distinct outliers in the dataset. These outliers represent situations where it may not be appropriate to apply existing empirical relationships to predict lake volume, and include growing supraglacial lakes, glaciers that recede into basins with complex overdeepened morphologies or that have been deepened by intense erosion, and lakes formed where glaciers advance across and block a main trunk valley. We use the compiled dataset to develop a conceptual model of how the volumes of supraglacial ponds and lakes, moraine-dammed lakes and ice-dammed lakes should be expected to evolve with increasing area. Although a large amount of bathymetric data exist for moraine-dammed and ice-dammed lakes, we suggest that further measurements of growing supraglacial ponds and lakes are needed to better understand their development.
SPECTRAL data-based estimation of soil heat flux
Singh, Ramesh K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.
2011-01-01
Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and to present the suitability of the developed model to estimate G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best to estimate G based on r² values. The effects of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation for nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha⁻¹. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.
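A sketch of fitting an exponential G model of the general form the abstract describes, using synthetic calibration data; the exact functional form and the coefficients in the article may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_model(X, a, b):
    """G = Rn * a * exp(b * NDVI): one plausible exponential form; the
    article's exact structure and coefficients may differ."""
    ndvi, rn = X
    return rn * a * np.exp(b * ndvi)

# Synthetic calibration data standing in for field measurements.
ndvi = np.array([0.2, 0.35, 0.5, 0.65, 0.8])
rn = np.array([420.0, 450.0, 480.0, 500.0, 520.0])    # W m^-2
g_obs = np.array([90.0, 70.0, 52.0, 38.0, 26.0])      # W m^-2

(a, b), _ = curve_fit(g_model, (ndvi, rn), g_obs, p0=(0.3, -2.0))
print(f"G/Rn = {a:.3f} * exp({b:.2f} * NDVI)")
```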
Theorizing Land Cover and Land Use Changes: The Case of Tropical Deforestation
NASA Technical Reports Server (NTRS)
Walker, Robert
2004-01-01
This article addresses land-cover and land-use dynamics from the perspective of regional science and economic geography. It first provides an account of the so-called spatially explicit model, which has emerged in recent years as a key empirical approach to the issue. The article uses this discussion as a springboard to evaluate the potential utility of von Thuenen to the discourse on land-cover and land-use change. After identifying shortcomings of current theoretical approaches to land use in mainly urban models, the article filters a discussion of deforestation through the lens of bid-rent and assesses its effectiveness in helping us comprehend the destruction of tropical forest in the Amazon basin. The article considers the adjustments that would have to be made to existing theory to make it more useful to the empirical issues.
Transaction costs and sequential bargaining in transferable discharge permit markets.
Netusil, N R; Braden, J B
2001-03-01
Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
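A minimal simulation of discrete, bilateral, sequential trading with a per-contract transaction cost, using invented abatement costs; it illustrates the mechanism studied, not the authors' calibrated market model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
mc = rng.uniform(5.0, 50.0, n)       # marginal abatement cost per farm (invented)
abate = np.full(n, 10.0)             # abatement duties under a uniform standard
t_cost = 8.0                         # transaction cost per executed contract
n_trades = 0

# Randomly executed contracts: a pair trades one abatement unit whenever
# the cost saving exceeds the transaction cost.
for _ in range(2000):
    i, j = rng.choice(n, size=2, replace=False)
    buyer, seller = (i, j) if mc[i] > mc[j] else (j, i)
    if mc[buyer] - mc[seller] > t_cost and abate[buyer] > 0:
        abate[buyer] -= 1.0          # high-cost farm abates less, buys a permit
        abate[seller] += 1.0         # low-cost farm abates more, sells a permit
        n_trades += 1

total = np.sum(mc * abate) + n_trades * t_cost
print("total compliance cost after trading:", round(float(total), 1))
```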
ERIC Educational Resources Information Center
Kärner, Tobias; Sembill, Detlef; Aßmann, Christian; Friederichs, Edgar; Carstensen, Claus H.
2017-01-01
The investigation of learning processes by assessing students' experience along with objective characteristics within a classroom context has a long tradition in empirical learning process research (e.g. Sembill, 1984 et passim; Wild & Krapp, 1996). However, most of the existing studies confine themselves to psychological variables that seem…
Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket
Thomas A. Black; Charles H. Luce
2013-01-01
A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
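A compact sketch of the decomposition-ensemble idea: decompose, forecast each IMF, aggregate. Here EEMD (from the PyPI package EMD-signal) stands in for the paper's CEEMD, a fixed-parameter SVR stands in for the GWO-optimized SVR, and simple summation stands in for the second-stage ensemble SVR; the PM2.5 series is synthetic.

```python
import numpy as np
from PyEMD import EEMD                       # pip package "EMD-signal"
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(400)
pm25 = 50 + 20 * np.sin(2 * np.pi * t / 30) + rng.normal(scale=5, size=t.size)

imfs = EEMD().eemd(pm25)                     # decompose into IMFs

def one_step_forecast(series, lags=6):
    """Fit an SVR on lagged values and predict the next point."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    model = SVR(C=10.0, gamma="scale").fit(X[:-1], y[:-1])
    return model.predict(X[-1:])[0]

# Predict each IMF separately, then aggregate for the ensemble forecast.
prediction = sum(one_step_forecast(imf) for imf in imfs)
print("next-step PM2.5 forecast:", round(prediction, 1))
```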
Lavender, Jason M.; Wonderlich, Stephen A.; Engel, Scott G.; Gordon, Kathryn H.; Kaye, Walter H.; Mitchell, James E.
2015-01-01
Several existing conceptual models and psychological interventions address or emphasize the role of emotion dysregulation in eating disorders. The current article uses Gratz and Roemer’s (2004) multidimensional model of emotion regulation and dysregulation as a clinically relevant framework to review the extant literature on emotion dysregulation in anorexia nervosa (AN) and bulimia nervosa (BN). Specifically, the dimensions reviewed include: (1) the flexible use of adaptive and situationally appropriate strategies to modulate the duration and/or intensity of emotional responses, (2) the ability to successfully inhibit impulsive behavior and maintain goal-directed behavior in the context of emotional distress, (3) awareness, clarity, and acceptance of emotional states, and (4) the willingness to experience emotional distress in the pursuit of meaningful activities. The current review suggests that both AN and BN are characterized by broad emotion regulation deficits, with difficulties in emotion regulation across the four dimensions found to characterize both AN and BN, although a small number of more specific difficulties may distinguish the two disorders. The review concludes with a discussion of the clinical implications of the findings, as well as a summary of limitations of the existing empirical literature and suggestions for future research. PMID:26112760
Empirical relations between large wood transport and catchment characteristics
NASA Astrophysics Data System (ADS)
Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre
2017-04-01
The transport of vast amounts of large wood (LW) in water courses can considerably aggravate hazardous situations during flood events and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors that are difficult to assess, and the prediction of transported LW volumes is difficult. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The database of existing empirical wood load equations is, however, limited. The objective of the present study is to test and refine existing empirical equations, and to derive new relationships to reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics, such as catchment size, forested area, forested stream length, water discharge, sediment load, or Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models for potential LW recruitment.
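A sketch of fitting a multiplicative empirical relation between wood load and catchment characteristics in log space; the variables chosen and the data are invented stand-ins for the Swiss event data.

```python
import numpy as np

# Hypothetical event data: transported large-wood volume (m^3) against
# catchment area (km^2) and forested stream length (km).
area = np.array([12.0, 25.0, 40.0, 85.0, 150.0, 320.0])
forest_len = np.array([4.0, 9.0, 11.0, 30.0, 45.0, 110.0])
lw_volume = np.array([30.0, 90.0, 120.0, 400.0, 700.0, 2100.0])

# Multiplicative relation LW = a * A^b1 * Lf^b2, fitted in log space.
X = np.column_stack([np.ones(area.size), np.log(area), np.log(forest_len)])
coef, *_ = np.linalg.lstsq(X, np.log(lw_volume), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"LW ≈ {a:.2f} * A^{b1:.2f} * Lf^{b2:.2f}")
```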
A new concept in seismic landslide hazard analysis for practical application
NASA Astrophysics Data System (ADS)
Lee, Chyi-Tyi
2017-04-01
A seismic landslide hazard model can be constructed using a deterministic approach (Jibson et al., 2000) or a statistical approach (Lee, 2014). Both approaches yield landslide spatial probability under a certain return-period earthquake. In the statistical approach, our recent study found that there are common patterns among different landslide susceptibility models of the same region. The common susceptibility may reflect the relative stability of slopes in a region; higher susceptibility indicates lower stability. Using the common susceptibility together with an earthquake event landslide inventory and a map of topographically corrected Arias intensity, we can build the relationship among probability of failure, Arias intensity and susceptibility. This relationship can immediately be used to construct a seismic landslide hazard map for the region in which the empirical relationship was built. If the common susceptibility model is further normalized and the empirical relationship is built with the normalized susceptibility, then the empirical relationship may be applied in practice to different regions with similar tectonic environments and climate conditions. This is feasible when a region has no existing earthquake-induced landslide data to train the susceptibility model and to build the relationship. It is worth mentioning that a rain-induced landslide susceptibility model has a common pattern similar to earthquake-induced landslide susceptibility in the same region, and is usable for building the relationship with an earthquake event landslide inventory and a map of Arias intensity. These points will be introduced with examples in the meeting.
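One convenient way to encode the relationship among failure probability, susceptibility and Arias intensity is a logistic regression trained on an event landslide inventory; the sketch below does this on synthetic grid cells and is an assumption about functional form, not the author's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
susceptibility = rng.uniform(0, 1, n)            # common susceptibility index
log_arias = rng.normal(0.0, 0.8, n)              # log10 Arias intensity (m/s)

# Synthetic "truth" used only to generate a toy event inventory.
logit = -4.0 + 3.5 * susceptibility + 2.0 * log_arias
failed = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(
    np.column_stack([susceptibility, log_arias]), failed)
p_fail = model.predict_proba([[0.8, 0.5]])[0, 1]
print("P(failure | susceptibility=0.8, log Ia=0.5) =", round(p_fail, 2))
```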
NASA Astrophysics Data System (ADS)
Sergeeva, Tatiana F.; Moshkova, Albina N.; Erlykina, Elena I.; Khvatova, Elena M.
2016-04-01
Creatine kinase is a key enzyme of energy metabolism in the brain. Cytoplasmic and mitochondrial creatine kinase isoenzymes are known. Mitochondrial creatine kinase exists as a mixture of two oligomeric forms, dimer and octamer. The aim of this investigation was to study the catalytic properties of cytoplasmic and mitochondrial creatine kinase and to use the method of empirical dependences for the possible prediction of the activity of these enzymes in cerebral ischemia. Ischemia was found to be accompanied by changes in the activity of creatine kinase isoenzymes and in the oligomeric state of the mitochondrial isoform. Multiple regression models were constructed that permit the activity of the creatine kinase system in cerebral ischemia to be studied by a calculating method. Therefore, the mathematical method of empirical dependences can be applied for estimation and prediction of the functional state of the brain from the activity of creatine kinase isoenzymes in cerebral ischemia.
Characterization of Nanoscale Gas Transport in Shale Formations
NASA Astrophysics Data System (ADS)
Chai, D.; Li, X.
2017-12-01
Non-Darcy flow behavior is commonly observed in the nano-sized pores of the matrix. Most existing gas flow models characterize non-Darcy flow by empirical or semi-empirical methods without considering the real gas effect. In this paper, a novel layered model with physical meaning is proposed for both ideal and real gas transport in nanopores. It can be further coupled with hydraulic fracturing models and consequently benefit storage evaluation and production prediction for shale gas recovery. It is hypothesized that a nanotube can be divided into a central circular zone, where viscous flow dominates due to intermolecular collisions, and an outer annular zone, where Knudsen diffusion dominates because of collisions between molecules and the wall. The flux is derived by integrating the two zones across a virtual boundary. Subsequently, the model is modified by incorporating slip effect, real gas effect, porosity distribution, and tortuosity. Meanwhile, a multi-objective optimization method (MOP) is applied to assist the validation of the analytical model by searching for fitting parameters that are highly localized and contain significant uncertainties. The apparent permeability is finally derived and analyzed with various impact factors. The developed nanoscale gas transport model is well validated by flux data collected from both laboratory experiments and molecular simulations over the entire spectrum of flow regimes. Total molar flux decreases by as much as 43.8% when the real gas effect is considered in the model, and this effect is more significant as pore size shrinks. Knudsen diffusion accounts for more than 60% of the total gas flux when pressure is lower than 0.2 MPa and pore size is smaller than 50 nm. Overall, the apparent permeability is found to decrease with pressure, though it changes little when pressure is higher than 5.0 MPa and pore size is larger than 50 nm.
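A minimal sketch of the layered-tube idea: a viscous (Hagen-Poiseuille) core flux plus a Knudsen annulus flux, combined here with a hypothetical Knudsen-number weighting rather than the paper's calibrated partition; the textbook formulas for the two contributions are standard.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def knudsen_number(p, T, d_pore, d_mol=0.38e-9):
    """Kn = mean free path / pore diameter (hard-sphere estimate;
    d_mol is roughly the kinetic diameter of methane)."""
    k_B = 1.380649e-23
    mfp = k_B * T / (np.sqrt(2) * np.pi * d_mol ** 2 * p)
    return mfp / d_pore

def molar_flux(p, dpdx, T, r, mu, M):
    """Viscous core flux plus Knudsen annulus flux (mol m^-2 s^-1).
    The Kn-based area weighting w is a hypothetical closure."""
    kn = knudsen_number(p, T, 2 * r)
    w = 1.0 / (1.0 + kn)                          # viscous fraction of the area
    j_visc = (r ** 2 / (8 * mu)) * (p / (R * T)) * dpdx   # Hagen-Poiseuille
    d_k = (2 * r / 3) * np.sqrt(8 * R * T / (np.pi * M))  # Knudsen diffusivity
    j_knud = (d_k / (R * T)) * dpdx
    return w * j_visc + (1 - w) * j_knud

# Methane in a 10 nm pore at 1 MPa.
print(molar_flux(p=1e6, dpdx=1e9, T=350.0, r=5e-9, mu=1.2e-5, M=0.016))
```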
Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat
2017-01-01
To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrubs sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha⁻¹ yr⁻¹ for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing down the critical load range for mountain hay meadows to 10-15 kg N ha⁻¹ yr⁻¹. Copyright © 2016 Elsevier Ltd. All rights reserved.
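A non-Bayesian stand-in for the change-point estimation, grid-searching the deposition value above which a flat richness response starts to decline, on synthetic data; the article's Bayesian formulation additionally propagates uncertainty and adjusts for the confounders listed above.

```python
import numpy as np

def fit_change_point(n_dep, richness):
    """Least-squares grid search for a flat-then-linear change point."""
    best = (np.inf, None)
    for cp in np.linspace(n_dep.min() + 1, n_dep.max() - 1, 200):
        excess = np.maximum(n_dep - cp, 0.0)          # deposition above cp
        X = np.column_stack([np.ones_like(n_dep), excess])
        beta, *_ = np.linalg.lstsq(X, richness, rcond=None)
        sse = np.sum((richness - X @ beta) ** 2)
        if sse < best[0]:
            best = (sse, cp)
    return best[1]

rng = np.random.default_rng(4)
n_dep = rng.uniform(2, 30, 300)                       # kg N ha^-1 yr^-1
richness = 25 - 0.8 * np.maximum(n_dep - 12, 0) + rng.normal(0, 2, 300)
print("estimated critical load:", round(fit_change_point(n_dep, richness), 1))
```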
2009-07-01
In section 2, the propellant gun is assessed from existing experimental data. In section 3, circuit analysis is used to model the railgun with a 2 launch...
Liu, Qinli; Ding, Xin; Du, Bowen; Fang, Tao
2017-11-02
Supercritical water oxidation (SCWO), as a novel and efficient technology, has been applied to wastewater treatment processes. The use of phase equilibrium data to optimize process parameters can offer theoretical guidance for designing SCWO processes and reducing equipment and operating costs. In this work, high-pressure phase equilibrium data for aromatic compound + water systems and inorganic compound + water systems are given. Moreover, thermodynamic models, equations of state (EOS), and empirical and semi-empirical approaches are summarized and evaluated. This paper also lists the existing problems in multi-phase equilibria and solubility studies of aromatic compounds and inorganic compounds in sub- and supercritical water.
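As an example of the EOS class reviewed, here is a compact Peng-Robinson compressibility calculation; the water critical constants are standard, but modeling the SCWO mixtures discussed above would additionally require mixing rules not shown in this sketch.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def pr_Z(T, P, Tc, Pc, omega):
    """Compressibility factor from the Peng-Robinson EOS
    (largest real root, i.e. the vapour-like phase)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    roots = np.roots([1.0, -(1 - B), A - 3 * B ** 2 - 2 * B,
                      -(A * B - B ** 2 - B ** 3)])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return real.max()

# Pure water near supercritical conditions (Tc = 647.1 K, Pc = 22.06 MPa).
print("Z =", round(pr_Z(T=673.0, P=25e6, Tc=647.1, Pc=22.06e6, omega=0.344), 3))
```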
Not just a theory--the utility of mathematical models in evolutionary biology.
Servedio, Maria R; Brandvain, Yaniv; Dhole, Sumit; Fitzpatrick, Courtney L; Goldberg, Emma E; Stern, Caitlin A; Van Cleve, Jeremy; Yeh, D Justin
2014-12-01
Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology.
Fractional Ornstein-Uhlenbeck for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Ting, Chee-Ming
2014-07-01
This paper studies the Ornstein-Uhlenbeck model incorporating long-memory stochastic volatility, known as the fractional Ornstein-Uhlenbeck model. The existence of long-range dependence in the index prices of FTSE Bursa Malaysia KLCI is measured by the Hurst exponent. The empirical distribution of unobserved volatility is estimated using the particle filtering method. The performance of the fractional Ornstein-Uhlenbeck model was compared with that of the standard Ornstein-Uhlenbeck process. The mean square errors of the fractional Ornstein-Uhlenbeck model indicated that it describes index prices better than the standard Ornstein-Uhlenbeck process.
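A minimal simulation sketch of a fractional Ornstein-Uhlenbeck path: fractional Gaussian noise is generated exactly via a Cholesky factorization of its covariance, then fed into an Euler scheme. Parameters are illustrative, not those estimated for the KLCI.

```python
import numpy as np

def fgn_increments(n, hurst, dt, rng):
    """Exact fractional Gaussian noise via Cholesky of its covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k - 1) ** (2 * hurst) + (k + 1) ** (2 * hurst)
                   - 2 * k ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return dt ** hurst * (L @ rng.normal(size=n))

# Euler scheme for dX = theta * (mu - X) dt + sigma dB_H.
rng = np.random.default_rng(5)
n, dt = 500, 1 / 252
theta, mu, sigma, H = 2.0, 7.0, 0.3, 0.7              # illustrative values
dB = fgn_increments(n, H, dt, rng)
X = np.empty(n + 1)
X[0] = mu
for i in range(n):
    X[i + 1] = X[i] + theta * (mu - X[i]) * dt + sigma * dB[i]
print("simulated log-price range:", X.min(), X.max())
```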
Testing a theory of aircraft noise annoyance: a structural equation analysis.
Kroesen, Maarten; Molin, Eric J E; van Wee, Bert
2008-06-01
Previous research has stressed the relevance of nonacoustical factors in the perception of aircraft noise. However, it is largely empirically driven and lacks a sound theoretical basis. In this paper, a theoretical model which explains noise annoyance based on the psychological stress theory is empirically tested. The model is estimated by applying structural equation modeling based on data from residents living in the vicinity of Amsterdam Airport Schiphol in The Netherlands. The model provides a good model fit and indicates that concern about the negative health effects of noise and pollution, perceived disturbance, and perceived control and coping capacity are the most important variables that explain noise annoyance. Furthermore, the model provides evidence for the existence of two reciprocal relationships between (1) perceived disturbance and noise annoyance and (2) perceived control and coping capacity and noise annoyance. Lastly, the model yielded two unexpected results. Firstly, the variables noise sensitivity and fear related to the noise source were unable to explain additional variance in the endogenous variables of the model and were therefore excluded from the model. And secondly, the size of the total effect of noise exposure on noise annoyance was relatively small. The paper concludes with some recommended directions for further research.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Molecular Oxygen in the Thermosphere: Issues and Measurement Strategies
NASA Astrophysics Data System (ADS)
Picone, J. M.; Hedin, A. E.; Drob, D. P.; Meier, R. R.; Bishop, J.; Budzien, S. A.
2002-05-01
We review the state of empirical knowledge regarding the distribution of molecular oxygen in the lower thermosphere (100-200 km), as embodied by the new NRLMSISE-00 empirical atmospheric model, its predecessors, and the underlying databases. For altitudes above 120 km, the two major classes of data (mass spectrometer and solar ultraviolet [UV] absorption) disagree significantly regarding the magnitude of the O2 density and the dependence on solar activity. As a result, the addition of the Solar Maximum Mission (SMM) data set (based on solar UV absorption) to the NRLMSIS database has directly impacted the new model, increasing the complexity of the model's formulation and generally reducing the thermospheric O2 density relative to MSISE-90. Beyond interest in the thermosphere itself, this issue materially affects detailed models of ionospheric chemistry and dynamics as well as modeling of the upper atmospheric airglow. Because these are key elements of both experimental and operational systems which measure and forecast the near-Earth space environment, we present strategies for augmenting the database through analysis of existing data and through future measurements in order to resolve this issue.
Modelling soil erosion at European scale: towards harmonization and reproducibility
NASA Astrophysics Data System (ADS)
Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.
2015-02-01
Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the prediction value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is here proposed. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
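The RUSLE core is a simple product of factors; below is a one-cell sketch including a stoniness correction of the kind the extension adds. All factor values are invented, and the real implementation ensembles several rainfall-erosivity equations to obtain R.

```python
# Minimal RUSLE-type computation for one grid cell (values invented).
R = 950.0    # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1
K = 0.030    # soil erodibility, t ha h ha^-1 MJ^-1 mm^-1
LS = 1.8     # slope length/steepness factor (-)
C = 0.12     # cover management factor (-)
P = 1.0      # support practice factor (-)
St = 0.85    # stoniness correction (-), the added factor

soil_loss = R * K * LS * C * P * St   # t ha^-1 yr^-1
print(f"estimated soil loss: {soil_loss:.1f} t/ha/yr")
```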
Mager, R; Balzereit, C; Gust, K; Hüsch, T; Herrmann, T; Nagele, U; Haferkamp, A; Schilling, D
2016-05-01
Passive removal of stone fragments in the irrigation stream is one of the characteristics of continuous-flow PCNL instruments. So far, the physical principle of this so-called vacuum cleaner effect has not been fully understood. The aim of the study was to empirically prove the existence of the vacuum cleaner effect, to develop a physical hypothesis and to generate a mathematical model for this phenomenon. In an empirical approach, common low-pressure PCNL instruments and conventional PCNL sheaths were tested using an in vitro model. Flow characteristics were visualized by coloring of the irrigation fluid. The influence of irrigation pressure, sheath diameter, sheath design, nephroscope design and position of the nephroscope was assessed. Experiments were digitally recorded for further slow-motion analysis to deduce a physical model. In each tested nephroscope design, we could observe the vacuum cleaner effect. Increasing irrigation pressure and reducing the sheath cross section sustained the effect. Slow-motion analysis of the colored flow revealed a synergism of two effects causing suction and transportation of the stone. For the first time, our model showed a flow reversal in the sheath as an integral part of the origin of stone transportation during the vacuum cleaner effect. The application of Bernoulli's equation provided the explanation of these effects and confirmed our experimental results. We widen the understanding of PCNL with a conclusive physical model explaining the fluid mechanics of the vacuum cleaner effect.
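The Bernoulli argument can be made concrete with continuity plus Bernoulli's equation across the narrowed annulus between nephroscope and sheath; the geometry and flow values below are invented for illustration and are not the authors' measurements.

```python
# Bernoulli-based estimate of the suction generated where flow passes the
# narrowed section around the nephroscope (all numbers invented).
rho = 1000.0                     # irrigation fluid density, kg/m^3
q = 0.5e-6                       # flow, m^3/s (about 30 mL/min)
a_sheath = 7.0e-6                # open sheath cross-section, m^2
a_gap = 2.0e-6                   # narrowed annulus around the scope, m^2

v1 = q / a_sheath                # speed in the open sheath
v2 = q / a_gap                   # continuity: the jet speeds up in the gap
dp = 0.5 * rho * (v2 ** 2 - v1 ** 2)   # Bernoulli pressure drop
print(f"pressure drop in the gap: {dp:.1f} Pa")
```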
Hayes, Brett K; Heit, Evan; Swendsen, Haruka
2010-03-01
Inductive reasoning entails using existing knowledge or observations to make predictions about novel cases. We review recent findings in research on category-based induction as well as theoretical models of these results, including similarity-based models, connectionist networks, an account based on relevance theory, Bayesian models, and other mathematical models. A number of touchstone empirical phenomena that involve taxonomic similarity are described. We also examine phenomena involving more complex background knowledge about premises and conclusions of inductive arguments and the properties referenced. Earlier models are shown to give a good account of similarity-based phenomena but not knowledge-based phenomena. Recent models that aim to account for both similarity-based and knowledge-based phenomena are reviewed and evaluated. Among the most important new directions in induction research are a focus on induction with uncertain premise categories, the modeling of the relationship between inductive and deductive reasoning, and examination of the neural substrates of induction. A common theme in both the well-established and emerging lines of induction research is the need to develop well-articulated and empirically testable formal models of induction. Copyright © 2010 John Wiley & Sons, Ltd.
Kyogoku, Daisuke; Sota, Teiji
2017-05-17
Interspecific mating interactions, or reproductive interference, can affect population dynamics, species distribution and abundance. Previous population dynamics models have assumed that the impact of frequency-dependent reproductive interference depends on the relative abundances of species. However, this assumption could be an oversimplification inappropriate for making quantitative predictions. Therefore, a more general model to forecast population dynamics in the presence of reproductive interference is required. Here we developed a population dynamics model to describe the absolute density dependence of reproductive interference, which appears likely when encounter rate between individuals is important. Our model (i) can produce diverse shapes of isoclines depending on parameter values and (ii) predicts weaker reproductive interference when absolute density is low. These novel characteristics can create conditions where coexistence is stable and independent from the initial conditions. We assessed the utility of our model in an empirical study using an experimental pair of seed beetle species, Callosobruchus maculatus and Callosobruchus chinensis. Reproductive interference became stronger with increasing total beetle density even when the frequencies of the two species were kept constant. Our model described the effects of absolute density and showed a better fit to the empirical data than the existing model overall.
Hollnagel, H; Malterud, K
1995-12-01
The study was designed to present and apply theoretical and empirical knowledge for the construction of a clinical model intended to shift the attention of the general practitioner from objective risk factors to self-assessed health resources in male and female patients. The method was a review, discussion and analysis of selected theoretical models of personal health resources, assessing existing theories according to their emphasis on self-assessed vs. doctor-assessed health resources, specific health resources vs. life and coping in general, abstract vs. clinically applicable theory, and whether a gender perspective is explicitly included. Relevant theoretical models of health and coping (salutogenesis, coping and social support, control/demand, locus of control, health belief model, quality of life), and the perspective of the underprivileged Other (critical theory, feminist standpoint theory, the patient-centred clinical method) were presented and assessed. Components from Antonovsky's salutogenetic perspective and McWhinney's patient-centred clinical method, supported by gender perspectives, were integrated into the clinical model presented here. General practitioners are recommended to shift their attention from objective risk factors to self-assessed health resources by means of this clinical model. The relevance and feasibility of the model should be explored in empirical research.
Evaluation of cancer mortality in a cohort of workers exposed to low-level radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lea, C.S.
1995-12-01
The purpose of this dissertation was to re-analyze existing data to explore methodologic approaches that may determine whether excess cancer mortality in the ORNL cohort can be explained by time-related factors not previously considered; grouping of cancer outcomes; selection bias due to choice of method selected to incorporate an empirical induction period; or the type of statistical model chosen.
Moral judgment as information processing: an integrative review.
Guglielmo, Steve
2015-01-01
How do humans make moral judgments about others' behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.
Managerial Career Patterns: A Review of the Empirical Evidence
ERIC Educational Resources Information Center
Vinkenburg, Claartje J.; Weber, Torsten
2012-01-01
Despite the ubiquitous presence of the term "career patterns" in the discourse about careers, the existing empirical evidence on (managerial) career patterns is rather limited. From this literature review of 33 published empirical studies of managerial and similar professional career patterns found in electronic bibliographic databases, it is…
NASA Technical Reports Server (NTRS)
He, Maosheng; Vogt, Joachim; Luehr, Hermann; Sorbalo, Eugen; Blagau, Adrian; Le, Guan; Lu, Gang
2012-01-01
Ten years of CHAMP magnetic field measurements are integrated into MFACE, a model of field-aligned currents (FACs) using empirical orthogonal functions (EOFs). EOF1 gives the basic Region-1/Region-2 pattern varying mainly with the interplanetary magnetic field Bz component. EOF2 captures separately the cusp current signature and By-related variability. Compared to existing models, MFACE yields significantly better spatial resolution, reproduces typically observed FAC thickness and intensity, improves on the magnetic local time (MLT) distribution, and gives the seasonal dependence of FAC latitudes and the NBZ current signature. MFACE further reveals systematic dependences on By, including 1) Region-1/Region-2 topology modifications around noon; 2) an imbalance between upward and downward maximum current density; and 3) the MLT location of the Harang discontinuity. Furthermore, our procedure allows quantifying response times of FACs to solar wind driving at the bow shock nose: we obtain lags of 20 minutes for FAC density and 35-40 minutes for FAC latitude.
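As a rough illustration of the EOF machinery behind such a model, the sketch below extracts EOFs from a centered data matrix via SVD; the data here are random stand-ins, not CHAMP measurements:

```python
import numpy as np

# Hypothetical data matrix: rows = time samples, columns = latitude/MLT grid.
# The real MFACE model uses ten years of CHAMP field-aligned current estimates.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 240))   # stand-in for FAC observations
X -= X.mean(axis=0)                    # remove the temporal mean

# EOFs are the right singular vectors of the centered data matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt                              # eofs[0] plays the role of EOF1
pcs = U * s                            # principal-component time series
explained = s**2 / np.sum(s**2)        # variance fraction per mode
print(explained[:3])
```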
Modelled drift patterns of fish larvae link coastal morphology to seabird colony distribution
Sandvik, Hanno; Barrett, Robert T.; Erikstad, Kjell Einar; Myksvoll, Mari S.; Vikebø, Frode; Yoccoz, Nigel G.; Anker-Nilssen, Tycho; Lorentsen, Svein-Håkon; Reiertsen, Tone K.; Skarðhamar, Jofrid; Skern-Mauritzen, Mette; Systad, Geir Helge
2016-01-01
Colonial breeding is an evolutionary puzzle, as the benefits of breeding in high densities are still not fully explained. Although the dynamics of existing colonies are increasingly understood, few studies have addressed the initial formation of colonies, and empirical tests are rare. Using a high-resolution larval drift model, we here document that the distribution of seabird colonies along the Norwegian coast can be explained by variations in the availability and predictability of fish larvae. The modelled variability in concentration of fish larvae is, in turn, predicted by the topography of the continental shelf and coastline. The advection of fish larvae along the coast translates small-scale topographic characteristics into a macroecological pattern, viz. the spatial distribution of top-predator breeding sites. Our findings provide empirical corroboration of the hypothesis that seabird colonies are founded in locations that minimize travel distances between breeding and foraging locations, thereby enabling optimal foraging by central-place foragers. PMID:27173005
New Physical Algorithms for Downscaling SMAP Soil Moisture
NASA Astrophysics Data System (ADS)
Sadeghi, M.; Ghafari, E.; Babaeian, E.; Davary, K.; Farid, A.; Jones, S. B.; Tuller, M.
2017-12-01
The NASA Soil Moisture Active Passive (SMAP) mission provides new means for estimation of surface soil moisture at the global scale. However, for many hydrological and agricultural applications the spatial SMAP resolution is too low. To address this scale issue, we fused SMAP data with MODIS observations to generate soil moisture maps at 1-km spatial resolution. In the course of this study we improved several existing empirical algorithms and introduced a new physical approach for downscaling SMAP data. The universal triangle/trapezoid model was applied to relate soil moisture to optical/thermal observations such as NDVI, land surface temperature and surface reflectance. These algorithms were evaluated with in situ data measured at 5-cm depth. Our results demonstrate that downscaling SMAP soil moisture data based on physical indicators of soil moisture derived from the MODIS satellite leads to higher accuracy than that achievable with empirical downscaling algorithms. Keywords: soil moisture, microwave data, downscaling, MODIS, triangle/trapezoid model.
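The sketch below shows one common empirical form of the triangle/trapezoid idea: a low-order polynomial regression of coarse soil moisture on NDVI and land surface temperature, applied afterwards at fine-scale pixels. The variable names, the polynomial form, and all values are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

# Hypothetical inputs: coarse SMAP soil moisture and MODIS NDVI / land surface
# temperature aggregated to the SMAP grid (synthetic values for illustration).
rng = np.random.default_rng(1)
sm_coarse = rng.uniform(0.05, 0.40, 500)
ndvi_c = rng.uniform(0.1, 0.8, 500)
lst_c = rng.uniform(290.0, 320.0, 500)

# A regression surface over the NDVI-LST triangle/trapezoid space
A = np.column_stack([np.ones_like(ndvi_c), ndvi_c, lst_c,
                     ndvi_c**2, lst_c**2, ndvi_c * lst_c])
coef, *_ = np.linalg.lstsq(A, sm_coarse, rcond=None)

# Apply the fitted surface at a 1-km MODIS pixel to get downscaled moisture
ndvi_f, lst_f = 0.55, 305.0
x = np.array([1.0, ndvi_f, lst_f, ndvi_f**2, lst_f**2, ndvi_f * lst_f])
print(float(x @ coef))
```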
Exploring the effect of power law social popularity on language evolution.
Gong, Tao; Shuai, Lan
2014-01-01
We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.
NASA Astrophysics Data System (ADS)
Wei, Haoyang
A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach that has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of that model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model works for both proportional and non-proportional loadings without empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation in representative volumes of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the proposed study.
Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker
2013-01-01
The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee – varroa mite – virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431
Equifinality in empirical studies of cultural transmission.
Barrett, Brendan J
2018-01-31
Cultural systems exhibit equifinal behavior - a single final state may be arrived at via different mechanisms and/or from different initial states. Potential for equifinality exists in all empirical studies of cultural transmission, including controlled experiments, observational field research, and computational simulations. Acknowledging and anticipating the existence of equifinality is important in empirical studies of social learning and cultural evolution; it helps us understand the limitations of analytical approaches and can improve our ability to predict the dynamics of cultural transmission. Here, I illustrate and discuss examples of equifinality in studies of social learning, and how certain experimental designs might be prone to it. I then review examples of equifinality discussed in the social learning literature, namely the use of s-shaped diffusion curves to discern individual from social learning, and the operational definitions and analytical approaches used in studies of conformist transmission. While equifinality exists to some extent in all studies of social learning, I make suggestions for how to address instances of it, with an emphasis on using data simulation and methodological verification alongside modern statistical approaches that emphasize prediction and model comparison. In cases where evaluated learning mechanisms are equifinal due to non-methodological factors, I suggest that this is not always a problem if it helps us predict cultural change. In some cases, equifinal learning mechanisms might offer insight into how individual learning, social learning strategies and other endogenous social factors might be important in structuring cultural dynamics and within- and between-group heterogeneity.
An empirical perspective for understanding climate change impacts in Switzerland
Henne, Paul; Bigalke, Moritz; Büntgen, Ulf; Colombaroli, Daniele; Conedera, Marco; Feller, Urs; Frank, David; Fuhrer, Jürg; Grosjean, Martin; Heiri, Oliver; Luterbacher, Jürg; Mestrot, Adrien; Rigling, Andreas; Rössler, Ole; Rohr, Christian; Rutishauser, This; Schwikowski, Margit; Stampfli, Andreas; Szidat, Sönke; Theurillat, Jean-Paul; Weingartner, Rolf; Wilcke, Wolfgan; Tinner, Willy
2018-01-01
Planning for the future requires a detailed understanding of how climate change affects a wide range of systems at spatial scales that are relevant to humans. Understanding of climate change impacts can be gained from observational and reconstruction approaches and from numerical models that apply existing knowledge to climate change scenarios. Although modeling approaches are prominent in climate change assessments, observations and reconstructions provide insights that cannot be derived from simulations alone, especially at local to regional scales where climate adaptation policies are implemented. Here, we review the wealth of understanding that emerged from observations and reconstructions of ongoing and past climate change impacts in Switzerland, with wider applicability in Europe. We draw examples from hydrological, alpine, forest, and agricultural systems, which are of paramount societal importance, and are projected to undergo important changes by the end of this century. For each system, we review existing model-based projections, present what is known from observations, and discuss how empirical evidence may help improve future projections. A particular focus is given to better understanding thresholds, tipping points and feedbacks that may operate on different time scales. Observational approaches provide the grounding in evidence that is needed to develop local to regional climate adaptation strategies. Our review demonstrates that observational approaches should ideally have a synergistic relationship with modeling in identifying inconsistencies in projections as well as avenues for improvement. They are critical for uncovering unexpected relationships between climate and agricultural, natural, and hydrological systems that will be important to society in the future.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It is recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
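A small simulation makes the point concrete. The sketch below estimates the test-retest correlation of a simple ratio under assumed coefficients of variation and an assumed between-trial correlation; numerator and denominator are treated as independent here, which is a simplification of the study's setup:

```python
import numpy as np

def ratio_reliability(cv_num, cv_den, r_between=0.9, n=10000, seed=0):
    """Test-retest correlation of a simple ratio X/Y under illustrative
    assumptions: positive component scores with given coefficients of
    variation (cv) and a common between-trial correlation."""
    rng = np.random.default_rng(seed)
    def two_trials(mean, cv):
        sd = cv * mean
        cov = [[sd**2, r_between * sd**2], [r_between * sd**2, sd**2]]
        return rng.multivariate_normal([mean, mean], cov, size=n).T
    x1, x2 = two_trials(100.0, cv_num)   # numerator, trials 1 and 2
    y1, y2 = two_trials(50.0, cv_den)    # denominator, trials 1 and 2
    return np.corrcoef(x1 / y1, x2 / y2)[0, 1]

# Reliable components do not guarantee one fixed reliability for the ratio:
# the split of variation between numerator and denominator matters.
print(ratio_reliability(0.05, 0.20), ratio_reliability(0.20, 0.05))
```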
Sperm economy between female mating frequency and male ejaculate allocation.
Abe, Jun; Kamimura, Yoshitaka
2015-03-01
Why females of many species mate multiply is a major question in evolutionary biology. Furthermore, if females accept matings more than once, ejaculates from different males compete for fertilization (sperm competition), which confronts males with the decision of how to allocate their reproductive resources to each mating event. Although most existing models have examined either female mating frequency or male ejaculate allocation while assuming fixed levels of the opposite sex's strategies, these strategies are likely to coevolve. To investigate how the interaction of the two sexes' strategies is influenced by the level of sperm limitation in the population, we developed models in which females adjust their number of allowable matings and males allocate their ejaculate in each mating. Our model predicts that females mate only once or less than once at an even sex ratio or in an extremely female-biased condition, because of female resistance and sperm limitation in the population, respectively. However, in a moderately female-biased condition, males favor partitioning their reproductive budgets across many females, whereas females favor multiple matings to obtain sufficient sperm, which contradicts the predictions of most existing models. We discuss our model's predictions and relationships with the existing models and demonstrate applications for empirical findings.
Empirical evidence for multi-scaled controls on wildfire size distributions in California
NASA Astrophysics Data System (ADS)
Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.
2014-12-01
Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are by definition purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire-size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
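For reference, the standard continuous maximum-likelihood estimator used in this kind of fire-size analysis (Newman 2005; Clauset et al. 2009) is short enough to sketch; the synthetic sizes below stand in for an actual fire atlas:

```python
import numpy as np

def powerlaw_mle(sizes, xmin):
    """Continuous power-law exponent by maximum likelihood:
    alpha = 1 + n / sum(ln(x / xmin)). Choosing xmin mirrors fitting
    only a middle range of fire sizes."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

# Illustrative synthetic "fire sizes" drawn from a power law via inverse CDF
rng = np.random.default_rng(7)
alpha_true, xmin = 1.6, 100.0
u = rng.uniform(size=5000)
sizes = xmin * (1 - u) ** (-1.0 / (alpha_true - 1.0))
print(powerlaw_mle(sizes, xmin))   # should be close to 1.6
```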
A study about the existence of the leverage effect in stochastic volatility models
NASA Astrophysics Data System (ADS)
Florescu, Ionuţ; Pãsãricã, Cristian Gabriel
2009-02-01
The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes the risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the emergence of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the observed leverage effect. We derive conditions for the emergence of the leverage effect in both of these stochastic volatility models. We exemplify using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
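A minimal Euler simulation of a generic square-root stochastic volatility model illustrates the mechanism: a negative correlation rho between the two Brownian drivers produces the leverage effect. Parameter values are illustrative, and this is not one of the paper's two model specifications:

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt = 100_000, 1 / 252
kappa, theta, xi, rho = 3.0, 0.04, 0.3, -0.7   # illustrative values

v = np.empty(n); v[0] = theta
ret = np.empty(n - 1)
z1 = rng.standard_normal(n - 1)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n - 1)

for t in range(n - 1):
    ret[t] = np.sqrt(max(v[t], 0.0) * dt) * z1[t]            # asset return
    v[t + 1] = v[t] + kappa * (theta - v[t]) * dt \
               + xi * np.sqrt(max(v[t], 0.0) * dt) * z2[t]   # variance path

# Negative correlation between returns and subsequent variance changes,
# i.e., falling returns coincide with rising volatility
print(np.corrcoef(ret, np.diff(v))[0, 1])
```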
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
NASA Astrophysics Data System (ADS)
Shanmugam, Palanisamy; Varunan, Theenathayalan; Nagendra Jaiganesh, S. N.; Sahay, Arvind; Chauhan, Prakash
2016-06-01
Prediction of the curve of the absorption coefficient of colored dissolved organic matter (CDOM) and differentiation between marine and terrestrially derived CDOM pools in coastal environments are hampered by a high degree of variability in the composition and concentration of CDOM, uncertainties in retrieved remote sensing reflectance and the weak signal-to-noise ratio of space-borne instruments. In the present study, a hybrid model is presented along with empirical methods to remotely determine the amount and type of CDOM in coastal and inland water environments. A large set of in-situ data collected on several oceanographic cruises and field campaigns from different regional waters was used to develop empirical methods for studying the distribution and dynamics of CDOM, dissolved organic carbon (DOC) and salinity. Our validation analyses demonstrated that the hybrid model is a better descriptor of CDOM absorption spectra than the existing models. Additional spectral slope parameters included in the present model to differentiate between terrestrially derived and marine CDOM pools make a substantial improvement over the existing models. Empirical algorithms to derive CDOM, DOC and salinity from remote sensing reflectance data retrieved these products with low mean relative percent differences against a large set of in-situ measurements. The performance of these algorithms was further assessed using three hyperspectral HICO images acquired simultaneously with our field measurements in productive coastal and lagoon waters in the southeastern part of India. The validation match-ups of CDOM and salinity showed good agreement between HICO retrievals and field observations. Further analyses of these data showed significant temporal changes in CDOM and phytoplankton absorption coefficients with a distinct phase shift between these two products. Healthy phytoplankton cells and macrophytes were recognized to contribute directly to the autochthonous production of colored humic-like substances in variable amounts within the lagoon system, although CDOM content was also partly derived from river run-off and wetland discharges as well as from conservative mixing of different water masses. Spatial and temporal maps of CDOM, DOC and salinity products provided an interesting insight into CDOM dynamics and conservative behavior within the lagoon and its extension into coastal and offshore waters of the Bay of Bengal. The hybrid model and empirical algorithms presented here can be useful for assessing CDOM, DOC and salinity fields and their changes in response to increasing runoff, nutrient pollution, anthropogenic activities, hydrographic variations and climate oscillations.
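The conventional single-exponential CDOM absorption model that such hybrid models extend can be sketched and fitted in a few lines; the absorption values below are synthetic, and the spectral slope S is recovered by log-linear regression:

```python
import numpy as np

# Standard single-exponential CDOM model: a(lam) = a(lam0) * exp(-S*(lam - lam0)).
# The paper's hybrid model adds further slope terms; this sketch fits only S.
lam0 = 440.0
lam = np.arange(350.0, 551.0, 10.0)
rng = np.random.default_rng(3)
a_true, S_true = 0.25, 0.018   # illustrative values (m^-1 and nm^-1)
a = a_true * np.exp(-S_true * (lam - lam0)) * np.exp(rng.normal(0, 0.02, lam.size))

A = np.column_stack([np.ones_like(lam), -(lam - lam0)])
coef, *_ = np.linalg.lstsq(A, np.log(a), rcond=None)
print(np.exp(coef[0]), coef[1])   # approximately 0.25 and 0.018
```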
Field investigation of the drift shadow
Su, G.W.; Kneafsey, T.J.; Ghezzehei, T.A.; Cook, P.J.; Marshall, B.D.
2006-01-01
The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project we plan to test the drift shadow concept through field investigations and compare our observations to simulations. Based on modeling studies, we have identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field-scale tests.
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
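A compact numerical sketch of the OGM growth equation of West et al., dm/dt = a*m^(3/4)*(1 - (m/M)^(1/4)), reproduces the sigmoidal growth curve; parameter values are illustrative:

```python
import numpy as np

def ogm_growth(m0, M, a, t_end, dt=0.01):
    """West et al. ontogenetic growth model,
    dm/dt = a*m^(3/4) * (1 - (m/M)^(1/4)),
    integrated with forward Euler. m0: birth mass, M: asymptotic adult mass."""
    steps = int(t_end / dt)
    m = np.empty(steps + 1)
    m[0] = m0
    for i in range(steps):
        m[i + 1] = m[i] + dt * a * m[i]**0.75 * (1.0 - (m[i] / M)**0.25)
    return m

# Illustrative parameters; the trajectory is sigmoidal and saturates at M
m = ogm_growth(m0=5.0, M=1000.0, a=1.0, t_end=50.0)
print(round(m[-1], 1))   # approaches 1000
```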
Error catastrophe and phase transition in the empirical fitness landscape of HIV
NASA Astrophysics Data System (ADS)
Hart, Gregory R.; Ferguson, Andrew L.
2015-03-01
We have translated clinical sequence databases of the p6 HIV protein into an empirical fitness landscape quantifying viral replicative capacity as a function of the amino acid sequence. We show that the viral population resides close to a phase transition in sequence space corresponding to an "error catastrophe" beyond which there is lethal accumulation of mutations. Our model predicts that the phase transition may be induced by drug therapies that elevate the mutation rate, or by forcing mutations at particular amino acids. Applying immune pressure to any combination of killer T-cell targets cannot induce the transition, providing a rationale for why the viral protein can exist close to the error catastrophe without sustaining fatal fitness penalties due to adaptive immunity.
Improved annotation with de novo transcriptome assembly in four social amoeba species.
Singh, Reema; Lawal, Hajara M; Schilde, Christina; Glöckner, Gernot; Barton, Geoffrey J; Schaap, Pauline; Cole, Christian
2017-01-31
Annotation of gene models and transcripts is a fundamental step in genome sequencing projects. Often this is performed with automated prediction pipelines, which can miss complex and atypical genes or transcripts. RNA sequencing (RNA-seq) data can aid the annotation with empirical data. Here we present de novo transcriptome assemblies generated from RNA-seq data in four Dictyostelid species: D. discoideum, P. pallidum, D. fasciculatum and D. lacteum. The assemblies were incorporated with existing gene models to determine corrections and improvements on a whole-genome scale. This is the first time this has been performed in these eukaryotic species. An initial de novo transcriptome assembly was generated by Trinity for each species and then refined with the Program to Assemble Spliced Alignments (PASA). Completeness and quality were assessed with the Benchmarking Universal Single-Copy Orthologs (BUSCO) and Transrate tools at each stage of the assemblies. The final datasets of 11,315-12,849 transcripts contained 5,610-7,712 updates and corrections to >50% of existing gene models, including changes to hundreds or thousands of protein products. Putative novel genes were also identified, and alternative splice isoforms were observed for the first time in P. pallidum, D. lacteum and D. fasciculatum. In taking a whole-transcriptome approach to genome annotation with empirical data, we have been able to enrich the annotations of four existing genome sequencing projects. In doing so we have identified updates to the majority of the gene annotations across all four species under study and found putative novel genes and transcripts that could be worthy of follow-up. The new transcriptome data we present here will be a valuable resource for genome curators in the Dictyostelia, and we propose this effective methodology for use in other genome annotation projects.
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
Evaluation of an Empirical Traction Equation for Forestry Tires
C.R. Vechinski; C.E. Johnson; R.L. Raper
1998-01-01
Variable load test data were used to evaluate the applicability of an existing forestry tire traction model for a new forestry tire and a worn tire of the same size, with and without tire chains, in a range of soil conditions. The clay and sandy soils ranged in moisture content from 17 to 28%. Soil bulk density varied between 1.1 and 1.4 g cm-3...
Forecasting volatility of SSEC in Chinese stock market using multifractal analysis
NASA Astrophysics Data System (ADS)
Wei, Yu; Wang, Peng
2008-03-01
In this paper, taking about 7 years' high-frequency data of the Shanghai Stock Exchange Composite Index (SSEC) as an example, we propose a daily volatility measure based on the multifractal spectrum of the high-frequency price variability within a trading day. An ARFIMA model is used to depict the dynamics of this multifractal volatility (MFV) measure. The one-day-ahead volatility forecasting performances of the MFV model and some other existing volatility models, such as the realized volatility model, stochastic volatility model and GARCH, are evaluated by the superior predictive ability (SPA) test. The empirical results show that under several loss functions, the MFV model obtains the best forecasting accuracy.
Quality and price--impact on patient satisfaction.
Pantouvakis, Angelos; Bouranta, Nancy
2014-01-01
The purpose of this paper is to synthesize existing quality-measurement models and apply them to healthcare by combining a Nordic service-quality model with an American service-performance model. Results are based on a questionnaire survey of 1,298 respondents. Service-quality dimensions were derived and related to satisfaction by employing a multinomial logistic model, which allows prediction and service improvement. Qualitative and empirical evidence indicates that customer satisfaction and service quality are multi-dimensional constructs, whose quality components, together with convenience and cost, influence the customer's overall satisfaction. The proposed model identifies important quality and satisfaction issues. It also enables transitions between different responses in different studies to be compared.
Modeling and estimating the jump risk of exchange rates: Applications to RMB
NASA Astrophysics Data System (ADS)
Wang, Yiming; Tong, Hanfei
2008-11-01
In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally small-scale innovation driven by the diffusion motion; the other is a large drop or rise generated by a Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
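The two-component decomposition described here is easy to sketch for a Merton-style special case with constant volatility (the SVDJ model additionally makes volatility stochastic and estimates parameters by MCMC); all values below are illustrative:

```python
import numpy as np

# Euler simulation of a jump-diffusion log exchange rate (constant-volatility
# sketch; not the paper's full SVDJ specification).
rng = np.random.default_rng(5)
n, dt = 2500, 1 / 250
mu, sigma = 0.02, 0.03          # drift and diffusion volatility
lam, mj, sj = 2.0, 0.0, 0.01    # jump intensity (per year), jump mean/sd

jumps = rng.poisson(lam * dt, n)                 # Poisson counting process
jump_sizes = rng.normal(mj, sj, n) * jumps       # occasional large drops/rises
diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_s = np.cumsum(diffusion + jump_sizes)        # log exchange rate path
print(np.exp(log_s[-1]))
```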
The effect of fiscal policy on diet, obesity and chronic disease: a systematic review.
Thow, Anne Marie; Jan, Stephen; Leeder, Stephen; Swinburn, Boyd
2010-08-01
To assess the effect of food taxes and subsidies on diet, body weight and health through a systematic review of the literature. We searched the English-language published and grey literature for empirical and modelling studies on the effects of monetary subsidies or taxes levied on specific food products on consumption habits, body weight and chronic conditions. Empirical studies dealt with an actual tax, while modelling studies predicted outcomes based on a hypothetical tax or subsidy. Twenty-four studies met the inclusion criteria: 13 were from the peer-reviewed literature and 11 were published online. There were 8 empirical and 16 modelling studies. Nine studies assessed the impact of taxes on food consumption only, 5 on consumption and body weight, 4 on consumption and disease and 6 on body weight only. In general, taxes and subsidies influenced consumption in the desired direction, with larger taxes being associated with more significant changes in consumption, body weight and disease incidence. However, studies that focused on a single target food or nutrient may have overestimated the impact of taxes by failing to take into account shifts in consumption to other foods. The quality of the evidence was generally low. Almost all studies were conducted in high-income countries. Food taxes and subsidies have the potential to contribute to healthy consumption patterns at the population level. However, current evidence is generally of low quality, and the empirical evaluation of existing taxes is a research priority, along with research into the effectiveness and differential impact of food taxes in developing countries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beres, W.; Koul, A.K.
1994-09-01
Stress intensity factors for thru-thickness and thumb-nail cracks in the double edge notch specimens, containing two different notch radius (R) to specimen width (W) ratios (R/W = 1/8 and 1/16), are calculated through finite element analysis. The finite element results are compared with predictions based on existing empirical models for SIF calculations. The effects of a change in R/W ratio on SIF of thru-thickness and thumb-nail cracks are also discussed. 34 refs.
Damping parameter study of a perforated plate with bias flow
NASA Astrophysics Data System (ADS)
Mazdeh, Alireza
One of the main impediments to successful operation of combustion systems in industrial and aerospace applications, including gas turbines, ramjets, rocket motors, afterburners (augmenters) and even large heaters/boilers, is dynamic instability, also known as thermo-acoustic instability. Concerns with this ongoing problem have grown with the introduction of Lean Premixed Combustion (LPC) systems developed to address the environmental concerns associated with conventional combustion systems. The most common way to mitigate thermo-acoustic instability is adding acoustic damping to the combustor using acoustic liners. Recently, the damping properties of bias flow, initially introduced to liners only for cooling purposes, have been recognized and proven to be an asset in enhancing the damping effectiveness of liners. Acoustic liners are currently designed using empirical design rules followed by build-test-improve steps; basically, by trial and error. There are growing concerns about the lack of reliability associated with the experimental evaluation of acoustic liners with small-size apertures. The development of physics-based tools to assist the design of such liners has therefore become of great interest to practitioners. This dissertation focuses primarily on how Large-Eddy Simulation (LES) or similar techniques, such as Scale-Adaptive Simulation (SAS), can be used to characterize the damping properties of bias flow. The dissertation also reviews assumptions made in the existing analytical, semi-empirical, and numerical models, provides criteria to rank-order the existing models, and identifies the best existing theoretical model. Flow-field calculations by LES provide good insight into the mechanisms that lead to acoustic damping. Comparison of simulation results with empirical and analytical studies shows that LES is a viable alternative to the empirical and analytical methods and can accurately predict the damping behavior of liners. Currently, the role of LES in research on the damping properties of liners is limited to validation of other empirical or theoretical approaches. This research has shown that LES can go beyond that: it can be used for parametric studies to characterize the sensitivity of the acoustic properties of multi-perforated liners to changes in geometry and flow conditions, and as a tool to design acoustic liners. The conducted research provides an insightful understanding of the contribution of different flow and geometry parameters such as perforated plate thickness, aperture radius, porosity factor and bias flow velocity. While the study agrees with previous observations obtained by analytical or experimental methods, it also quantifies the impact of these parameters on the acoustic impedance of the perforated plate, a key parameter in determining the acoustic performance of any system. The study has also explored the limitations and capabilities of commercial tools when applied to simulation studies of the damping properties of liners. The overall agreement between LES results and previous studies shows that commercial tools can be effectively used for these applications under certain conditions.
Wakefield, Claire E.
2013-01-01
Adolescents and young adults (AYAs) with cancer must simultaneously navigate the challenges associated with their cancer experience, whilst striving to achieve a number of important developmental milestones at the cusp of adulthood. The disruption caused by their cancer experience at this critical life-stage is assumed to be responsible for significant distress among AYAs living with cancer. The quality and severity of psychological outcomes among AYAs remain poorly documented, however. This review examined the existing literature on psychological outcomes among AYAs living with cancer. All psychological outcomes (both distress and positive adjustment) were included, and AYAs were included across the cancer trajectory, ranging from newly-diagnosed patients to long-term cancer survivors. Four key research questions were addressed. Section 1 answered the question, “What is the nature and prevalence of distress (and other psychological outcomes) among AYAs living with cancer?” and documented rates of clinical distress, as well as evidence for the trajectory of this distress over time. Section 2 examined the individual, cancer/treatment-related and socio-demographic factors that have been identified as predictors of these outcomes in the existing literature. Section 3 examined current theoretical models relevant to explaining psychological outcomes among AYAs, including developmental models, socio-cognitive and family-systems models, stress-coping frameworks, and cognitive appraisal models (including trauma and meaning-making models). The mechanisms implicated in each model were discussed, as was the existing evidence for each model. Converging evidence implicating the potential role of autobiographical memory and future thinking systems in how AYAs process and integrate their cancer experience into their current sense of self and future goals is highlighted. Finally, Section 4 addressed the future of psycho-oncology in understanding and conceptualizing psychological outcomes among AYAs living with cancer, by discussing recent empirical advancements in adjacent, non-oncology fields that might improve our understanding of psychological outcomes in AYAs living with cancer. Included in these were models of memory and future thinking drawn from the broader psychology literature that identify important mechanisms involved in adjustment, as well as experimental paradigms for the study of these mechanisms within analogue, non-cancer AYA samples. PMID:26835313
Stylized facts in social networks: Community-based static modeling
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo
2018-06-01
The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
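A bare-bones version of such a community-based static construction can be sketched as follows: community sizes are drawn from a heavy-tailed distribution, and intra-community link density shrinks with community size. The sampling choices are illustrative assumptions, and the published model's mechanism for connecting communities (e.g., overlapping memberships) is omitted here:

```python
import numpy as np

rng = np.random.default_rng(11)

# Heavy-tailed community sizes via inverse-CDF sampling (illustrative choices)
u = rng.uniform(size=100)
sizes = np.clip(np.round(3 * (1 - u) ** (-1 / 1.5)).astype(int), 3, 200)

edges = set()
offset = 0
for s in sizes:
    p = min(1.0, 8.0 / s)   # larger community -> smaller link density
    for i in range(s):
        for j in range(i + 1, s):
            if rng.random() < p:
                edges.add((offset + i, offset + j))
    offset += s

print(offset, len(edges), 2 * len(edges) / offset)   # nodes, links, mean degree
```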
Moral Enhancement Should Target Self-Interest and Cognitive Capacity.
Ahlskog, Rafael
2017-01-01
Current suggestions for capacities that should be targeted for moral enhancement have centered on traits like empathy, fairness or aggression. The literature, however, lacks a proper model for understanding the interplay and complexity of moral capacities, which limits the practicability of proposed interventions. In this paper, I integrate some existing knowledge on the nature of human moral behavior and present a formal model of prosocial motivation. The model provides two important results regarding the most friction-free route to moral enhancement. First, we should consider decreasing self-interested motivation rather than increasing prosociality directly. Second, this should be complemented with cognitive enhancement. These suggestions are tested against existing and emerging evidence on cognitive capacity, mindfulness meditation and the effects of psychedelic drugs, and are found to have sufficient grounding for further theoretical and empirical exploration. Furthermore, the moral effects of the latter two are hypothesized to result from a diminished sense of self with subsequent reductions in self-interest.
Kickoff to Conflict: A Sequence Analysis of Intra-State Conflict-Preceding Event Structures
D'Orazio, Vito; Yonamine, James E.
2015-01-01
While many studies have suggested or assumed that the periods preceding the onset of intra-state conflict are similar across time and space, few have empirically tested this proposition. Using the Integrated Crisis Early Warning System's domestic event data in Asia from 1998–2010, we subject this proposition to empirical analysis. We code the similarity of government-rebel interactions in sequences preceding the onset of intra-state conflict to those preceding further periods of peace using three different metrics: Euclidean, Levenshtein, and mutual information. These scores are then used as predictors in a bivariate logistic regression to forecast whether we are likely to observe conflict in neither, one, or both of the states. We find that our model accurately classifies cases where both sequences precede peace, but struggles to distinguish between cases in which one sequence escalates to conflict and where both sequences escalate to conflict. These findings empirically suggest that generalizable patterns exist between event sequences that precede peace. PMID:25951105
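Of the three sequence-similarity metrics, the Levenshtein (edit) distance is the least standard outside text processing and worth sketching; the event codes below are hypothetical stand-ins for coded government-rebel interactions:

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two event sequences, with unit cost for
    insertion, deletion, and substitution. Elements can be any comparable
    event codes."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,        # deletion
                          d[i, j - 1] + 1,        # insertion
                          d[i - 1, j - 1] + cost) # substitution
    return int(d[-1, -1])

# Two hypothetical interaction sequences (event codes are made up)
print(levenshtein(["accuse", "threaten", "clash"], ["accuse", "clash", "clash"]))
```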
Empirical confirmation of creative destruction from world trade data.
Klimek, Peter; Hausmann, Ricardo; Thurner, Stefan
2012-01-01
We show that world trade network datasets contain empirical evidence that the dynamics of innovation in the world economy indeed follows the concept of creative destruction, as proposed by J.A. Schumpeter more than half a century ago. National economies can be viewed as complex, evolving systems, driven by a stream of appearance and disappearance of goods and services. Products appear in bursts of creative cascades. We find that products systematically tend to co-appear, and that product appearances lead to massive disappearance events of existing products in the following years. The opposite, disappearances followed by periods of appearances, is not observed. This is an empirical validation of the dominance of cascading competitive replacement events on the scale of national economies, i.e., creative destruction. We find a tendency that more complex products drive out less complex ones, i.e., progress has a direction. Finally, we show that the growth trajectory of a country's product output diversity can be understood by a recently proposed evolutionary model of Schumpeterian economic dynamics.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's ability to characterize materials.
Rethinking our approach to gender and disasters: Needs, responsibilities, and solutions.
Montano, Samantha; Savitt, Amanda
2016-01-01
To explore how the existing literature has discussed the vulnerability and needs of women in a disaster context, and to consider the literature's suggestions for how to minimize vulnerability and address the needs of women, including who within emergency management should be responsible for such efforts. Empirical journal articles and book chapters from the disaster literature were collected that focused on "women" or "gender," and their results and recommendations were analyzed. This review found that existing empirical research on women during disasters focuses on their vulnerabilities more than their needs. Second, when researchers do suggest solutions, those solutions tend not to be comprehensive or supported by empirical evidence. Finally, it is not clear from existing research who is responsible for addressing these needs and implementing solutions. Future research should study the intersection of gender and disasters in terms of needs and solutions, including who is responsible for implementing solutions.
Wind Energy Facilities and Residential Properties: The Effect of Proximity and View on Sales Prices
DOE Office of Scientific and Technical Information (OSTI.GOV)
San Diego State University; Bard Center for Environmental Policy at Bard College; Hoen, Ben
2011-06-23
With increasing numbers of communities considering wind power developments, empirical investigations regarding related community concerns are needed. One such concern is that proximate property values may be adversely affected, yet relatively little research exists on the subject. The present research investigates roughly 7,500 sales of single-family homes surrounding 24 existing U.S. wind facilities. Across four different hedonic models, and a variety of robustness tests, the results are consistent: neither the view of the wind facilities nor the distance of the home to those facilities is found to have a statistically significant effect on sales prices, yet further research is warranted.
Wind Energy Facilities and Residential Properties: The Effect of Proximity and View on Sales Prices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoen, Ben; Wiser, Ryan; Cappers, Peter
2010-04-01
With an increasing number of communities considering nearby wind power developments, there is a need to empirically investigate community concerns about wind project development. One such concern is that property values may be adversely affected by wind energy facilities, and relatively little research exists on the subject. The present research investigates roughly 7,500 sales of single-family homes surrounding 24 existing U.S. wind facilities. Across four different hedonic models the results are consistent: neither the view of the wind facilities nor the distance of the home to those facilities is found to have a statistically significant effect on home sales prices.
Mechanistic-empirical design concepts for continuously reinforced concrete pavements in Illinois.
DOT National Transportation Integrated Search
2009-04-01
The Illinois Department of Transportation (IDOT) currently has an existing jointed plain concrete pavement (JPCP) design based on mechanistic-empirical (M-E) principles. However, their continuously reinforced concrete pavement (CRCP) design proce...
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the mere possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
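For orientation, a minimal sketch of the two-parameter logistic IRF being estimated is given below, fit here by ordinary least squares to synthetic proportions correct; the equal-area algorithm itself is not reproduced, and all values are illustrative.

```python
# Illustrative 2PL item response function and a simple curve fit to
# empirical proportions correct; not the paper's equal-area algorithm.
import numpy as np
from scipy.optimize import curve_fit

def irf_2pl(theta, a, b):
    """P(correct | ability theta) with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta_grid = np.linspace(-3, 3, 13)            # ability levels
rng = np.random.default_rng(42)
p_true = irf_2pl(theta_grid, a=1.4, b=-0.3)
p_obs = np.clip(p_true + 0.04 * rng.standard_normal(13), 0.01, 0.99)

(a_hat, b_hat), _ = curve_fit(irf_2pl, theta_grid, p_obs, p0=(1.0, 0.0))
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
```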
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Massey, Steven J.; Abdol-Hamid, Khaled S.; Frink, Neal T.
1999-01-01
USM3D is a widely-used unstructured flow solver for simulating inviscid and viscous flows over complex geometries. The current version (version 5.0) of USM3D, however, does not have advanced turbulence models to accurately simulate complicated flows. We have implemented two modified versions of the original Jones and Launder k-epsilon two-equation turbulence model and the Girimaji algebraic Reynolds stress model in USM3D. Tests have been conducted for two flat-plate boundary-layer cases, an RAE2822 airfoil, and an ONERA M6 wing. The results are compared with those of empirical formulae, theoretical results, and the existing Spalart-Allmaras one-equation model.
Improving Marine Ecosystem Models with Biochemical Tracers
NASA Astrophysics Data System (ADS)
Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.
2018-01-01
Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.
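Where the abstract above names trophic position as a key parameter derived from tracers, a minimal sketch of the standard bulk nitrogen-isotope calculation may help; the enrichment factor of 3.4 and the example d15N values are illustrative, Post-style assumptions rather than data from the paper.

```python
# Trophic position from bulk nitrogen stable isotopes:
# TP = lam + (d15N_consumer - d15N_base) / delta_n, delta_n ~ 3.4 per trophic step.
def trophic_position(d15n_consumer, d15n_base, lam=2.0, delta_n=3.4):
    """lam: trophic level of the baseline organism (2 for primary consumers)."""
    return lam + (d15n_consumer - d15n_base) / delta_n

# Illustrative values: a predatory fish against a primary-consumer baseline
print(f"TP = {trophic_position(d15n_consumer=14.2, d15n_base=6.1):.2f}")
```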
Bagby, R Michael; Widiger, Thomas A
2018-01-01
The Five-Factor Model (FFM) is a dimensional model of general personality structure, consisting of the domains of neuroticism (or emotional instability), extraversion versus introversion, openness (or unconventionality), agreeableness versus antagonism, and conscientiousness (or constraint). The FFM is arguably the most commonly researched dimensional model of general personality structure. However, a notable limitation of existing measures of the FFM has been a lack of coverage of its maladaptive variants. A series of self-report inventories has been developed to assess for the maladaptive personality traits that define Diagnostic and Statistical Manual of Mental Disorders (fifth edition; DSM-5) Section II personality disorders (American Psychiatric Association [APA], 2013) from the perspective of the FFM. In this paper, we provide an introduction to this Special Section, presenting the rationale and empirical support for these measures and placing them in the historical context of the recent revision to the APA diagnostic manual. This introduction is followed by 5 papers that provide further empirical support for these measures and address current issues within the personality assessment literature. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlational design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of listeners' self-reports of the emotions expressed by music, and they show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia
2015-01-01
Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, I2 statistic, and between study τ2. Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65 pre-emptive compared to empirical) than the empirical approach per FN episode. However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
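For readers unfamiliar with the pooling step behind estimates like those above, a minimal DerSimonian-Laird random-effects sketch on the log relative-risk scale follows; the study counts are invented for illustration and do not reproduce the review's data.

```python
# Random-effects (DerSimonian-Laird) pooled relative risk; invented toy data.
import numpy as np

# (events_preemptive, n_preemptive, events_empirical, n_empirical) per study
studies = [(5, 100, 9, 100), (12, 150, 14, 145), (3, 80, 6, 82)]

log_rr, var = [], []
for e1, n1, e0, n0 in studies:
    log_rr.append(np.log((e1 / n1) / (e0 / n0)))
    var.append(1/e1 - 1/n1 + 1/e0 - 1/n0)        # variance of log RR
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                       # fixed-effect weights
q = np.sum(w * (log_rr - np.sum(w * log_rr) / w.sum()) ** 2)  # Cochran's Q
tau2 = max(0.0, (q - (len(studies) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (var + tau2)                           # random-effects weights
pooled = np.sum(w_re * log_rr) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
```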
A mechanistic investigation of the oxygen fixation hypothesis and oxygen enhancement ratio.
Grimes, David Robert; Partridge, Mike
2015-12-04
The presence of oxygen in tumours has a substantial impact on treatment outcome; relative to anoxic regions, well-oxygenated cells respond better to radiotherapy by a factor of 2.5-3. This increased radio-response is known as the oxygen enhancement ratio. The oxygen effect is most commonly explained by the oxygen fixation hypothesis, which postulates that radical-induced DNA damage can be permanently 'fixed' by molecular oxygen, rendering DNA damage irreparable. While this oxygen effect is important both in existing therapy and for future modalities such as radiation dose-painting, the majority of existing mathematical models for oxygen enhancement are empirical rather than based on the underlying physics and radiochemistry. Here we propose a model of oxygen-enhanced damage from physical first principles, investigating factors that might influence the cell kill. This is fitted to a range of experimental oxygen curves from the literature and shown to describe them well, yielding a single robust term for oxygen interaction. The model also reveals that a small thermal dependency exists, but that it is unlikely to be exploitable.
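For contrast with the mechanistic model the paper derives, a hedged sketch of the classical empirical form is shown below: the Alper-Howard-Flanders curve OER(p) = (m*p + K)/(p + K), fit to illustrative oxygen-response points. The data values are invented for demonstration.

```python
# Fitting the classical Alper-Howard-Flanders OER curve to illustrative data.
import numpy as np
from scipy.optimize import curve_fit

def oer(p_o2, m, K):
    """m: maximum enhancement at full oxygenation; K: half-effect pO2 (mmHg)."""
    return (m * p_o2 + K) / (p_o2 + K)

p = np.array([0.1, 0.5, 1, 3, 10, 30, 160])        # pO2 in mmHg (illustrative)
ratio = np.array([1.05, 1.3, 1.6, 2.3, 2.7, 2.9, 3.0])

(m_hat, K_hat), _ = curve_fit(oer, p, ratio, p0=(3.0, 3.0))
print(f"m = {m_hat:.2f}, K = {K_hat:.2f} mmHg")
```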
Eaton, Mitchell J.; Hughes, Phillip T.; Hines, James E.; Nichols, James D.
2014-01-01
Metapopulation ecology is a field that is richer in theory than in empirical results. Many existing empirical studies use an incidence function approach based on spatial patterns and key assumptions about extinction and colonization rates. Here we recast these assumptions as hypotheses to be tested using 18 years of historic detection survey data combined with four years of data from a new monitoring program for the Lower Keys marsh rabbit. We developed a new model to estimate probabilities of local extinction and colonization in the presence of nondetection, while accounting for estimated occupancy levels of neighboring patches. We used model selection to identify important drivers of population turnover and estimate the effective neighborhood size for this system. Several key relationships related to patch size and isolation that are often assumed in metapopulation models were supported: patch size was negatively related to the probability of extinction and positively related to colonization, and estimated occupancy of neighboring patches was positively related to colonization and negatively related to extinction probabilities. This latter relationship suggested the existence of rescue effects. In our study system, we inferred that coastal patches experienced higher probabilities of extinction and colonization than interior patches. Interior patches exhibited higher occupancy probabilities and may serve as refugia, permitting colonization of coastal patches following disturbances such as hurricanes and storm surges. Our modeling approach should be useful for incorporating neighbor occupancy into future metapopulation analyses and in dealing with other historic occupancy surveys that may not include the recommended levels of sampling replication.
Fluid mechanics of Windkessel effect.
Mei, C C; Zhang, J; Jing, H X
2018-01-08
We describe a mechanistic model of the Windkessel phenomenon based on the linear dynamics of fluid-structure interactions. The phenomenon has its origin in old-fashioned fire-fighting equipment, in which an air chamber serves to transform the intermittent influx from a pump into a steadier stream out of the hose. A similar mechanism exists in the cardiovascular system, where blood injected intermittently from the heart becomes rather smooth after passing through an elastic aorta. In the existing haemodynamics literature, this mechanism is explained on the basis of an electric-circuit analogy with empirical impedances. We present a mechanistic theory based on the principles of fluid/structure interactions. Using a simple one-dimensional model, wave motion in the elastic aorta is coupled to the viscous flow in the rigid peripheral artery. Explicit formulas are derived that exhibit the role of material properties such as the blood density, viscosity, wall elasticity, and the radii and lengths of the vessels. The current two-element model in haemodynamics is shown to be the limit of a short aorta and low injection frequency, and the impedance coefficients are derived theoretically. Numerical results for different aorta lengths and radii are discussed to demonstrate their effects on the time variations of blood pressure, wall shear stress, and discharge. Graphical Abstract: A mechanistic analysis of the Windkessel effect is described which confirms theoretically the well-known feature that intermittent influx becomes continuous outflow. The theory depends only on the density and viscosity of the blood and the elasticity and dimensions of the vessel. Empirical impedance parameters are avoided.
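The limiting two-element model mentioned above has a compact form, C dP/dt + P/R = Q(t); the sketch below integrates it under a half-sine, intermittent inflow to show the smoothing effect. Parameter values are illustrative, not taken from the paper.

```python
# Two-element Windkessel: C dP/dt + P/R = Q(t), with intermittent inflow.
import numpy as np
from scipy.integrate import solve_ivp

R, C, T = 1.0, 1.5, 1.0           # peripheral resistance, compliance, period

def inflow(t):
    """Half-sine ejection for the first 40% of each cycle, zero otherwise."""
    phase = t % T
    return np.sin(np.pi * phase / (0.4 * T)) if phase < 0.4 * T else 0.0

sol = solve_ivp(lambda t, p: (inflow(t) - p / R) / C,
                (0, 10 * T), [1.0], max_step=0.01)
last = sol.t > 9 * T              # pressure swing over the final cycle
print(f"pressure varies only from {sol.y[0][last].min():.2f} "
      f"to {sol.y[0][last].max():.2f} despite zero inflow for 60% of the cycle")
```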
Walter, Stephen D.; Riddell, Corinne A.; Rabachini, Tatiana; Villa, Luisa L.; Franco, Eduardo L.
2013-01-01
Introduction Studies on the association of a polymorphism in codon 72 of the p53 tumour suppressor gene (rs1042522) with cervical neoplasia have inconsistent results. While several methods for genotyping p53 exist, they vary in accuracy and are often discrepant. Methods We used latent class models (LCM) to examine the accuracy of six methods for p53 determination, all conducted by the same laboratory. We also examined the association of p53 with cytological cervical abnormalities, recognising potential test inaccuracy. Results Pairwise disagreement between laboratory methods occurred approximately 10% of the time. Given the estimated true p53 status of each woman, we found that each laboratory method is most likely to classify a woman to her correct status. Arg/Arg women had the highest risk of squamous intraepithelial lesions (SIL). Test accuracy was independent of cytology. There was no strong evidence for correlations of test errors. Discussion Empirical analyses ignore possible laboratory errors, and so are inherently biased, but test accuracy estimated by the LCM approach is unbiased when model assumptions are met. LCM analysis avoids ambiguities arising from empirical test discrepancies, obviating the need to regard any of the methods as a “gold” standard measurement. The methods we presented here to analyse the p53 data can be applied in many other situations where multiple tests exist, but where none of them is a gold standard. PMID:23441193
A theoretical framework for the associations between identity and psychopathology.
Klimstra, Theo A; Denissen, Jaap J A
2017-11-01
Identity research largely emerged from clinical observations. Decades of empirical work advanced the field in refining existing approaches and adding new approaches. Furthermore, the existence of linkages of identity with psychopathology is now well established. Unfortunately, both the directionality of effects between identity aspects and psychopathology symptoms, and the mechanisms underlying associations are unclear. In the present paper, we present a new framework to inspire hypothesis-driven empirical research to overcome this limitation. The framework has a basic resemblance to theoretical models for the study of personality and psychopathology, so we provide examples of how these might apply to the study of identity. Next, we explain that unique features of identity may come into play in individuals suffering from psychopathology that are mostly related to the content of one's identity. These include pros and cons of identifying with one's diagnostic label. Finally, inspired by Hermans' dialogical self theory and principles derived from Piaget's, Swann's and Kelly's work, we delineate a framework with identity at the core of an individual multidimensional space. In this space, psychopathology symptoms have a known distance (representing relevance) to one's identity, and individual multidimensional spaces are connected to those of other individuals in one's social network. We discuss methodological (quantitative and qualitative, idiographic and nomothetic) and statistical procedures (multilevel models and network models) to test the framework. Resulting evidence can boost the field of identity research in demonstrating its high practical relevance for the emergence and conservation of psychopathology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, B.C.
This study is an assessment of the ground shock which may be generated in the event of an accidental explosion at J5 or the Proposed Large Altitude Rocket Cell (LARC) at the Arnold Engineering Development Center (AEDC). The assessment is accomplished by reviewing existing empirical relationships for predicting ground motion from ground shock. These relationships are compared with data for surface explosions at sites with similar geology and with yields similar to expected conditions at AEDC. Empirical relationships are developed from these data and a judgment made whether to use existing empirical relationships or the relationships developed in this study. An existing relationship (Lipner et al.) is used to predict velocity; the empirical relationships developed in the course of this study are used to predict acceleration and displacement. The ground motions are presented in table form and as contour plots. Included also is a discussion of damage criteria from blast and earthquake studies. This report recommends using velocity rather than acceleration as an indicator of structural blast damage. It is recommended that v = 2 ips (v = 0.167 fps) be used as the damage threshold value (no major damage for v less than or equal to 2 ips). 13 references, 25 figures, 6 tables.
Characterization and effectiveness of pay-for-performance in ophthalmology: a systematic review.
Herbst, Tim; Emmert, Martin
2017-06-05
To identify, characterize and compare existing pay-for-performance approaches and their impact on the quality of care and efficiency in ophthalmology. A systematic evidence-based review was conducted. Literature written in English, French, and German and published between 2000 and 2015 was searched in the following databases: Medline (via PubMed), the NCBI web site, Scopus, Web of Knowledge, Econlit and the Cochrane Library. Empirical as well as descriptive articles were included. Controlled clinical trials, meta-analyses, randomized controlled studies, and observational studies were included as empirical articles. Systematic characterization of the identified pay-for-performance approaches (P4P approaches) was conducted according to the "Model for Implementing and Monitoring Incentives for Quality" (MIMIQ). The methodological quality of the empirical articles was assessed according to the Critical Appraisal Skills Programme (CASP) checklists. Overall, 13 relevant articles were included. Eleven articles were descriptive and two included empirical analyses. Based on these articles, four different pay-for-performance approaches implemented in the United States were identified. With regard to quality and incentive elements, systematic comparison showed numerous differences between the P4P approaches. Empirical studies showed isolated cost or quality effects, while a simultaneous examination of these effects was missing. The research results show that experience with pay-for-performance approaches in ophthalmology is limited. The identified approaches differ with regard to quality and incentive elements, restricting comparability. Two empirical studies are insufficient to draw strong conclusions about the effectiveness and efficiency of these approaches.
The Dark Matter Crisis: Falsification of the Current Standard Model of Cosmology
NASA Astrophysics Data System (ADS)
Kroupa, P.
2012-06-01
The current standard model of cosmology (SMoC) requires The Dual Dwarf Galaxy Theorem to be true, according to which two types of dwarf galaxies must exist: primordial dark-matter (DM) dominated (type A) dwarf galaxies, and tidal-dwarf and ram-pressure-dwarf (type B) galaxies void of DM. Type A dwarfs surround the host approximately spherically, while type B dwarfs are typically correlated in phase-space. Type B dwarfs must exist in any cosmological theory in which galaxies interact. Only one type of dwarf galaxy is observed to exist on the baryonic Tully-Fisher plot and in the radius-mass plane. The Milky Way satellite system forms a vast phase-space-correlated structure that includes globular clusters and stellar and gaseous streams. Other galaxies also have phase-space correlated satellite systems. Therefore, The Dual Dwarf Galaxy Theorem is falsified by observation, and dynamically relevant cold or warm DM cannot exist. It is shown that the SMoC is incompatible with a large set of other extragalactic observations. Other theoretical solutions to cosmological observations exist. In particular, the empirical mass-discrepancy-acceleration correlation alone constitutes convincing evidence that galactic-scale dynamics must be Milgromian. Major problems with inflationary big bang cosmologies remain unresolved.
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model that is frequently used in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability was also studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10% mean absolute error), especially when the water depth is high. The results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of the data, and the ability to change parameters based on building practices across Italy.
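To make the depth-damage idea concrete, the sketch below fits a two-parameter saturating stage-damage curve, L(d) = 1 - exp(-(d/a)^b), to synthetic loss ratios and reports a mean absolute error; the functional form and data are illustrative stand-ins, not the actual FLF-IT parameterization or the Emilia-Romagna sample.

```python
# Illustrative depth-damage (stage-damage) curve fit; synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def loss_ratio(depth_m, a, b):
    """Fraction of building value lost at water depth depth_m (saturates at 1)."""
    return 1.0 - np.exp(-(depth_m / a) ** b)

rng = np.random.default_rng(7)
depth = rng.uniform(0.1, 3.0, 120)
obs = np.clip(loss_ratio(depth, 1.8, 1.3) + 0.05 * rng.standard_normal(120), 0, 1)

(a_hat, b_hat), _ = curve_fit(loss_ratio, depth, obs, p0=(1.0, 1.0))
mae = np.mean(np.abs(loss_ratio(depth, a_hat, b_hat) - obs))
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, mean absolute error = {mae:.3f}")
```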
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286
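A hedged numerical sketch of the proposed double-penalized estimator follows, written from the standard formulas (Firth's corrected score plus a ridge penalty inside the Newton step); it is not the authors' code, and the toy data with perfect separation and near-collinear items are invented to show where plain maximum likelihood would fail.

```python
# Double-penalized logistic regression: Firth bias correction plus ridge.
import numpy as np

def double_penalized_logistic(X, y, lam=1.0, n_iter=50):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-X @ beta))
        W = pi * (1 - pi)
        XtWX = X.T @ (W[:, None] * X)
        # leverages h_i of the weighted hat matrix (Firth correction);
        # pinv guards against near-singular XtWX under collinearity
        h = np.einsum('ij,jk,ik->i', X * W[:, None], np.linalg.pinv(XtWX), X)
        score = X.T @ (y - pi + h * (0.5 - pi)) - lam * beta  # penalized score
        info = XtWX + lam * np.eye(p)                         # penalized information
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

# Separated, collinear toy data: plain ML estimates would diverge here
rng = np.random.default_rng(3)
x1 = rng.standard_normal(40)
X = np.column_stack([np.ones(40), x1, x1 + 0.01 * rng.standard_normal(40)])
y = (x1 > 0).astype(float)        # perfect separation on x1
print(double_penalized_logistic(X, y, lam=0.5))   # finite, stable estimates
```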
How Does Rumination Impact Cognition? A First Mechanistic Model.
van Vugt, Marieke K; van der Velde, Maarten
2018-01-01
Rumination is a process of uncontrolled, narrowly focused negative thinking that is often self-referential, and that is a hallmark of depression. Despite its importance, little is known about its cognitive mechanisms. Rumination can be thought of as a specific, constrained form of mind-wandering. Here, we introduce a cognitive model of rumination that we developed on the basis of our existing model of mind-wandering. The rumination model implements the hypothesis that rumination is caused by maladaptive habits of thought. These habits of thought are modeled by adjusting the number of memory chunks and their associative structure, which changes the sequence of memories that are retrieved during mind-wandering, such that during rumination the same set of negative memories is retrieved repeatedly. The implementation of habits of thought was guided by empirical data from an experience sampling study in healthy and depressed participants. On the basis of this empirically derived memory structure, our model naturally predicts the declines in cognitive task performance that are typically observed in depressed patients. This study demonstrates how we can use cognitive models to better understand the cognitive mechanisms underlying rumination and depression. Copyright © 2018 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Wiedermann, Wolfgang; Li, Xintong
2018-04-16
In nonexperimental data, at least three possible explanations exist for the association of two variables x and y: (1) x is the cause of y, (2) y is the cause of x, or (3) an unmeasured confounder is present. Statistical tests that identify which of the three explanatory models fits best would be a useful adjunct to the use of theory alone. The present article introduces one such statistical method, direction dependence analysis (DDA), which assesses the relative plausibility of the three explanatory models on the basis of higher-moment information about the variables (i.e., skewness and kurtosis). DDA involves the evaluation of three properties of the data: (1) the observed distributions of the variables, (2) the residual distributions of the competing models, and (3) the independence properties of the predictors and residuals of the competing models. When the observed variables are nonnormally distributed, we show that DDA components can be used to uniquely identify each explanatory model. Statistical inference methods for model selection are presented, and macros to implement DDA in SPSS are provided. An empirical example is given to illustrate the approach. Conceptual and empirical considerations are discussed for best-practice applications in psychological data, and sample size recommendations based on previous simulation studies are provided.
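One DDA component lends itself to a small demonstration: when x causes y and x is nonnormal, the residuals of the correctly specified model y ~ x are closer to normal than those of the reversed model x ~ y. The sketch below shows only this residual-distribution asymmetry; the full method and the authors' SPSS macros are not reproduced.

```python
# Residual-skewness asymmetry exploited by direction dependence analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.exponential(scale=1.0, size=2000)        # skewed true cause
y = 0.8 * x + rng.standard_normal(2000)          # true model: x -> y

def residual_skew(pred, resp):
    slope, intercept, *_ = stats.linregress(pred, resp)
    return stats.skew(resp - (intercept + slope * pred))

print(f"skew of residuals, y ~ x: {residual_skew(x, y):+.3f}")   # near zero
print(f"skew of residuals, x ~ y: {residual_skew(y, x):+.3f}")   # clearly nonzero
```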
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2, and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random-walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
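The core rescaling idea reads naturally in a toy linear batch estimate. In the sketch below, the theoretical weighted-least-squares covariance is scaled by the average weighted residual variance, so an unmodeled bias inflates the reported uncertainty. The formulas follow the description in the abstract, not NASA's implementation, and all data are synthetic.

```python
# Empirical vs. theoretical state error covariance in a toy WLS problem.
import numpy as np

rng = np.random.default_rng(5)
m, n = 200, 3
A = rng.standard_normal((m, n))                  # measurement partials
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1
W = np.eye(m) / sigma**2                         # assumed measurement weights
# measurements corrupted by noise AND an unmodeled bias the estimator ignores
z = A @ x_true + sigma * rng.standard_normal(m) + 0.05 * np.sin(np.arange(m))

P_theory = np.linalg.inv(A.T @ W @ A)            # maps assumed noise only
x_hat = P_theory @ A.T @ W @ z
r = z - A @ x_hat                                # residuals carry all errors
P_empirical = (r @ W @ r / m) * P_theory         # scaled by avg weighted residual

print("theoretical sigmas:", np.sqrt(np.diag(P_theory)))
print("empirical sigmas:  ", np.sqrt(np.diag(P_empirical)))  # inflated by the bias
```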
Health lifestyle theory and the convergence of agency and structure.
Cockerham, William C
2005-03-01
This article utilizes the agency-structure debate as a framework for constructing a health lifestyle theory. No such theory currently exists, yet the need for one is underscored by the fact that many daily lifestyle practices involve considerations of health outcomes. An individualist paradigm has influenced concepts of health lifestyles in several disciplines, but this approach neglects the structural dimensions of such lifestyles and has limited applicability to the empirical world. The direction of this article is to present a theory of health lifestyles that includes considerations of both agency and structure, with an emphasis upon restoring structure to its appropriate position. The article begins by defining agency and structure, followed by presentation of a health lifestyle model and the theoretical and empirical studies that support it.
Martin, Natasha K.; Skaathun, Britt; Vickerman, Peter; Stuart, David
2017-01-01
Background People who inject drugs (PWID) and HIV-infected men who have sex with men (MSM) are key risk groups for hepatitis C virus (HCV) transmission. Mathematical modeling studies can help elucidate what level and combination of prevention intervention scale-up is required to control or eliminate epidemics among these key populations. Methods We discuss the evidence surrounding HCV prevention interventions and provide an overview of the mathematical modeling literature projecting the impact of scaled-up HCV prevention among PWID and HIV-infected MSM. Results Harm reduction interventions such as opiate substitution therapy and needle and syringe programs are effective in reducing HCV incidence among PWID. Modeling and limited empirical data indicate HCV treatment could additionally be used for prevention. No studies have evaluated the effectiveness of behavior change interventions to reduce HCV incidence among MSM, but existing interventions to reduce HIV risk could be effective. Mathematical modeling and empirical data indicates that scale-up of harm reduction could reduce HCV transmission, but in isolation is unlikely to eliminate HCV among PWID. By contrast, elimination is possibly achievable through combination scale-up of harm reduction and HCV treatment. Similarly, among HIV-infected MSM, eliminating the emerging epidemics will likely require HCV treatment scale-up in combination with additional interventions to reduce HCV-related risk behaviors. Conclusions Elimination of HCV will likely require combination prevention efforts among both PWID and HIV-infected MSM populations. Further empirical research is required to validate HCV treatment as prevention among these populations, and to identify effective behavioral interventions to reduce HCV incidence among MSM. PMID:28534885
FIELD INVESTIGATIONS OF THE DRIFT SHADOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. W. Su, T. J. Kneafsey, T. A. Ghezzehei, B. D. Marshall, and P. J. Cook
The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project we plan to test the drift shadow concept through field investigations and compare our observations to simulations. Based on modeling studies we have identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field-scale tests.
A network model of the interbank market
NASA Astrophysics Data System (ADS)
Li, Shouwei; He, Jianmin; Zhuang, Yaming
2010-12-01
This work introduces a network model of an interbank market based on interbank credit lending relationships. It generates several network features identified through empirical analysis. The critical issue in constructing an interbank network is deciding the edges among banks, which is realized in this paper on the basis of the banks' mutual degree of trust. Through simulation analysis of the interbank network model, some typical structural features are identified in our interbank network, which have also been shown to exist in real interbank networks: namely, a low clustering coefficient and a relatively short average path length, community structures, and power-law distributions of out-degree and in-degree.
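A schematic reconstruction of the construction step may be useful: directed lending edges are drawn with probability proportional to a pairwise "trust" score, and two of the stylized facts are then checked with networkx. The trust rule here is an invented stand-in for the paper's definition.

```python
# Trust-driven interbank network sketch; the trust rule is illustrative.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_banks = 200
size = rng.lognormal(mean=0, sigma=1, size=n_banks)  # heterogeneous bank sizes

G = nx.DiGraph()
G.add_nodes_from(range(n_banks))
for i in range(n_banks):
    for j in range(n_banks):
        if i != j:
            trust = size[j] / (size[j] + size.mean())  # bigger banks more trusted
            if rng.random() < 0.02 * trust:
                G.add_edge(i, j)                       # i lends to j

U = G.to_undirected()
print("clustering coefficient:", nx.average_clustering(U))
giant = U.subgraph(max(nx.connected_components(U), key=len))
print("average path length:  ", nx.average_shortest_path_length(giant))
```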
[Masculinity and femininity scales: current state of the art].
Fernández, Juan; Quiroga, María A; Del Olmo, Isabel; Rodríguez, Antonio
2007-08-01
A theoretical and empirical review of masculinity and femininity scales was carried out after 30 years of their existence. The hypotheses to be tested were: (a) multidimensionality versus bidimensionality; (b) an inadequate percentage of variance accounted for (less than 50%); (c) inconsistency between the factor structure and the dualistic model. In the three studies carried out, 618, 200, and 287 students took part, respectively. Factor analyses (principal axis factoring, PAF) were performed. The results support multidimensionality, an unsatisfactory percentage of variance accounted for, and a lack of congruence between the obtained factors and the dualistic model. All these data were analysed within the context of the twofold sex and gender reality model.
Structure induction in diagnostic causal reasoning.
Meder, Björn; Mayrhofer, Ralf; Waldmann, Michael R
2014-07-01
Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go "beyond the information given" and use the available data to make inferences on the (unobserved) causal rather than on the (observed) data level. (c) 2014 APA, all rights reserved.
Aircraft High-Lift Aerodynamic Analysis Using a Surface-Vorticity Solver
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Albertson, Cindy W.
2016-01-01
This study extends an existing semi-empirical approach to high-lift analysis by examining its effectiveness for use with a three-dimensional aerodynamic analysis method. The aircraft high-lift geometry is modeled in Vehicle Sketch Pad (OpenVSP) using a newly-developed set of techniques for building a three-dimensional model of the high-lift geometry, and for controlling flap deflections using scripted parameter linking. Analysis of the low-speed aerodynamics is performed in FlightStream, a novel surface-vorticity solver that is expected to be substantially more robust and stable compared to pressure-based potential-flow solvers and less sensitive to surface perturbations. The calculated lift curve and drag polar are modified by an empirical lift-effectiveness factor that takes into account the effects of viscosity that are not captured in the potential-flow solution. Analysis results are validated against wind-tunnel data for The Energy-Efficient Transport AR12 low-speed wind-tunnel model, a 12-foot, full-span aircraft configuration with a supercritical wing, full-span slats, and part-span double-slotted flaps.
Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge
Jaiswal, K.S.; Wald, D.J.
2012-01-01
We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.
Empirical Protocol for Measuring Virtual Tachyon / Tardon Interactions in a Dirac Vacuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amoroso, Richard L.; Rauscher, Elizabeth A.
2010-12-22
Here we present discussion for the utility of resonant interference in Calabi-Yau mirror symmetry as a putative empirical test of the existence of virtual tachyon / tardon interactions in a covariant Dirac polarized vacuum.
NASA Astrophysics Data System (ADS)
Lu, Y.; Duursma, R.; Farrior, C.; Medlyn, B. E.
2016-12-01
Stomata control the exchange of soil water for atmospheric CO2, which is one of the most important resource trade-offs for plants. This trade-off has been studied extensively, but not in the context of competition. Based on the theory of evolutionarily stable strategies, we search for the uninvadable (ESS) response of stomatal conductance to soil water content under stochastic rainfall, with which the dominant plant population can never be invaded by rare mutants in the competition for water, owing to its higher fitness. In this study, we define fitness as the difference between the long-term average photosynthetic carbon gain and a carbon cost of stomatal opening. This cost has traditionally been treated as an unknown constant. Here we extend this framework by taking it to be the energy required for refilling xylem embolisms. With regard to the refilling process, we explore two questions: (1) to what extent embolized xylem vessels can be repaired via refilling; and (2) whether this refilling is immediate or delayed following the formation of xylem embolism. We compare various assumptions across a total of 5 scenarios and find that the ESS exists only if the xylem damage can be repaired completely. Then, with this ESS, we estimate annual vegetation photosynthesis and water consumption and compare them with empirical results. In conclusion, this study provides a different insight from existing empirical and mechanistic models, as well as from theoretical models based on optimization theory. In addition, as the model result is a simple quantitative relation between stomatal conductance and soil water content, it can easily be incorporated into other vegetation function models.
The status and challenge of global fire modelling
NASA Astrophysics Data System (ADS)
Hantson, Stijn; Arneth, Almut; Harrison, Sandy P.; Kelley, Douglas I.; Prentice, I. Colin; Rabin, Sam S.; Archibald, Sally; Mouillot, Florent; Arnold, Steve R.; Artaxo, Paulo; Bachelet, Dominique; Ciais, Philippe; Forrest, Matthew; Friedlingstein, Pierre; Hickler, Thomas; Kaplan, Jed O.; Kloster, Silvia; Knorr, Wolfgang; Lasslop, Gitta; Li, Fang; Mangeon, Stephane; Melton, Joe R.; Meyn, Andrea; Sitch, Stephen; Spessa, Allan; van der Werf, Guido R.; Voulgarakis, Apostolos; Yue, Chao
2016-06-01
Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. While a large variety of models exist today, it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international initiative to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we review how fires have been represented in fire-enabled dynamic global vegetation models (DGVMs) and give an overview of the current state of the art in fire-regime modelling. We indicate which challenges still remain in global fire modelling and stress the need for a comprehensive model evaluation and outline what lessons may be learned from FireMIP.
How Peer Pressure Shapes Consensus, Leadership, and Innovations in Social Groups
NASA Astrophysics Data System (ADS)
Estrada, Ernesto; Vargas-Estrada, Eusebio
2013-10-01
What is the effect of the combined direct and indirect social influences--peer pressure (PP)--on a social group's collective decisions? We present a model that captures PP as a function of the socio-cultural distance between individuals in a social group. Using this model and empirical data from 15 real-world social networks we found that the PP level determines how fast a social group reaches consensus. More importantly, the levels of PP determine the leaders who can achieve full control of their social groups. PP can overcome barriers imposed upon a consensus by the existence of tightly connected communities with local leaders or the existence of leaders with poor cohesiveness of opinions. A moderate level of PP is also necessary to explain the rate at which innovations diffuse through a variety of social groups.
Mäs, Michael; Flache, Andreas
2013-01-01
Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals' striving to amplify differences to disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence.
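A compact sketch of the argument-communication mechanism described above follows: homophilous agents exchange pro/con arguments, and opinions are the sum of the arguments an agent holds. The update rules and parameter values are illustrative simplifications, not the authors' exact model.

```python
# Argument-communication model of opinion bi-polarization (simplified sketch).
import numpy as np

rng = np.random.default_rng(9)
n_agents, n_args, memory = 50, 20, 6
sign = np.where(np.arange(n_args) < n_args // 2, 1, -1)   # pro (+1) / con (-1)
held = np.array([rng.choice(n_args, memory, replace=False) for _ in range(n_agents)])

def opinion(agent):
    return sign[held[agent]].sum()            # ranges from -memory to +memory

for step in range(20000):
    a, b = rng.choice(n_agents, 2, replace=False)
    similarity = (memory - abs(opinion(a) - opinion(b)) / 2) / memory
    if rng.random() < max(similarity, 0):     # homophily gates interaction
        arg = rng.choice(held[b])             # b communicates one of its arguments
        if arg not in held[a]:
            held[a][rng.integers(memory)] = arg   # a adopts it, forgetting another

ops = np.array([opinion(i) for i in range(n_agents)])
print("strongly con:", (ops <= -4).sum(), "strongly pro:", (ops >= 4).sum())
```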
Small sum privacy and large sum utility in data publishing.
Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao
2014-08-01
While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.
The effect of fiscal policy on diet, obesity and chronic disease: a systematic review
Jan, Stephen; Leeder, Stephen; Swinburn, Boyd
2010-01-01
Abstract Objective To assess the effect of food taxes and subsidies on diet, body weight and health through a systematic review of the literature. Methods We searched the English-language published and grey literature for empirical and modelling studies on the effects of monetary subsidies or taxes levied on specific food products on consumption habits, body weight and chronic conditions. Empirical studies dealt with an actual tax, while modelling studies predicted outcomes based on a hypothetical tax or subsidy. Findings Twenty-four studies met the inclusion criteria: 13 were from the peer-reviewed literature and 11 were published online. There were 8 empirical and 16 modelling studies. Nine studies assessed the impact of taxes on food consumption only, 5 on consumption and body weight, 4 on consumption and disease and 6 on body weight only. In general, taxes and subsidies influenced consumption in the desired direction, with larger taxes being associated with more significant changes in consumption, body weight and disease incidence. However, studies that focused on a single target food or nutrient may have overestimated the impact of taxes by failing to take into account shifts in consumption to other foods. The quality of the evidence was generally low. Almost all studies were conducted in high-income countries. Conclusion Food taxes and subsidies have the potential to contribute to healthy consumption patterns at the population level. However, current evidence is generally of low quality, and the empirical evaluation of existing taxes is a research priority, along with research into the effectiveness and differential impact of food taxes in developing countries. PMID:20680126
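As a toy illustration of the core mechanism in the modelling studies, the sketch below projects a consumption change from an assumed own-price elasticity; the elasticity and tax rate are invented for illustration, and the calculation deliberately ignores the cross-food substitution the review warns can bias single-food estimates.

```python
# First-order own-price elasticity projection of a consumption change.
def consumption_change(elasticity, tax_rate):
    """Percent change in consumption for a price increase of tax_rate (fraction)."""
    return elasticity * tax_rate * 100

# E.g., a 20% tax with an assumed own-price elasticity of -0.9 (illustrative)
print(f"{consumption_change(-0.9, 0.20):+.1f}% change in consumption")
```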
Ontological addiction theory: Attachment to me, mine, and I.
Van Gordon, William; Shonin, Edo; Diouri, Sofiane; Garcia-Campayo, Javier; Kotera, Yasuhiro; Griffiths, Mark D
2018-06-07
Background Ontological addiction theory (OAT) is a novel metaphysical model of psychopathology and posits that human beings are prone to forming implausible beliefs concerning the way they think they exist, and that these beliefs can become addictive, leading to functional impairments and mental illness. The theoretical underpinnings of OAT derive from the Buddhist philosophical perspective that all phenomena, including the self, do not manifest inherently or independently. Aims and methods This paper outlines the theoretical foundations of OAT along with indicative supportive empirical evidence from studies evaluating meditation awareness training as well as studies investigating non-attachment, emptiness, compassion, and loving-kindness. Results OAT provides a novel perspective on addiction, the factors that underlie mental illness, and how beliefs concerning selfhood are shaped and reified. Conclusion In addition to continuing to test the underlying assumptions of OAT, future empirical research needs to determine how ontological addiction fits with extant theories of self, reality, and suffering, as well as with more established models of addiction.
Conceptualisations of infinity by primary pre-service teachers
NASA Astrophysics Data System (ADS)
Date-Huxtable, Elizabeth; Cavanagh, Michael; Coady, Carmel; Easey, Michael
2018-05-01
As part of the Opening Real Science: Authentic Mathematics and Science Education for Australia project, an online mathematics learning module embedding conceptual thinking about infinity in science-based contexts, was designed and trialled with a cohort of 22 pre-service teachers during 1 week of intensive study. This research addressed the question: "How do pre-service teachers conceptualise infinity mathematically?" Participants argued the existence of infinity in a summative reflective task, using mathematical and empirical arguments that were coded according to five themes: definition, examples, application, philosophy and teaching; and 17 codes. Participants' reflections were differentiated as to whether infinity was referred to as an abstract (A) or a real (R) concept or whether both (B) codes were used. Principal component analysis of the reflections, using frequency of codings, revealed that A and R codes occurred at different frequencies in three groups of reflections. Distinct methods of argument were associated with each group of reflections: mathematical numerical examples and empirical measurement comparisons characterised arguments for infinity as an abstract concept, geometric and empirical dynamic examples and belief statements characterised arguments for infinity as a real concept and empirical measurement and mathematical examples and belief statements characterised arguments for infinity as both an abstract and a real concept. An implication of the results is that connections between mathematical and empirical applications of infinity may assist pre-service teachers to contrast finite with infinite models of the world.
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling of the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
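A minimal sketch of the Taylor-expansion structure behind such a meta-EOS is shown below. The parameter values are typical textbook estimates inserted for illustration, not the averages reviewed in the paper, and the published metamodel adds low-density and kinetic-term corrections omitted here.

```python
import numpy as np

N_SAT = 0.155                                  # saturation density (fm^-3), assumed
E_SAT, K_SAT, Q_SAT = -15.8, 230.0, -300.0     # isoscalar: energy, incompressibility, skewness (MeV)
E_SYM, L_SYM, K_SYM = 32.0, 60.0, -100.0       # isovector: symmetry energy, slope, curvature (MeV)

def energy_per_nucleon(n, delta):
    """Leading-order meta-EOS: e(n, delta) = e_sat(x) + e_sym(x) * delta^2,
    with expansion variable x = (n - n_sat) / (3 n_sat)."""
    x = (n - N_SAT) / (3.0 * N_SAT)
    e_sat = E_SAT + 0.5 * K_SAT * x**2 + Q_SAT * x**3 / 6.0
    e_sym = E_SYM + L_SYM * x + 0.5 * K_SYM * x**2
    return e_sat + e_sym * delta**2

e = energy_per_nucleon(0.32, delta=0.8)   # e.g. roughly 2 n_sat, neutron-rich matter
```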
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
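The shrinkage logic described above can be sketched with a beta-binomial empirical Bayes estimator. The method-of-moments fit below is deliberately crude (it ignores the binomial sampling variance that a full treatment would subtract), and the counts in the usage line are hypothetical, not the study's data.

```python
import numpy as np

def eb_shrink(successes, trials):
    """Empirical Bayes (beta-binomial) shrinkage of proportions: fit a
    Beta(a, b) prior by method of moments, then return posterior means.
    Small-sample proportions are pulled strongly toward the overall mean."""
    successes = np.asarray(successes, float)
    trials = np.asarray(trials, float)
    p = successes / trials
    m = p.mean()
    v = max(p.var(ddof=1), 1e-12)
    strength = max(m * (1.0 - m) / v - 1.0, 1e-6)   # prior sample size a + b
    a, b = m * strength, (1.0 - m) * strength
    return (successes + a) / (trials + a + b)

# Hypothetical (parasitized nests, total nests) per species:
shrunk_rates = eb_shrink([1, 9, 3, 14], [4, 12, 30, 20])
```

Extreme values that survive this shrinkage are the ones for which the evidence of genuinely extreme rates is strongest, which is the logic applied to the parasitism data above.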
A comparison of viscoelastic damping models
NASA Technical Reports Server (NTRS)
Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.
1993-01-01
Modern finite element methods (FEMs) enable the precise modeling of mass and stiffness properties in structures that were in the past overwhelmingly large and complex. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and strongly frequency-dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency-dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency-dependent effects by adding coordinates to the existing system, thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the fractional calculus method and requires fewer empirical parameters to model the frequency dependence, at the expense of linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish, and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.
Simpson-Southward, Chloe; Waller, Glenn; Hardy, Gillian E
2017-11-01
Clinical supervision for psychotherapies is widely used in clinical and research contexts. Supervision is often assumed to ensure therapy adherence and positive client outcomes, but there is little empirical research to support this contention. Regardless, there are numerous supervision models, but it is not known how consistent their recommendations are. This review aimed to identify which aspects of supervision are consistent across models, and which are not. A content analysis of 52 models revealed 71 supervisory elements. Models focus more on supervisee learning and/or development (88.46%), but less on emotional aspects of work (61.54%) or managerial or ethical responsibilities (57.69%). Most models focused on the supervisee (94.23%) and supervisor (80.77%), rather than the client (48.08%) or monitoring client outcomes (13.46%). Finally, none of the models were clearly or adequately empirically based. Although we might expect clinical supervision to contribute to positive client outcomes, the existing models have limited client focus and are inconsistent. Therefore, it is not currently recommended that one should assume that the use of such models will ensure consistent clinician practice or positive therapeutic outcomes. There is little evidence for the effectiveness of supervision. There is a lack of consistency in supervision models. Services need to assess whether supervision is effective for practitioners and patients. Copyright © 2017 John Wiley & Sons, Ltd.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
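Since the models compared above are all variants of the Feddes scheme, a small sketch of the standard (uncompensated) form may help fix ideas. The pressure-head thresholds below are illustrative placeholders; in practice they are crop-specific, and the onset of water stress (h3) varies with atmospheric demand.

```python
import numpy as np

def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-4.0, h4=-150.0):
    """Feddes-type stress reduction factor vs. pressure head h (m, negative).
    Zero near saturation (oxygen stress) and beyond wilting; linear ramps
    between the thresholds; optimal (alpha = 1) in between."""
    h = np.asarray(h, float)
    a = np.zeros_like(h)
    wet_ramp = (h <= h1) & (h > h2)    # oxygen stress easing off
    optimal = (h <= h2) & (h > h3)
    dry_ramp = (h <= h3) & (h > h4)    # water stress increasing
    a[wet_ramp] = (h1 - h[wet_ramp]) / (h1 - h2)
    a[optimal] = 1.0
    a[dry_ramp] = (h[dry_ramp] - h4) / (h3 - h4)
    return a

def rwu_profile(h, root_frac, t_pot):
    """Uncompensated Feddes RWU per layer: alpha * root fraction * potential
    transpiration. The compensated variants discussed above redistribute
    uptake from stressed to unstressed layers."""
    return feddes_alpha(h) * np.asarray(root_frac, float) * t_pot
```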
Modeling bias and variation in the stochastic processes of small RNA sequencing
Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-01-01
Abstract The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
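The linear-quadratic mean-variance relation mentioned above can be fit directly from replicate libraries. The least-squares sketch below is a stand-in for intuition only; the paper's actual fits use the GAMLSS distributional regression framework.

```python
import numpy as np

def fit_dispersion(counts):
    """Fit phi in the linear-quadratic law var = mu + phi * mu^2 across
    sequences (rows = sequences, columns = replicate libraries), by
    regressing (var - mu) on mu^2 through the origin."""
    counts = np.asarray(counts, float)
    mu = counts.mean(axis=1)
    var = counts.var(axis=1, ddof=1)
    keep = mu > 0
    phi = np.sum((var[keep] - mu[keep]) * mu[keep] ** 2) / np.sum(mu[keep] ** 4)
    return max(phi, 0.0)
```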
Technical Manual for the SAM Physical Trough Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, M. J.; Gilman, P.
2011-06-01
NREL, in conjunction with Sandia National Laboratories and the U.S. Department of Energy, developed the System Advisor Model (SAM) analysis tool for renewable energy system performance and economic analysis. This paper documents the technical background and engineering formulation for one of SAM's two parabolic trough system models. The Physical Trough model calculates performance relationships based on physical first principles where possible, allowing the modeler to predict electricity production for a wider range of component geometries than is possible in the Empirical Trough model. This document describes the major parabolic trough plant subsystems in detail, including the solar field, power block, thermal storage, piping, auxiliary heating, and control systems. The model makes use of both existing subsystem performance modeling approaches and new approaches developed specifically for SAM.
Technology dependence and health-related quality of life: a model.
Marden, Susan F
2005-04-01
This paper presents a new theoretical model to explain people's diverse responses to therapeutic health technology by characterizing the relationship between technology dependence and health-related quality of life (HRQL). Technology dependence has been defined as reliance on a variety of devices, drugs and procedures to alleviate or remedy acute or chronic health problems. Health professionals must ensure that these technologies result in positive outcomes for those who must rely on them, while minimizing the potential for unintended consequences. Little research exists to inform health professionals about how dependency on therapeutic technology may affect patient-reported outcomes such as HRQL. Organizing frameworks to focus such research are also limited. Generated from the synthesis of three theoretical frameworks and empirical research, the model proposes that attitudes towards technology dependence affect HRQL through a person's illness representations or commonsense beliefs about their illness. Symptom distress, illness history, age and gender also influence the technology dependence and HRQL relationship. Five concepts form the major components of the model: a) attitudes towards technology dependence, b) illness representation, c) symptom distress, d) HRQL and e) illness history. The model is proposed as a guide for clinical nursing research into the impact of a wide variety of therapeutic health care interventions on HRQL. Empirical validation of the model is needed to test its generality.
Population, internal migration, and economic growth: an empirical analysis.
Moreland, R S
1982-01-01
The role of population growth in the development process has received increasing attention during the last 15 years, as manifested in the literature in 3 broad categories. In the 1st category, the effects of rapid population growth on the growth of income have been studied with the use of simulation models, which sometimes include endogenous population growth. The 2nd category of the literature is concerned with theoretical and empirical studies of the economic determinants of various demographic rates--most usually fertility. Internal migration and dualism is the 3rd population-development category to receive attention. An attempt is made to synthesize developments in these 3 categories by estimating, from a consistent set of data, a 2-sector economic-demographic model in which the major demographic rates are endogenous. Because the interactions between economic and demographic variables are nonlinear and complex, the indirect effects of changes in a particular variable may depend upon the balance of numerical coefficients. For this reason it was felt that the model should be empirically grounded. A brief overview of the model is provided, and the model is compared to some similar existing models. Estimation of the model's 9 behavior equations is discussed, followed by a "base run" simulation of a developing country "stereotype" and a report of a number of policy experiments. The relatively new field of economic determinants of demographic variables was drawn upon in estimating equations to endogenize demographic phenomena that are frequently left exogenous in simulation models. The fertility and labor force participation rate functions are fairly standard, but a step beyond the existing literature was taken in the life expectancy and intersectoral migration equations. On the economic side, sectoral savings functions were estimated, and it was found that the marginal propensity to save is lower in agriculture than in nonagriculture. Testing for the effect of a population's age structure on savings, rather than assuming a particular direction as the Coale-Hoover and Simon models do, it was found that a higher proportion of children competes with savings in agriculture but complements savings in industrial areas. This is consistent with the economic value of children in agricultural and nonagricultural regions of less developed countries. The estimated production functions showed that marginal products of labor were considerably higher in agriculture than in nonagriculture. As with other simulation models, the effect of reducing fertility was to accelerate income growth. Reductions in rural fertility were more equitable and raised the overall level of per capita income more than similar efforts directed to urban areas only.
Ionospheric convection inferred from interplanetary magnetic field-dependent Birkeland currents
NASA Technical Reports Server (NTRS)
Rasmussen, C. E.; Schunk, R. W.
1988-01-01
Computer simulations of ionospheric convection have been performed, combining empirical models of Birkeland currents with a model of ionospheric conductivity in order to investigate IMF-dependent convection characteristics. Birkeland currents representing conditions in the northern polar cap for a negative IMF By component are used. Two possibilities are considered: (1) the morning cell shifting into the polar cap as the IMF turns northward, with this cell and a distorted evening cell providing for sunward flow in the polar cap; and (2) the existence of a three-cell pattern when the IMF is strongly northward.
Computer assisted analysis of research-based teaching method in English newspaper reading teaching
NASA Astrophysics Data System (ADS)
Jie, Zheng
2017-06-01
In recent years, the teaching of English newspaper reading has been developing rapidly. However, the teaching effect of the existing course is not ideal. This paper applies the research-based teaching model to English newspaper reading teaching, investigates the current situation in higher vocational colleges, and analyzes the problems. It designs a teaching model for English newspaper reading and carries out a computer-assisted empirical study. The results show that this teaching model can stimulate learners' interest and comprehensively improve their ability to read newspapers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milora, S. L.
1976-02-01
The use of the code NLIN (IBM Share Program No. 1428) to obtain empirical thermodynamic pressure-volume-temperature (P-V-T) relationships for substances in the gaseous and dense gaseous states is described. When sufficient experimental data exist, the code STATEQ will provide least-squares estimates for the 21 parameters of the Martin model. Another code, APPROX, is described which also obtains parameter estimates for the model by making use of the approximate generalized behavior of fluids. Use of the codes is illustrated in obtaining thermodynamic representations for isobutane.
On the optically thick winds of Wolf-Rayet stars
NASA Astrophysics Data System (ADS)
Gräfener, G.; Owocki, S. P.; Grassitelli, L.; Langer, N.
2017-12-01
Context. The classical Wolf-Rayet (WR) phase is believed to mark the end stage of the evolution of massive stars with initial masses higher than 25M⊙. Stars in this phase expose their stripped cores with the products of H- or He-burning at their surface. They develop strong, optically thick stellar winds that are important for the mechanical and chemical feedback of massive stars, and that determine whether the most massive stars end their lives as neutron stars or black holes. The winds of WR stars are currently not well understood, and their inclusion in stellar evolution models relies on uncertain empirical mass-loss relations. Aims: We investigate theoretically the mass-loss properties of H-free WR stars of the nitrogen sequence (WN stars). Methods: We connected stellar structure models for He stars with wind models for optically thick winds and assessed the degree to which these two types of models can simultaneously fulfil their respective sonic-point conditions. Results: Fixing the outer wind law and terminal wind velocity ν∞, we obtain unique solutions for the mass-loss rates of optically thick, radiation-driven winds of WR stars in the phase of core He-burning. The resulting mass-loss relations as a function of stellar parameters agree well with previous empirical relations. Furthermore, we encounter stellar mass limits below which no continuous solutions exist. While these mass limits agree with observations of WR stars in the Galaxy, they contradict observations in the LMC. Conclusions: While our results in particular confirm the slope of often-used empirical mass-loss relations, they imply that only part of the observed WN population can be understood in the framework of the standard assumptions of a smooth transonic flow and compact stellar core. This means that alternative approaches such as a clumped and inflated wind structure or deviations from the diffusion limit at the sonic point may have to be invoked. Qualitatively, the existence of mass limits for the formation of WR-type winds may be relevant for the non-detection of low-mass WR stars in binary systems, which are believed to be progenitors of Type Ib/c supernovae. The sonic-point conditions derived in this work may provide a possibility to include optically thick winds in stellar evolution models in a more physically motivated form than in current models.
Soil Erosion as a stochastic process
NASA Astrophysics Data System (ADS)
Casper, Markus C.
2015-04-01
The main tools for estimating the risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales; they do not account for many driving factors that are in the scope of scenario-related analysis. In addition, the physically based models contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and erosion resistance (also called "critical shear stress" or "critical stream power") in particular are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution); consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our actual models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only some minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion process is described: aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate, which deliver information on the spatial and temporal structure of soil and surface properties and processes.
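As a concrete illustration of what "using stochastic variables and parameter distributions directly" could look like, the sketch below draws critical stream power and detachability from assumed distributions instead of using averaged values. The distribution families and all parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def erosion_rate_mc(stream_power, n=10_000):
    """Monte Carlo sketch: treat the erosion threshold and detachability as
    random variables rather than effective averages. Returns the mean
    detachment rate and the probability that erosion occurs at all."""
    omega_crit = rng.lognormal(mean=np.log(0.15), sigma=0.4, size=n)  # W m^-2, assumed
    k_det = rng.gamma(shape=2.0, scale=0.5e-3, size=n)                # kg J^-1, assumed
    excess = np.maximum(stream_power - omega_crit, 0.0)
    rates = k_det * excess                     # detachment rate, kg m^-2 s^-1
    return rates.mean(), (excess > 0).mean()

mean_rate, p_erosion = erosion_rate_mc(stream_power=0.2)
```

With averaged parameters the same stream power would give a single yes/no threshold answer; the stochastic version instead yields a probability of erosion and a distribution of rates.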
Implementation of Advanced Two Equation Turbulence Models in the USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Massey, Steven J.; Abdol-Hamid, Khaled S.
2000-01-01
USM3D is a widely-used unstructured flow solver for simulating inviscid and viscous flows over complex geometries. The current version (version 5.0) of USM3D, however, does not have advanced turbulence models to accurately simulate complicated flow. We have implemented two modified versions of the original Jones and Launder k-epsilon "two-equation" turbulence model and the Girimaji algebraic Reynolds stress model in USM3D. Tests have been conducted for three flat plate boundary layer cases, a RAE2822 airfoil and an ONERA M6 wing. The results are compared with those from direct numerical simulation, empirical formulae, theoretical results, and the existing Spalart-Allmaras one-equation model.
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
On the methods for determining the transverse dispersion coefficient in river mixing
NASA Astrophysics Data System (ADS)
Baek, Kyong Oh; Seo, Il Won
2016-04-01
In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted in either laboratory channels or natural rivers. From the results of this study, it can be concluded that, when the longitudinal as well as the transverse dispersion coefficient must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed among the observation methods to calculate dispersion coefficients. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When tracer data are not available, either theoretical or empirical equations from the estimation method can be used to calculate the dispersion coefficient using geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that the equations of Baek and Seo [3] predicted reasonable values, while the equations of Fischer [23] and Boxall and Guymer (2003) overestimated by factors of ten to one hundred. Among the existing empirical equations, those of Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation of Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.
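When only geometric and hydraulic data are available, empirical estimation equations of the kind assessed above typically scale the transverse dispersion coefficient with flow depth and shear velocity. The sketch below uses the generic form D_t = θ h u*; the coefficient value is a commonly quoted assumption for natural streams, not a result endorsed by this study.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def transverse_dispersion(depth, slope, theta=0.6):
    """Generic empirical form D_t = theta * h * u_*, with shear velocity
    u_* = sqrt(g h S). theta is dimensionless; 0.6 is an assumed value
    (laboratory channels tend to give smaller coefficients)."""
    u_star = np.sqrt(G * depth * slope)
    return theta * depth * u_star

dt = transverse_dispersion(depth=2.0, slope=5e-4)  # hypothetical reach, m^2 s^-1
```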
Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.
2014-01-01
Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that are not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798
Observations and modeling of San Diego beaches during El Niño
NASA Astrophysics Data System (ADS)
Doria, André; Guza, R. T.; O'Reilly, William C.; Yates, M. L.
2016-08-01
Subaerial sand levels were observed at five southern California beaches for 16 years, including notable El Niños in 1997-98 and 2009-10. An existing, empirical shoreline equilibrium model, driven with wave conditions estimated using a regional buoy network, simulates well the seasonal changes in subaerial beach width (e.g. the cross-shore location of the MSL contour) during non-El Niño years, similar to previous results with a 5-year time series lacking an El Niño winter. The existing model correctly identifies the 1997-98 El Niño winter conditions as more erosive than 2009-10, but overestimates shoreline erosion during both El Niños. The good skill of the existing equilibrium model in typical conditions does not necessarily extrapolate to extreme erosion on these beaches, where a few-meters-thick sand layer often overlies more resistant layers. The modest over-prediction of the 2009-10 El Niño is reduced by gradually decreasing the model mobility of highly eroded shorelines (simulating cobbles, kelp wrack, shell hash, or other stabilizing layers). Over-prediction during the more severe 1997-98 El Niño is corrected by stopping model erosion when resilient surfaces (identified with aerial imagery) are reached. The trained model provides a computationally simple (a nonlinear first-order differential equation) representation of the observed relationship between incident waves and shoreline change.
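The "computationally simple" model referred to above is, in the form popularized by Yates et al. (2009), a first-order ODE in shoreline position. The sketch below integrates that form with placeholder coefficients (in applications they are fit per beach) and adds an optional floor that imitates the resistant-surface modification described in the abstract.

```python
import numpy as np

def simulate_shoreline(E, dt, s0, a=-0.005, b=1.5,
                       c_acc=1e-3, c_ero=2e-3, s_min=None):
    """Equilibrium shoreline model: dS/dt = -C^{+/-} sqrt(E) * (E - E_eq(S)),
    with E_eq(S) = a*S + b. E is the wave-energy time series, dt the step.
    All coefficient values here are illustrative placeholders."""
    s = np.empty(len(E) + 1)
    s[0] = s0
    for i, e in enumerate(E):
        dis = e - (a * s[i] + b)             # wave-energy disequilibrium
        c = c_ero if dis > 0 else c_acc      # erode when waves exceed equilibrium
        s[i + 1] = s[i] - dt * c * np.sqrt(e) * dis
        if s_min is not None:                # resistant surface halts erosion
            s[i + 1] = max(s[i + 1], s_min)
    return s
```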
CHARMM Drude Polarizable Force Field for Aldopentofuranoses and Methyl-aldopentofuranosides
Jana, Madhurima; MacKerell, Alexander D.
2015-01-01
An empirical all-atom CHARMM polarizable force field for aldopentofuranoses and methyl-aldopentofuranosides based on the classical Drude oscillator is presented. A single electrostatic model is developed for eight different diastereoisomers of aldopentofuranoses by optimizing the existing electrostatic and bonded parameters as transferred from ethers, alcohols and hexopyranoses to reproduce quantum mechanical (QM) dipole moments, furanose-water interaction energies and conformational energies. Optimization of selected electrostatic and dihedral parameters was performed to generate a model for methyl-aldopentofuranosides. Accuracy of the model was tested by reproducing experimental data for crystal intramolecular geometries and lattice unit cell parameters, aqueous-phase densities, and ring pucker and exocyclic rotamer populations as obtained from NMR experiments. In most cases the model is found to reproduce both QM data and experimental observables in an excellent manner, while for the remainder the level of agreement is in the satisfactory regime. In aqueous-phase simulations the monosaccharides have significantly enhanced dipoles as compared to the gas phase. The final model from this study is transferable for future studies on carbohydrates and can be used with the existing CHARMM Drude polarizable force field for biomolecules. PMID:26018564
Aspara, Jaakko; Klein, Jan F; Luo, Xueming; Tikkanen, Henrikki
2018-05-01
We conduct a systematic exploratory investigation of the effects of firms' existing service productivity on the success of their new service innovations. Although previous research extensively addresses service productivity and service innovation, this is the first empirical study that bridges the gap between these two research streams and examines the links between the two concepts. Based on a comprehensive data set of new service introductions in a financial services market over a 14-year period, we empirically explore the relationship between a firm's existing service productivity and the firm's success in introducing new services to the market. The results unveil a fundamental service productivity-service innovation dilemma: Being productive in existing services increases a firm's willingness to innovate new services proactively but decreases the firm's capabilities of bringing these services to the market successfully. We provide specific insights into the mechanism underlying the complex relationship between a firm's productivity in existing services, its innovation proactivity, and its service innovation success. For managers, we not only unpack and elucidate this dilemma but also demonstrate that a focused customer scope and growth market conditions may enable firms to mitigate the dilemma and successfully pursue service productivity and service innovation simultaneously.
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata from a patient's medical history and frequently contain multi-word concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams, and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
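One way to picture the context-weighted Dirichlet hyper-parameter described above is to scale a symmetric base prior by topic distributions drawn from the patient's history and from documents of the same type. The combination rule and weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contextual_alpha(base_alpha, patient_counts, doc_type_counts,
                     w_patient=0.5, w_doc=0.5):
    """Sketch of a context-weighted Dirichlet hyper-parameter: topics that
    appear often in the patient's history or in same-type documents get a
    boosted prior weight for the current document."""
    p_pat = np.asarray(patient_counts, float)
    p_doc = np.asarray(doc_type_counts, float)
    p_pat = p_pat / max(p_pat.sum(), 1.0)
    p_doc = p_doc / max(p_doc.sum(), 1.0)
    k = len(p_pat)
    context = w_patient * p_pat + w_doc * p_doc
    return base_alpha * (1.0 + k * context)   # asymmetric per-document prior
```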
A Behavior-Analytic Account of Motivational Interviewing
ERIC Educational Resources Information Center
Christopher, Paulette J.; Dougher, Michael J.
2009-01-01
Several published reports have now documented the clinical effectiveness of motivational interviewing (MI). Despite its effectiveness, there are no generally accepted or empirically supported theoretical accounts of its effects. The theoretical accounts that do exist are mentalistic, descriptive, and not based on empirically derived behavioral…
DOT National Transportation Integrated Search
2006-01-01
This project evaluated the procedures proposed by the Mechanistic-Empirical Pavement Design Guide (MEPDG) to characterize existing hot-mix asphalt (HMA) layers for rehabilitation purposes. Thirty-three cores were extracted from nine sites in Virginia...
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. It almost always reproduces existing data to within 25% and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
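Operationally, the proposed correlation amounts to a regional regression of capture cross sections against the two-neutron separation energy. A minimal sketch, assuming one works with log10 of the measured cross sections within a single structural region and takes S_2n from a mass table:

```python
import numpy as np

def predict_log_cs(s2n_known, log_cs_known, s2n_new):
    """Regional correlation sketch: fit log10(sigma) linearly against the
    two-neutron separation energy S_2n for nuclei with measured capture
    cross sections, then evaluate at nuclei lacking measurements. A linear
    form is an assumption; the paper's regional systematics guide the fit."""
    slope, intercept = np.polyfit(s2n_known, log_cs_known, deg=1)
    return slope * np.asarray(s2n_new, float) + intercept
```

Because S_2n is known (or well extrapolated) far from stability, a fit of this kind turns what would be a model extrapolation into an interpolation in the mass observable, which is the key point of the method.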
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda
2014-12-19
The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
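The core of the proposed fractionation framework is a competitive Langmuir equilibrium across organic classes. A minimal sketch, with hypothetical concentrations and Langmuir coefficients standing in for the model-compound values used in the paper:

```python
import numpy as np

def surface_coverage(conc, alpha):
    """Competitive Langmuir adsorption equilibrium at the bubble surface:
    theta_i = alpha_i * C_i / (1 + sum_j alpha_j * C_j).
    conc: bulk concentrations of each organic class; alpha: Langmuir
    coefficients. Higher-affinity classes crowd the others off the film."""
    conc = np.asarray(conc, float)
    alpha = np.asarray(alpha, float)
    x = alpha * conc
    return x / (1.0 + x.sum())

# Hypothetical lipid-like, protein-like, polysaccharide-like classes:
theta = surface_coverage([1e-6, 5e-6, 2e-5], [5e5, 5e4, 1e3])
organic_fraction = theta.sum()   # fraction of bubble surface covered by organics
```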
Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D
Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration--in which models inform empirical research, which in turn informs models (Fig. 1)--is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical-model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical-model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical-model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical-model integration?
Investigation of pressure drop in capillary tube for mixed refrigerant Joule-Thomson cryocooler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ardhapurkar, P. M.; Sridharan, Arunkumar; Atrey, M. D.
2014-01-29
A capillary tube is commonly used in small-capacity refrigeration and air-conditioning systems. It is also a preferred expansion device in mixed refrigerant Joule-Thomson (MR J-T) cryocoolers, since it is inexpensive and simple in configuration. However, the flow inside a capillary tube is complex, since the flashing process that occurs in refrigeration and air-conditioning systems is metastable. A mixture of refrigerants such as nitrogen, methane, ethane, propane and iso-butane expands below its inversion temperature in the capillary tube of an MR J-T cryocooler and reaches cryogenic temperatures. The mass flow rate of the refrigerant mixture circulating through the capillary tube depends on the pressure difference across it. Many empirical correlations exist that predict the pressure drop across the capillary tube; however, they have not been tested for refrigerant mixtures or for the operating conditions of the cryocooler. The present paper assesses the existing empirical correlations for predicting the overall pressure drop across the capillary tube of the MR J-T cryocooler. The empirical correlations refer to homogeneous as well as separated flow models. Experiments are carried out to measure the overall pressure drop across the capillary tube for the cooler. Three different compositions of refrigerant mixture are used to study the pressure drop variations. The predicted overall pressure drop across the capillary tube is compared with the experimentally obtained value. The predictions obtained using the homogeneous model show a better match with the experimental results than those from separated flow models.
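Among the correlation families assessed, the homogeneous model treats the two-phase mixture as a single pseudo-fluid. The sketch below integrates a Blasius-type frictional gradient along the tube and adds an acceleration term; the linear quality profile and the input values in the usage line are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

def dp_homogeneous(G, D, L, x, rho_l, rho_g, mu_l, mu_g, n=200):
    """Homogeneous two-phase pressure drop sketch for a capillary tube.
    G: mass flux (kg m^-2 s^-1); D, L: tube diameter and length (m);
    x: (inlet, outlet) vapor quality, assumed to vary linearly along the
    tube; rho/mu: liquid and vapor densities (kg m^-3) and viscosities (Pa s)."""
    z = np.linspace(0.0, 1.0, n)
    xq = x[0] + (x[1] - x[0]) * z                       # local quality
    rho = 1.0 / (xq / rho_g + (1.0 - xq) / rho_l)       # mixture density
    mu = 1.0 / (xq / mu_g + (1.0 - xq) / mu_l)          # McAdams mixture viscosity
    f = 0.316 * (G * D / mu) ** -0.25                   # Blasius friction factor
    dp_fric = np.trapz(f * G**2 / (2.0 * D * rho), z) * L
    dp_acc = G**2 * (1.0 / rho[-1] - 1.0 / rho[0])      # flow acceleration term
    return dp_fric + dp_acc

dp = dp_homogeneous(G=300.0, D=1.0e-3, L=2.0, x=(0.05, 0.30),
                    rho_l=480.0, rho_g=8.0, mu_l=9e-5, mu_g=1e-5)  # hypothetical values
```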
Scaling laws between population and facility densities.
Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun
2009-08-25
When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the density of facilities D and that of population ρ should follow a simple power law D ∝ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∝ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
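Estimating α for a given facility type reduces to a log-log regression of facility density on population density; a one-function sketch:

```python
import numpy as np

def facility_exponent(pop_density, fac_density):
    """Estimate alpha in D ~ rho^alpha by least squares in log-log space;
    inputs are per-region population and facility densities (same length)."""
    slope, _intercept = np.polyfit(np.log(pop_density), np.log(fac_density), 1)
    return slope
```

Under the model above, commercial facility types should return a slope near 1 and public facility types a slope near 2/3.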
The Impact of United States Monetary Policy in the Crude Oil futures market
NASA Astrophysics Data System (ADS)
Padilla-Padilla, Fernando M.
This research examines the empirical impact that United States monetary policy, acting through the federal funds interest rate, has on the volatility of the crude oil price in the futures market. Prior research has shown how macroeconomic events and variables have impacted different financial markets over short- and long-term horizons. After the variables were tested and decomposed, the two stationary time series were analyzed using a Vector Autoregressive Model (VAR). The empirical evidence shows, with statistical significance, a direct relationship when explaining crude oil prices as a function of fed funds rates at lag (t-1) and an indirect relationship when explained as a function of fed funds rates at lag (t-2). These results partially address gaps in the literature on the implications monetary policy has for the crude oil futures market.
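A sketch of the estimation step using the VAR implementation in statsmodels is below; the placeholder random series stand in for the differenced (stationary) oil-price and fed funds series, and two lags match the (t-1) and (t-2) terms discussed above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
# Placeholder stationary series standing in for the differenced log crude oil
# futures price and the differenced fed funds rate, aligned by date.
data = pd.DataFrame({"d_oil": rng.standard_normal(200),
                     "d_ffr": rng.standard_normal(200)})

results = VAR(data).fit(2)   # two lags, matching the (t-1) and (t-2) terms
print(results.summary())
irf = results.irf(10)        # impulse-response functions over 10 periods
```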
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options--from simple spatial smoothers to model-based methods--for estimating such rates exists, there are relatively few side-by-side comparisons, especially not with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
FONAGY, PETER
2003-01-01
The paper discusses the precarious position of psychoanalysis, a therapeutic approach which historically has defined itself by freedom from constraint and counted treatment length not in terms of number of sessions but in terms of years, in today's era of empirically validated treatments and brief structured interventions. The evidence that exists for the effectiveness of psychoanalysis as a treatment for psychological disorder is reviewed. The evidence base is significant and growing, but less than might meet criteria for an empirically based therapy. The author goes on to argue that the absence of evidence may be symptomatic of the epistemic difficulties that psychoanalysis faces in the context of 21st century psychiatry, and examines some of the philosophical problems faced by psychoanalysis as a model of the mind. Finally some changes necessary in order to ensure a future for psychoanalysis and psychoanalytic therapies within psychiatry are suggested. PMID:16946899
The Use of Empirical Data Sources in HRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce Hallbert; David Gertman; Julie Marble
This paper presents a review of available information related to human performance to support Human Reliability Analysis (HRA) performed for nuclear power plants (NPPs). A number of data sources are identified as potentially useful. These include NPP licensee event reports (LERs), augmented inspection team (AIT) reports, operator requalification data, results from the literature in experimental psychology, and the Aviation Safety Reporting System (ASRS). The paper discusses how utilizing such information improves our capability to model and quantify human performance. In particular, the paper discusses how information related to performance shaping factors (PSFs) can be extracted from empirical data to determine their effect sizes, their relative effects, as well as their interactions. The paper concludes that appropriate use of existing sources can help address some of the important issues we are currently facing in HRA.
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
A better sequence-read simulator program for metagenomics.
Johnson, Stephen; Trost, Brett; Long, Jeffrey R; Pittet, Vanessa; Kusalik, Anthony
2014-01-01
There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data following non-parametric read-length distributions and quality profiles based on empirically-derived information from metagenomics sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
Role of local network oscillations in resting-state functional connectivity.
Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo
2011-07-01
Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in the BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions, or functional connectivity (FC), have led to the identification of several widely distributed resting-state networks (RSNs). These slow dynamics seem to be highly structured by anatomical connectivity, but the mechanism behind them and their relationship with neural activity, particularly in the gamma frequency range, remain largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
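A toy Euler integration of the time-delayed Kuramoto dynamics described above is sketched below; the coupling matrix, delays, and parameter values are random placeholders, not the empirical connectome used in the paper.

```python
# Toy simulation of dtheta_i/dt = omega + (K/N) * sum_j C_ij *
# sin(theta_j(t - tau_ij) - theta_i(t)); all parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 20, 5.0, 1e-3, 50_000
omega = 2 * np.pi * 40.0                      # 40 Hz gamma-band frequency
C = rng.random((N, N)); np.fill_diagonal(C, 0)
tau = rng.uniform(0.005, 0.02, (N, N))        # conduction delays (seconds)
lag = np.maximum((tau / dt).astype(int), 1)   # delays in integer steps

max_lag = lag.max()
theta = np.zeros((steps + max_lag, N))
theta[:max_lag] = rng.uniform(0, 2 * np.pi, (max_lag, N))  # random history

for t in range(max_lag, steps + max_lag):
    delayed = theta[t - lag, np.arange(N)]    # theta_j(t - tau_ij), shape (N, N)
    coupling = (C * np.sin(delayed - theta[t - 1][:, None])).sum(axis=1)
    theta[t] = theta[t - 1] + dt * (omega + (K / N) * coupling)

order = np.abs(np.exp(1j * theta[max_lag:]).mean(axis=1))  # global synchrony
```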
Education And Gender Bias in the Sex Ratio At Birth: Evidence From India
ECHÁVARRI, REBECA A.; EZCURRA, ROBERTO
2010-01-01
This article investigates the possible existence of a nonlinear link between female disadvantage in natality and education. To this end, we devise a theoretical model based on the key role of social interaction in explaining people’s acquisition of preferences, which justifies the existence of a nonmonotonic relationship between female disadvantage in natality and education. The empirical validity of the proposed model is examined for the case of India, using district-level data. In this context, our econometric analysis pays particular attention to the role of spatial dependence to avoid any potential problems of misspecification. The results confirm that the relationship between the sex ratio at birth and education in India follows an inverted U-shape. This finding is robust to the inclusion of additional explanatory variables in the analysis, and to the choice of the spatial weight matrix used to quantify the spatial interdependence between the sample districts. PMID:20355693
Mäs, Michael; Flache, Andreas
2013-01-01
Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals' striving to amplify differences from disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as the central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence. PMID:24312164
The Army of Zimbabwe: A Role Model for Namibia
1990-03-02
centuries. A limited sense of nationhood started to exist. Further south on the African continent Zulu dissidents broke from the main empire and in...important role. One of the manifestations of this unity would emerge in the creation of the new Zimbabwe Defense Forces...prove to be very helpful in the months to come, as BMATT arrived, set up, came on line and started its difficult mission. The creation of the first
The spread of gossip in American schools
NASA Astrophysics Data System (ADS)
Lind, P. G.; da Silva, L. R.; Andrade, J. S., Jr.; Herrmann, H. J.
2007-06-01
Gossip is defined as a rumor that specifically targets one individual and propagates essentially only within that individual's friendship connections. How fast and how far gossip can spread is assessed quantitatively for the first time in this study. For that purpose we introduce the "spread factor" and study it on empirical networks of school friendships as well as on various models for social connections. We discover that there exists an ideal number of friendship connections an individual should have to minimize the danger of gossip propagation.
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries. Computational time increases as finer grids are used (for accuracy). A strong tool, but it takes time to set up and run. MINIVER: uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: a rigid command-line interface; lackluster, unorganized documentation; no central control; multiple versions exist and have diverged.
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°RH), the difference of circular vertical and horizontal backscattering coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height (RMSheight). We examined the performance of FRS-1 in retrieving SM under wheat crop at tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data rather than using an existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived using the 5.35 GHz (C-band) image of RISAT-1, and to RMSheight. The roughness component derived in terms of RMSheight showed a good positive correlation with σ°RV − σ°RH (R² = 0.65). By considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMSheight), an SEM was developed in which predicted (volumetric) SM depends on σ°RH, σ°RV − σ°RH, and RMSheight. This SEM showed R² of 0.87, adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SMObserved) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S²d) = 0.004. The developed SEM performed better in estimating SM than the Topp empirical model, which is based only on σ°. By using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.
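As a concrete illustration of fitting a semi-empirical model of this form, the sketch below regresses volumetric SM on σ°RH, σ°RV − σ°RH, and RMS height by ordinary least squares; all arrays are placeholder values, not the study's field data.

```python
# A minimal OLS sketch of SM = b0 + b1*sigma0_RH + b2*(sigma0_RV - sigma0_RH)
# + b3*RMS_height; the observations below are illustrative placeholders.
import numpy as np

sigma_rh = np.array([-14.2, -12.8, -11.5, -13.1, -10.9])  # dB, illustrative
sigma_diff = np.array([3.1, 2.7, 2.2, 2.9, 2.0])          # sigma0_RV - sigma0_RH
rms_height = np.array([1.1, 0.9, 1.4, 1.2, 0.8])          # cm, illustrative
sm_obs = np.array([0.18, 0.22, 0.27, 0.20, 0.30])         # volumetric SM

X = np.column_stack([np.ones_like(sm_obs), sigma_rh, sigma_diff, rms_height])
coef, *_ = np.linalg.lstsq(X, sm_obs, rcond=None)         # fitted b0..b3
sm_pred = X @ coef
rmse = np.sqrt(np.mean((sm_pred - sm_obs) ** 2))          # validation metric
```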
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
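The contrast between the two strategies can be made concrete with a toy one-step task: the model-free agent caches values in a look-up table updated by trial and error, while the model-based agent recomputes values from a model of outcome probabilities. All numbers below are illustrative.

```python
# Schematic contrast of model-free caching vs. model-based planning.
import numpy as np

rng = np.random.default_rng(1)
p_reward = {"left": 0.7, "right": 0.4}     # true environment

# Model-free: incremental TD-style update of a cached Q table.
q = {"left": 0.0, "right": 0.0}
alpha = 0.1
for _ in range(500):
    a = max(q, key=q.get) if rng.random() > 0.1 else rng.choice(list(q))
    r = float(rng.random() < p_reward[a])
    q[a] += alpha * (r - q[a])             # cheap, but slow to track change

# Model-based: plan directly in a (here, learned) model of the environment.
model = {"left": 0.68, "right": 0.41}      # estimated outcome probabilities
v = {a: p for a, p in model.items()}       # expected reward under the model
best = max(v, key=v.get)                   # accurate, but costlier to compute
```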
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
Analytical approximation of the InGaZnO thin-film transistors surface potential
NASA Astrophysics Data System (ADS)
Colalongo, Luigi
2016-10-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of thin-film transistors, and in turn of indium gallium zinc oxide (IGZO) TFTs, available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their diffusion in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough, in particular for modeling transconductances and transcapacitances. In this work we present an extremely accurate (in the range of nV) and computationally efficient non-iterative approximation of the surface potential that can serve as a basis for advanced surface-potential-based IGZO TFT models.
A review of physically based models for soil erosion by water
NASA Astrophysics Data System (ADS)
Le, Minh-Hoang; Cerdan, Olivier; Sochala, Pierre; Cheviron, Bruno; Brivois, Olivier; Cordier, Stéphane
2010-05-01
Physically-based models rely on fundamental physical equations describing stream flow and the generation of sediment and associated nutrients in a catchment. This paper reviews several existing erosion and sediment transport approaches. The processes of erosion include soil detachment, transport, and deposition; we present the various forms of equations and empirical formulas used when modelling and quantifying each of these processes. In particular, we detail models describing rainfall and infiltration effects and the system of equations used to describe overland flow and the evolution of the topography. We also present formulas for the flow transport capacity and the erodibility functions. Finally, we present some recent numerical schemes for approximating the shallow water equations and their coupling with infiltration and erosion source terms.
Cardiac surgery antibiotic prophylaxis and calculated empiric antibiotic therapy.
Gorski, Armin; Hamouda, Khaled; Özkur, Mehmet; Leistner, Markus; Sommer, Sebastian-Patrick; Leyh, Rainer; Schimmer, Christoph
2015-03-01
Ongoing debate exists concerning the optimal choice and duration of antibiotic prophylaxis as well as the reasonable calculated empiric antibiotic therapy for hospital-acquired infections in critically ill cardiac surgery patients. A nationwide questionnaire was distributed to all German heart surgery centers concerning antibiotic prophylaxis and the calculated empiric antibiotic therapy. The response to the questionnaire was 87.3%. All clinics that responded use antibiotic prophylaxis, 79% perform it not longer than 24 h (single-shot: 23%; 2 doses: 29%; 3 doses: 27%; 4 doses: 13%; and >5 doses: 8%). Cephalosporin was used in 89% of clinics (46% second-generation, 43% first-generation cephalosporin). If sepsis is suspected, the following diagnostics are performed routinely: wound inspection 100%; white blood cell count 100%; radiography 99%; C-reactive protein 97%; microbiological testing of urine 91%, blood 81%, and bronchial secretion 81%; procalcitonin 74%; and echocardiography 75%. The calculated empiric antibiotic therapy (depending on the suspected focus) consists of a multidrug combination with broad-spectrum agents. This survey shows that existing national guidelines and recommendations concerning perioperative antibiotic prophylaxis and calculated empiric antibiotic therapy are well applied in almost all German heart centers. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Plausibility Judgments in Conceptual Change and Epistemic Cognition
ERIC Educational Resources Information Center
Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.
2016-01-01
Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…
Electron Transport in Hall Thrusters
NASA Astrophysics Data System (ADS)
McDonald, Michael Sean
Despite high technological maturity and a long flight heritage, computer models of Hall thrusters remain dependent on empirical inputs and a large part of thruster development to date has been heavily experimental in nature. This empirical approach will become increasingly unsustainable as new high-power thrusters tax existing ground test facilities and more exotic thruster designs stretch and strain the boundaries of existing design experience. The fundamental obstacle preventing predictive modeling of Hall thruster plasma properties and channel erosion is the lack of a first-principles description of electron transport across the strong magnetic fields between the cathode and anode. In spite of an abundance of proposed transport mechanisms, accurate assessments of the magnitude of electron current due to any one mechanism are scarce, and comparative studies of their relative influence on a single thruster platform simply do not exist. Lacking a clear idea of what mechanism(s) are primarily responsible for transport, it is understandably difficult for the electric propulsion scientist to focus his or her theoretical and computational tools on the right targets. This work presents a primarily experimental investigation of collisional and turbulent Hall thruster electron transport mechanisms. High-speed imaging of the thruster discharge channel at tens of thousands of frames per second reveals omnipresent rotating regions of elevated light emission, identified with a rotating spoke instability. This turbulent instability has been shown through construction of an azimuthally segmented anode to drive significant cross-field electron current in the discharge channel, and suggestive evidence points to its spatial extent into the thruster near-field plume as well. Electron trajectory simulations in experimentally measured thruster electromagnetic fields indicate that binary collisional transport mechanisms are not significant in the thruster plume, and experiments altering the bias potential of thruster surfaces show minimal effects from electron collisions with thruster surfaces. Taken together these results motivate further investigation of the rotating spoke instability and development of an analytic description to permit its inclusion in next generation Hall thruster models.
Ewuoso, Cornelius
2017-09-29
Empirical studies have now established that many patients make clinical decisions based on models other than the Anglo-American model of truth-telling and patient autonomy. Some scholars also add that current medical ethics frameworks and recent proposals for enhancing communication in the health professional-patient relationship have not adequately accommodated these models. In certain clinical contexts where health professionals and patients are motivated by significant cultural and religious values, these current frameworks cannot prevent communication breakdown, which can, in turn, jeopardize patient care, cause undue distress to a patient, or negatively impact his/her relationship with the community. These empirical studies recommend developing additional frameworks built around other models of truth-telling, frameworks that take seriously the significant value differences that sometimes exist between health professionals and patients, as well as patients' cultural/religious values and relational capacities. This paper contributes towards the development of one. Specifically, this study proposes a framework for truth-telling developed around an African model of truth-telling by drawing insights from the communitarian concept of ootọ́ among the Yoruba people of southwest Nigeria. I am optimistic that if this model is incorporated into current medical ethics codes and curricula, it will significantly enhance health professional-patient communication. © 2017 John Wiley & Sons Ltd.
Petus, Caroline; Devlin, Michelle; Teixera da Silva, Eduardo; Lewis, Stephen; Waterhouse, Jane; Wenger, Amelia; Bainbridge, Zoe; Tracey, Dieter
2018-05-01
Optically active water quality components (OAC) transported by flood plumes to nearshore marine environments affect light levels. The definition of minimum OAC concentrations that must be maintained to sustain sufficient light levels for conservation of light-dependent coastal ecosystems exposed to flood waters is necessary to guide management actions in adjacent catchments. In this study, a framework for defining OAC target concentrations using empirical light attenuation models is proposed and applied to the Wet Tropics region of the Great Barrier Reef (GBR) (Queensland, Australia). This framework comprises several steps: (i) light attenuation (Kd(PAR)) profiles and OAC measurements, including coloured dissolved organic matter (CDOM), chlorophyll-a (Chl-a) and suspended particulate matter (SPM) concentrations, collected in flood waters; (ii) empirical light attenuation models used to define the contribution of CDOM, Chl-a and SPM to the light attenuation; and (iii) translation of empirical models into manageable OAC target concentrations specific to wet season conditions. Results showed that (i) Kd(PAR) variability in the Wet Tropics flood waters is driven primarily by SPM and CDOM, with a lower contribution from Chl-a (r² = 0.5, p < 0.01); (ii) the relative contributions of each OAC vary across the different water bodies existing along flood waters, and the strongest Kd(PAR) predictions were achieved when the in-situ data were clustered into water bodies with similar satellite-derived colour characteristics ('brownish flood waters', r² = 0.8, p < 0.01; 'greenish flood waters', r² = 0.5, p < 0.01); and (iii) Kd(PAR) simulations are sensitive to the angular distribution of the light field in the clearest flood water bodies. The empirical models developed were used to translate regional light guidelines (established for the GBR) into manageable OAC target concentrations. Preliminary results suggested that a 90th percentile SPM concentration of 11.4 mg L-1 should be maintained during the wet season to sustain favourable light levels for Wet Tropics coral reefs and seagrass ecosystems exposed to 'brownish' flood waters. Additional data will be collected to validate the light attenuation models and the wet season target concentration, which in future will be incorporated into wider catchment modelling efforts to improve coastal water quality in the Wet Tropics and the GBR. Copyright © 2018 Elsevier Ltd. All rights reserved.
An empirical investigation of the efficiency effects of integrated care models in Switzerland
Reich, Oliver; Rapold, Roland; Flatscher-Thöni, Magdalena
2012-01-01
Introduction This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006–2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5%, respectively, of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy-makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care. PMID:22371691
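A minimal sketch of this kind of mixed-effects analysis with statsmodels is shown below; the file and column names (`cost`, `plan`, `risk_score`, `age`, `region`) are illustrative assumptions, not the Helsana variables.

```python
# A hedged sketch of a mixed-effects expenditure model; inputs are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("insurance_panel.csv")

# Fixed effects for insurance plan type and risk adjusters; random intercepts
# per region absorb unobserved local variation in expenditure.
model = smf.mixedlm("cost ~ C(plan) + risk_score + age", df,
                    groups=df["region"])
result = model.fit()
print(result.summary())  # plan coefficients ~ efficiency effects after adjustment
```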
Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow
NASA Astrophysics Data System (ADS)
Lambert, Gregory; Wapperom, Peter; Baird, Donald
2017-12-01
Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.
NASA Astrophysics Data System (ADS)
Howard, J. E.
2014-12-01
This study focusses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45 and 55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow-band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full-wave methods must be used.
A Critical Review of Digital Storyline-Enhanced Learning
ERIC Educational Resources Information Center
Novak, Elena
2015-01-01
Storyline is one of the major motivators that lead people to play video games. However, little empirical evidence exists on the instructional effectiveness of integrating a storyline into digital learning materials. This systematic literature review presents current empirical findings on the effects of a storyline game design element for human…
Empirical Bases for a Prekindergarten Curriculum for Disadvantaged Children.
ERIC Educational Resources Information Center
Di Lorenzo, Louis T.; And Others
This project was undertaken to establish a basis for a compensatory curriculum for disadvantaged preschool children by using existing empirical data to identify factors that predict success in reading comprehension and that differentiate the disadvantaged from the nondisadvantaged. The project focused on factors related to success in learning to…
An Empirical Investigation into Programming Language Syntax
ERIC Educational Resources Information Center
Stefik, Andreas; Siebert, Susanna
2013-01-01
Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming…
Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process
ERIC Educational Resources Information Center
Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.
2014-01-01
In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…
Considering Young People's Motives for Interactive Media Use
ERIC Educational Resources Information Center
van den Beemt, Antoine; Akkerman, Sanne; Simons, Robert-Jan
2011-01-01
Young people's increasing use of interactive media has led to assertions about possible consequences for education. Rather than following assertions, we argue for theory-driven empirical research as a basis for education renewal. First, we review the existing empirical research, concluding that there is almost no theory-driven research available.…
EMPIRE: Nuclear Reaction Model Code System for Data Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, M.; Capote, R.; Carlson, B.V.
EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
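The comparison can be sketched as follows: a closed-form Black-Scholes price next to a Monte Carlo price in which log-returns are Laplace (double-exponential) distributed with matched variance. The martingale correction and all parameter values below are illustrative assumptions, not the paper's calibration.

```python
# Black-Scholes call vs. a Laplace-return Monte Carlo call; toy parameters.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 0.25

# Black-Scholes benchmark.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_call = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Laplace returns with the same variance (var of Laplace(0, b) is 2*b^2).
rng = np.random.default_rng(2)
b = sigma * np.sqrt(T / 2.0)
x = rng.laplace(0.0, b, size=200_000)
x -= np.log(np.mean(np.exp(x)))            # enforce E[S_T] = S0 * exp(r*T)
st = S0 * np.exp(r * T + x)
mc_call = np.exp(-r * T) * np.mean(np.maximum(st - K, 0.0))
```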
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
A Multiscale Survival Process for Modeling Human Activity Patterns.
Zhang, Tianyang; Cui, Peng; Song, Chaoming; Zhu, Wenwu; Yang, Shiqiang
2016-01-01
Human activity plays a central role in understanding large-scale social dynamics. It is well documented that individual activity pattern follows bursty dynamics characterized by heavy-tailed interevent time distributions. Here we study a large-scale online chatting dataset consisting of 5,549,570 users, finding that individual activity pattern varies with timescales whereas existing models only approximate empirical observations within a limited timescale. We propose a novel approach that models the intensity rate of an individual triggering an activity. We demonstrate that the model precisely captures corresponding human dynamics across multiple timescales over five orders of magnitudes. Our model also allows extracting the population heterogeneity of activity patterns, characterized by a set of individual-specific ingredients. Integrating our approach with social interactions leads to a wide range of implications.
Critical review of membrane bioreactor models--part 1: biokinetic and filtration models.
Naessens, W; Maere, T; Nopens, I
2012-10-01
Membrane bioreactor technology has existed for a couple of decades but has not yet overwhelmed the market due to some serious drawbacks, of which operational cost due to fouling is the major contributor. Knowledge buildup and optimisation for such complex systems can significantly benefit from mathematical modelling. In this paper, the vast literature on modelling MBR biokinetics and filtration is critically reviewed. It was found that models cover the wide range from empirical to detailed mechanistic descriptions and have mainly been used for knowledge development and, to a lesser extent, for system optimisation/control. Moreover, studies are still predominantly performed at lab or pilot scale. Trends are discussed, knowledge gaps identified and interesting routes for further research suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Shape modeling with family of Pearson distributions: Langmuir waves
NASA Astrophysics Data System (ADS)
Vidojevic, Sonja
2014-10-01
Two major effects of the influence of Langmuir wave electric fields on spectral line shapes are the appearance of depressions shifted from the unperturbed line and additional dynamical line broadening. More realistic and accurate models of Langmuir waves are needed to study these effects with more confidence. In this article we present distribution shapes of a high-quality data set of Langmuir wave electric fields observed by the WIND satellite. Using well-developed numerical techniques, the distributions of the empirical measurements are modeled by the family of Pearson distributions. The results suggest that the process of energy conversion between an electron beam and the surrounding plasma is more complex than existing theoretical models assume. If the processes of Langmuir wave generation are better understood, the influence of Langmuir waves on spectral line shapes could be modeled better.
Reproducibility in Psychological Science: When Do Psychological Phenomena Exist?
Iso-Ahola, Seppo E.
2017-01-01
Scientific evidence has recently been used to assert that certain psychological phenomena do not exist. Such claims, however, cannot be made because (1) scientific method itself is seriously limited (i.e., it can never prove a negative); (2) non-existence of phenomena would require a complete absence of both logical (theoretical) and empirical support; even if empirical support is weak, logical and theoretical support can be strong; (3) statistical data are only one piece of evidence and cannot be used to reduce psychological phenomena to statistical phenomena; and (4) psychological phenomena vary across time, situations and persons. The human mind is unreproducible from one situation to another. Psychological phenomena are not particles that can decisively be tested and discovered. Therefore, a declaration that a phenomenon is not real is not only theoretically and empirically unjustified but runs counter to the propositional and provisional nature of scientific knowledge. There are only “temporary winners” and no “final truths” in scientific knowledge. Psychology is a science of subtleties in human affect, cognition and behavior. Its phenomena fluctuate with conditions and may sometimes be difficult to detect and reproduce empirically. When strictly applied, reproducibility is an overstated and even questionable concept in psychological science. Furthermore, statistical measures (e.g., effect size) are poor indicators of the theoretical importance and relevance of phenomena (cf. “deliberate practice” vs. “talent” in expert performance), not to mention whether phenomena are real or unreal. To better understand psychological phenomena, their theoretical and empirical properties should be examined via multiple parameters and criteria. Ten such parameters are suggested. PMID:28626435
Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling
Ye, Hao; Beamish, Richard J.; Glaser, Sarah M.; Grant, Sue C. H.; Hsieh, Chih-hao; Richards, Laura J.; Schnute, Jon T.; Sugihara, George
2015-01-01
It is well known that current equilibrium-based models fall short as predictive descriptions of natural ecosystems, and particularly of fisheries systems that exhibit nonlinear dynamics. For example, model parameters assumed to be fixed constants may actually vary in time, models may fit well to existing data but lack out-of-sample predictive skill, and key driving variables may be misidentified due to transient (mirage) correlations that are common in nonlinear systems. With these frailties, it is somewhat surprising that static equilibrium models continue to be widely used. Here, we examine empirical dynamic modeling (EDM) as an alternative to imposed model equations and that accommodates both nonequilibrium dynamics and nonlinearity. Using time series from nine stocks of sockeye salmon (Oncorhynchus nerka) from the Fraser River system in British Columbia, Canada, we perform, for the first time to our knowledge, a real-data comparison of contemporary fisheries models with equivalent EDM formulations that explicitly use spawning stock and environmental variables to forecast recruitment. We find that EDM models produce more accurate and precise forecasts, and unlike extensions of the classic Ricker spawner–recruit equation, they show significant improvements when environmental factors are included. Our analysis demonstrates the strategic utility of EDM for incorporating environmental influences into fisheries forecasts and, more generally, for providing insight into how environmental factors can operate in forecast models, thus paving the way for equation-free mechanistic forecasting to be applied in management contexts. PMID:25733874
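At the core of EDM is nearest-neighbour forecasting on a time-delay embedding (simplex projection). The sketch below implements that step on a toy chaotic series; the embedding dimension and library split are illustrative choices.

```python
# Simplex projection: forecast a series from nearest neighbours in its
# time-delay embedding; the logistic-map series is a toy example.
import numpy as np

def simplex_forecast(series, E=3, lib_frac=0.5):
    """Forecast the held-out half of `series` from its first half."""
    x = np.asarray(series, float)
    emb = np.column_stack([x[i:len(x) - E + i] for i in range(E)])  # delay vectors
    target = x[E:]                                   # one-step-ahead values
    split = int(len(target) * lib_frac)
    preds = []
    for v in emb[split:-1]:
        d = np.linalg.norm(emb[:split] - v, axis=1)
        nn = np.argsort(d)[: E + 1]                  # E+1 nearest library points
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))    # standard simplex weights
        preds.append(np.sum(w * target[nn]) / w.sum())
    return np.array(preds), target[split:-1]

t = np.empty(500); t[0] = 0.4
for i in range(1, 500):
    t[i] = 3.8 * t[i - 1] * (1 - t[i - 1])           # chaotic logistic map
pred, obs = simplex_forecast(t)
rho = np.corrcoef(pred, obs)[0, 1]                   # forecast skill
```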
An empirical model of the high-energy electron environment at Jupiter
NASA Astrophysics Data System (ADS)
de Soria-Santacruz, M.; Garrett, H. B.; Evans, R. W.; Jun, I.; Kim, W.; Paranicas, C.; Drozdov, A.
2016-10-01
We present an empirical model of the energetic electron environment in Jupiter's magnetosphere that we have named the Galileo Interim Radiation Electron Model version-2 (GIRE2) since it is based on Galileo data from the Energetic Particle Detector (EPD). Inside 8RJ, GIRE2 adopts the previously existing model of Divine and Garrett because this region was well sampled by the Pioneer and Voyager spacecraft but poorly covered by Galileo. Outside of 8RJ, the model is based on 10 min averages of Galileo EPD data as well as on measurements from the Geiger Tube Telescope on board the Pioneer spacecraft. In the inner magnetosphere the field configuration is dipolar, while in the outer magnetosphere it presents a disk-like structure. The gradual transition between these two behaviors is centered at about 17RJ. GIRE2 distinguishes between the two different regions characterized by these two magnetic field topologies. Specifically, GIRE2 consists of an inner trapped omnidirectional model between 8 and 17RJ that smoothly joins onto the original Divine and Garrett model inside 8RJ and onto a GIRE2 plasma sheet model at large radial distances. The model provides a complete picture of the high-energy electron environment in the Jovian magnetosphere from ~1 to 50RJ. The present manuscript describes in great detail the data sets, formulation, and fittings used in the model and provides a discussion of the predicted high-energy electron fluxes as a function of energy and radial distance from the planet.
The dynamics of emotions in online interaction
Kappas, Arvid; Küster, Dennis
2016-01-01
We study the changes in emotional states induced by reading and participating in online discussions, empirically testing a computational model of online emotional interaction. Using principles of dynamical systems, we quantify changes in valence and arousal through subjective reports, as recorded in three independent studies including 207 participants (110 female). In the context of online discussions, the dynamics of valence and arousal are composed of two forces: an internal relaxation towards baseline values, independent of the emotional charge of the discussion, and a driving force on emotional states that depends on the content of the discussion. The dynamics of valence show the existence of positive and negative tendencies, while arousal increases when reading emotional content regardless of its polarity. The tendency of participants to take part in the discussion increases with positive arousal. When participating in an online discussion, the content of participants' expression depends on their valence, and their arousal significantly decreases afterwards as a regulation mechanism. We illustrate how these results allow the design of agent-based models to reproduce and analyse emotions in online communities. Our work empirically validates the microdynamics of a model of online collective emotions, bridging online data analysis with research in the laboratory. PMID:27853586
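A minimal sketch of the two-force dynamics described above is given below: valence relaxes toward a baseline while discussion content exerts a driving force. The coefficients are placeholders, not the fitted values from the studies.

```python
# Toy integration of dv/dt = -gamma * (v - b) + F(t); parameters assumed.
import numpy as np

def simulate_valence(F, b=0.1, gamma=0.5, v0=0.0, dt=0.1, steps=200):
    v = np.empty(steps)
    v[0] = v0
    for t in range(1, steps):
        dv = -gamma * (v[t - 1] - b) + F(t * dt)   # relaxation + driving force
        v[t] = v[t - 1] + dt * dv
    return v

# Reading a positively charged thread for 10 time units, then silence.
trace = simulate_valence(lambda t: 0.8 if t < 10 else 0.0)
```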
Scale-dependent feedbacks between patch size and plant reproduction in desert grassland
Svejcar, Lauren N.; Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.
2015-01-01
Theoretical models suggest that scale-dependent feedbacks between plant reproductive success and plant patch size govern transitions from highly to sparsely vegetated states in drylands, yet there is scant empirical evidence for these mechanisms. Scale-dependent feedback models suggest that an optimal patch size exists for growth and reproduction of plants and that a threshold patch organization exists below which positive feedbacks between vegetation and resources can break down, leading to critical transitions. We examined the relationship between patch size and plant reproduction using an experiment in a Chihuahuan Desert grassland. We tested the hypothesis that reproductive effort and success of a dominant grass (Bouteloua eriopoda) would vary predictably with patch size. We found that focal plants in medium-sized patches featured higher rates of grass reproductive success than when plants occupied either large patch interiors or small patches. These patterns support the existence of scale-dependent feedbacks in Chihuahuan Desert grasslands and indicate an optimal patch size for reproductive effort and success in B. eriopoda. We discuss the implications of these results for detecting ecological thresholds in desert grasslands.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
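For concreteness, a Luenberger observer update of the form x̂(k+1) = A x̂(k) + B u(k) + L (y(k) − C x̂(k)) is sketched below on a toy position/velocity tracking plant; the matrices and gain are illustrative, not the identified gunner-model parameters.

```python
# A minimal discrete-time Luenberger observer on a toy tracking plant.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])      # position/velocity dynamics
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])                 # only position (angle) is observed
L = np.array([[0.4], [2.0]])               # observer gain (placeholder)

x = np.array([[1.0], [0.0]])               # true state
x_hat = np.zeros((2, 1))                   # observer estimate
rng = np.random.default_rng(3)
for _ in range(500):
    u = np.array([[0.1]])                  # feedback controller output goes here
    y = C @ x + 0.01 * rng.standard_normal((1, 1))   # noisy measurement
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)  # observer update
    x = A @ x + B @ u                      # propagate true state
```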
Development of a Solid-Oxide Fuel Cell/Gas Turbine Hybrid System Model for Aerospace Applications
NASA Technical Reports Server (NTRS)
Freeh, Joshua E.; Pratt, Joseph W.; Brouwer, Jacob
2004-01-01
Recent interest in fuel cell-gas turbine hybrid applications for the aerospace industry has led to the need for accurate computer simulation models to aid in system design and performance evaluation. To meet this requirement, solid oxide fuel cell (SOFC) and fuel processor models have been developed and incorporated into the Numerical Propulsion Systems Simulation (NPSS) software package. The SOFC and reformer models solve systems of equations governing steady-state performance using common theoretical and semi-empirical terms. An example hybrid configuration is presented that demonstrates the new capability as well as the interaction with pre-existing gas turbine and heat exchanger models. Finally, a comparison of calculated SOFC performance with experimental data is presented to demonstrate model validity. Keywords: Solid Oxide Fuel Cell, Reformer, System Model, Aerospace, Hybrid System, NPSS
2014-01-01
Affinity capture of DNA methylation combined with high-throughput sequencing strikes a good balance between the high cost of whole genome bisulfite sequencing and the low coverage of methylation arrays. We present BayMeth, an empirical Bayes approach that uses a fully methylated control sample to transform observed read counts into regional methylation levels. In our model, inefficient capture can readily be distinguished from low methylation levels. BayMeth improves on existing methods, allows explicit modeling of copy number variation, and offers computationally efficient analytical mean and variance estimators. BayMeth is available in the Repitools Bioconductor package. PMID:24517713
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
Bayesian Networks in Educational Assessment
Culbertson, Michael J.
2015-01-01
Bayesian networks (BN) provide a convenient and intuitive framework for specifying complex joint probability distributions and are thus well suited for modeling content domains of educational assessments at a diagnostic level. BN have been used extensively in the artificial intelligence community as student models for intelligent tutoring systems (ITS) but have received less attention among psychometricians. This critical review outlines the existing research on BN in educational assessment, providing an introduction to the ITS literature for the psychometric community, and points out several promising research paths. The online appendix lists 40 assessment systems that serve as empirical examples of the use of BN for educational assessment in a variety of domains. PMID:29881033
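A minimal diagnostic-assessment BN of the kind described, sketched with the pgmpy package (the library choice and all probabilities are illustrative assumptions): one binary skill node drives two item responses, and inference yields the posterior probability of mastery given an observed response pattern.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Skill (mastered / not) drives two item responses (correct / wrong).
    bn = BayesianNetwork([("Skill", "Item1"), ("Skill", "Item2")])
    cpd_skill = TabularCPD("Skill", 2, [[0.6], [0.4]])           # P(not mastered) = 0.6
    cpd_i1 = TabularCPD("Item1", 2, [[0.8, 0.2],                 # P(wrong | skill state)
                                     [0.2, 0.8]],                # P(correct | skill state)
                        evidence=["Skill"], evidence_card=[2])
    cpd_i2 = TabularCPD("Item2", 2, [[0.7, 0.1],
                                     [0.3, 0.9]],
                        evidence=["Skill"], evidence_card=[2])
    bn.add_cpds(cpd_skill, cpd_i1, cpd_i2)

    # Diagnostic query: posterior mastery after one correct, one wrong answer.
    posterior = VariableElimination(bn).query(["Skill"], evidence={"Item1": 1, "Item2": 0})
    print(posterior)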
On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
NASA Astrophysics Data System (ADS)
Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi
2010-12-01
The state-of-the-art JVT-G012 rate control algorithm for H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and the masking effect. Experiments are conducted, and the results show that the improved algorithm can simultaneously enhance overall subjective visual quality and improve rate control precision effectively.
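For concreteness, the sketch below inverts the classic quadratic R-D model that JVT-G012-style rate control relies on, solving for the quantization step that meets a bit budget; the coefficient values are illustrative, and the real algorithm updates them adaptively.

    import math

    def qstep_from_target_bits(R_target, mad, a=3.0, b=2.0):
        """Classic quadratic R-D model used in H.264 rate control:
            R = a * MAD / Q + b * MAD / Q^2
        Solve the quadratic for quantization step Q given a bit budget R_target.
        (a, b are illustrative; JVT-G012 updates them adaptively.)"""
        # R*Q^2 - a*MAD*Q - b*MAD = 0  ->  take the positive root
        disc = (a * mad) ** 2 + 4.0 * R_target * b * mad
        return (a * mad + math.sqrt(disc)) / (2.0 * R_target)

    print(qstep_from_target_bits(R_target=2000, mad=800))  # quantization step for this frame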
Future directions for positive body image research.
Halliwell, Emma
2015-06-01
The emergence of positive body image research during the last 10 years represents an important shift in the body image literature. The existing evidence provides a strong empirical basis for the study of positive body image and research has begun to address issues of age, gender, ethnicity, culture, development, and intervention in relation to positive body image. This article briefly reviews the existing evidence before outlining directions for future research. Specifically, six areas for future positive body image research are outlined: (a) conceptualization, (b) models, (c) developmental factors, (d) social interactions, (e) cognitive processing style, and (f) interventions. Finally, the potential role of positive body image as a protective factor within the broader body image literature is discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Altruistic behavior: mapping responses in the brain
Filkowski, Megan M; Cochran, R Nick; Haas, Brian W
2016-01-01
Altruism is an important social construct related to human relationships and the way many interpersonal and economic decisions are made. Recent progress in social neuroscience research shows that altruism is associated with a specific pattern of brain activity. The tendency to engage in altruistic behaviors is associated with greater activity within limbic regions such as the nucleus accumbens and anterior cingulate cortex, in addition to cortical regions such as the medial prefrontal cortex and temporoparietal junction. Here, we review existing theoretical models of altruism as well as recent empirical neuroimaging research demonstrating how altruism is processed within the brain. This review not only highlights the progress in neuroscience research on altruism but also shows that several open questions remain unexplored. PMID:28580317
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extraction of the model parameters for each bias point. To make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model, including noise, developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs comprises the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.
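A toy version of the prior-knowledge-input idea, using scikit-learn in place of the authors' ANN machinery (the data, the stand-in empirical model, and the architecture are all assumptions): the one-bias-point empirical model's output is appended to the network inputs, so the network mainly has to learn the bias-dependent correction.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def empirical_model(freq):
        """Stand-in for the one-bias-point equivalent-circuit model:
        returns an S-parameter magnitude vs. frequency (illustrative)."""
        return 0.9 / (1.0 + (freq / 20.0) ** 2)

    # Training data: (Vgs, Vds, freq) -> "measured" |S21|; the PKI idea is to
    # feed the empirical model's prediction as an extra network input.
    vgs, vds, freq = [rng.uniform(lo, hi, 500) for lo, hi in ((-1, 0), (1, 4), (1, 40))]
    measured = empirical_model(freq) * (1 + 0.1 * vgs + 0.05 * vds) + rng.normal(0, 0.005, 500)

    X = np.column_stack([vgs, vds, freq, empirical_model(freq)])  # last column = prior knowledge
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, measured)
    print(net.predict([[-0.5, 2.0, 10.0, empirical_model(10.0)]]))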
Number of independent parameters in the potentiometric titration of humic substances.
Lenoir, Thomas; Manceau, Alain
2010-03-16
With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to the seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and this explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 ± 0.21, pK(H,Ph-OH)(FA) = 9.29 ± 0.33, pK(H,COOH)(HA) = 4.49 ± 0.18, and pK(H,Ph-OH)(HA) = 9.29 ± 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than with any existing model fit.
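The PCA step translates directly into code. The sketch below applies it to synthetic stand-in curves (two binding sites with acid-specific densities and pK values, not the 47 measured curves), counting how many components are needed to reconstruct the data set.

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic stand-in for the titration data set: charge vs. pH built from
    # two proton-binding sites with acid-specific densities and pK values.
    rng = np.random.default_rng(2)
    ph = np.linspace(3.5, 9.8, 100)
    curves = np.array([
        q1 / (1 + 10 ** (pk1 - ph)) + q2 / (1 + 10 ** (pk2 - ph))
        for q1, q2, pk1, pk2 in zip(rng.uniform(2, 6, 47), rng.uniform(1, 3, 47),
                                    rng.normal(4.3, 0.2, 47), rng.normal(9.3, 0.35, 47))
    ])

    pca = PCA().fit(curves)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    print("components for 99.9% of variance:", np.searchsorted(cumvar, 0.999) + 1)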
NASA Astrophysics Data System (ADS)
Yang, Bo
This thesis aims to contribute to a further understanding of the real dynamics of OPEC production behavior and its impacts on the world oil market. A literature review in this area shows that the existing studies on OPEC still have some major deficiencies in theoretical interpretation and empirical estimation technique. After a brief background review in chapter 1, chapter 2 tests Griffin's market-sharing cartel model on the post-Griffin time horizon with a simultaneous system of equations, and an innovative hypothesis of OPEC's behavior (Saudi Arabia in particular) is then proposed based on the estimation results. Chapter 3 first provides a conceptual analysis of OPEC behavior under the framework of non-cooperative collusion with imperfect information. An empirical model is then constructed and estimated. The results of the empirical studies in this thesis strongly support the hypothesis that OPEC has operated as a market-sharing cartel since the early 1980s. In addition, the results also provide some support of the theory of non-cooperative collusion under imperfect information. OPEC members collude under normal circumstances and behave competitively at times in response to imperfect market signals of cartel compliance and some internal attributes. Periodic joint competition conduct plays an important role in sustaining the collusion in the long run. Saudi Arabia acts as the leader of the cartel, accommodating intermediate unfavorable market development and punishing others with a tit-for-tat strategy in extreme circumstances.
Wind Energy Facilities and Residential Properties: The Effect of Proximity and View on Sales Prices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoen, Ben; Wiser, Ryan; Cappers, Peter
2010-04-01
With wind energy expanding rapidly in the U.S. and abroad, and with an increasing number of communities considering nearby wind power developments, there is a need to empirically investigate community concerns about wind project development. One such concern is that property values may be adversely affected by wind energy facilities, and relatively little research exists on the subject. The present research is based on almost 7,500 sales of single-family homes situated within ten miles of 24 existing wind facilities in nine different U.S. states. The conclusions of the study are drawn from four different hedonic pricing models. The model results are consistent in that neither the view of the wind facilities nor the distance of the home to those facilities is found to have a statistically significant effect on home sales prices.
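A minimal hedonic specification in the spirit of the study, run on synthetic data with no built-in wind-facility effect (variable names and coefficients are placeholders, not the study's models):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic sales data: log price regressed on a home characteristic plus
    # distance-to-facility and view variables.
    rng = np.random.default_rng(3)
    n = 1000
    df = pd.DataFrame({
        "sqft": rng.uniform(900, 3000, n),
        "dist_mi": rng.uniform(0.5, 10, n),    # distance to nearest turbine, miles
        "view": rng.integers(0, 2, n),         # 1 = turbine visible from the home
    })
    df["log_price"] = 11 + 0.0004 * df.sqft + rng.normal(0, 0.2, n)  # no true wind effect

    fit = smf.ols("log_price ~ sqft + dist_mi + view", data=df).fit()
    print(fit.params[["dist_mi", "view"]])
    print(fit.pvalues[["dist_mi", "view"]])  # both indistinguishable from zero, as built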
Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cornell, C. C.
1986-01-01
A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of previously proposed solutions, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for predicting interference effects in terms of interference drag, resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.
Parameterization of water vapor using high-resolution GPS data and empirical models
NASA Astrophysics Data System (ADS)
Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.
2018-03-01
The present work evaluates eleven existing empirical models to estimate Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td), and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution from an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (at 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 to 0.106 °C^-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly, and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
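The linear parameterizations reported here are easy to reproduce in outline. The sketch below regresses a toy PWV series on dew point and on vapor pressure computed with the Magnus formula; the assumed PWV-Es0 slope of 1.9 mm/hPa is simply chosen inside the reported 1.688-2.209 range.

    import numpy as np

    rng = np.random.default_rng(4)
    td = rng.uniform(-25.0, 5.0, 365)                   # dew point, deg C (cold, dry site)
    es0 = 6.112 * np.exp(17.62 * td / (243.12 + td))    # Magnus formula, hPa
    pwv = 1.9 * es0 + rng.normal(0.0, 0.3, 365)         # toy PWV, mm

    slope_td, _ = np.polyfit(td, pwv, 1)
    slope_es0, _ = np.polyfit(es0, pwv, 1)
    print(f"PWV-Td slope: {slope_td:.3f} mm/degC, PWV-Es0 slope: {slope_es0:.3f} mm/hPa")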
Probabilistic clustering of rainfall condition for landslide triggering
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Luciani, Silvia; Cesare Mondini, Alessandro; Kirschbaum, Dalia; Valigi, Daniela; Guzzetti, Fausto
2013-04-01
Landslides are widespread natural and man-made phenomena. They are triggered by earthquakes, rapid snow melting, and human activities, but mostly by typhoons and intense or prolonged rainfall; in Italy, intense precipitation is the dominant trigger. The prediction of rainfall-triggered landslides over large areas is commonly based on empirical models. Empirical landslide rainfall thresholds are used to identify rainfall conditions under which landslide initiation is possible. It is common practice to define such thresholds by assuming a power-law lower boundary in the rainfall intensity-duration or cumulative rainfall-duration space, above which landslides can occur. The boundary is defined heuristically from rainfall conditions associated with landslide phenomena and does not consider rainfall events that did not cause landslides. Here we present a new, fully automatic method to estimate the probability of landslide occurrence associated with rainfall conditions characterized by measures of intensity or cumulative rainfall and rainfall duration. The method splits past rainfall events into two groups, one of events that caused landslides and its complement, and estimates their probability distributions. The probabilistic membership of a new event in one of the two clusters is then estimated. The method does not assume any threshold model a priori, but simply exploits the empirical distribution of rainfall events. The approach was applied in the Umbria region, central Italy, where a catalogue of landslide timings was compiled from chronicles, blogs, and other sources of information for the period 2002-2012. The approach was tested using rain gauge measurements and satellite rainfall estimates (NASA TRMM-v6), in both cases allowing identification of the rainfall conditions that trigger landslides in the region. Compared with existing threshold-definition methods, the proposed one (i) largely reduces the subjectivity in the choice of the threshold model and in how it is calculated, and (ii) can be set up more easily in other study areas. The proposed approach can be conveniently integrated into existing early-warning systems to improve the accuracy of the estimated landslide occurrence probability associated with rainfall events and its uncertainty.
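A compact sketch of the two-cluster probabilistic idea (distribution family, parameters, and sample sizes are assumptions, not the Umbria catalogue): fit one distribution to rainfall amounts of triggering events and one to non-triggering events, then score a new event by Bayes' rule.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    trigger = rng.lognormal(mean=4.0, sigma=0.5, size=80)       # mm, events with landslides
    no_trigger = rng.lognormal(mean=3.0, sigma=0.6, size=800)   # mm, events without

    d_t = stats.lognorm(*stats.lognorm.fit(trigger, floc=0))
    d_n = stats.lognorm(*stats.lognorm.fit(no_trigger, floc=0))
    prior_t = len(trigger) / (len(trigger) + len(no_trigger))

    def p_landslide(rain_mm):
        """Posterior membership of the triggering cluster (Bayes' rule)."""
        lt = d_t.pdf(rain_mm) * prior_t
        ln = d_n.pdf(rain_mm) * (1 - prior_t)
        return lt / (lt + ln)

    print([round(p_landslide(r), 2) for r in (20, 60, 150)])  # probability rises with rainfall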
Mertz, Marcel; Schildmann, Jan
2018-06-01
Empirical bioethics is commonly understood as integrating empirical research with normative-ethical research in order to address an ethical issue. Methodological analyses in empirical bioethics mainly focus on the integration of socio-empirical sciences (e.g. sociology or psychology) and normative ethics. But while there are numerous multidisciplinary research projects combining life sciences and normative ethics, there is little explicit methodological reflection on how to integrate the two fields, or on the goals and rationales of such interdisciplinary cooperation. In this paper we review some drivers of the tendency of empirical bioethics methodologies to focus on the collaboration of normative ethics with the social sciences in particular. Subsequently, we argue that the ends of empirical bioethics, not the empirical methods, are decisive for the question of which empirical disciplines can contribute to empirical bioethics in a meaningful way. Using already existing types of research integration as a springboard, we describe five possible types of research encompassing life sciences and normative analysis, illustrating how such cooperation can be conceptualized from a methodological perspective within empirical bioethics. We conclude with a reflection on the limitations and challenges of empirical bioethics research that integrates the life sciences.
Peterson, J.; Dunham, J.B.
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or, alternatively, by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
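The combination of model-based priors with survey data reduces, in its simplest form, to a posterior update on presence given non-detection. A minimal sketch, assuming a constant per-survey detection probability:

    def posterior_presence(prior, p_detect, n_absent_surveys):
        """Combine a model-based prior probability of presence with survey data:
        posterior presence probability after n surveys with no detections,
        assuming per-survey detection probability p_detect when present."""
        miss = (1 - p_detect) ** n_absent_surveys
        return prior * miss / (prior * miss + (1 - prior))

    # A model predicts 40% presence; three surveys at 50% detection find nothing.
    print(posterior_presence(0.4, 0.5, 3))  # about 0.077, much sharper than the prior alone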
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
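The model comparison reduces to fitting candidate densities and measuring their deviation from the empirical pdf. A sketch on simulated data (with real volatilities, the lognormal would fit the bulk and misfit the tail, as the abstract reports):

    import numpy as np
    from scipy import stats

    # Simulated daily volatility proxy standing in for the 100-stock data set.
    rng = np.random.default_rng(6)
    vol = rng.lognormal(mean=-4.0, sigma=0.4, size=10_000)

    shape, loc, scale = stats.lognorm.fit(vol, floc=0)
    hist, edges = np.histogram(vol, bins=50, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    err = np.abs(hist - stats.lognorm.pdf(centers, shape, loc, scale))
    print("mean absolute pdf error:", err.mean())  # small here: the toy data are truly lognormal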
An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and the resulting model evaluated. The model covers unheated and hot jet conditions (1 ≤ jet total temperature ratio ≤ 2.7) in the subsonic range (0.5 ≤ Ma ≤ 0.9), surface lengths 0.6 ≤ (axial distance from jet exit to surface trailing edge (inches)/nozzle exit diameter) ≤ 10, and surface standoff distances 0 ≤ (radial distance from jet lipline to surface (inches)/axial distance from jet exit to surface trailing edge (inches)) ≤ 1, using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system-level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.
Fryer-Edwards, Kelly; Arnold, Robert M; Baile, Walter; Tulsky, James A; Petracca, Frances; Back, Anthony
2006-07-01
Small-group teaching is particularly suited for complex skills such as communication. Existing work has identified the basic elements of small-group teaching, but few descriptions of higher-order teaching practices exist in the medical literature. Thus the authors developed an empirically driven and theoretically grounded model for small-group communication-skills teaching. Between 2002 and 2005, teaching observations were collected over 100 hours of direct contact time between four expert facilitators and 120 medical oncology fellows participating in Oncotalk, a semiannual, four-day retreat focused on end-of-life communication skills. The authors conducted small-group teaching observations, semistructured interviews with faculty participants, video or audio recording with transcript review, and evaluation of results by faculty participants. Teaching skills observed during the retreats included a linked set of reflective, process-oriented teaching practices: identifying a learning edge, proposing and testing hypotheses, and calibrating learner self-assessments. Based on observations and debriefings with facilitators, the authors developed a conceptual model of teaching that illustrates an iterative loop of teaching practices aimed at enhancing learners' engagement and self-efficacy. Through longitudinal, empirical observations, this project identified a set of specific teaching skills for small-group settings with applicability to other clinical teaching settings. This study extends current theory and teaching practice prescriptions by describing specific teaching practices required for effective teaching. These reflective teaching practices, while developed for communication skills training, may be useful for teaching other challenging topics such as ethics and professionalism.
Improved Delayed-Neutron Spectroscopy Using Trapped Ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Eric B.
The neutrons emitted following the β decay of fission fragments (known as delayed neutrons because they are emitted after fission on the timescale of the β-decay half-lives) play a crucial role in reactor performance and control. Reviews of delayed-neutron properties highlight the need for high-quality data for a wide variety of delayed-neutron emitters to better understand the time dependence and energy spectrum of the neutrons, as these properties are essential for a detailed understanding of reactor kinetics, which is needed for reactor safety and for understanding the behavior of reactors under various accident and component-failure scenarios. For fast breeder reactors, criticality calculations require accurate delayed-neutron energy spectra; approximations that are acceptable for light-water reactors, such as assuming the delayed-neutron and fission-neutron energy spectra are identical, are not acceptable here, and improved β-delayed neutron data are needed for safety and accident analyses of these reactors. With improved nuclear data, the delayed-neutron flux and energy spectrum could be calculated from the contributions of individual isotopes and therefore could be accurately modeled for any fuel-cycle concept, actinide mix, or irradiation history. High-quality β-delayed neutron measurements are also critical to constrain modern nuclear-structure calculations and empirical models that predict decay properties for nuclei for which no data exist, and to improve the accuracy and flexibility of existing empirical descriptions of delayed neutrons from fission, such as the six-group representation.
NASA Astrophysics Data System (ADS)
Thiemann, Christian; Treiber, Martin; Kesting, Arne
2008-09-01
Intervehicle communication enables vehicles to exchange messages within a limited broadcast range and thus self-organize into dynamic, geographically embedded wireless ad hoc networks. We study the longitudinal hopping mode, in which messages are transported using equipped vehicles driving in the same direction as relays. Given a finite communication range, we investigate the conditions under which messages can percolate through the network, i.e., a linked chain of relay vehicles exists between the sender and receiver. We simulate message propagation in different traffic scenarios and for different fractions of equipped vehicles. Simulations are done with both modeled and empirical traffic data. These results are used to test the limits of applicability of an analytical model assuming a Poissonian distance distribution between the relays. We found good agreement for homogeneous traffic scenarios and sufficiently low percentages of equipped vehicles. For higher percentages, the observed connectivity was higher than that of the model, while in stop-and-go traffic situations it was lower. We explain these results in terms of correlations of the distances between the relay vehicles. Finally, we introduce variable transmission ranges and find that this additional stochastic component generally increases connectivity compared to deterministic transmission with the same mean.
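Under the Poissonian-distance assumption tested here, instantaneous connectivity is easy to simulate: a message percolates over a road stretch if no gap between consecutive relays exceeds the communication range. A minimal sketch with illustrative densities and range:

    import numpy as np

    rng = np.random.default_rng(7)

    def p_connected(density_per_km, L_km=5.0, R_km=0.3, trials=4000):
        """Fraction of trials in which a sender at 0 reaches a receiver at L
        via equipped vehicles placed as a 1-D Poisson process."""
        hits = 0
        for _ in range(trials):
            n = rng.poisson(density_per_km * L_km)
            pos = np.sort(rng.uniform(0.0, L_km, n))
            gaps = np.diff(np.concatenate(([0.0], pos, [L_km])))
            hits += gaps.max() <= R_km   # connected only if every hop is in range
        return hits / trials

    for lam in (5, 10, 20):   # equipped vehicles per km
        print(lam, p_connected(lam))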
The evolution of cooperative breeding in the African cichlid fish, Neolamprologus pulcher.
Wong, Marian; Balshine, Sigal
2011-05-01
The conundrum of why subordinate individuals assist dominants at the expense of their own direct reproduction has received much theoretical and empirical attention over the last 50 years. During this time, birds and mammals have taken centre stage as model vertebrate systems for exploring why helpers help. However, fish have great potential for enhancing our understanding of the generality and adaptiveness of helping behaviour because of the ease with which they can be experimentally manipulated under controlled laboratory and field conditions. In particular, the freshwater African cichlid, Neolamprologus pulcher, has emerged as a promising model species for investigating the evolution of cooperative breeding, with 64 papers published on this species over the past 27 years. Here we clarify current knowledge pertaining to the costs and benefits of helping in N. pulcher by critically assessing the existing empirical evidence. We then provide a comprehensive examination of the evidence pertaining to four key hypotheses for why helpers might help: (1) kin selection; (2) pay-to-stay; (3) signals of prestige; and (4) group augmentation. For each hypothesis, we outline the underlying theory, address the appropriateness of N. pulcher as a model species and describe the key predictions and associated empirical tests. For N. pulcher, we demonstrate that the kin selection and group augmentation hypotheses have received partial support. One of the key predictions of the pay-to-stay hypothesis has failed to receive any support despite numerous laboratory and field studies; thus as it stands, the evidence for this hypothesis is weak. There have been no empirical investigations addressing the key predictions of the signals of prestige hypothesis. By outlining the key predictions of the various hypotheses, and highlighting how many of these remain to be tested explicitly, our review can be regarded as a roadmap in which potential paths for future empirical research into the evolution of cooperative breeding are proposed. Overall, we clarify what is currently known about cooperative breeding in N. pulcher, address discrepancies among studies, caution against incorrect inferences that have been drawn over the years and suggest promising avenues for future research in fishes and other taxonomic groups. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account the intake and growth changes of the animal.
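The mechanistic component's factorial logic can be summarized in a few lines: a maintenance term scaling with metabolic body weight plus a deposition term scaling with gain, divided by a utilization efficiency. The coefficients below are illustrative placeholders, not the model's or InraPorc's values.

    def sid_lysine_per_day(bw_kg, gain_kg_d, lys_maint_g_per_kg075=0.045,
                           lys_per_kg_gain_g=19.0, efficiency=0.72):
        """Factorial estimate in the spirit of the mechanistic component:
        maintenance + deposition, divided by utilization efficiency.
        All coefficients are illustrative placeholders."""
        maint = lys_maint_g_per_kg075 * bw_kg ** 0.75   # scales with metabolic weight
        growth = lys_per_kg_gain_g * gain_kg_d          # scales with daily gain
        return (maint + growth) / efficiency

    # A 60 kg pig gaining 0.9 kg/d: daily SID lysine requirement, g/d
    print(round(sid_lysine_per_day(60, 0.9), 1))  # about 25 g/d with these placeholders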
Nair, Tara; Savulescu, Julian; Everett, Jim; Tonkens, Ryan; Wilkinson, Dominic
2017-01-01
Background: Doctors sometimes encounter parents who object to prescribed treatment for their children and request that suboptimal substitutes be administered instead (suboptimal being defined as less effective and/or more expensive). Previous studies have focused on parental refusal of treatment and when this should be permitted, but the ethics of requests for suboptimal treatment has not been explored. Methods: The paper consists of two parts: an empirical analysis and an ethical analysis. We performed an online survey with a sample of the general public to assess respondents' thresholds for acceptable harm and expense resulting from parental choice, and the role that religion played in their judgement. We also identified and applied existing ethical frameworks to the case described in the survey to compare theoretical and empirical results. Results: Two hundred and forty-two Mechanical Turk workers took our survey and there were 178 valid responses (73.6%). Respondents' agreement to provide treatment decreased as the risk or cost of the requested substitute increased (p<0.001). More than 50% of participants were prepared to provide treatment that would involve a small absolute increased risk of death for the child (<5%) and a cost increase of US$<500, respectively. Religiously motivated requests were significantly more likely to be allowed (p<0.001). Existing ethical frameworks largely yielded ambiguous results for the case. There were clear inconsistencies between the theoretical and empirical results. Conclusion: Drawing on both survey results and ethical analysis, we propose a potential model and thresholds for deciding about the permissibility of suboptimal treatment requests. PMID:28947505
Danish; Baloch, Muhammad Awais; Suad, Shah
2018-04-01
The objective of this research is to examine the relationship between transport energy consumption, economic growth, and carbon dioxide (CO2) emissions from the transport sector, incorporating foreign direct investment and urbanization. The study is carried out for Pakistan by applying an autoregressive distributed lag (ARDL) model and a vector error correction model (VECM) over 1990-2015. The empirical results indicate a strong, significant impact of transport energy consumption on CO2 emissions from the transportation sector. Furthermore, foreign direct investment also contributes to CO2 emissions. Interestingly, the impact of economic growth and urbanization on transport CO2 emissions is statistically insignificant. Overall, transport energy consumption and foreign direct investment are not environmentally friendly. The new empirical evidence from this study provides a more complete picture of the determinants of emissions from the transport sector; these findings not only advance the existing literature but should also be of special interest to the country's policymakers. We therefore urge the government to focus on promoting energy-efficient means of transportation to improve environmental quality with less adverse influence on economic growth.
Empirical Models of Social Learning in a Large, Evolving Network
Bener, Ayşe Başar; Çağlayan, Bora; Henry, Adam Douglas; Prałat, Paweł
2016-01-01
This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals’ access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: 1) attraction homophily causes individuals to form ties on the basis of attribute similarity, 2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and 3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends. PMID:27701430
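The three hypothesized mechanisms translate naturally into a toy network-dynamics simulation (all rates and the single attribute are invented for illustration; the study itself fits statistical models to observed tie changes):

    import numpy as np

    rng = np.random.default_rng(8)
    n = 50
    attr = rng.uniform(size=n)                        # one continuous attribute per node
    ties = rng.random((n, n)) < 0.05
    ties = np.triu(ties, 1)
    ties = ties | ties.T                              # symmetric adjacency, no self-ties

    for _ in range(5000):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        sim = 1 - abs(attr[i] - attr[j])
        if not ties[i, j] and rng.random() < 0.3 * sim:        # attraction homophily
            ties[i, j] = ties[j, i] = True
        elif ties[i, j] and rng.random() < 0.3 * (1 - sim):    # aversion homophily
            ties[i, j] = ties[j, i] = False
        if ties[i, j]:                                         # social influence on attributes
            attr[i] += 0.1 * (attr[j] - attr[i])

    print("ties:", int(ties.sum()) // 2, "attribute spread:", round(attr.std(), 3))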
Psychological profiling of offender characteristics from crime behaviors in serial rape offences.
Kocsis, Richard N; Cooksey, Ray W; Irwin, Harvey J
2002-04-01
Criminal psychological profiling has progressively been incorporated into police procedures despite a dearth of empirical research. Indeed, in the study of serial violent crimes for the purpose of psychological profiling, very few original, quantitative, academically reviewed studies actually exist. This article reports on the analysis of 62 incidents of serial sexual assault. The statistical procedure of multidimensional scaling was employed in the analysis of these data, which in turn produced a five-cluster model of serial rapist behavior. First, a central cluster of behaviors was identified that represents behaviors common to all patterns of serial rape. Second, four distinct outlying patterns were identified as demonstrating distinct offence styles, and these were assigned the following descriptive labels: brutality, intercourse, chaotic, and ritual. Furthermore, analysis of these patterns also identified distinct offender characteristics that allow for the use of empirically robust offender profiles in future serial rape investigations.
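The analytic pipeline, embedding coded behavior vectors with multidimensional scaling and then grouping incidents, can be sketched as follows on random placeholder data (the real study analyzed 62 coded incidents and did not necessarily use k-means for the clustering step):

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.cluster import KMeans

    # Placeholder data: 62 incidents x 30 binary offence behaviors.
    rng = np.random.default_rng(11)
    behaviors = rng.integers(0, 2, size=(62, 30))

    coords = MDS(n_components=2, random_state=0).fit_transform(behaviors)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
    print(np.bincount(labels))   # incidents per cluster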
Card, Noel A.
2011-01-01
The traditional psychological approach of studying aggression among schoolchildren in terms of individual differences in aggression and in victimization has been valuable in identifying prevalence rates, risk, and consequences of involvement in aggression. However, it is argued that a focus on aggressor-victim relationships is warranted based on both conceptual and empirical grounds. Such a shift in focus requires modification and integration of existing theories of aggression, and this paper integrates social cognitive theory and interdependence theory to suggest a new, interdependent social cognitive theory of aggression. Specifically, this paper identifies points of overlap and different foci between these theories, and it illustrates their integration through a proposed model of the emergence of aggressor-victim interactions and relationships. The paper concludes that expanding consideration to include aggressor-victim relationships among schoolchildren offers considerable theoretical, empirical, and intervention opportunities. PMID:26985397
Post, Brady; Buchmueller, Tom; Ryan, Andrew M
2017-08-01
Hospital-physician vertical integration is on the rise. While increased efficiencies may be possible, emerging research raises concerns about anticompetitive behavior, spending increases, and uncertain effects on quality. In this review, we bring together several of the key theories of vertical integration that exist in the neoclassical and institutional economics literatures and apply these theories to the hospital-physician relationship. We also conduct a literature review of the effects of vertical integration on prices, spending, and quality in the growing body of evidence (n = 15) to evaluate which of these frameworks have the strongest empirical support. We find some support for vertical foreclosure as a framework for explaining the observed results. We suggest a conceptual model and identify directions for future research. Based on our analysis, we conclude that vertical integration poses a threat to the affordability of health services and merits special attention from policymakers and antitrust authorities.
Droplet breakup in accelerating gas flows. Part 2: Secondary atomization
NASA Technical Reports Server (NTRS)
Zajac, L. J.
1973-01-01
An experimental investigation was conducted to determine the effects of an accelerating gas flow on the atomization characteristics of liquid sprays. The sprays were produced by impinging two liquid jets. The liquid was molten wax and the gas was nitrogen. The use of molten wax allowed a quantitative measure of the resulting dropsize distribution. The results of this study indicate that a significant amount of droplet breakup will occur as a result of the action of the gas on the liquid droplets. Empirical correlations are presented in terms of the parameters that were found to affect the mass median dropsize most significantly: the orifice diameter, the liquid injection velocity, and the maximum gas velocity. An empirical correlation for the normalized dropsize distribution is also presented. These correlations are in a form that may readily be incorporated into existing combustion model computer codes for the purpose of calculating rocket engine combustion performance.
Improved inland water levels from SAR altimetry using novel empirical and physical retrackers
NASA Astrophysics Data System (ADS)
Villadsen, Heidi; Deng, Xiaoli; Andersen, Ole B.; Stenseng, Lars; Nielsen, Karina; Knudsen, Per
2016-06-01
Satellite altimetry has proven a valuable source of information on river and lake levels where in situ data are sparse or non-existent. In this study, several new methods for obtaining stable inland water levels from CryoSat-2 Synthetic Aperture Radar (SAR) altimetry are presented and evaluated. In addition, the possible benefits of combining physical and empirical retrackers are investigated. The retracking methods evaluated in this paper include the physical SAR Altimetry MOde Studies and Applications (SAMOSA3) model, a traditional subwaveform threshold retracker, the proposed Multiple Waveform Persistent Peak (MWaPP) retracker, and a method combining the physical and empirical retrackers. Using a physical SAR waveform retracker over inland water has not been attempted before but shows great promise in this study. The evaluation is performed for two medium-sized lakes (Lake Vänern in Sweden and Lake Okeechobee in Florida) and in the Amazon River in Brazil. Comparison with in situ data shows that using the SAMOSA3 retracker generally provides the lowest root-mean-squared errors (RMSE), closely followed by the MWaPP retracker. For the empirical retrackers, the RMSE values obtained when comparing with in situ data in Lake Vänern and Lake Okeechobee are on the order of 2-5 cm for well-behaved waveforms. Combining the physical and empirical retrackers did not offer significantly improved mean track standard deviations or RMSEs. Based on these studies, we suggest that future SAR-derived water levels be obtained using the SAMOSA3 retracker whenever information about physical properties other than range is desired, and otherwise using the empirical MWaPP retracker described in this paper, which is easy to implement and computationally efficient, and gives a height estimate for even the most contaminated waveforms.
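A threshold retracker of the kind evaluated here takes only a few lines: find where the leading edge crosses a fraction of the peak power and interpolate between gates. A minimal sketch (the threshold value and the synthetic waveform are illustrative; the paper's version operates on selected subwaveforms):

    import numpy as np

    def threshold_retrack(waveform, threshold=0.5):
        """Return the fractional gate index where the leading edge crosses
        threshold * peak power, with linear interpolation between gates."""
        w = np.asarray(waveform, float)
        level = threshold * w.max()
        k = int(np.argmax(w >= level))       # first gate at or above the level
        if k == 0:
            return 0.0
        return (k - 1) + (level - w[k - 1]) / (w[k] - w[k - 1])

    # Synthetic waveform: noise floor, steep leading edge, decaying plateau.
    wf = np.concatenate([np.full(20, 0.02), np.linspace(0.02, 1.0, 10), np.full(34, 0.8)])
    print(threshold_retrack(wf))  # retracking point on the leading edge, near gate 24.4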
NASA Astrophysics Data System (ADS)
McKean, John R.; Johnson, Donn; Taylor, R. Garth
2010-09-01
Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.
Markkula, Gustav; Boer, Erwin; Romano, Richard; Merat, Natasha
2018-06-01
A conceptual and computational framework is proposed for modelling human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of the sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.
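A minimal sketch of the intermittent-control loop the framework proposes: a leaky accumulator integrates noisy prediction error, and each threshold crossing triggers a discrete, stepwise control adjustment. All dynamics and constants are invented for illustration and are not the paper's fitted model.

    import numpy as np

    rng = np.random.default_rng(9)
    dt, T = 0.01, 20.0
    gain, leak, thresh, noise = 1.0, 0.5, 0.3, 0.05

    err, acc, steer = 0.4, 0.0, 0.0          # lateral error, accumulator, wheel angle
    adjustments = []
    for t in np.arange(0, T, dt):
        err += dt * (-0.5 * steer)           # toy plant: steering reduces the error
        acc += dt * (gain * err - leak * acc) + np.sqrt(dt) * noise * rng.normal()
        if abs(acc) > thresh:                # threshold crossing: discrete adjustment
            steer += 0.8 * err               # stepwise correction (a "motor primitive")
            adjustments.append(round(t, 2))
            acc = 0.0                        # accumulator resets after each action

    print(len(adjustments), "discrete adjustments, first at t =", adjustments[:5])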
ERIC Educational Resources Information Center
Otto, Gina
Who is responsible for loss of life and property when one empire is conquered by another? It is the year 1473 A.D., 20 years after the fall of Constantinople. On May 29, 1453, the Eastern Roman Empire came to an end with the military takeover of Constantinople by the Ottoman Turks. How could an empire cease to exist? What were the people in and…
Consumer-mediated recycling and cascading trophic interactions.
Leroux, Shawn J; Loreau, Michel
2010-07-01
Cascading trophic interactions mediated by consumers are complex phenomena, which encompass many direct and indirect effects. Nonetheless, most experiments and theory on the topic focus uniquely on the indirect, positive effects of predators on producers via regulation of herbivores. Empirical research in aquatic ecosystems, however, demonstrates that the indirect, positive effects of consumer-mediated recycling on primary producer stocks may be larger than the effects of herbivore regulation, particularly when predators have access to alternative prey. We derive an ecosystem model with both recipient- and donor-controlled trophic relationships to test the conditions of four hypotheses generated from recent empirical work on the role of consumer-mediated recycling in cascading trophic interactions. Our model predicts that predator regulation of herbivores will have larger, positive effects on producers than consumer-mediated recycling in most cases, but that consumer-mediated recycling does generally have a positive effect on producer stocks. We demonstrate that herbivore recycling will have larger effects on producer biomass than predator recycling when turnover rates and recycling efficiencies are high and predators prefer local prey. In addition, predictions suggest that consumer-mediated recycling has the largest effects on primary producers when predators prefer allochthonous prey and predator attack rates are high. Finally, our model predicts that consumer-mediated recycling effects may not be largest when external nutrient loading is low. Our model predictions highlight predator and prey feeding relationships, turnover rates, and external nutrient loading rates as key determinants of the strength of cascading trophic interactions. We show that existing hypotheses from specific empirical systems do not hold under all conditions, which underscores the need to consider a broad suite of mechanisms when investigating trophic cascades.
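A toy version of an ecosystem model with consumer-mediated recycling (functional forms and rates are illustrative, not the paper's model): a fraction of the material ingested but not assimilated by herbivores and predators is returned to the nutrient pool rather than lost.

    import numpy as np
    from scipy.integrate import solve_ivp

    def chain(t, y, I=1.0, loss=0.2, aP=1.0, aH=0.6, aC=0.4, eH=0.3, eC=0.3, rec=0.5):
        """Nutrient (N) - producer (P) - herbivore (H) - predator (C) chain with
        consumer-mediated recycling of unassimilated intake back to N."""
        N, P, H, C = y
        uptake, graze, pred = aP * N * P, aH * P * H, aC * H * C
        recycled = rec * ((1 - eH) * graze + (1 - eC) * pred)  # consumer-mediated recycling
        dN = I - loss * N - uptake + recycled
        dP = uptake - graze
        dH = eH * graze - pred
        dC = eC * pred - 0.1 * C
        return [dN, dP, dH, dC]

    sol = solve_ivp(chain, (0, 200), [1.0, 0.5, 0.2, 0.1])
    print("final stocks (N, P, H, C):", np.round(sol.y[:, -1], 3))

Raising or lowering rec in this sketch is the in-model analogue of the recycling manipulations discussed in the abstract.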
NASA Astrophysics Data System (ADS)
El-Sebakhy, Emad A.
2009-09-01
Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many approaches for predicting various PVT properties, based on empirical correlations and statistical regression models. Over the last decade, researchers have utilized neural networks to develop more accurate PVT correlations, and these achievements have opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate than local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification, based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The framework has featured in a wide range of medical, power-control-system, and business applications, often with promising results. A comparative study is carried out to compare the performance of this new framework with that of the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that neuro-fuzzy systems are accurate and reliable and outperform most of the existing forecasting techniques. Future work includes using neuro-fuzzy systems for clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.
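In its simplest first-order form, the Takagi-Sugeno-Kang machinery reduces to Gaussian rule firing strengths blending local linear models. A two-rule sketch with invented parameters (the paper's system additionally learns its rules via Gustafson-Kessel clustering):

    import numpy as np

    def tsk2(x, centers=(0.3, 0.7), sigma=0.15,
             lines=((2.0, 0.1), (0.5, 1.2))):        # (slope, intercept) per rule
        """Two-rule first-order TSK inference on a scalar input in [0, 1]:
        Gaussian memberships weight local linear consequents."""
        x = np.asarray(x, float)
        w = np.array([np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers])
        y = np.array([a * x + b for a, b in lines])
        return (w * y).sum(axis=0) / w.sum(axis=0)   # normalized rule blending

    print(tsk2([0.2, 0.5, 0.9]).round(3))  # smooth transition between the two local models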
ERIC Educational Resources Information Center
Jackson, Duncan J. R.; Cooper-Thomas, Helena D.; van Gelderen, Marco; Davis, Jane
2010-01-01
Competencies represent an important and popular topic in human resource development. Despite this popularity, a divide exists between practitioner approaches to developmental competency measures and the empirical scrutiny of such approaches. However, the scarce empirical studies on competency measures have begun to bridge this gap. In the present…
ERIC Educational Resources Information Center
Savin-Williams, Ritch C.; Vrangalova, Zhana
2013-01-01
We reviewed empirical evidence regarding whether mostly heterosexual exists as a sexual orientation distinct from two adjacent groups on a sexual continuum--exclusively heterosexual and substantially bisexual. We addressed the question: Do mostly heterosexuals show a unique profile of sexual and romantic characteristics that distinguishes them as…
ERIC Educational Resources Information Center
Roberts, Kelly D.; Park, Hye Jin; Brown, Steven; Cook, Bryan
2011-01-01
Universal Design for Instruction (UDI) in postsecondary education is a relatively new concept/framework that has generated significant support. The purpose of this literature review was to examine existing empirical research, including qualitative, quantitative, and mixed methods, on the use of UDI (and related terms) in postsecondary education.…
Self-Published Books: An Empirical "Snapshot"
ERIC Educational Resources Information Center
Bradley, Jana; Fulton, Bruce; Helm, Marlene
2012-01-01
The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…
A Critique of Schema Theory in Reading and a Dual Coding Alternative (Commentary).
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1991-01-01
Evaluates schema theory and presents dual coding theory as a theoretical alternative. Argues that schema theory is encumbered by lack of a consistent definition, its roots in idealist epistemology, and mixed empirical support. Argues that results of many empirical studies used to demonstrate the existence of schemata are more consistently…
The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD
ERIC Educational Resources Information Center
Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael
2012-01-01
In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…
Continued Use of a Chinese Online Portal: An Empirical Study
ERIC Educational Resources Information Center
Shih, Hung-Pin
2008-01-01
The evolution of the internet has made online portals a popular means of surfing the internet. In internet commerce, understanding the post-adoption behaviour of users of online portals can help enterprises to attract new users and retain existing customers. For predicting continued use intentions, this empirical study focused on applying and…
Beyond Ideological Warfare: The Maturation of Research on Charter Schools
ERIC Educational Resources Information Center
Smith, Joanna; Wohlstetter, Priscilla; Farrell, Caitlin C.; Nayfack, Michelle B.
2011-01-01
Philosophical debate about charter schools often results in theory and anecdotes overshadowing empirical research. This systematic review of trends in the charter school research over the last decade helps determine where empirical evidence exists and where new research is necessary. Findings reveal that student and school outcomes are the most…
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.
Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.
Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M
2017-01-01
The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the dual process model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intense grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process, in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight that extends the existing empirical evidence for the dual process model.
Cleaning up with genomics: applying molecular biology to bioremediation.
Lovley, Derek R
2003-10-01
Bioremediation has the potential to restore contaminated environments inexpensively yet effectively, but a lack of information about the factors controlling the growth and metabolism of microorganisms in polluted environments often limits its implementation. However, rapid advances in the understanding of bioremediation are on the horizon. Researchers now have the ability to culture microorganisms that are important in bioremediation and can evaluate their physiology using a combination of genome-enabled experimental and modelling techniques. In addition, new environmental genomic techniques offer the possibility for similar studies on as-yet-uncultured organisms. Combining models that can predict the activity of microorganisms that are involved in bioremediation with existing geochemical and hydrological models should transform bioremediation from a largely empirical practice into a science.
De Vries, Martine; Van Leeuwen, Evert
2010-11-01
In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.
A Comparison of Combustor-Noise Models
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
The present status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) [1] for current-generation (N) turbofan engines is summarized. Several semi-empirical models for turbofan combustor noise are discussed, including best methods for near-term updates to ANOPP. An alternate turbine-transmission factor [2] will appear as a user-selectable option in the combustor-noise module GECOR in the next release. The three-spectrum model proposed by Stone et al. [3] for GE turbofan-engine combustor noise is discussed and compared with ANOPP predictions for several relevant cases. Based on the results presented herein and in their report [3], it is recommended that the application of this fully empirical combustor-noise prediction method be limited to situations involving only General-Electric turbofan engines. Long-term needs and challenges for the N+1 through N+3 time frame are discussed. Because the impact of other propulsion-noise sources continues to be reduced due to turbofan design trends, advances in noise-mitigation techniques, and expected aircraft configuration changes, the relative importance of core noise is expected to greatly increase in the future. The noise-source structure in the combustor, including the indirect one, and the effects of the propagation path through the engine and exhaust nozzle need to be better understood. In particular, the acoustic consequences of the expected trends toward smaller, highly efficient gas-generator cores and low-emission fuel-flexible combustors need to be fully investigated since future designs are quite likely to fall outside of the parameter space of existing (semi-empirical) prediction tools.
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
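To make the shape of such a voltage-dependent shorting model concrete, the sketch below fits a logistic curve to synthetic trial counts. The logistic form, the function names, and all numbers are illustrative assumptions; the paper estimates its distribution from the refined experiment described above.

```python
# Hypothetical sketch: estimating P(short | voltage) from whisker-bridge trials.
# The logistic form and the synthetic counts are assumptions for illustration,
# not the paper's estimated distribution.
import numpy as np
from scipy.optimize import curve_fit

def p_short(v, v50, k):
    # Probability that contact resistance breaks down into a short at voltage v.
    return 1.0 / (1.0 + np.exp(-k * (v - v50)))

voltages = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # test voltages [V]
n_trials = np.full(voltages.size, 50)                      # bridges tested per voltage
n_shorts = np.array([2, 9, 21, 37, 45, 49])                # observed shorts (made up)

popt, _ = curve_fit(p_short, voltages, n_shorts / n_trials, p0=[15.0, 0.3])
print(f"estimated V50 = {popt[0]:.1f} V, slope k = {popt[1]:.2f} per V")
```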
Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target
Goschy, Harriet; Müller, Hermann Joseph
2013-01-01
Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach to the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
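The core claim, that capture is probabilistic because noisy selection-time distributions overlap, can be illustrated in a few lines of simulation. The distribution family and parameters below are assumptions, not the authors' fitted values.

```python
# Toy simulation: selection times drawn from noisy, salience-dependent
# distributions; "capture" = the less-salient distractor finishing first.
import numpy as np

rng = np.random.default_rng(0)

def selection_time(salience, n, noise_sd=20.0):
    # Higher salience -> earlier mean selection time; Gaussian noise on top.
    return rng.normal(loc=200.0 / salience, scale=noise_sd, size=n)

n = 100_000
t_target = selection_time(salience=1.2, n=n)       # more salient target
t_distractor = selection_time(salience=1.0, n=n)   # less salient distractor

capture = np.mean(t_distractor < t_target)
print(f"P(first selection = less-salient distractor) ~ {capture:.2f}")
```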
Model Guided Design and Development Process for an Electronic Health Record Training Program
He, Ze; Marquard, Jenna; Henneman, Elizabeth
2016-01-01
Effective user training is important to ensure electronic health record (EHR) implementation success. Though many previous studies report best practice principles and success and failure stories, current EHR training is largely empirically-based and often lacks theoretical guidance. In addition, the process of training development is underemphasized and underreported. A white paper by the American Medical Informatics Association called for models of user training for clinical information system implementation; existing instructional development models from learning theory provide a basis to meet this call. We describe in this paper our experiences and lessons learned as we adapted several instructional development models to guide our development of EHR user training. Specifically, we focus on two key aspects of this training development: training content and training process. PMID:28269940
Continuing Development of a Hybrid Model (VSH) of the Neutral Thermosphere
NASA Technical Reports Server (NTRS)
Burns, Alan
1996-01-01
We propose to continue the development of a new operational model of neutral thermospheric density, composition, temperatures and winds to improve current engineering environment definitions of the neutral thermosphere. This model will be based on simulations made with the National Center for Atmospheric Research (NCAR) Thermosphere-Ionosphere-Electrodynamic General Circulation Model (TIEGCM) and on empirical data. It will be capable of using real-time geophysical indices or data from ground-based and satellite inputs and will provide neutral variables at specified locations and times. This "hybrid" model will be based on a Vector Spherical Harmonic (VSH) analysis technique developed (over the last 8 years) at the University of Michigan that permits the incorporation of the TIEGCM outputs and data into the model. The VSH model will be a more accurate version of existing models of the neutral thermosphere, and will thus improve density specification for satellites flying in low Earth orbit (LEO).
Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina
2016-01-01
A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.
Sojda, R.S.
2007-01-01
Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long term data useful in empirical evaluation of decision support.
Empirical models of wind conditions on Upper Klamath Lake, Oregon
Buccola, Norman L.; Wood, Tamara M.
2010-01-01
Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. In order to make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis, over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate Adaptive Regression Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded as compared to when the model was forced with observed winds, but probably is adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of the simulated winds in place of observed winds. Simulated tracer concentrations were similar whether the model was forced with simulated or observed winds, and differences between the two results did not accumulate over time.
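As a rough illustration of the gap-filling setup, the sketch below trains a small neural network to predict a mid-lake wind series from shore stations and scores it on a held-out period. The feature layout, network size, and synthetic data are assumptions, not the report's configuration.

```python
# Sketch of ANN gap-filling: learn mid-lake winds from concurrent shore
# observations on the gap-free period, then apply across a data gap.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic stand-ins: u,v wind components at two shore stations.
X = rng.normal(size=(5000, 4))
y = X @ np.array([0.5, 0.3, 0.4, 0.2]) + rng.normal(scale=0.3, size=5000)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X[:4000], y[:4000])                      # train on gap-free period
print("R^2 on held-out period:", round(model.score(X[4000:], y[4000:]), 3))
```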
NASA Astrophysics Data System (ADS)
Ladewski, Barbara G.
Despite considerable exploration of inquiry and reflection in the literatures of science education and teacher education/teacher professional development over the past century, few theoretical or analytical tools exist to characterize these processes within a naturalistic classroom context. In addition, little is known regarding possible developmental trajectories for inquiry or reflection---for teachers or students---as these processes develop within a classroom context over time. In the dissertation, I use a sociocultural lens to explore these issues with an eye to the ways in which teachers and students develop shared sense-making, rather than from the more traditional perspective of individual teacher activity or student learning. The study includes both theoretical and empirical components. Theoretically, I explore the elaborations of sociocultural theory needed to characterize teacher-student shared sense-making as it develops within a classroom context, and, in particular, the role of inquiry and reflection in that sense-making. I develop a sociocultural model of shared sense-making that attempts to represent the dialectic between the individual and the social, through an elaboration of existing sociocultural and psychological constructs, including Vygotsky's zone of proximal development and theory of mind. Using this model as an interpretive framework, I develop a case study that explores teacher-student shared sense-making within a middle-school science classroom across a year of scaffolded introduction to inquiry-based science instruction. The empirical study serves not only as a test case for the theoretical model, but also informs our understanding regarding possible developmental trajectories and important mechanisms supporting and constraining shared sense-making within inquiry-based science classrooms. Theoretical and empirical findings provide support for the idea that perspectival shifts---that is, shifts of point-of-view that alter relationships and proximities of elements within the interaction space---play an important role in shared sense-making. Findings further suggest that the mutually constitutive interaction of inquiry and reflection plays a key role in flexible shared sense-making. Finally, findings lend support to the idea of a dialectical relationship between human models of shared sense-making and human systems of shared sense-making; that is, the ways in which human minds are coordinated is a work in progress, shaping and shaped by human culture.
Patent reform in the United States.
Mills, Ann; Tereskerz, Patti
2010-01-01
The recent financial meltdown has muted the patent reform debate in the United States. But given that President Obama, as well as many members of Congress, support patent reform, we expect the debate to resurface. In this essay, we look carefully at reports from three prestigious organizations which have been enormously influential in the debate. We examine the empirical basis contained in these reports upon which proposed legislative changes are based. We conclude that the empirical data being used to justify the need for reform either has serious methodological limitations or is non-existent. Moreover, we review recent court decisions which have already altered the patent environment, further calling into question whether the limited data that exist are still applicable. The effect of these recent decisions has not been adequately evaluated or assessed. Thus, we recommend further empirical studies to inform public policy as to whether patent reform is necessary.
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
NASA Astrophysics Data System (ADS)
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
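The bias the authors describe is easy to reproduce numerically: generate a spectrum with falloff n = 1.6 and fit it with n fixed at 2, and the recovered corner frequency comes out too high. The Brune-type spectral form and all parameters below are illustrative assumptions, not the study's data.

```python
# Sketch of the model-fitting bias: simulate a source spectrum with falloff
# n = 1.6, then fit it with n fixed at 2 and compare corner frequencies.
import numpy as np
from scipy.optimize import curve_fit

def spectrum(f, omega0, fc, n):
    return omega0 / (1.0 + (f / fc) ** n)

f = np.logspace(-2, 1, 200)                                  # frequency [Hz]
data = spectrum(f, omega0=1.0, fc=0.2, n=1.6)
data *= np.exp(np.random.default_rng(2).normal(scale=0.05, size=f.size))

popt, _ = curve_fit(lambda f, o, fc: np.log(spectrum(f, o, fc, 2.0)),
                    f, np.log(data), p0=[1.0, 0.2], bounds=(1e-6, np.inf))
print(f"true fc = 0.20 Hz; fc recovered with n fixed at 2 = {popt[1]:.2f} Hz")
```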
Zipf's law and the effect of ranking on probability distributions
NASA Astrophysics Data System (ADS)
Günther, R.; Levitin, L.; Schapiro, B.; Wagner, P.
1996-02-01
Ranking procedures are widely used in the description of many different types of complex systems. Zipf's law is one of the most remarkable frequency-rank relationships and has been observed independently in physics, linguistics, biology, demography, etc. We show that ranking plays a crucial role in making it possible to detect empirical relationships in systems that exist in one realization only, even when the statistical ensemble to which the systems belong has a very broad probability distribution. Analytical results and numerical simulations are presented which clarify the relations between the probability distributions and the behavior of expected values for unranked and ranked random variables. This analysis is performed, in particular, for the evolutionary model presented in our previous papers which leads to Zipf's law and reveals the underlying mechanism of this phenomenon in terms of a system with interdependent and interacting components as opposed to the “ideal gas” models suggested by previous researchers. The ranking procedure applied to this model leads to a new, unexpected phenomenon: a characteristic “staircase” behavior of the mean values of the ranked variables (ranked occupation numbers). This result is due to the broadness of the probability distributions for the occupation numbers and does not follow from the “ideal gas” model. Thus, it provides an opportunity, by comparison with empirical data, to obtain evidence as to which model relates to reality.
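A minimal numerical illustration of the ranking effect follows: occupation numbers drawn from one broad distribution carry no rank structure individually, yet their rank-wise means decay systematically. The geometric distribution here is an assumption chosen only for its broadness, standing in for the paper's evolutionary model.

```python
# Toy illustration: ranking within each realization creates structure in the
# rank-wise means even though all components share one broad distribution.
import numpy as np

rng = np.random.default_rng(3)
n_systems, n_components = 2000, 50

samples = rng.geometric(p=0.05, size=(n_systems, n_components))
ranked = np.sort(samples, axis=1)[:, ::-1]         # rank within each realization

print("mean occupation, unranked:", round(samples.mean(), 1))
mean_by_rank = ranked.mean(axis=0)                 # decays with rank
print("mean occupation at ranks 1, 2, 5, 10:",
      [round(mean_by_rank[r - 1], 1) for r in (1, 2, 5, 10)])
```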
ERIC Educational Resources Information Center
Porfeli, Erik J.; Richard, George V.; Savickas, Mark L.
2010-01-01
An empirical measurement model for interest inventory construction uses internal criteria whereas an inductive measurement model uses external criteria. The empirical and inductive measurement models are compared and contrasted, and then the two models are assessed through tests of the effectiveness and economy of scales for the Medical Specialty…
Bridging process-based and empirical approaches to modeling tree growth
Harry T. Valentine; Annikki Makela; Annikki Makela
2005-01-01
The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
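For reference, the simpler Haser-type description mentioned above has a one-line closed form: an isotropic point source with production rate Q, outflow speed v, and photodissociation scale length L. The parameter values in this sketch are illustrative only, not mission-derived.

```python
# Haser-type parent-species coma density: n(r) = Q / (4 pi v r^2) * exp(-r/L).
import numpy as np

def haser_density(r_m, Q=1e26, v=700.0, L=5e5):
    """n(r) [m^-3] at cometocentric distance r_m [m]; Q [s^-1], v [m/s], L [m]."""
    return Q / (4.0 * np.pi * v * r_m**2) * np.exp(-r_m / L)

for r_km in (10, 50, 200):
    print(f"n({r_km} km) = {haser_density(r_km * 1e3):.2e} m^-3")
```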
Debating the role of econophysics.
Rosser, J Barkley
2008-07-01
Research in econophysics has been going on for more than a decade with considerable publicity in some of the leading general science journals. Strong claims have been made by some advocates regarding its reputed superiority to economics, with arguments that in fact the teaching of microeconomics and macroeconomics as they are currently constituted should cease and be replaced by appropriate courses in mathematics, physics, and some other harder sciences. The lack of invariance principles in economics and the failure of economists to deal properly with certain empirical regularities are held against it in this line of argument. Responding arguments address four points: (a) that many econophysicists lack awareness of what has been done in economics and thus sometimes claim a greater degree of originality and innovativeness in their work than is deserved, (b) that econophysicists do not use statistical methodology as rigorous or sophisticated as that of econometricians, (c) that econophysicists search for universal empirical regularities in economics that probably do not exist, and (d) that the theoretical models they adduce to explain empirical phenomena have many difficulties and limits. This article examines the arguments and concludes that nonlinear dynamics and entropy concepts may provide a productive way forward.
German Water Infrastructure in China: Colonial Qingdao 1898-1914.
Kneitz, Agnes
2016-12-01
Within the colorful tapestry of colonial possessions the German empire acquired over the short period of its existence, Qingdao stands out because it fulfilled a different role from settlements in Africa-especially because of its exemplary planned water infrastructure: its technological model, the resulting (public) hygiene, and the adjunct brewery. The National Naval Office (Reichsmarineamt), which oversaw the administration of the future "harbour colony"-at first little more than a small fishing village-enjoyed a remarkable degree of freedom in implementing this project. The German government invested heavily in showing off its techno-cultural achievements to China and the world and thereby massively exploited the natural resources of the mountainous interior. This contribution focuses on Qingdao's water infrastructure and its role in public hygiene and further area development. The article not only uses new empirical evidence to demonstrate that the water infrastructure was an ambivalent "tool of empire"; relying on the concept of "urban metabolism," it also traces the ecological consequences, particularly the landscape transformation of the mountains surrounding the bay and the implications for the region's water resources. When evaluating colonial enterprises, changes in local ecology should play a significantly greater role.
Darwinism and cultural change.
Godfrey-Smith, Peter
2012-08-05
Evolutionary models of cultural change have acquired an important role in attempts to explain the course of human evolution, especially our specialization in knowledge-gathering and intelligent control of environments. In both biological and cultural change, different patterns of explanation become relevant at different 'grains' of analysis and in contexts associated with different explanatory targets. Existing treatments of the evolutionary approach to culture, both positive and negative, underestimate the importance of these distinctions. Close attention to grain of analysis motivates distinctions between three possible modes of cultural evolution, each associated with different empirical assumptions and explanatory roles.
Approach to ignition of tokamak reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigmar, D.J.
1981-02-01
Recent transport modeling results for JET, INTOR, and ETF are reviewed and analyzed with respect to existing uncertainties in the underlying physics, the self-consistency of the very large numerical codes, and the margin for ignition. The codes show ignition to occur in ETF/INTOR-sized machines if empirical scaling can be extrapolated to ion temperatures (and beta values) much higher than those presently achieved, if there is no significant impurity accumulation over the first 7 s, and if the known ideal and resistive MHD instabilities remain controllable for the evolving plasma profiles during ignition startup.
A Service Design Thinking Approach for Stakeholder-Centred eHealth.
Lee, Eunji
2016-01-01
Studies have described the opportunities and challenges of applying service design techniques to health services, but empirical evidence on how such techniques can be implemented in the context of eHealth services is still lacking. This paper presents how a service design thinking approach can be applied to the specification of existing and new eHealth services by supporting evaluation of the current service and facilitating suggestions for the future service. We propose Service Journey Modelling Language and Service Journey Cards to engage stakeholders in the design of eHealth services.
SU-F-T-144: Analytical Closed Form Approximation for Carbon Ion Bragg Curves in Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuomanen, S; Moskvin, V; Farr, J
2016-06-15
Purpose: Semi-empirical modeling is a powerful computational method in radiation dosimetry. A set of approximations exists for proton depth dose distribution (DDD) in water. However, the modeling is more complicated for carbon ions due to fragmentation. This study addresses this by providing and evaluating a new methodology for DDD modeling of carbon ions in water. Methods: The FLUKA Monte Carlo (MC) general-purpose transport code was used for simulation of carbon DDDs for energies of 100-400 MeV in water as reference data for model benchmarking. Using Thomas Bortfeld's closed-form equation approximating proton Bragg curves as a basis, we derived the critical constants for a beam of carbon ions by applying the radiation transport models of Lee et al. and Geiger to our simulated carbon curves. We hypothesized that including a new exponential (κ) residual-distance parameter in Bortfeld's fluence reduction relation would improve DDD modeling for carbon ions. We introduce an additional term, added to Bortfeld's equation, to describe the fragmentation tail. This term accounts for the pre-peak dose from nuclear fragments (NF). In the post-peak region, the NF transport will be treated as new beams utilizing the Glauber model for interaction cross sections and the Abrasion-Ablation fragmentation model. Results: The carbon-beam-specific constants in the developed model were determined to be p = 1.75, β = 0.008 cm^-1, γ = 0.6, α = 0.0007 cm MeV^-p, σmono = 0.08, and the new exponential parameter κ = 0.55. This produced a close match for the plateau part of the curve (maximum deviation 6.37%). Conclusion: The derived semi-empirical model provides an accurate approximation of the MC-simulated clinical carbon DDDs. This is the first direct semi-empirical simulation for the dosimetry of therapeutic carbon ions. The accurate modeling of the NF tail in the carbon DDD will provide key insight into distal-edge dose deposition formation.
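A heavily simplified sketch of the model's ingredients follows: Bortfeld's range-energy relation R0 = αE^p yields the residual range and stopping power, and a linear fluence-reduction factor approximates nuclear removal. It uses the constants quoted above but omits the κ residual-distance term, energy straggling (σmono), and the fragmentation tail, so it is emphatically not the paper's closed form.

```python
# Hedged sketch of a Bortfeld-style depth-dose curve for carbon, using the
# quoted constants; NOT the paper's full model (no kappa term, no straggling,
# no nuclear-fragment tail).
import numpy as np

p, alpha, beta = 1.75, 0.0007, 0.008   # [-], [cm MeV^-p], [cm^-1]

def bragg_curve(z_cm, energy_mev):
    r0 = alpha * energy_mev**p                    # range in water [cm]
    z = np.minimum(z_cm, r0 - 1e-6)
    stopping = (r0 - z) ** (1.0 / p - 1.0) / (p * alpha ** (1.0 / p))
    fluence = (1.0 + beta * (r0 - z)) / (1.0 + beta * r0)
    dose = np.where(z_cm < r0, fluence * stopping, 0.0)
    return dose / dose[0]                         # assumes z_cm starts at 0

depths = np.linspace(0.0, 28.0, 8)                # [cm]
print(np.round(bragg_curve(depths, energy_mev=400.0), 2))
```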
Fiorentine, Robert; Hillhouse, Maureen P
2004-01-01
Although previous research provided empirical support for the main assumptions of the Addicted-Self (A-S) Model of recovery, it is not known whether the model predicts recovery for various gender, ethnic, age, and drug preference populations. It may be that the model predicts recovery only for some groups of addicts and should not be viewed as a general theory of the recovery process. Addressing this concern using data from the Los Angeles Target Cities Drug Treatment Enhancement Project, it was determined that only trivial population differences exist in the primary variables associated with the A-S Model. The A-S Model predicts abstinence with about the same degree of accuracy and parsimony for all populations. The findings indicate that the A-S Model is a general theory of drug and alcohol addictive behavior cessation.
A watershed model of individual differences in fluid intelligence.
Kievit, Rogier A; Davis, Simon W; Griffiths, John; Correia, Marta M; Cam-Can; Henson, Richard N
2016-10-01
Fluid intelligence is a crucial cognitive ability that predicts key life outcomes across the lifespan. Strong empirical links exist between fluid intelligence and processing speed on the one hand, and white matter integrity and processing speed on the other. We propose a watershed model that integrates these three explanatory levels in a principled manner in a single statistical model, with processing speed and white matter figuring as intermediate endophenotypes. We fit this model in a large (N=555) adult lifespan cohort from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) using multiple measures of processing speed, white matter health and fluid intelligence. The model fit the data well, outperforming competing models and providing evidence for a many-to-one mapping between white matter integrity, processing speed and fluid intelligence. The model can be naturally extended to integrate other cognitive domains, endophenotypes and genotypes. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
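The watershed structure can be caricatured as a chain of regressions: several white-matter measures feed processing speed, which in turn feeds fluid intelligence. The two-stage OLS below on synthetic data only illustrates that many-to-one layering; the paper's actual analysis is a latent-variable SEM with multiple indicators per level.

```python
# Two-stage caricature of the watershed model: WM tracts -> speed -> gf.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 555
wm = rng.normal(size=(n, 3))                      # three white-matter measures
speed = wm @ np.array([0.4, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)
gf = 0.6 * speed + rng.normal(scale=0.5, size=n)  # fluid intelligence

stage1 = sm.OLS(speed, sm.add_constant(wm)).fit()
stage2 = sm.OLS(gf, sm.add_constant(speed)).fit()
print("WM -> speed R^2:", round(stage1.rsquared, 2),
      "| speed -> gf R^2:", round(stage2.rsquared, 2))
```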
An empirical analysis of executive behaviour with hospital executive information systems in Taiwan.
Huang, Wei-Min
2013-01-01
Existing health information systems largely support only the daily operations of a medical centre, and are unable to generate the information required by executives for decision-making. Building on past research concerning information retrieval behaviour and learning through mental models, this study examines the use of information systems by hospital executives in medical centres. It uses a structural equation model to help find ways hospital executives might use information systems more effectively. The results show that computer self-efficacy directly affects the maintenance of mental models, and that system characteristics directly impact learning styles and information retrieval behaviour. Other results include the significant impact of perceived environmental uncertainty on scan searches; of information retrieval behaviour and focused searches on mental models and perceived efficiency; of scan searches on mental model building; of learning styles and model building on perceived efficiency; and finally of mental model maintenance on perceived efficiency and effectiveness.
A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.
Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E
2017-12-01
The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.
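Latent profile analysis is closely related to fitting Gaussian mixtures to the indicator variables and comparing information criteria across candidate profile counts. The sketch below mimics that enumeration step with synthetic stand-ins for the six predisposing measures; the real study used dedicated LPA software and its own data.

```python
# Profile enumeration via Gaussian mixtures: compare BIC across 1..6 profiles.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Stand-ins for depression, anxiety, impulsivity, hyperactivity,
# antisocial/aggressive behavior, drug problems (z-scored).
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(60, 6))
               for m in (-0.5, 0.0, 0.8, 1.5)])

for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(f"{k} profiles: BIC = {gm.bic(X):,.0f}")
```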
Sticky knowledge: A possible model for investigating implementation in healthcare contexts
Elwyn, Glyn; Taubert, Mark; Kowalczuk, Jenny
2007-01-01
Background: In health care, a well-recognized gap exists between what we know should be done based on accumulated evidence and what we actually do in practice. A body of empirical literature shows organizations, like individuals, are difficult to change. In the business literature, knowledge management and transfer has become an established area of theory and practice, whilst in healthcare it is only starting to establish a firm footing. Knowledge has become a business resource, and knowledge management theorists and practitioners have examined how knowledge moves in organisations, how it is shared, and how the return on knowledge capital can be maximised to create competitive advantage. New models are being considered, and we wanted to explore the applicability of one of these conceptual models to the implementation of evidence-based practice in healthcare systems. Methods: The application of a conceptual model called sticky knowledge, based on an integration of communication theory and knowledge transfer milestones, to a scenario of attempting knowledge transfer in primary care. Results: We describe Szulanski's model, the empirical work he conducted, and illustrate its potential applicability with a hypothetical healthcare example based on improving palliative care services. We follow a doctor through two different posts and analyse aspects of knowledge transfer in different primary care settings. The factors included in the sticky knowledge model include: causal ambiguity, unproven knowledge, motivation of source, credibility of source, recipient motivation, recipient absorptive capacity, recipient retentive capacity, barren organisational context, and arduous relationship between source and recipient. We found that we could apply all these factors to the difficulty of implementing new knowledge into practice in primary care settings. Discussion: Szulanski argues that knowledge factors play a greater role in the success or failure of a knowledge transfer than has been suspected, and we consider that this conjecture requires further empirical work in healthcare settings. PMID:18096040
Empirically Understanding Can Make Problems Go Away: The Case of the Chinese Room
ERIC Educational Resources Information Center
Overskeid, Geir
2005-01-01
The many authors debating whether computers can understand often fail to clarify what understanding is, and no agreement exists on this important issue. In his Chinese room argument, Searle (1980) claims that computers running formal programs can never understand. I discuss Searle's claim based on a definition of understanding that is empirical,…
Multi-Product Total Cost Functions for Higher Education: The Case of Chinese Research Universities
ERIC Educational Resources Information Center
Longlong, Hou; Fengliang, Li; Weifang, Min
2009-01-01
This paper empirically investigates the economies of scale and economies of scope for the Chinese research universities by employing the flexible fixed cost quadratic (FFCQ) function. The empirical results show that both economies of scale and economies of scope exist in the Chinese higher education system and support the common belief of…
Identifying Empirically Supported Treatments for Pica in Individuals with Intellectual Disabilities
ERIC Educational Resources Information Center
Hagopian, Louis P.; Rooker, Griffin W.; Rolider, Natalie U.
2011-01-01
The purpose of the current study was to critically examine the existing literature on the treatment of pica displayed by individuals with intellectual disabilities. Criteria for empirically supported treatments as described by Divisions 12 and 16 of APA, and adapted for studies employing single-case designs were used to review this body of…
ERIC Educational Resources Information Center
Merrick, K. E.
2010-01-01
This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course…
ERIC Educational Resources Information Center
Ghahramanlou-Holloway, Marjan; Cox, Daniel W.; Greene, Farrah N.
2012-01-01
To date, no empirically based inpatient intervention for individuals who have attempted suicide exists. We present an overview of a novel psychotherapeutic approach, Post-Admission Cognitive Therapy (PACT), currently under development and empirical testing for inpatients who have been admitted for a recent suicide attempt. PACT is adapted from an…
ERIC Educational Resources Information Center
Dababnah, Sarah; Parish, Susan L.
2016-01-01
This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…
Review of Literature on the Career Transitions of Performing Artists Pursuing Career Development
ERIC Educational Resources Information Center
Middleton, Jerry C.; Middleton, Jason A.
2017-01-01
Few studies in the existent empirical literature explore the career transitions of performing artists. First, we provide working definitions of career transition and of a performing artist. Thereafter, we peruse empirical studies, from the 1980s onward, that delineate the career transition process in terms of three main types of transition:…
ERIC Educational Resources Information Center
Holland, Sally; Tannock, Stuart; Collicott, Hayley
2011-01-01
The paper reviews public discourses and research on the safeguarding of other people's children by adults at the neighbourhood level. There is much empirical evidence pointing to the existence of thriving informal communities of support and informal childcare for parents across the social classes. There appears to be less empirical evidence…
DOE Office of Scientific and Technical Information (OSTI.GOV)
P., Henry
2008-11-20
A recent article in which John Searle claims to refute dualism is examined from a scientific perspective. John Searle begins his recent article 'Dualism Revisited' by stating his belief that the philosophical problem of consciousness has a scientific solution. He then claims to refute dualism. It is therefore appropriate to examine his arguments against dualism from a scientific perspective. Scientific physical theories contain two kinds of descriptions: (1) Descriptions of our empirical findings, expressed in an everyday language that allows us to communicate to each other our sensory experiences pertaining to what we have done and what we have learned; and (2) Descriptions of a theoretical model, expressed in a mathematical language that allows us to communicate to each other certain ideas that exist in our mathematical imaginations, and that are believed to represent, within our streams of consciousness, certain aspects of reality that we deem to exist independently of their being perceived by any human observer. These two parts of our scientific description correspond to the two aspects of our general contemporary dualistic understanding of the total reality in which we are imbedded, namely the empirical-mental aspect and the theoretical-physical aspect. The duality question is whether this general dualistic understanding of ourselves should be regarded as false in some important philosophical or scientific sense.
Average Associations Between Sexual Desire, Testosterone, and Stress in Women and Men Over Time.
Raisanen, Jessica C; Chadwick, Sara B; Michalak, Nicholas; van Anders, Sari M
2018-05-29
Sexual desire and testosterone are widely assumed to be directly and positively linked to each other despite the lack of supporting empirical evidence. The literature that does exist is mixed, which may result from a conflation of solitary and dyadic desire, and the exclusion of contextual variables, like stress, known to be relevant. Here, we use the Steroid/Peptide Theory of Social Bonds as a framework for examining how testosterone, solitary and partnered desire, and stress are linked over time. To do so, we collected saliva samples (for testosterone and cortisol) and measured desire as well as other variables via questionnaires over nine monthly sessions in 78 women and 79 men. Linear mixed models showed that testosterone negatively predicted partnered desire in women but not men. Stress moderated associations between testosterone and solitary desire in both women and men, but differently: At lower levels of stress, higher average testosterone corresponded to higher average solitary desire for men, but lower solitary desire on average for women. Similarly, for partnered desire, higher perceived stress predicted lower desire for women, but higher desire for men. We conclude by discussing the ways that these results both counter presumptions about testosterone and desire but fit with the existing literature and theory, and highlight the empirical importance of stress and gender norms.
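Structurally, the analysis is a linear mixed model with monthly sessions nested in participants and a testosterone-by-stress interaction. A sketch of that structure on synthetic data follows; variable names, effect sizes, and the data itself are stand-ins, not the study's.

```python
# Mixed-model sketch: repeated monthly measures nested in participants.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_sess = 40, 9
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_sess),
    "testosterone": rng.normal(size=n_subj * n_sess),
    "stress": rng.normal(size=n_subj * n_sess),
})
subj_intercepts = np.repeat(rng.normal(scale=0.5, size=n_subj), n_sess)
df["desire"] = (-0.2 * df["testosterone"]
                + 0.1 * df["testosterone"] * df["stress"]
                + subj_intercepts + rng.normal(size=len(df)))

fit = smf.mixedlm("desire ~ testosterone * stress", df,
                  groups=df["subject"]).fit()
print(fit.summary())
```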
Froissart, R.; Doumayrou, J.; Vuillaume, F.; Alizon, S.; Michalakis, Y.
2010-01-01
The adaptive hypothesis invoked to explain why parasites harm their hosts is known as the trade-off hypothesis, which states that increased parasite transmission comes at the cost of shorter infection duration. This correlation arises because both transmission and disease-induced mortality (i.e. virulence) are increasing functions of parasite within-host density. There is, however, a glaring lack of empirical data to support this hypothesis. Here, we review empirical investigations reporting to what extent within-host viral accumulation determines the transmission rate and the virulence of vector-borne plant viruses. Studies suggest that the correlation between within-plant viral accumulation and transmission rate of natural isolates is positive. Unfortunately, results on the correlation between viral accumulation and virulence are very scarce. We found only very few appropriate studies testing such a correlation, themselves limited by the fact that they use symptoms as a proxy for virulence and are based on very few viral genotypes. Overall, the available evidence does not allow us to confirm or refute the existence of a transmission–virulence trade-off for vector-borne plant viruses. We discuss the type of data that should be collected and how theoretical models can help us refine testable predictions of virulence evolution. PMID:20478886
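The trade-off hypothesis itself is compact enough to state numerically: if transmission gains saturate with within-host density (and hence virulence) while infection duration shrinks, lifetime transmission is maximized at an intermediate virulence. The saturating transmission form below is a textbook assumption, not an empirical fit from the review.

```python
# Toy trade-off: beta(alpha) saturates, duration ~ 1/(mu + alpha), so the
# product beta/(mu + alpha) peaks at intermediate virulence alpha.
import numpy as np

mu, b, c = 0.1, 1.0, 0.5               # background mortality, scale, half-saturation

alpha = np.linspace(0.01, 3.0, 1000)   # disease-induced mortality (virulence)
beta = b * alpha / (c + alpha)         # transmission rate, saturating in alpha
lifetime = beta / (mu + alpha)         # expected transmission per infection

best = alpha[np.argmax(lifetime)]
print(f"optimal virulence ~ {best:.2f} (interior: harming the host pays)")
```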
ERIC Educational Resources Information Center
Brady, Kristine L.; Eisler, Richard M.
1995-01-01
Summarizes eight studies on gender bias in college classrooms, examining the range of variables assessed and adequacy of evidence supporting the existence of bias. Inconsistent findings and significant methodological flaws in existing literature suggest that more empirical research is needed to investigate the existence of gender bias in college…
Pelowski, Matthew; Markey, Patrick S.; Lauring, Jon O.; Leder, Helmut
2016-01-01
The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate, and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art. PMID:27199697
Effects of deterministic and random refuge in a prey-predator model with parasite infection.
Mukhopadhyay, B; Bhattacharyya, R
2012-09-01
Most natural ecosystem populations suffer from various infectious diseases, and the resulting host-pathogen dynamics depend on the host's characteristics. On the other hand, empirical evidence shows that for most host-pathogen systems, a part of the host population always forms a refuge. To study the role of refuge in the host-pathogen interaction, we study a predator-prey-pathogen model where the susceptible and the infected prey can enter refugia of constant size to evade predator attack. The stability aspects of the model system are investigated from a local and global perspective. The study reveals that the refuge sizes for the susceptible and the infected prey are the key parameters that control possible predator extinction as well as species co-existence. Next we perform a global study of the model system using Lyapunov functions and show the existence of a global attractor. Finally we perform a stochastic extension of the basic model to study the phenomenon of random refuge arising from various intrinsic, habitat-related and environmental factors. The stochastic model is analyzed for exponential mean square stability. Numerical study of the stochastic model shows that increasing the refuge rates has a stabilizing effect on the stochastic dynamics. Copyright © 2012 Elsevier Inc. All rights reserved.
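A toy system in the spirit of the model described (not the paper's exact equations) shows where constant refuges enter: they subtract a fixed number of susceptible and infected prey from what the predator can reach. All functional forms and parameter values below are assumptions.

```python
# Illustrative predator-prey-pathogen ODEs with constant refuges m_s, m_i.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=10.0, lam=0.4, a=0.5, e=0.3, d=0.2, mu=0.3,
        m_s=2.0, m_i=1.0):
    s, i, p = y
    avail_s = max(s - m_s, 0.0)                   # prey outside the refuge
    avail_i = max(i - m_i, 0.0)
    ds = r * s * (1 - (s + i) / K) - lam * s * i - a * avail_s * p
    di = lam * s * i - a * avail_i * p - mu * i
    dp = e * a * (avail_s + avail_i) * p - d * p
    return [ds, di, dp]

sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 1.0, 1.0])
print("state at t = 200 (S, I, P):", np.round(sol.y[:, -1], 2))
```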
Changing-Look Quasars: Radical Changes in Accretion Rate?
NASA Astrophysics Data System (ADS)
Green, Paul
2017-09-01
Over a dozen 'changing look quasars' (CLQs) that switch between quasar and galaxy states have recently been discovered. CLQ transitions have variously been attributed to tidal disruption events, significant changes in intrinsic absorption, or in accretion rate, but all these models suffer strong theoretical or empirical challenges. We propose Chandra ToO observations of strong CLQ candidates with existing X-ray observations, triggered after confirmation via optical imaging and spectroscopy. Our approved Cycle 18 CLQ ToO program is as yet untriggered, so we propose again here to achieve our primary goals: to directly probe CLQ changes in nuclear X-ray luminosity, intrinsic absorption, and accretion rate, adding information crucial to distinguish between models.
Fear of Crime in the Sanctuary: Comparing American and Ghanaian University Students' Fearfulness.
Boateng, Francis D
2018-02-01
While much is known about fear of crime in the West, little is known about how fearfulness of crime develops in non-Western societies, especially among university students. Representing the first attempt to empirically compare levels of fear of crime between Ghanaian and U.S. college students, this article examined students' levels of fear of crime on campus, and tested the applicability of two evolving models of fear of crime-the vulnerability and reassurance models-using comparative data. The general finding is that Ghanaian and U.S. college students differ in terms of their rates of fearfulness on campus. This significant difference adds to the already existing differences between the two countries.
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
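The kernel weighting functions referred to above all share one job: down-weighting observations by distance from the regression point before each local fit. Two standard choices are sketched below, in Python rather than SAS for brevity; the bandwidth h would normally be chosen by cross-validation, as the macros' bandwidth-selection step does.

```python
# Two common geographic kernels: Gaussian (all points weighted) and bisquare
# (compact support: zero weight beyond the bandwidth h).
import numpy as np

def gaussian_kernel(d, h):
    return np.exp(-0.5 * (d / h) ** 2)

def bisquare_kernel(d, h):
    w = (1.0 - (d / h) ** 2) ** 2
    return np.where(d < h, w, 0.0)

d = np.array([0.0, 0.5, 1.0, 2.0, 5.0])           # distances to regression point
print("gaussian :", np.round(gaussian_kernel(d, h=2.0), 3))
print("bisquare :", np.round(bisquare_kernel(d, h=2.0), 3))
```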
NASA Astrophysics Data System (ADS)
Serov, E. A.; Odintsova, T. A.; Tretyakov, M. Yu.; Semenov, V. E.
2017-05-01
Analysis of the continuum absorption in water vapor at room temperature within the purely rotational and fundamental ro-vibrational bands shows that a significant part (up to a half) of the observed absorption cannot be explained within the framework of the existing concepts of the continuum. Neither of the two most prominent mechanisms of continuum origin, namely the far wings of monomer lines and water dimers, can adequately reproduce the currently available experimental data. We propose a new approach to developing a physically based model of the continuum. It is demonstrated that water dimers and wings of monomer lines may contribute equally to the continuum within the bands, and their contribution should be taken into account in the continuum model. We propose a physical mechanism giving the missing justification for the super-Lorentzian behavior of the intermediate line wing. The qualitative validation of the proposed approach is given on the basis of a simple empirical model. The obtained results directly indicate the necessity of reconsidering the existing line wing theory and can guide that reconsideration.
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
Liu, Fengyun; Liu, Deqiang; Malekian, Reza; Li, Zhixiong; Wang, Deqing
2017-01-01
Employing the fundamental value of real estate determined by the economic fundamentals, a measurement model for real estate bubble size is established based on panel data analysis. Using this model, real estate bubble sizes in various regions of Japan in the late 1980s and in recent China are examined. Two panel models for Japan provide results consistent with the reality of the 1980s, when a commercial land price bubble appeared in most areas and was much larger than that of residential land. This provides evidence of the reliability of our model, overcoming a limitation of the existing literature on this method. The same models for housing prices in China at both the provincial and city levels show that, contrary to concerns about a serious housing price bubble in China, recent over-valuation in China is much smaller than that in 1980s Japan.
Exploring the patterns and evolution of self-organized urban street networks through modeling
NASA Astrophysics Data System (ADS)
Rui, Yikang; Ban, Yifang; Wang, Jiechen; Haas, Jan
2013-03-01
As one of the most important subsystems in cities, urban street networks have recently been well studied using the approach of complex networks. This paper proposes a growing model for self-organized urban street networks. The model involves a competition among new centers with different values of attraction radius and a local optimization principle involving both geometrical and topological factors. We find that, as the model grows, local optimization in the connection process and an appropriate probability for loop construction reflect well the evolution strategy in real-world cities. Moreover, different values of the attraction radius in the competition among centers lead to morphological changes in patterns, including urban network, polycentric and monocentric structures. The model succeeds in reproducing a large diversity of road network patterns by varying parameters. The similarity between the properties of our model and empirical results implies that a simple universal growth mechanism exists in self-organized cities.
A Decision Model for Supporting Task Allocation Processes in Global Software Development
NASA Astrophysics Data System (ADS)
Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter
Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefits, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.
Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Templeton, D C; Harris, D B
The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
St-Pierre, Renée A; Temcheff, Caroline E; Derevensky, Jeffrey L; Gupta, Rina
2015-12-01
Given its serious implications for psychological and socio-emotional health, the prevention of problem gambling among adolescents is increasingly acknowledged as an area requiring attention. The theory of planned behavior (TPB) is a well-established model of behavior change that has been studied in the development and evaluation of primary preventive interventions aimed at modifying cognitions and behavior. However, the utility of the TPB has yet to be explored as a framework for the development of adolescent problem gambling prevention initiatives. This paper first examines the existing empirical literature addressing the effectiveness of school-based primary prevention programs for adolescent gambling. Given the limitations of existing programs, we then present a conceptual framework for the integration of the TPB in the development of effective problem gambling preventive interventions. The paper describes the TPB, demonstrates how the framework has been applied to gambling behavior, and reviews the strengths and limitations of the model for the design of primary prevention initiatives targeting adolescent risk and addictive behaviors, including adolescent gambling.
Hollows, Kerrilee; Fritzon, Katarina
2012-10-01
This study aimed to address the limitations of the existing genocide literature with the development of an empirically based classification system. Using Shye's (1985) action systems model, it was hypothesized that four types of perpetrators would exist and would be distinguishable by differences in the sources and target of individual criminal actions. Court transcripts from 80 perpetrators sentenced by the international courts were subject to content analysis and revealed 39 offense action variables, 17 perpetrator characteristic variables, and 6 perpetrator motive variables. A smallest space analysis using the Jaccard coefficient of association was conducted on the offense variables. The results supported the proposed framework, producing four distinct types of genocidal perpetrators. Correlational analyses were then conducted to examine the relationships between each of the perpetrator types and the remaining variables. The results of those correlations provided further support for the proposed framework. The implications of these findings are discussed.
A review of ion and metal pollutants in urban green water infrastructures.
Kabir, Md Imran; Daly, Edoardo; Maggi, Federico
2014-02-01
In urban environments, the breakdown of chemicals and pollutants, especially ions and metal compounds, can be favoured by green water infrastructures (GWIs). The overall aim of this review is to set the basis for modelling GWIs using deterministic approaches in contrast to empirical ones. With a better picture of chemical and pollutant inputs and an improved understanding of the hydrological and biogeochemical processes affecting these pollutants, GWIs could be designed to efficiently retain these pollutants for site-specific meteorological patterns and pollutant loads. To this end, we surveyed the existing literature to retrieve a comprehensive dataset of anions and cations, and alkaline and transition metal pollutants entering urban environments. Based on this survey, we assessed the pollution load and ecological risk indexes for metals. The existing literature was then surveyed to review the metal retention efficiency of GWIs, and possible biogeochemical processes related to inorganic metal compounds were proposed that could be integrated into biogeochemical models of GWIs.
Dunn, Erin C.; Masyn, Katherine E.; Yudron, Monica; Jones, Stephanie M.; Subramanian, S.V.
2014-01-01
The observation that features of the social environment, including family, school, and neighborhood characteristics, are associated with individual-level outcomes has spurred the development of dozens of multilevel or ecological theoretical frameworks in epidemiology, public health, psychology, and sociology, among other disciplines. Despite the widespread use of such theories in etiological, intervention, and policy studies, challenges remain in bridging multilevel theory and empirical research. This paper set out to synthesize these challenges and provide specific examples of methodological and analytical strategies researchers are using to gain a more nuanced understanding of the social determinants of psychiatric disorders, with a focus on children’s mental health. To accomplish this goal, we begin by describing multilevel theories, defining their core elements, and discussing what these theories suggest is needed in empirical work. In the second part, we outline the main challenges researchers face in translating multilevel theory into research. These challenges are presented for each stage of the research process. In the third section, we describe two methods being used as alternatives to traditional multilevel modeling techniques to better bridge multilevel theory and multilevel research. These are: (1) multilevel factor analysis and multilevel structural equation modeling; and (2) dynamic systems approaches. Through its review of multilevel theory, assessment of existing strategies, and examination of emerging methodologies, this paper offers a framework to evaluate and guide empirical studies on the social determinants of child psychiatric disorders as well as health across the lifecourse.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu
2016-04-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
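For reference, the simpler Haser picture that the empirical model improves upon treats the coma as a spherically symmetric outflow with photodestruction; in its standard form (quoted here for context, not taken from this abstract) the local neutral density is

```latex
n(r) = \frac{Q}{4\pi v r^{2}}\, e^{-r/(v\tau)}
```

where Q is the production rate, v the outflow speed, and \tau the photodissociation lifetime. The empirical model described above replaces this isotropic picture with explicit dependence on local time and declination in a comet-centered, sun-fixed frame.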
Shields, Michele; Kestenbaum, Allison; Dunn, Laura B
2015-02-01
Distinguishing the unique contributions and roles of chaplains as members of healthcare teams requires the fundamental step of articulating and critically evaluating conceptual models that guide practice. However, there is a paucity of well-described spiritual assessment models. Even fewer of the extant models prescribe interventions and describe desired outcomes corresponding to spiritual assessments. This article describes the development, theoretical underpinnings, and key components of one model, called the Spiritual Assessment and Intervention Model (Spiritual AIM). Three cases are presented that illustrate Spiritual AIM in practice. Spiritual AIM was developed over the past 20 years to address the limitations of existing models. The model evolved based in part on observing how different people respond to a health crisis and what kinds of spiritual needs appear to emerge most prominently during a health crisis. Spiritual AIM provides a conceptual framework for the chaplain to diagnose an individual's primary unmet spiritual need, devise and implement a plan for addressing this need through embodiment/relationship, and articulate and evaluate the desired and actual outcome of the intervention. Spiritual AIM's multidisciplinary theory is consistent with the goals of professional chaplaincy training and practice, which emphasize the integration of theology, recognition of interpersonal dynamics, cultural humility and competence, ethics, and theories of human development. Further conceptual and empirical work is needed to systematically refine, evaluate, and disseminate well-articulated spiritual assessment models such as Spiritual AIM. This foundational work is vital to advancing chaplaincy as a theoretically grounded and empirically rigorous healthcare profession.
Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties
NASA Technical Reports Server (NTRS)
Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.
2015-01-01
For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibrating to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-II measurements will fill a critically important gap in the measurement database. The emergence of AMS-II measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.
An ideal observer analysis of visual working memory.
Sims, Chris R; Jacobs, Robert A; Knill, David C
2012-10-01
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis (one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit) provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM.
An Ideal Observer Analysis of Visual Working Memory
Sims, Chris R.; Jacobs, Robert A.; Knill, David C.
2013-01-01
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this paper we develop an ideal observer analysis of human visual working memory, by deriving the expected behavior of an optimally performing, but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in two empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (for example, how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one which allows variability in the number of stored memory representations, but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data, and further offers a principled re-interpretation of existing models of visual working memory.
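To make the rate-distortion framing concrete, a standard textbook result (quoted as an illustration; it is not itself a result of these papers) is the rate-distortion function of a Gaussian source with variance \sigma^2 under squared-error distortion:

```latex
R(D) = \max\!\left(\tfrac{1}{2}\log_{2}\frac{\sigma^{2}}{D},\; 0\right)
\qquad\Longleftrightarrow\qquad
D(R) = \sigma^{2}\, 2^{-2R}
```

Spreading a fixed capacity R over k items leaves R/k bits per item, so per-item error grows like \sigma^{2} 2^{-2R/k}; this is the kind of set-size effect, arising without any fixed item limit, that the ideal observer analysis predicts.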
Andrews, Tessa C.; Lemons, Paula P.
2015-01-01
Despite many calls for undergraduate biology instructors to incorporate active learning into lecture courses, few studies have focused on what it takes for instructors to make this change. We sought to investigate the process of adopting and sustaining active-learning instruction. As a framework for our research, we used the innovation-decision model, a generalized model of how individuals adopt innovations. We interviewed 17 biology instructors who were attempting to implement case study teaching and conducted qualitative text analysis on interview data. The overarching theme that emerged from our analysis was that instructors prioritized personal experience—rather than empirical evidence—in decisions regarding case study teaching. We identified personal experiences that promote case study teaching, such as anecdotal observations of student outcomes, and those that hinder case study teaching, such as insufficient teaching skills. By analyzing the differences between experienced and new case study instructors, we discovered that new case study instructors need support to deal with unsupportive colleagues and to develop the skill set needed for an active-learning classroom. We generated hypotheses that are grounded in our data about effectively supporting instructors in adopting and sustaining active-learning strategies. We also synthesized our findings with existing literature to tailor the innovation-decision model.
Maduku, Daniel K
2017-01-01
The book publishing industry is going through radical transformations that are driven by recent developments in information systems (IS). E-books are merely one of these developments. Notwithstanding the projections in the growth of e-book use, producers of these products contend with the issue of building user retention and loyalty through continued use. Extending the technology acceptance model (TAM), this study examined the impact of factors of perceived usefulness, perceived ease of use, social influence, and facilitating conditions on e-book continuance intention among users. The subjects of this study were 317 students from five higher institutions of learning in South Africa. Empirical testing of the research model was carried out using structural equation modeling. The results indicate that 42 percent of the variance in e-book users' continuance intention is explained by perceived usefulness, perceived ease of use, and social influence. Interestingly, facilitating conditions have an influence, although indirectly, through perceived usefulness, perceived ease of use, and social influence. The study not only contributes to the existing IS literature by extending the TAM to explain continuance intention in the e-book IS domain in a developing country but also makes recommendations to practitioners who attempt to foster continuous use of this technology.
The mass discrepancy acceleration relation in a ΛCDM context
NASA Astrophysics Data System (ADS)
Di Cintio, Arianna; Lelli, Federico
2016-02-01
The mass discrepancy acceleration relation (MDAR) describes the coupling between baryons and dark matter (DM) in galaxies: the ratio of total-to-baryonic mass at a given radius anticorrelates with the acceleration due to baryons. The MDAR has been seen as a challenge to the Λ cold dark matter (ΛCDM) galaxy formation model, while it can be explained by Modified Newtonian Dynamics. In this Letter, we show that the MDAR arises in a ΛCDM cosmology once observed galaxy scaling relations are taken into account. We build semi-empirical models based on ΛCDM haloes, with and without the inclusion of baryonic effects, coupled to empirically motivated structural relations. Our models can reproduce the MDAR: specifically, a mass-dependent density profile for DM haloes can fully account for the observed MDAR shape, while a universal profile shows a discrepancy with the MDAR of dwarf galaxies with M⋆ < 109.5 M⊙, a further indication suggesting the existence of DM cores. Additionally, we reproduce slope and normalization of the baryonic Tully-Fisher relation (BTFR) with 0.17 dex scatter. These results imply that in ΛCDM (I) the MDAR is driven by structural scaling relations of galaxies and DM density profile shapes, and (II) the baryonic fractions determined by the BTFR are consistent with those inferred from abundance-matching studies.
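In the notation usual in this literature (a standard definition, not specific to this Letter), the mass discrepancy at radius r is

```latex
D(r) \equiv \frac{M_{\rm tot}(<r)}{M_{\rm bar}(<r)}
      = \frac{g_{\rm obs}(r)}{g_{\rm bar}(r)},
\qquad
g_{\rm bar}(r) = \frac{V_{\rm bar}^{2}(r)}{r}
```

and the MDAR is the observed anticorrelation between D and g_bar: where the baryons alone generate little acceleration, the inferred dark matter contribution dominates.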
An Empirical Study on the Acquisition of English Rising Tone by Chinese EFL Learners
ERIC Educational Resources Information Center
Chen, Wenkai
2013-01-01
Intonation is the melody and soul of speech, and plays an important role in oral communication. Nevertheless, the acquisition of English intonation by Chinese EFL learners is far from satisfactory. An empirical study finds that the main problems in acquiring the English rising tone are improper placement of nucleus stress, failure…
Back Eddies of Learning in the Recognition of Prior Learning: A Case Study
ERIC Educational Resources Information Center
Peruniak, Geoff; Powell, Rick
2007-01-01
The limited research that exists in the area of prior learning assessment (PLA) has tended to be descriptive and conceptual in nature. Where empirical studies have been done, they have focussed mainly on PLA as a means of credentialing rather than as a learning experience. Furthermore, there has been very little empirical research into the…
An empirical relationship for path diversity gain. [earth-space microwave propagation attenuation
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1976-01-01
Existing 15.3 and 16 GHz path diversity gain data for earth-space propagation paths are used to generate an empirical relationship for diversity gain as a function of terminal separation distance and single terminal fade depth. The agreement between the resulting closed form expression and the data is within 0.75 dB in all cases.
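The closed-form expression referred to is of a saturating-exponential type. As an indication of its general shape only (the form below is a sketch; the paper's fitted coefficients are not reproduced here),

```latex
G(d, F) \approx a(F)\left(1 - e^{-b(F)\, d}\right)
```

where d is the terminal separation distance, F the single-terminal fade depth, and a(F), b(F) coefficients fitted to the measured data: diversity gain saturates once the terminals are separated far enough that their rain fades decorrelate.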
ERIC Educational Resources Information Center
Mahmood, Khalid
2016-01-01
This systematic review has analyzed 53 English language studies that assessed and compared peoples' self-reported and demonstrated information literacy (IL) skills. The objective was to collect empirical evidence on the existence of Dunning-Kruger Effect in the area of information literacy. The findings clearly show that this theory works in this…
Stopping Distances: An Excellent Example of Empirical Modelling.
ERIC Educational Resources Information Center
Lawson, D. A.; Tabor, J. H.
2001-01-01
Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…
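The exercise works because elementary physics suggests the functional form to fit: a reaction (thinking) distance linear in speed plus a braking distance quadratic in speed. A minimal version of the model (standard kinematics, not a formula quoted from the article) is

```latex
d_{\text{stop}} = v\, t_{r} + \frac{v^{2}}{2\mu g}
```

with reaction time t_r, tyre-road friction coefficient \mu, and gravitational acceleration g, so students can fit d = a v + b v^2 to published stopping-distance tables and give the recovered a and b a physical interpretation.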
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion.
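The balance comparisons above rely on an imbalance statistic; the Colless index is the usual choice. A minimal Python sketch (the nested-tuple tree encoding is invented for illustration: internal nodes are 2-tuples, leaves are anything else):

```python
def leaf_count(tree):
    """Number of leaves below a node; a leaf is any non-tuple object."""
    if not isinstance(tree, tuple):
        return 1
    left, right = tree
    return leaf_count(left) + leaf_count(right)

def colless(tree):
    """Colless imbalance: sum over internal nodes of |n_left - n_right|."""
    if not isinstance(tree, tuple):
        return 0
    left, right = tree
    return (abs(leaf_count(left) - leaf_count(right))
            + colless(left) + colless(right))

print(colless(((1, 2), (3, 4))))   # 0: perfectly balanced
print(colless((((1, 2), 3), 4)))   # 3: caterpillar, maximally imbalanced for 4 leaves
```

Higher values mean more caterpillar-like trees, which is the direction in which both the simulated gene trees and the TreeBase trees deviate from the simulated species trees.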
Kuo, Ben C.H.
2014-01-01
Given the continuous, dynamic demographic changes internationally due to intensive worldwide migration and globalization, the need to more fully understand how migrants adapt and cope with acculturation experiences in their new host cultural environment is imperative and timely. However, a comprehensive review of what we currently know about the relationship between coping behavior and acculturation experience for individuals undergoing cultural changes has not yet been undertaken. Hence, the current article aims to compile, review, and examine cumulative cross-cultural psychological research that sheds light on the relationships among coping, acculturation, and psychological and mental health outcomes for migrants. To this end, this present article reviews prevailing literature pertaining to: (a) the stress and coping conceptual perspective of acculturation; (b) four theoretical models of coping, acculturation and cultural adaptation; (c) differential coping pattern among diverse acculturating migrant groups; and (d) the relationship between coping variabilities and acculturation levels among migrants. In terms of theoretical understanding, this review points to the relative strengths and limitations associated with each of the four theoretical models on coping-acculturation-adaptation. These theories and the empirical studies reviewed in this article further highlight the central role of coping behaviors/strategies in the acculturation process and outcome for migrants and ethnic populations, both conceptually and functionally. Moreover, the review shows that across studies culturally preferred coping patterns exist among acculturating migrants and migrant groups and vary with migrants' acculturation levels. Implications and limitations of the existing literature for coping, acculturation, and psychological adaptation research are discussed and recommendations for future research are put forth.
[Service productivity in hospital nursing--conceptual framework of a productivity analysis].
Thomas, D; Borchert, M; Brockhaus, N; Jäschke, L; Schmitz, G; Wasem, J
2015-01-01
Decreasing staff numbers compounded by an increasing number of cases are regarded as the main challenge in German hospital nursing. These input reductions accompanied by output extensions imply that hospital nursing services have had to achieve a continuous productivity growth in recent years. Appropriately targeted productivity enhancements require approved and effective methods for productivity acquisition and measurement. However, there is a lack of suitable productivity measurement instruments for hospital nursing services. This deficit is addressed in the present study by the development of an integrated productivity model for hospital nursing services. Conceptually, qualitative as well as quantitative aspects of nursing services productivity are equally taken into consideration. Based on systematic literature reviews, different conceptual frameworks of service productivity and the current state of research in hospital nursing services productivity were analysed. On this basis, nursing-sensitive inputs, processes and outputs were identified and integrated into a productivity model. The conceptual approach by Grönroos/Ojasalo was identified as an adequate framework for a hospital nursing services productivity model. The basic structure of this model was adapted stepwise to our study purpose by integrating theoretical and empirical findings from the research fields of service productivity, nursing productivity as well as national and international nursing research. Special challenges existed concerning the identification of relevant influencing factors as well as the representation of nursing-sensitive outputs. The final result is an integrated productivity model, which can be used as an adequate framework for further research in hospital nursing productivity. Research on hospital nursing services productivity is rare, especially in Germany. The conceptual framework developed in this study builds on established knowledge in service productivity research. The theoretical findings have been advanced and adapted to the context of German hospital nursing services. The presented productivity model represents a unique combination of services and nursing services research, which did not exist so far. By operationalisation of the model's components it can be used as the basis for further empirical research.
Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun; Niu, Hongli
2015-10-01
Based on the epidemic dynamical system, we construct a new agent-based financial time series model. In order to check and testify to its rationality, we compare the statistical properties of the time series model with those of the real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine the multi-parameter analysis with the tail distribution analysis, the modified rescaled range analysis, and the multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in the real returns and the proposed model. Therefore, the new agent-based financial model can reproduce some important features of real stock markets.
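Of the diagnostics listed, rescaled range analysis is the simplest to sketch. The fragment below is a bare-bones classic R/S estimate of the Hurst exponent, given only as an illustration: the paper uses a modified R/S statistic, and the segment sizes here are arbitrary.

```python
import numpy as np

def rs_statistic(x):
    """Classic rescaled range R/S of one segment of a series."""
    y = np.cumsum(x - x.mean())     # cumulative deviations from the segment mean
    return (y.max() - y.min()) / x.std()

def hurst_rs(x, sizes=(32, 64, 128, 256)):
    """Estimate H from the scaling (R/S)(n) ~ n**H."""
    logn, logrs = [], []
    for n in sizes:
        segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        logn.append(np.log(n))
        logrs.append(np.log(np.mean([rs_statistic(s) for s in segments])))
    slope, _ = np.polyfit(logn, logrs, 1)   # slope of the log-log fit is H
    return slope

rng = np.random.default_rng(1)
print(hurst_rs(rng.standard_normal(4096)))  # i.i.d. noise: H close to 0.5
```

H substantially above 0.5 signals long-range dependence, the property reported above for both the real returns and the model.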
Airframe Noise Sub-Component Definition and Model
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Sen, Rahul; Hardy, Bruce; Yamamoto, Kingo; Guo, Yue-Ping; Miller, Gregory
2004-01-01
Both in-house, and jointly with NASA under the Advanced Subsonic Transport (AST) program, Boeing Commercial Aircraft Company (BCA) had begun work on systematically identifying specific components of noise responsible for total airframe noise generation and applying the knowledge gained towards the creation of a model for airframe noise prediction. This report documents the continuation of the collection of databases from model-scale and full-scale airframe noise measurements to complement the earlier existing databases, the development of the subcomponent models, and the generation of a new empirical prediction code. The airframe subcomponent data include measurements from aircraft ranging in size from a Boeing 737 to aircraft larger than a Boeing 747. These results provide the continuity to evaluate the technology developed under the AST program consistent with the guidelines set forth in NASA CR-198298.
Evolutionary dynamics of nationalism and migration
NASA Astrophysics Data System (ADS)
Barreira da Silva Rocha, André
2013-08-01
I present a dynamic evolutionary game model to address the relation between nationalism against immigrants and assimilation of the latter into the host country culture. I assume a country composed of two different large polymorphic populations, one of native citizens and the other of immigrants. A native citizen may behave nationalistically or may welcome immigrants. Immigrants may or may not have an interest in learning the host country language. Evolution is modeled using replicator dynamics (RD). I also account for the presence of an enclave of immigrants in the host country. In the RD, the latter represents the immigrants' own-population effect, whose contribution to fitness is controlled using a parameter ρ, 0≤ρ≤1, that represents the enclave size. In line with the empirical literature on migration, the existence of an enclave of immigrants makes assimilation less likely to occur. For large values of ρ, complete assimilation may not occur even if immigrants and natives share very close cultures and norms. Government policy regarding nationalism is modeled both exogenously and endogenously. A single or multiple asymptotically stable states exist for all cases studied but one, in which the dynamics is similar to that found in the predator-prey model of Lotka-Volterra for competing species.
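As a hedged illustration of this kind of setup (the payoff numbers and the specific form of the enclave term below are invented, not taken from the paper), two-population replicator dynamics with an own-population weight ρ fits in a few lines of Python:

```python
# x: share of welcoming natives; y: share of immigrants learning the language.
def step(x, y, rho, dt=0.01):
    # Native payoffs against the current immigrant mix (illustrative numbers)
    f_welcome = 0.5 + 1.5 * y
    f_nation = 1.0
    fbar_nat = x * f_welcome + (1 - x) * f_nation
    # Immigrant payoffs: weight rho shifts fitness toward enclave
    # (own-population) interactions, which reward not assimilating.
    g_learn = (1 - rho) * (0.2 + 1.8 * x) + rho * 0.5
    g_stay = (1 - rho) * 0.5 + rho * 1.5
    gbar_imm = y * g_learn + (1 - y) * g_stay
    # Replicator dynamics: share growth proportional to excess fitness
    x += dt * x * (f_welcome - fbar_nat)
    y += dt * y * (g_learn - gbar_imm)
    return x, y

x, y = 0.5, 0.5
for _ in range(20000):
    x, y = step(x, y, rho=0.6)
print(x, y)   # with a large enclave, the assimilating share collapses
```

With large ρ the enclave term dominates the immigrants' fitness comparison, so the assimilating share decays even when mixed interactions would favour learning the language, mirroring the role the abstract assigns to the enclave size.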
Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung
2011-08-01
Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods.
Covariate adjustment of event histories estimated from Markov chains: the additive approach.
Aalen, O O; Borgan, O; Fekjaer, H
2001-12-01
Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet.
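The empirical transition matrix referred to is the Aalen-Johansen product-integral estimator. The toy Python sketch below uses invented two-state event data and no covariates, purely to show the mechanics; the article's contribution is the covariate-dependent version, in which the increments come from fitted additive intensity models.

```python
import numpy as np

# (time, from_state, to_state) transitions for states 0 and 1 (invented data)
events = [(1.0, 0, 1), (2.0, 0, 1), (2.5, 1, 0), (4.0, 0, 1)]
at_risk = np.array([3.0, 1.0])   # subjects occupying states 0 and 1 at the start

P = np.eye(2)                    # running estimate of P(0, t)
for t, i, j in sorted(events):
    dA = np.zeros((2, 2))
    dA[i, j] = 1.0 / at_risk[i]  # Nelson-Aalen increment for the i -> j intensity
    dA[i, i] = -dA[i, j]         # rows of an intensity increment sum to zero
    P = P @ (np.eye(2) + dA)     # product-integral update
    at_risk[i] -= 1              # the mover leaves state i ...
    at_risk[j] += 1              # ... and enters state j
print(P)                         # empirical transition matrix over the window
```

In the covariate-adjusted estimator the scalar increments dA are replaced by increments of the additive model evaluated at the covariate values of interest, and the same product-integral structure carries through.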
The Situation Awareness Weighted Network (SAWN) model and method: Theory and application.
Kalloniatis, Alexander; Ali, Irena; Neville, Timothy; La, Phuong; Macleod, Iain; Zuparic, Mathew; Kohn, Elizabeth
2017-05-01
We introduce a novel model and associated data collection method to examine how a distributed organisation of military staff who feed a Common Operating Picture (COP) generates Situation Awareness (SA), a critical component in organisational performance. The proposed empirically derived Situation Awareness Weighted Network (SAWN) model draws on two scientific models of SA, by Endsley involving perception, comprehension and projection, and by Stanton et al. positing that SA exists across a social and semantic network of people and information objects in activities connected across a set of tasks. The output of SAWN is a representation as a weighted semi-bipartite network of the interaction between people ('human nodes') and information artefacts such as documents and system displays ('product nodes'); link weights represent the Endsley levels of SA that individuals acquire from or provide to information objects and other individuals. The SAWN method is illustrated with aggregated empirical data from a case study of Australian military staff undertaking their work during two very different scenarios, during steady-state operations and in a crisis threat context. A key outcome of analysis of the weighted networks is that we are able to quantify flow of SA through an organisation as staff seek to "value-add" in the conduct of their work.
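A minimal sketch of the data structure SAWN implies, in Python with networkx; the node names, weights, and the crude flow score are invented for illustration, and the paper's measures are richer than this.

```python
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(["watchkeeper", "analyst"], kind="human")
G.add_nodes_from(["COP_display", "intel_report"], kind="product")

# Link weights encode Endsley SA levels: 1 perception, 2 comprehension, 3 projection
G.add_edge("COP_display", "watchkeeper", weight=1)   # perceives the COP
G.add_edge("intel_report", "analyst", weight=2)      # comprehends the report
G.add_edge("analyst", "COP_display", weight=3)       # feeds projection into the COP

# A crude per-person SA-flow score: total weighted in- plus out-flow
for node, data in G.nodes(data=True):
    if data["kind"] == "human":
        flow = sum(w for *_, w in G.in_edges(node, data="weight"))
        flow += sum(w for *_, w in G.out_edges(node, data="weight"))
        print(node, flow)
```

Summing such weighted flows over the whole network is one simple way to quantify how SA moves through an organisation as staff value-add to the COP.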
A neural network based reputation bootstrapping approach for service selection
NASA Astrophysics Data System (ADS)
Wu, Quanwang; Zhu, Qingsheng; Li, Peng
2015-10-01
With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One of the limitations of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviours exists. Most current bootstrapping approaches merely assign default reputation values to newcomers. However, with this kind of method, either newcomers or existing services will be favoured. In this paper, we present a novel reputation bootstrapping approach, in which correlations between features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated for reputation bootstrapping if available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Empirical studies of the proposed approach are presented at the end.
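The bootstrapping idea, learning a mapping from service features to observed reputation and applying it to history-less newcomers, can be sketched with scikit-learn; the feature columns and the synthetic 'observed' reputation below are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_known = rng.uniform(0.0, 1.0, size=(200, 3))   # features of services with a track record
rep_known = (0.6 * X_known[:, 0] - 0.3 * X_known[:, 1]
             + 0.1 * rng.standard_normal(200))   # synthetic observed reputation

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_known, rep_known)                      # learn feature -> reputation

newcomer = np.array([[0.8, 0.2, 0.5]])           # a service with no usage history
print(ann.predict(newcomer))                     # tentative bootstrap reputation
```

The predicted value plays the role of the tentative reputation assigned to the newcomer, to be replaced by genuine feedback as usage records accumulate.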
Ahmed, Tamer; Filiatrault, Johanne; Yu, Hsiu-Ting; Zunzunegui, Maria Victoria
2017-01-01
Purpose: Active aging is a concept that lacks consensus. The WHO defines it as a holistic concept that encompasses the overall health, participation, and security of older adults. Fernández-Ballesteros and colleagues propose a similar concept but omit security and include mood and cognitive function. To date, researchers attempting to validate conceptual models of active aging have obtained mixed results. The goal of this study was to examine the validity of existing models of active aging with epidemiological data from Canada. Methods: The WHO model of active aging and the psychological model of active aging developed by Fernández-Ballesteros and colleagues were tested with confirmatory factor analysis. The data used included 799 community-dwelling older adults between 65 and 74 years old, recruited from the patient lists of family physicians in Saint-Hyacinthe, Quebec and Kingston, Ontario. Results: Neither model could be validated in the sample of Canadian older adults. Although a concept of healthy aging can be modeled adequately, social participation and security did not fit a latent factor model. A simple binary index indicated that 27% of older adults in the sample did not meet the active aging criteria proposed by the WHO. Implications: Our results suggest that active aging might represent a human rights policy orientation rather than an empirical measurement tool to guide research among older adult populations. Binary indexes of active aging may serve to highlight what remains to be improved about the health, participation, and security of growing populations of older adults.
Fault displacement hazard assessment for nuclear installations based on IAEA safety standards
NASA Astrophysics Data System (ADS)
Fukushima, Y.
2016-12-01
In the IAEA safety standard NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist at a candidate site, the IAEA recommends the consideration of alternative sites. However, owing to the progress of palaeoseismological investigations, capable faults may also be found at existing sites. In such a case, the IAEA recommends evaluating safety using probabilistic FDHA (PFDHA), an empirical approach based on a still quite limited database. A basic and crucial improvement is therefore to enlarge the database. In 2015, the IAEA produced TecDoc-1767 on palaeoseismology as a reference for the identification of capable faults. Another IAEA Safety Report, No. 85, on ground motion simulation based on fault rupture modelling, provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second needs to shift from a kinematic to a dynamic scheme. The two approaches can complement each other, since simulated displacements can fill the gaps of a sparse database and geological observations can be used to calibrate the simulations. The IAEA already supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database, where a consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA working group held a workshop in November 2015 to discuss state-of-the-art PFDHA as well as simulation methodologies. The two groups held a joint consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. The aim now is to coordinate activities across the whole set of FDHA tasks.
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model in order to account for the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assumed that there exists a cut-off value such that counts smaller than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, so that mutation counts exceeding a certain value are considered significant mutations. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power.
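For concreteness, the ZIGP places extra probability mass at zero on top of a generalized Poisson count component; a small Python sketch of the pmf (parameter values below are illustrative only):

```python
import math

def zigp_pmf(k, pi, theta, lam):
    """Zero-inflated generalized Poisson pmf (sketch).

    pi    : zero-inflation probability (extra mass at k = 0)
    theta : generalized Poisson rate parameter (> 0)
    lam   : generalized Poisson dispersion parameter (0 <= lam < 1)
    """
    gp = (theta * (theta + lam * k) ** (k - 1)
          * math.exp(-(theta + lam * k)) / math.factorial(k))
    return pi * (k == 0) + (1 - pi) * gp

print(zigp_pmf(0, pi=0.3, theta=2.0, lam=0.1))   # inflated zero probability
print(zigp_pmf(3, pi=0.3, theta=2.0, lam=0.1))   # deflated count probability
```

The cut-off idea then amounts to treating counts below a data-driven threshold as draws from this null and passing only the larger counts to the second-stage test.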
The Role of Spatially Controlled Cell Proliferation in Limb Bud Morphogenesis
Boehm, Bernd; Westerberg, Henrik; Lesnicar-Pucko, Gaja; Raja, Sahdia; Rautschka, Michael; Cotterell, James; Swoger, Jim; Sharpe, James
2010-01-01
Although the vertebrate limb bud has been studied for decades as a model system for spatial pattern formation and cell specification, the cellular basis of its distally oriented elongation has been a relatively neglected topic by comparison. The conventional view is that a gradient of isotropic proliferation exists along the limb, with high proliferation rates at the distal tip and lower rates towards the body, and that this gradient is the driving force behind outgrowth. Here we test this hypothesis by combining quantitative empirical data sets with computer modelling to assess the potential role of spatially controlled proliferation rates in the process of directional limb bud outgrowth. In particular, we generate two new empirical data sets for the mouse hind limb—a numerical description of shape change and a quantitative 3D map of cell cycle times—and combine these with a new 3D finite element model of tissue growth. By developing a parameter optimization approach (which explores spatial patterns of tissue growth) our computer simulations reveal that the observed distribution of proliferation rates plays no significant role in controlling the distally extending limb shape, and suggests that directional cell activities are likely to be the driving force behind limb bud outgrowth. This theoretical prediction prompted us to search for evidence of directional cell orientations in the limb bud mesenchyme, and we thus discovered a striking highly branched and extended cell shape composed of dynamically extending and retracting filopodia, a distally oriented bias in Golgi position, and also a bias in the orientation of cell division. We therefore provide both theoretical and empirical evidence that limb bud elongation is achieved by directional cell activities, rather than a PD gradient of proliferation rates.
Solar forcing - implications for the volatile inventory on Mars and Venus. (Invited)
NASA Astrophysics Data System (ADS)
Lundin, Rickard
2015-04-01
Planets in the solar system are exposed to persistent solar forcing by solar irradiation and the solar wind. The forcing, most pronounced for the inner Earth-like planets, ionizes, heats, chemically modifies, and gradually erodes the upper atmosphere throughout the lifetime of the planets. Of the four inner planets, the Earth is at present the only one habitable. Our kindred planets Venus and Mars have taken different evolutionary paths, the present lack of a hydrosphere being the most significant difference. However, there is ample evidence that an early Noachian, water-rich period existed on Mars. Similarly, arguments have been presented for an early water-rich period on Venus. The question is, what made Mars and Venus evolve in such a different way compared to the Earth? Under the assumption of similar initial conditions, the planets may have experienced different externally driven episodes (e.g. impacts) with time. Conversely, internal factors on Mars and Venus made them less resilient, unable to withstand solar forcing on an evolutionary time-scale. The latter has been quantified from simulations, combining atmospheric and ionospheric modeling with empirical data from solar-like stars (Sun in time). In a similar way, semi-empirical models based on experimental data were used to determine the mass loss of volatiles back in time from Mars and Venus. This presentation will review further aspects of semi-empirical modeling based on ion and energetic neutral atom (ENA) escape data from Mars and Venus - on short time scales (days), mid-term (solar cycle proxies), long-term (heliospheric flux proxies, 10 000 years), and on time scales corresponding to the solar evolution.
Bases of creation of new concept in global tectonics
NASA Astrophysics Data System (ADS)
Anokhin, Vladimir
2014-05-01
With the accumulation of new facts about the structure of the Earth, the existing plate paradigm is becoming more doubtful. In practice, it is sustained by the opinion of a majority of specialist-theorists interested in its preservation and by substantial use of administrative resources. The author knows well what totalitarianism is, and regretfully sees signs of it in the monopolistic domination of world geotectonics by the «only correct» plate tectonics theory. Scientists who collect factual material in the field are mostly skeptical of plate theory, to the extent that they believe their own eyes more than books. Believing that science is a search for truth, not only for grants, the author proposes to critically reconsider the state of modern geotectonics and to look for a way out of the impasse. Obviously, if we are not satisfied with the existing paradigm, we should not limit ourselves to criticizing it, but must seek an alternative concept that avoids the errors for which we criticize plate tectonics. The new concept should be based on all the facts, using only the necessary minimum of modeling. The author proposes the following methodological principles for creating such a concept: strict adherence to scientific logic; constant application of the principle of Occam's razor; ranking of existing tectonic information into groups, in descending order of reliability: 1) established facts, 2) facts to be checked, 3) empirical generalizations, 4) physical and other models incorporating the facts and their generalizations, 5) theoretical constructions based on empirical generalizations and models, 6) hypotheses arising from well-grounded theoretical constructions, 7) concepts, and 8) ideas (a professor's theory or idea can be worth less than a fact from a student); generalization and rethinking of the information according to this ranking, including information outside the ruling paradigm; and establishment of boundary conditions on the action and admissible consequences of every newly created construct, with strict adherence to these restrictions. In the new geotectonics there is perhaps a place for some synthesis with certain provisions of plate tectonics, provided they are consistent with the above principles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walls, D.S.
1978-01-01
This study traces the origins of the notion that Appalachia constitutes a unique social-problem region, examines the models of Appalachian problems popularized during the 1960s, proposes an alternative framework for situating the Central Appalachian coalfields, and examines aspects of the coal industry's structure in the Central Appalachian region. The idea of Appalachia as a distinctive social problem region was created between 1890 and 1930 by a social movement affiliated with various Protestant church home mission boards and organizationally focused in the Conference of Southern Mountain Workers and Berea College. The movement stressed an environmental explanation of regional problems. During the 1960s, three explanatory models of Appalachian poverty and underdevelopment achieved prominence: the subculture of poverty model, the regional development model, and the internal colonialism model. Each contributed to a regionalized conception of Appalachian problems. Empirical studies show the subculture of poverty model to fail as an explanation of regional underdevelopment. In the absence of a critique of domination and a redistribution of power and wealth, the regional development model serves as a rationalization of existing structures of privilege. The internal colonialism model provides a critique of domination, but not the most appropriate one. This study argues that the above models should not be viewed as mutually exclusive formulations, and that they may be reconstructed to represent different dimensions of social existence.
Climate Shocks and Migration: An Agent-Based Modeling Approach.
Entwisle, Barbara; Williams, Nathalie E; Verdery, Ashton M; Rindfuss, Ronald R; Walsh, Stephen J; Malanson, George P; Mucha, Peter J; Frizzelle, Brian G; McDaniel, Philip M; Yao, Xiaozheng; Heumann, Benjamin W; Prasartkul, Pramote; Sawangdee, Yothin; Jampaklay, Aree
2016-09-01
This is a study of migration responses to climate shocks. We construct an agent-based model that incorporates dynamic linkages between demographic behaviors, such as migration, marriage, and births, and agriculture and land use, which depend on rainfall patterns. The rules and parameterization of our model are empirically derived from qualitative and quantitative analyses of a well-studied demographic field site, Nang Rong district, Northeast Thailand. With this model, we simulate patterns of migration under four weather regimes in a rice economy: 1) a reference, 'normal' scenario; 2) seven years of unusually wet weather; 3) seven years of unusually dry weather; and 4) seven years of extremely variable weather. Results show relatively small impacts on migration. Experiments with the model show that existing high migration rates and strong selection factors, which are unaffected by climate change, are likely responsible for the weak migration response.
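A minimal sketch of the kind of agent-based simulation the abstract describes, in Python; the agent counts, migration probabilities, and rainfall-to-harvest mapping below are invented for illustration and are not the calibrated Nang Rong parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy parameters (illustrative, not the calibrated Nang Rong values)
N_AGENTS, N_YEARS = 1000, 7
BASE_MIGRATION_P = 0.10          # high baseline annual out-migration probability
RAIN_REGIMES = {                 # mean annual rainfall as a fraction of normal
    "normal":   1.0,
    "wet":      1.3,
    "dry":      0.7,
    "variable": None,            # drawn anew each year
}

def simulate(regime):
    """Return the mean annual migration rate over N_YEARS for one regime."""
    migrations = 0
    for year in range(N_YEARS):
        rain = (rng.uniform(0.5, 1.5) if RAIN_REGIMES[regime] is None
                else RAIN_REGIMES[regime])
        # Rice harvest falls off as rainfall departs from normal
        harvest = max(0.0, 1.0 - abs(rain - 1.0))
        # Poor harvests nudge the migration probability upward only slightly,
        # echoing the finding that baseline rates dominate the climate signal
        p = BASE_MIGRATION_P + 0.05 * (1.0 - harvest)
        migrations += rng.binomial(N_AGENTS, p)
    return migrations / (N_AGENTS * N_YEARS)

for regime in RAIN_REGIMES:
    print(f"{regime:>8}: migration rate = {simulate(regime):.3f}")
```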
NASA Technical Reports Server (NTRS)
Forbes, G. S.; Pielke, R. A.
1985-01-01
Various empirical and statistical weather-forecasting studies that utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study, including behavioral science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for the existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model when discrete survival times are treated as continuous, and of the model comparison criteria AIC and BIC in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
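The ELTH class itself is not reproduced here, but the standard discrete-time hazard setup it generalizes can be sketched: expand subjects into person-period records and fit a binomial GLM with a complementary log-log link (the grouped proportional hazards special case). The simulated data and coefficient values are illustrative; `links.CLogLog` assumes a recent statsmodels (older versions spell it `links.cloglog`):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate discrete survival times with one covariate under a grouped
# proportional hazards (cloglog-compatible) mechanism
n, base_h, max_t = 500, 0.15, 20
x = rng.normal(size=n)
T, delta = [], []
for xi in x:
    h = 1 - (1 - base_h) ** np.exp(0.5 * xi)    # covariate scales the hazard
    t = 1
    while rng.random() > h and t < max_t:
        t += 1
    T.append(t)
    delta.append(int(t < max_t))                # 0 = censored at period 20

# Person-period expansion: one row per subject per period at risk
rows = [(i, s, int(s == T[i] and delta[i] == 1), x[i])
        for i in range(n) for s in range(1, T[i] + 1)]
pp = pd.DataFrame(rows, columns=["id", "period", "event", "x"])

X = sm.add_constant(pp[["x"]])
fit = sm.GLM(pp["event"], X,
             family=sm.families.Binomial(link=sm.families.links.CLogLog())
             ).fit()
print(fit.params)     # slope on x should be near the true value 0.5
```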
Relativistic Quark Model Based Description of Low Energy NN Scattering
NASA Astrophysics Data System (ADS)
Antalik, R.; Lyubovitskij, V. E.
A model describing the NN scattering phase shifts is developed. Two-nucleon interactions induced by meson-exchange forces are constructed starting from the π, η, η‧ pseudoscalar, the ρ, ϕ, ω vector, and the ɛ(600), a0, f0(1400) scalar meson-nucleon coupling constants, which we obtained within a relativistic quantum-field-theory-based quark model. Working within the Blankenbecler-Sugar-Logunov-Tavkhelidze quasipotential dynamics, we describe the NN phase shifts in a relativistically invariant way. In this procedure we use only phenomenological form-factor cutoff masses and effective ɛ and ω meson-nucleon coupling constants. The resulting NN phase shifts are in good agreement with both the empirical data and the entirely phenomenological Bonn OBEP model fit. The quality of our description, evaluated as the ratio of our χ2 to that of the Bonn OBEP model, is about 1.2, whereas other existing (semi)microscopic approaches have given only qualitative results.
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
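SAM itself is not reproduced in this excerpt, but the core idea of phase-aware ray summation can be illustrated with the classic two-ray (Lloyd's mirror) geometry: sum the direct and surface-reflected arrivals as complex pressures so that their phase difference produces interference in the transmission loss. The frequency, depths, and ranges below are arbitrary illustrative values:

```python
import numpy as np

# Coherent two-ray (direct + surface-reflected) field: a minimal illustration
# of phase-aware ray summation, not the SAM model itself
C = 1500.0             # nominal sound speed in seawater, m/s
F = 10e3               # frequency, Hz
K = 2 * np.pi * F / C  # acoustic wavenumber

def transmission_loss(r, zs=20.0, zr=30.0):
    """TL (dB re 1 m) at horizontal range r for source/receiver depths zs, zr."""
    r1 = np.hypot(r, zr - zs)            # direct path length
    r2 = np.hypot(r, zr + zs)            # surface-reflected path (image source)
    # A pressure-release sea surface flips the reflected ray's phase (coeff -1)
    p = np.exp(1j * K * r1) / r1 - np.exp(1j * K * r2) / r2
    return -20 * np.log10(np.abs(p))

for r in (100, 500, 1000, 5000):
    print(f"range {r:>5} m: TL = {transmission_loss(r):6.1f} dB")
```

Dropping the phase (summing |1/r1| and |1/r2| incoherently) would erase the interference pattern entirely, which is the abstract's argument for tracking each ray's phase shift.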
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical distributions of species abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model, in which new species appear by the splitting up of existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful for studying other speciation processes in neutral community models as well.
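For concreteness, here is a toy zero-sum simulation of the basic point-mutation speciation mode (the exactly solvable case the abstract refers to); the community size and speciation rate are arbitrary, and the random-fission mode and the self-consistent approximation are beyond a short sketch:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

J, NU, STEPS = 2000, 0.001, 400_000   # community size, speciation rate, events

community = np.zeros(J, dtype=int)    # start as a single species (label 0)
next_label = 1
for _ in range(STEPS):
    dead = rng.integers(J)            # a random individual dies (zero-sum)
    if rng.random() < NU:             # point mutation: the birth founds a new species
        community[dead] = next_label
        next_label += 1
    else:                             # otherwise an offspring of a random parent
        community[dead] = community[rng.integers(J)]

abundances = sorted(Counter(community).values(), reverse=True)
print(f"species: {len(abundances)}, top abundances: {abundances[:10]}")
```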
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth system across a range of situations, including situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth system models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model known as Dynamic Climatology is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics-free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models, benchmarks not available from (inter)comparisons restricted to complex models alone. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. In weather forecasting this role is filled by the climatological distribution, and it can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill from empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to state such weaknesses of a given generation of simulation models clearly, while stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
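A hedged sketch of the benchmarking logic: score a "physics-free" climatological probability forecast against a stand-in simulation-model forecast using the ignorance (log) score. The data, the Gaussian forecast forms, and the model spread are all synthetic assumptions; Dynamic Climatology itself is a more elaborate empirical model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic 'decadal' series: a trend plus noise stands in for observations
years = np.arange(1950, 2021)
obs = 0.01 * (years - 1950) + rng.normal(0, 0.15, years.size)

train, test = obs[:50], obs[50:]

# Empirical benchmark: a climatological Gaussian fitted to the training period
clim = stats.norm(train.mean(), train.std(ddof=1))

# Stand-in 'simulation model' forecast: right trend, modest spread
model = stats.norm(0.01 * (years[50:] - 1950), 0.18)

# Ignorance (negative log) score in bits: lower is better
ign_clim = -clim.logpdf(test).mean() / np.log(2)
ign_model = -model.logpdf(test).mean() / np.log(2)
print(f"ignorance (bits): climatology {ign_clim:.2f} vs model {ign_model:.2f}")
```

The fixed climatological benchmark plays the role the abstract assigns it: any simulation model claiming decision-relevant skill should beat this number on the scales that matter.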
A Theoretical Framework for the Associations between Identity and Psychopathology
ERIC Educational Resources Information Center
Klimstra, Theo A.; Denissen, Jaap J. A.
2017-01-01
Identity research largely emerged from clinical observations. Decades of empirical work advanced the field in refining existing approaches and adding new approaches. Furthermore, the existence of linkages of identity with psychopathology is now well established. Unfortunately, both the directionality of effects between identity aspects and…
Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A
2008-09-01
The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).
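The EL procedure itself is not reproduced here; the sketch below only illustrates the study design and why baseline covariate information helps, contrasting a complete-case difference in means with an ANCOVA-style working-model estimator when posttests go missing at random. All numbers are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_trial(n=400):
    pre = rng.normal(50, 10, n)
    treat = rng.integers(0, 2, n)
    post = pre + 5 * treat + rng.normal(0, 5, n)        # true effect = 5
    # Posttest missing at random, more often for low pretest scores
    obs = rng.random(n) > 1 / (1 + np.exp((pre - 45) / 5))
    # Complete-case difference in means (ignores the baseline)
    cc = post[obs & (treat == 1)].mean() - post[obs & (treat == 0)].mean()
    # Covariate-adjusted working model fitted on the complete cases
    X = np.column_stack([np.ones(obs.sum()), treat[obs], pre[obs]])
    adj = np.linalg.lstsq(X, post[obs], rcond=None)[0][1]
    return cc, adj

est = np.array([one_trial() for _ in range(2000)])
print("mean (cc, adjusted):", est.mean(axis=0).round(2))   # both near 5
print("sd   (cc, adjusted):", est.std(axis=0).round(2))    # adjusted is tighter
```

The variance reduction from using the pretest is the efficiency gain the EL approach pursues, with the added robustness the abstract claims when this working model is misspecified.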
Tin Whisker Electrical Short Circuit Characteristics. Part 2
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Lawrence L.; Wright, Maria C.
2009-01-01
Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance that leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross-section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2
NASA Technical Reports Server (NTRS)
Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.
2009-01-01
To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes for electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of such short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance that leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short as a function of voltage.
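A sketch of how such an empirical short-probability model might be fit, assuming binary short/no-short outcomes recorded at each applied voltage and a logistic dose-response form; the voltages, sample sizes, and curve parameters are simulated stand-ins, not the papers' data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated bridging trials: at each applied voltage, record whether the
# whisker's contact resistance broke down into a hard short (1) or not (0)
voltages = np.repeat([5, 10, 15, 20, 25, 30], 40).astype(float)  # volts
true_p = 1 / (1 + np.exp(-(voltages - 18.0) / 4.0))              # assumed curve
shorted = rng.binomial(1, true_p)

X = sm.add_constant(voltages)
fit = sm.Logit(shorted, X).fit(disp=False)
b0, b1 = fit.params
v50 = -b0 / b1                       # voltage at which P(short) = 0.5
print(f"P(short) = 1/(1+exp(-({b0:.2f} + {b1:.2f}*V))); V50 ~ {v50:.1f} V")
```

Such a fitted curve is exactly what a risk simulation can substitute for the conservative "bridging always shorts" assumption.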
Sedgley, Norman; Elmslie, Bruce
2011-01-01
Evidence of the importance of urban agglomeration and the offsetting effects of congestion is provided in a number of studies of productivity and wages. Little attention has been paid to this evidence in the economic growth literature, where the recent focus is on technological change. We extend the idea of agglomeration and congestion effects to the area of innovation by empirically looking for a nonlinear link between population density and patent activity. A panel data set consisting of observations on 302 US metropolitan statistical areas (MSAs) over the 10-year period from 1990 to 1999 is utilized. Following the patent and R&D literature, models that account for the discrete nature of the dependent variable are employed. Strong evidence is found that agglomeration and congestion are important in explaining the vast differences in patent rates across US cities. The most important reason cities continue to exist, given the dramatic drop in transportation costs for physical goods over the last century, is probably related to the forces of agglomeration as they apply to knowledge spillovers. Therefore, the empirical investigation proposed here is an important part of understanding the viability of urban areas in the future.
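One way to realize "models that account for the discrete nature of the dependent variable" with an agglomeration-congestion nonlinearity is a Poisson regression with a quadratic density term; the synthetic data and coefficients below are assumptions for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Synthetic MSA-style data: patents rise with density, then congestion bites
n = 302
density = rng.lognormal(mean=0.0, sigma=0.8, size=n)   # scaled population density
eta = 1.0 + 0.9 * density - 0.15 * density**2          # agglomeration - congestion
patents = rng.poisson(np.exp(np.clip(eta, None, 8)))

X = sm.add_constant(np.column_stack([density, density**2]))
fit = sm.GLM(patents, X, family=sm.families.Poisson()).fit()
print(fit.params)                                  # expect + on density, - on density^2
print("patent-maximizing density:", -fit.params[1] / (2 * fit.params[2]))
```

A positive linear and negative quadratic coefficient together produce the inverted-U relation the abstract describes, with the turning point marking where congestion begins to outweigh agglomeration.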
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
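The transformational approaches named above share one recipe: transform the flows with some g(.), then assume homoscedastic additive errors in the transformed space. A small sketch with the logarithmic, Box-Cox, and log-sinh transforms (the log-sinh parameterization follows one common convention, and all parameter values are arbitrary):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; lam -> 0 recovers log(y)."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def log_sinh(y, a, b):
    """log-sinh transform (one common parameterization)."""
    return np.log(np.sinh(a + b * y)) / b

# Heteroscedastic flows: error magnitude grows with the simulated flow
rng = np.random.default_rng(2)
sim = np.sort(rng.lognormal(2.0, 1.0, 500))            # 'predicted' runoff
obs = sim * np.exp(rng.normal(0, 0.3, sim.size))       # multiplicative error

for name, g in [("log", lambda y: box_cox(y, 0.0)),
                ("Box-Cox (0.3)", lambda y: box_cox(y, 0.3)),
                ("log-sinh", lambda y: log_sinh(y, 0.01, 0.1))]:
    resid = g(obs) - g(sim)
    lo, hi = resid[: sim.size // 2], resid[sim.size // 2:]
    print(f"{name:>14}: resid sd low flows {lo.std():.2f}, high flows {hi.std():.2f}")
```

With purely multiplicative errors the log transform stabilizes the residual spread exactly, while the other transforms interpolate between raw and log behavior; which choice is best in practice depends on the catchment, which is the comparison the study undertakes.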
NASA Astrophysics Data System (ADS)
Liu, Xiangli; Cheng, Siwei; Wang, Shouyang; Hong, Yongmiao; Li, Yi
2008-02-01
This study employs a parametric approach based on TGARCH and GARCH models to estimate the VaR of the copper futures market and spot market in China. Considering the short-selling mechanism in the futures market, the paper introduces two new notions: upside VaR and extreme upside risk spillover. Both downside VaR and upside VaR are examined using this approach. We use Kupiec's backtest [P.H. Kupiec, Techniques for verifying the accuracy of risk measurement models, Journal of Derivatives 3 (1995) 73-84] to test the power of our approaches. In addition, we investigate information spillover effects between the futures market and the spot market by employing a linear Granger causality test and Granger causality tests in mean, volatility, and risk, respectively. Moreover, we investigate the relationship between the futures market and the spot market using a test based on a kernel function. Empirical results indicate that there exist significant two-way spillovers between the futures market and the spot market, and that the spillovers from the futures market to the spot market are much more striking.
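A minimal sketch of the VaR machinery, assuming a plain GARCH(1,1) with fixed illustrative parameters rather than the fitted TGARCH of the paper; under the symmetric normal used here, downside and upside VaR are near mirror images, which is exactly what a TGARCH leverage term would break:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

# Simulated daily returns standing in for copper futures/spot data
r = rng.normal(0, 0.015, 1000)

# GARCH(1,1) variance recursion with fixed illustrative parameters
omega, alpha, beta = 1e-6, 0.08, 0.90
sigma2 = np.empty_like(r)
sigma2[0] = r.var()
for t in range(1, r.size):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]

# One-step-ahead 99% VaR on both tails (downside for longs, upside for shorts)
mu, z = r.mean(), norm.ppf(0.99)
s_next = np.sqrt(omega + alpha * r[-1] ** 2 + beta * sigma2[-1])
print(f"downside VaR: {-(mu - z * s_next):.4f}  upside VaR: {mu + z * s_next:.4f}")

# Kupiec-style coverage check: ~1% of returns should breach the downside VaR
breach = (r[1:] < mu - z * np.sqrt(sigma2[1:])).mean()
print(f"in-sample downside breach rate: {breach:.3%} (target 1%)")
```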
Beyond R0: Demographic Models for Variability of Lifetime Reproductive Output
Caswell, Hal
2011-01-01
The net reproductive rate R0 measures the expected lifetime reproductive output of an individual, and plays an important role in demography, ecology, evolution, and epidemiology. Well-established methods exist to calculate it from age- or stage-classified demographic data. As an expectation, R0 provides no information on variability; empirical measurements of lifetime reproduction universally show high levels of variability, and often positive skewness, among individuals. This is often interpreted as evidence of heterogeneity, and thus of an opportunity for natural selection. However, variability provides evidence of heterogeneity only if it exceeds the level of variability to be expected in a cohort of identical individuals all experiencing the same vital rates. Such comparisons require a way to calculate the statistics of lifetime reproduction from demographic data. Here, a new approach is presented, using the theory of Markov chains with rewards, that yields all the moments of the distribution of lifetime reproduction. The approach applies to age- or stage-classified models, to constant, periodic, or stochastic environments, and to any kind of reproductive schedule. As examples, I analyze data from six empirical studies of a variety of animal and plant taxa (nematodes, polychaetes, humans, and several species of perennial plants). PMID:21738586
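The first moment is straightforward to reproduce: with the projection matrix split into transition (U) and fertility (F) parts, R0 is the dominant eigenvalue of F(I-U)^{-1}. The sketch below computes this and then uses Monte Carlo, rather than the paper's Markov-chains-with-rewards formulas, to show the variability and skewness that a cohort of identical individuals generates on its own; the matrices are invented for illustration:

```python
import numpy as np

# Illustrative 3-stage matrices: U = survival/transitions, F = fertilities
U = np.array([[0.2, 0.0, 0.0],
              [0.5, 0.6, 0.0],
              [0.0, 0.3, 0.8]])
F = np.array([[0.0, 0.5, 1.2],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# Expected lifetime reproduction: R0 = dominant eigenvalue of F (I - U)^-1
N = np.linalg.inv(np.eye(3) - U)     # fundamental matrix: expected stage visits
R0 = np.max(np.abs(np.linalg.eigvals(F @ N)))
print(f"R0 = {R0:.3f}")

# Monte Carlo: lifetime offspring of identical individuals born into stage 1,
# all experiencing the same rates -- the baseline variability that the
# Markov-chains-with-rewards machinery computes analytically
rng = np.random.default_rng(4)
n = 20000
lro = np.zeros(n)
for i in range(n):
    stage = 0
    while True:
        lro[i] += rng.poisson(F[0, stage])      # offspring this time step
        cum = np.cumsum(U[:, stage])            # next-stage probabilities
        nxt = np.searchsorted(cum, rng.random())
        if nxt == 3:                            # residual probability: death
            break
        stage = nxt

skew = ((lro - lro.mean()) ** 3).mean() / lro.std() ** 3
print(f"mean {lro.mean():.2f} (vs R0), var {lro.var():.2f}, skewness {skew:.2f}")
```

Even with every individual facing identical rates, the simulated distribution is strongly overdispersed and right-skewed, which is the paper's point: observed variability alone is not evidence of heterogeneity.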