NASA Astrophysics Data System (ADS)
Bingham, S.; Mouikis, C.; Kistler, L. M.; Fok, M. C. H.; Glocer, A.; Farrugia, C. J.; Gkioulidou, M.; Spence, H. E.
2016-12-01
The ring current responds differently to the different solar and interplanetary storm drivers, such as coronal mass ejections (CMEs) and co-rotating interaction regions (CIRs). Delineating the differences in ring current development between these two drivers will aid our understanding of ring current dynamics. Using Van Allen Probes observations, we develop an empirical model of the ring current pressure, the pressure anisotropy, and the current density development during the storm phases for both types of storm drivers and for all MLTs inside L = 6. In addition, we identify the populations (energy and species) responsible. We find that during the storm main phase and the early recovery phase, the plasma sheet particles (10-80 keV) convecting from the nightside contribute the most to the ring current pressure and current density. During these phases, the main difference between CMEs and CIRs is in the O+ contribution. This empirical model is compared to the results of CIMI simulations of CMEs and CIRs, where the model input comprises the superposed-epoch solar wind conditions of the storms that make up the empirical model; different inner magnetosphere boundary conditions will be tested in order to match the empirical model results. Comparing the model and simulation results will further our understanding of ring current dynamics as part of the highly coupled inner magnetosphere system.
NASA Astrophysics Data System (ADS)
Mouikis, Christopher; Bingham, Samuel; Kistler, Lynn; Spence, Harlan; Gkioulidou, Matina
2017-04-01
The ring current responds differently to the different solar and interplanetary storm drivers, such as coronal mass ejections (CMEs) and co-rotating interaction regions (CIRs). Using Van Allen Probes observations, we develop an empirical model of the ring current pressure, the pressure anisotropy, and the current density development during the storm phases for both types of storm drivers and for all MLTs inside L = 6. Delineating the differences in ring current development between these two drivers will aid our understanding of ring current dynamics. We find that during the storm main phase, most of the ring current pressure in the pre-midnight inner magnetosphere is contributed by particles on open drift paths, which drive the development of a strong partial ring current that causes most of the main phase Dst drop. These particles can reach as deep as L = 2, and their pressure is comparable to the local magnetic field pressure as deep as L = 3. During the recovery phase, if these particles are not lost at the magnetopause, they become trapped and contribute to the symmetric ring current. However, the largest difference between the CME and CIR ring current responses during the storm main and early recovery phases is caused by how the 15-60 keV O+ responds to these drivers. This empirical model is compared to the results of CIMI simulations of a CME and a CIR, where the model input comprises the superposed-epoch solar wind conditions of the storms that make up the empirical model. Different inner magnetosphere boundary conditions are tested in order to match the empirical model results. Comparing the model and simulation results improves our understanding of ring current dynamics as part of the highly coupled inner magnetosphere system. In addition, within the framework of this empirical model, the prediction of EMIC wave generation by linear theory is tested using the observed plasma parameters and comparing with observations of EMIC waves.
Karimi, Leila; Ghassemi, Abbas
2016-07-01
Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use the limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance over a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent and divalent ions, utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal.
Integrated urban systems model with multiple transportation supply agents.
DOT National Transportation Integrated Search
2012-10-01
This project demonstrates the feasibility of developing quantitative models that can forecast future networks under current and alternative transportation planning processes. The current transportation planning process is modeled based on empiric...
EMPIRE and pyenda: Two ensemble-based data assimilation systems written in Fortran and Python
NASA Astrophysics Data System (ADS)
Geppert, Gernot; Browne, Phil; van Leeuwen, Peter Jan; Merker, Claire
2017-04-01
We present and compare the features of two ensemble-based data assimilation frameworks, EMPIRE and pyenda. Both frameworks allow models to be coupled to the assimilation codes using the Message Passing Interface (MPI), leading to extremely efficient and fast coupling. The Fortran-based system EMPIRE (Employing Message Passing Interface for Researching Ensembles) is optimized for parallel, high-performance computing. It currently includes a suite of data assimilation algorithms, including variants of the ensemble Kalman filter and several particle filters. EMPIRE is targeted at models of all levels of complexity and has been coupled to several geoscience models, e.g., the Lorenz-63 model, a barotropic vorticity model, the general circulation model HadCM3, the ocean model NEMO, and the land-surface model JULES. The Python-based system pyenda (Python Ensemble Data Assimilation) allows Fortran- and Python-based models to be used for data assimilation. Models can be coupled either using MPI or through a Python interface. Using Python allows quick prototyping, and pyenda is aimed at small- to medium-scale models. pyenda currently includes variants of the ensemble Kalman filter and has been coupled to the Lorenz-63 model, an advection-based precipitation nowcasting scheme, and the dynamic global vegetation model JSBACH.
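As a concrete illustration of the algorithm family both frameworks implement, the following minimal sketch shows the analysis step of a perturbed-observation ensemble Kalman filter; it is a generic textbook form written for this review, not EMPIRE or pyenda code, and all names in it are illustrative.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Perturbed-observation EnKF analysis step.

    X : (n, N) forecast ensemble (n state dimensions, N members)
    y : (m,) observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb the observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# toy example: a 3-variable state observed in its first component only
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([0.5]), H, R, rng)
print(Xa.mean(axis=1))  # analysis mean pulled toward the observation
```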
Development of a new model for short period ocean tidal variations of Earth rotation
NASA Astrophysics Data System (ADS)
Schuh, Harald
2015-08-01
Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high-frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI. We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g., M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are required for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current velocities for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities, the oceanic tidal angular momentum (OTAM) is calculated. For the computation of the related short-period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love numbers and associated Earth model parameters, which are considered in the Euler-Liouville equation for diurnal and sub-diurnal periods in the frequency domain.
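For reference, the linearized Euler-Liouville equation for polar motion has the standard form sketched below; the abstract does not give SPOT's specific elastic and liquid-core parameter values, so this is only the generic shape of the equation being re-examined, with the OTAM entering through the excitation function.

```latex
% Linearized Euler-Liouville equation for polar motion (generic form):
\[
  \mathbf{m}(t) \;+\; \frac{i}{\sigma_{\mathrm{cw}}}\,\dot{\mathbf{m}}(t)
  \;=\; \boldsymbol{\chi}(t),
  \qquad \mathbf{m} = m_1 + i\,m_2 ,
\]
% m_1, m_2 : pole coordinates
% sigma_cw : (complex) Chandler frequency, set by the Love numbers and the
%            elastic-Earth/liquid-core structure re-examined in SPOT
% chi      : excitation function, here built from the tidal OTAM
```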
A delta-rule model of numerical and non-numerical order processing.
Verguts, Tom; Van Opstal, Filip
2014-06-01
Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing developed largely separately. Here, we combine insights from two earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out.
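A delta-rule learner is driven purely by prediction error. The toy sketch below shows that core update applied to position-coded ordered items; it illustrates the learning principle the abstract invokes, not the authors' specific architecture, and all parameter values are arbitrary.

```python
import numpy as np

def delta_rule(X, t, lr=0.1, epochs=100):
    """Train weights w so that w @ x approximates the target rank t(x)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            y = w @ x_i                  # current prediction
            w += lr * (t_i - y) * x_i    # error-driven (delta) update
    return w

# toy ordered stimuli: five items coded by position, target is their rank
X = np.eye(5)
t = np.arange(5, dtype=float)
print(delta_rule(X, t))  # weights approximate the ordinal positions
```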
NASA Astrophysics Data System (ADS)
Gholizadeh, H.; Robeson, S. M.
2015-12-01
Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band-ratio ocean color (OC) algorithms are in the form of fourth-order polynomials, and the parameters of these polynomials (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions, despite differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the standard empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
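A minimal sketch of the PLS step on synthetic band-ratio data follows; the polynomial predictors and coefficients are placeholders (not NOMAD values), and the point is only the use of scikit-learn's PLSRegression to handle the collinear terms.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
r = rng.uniform(-0.5, 0.5, 200)                 # log10 blue-green band ratio
X = np.column_stack([r, r**2, r**3, r**4])      # highly collinear OC-style terms
y = 0.3 - 2.9 * r + 0.1 * rng.normal(size=200)  # synthetic log10(chlorophyll)

pls = PLSRegression(n_components=2)  # project onto orthogonal latent components
pls.fit(X, y)
print(pls.score(X, y))               # R^2 of the two-component fit
```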
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Polzin, Kurt A.
2011-01-01
A two-dimensional semi-empirical model of pulsed inductive thrust efficiency is developed to predict the effect of such a geometry on thrust efficiency. The model includes electromagnetic and gas-dynamic forces but excludes energy conversion from radial motion to axial motion, with the intention of characterizing thrust-efficiency loss mechanisms that result from a conical versus a flat inductive coil geometry. The range of conical pulsed inductive thruster geometries to which this model can be applied is explored with the use of finite element analysis. A semi-empirical relation for inductance as a function of current sheet radial and axial position is the limiting feature of the model, restricting its applicability to half-cone angles from ten degrees to about 60 degrees. The model is nondimensionalized, yielding a set of dimensionless performance scaling parameters. Results of the model indicate that radial current sheet motion changes the axial dynamic impedance parameter at which thrust efficiency is maximized. This shift indicates that when radial current sheet motion is permitted in the model, longer characteristic circuit timescales are more efficient, which can be attributed to a lower current sheet axial velocity as the plasma more rapidly decouples from the coil through radial motion. Thrust efficiency is shown to increase monotonically for decreasing values of the radial dynamic impedance parameter. This trend indicates that, to maximize thrust efficiency, the radial decoupling timescale should be long compared to the characteristic circuit timescale.
The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model
Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim
2013-01-01
There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best-fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258
Theoretical models of parental HIV disclosure: a critical review.
Qiao, Shan; Li, Xiaoming; Stanton, Bonita
2013-01-01
This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]) and the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested, due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.
Multiscale empirical modeling of the geomagnetic field: From storms to substorms
NASA Astrophysics Data System (ADS)
Stephens, G. K.; Sitnov, M. I.; Korth, H.; Gkioulidou, M.; Ukhorskiy, A. Y.; Merkin, V. G.
2017-12-01
An advanced version of the TS07D empirical geomagnetic field model, herein called SST17, is used to model the global picture of the geomagnetic field and its characteristic variations on both storm and substorm scales. The new SST17 model uses two regular expansions describing the equatorial currents, each having distinctly different scales: one corresponding to a thick and one to a thin current sheet relative to the thermal ion gyroradius. These expansions have an arbitrary distribution of currents in the equatorial plane that is constrained only by magnetometer data. This multi-scale description allows one to reproduce the current sheet thinning during the growth phase. Additionally, the model uses a flexible description of field-aligned currents that reproduces their spiral structure at low altitudes and provides a continuous transition from region 1 to region 2 current systems. The empirical picture of substorms is obtained by combining magnetometer data from Geotail, THEMIS, Van Allen Probes, Cluster II, Polar, IMP-8, and GOES 8, 9, 10, and 12, and then binning these data based on similar values of the auroral index AL, its time derivative, and the integral of the solar wind electric field parameter (from ACE, Wind, and IMP-8) over substorm timescales. The performance of the model is demonstrated for several events, including the 3 July 2012 substorm, which had multi-probe coverage, and a series of substorms during the March 2008 storm. It is shown that the AL binning helps reproduce dipolarization signatures in the northward magnetic field Bz, while the solar wind electric field integral allows one to capture the current sheet thinning during the growth phase. The model allows one to trace the substorm dipolarization from the tail to the inner magnetosphere, where the dipolarization of strongly stretched tail field lines causes a redistribution of the tail current, resulting in an enhancement of the partial ring current in the premidnight sector.
Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?
NASA Technical Reports Server (NTRS)
Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.
2016-01-01
We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamic) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions (with a scatter greater than 1 Earth radius (R_E)), even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non-axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non-axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non-axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and for decreases of the solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B_z = 0 (B_z being the north-south component of the interplanetary magnetic field). Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.
NASA Technical Reports Server (NTRS)
He, Maosheng; Vogt, Joachim; Luehr, Hermann; Sorbalo, Eugen; Blagau, Adrian; Le, Guan; Lu, Gang
2012-01-01
Ten years of CHAMP magnetic field measurements are integrated into MFACE, a model of field-aligned currents (FACs) using empirical orthogonal functions (EOFs). EOF1 gives the basic Region-1/Region-2 pattern, varying mainly with the interplanetary magnetic field B_z component. EOF2 separately captures the cusp current signature and B_y-related variability. Compared to existing models, MFACE yields significantly better spatial resolution, reproduces typically observed FAC thickness and intensity, improves on the magnetic local time (MLT) distribution, and gives the seasonal dependence of FAC latitudes and the NBZ current signature. MFACE further reveals systematic dependences on B_y, including 1) Region-1/Region-2 topology modifications around noon; 2) an imbalance between the maximum upward and downward current densities; and 3) the MLT location of the Harang discontinuity. Furthermore, our procedure allows quantifying the response times of FACs to solar wind driving at the bow shock nose: we obtain lags of 20 minutes and 35-40 minutes for the FAC density and latitude, respectively.
Plant water potential improves prediction of empirical stomatal models.
Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen
2017-01-01
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well tested under drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and to test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models consistently over-predict stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increased predictive capability compared to current models, with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and the consequent impairment of plant water transport will improve predictions during drought conditions, and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
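The empirical models at issue follow forms like the Medlyn et al. (2011) model; the sketch below couples that standard form with a hypothetical water-potential damping factor to illustrate the kind of dependence the study argues for. The sigmoidal form, psi50, and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def gs_medlyn(A, D, Ca, g0=0.01, g1=4.0):
    """Medlyn et al. (2011) empirical stomatal conductance (illustrative params).

    A: net assimilation, D: vapor pressure deficit (kPa), Ca: CO2 (ppm)."""
    return g0 + 1.6 * (1.0 + g1 / np.sqrt(D)) * A / Ca

def gs_with_water_potential(A, D, Ca, psi_leaf, psi50=-2.0, b=2.0, **kw):
    """Hypothetical variant: damp conductance as leaf water potential falls."""
    f = 1.0 / (1.0 + (psi_leaf / psi50) ** b)   # vulnerability-like factor
    return gs_medlyn(A, D, Ca, **kw) * f

# predicted conductance for a progressively drier leaf (psi in MPa)
for psi in (-0.5, -1.5, -3.0, -4.0):
    print(psi, gs_with_water_potential(A=10.0, D=1.5, Ca=400.0, psi_leaf=psi))
```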
Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar
2007-06-01
In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density, and solution temperature were selected as experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions: solution pH of 8.0, current density of 6.0 mA/cm², initial boron concentration of 100 mg/L, and solution temperature of 293 K. The current density was an important parameter affecting energy consumption too: high current density applied to the electrocoagulation cell increased energy consumption. Increasing solution temperature decreased energy consumption, because higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution, which also decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was fitted statistically; experimentally obtained values agreed with the values predicted from the empirical model as follows: [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not the conditions obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.
ERIC Educational Resources Information Center
Martin, Jack; Sugarman, Jeff
1993-01-01
Considers Aristotelian and Galilean views of the relationship between theory and empirical research, and argues that current models in research on teaching are Aristotelian in overly emphasizing the empirical and methodological branches of research programs to the detriment of the theoretical and conceptual. (SLD)
Preparing Current and Future Practitioners to Integrate Research in Real Practice Settings
ERIC Educational Resources Information Center
Thyer, Bruce A.
2015-01-01
Past efforts aimed at promoting a better integration between research and practice are reviewed. These include the empirical clinical practice movement (ECP), originating within social work; the empirically supported treatment (EST) initiative of clinical psychology; and the evidence-based practice (EBP) model developed within medicine. The…
On application of asymmetric Kan-like exact equilibria to the Earth magnetotail modeling
NASA Astrophysics Data System (ADS)
Korovinskiy, Daniil B.; Kubyshkina, Darya I.; Semenov, Vladimir S.; Kubyshkina, Marina V.; Erkaev, Nikolai V.; Kiehas, Stefan A.
2018-04-01
A specific class of solutions of the Vlasov-Maxwell equations, developed by means of a generalization of the well-known Harris-Fadeev-Kan-Manankova family of exact two-dimensional equilibria, is studied. The examined model reproduces the current sheet bending and shifting in the vertical plane, arising from the Earth dipole tilting and the nonradial propagation of the solar wind. The generalized model allows magnetic configurations with equatorial magnetic fields decreasing in the tailward direction as slowly as 1/x, in contrast to the original Kan model (1/x³); magnetic configurations with a single X point are also available. The analytical solution is compared with the empirical T96 model in terms of the magnetic flux tube volume. It is found that the parameters of the analytical model may be adjusted to fit a wide range of averaged magnetotail configurations. The best agreement between the analytical and empirical models is obtained for the midtail at distances beyond 10-15 R_E at high levels of magnetospheric activity. The essential model parameters (current sheet scale, current density) are compared to Cluster data of magnetotail crossings. The best match of parameters is found for single-peaked current sheets with medium values of number density, proton temperature, and drift velocity.
A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions
NASA Astrophysics Data System (ADS)
Kim, T. K.; Arge, C. N.; Pogorelov, N. V.
2017-12-01
Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.
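A minimal sketch of the boundary prescription described for WSA-ENLIL: given the WSA speed at the MHD inner boundary and a set of reference fast-wind values, momentum-flux balance sets the density and thermal-pressure balance sets the temperature. The reference numbers v1, n1, T1 here are placeholders, not ENLIL's actual constants.

```python
def boundary_density_temperature(v2, v1=675.0, n1=300.0, T1=8.0e5):
    """Prescribe (n2, T2) at the inner boundary from the WSA speed v2 (km/s).

    n1 v1^2 = n2 v2^2  (momentum-flux balance)
    n1 T1   = n2 T2    (thermal-pressure balance)
    """
    n2 = n1 * (v1 / v2) ** 2
    T2 = n1 * T1 / n2
    return n2, T2

for v in (350.0, 500.0, 700.0):   # slow, intermediate, fast wind
    print(v, boundary_density_temperature(v))
```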
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
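For one binary evidence layer, the weights-of-evidence calculation reduces to a pair of log-ratio weights; the sketch below runs it on assumed toy data in which ignitions concentrate near roads, mirroring the road effect reported above.

```python
import numpy as np

def weights_of_evidence(evidence, ignition):
    """W+ = ln P(B|D)/P(B|~D), W- = ln P(~B|D)/P(~B|~D), contrast C = W+ - W-.

    evidence, ignition: boolean arrays over map cells (B = evidence present,
    D = ignition occurred). A large positive contrast means association."""
    e, d = np.asarray(evidence, bool), np.asarray(ignition, bool)
    w_plus = np.log(np.mean(e[d]) / np.mean(e[~d]))
    w_minus = np.log(np.mean(~e[d]) / np.mean(~e[~d]))
    return w_plus, w_minus, w_plus - w_minus

# toy grid: cells near roads ignite four times more often
rng = np.random.default_rng(2)
near_road = rng.random(10000) < 0.3
fires = rng.random(10000) < np.where(near_road, 0.02, 0.005)
print(weights_of_evidence(near_road, fires))
```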
Modeling Addictive Consumption as an Infectious Disease*
Alamar, Benjamin; Glantz, Stanton A.
2011-01-01
The dominant model of addictive consumption in economics is the theory of rational addiction. The addict in this model chooses how much to consume based upon their level of addiction (past consumption), the current benefits, and all future costs. Several empirical studies of cigarette sales and price data have found a correlation between future prices and current consumption. These studies have argued that the correlation validates the rational addiction model and invalidates any model in which future consumption is not considered. An alternative to the rational addiction model is one in which addiction spreads through a population as if it were an infectious disease, as supported by the large body of empirical research on addictive behaviors. In this model, an individual's probability of becoming addicted to a substance is linked to the behavior of their parents, friends, and society. In the infectious disease model, current consumption is based only on the level of addiction and current costs. Price and consumption data from a simulation of the infectious disease model showed a qualitative match to the results of the rational addiction model. The infectious disease model can explain all of the theoretical results of the rational addiction model while additionally explaining initial consumption of the addictive good. PMID:21339848
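A toy version of the infectious-disease alternative is sketched below: susceptibles become addicted at a rate proportional to contact with current addicts, damped by current price only, with no forward-looking term. All parameters and the price-elasticity form are illustrative assumptions, not the authors' calibration.

```python
def simulate(beta=0.4, gamma=0.1, price=1.0, elasticity=0.5,
             S0=0.95, A0=0.05, dt=0.1, steps=2000):
    """Euler-integrate a two-compartment contagion model of addiction."""
    S, A = S0, A0
    for _ in range(steps):
        uptake = beta * price**-elasticity * S * A  # social contagion, current cost
        S += dt * (-uptake + gamma * A)             # quitting returns addicts to S
        A += dt * (uptake - gamma * A)
    return A                                        # long-run addicted fraction

for p in (0.5, 1.0, 2.0):   # higher current price -> lower steady-state addiction
    print(p, round(simulate(price=p), 3))
```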
History of research on modelling gypsy moth population ecology
J. J. Colbert
1991-01-01
The history of research to develop models of gypsy moth population dynamics is described, along with some related studies. Empirical regression-based models are reviewed, and then the more comprehensive process models are discussed. Current model-related research efforts are introduced.
Disorders without borders: current and future directions in the meta-structure of mental disorders.
Carragher, Natacha; Krueger, Robert F; Eaton, Nicholas R; Slade, Tim
2015-03-01
Classification is the cornerstone of clinical diagnostic practice and research. However, the extant psychiatric classification systems are not well supported by research evidence. In particular, extensive comorbidity among putatively distinct disorders flags an urgent need for fundamental changes in how we conceptualize psychopathology. Over the past decade, research has coalesced on an empirically based model that suggests many common mental disorders are structured according to two correlated latent dimensions: internalizing and externalizing. We review and discuss the development of a dimensional-spectrum model which organizes mental disorders in an empirically based manner. We also touch upon changes in the DSM-5 and put forward recommendations for future research endeavors. Our review highlights substantial empirical support for the empirically based internalizing-externalizing model of psychopathology, which provides a parsimonious means of addressing comorbidity. As future research goals, we suggest that the field would benefit from: expanding the meta-structure of psychopathology to include additional disorders, development of empirically based thresholds, inclusion of a developmental perspective, and intertwining genomic and neuroscience dimensions with the empirical structure of psychopathology.
An empirically based model for knowledge management in health care organizations.
Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita
2016-01-01
Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of organizational processes.
ERIC Educational Resources Information Center
Balaji, M. S.; Chakrabarti, Diganta
2010-01-01
The present study contributes to the understanding of the effectiveness of online discussion forum in student learning. A conceptual model based on "theory of online learning" and "media richness theory" was proposed and empirically tested. We extend the current understanding of media richness theory to suggest that use of…
Cox, Brian J; Clara, Ian P; Worobec, Lydia M; Grant, Bridget F
2012-12-01
Individual personality disorders (PDs) are grouped into three clusters in the DSM-IV (A, B, and C). There is very little empirical evidence available concerning the validity of this model in the general population. The current study included all 10 of the DSM-IV PDs assessed in Wave 1 and Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Confirmatory factor analysis was used to evaluate three plausible models of the structure of Axis II personality disorders: the current hierarchical DSM-IV three-factor model, in which individual PDs are believed to load on their assigned clusters, which in turn load onto a single Axis II factor; a general single-factor model; and three independent factors. Each of these models was tested in the total sample and also separately by gender. The higher-order DSM-IV model demonstrated good fit to the data on a number of goodness-of-fit indices, and the results for this model were very similar across genders. A model of PDs based on the current DSM-IV hierarchical conceptualization of a higher-order classification scheme thus received strong empirical support through confirmatory factor analysis in a nationally representative sample. Other models involving broad, higher-order personality domains, such as neuroticism, in relation to personality disorders have yet to be tested in epidemiologic surveys and represent an important avenue for future research.
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models were developed that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e., surface texture and the portion of finer Widmanstätten microstructure) for the SMD process. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter, and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness: thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
Simulating the Risk of Liver Fluke Infection using a Mechanistic Hydro-epidemiological Model
NASA Astrophysics Data System (ADS)
Beltrame, Ludovica; Dunne, Toby; Rose, Hannah; Walker, Josephine; Morgan, Eric; Vickerman, Peter; Wagener, Thorsten
2016-04-01
Liver fluke (Fasciola hepatica) is a common parasite found in livestock and responsible for considerable economic losses throughout the world. Risk of infection is strongly influenced by climatic and hydrological conditions, which characterise the host environment for parasite development and transmission. Despite ongoing control efforts, increases in fluke outbreaks have been reported in the UK in recent years and have often been attributed to climate change. Currently used fluke risk models are based on empirical relationships derived between historical climate and incidence data. However, hydro-climatic conditions are becoming increasingly non-stationary due to climate change and direct anthropogenic impacts such as land use change, making empirical models unsuitable for simulating future risk. In this study we introduce a mechanistic hydro-epidemiological model for liver fluke, which explicitly simulates habitat suitability for disease development in space and time, representing the parasite life cycle in connection with key environmental conditions. The model is used to assess patterns of liver fluke risk for two catchments in the UK under current and potential future climate conditions. Comparisons are made with a widely used empirical model employing different datasets, including data from regional veterinary laboratories. Results suggest that mechanistic models can achieve adequate predictive ability and support adaptive fluke control strategies under climate change scenarios.
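One mechanistic ingredient such a model needs is temperature- and moisture-driven development of the parasite's free-living stages. The sketch below accumulates degree-days above a threshold on sufficiently wet days; the 10 °C base is a commonly cited figure for F. hepatica development, while the degree-day requirement and the moisture rule are illustrative placeholders rather than the calibrated model of the abstract.

```python
import numpy as np

def development_index(temps, wetness, t_base=10.0, dd_required=200.0):
    """Fraction of the thermal requirement completed, counting wet days only."""
    temps, wetness = np.asarray(temps), np.asarray(wetness)
    dd = np.cumsum(np.maximum(temps - t_base, 0.0) * (wetness > 0.5))
    return dd / dd_required   # >= 1: a cohort can complete development

# synthetic season: sinusoidal temperatures, random daily wetness
temps = 12.0 + 6.0 * np.sin(np.linspace(0.0, np.pi, 120))
wet = np.random.default_rng(3).random(120)
print(development_index(temps, wet)[-1])
```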
SONOS Nonvolatile Memory Cell Programming Characteristics
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.
2010-01-01
Silicon-oxide-nitride-oxide-silicon (SONOS) nonvolatile memory is gaining favor over conventional EEPROM FLASH memory technology. This paper characterizes the SONOS write operation using a nonquasi-static MOSFET model. This includes floating gate charge and voltage characteristics as well as tunneling current, voltage threshold and drain current characterization. The characterization of the SONOS memory cell predicted by the model closely agrees with experimental data obtained from actual SONOS memory cells. The tunnel current, drain current, threshold voltage and read drain current all closely agreed with empirical data.
Applicability of empirical data currently used in predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.; Greenwood, T.; Roberts, B. B.
1977-01-01
Theoretical and experimental approaches to exhaust plume analysis are compared. A two-phase model is extended to include treatment of reacting gas chemistry, and thermodynamical modeling of the gaseous phase of the flow field is considered. The applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size, and particle size distributions is investigated. Experimental and analytical comparisons are presented for subscale solid rocket motors operating at three altitudes with attention to pitot total pressure and stagnation point heating rate measurements. The mathematical treatment input requirements are explained. The two-phase flow field solution adequately predicts gasdynamic properties in the inviscid portion of two-phase exhaust plumes. It is found that prediction of exhaust plume gas pressures requires an adequate model of flow field dynamics.
Collection of empirical data for assessing 800MHz coverage models
DOT National Transportation Integrated Search
2004-12-01
Wireless communications plays an important role in KDOT operations. Currently, decisions pertaining to KDOT's 800MHz radio system are made on the basis of coverage models that rely on antenna and terrain characteristics to model the coverage. W...
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
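The fitting machinery can be sketched generically: convolve a dark-count peak-height distribution with Gaussian TIA noise and read the exceedance CDF against the detection threshold. The two-exponential mixture below is only a placeholder standing in for the weighted sum of McIntyre distributions, whose exact form is not reproduced here.

```python
import numpy as np

heights = np.arange(0, 2000)                 # output peak-height bins (a.u.)
pmf = 0.7 * np.exp(-heights / 50.0) + 0.3 * np.exp(-heights / 200.0)
pmf /= pmf.sum()                             # placeholder dark-peak PMF

noise_ax = np.arange(-300, 301)
tia = np.exp(-0.5 * (noise_ax / 60.0) ** 2)  # Gaussian input-referred TIA noise
tia /= tia.sum()

total = np.convolve(pmf, tia)                # distribution of the uncorrelated sum
axis = np.arange(len(total)) + heights[0] + noise_ax[0]
exceed = 1.0 - np.cumsum(total)              # P(peak > threshold): false-count CDF
for thr in (100, 400, 800):
    print(thr, exceed[np.searchsorted(axis, thr)])
```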
EMPIRE: A Reaction Model Code for Nuclear Astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.
The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle-, and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate the values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p-, and α-induced reactions of astrophysical relevance.
Motivations for play in online games.
Yee, Nick
2006-12-01
An empirical model of player motivations in online games provides the foundation to understand and assess how players differ from one another and how motivations of play relate to age, gender, usage patterns, and in-game behaviors. In the current study, a factor analytic approach was used to create an empirical model of player motivations. The analysis revealed 10 motivation subcomponents that grouped into three overarching components (achievement, social, and immersion). Relationships between motivations and demographic variables (age, gender, and usage patterns) are also presented.
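A sketch of the factor-analytic step on synthetic survey data follows; the item count, loadings, and three-factor choice are placeholders echoing the reported achievement/social/immersion structure, while the actual model came from large-scale player survey responses.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
latent = rng.normal(size=(1000, 3))              # e.g., achievement, social, immersion
loadings = rng.normal(scale=0.8, size=(3, 12))   # 12 hypothetical survey items
items = latent @ loadings + rng.normal(scale=0.5, size=(1000, 12))

fa = FactorAnalysis(n_components=3)              # recover three latent components
fa.fit(items)
print(fa.components_.shape)                      # (3, 12): item loadings per factor
```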
Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.
2013-01-01
Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584
A Motivational Theory of Life-Span Development
Heckhausen, Jutta; Wrosch, Carsten; Schulz, Richard
2010-01-01
This article had four goals. First, the authors identified a set of general challenges and questions that a life-span theory of development should address. Second, they presented a comprehensive account of their Motivational Theory of Life-Span Development. They integrated the model of optimization in primary and secondary control and the action-phase model of developmental regulation with their original life-span theory of control to present a comprehensive theory of development. Third, they reviewed the relevant empirical literature testing key propositions of the Motivational Theory of Life-Span Development. Finally, because the conceptual reach of their theory goes far beyond the current empirical base, they pointed out areas that deserve further and more focused empirical inquiry. PMID:20063963
NASA Astrophysics Data System (ADS)
Kim, Moon-Jo; Jeong, Hye-Jin; Park, Ju-Won; Hong, Sung-Tae; Han, Heung Nam
2018-01-01
An empirical expression describing electroplastic deformation behavior is suggested, based on the Johnson-Cook (JC) model, by adding several functions to account for both thermal and athermal electric current effects. Tensile tests are carried out on an AZ31 magnesium alloy and an Al-Mg-Si alloy under pulsed electric current at various current densities with a fixed pulse duration. To describe the flow curves under electric current, a modified JC model is proposed that takes the electric current effect into account. Phenomenological descriptions of the adopted parameters in the equation are given. The modified JC model suggested in the present study is capable of describing the tensile deformation behavior under pulsed electric current reasonably well.
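A sketch of what such a modified JC expression can look like: the standard JC base multiplied by an athermal current-density factor. The functional form of that factor, the AZ31-like constants, and the omission of explicit Joule heating (which would raise T before this call) are all assumptions of the sketch, not the paper's fitted model.

```python
import numpy as np

def flow_stress_jc_modified(eps, eps_rate, T, J,
                            A=170.0, B=250.0, n=0.35, C=0.013, m=1.0,
                            T_ref=293.0, T_melt=893.0, eps_rate_ref=1e-3,
                            k_ath=0.5, J_ref=100.0):
    """Johnson-Cook flow stress (MPa) with a hedged athermal current term.

    Base JC: (A + B*eps^n) * (1 + C*ln(rate ratio)) * (1 - T*^m);
    J is current density (illustrative units), softening via tanh."""
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    base = (A + B * eps**n) * (1 + C * np.log(eps_rate / eps_rate_ref)) \
           * (1 - T_star**m)
    return base * (1.0 - k_ath * np.tanh(J / J_ref))   # athermal softening

print(flow_stress_jc_modified(eps=0.05, eps_rate=1e-3, T=293.0, J=0.0))
print(flow_stress_jc_modified(eps=0.05, eps_rate=1e-3, T=293.0, J=50.0))
```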
NASA Technical Reports Server (NTRS)
Gallagher, Dennis L.; Craven, Paul D.; Comfort, Richard H.
1999-01-01
Over 40 years of ground and spacecraft plasmaspheric measurements have resulted in many statistical descriptions of plasmaspheric properties. In some cases, these properties have been represented as analytical descriptions that are valid for specific regions or conditions. For the most part, what has not been done is to extend regional empirical descriptions or models to the plasmasphere as a whole. In contrast, many related investigations depend on the use of representative plasmaspheric conditions throughout the inner magnetosphere. Wave propagation, involving the transport of energy through the magnetosphere, is strongly affected by thermal plasma density and its composition. Ring current collisional and wave-particle losses also strongly depend on these quantities. The plasmasphere also plays a secondary role in influencing radio signals from the Global Positioning System satellites. The Global Core Plasma Model (GCPM) is an attempt to assimilate previous empirical evidence and regional models for plasmaspheric density into a continuous, smooth model of thermal plasma density in the inner magnetosphere. In that spirit, the International Reference Ionosphere is currently used to complete the low-altitude description of density and composition in the model. The models and measurements on which the GCPM is currently based, and its relationship to IRI, will be discussed.
Modelling of project cash flow on construction projects in Malang city
NASA Astrophysics Data System (ADS)
Djatmiko, Bambang
2017-09-01
Contractors usually prepare a project cash flow (PCF) on construction projects. The flow of cash in and cash out within a construction project may vary depending on the owner, the contract documents, and the construction service providers, who have their own authority. Other factors affecting the PCF are the down payment, termyn (progress payments), progress schedule, material schedule, equipment schedule, manpower schedule, and the wages of workers and subcontractors. This study aims to describe cash inflow and cash outflow based on the empirical data obtained from contractors, develop a PCF model based on Halpen & Woodhead's PCF model, and investigate whether or not there is a significant difference between the Halpen & Woodhead PCF model and the empirical PCF model. Based on the researcher's observation, PCF management has never been implemented by the contractors in Malang in serving their clients (owners). The research setting is Malang City because physical development is taking place in all fields there and many new construction service providers are emerging. The findings in this study are summarised as follows: 1) cash in included current assets (20%), owner's down payment (20%), termyn I (5%-25%), termyn II (20%), termyn III (25%), termyn IV (25%), and retention (5%), while cash out included direct cost (65%), indirect cost (20%), and profit + informal cost (15%); 2) the construction work involving the empirical PCF model in this study was started with the funds obtained from the down payment or current assets; and 3) the two models bear several similarities in the upward trends of direct cost, indirect cost, Pro Ic, progress billing, and the S-curve. The difference between the two models is the occurrence of overdraft in the Halpen and Woodhead PCF model only.
NASA Technical Reports Server (NTRS)
Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Roelof, E. C.; Tsyganenko, N. A.; Le, G.
2016-01-01
The structure of storm time currents in the inner magnetosphere, including its innermost region inside 4 R_E, is studied for the first time using a modification of the empirical geomagnetic field model TS07D and new data from the Van Allen Probes and Time History of Events and Macroscale Interactions during Substorms missions. It is shown that the model, which uses basis-function expansions instead of ad hoc current modules to approximate the magnetic field, consistently improves its resolution and magnetic field reconstruction as the number of basis functions increases, and resolves the spatial structure and evolution of the innermost eastward current. This includes a connection between the westward ring current flowing largely at R ≳ 3 R_E and the eastward ring current concentrated at R ≲ 3 R_E, resulting in a vortex current pattern. A similar pattern, coined the 'banana current', was previously inferred from pressure distributions based on energetic neutral atom imaging and first-principles ring current simulations. The morphology of the equatorial currents depends on storm phase. During the main phase, it is complex, with several asymmetries forming banana currents. Near the SYM-H minimum, the banana current is strongest, is localized in the evening-midnight sector, and is more structured than during the main phase. It then weakens during the recovery phase, and the equatorial currents become mostly azimuthally symmetric.
An Institutional Model of Organizational Practice: Financial Reporting at the Fortune 200.
ERIC Educational Resources Information Center
Mezias, Stephen J.
1990-01-01
Compares applied economic models and an institutional model in an empirical study of financial reporting practice at the Fortune 200 between 1962 and 1984. Findings indicate that the institutional model adds significant explanatory power over and above the models currently dominating the applied economics literature. Includes 47 references. (MLH)
Sempértegui, Gabriela A; Karreman, Annemiek; Arntz, Arnoud; Bekker, Marrie H J
2013-04-01
Borderline personality disorder is a serious psychiatric disorder for which the effectiveness of the current pharmacotherapeutic and psychotherapeutic approaches has been shown to be limited. In recent decades, schema therapy has increased in popularity as a treatment of borderline personality disorder; however, systematic evaluation of both its effectiveness and the empirical evidence for the theoretical background of the therapy is limited. This literature review comprehensively evaluates the current empirical status of schema therapy for borderline personality disorder. We first described the theoretical framework and reviewed its empirical foundations. Next, we examined the evidence regarding effectiveness and implementability. We found evidence for a considerable number of elements of Young's schema model; however, the strength of the results varies, and there are also mixed results and some empirical blanks in the theory. The number of studies on effectiveness is small, but the reviewed findings suggest that schema therapy is a promising treatment. In Western-European societies, the therapy could be readily implemented as a cost-effective strategy with positive economic consequences.
Nonrational Processes in Ethical Decision Making
ERIC Educational Resources Information Center
Rogerson, Mark D.; Gottlieb, Michael C.; Handelsman, Mitchell M.; Knapp, Samuel; Younggren, Jeffrey
2011-01-01
Most current ethical decision-making models provide a logical and reasoned process for making ethical judgments, but these models are empirically unproven and rely upon assumptions of rational, conscious, and quasi-legal reasoning. Such models predominate despite the fact that many nonrational factors influence ethical thought and behavior,…
Estimating wildfire behavior and effects
Frank A. Albini
1976-01-01
This paper presents a brief survey of the research literature on wildfire behavior and effects and assembles formulae and graphical computation aids based on selected theoretical and empirical models. The uses of mathematical fire behavior models are discussed, and the general capabilities and limitations of currently available models are outlined.
NASA Astrophysics Data System (ADS)
Yu, Y.; Jordanova, V. K.; McGranaghan, R. M.; Solomon, S. C.
2017-12-01
The ionospheric conductance, i.e., the height-integrated electric conductivity, can regulate both ionospheric electrodynamics and magnetospheric dynamics because of its key role in determining the electric field within the coupled magnetosphere-ionosphere system. State-of-the-art global magnetosphere models commonly adopt empirical conductance calculators to obtain the auroral conductance. Such a specification bypasses the complexity of ionosphere-thermosphere chemistry but, on the other hand, breaks the self-consistent link within the coupled system. In this study, we couple a kinetic ring current model, RAM-SCB-E, that solves for anisotropic particle distributions with a two-stream electron transport code (GLOW) to compute the height-dependent electric conductivity more self-consistently, given the auroral electron precipitation from the ring current model. Comparisons with the traditional empirical formula are carried out. It is found that the newly coupled modeling framework yields smaller Hall and Pedersen conductances, resulting in a larger electric field. As a consequence, the subauroral polarization streams demonstrate better agreement with observations from DMSP satellites. It is further found that the commonly assumed Maxwellian spectrum of particle precipitation is not globally appropriate; instead, a full precipitation spectrum resulting from wave-particle interactions in the ring current provides a more realistic description.
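Numerically, the height integration described above reduces to integrating a conductivity profile over altitude. A minimal sketch in Python, where the Gaussian layer shape and all numbers are illustrative assumptions rather than outputs of RAM-SCB-E or GLOW:

```python
import numpy as np

def conductance(sigma, z):
    """Height-integrate a conductivity profile sigma(z) [S/m] over altitude
    z [m] with the trapezoid rule, giving a conductance [S]."""
    return float(np.sum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(z)))

# Illustrative Pedersen-like layer peaking near 120 km altitude.
z = np.linspace(90e3, 300e3, 500)                  # altitude grid [m]
sigma_p = 1e-4 * np.exp(-((z - 120e3) / 20e3)**2)  # toy conductivity [S/m]
print(f"Pedersen conductance ~ {conductance(sigma_p, z):.2f} S")
```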
Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.
2013-01-01
The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.
NASA Astrophysics Data System (ADS)
Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.
2010-12-01
The dramatic increase in the volume of geomagnetic field data available from many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, has required a qualitative transition in empirical modeling tools. Classical empirical models, such as T96 and T02, used a few custom-tailored modules to represent the major magnetospheric current systems, together with simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions, as well as by advanced data-mining algorithms that search for events whose global activity parameters, such as the Sym-H index, are similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with an individual subset of the database used to reproduce the geomagnetic field pattern at every new moment in time, requires a corresponding transition in how the new empirical geomagnetic field models are used: their operation becomes similar to the runs-on-request offered by the Community Coordinated Modeling Center for many first-principles MHD and kinetic codes. To provide this mode of operation for the TS07D model, a new web-based modeling tool has been created and tested at JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, in which the magnetospheric part of the macro-parameters involved in the data-binning process (the Sym-H index and its trend parameter) is replaced by solar wind-based analogs obtained using the Burton-McPherron-Russell approach.
Validation of a new plasmapause model derived from CHAMP field-aligned current signatures
NASA Astrophysics Data System (ADS)
Heilig, Balázs; Darrouzet, Fabien; Vellante, Massimo; Lichtenberger, János; Lühr, Hermann
2014-05-01
Recently, a new model for the plasmapause location in the equatorial plane was introduced, based on magnetic field observations made by the CHAMP satellite in the topside ionosphere (Heilig and Lühr, 2013). The related signals are medium-scale field-aligned currents (MSFAC) with scale sizes of some 10 km. An empirical model for the MSFAC boundary was developed as a function of Kp and MLT, and was then compared to in situ plasmapause observations from IMAGE RPI. By correcting for the systematic displacement found in this comparison, and by taking into account the diurnal variation and Kp dependence of the residuals, an empirical model of the plasmapause location based on MSFAC measurements from CHAMP was constructed. As a first step toward validating the new plasmapause model, we used in situ (Van Allen Probes/EMFISIS, Cluster/WHISPER) and ground-based (EMMA) plasma density observations. Preliminary results show generally good agreement between the model and observations; some observed differences stem from the different definitions of the plasmapause. A more detailed validation of the method can take place as soon as SWARM and VAP data become available. Heilig, B., and H. Lühr (2013), New plasmapause model derived from CHAMP field-aligned current signatures, Ann. Geophys., 31, 529-539, doi:10.5194/angeo-31-529-2013.
The HEXACO and Five-Factor Models of Personality in Relation to RIASEC Vocational Interests
ERIC Educational Resources Information Center
McKay, Derek A.; Tokar, David M.
2012-01-01
The current study extended the empirical research on the overlap of vocational interests and personality by (a) testing hypothesized relations between RIASEC interests and the personality dimensions of the HEXACO model, and (b) exploring the HEXACO personality model's predictive advantage over the five-factor model (FFM) in capturing RIASEC…
Thinking outside the channel: modeling nitrogen cycling in networked river ecosystems
Ashley M. Helton; Geoffrey C. Poole; Judy L. Meyer; Wilfred M. Wollheim; Bruce J. Peterson; Patrick J. Mulholland; Emily S. Bernhardt; Jack A. Stanford; Clay Arango; Linda R. Ashkenas; Lee W. Cooper; Walter K. Dodds; Stanley V. Gregory; Robert O. Hall; Stephen K. Hamilton; Sherri L. Johnson; William H. McDowell; Jody D. Potter; Jennifer L. Tank; Suzanne M. Thomas; H. Maurice Valett; Jackson R. Webster; Lydia Zeglin
2011-01-01
Agricultural and urban development alters nitrogen and other biogeochemical cycles in rivers worldwide. Because such biogeochemical processes cannot be measured empirically across whole river networks, simulation models are critical tools for understanding river-network biogeochemistry. However, limitations inherent in current models restrict our ability to simulate...
Empirical models of wind conditions on Upper Klamath Lake, Oregon
Buccola, Norman L.; Wood, Tamara M.
2010-01-01
Upper Klamath Lake is a large (230 square kilometers), shallow (mean depth 2.8 meters at full pool) lake in southern Oregon. Lake circulation patterns are driven largely by wind, and the resulting currents affect the water quality and ecology of the lake. To support hydrodynamic modeling of the lake and statistical investigations of the relation between wind and lake water-quality measurements, the U.S. Geological Survey has monitored wind conditions along the lakeshore and at floating raft sites in the middle of the lake since 2005. To make the existing wind archive more useful, this report summarizes the development of empirical wind models that serve two purposes: (1) to fill short (on the order of hours or days) wind data gaps at raft sites in the middle of the lake, and (2) to reconstruct, on a daily basis and over periods of months to years, historical wind conditions at U.S. Geological Survey sites prior to 2005. Empirical wind models based on Artificial Neural Network (ANN) and Multivariate Adaptive Regression Splines (MARS) algorithms were compared. ANNs were better suited to simulating the 10-minute wind data that are the dependent variables of the gap-filling models, but the simpler MARS algorithm may be adequate to accurately simulate the daily wind data that are the dependent variables of the historical wind models. To further test the accuracy of the gap-filling models, the resulting simulated winds were used to force the hydrodynamic model of the lake, and the resulting simulated currents were compared to measurements from an acoustic Doppler current profiler. The error statistics indicated that the simulation of currents was degraded compared to when the model was forced with observed winds, but is probably adequate for short gaps in the data of a few days or less. Transport seems to be less affected by the use of simulated winds in place of observed winds: the simulated tracer concentration was similar whether the model was forced with simulated or observed winds, and differences between the two results did not accumulate over time.
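As a minimal sketch of the gap-filling idea, the following trains an ANN to predict mid-lake wind from shoreline stations; the synthetic data, predictors, and network settings are assumptions for illustration, not the report's actual configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
shore = rng.gamma(2.0, 2.0, size=(n, 3))            # winds at 3 shore sites [m/s]
raft = shore.mean(axis=1) + rng.normal(0, 0.5, n)   # correlated mid-lake wind

X_train, X_test, y_train, y_test = train_test_split(shore, raft, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)
print("R^2 on held-out data:", round(ann.score(X_test, y_test), 3))
# In use, the fitted model predicts raft winds wherever the raft record has gaps.
```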
Uniting paradigms of connectivity in marine ecology.
Brown, Christopher J; Harborne, Alastair R; Paris, Claire B; Mumby, Peter J
2016-09-01
Research on the connectivity of marine organisms among habitat patches has been dominated by two independent paradigms with distinct conservation strategies. One paradigm is the dispersal of larvae on ocean currents, which suggests networks of marine reserves. The other is the demersal migration of animals from nursery to adult habitats, requiring the conservation of connected ecosystem corridors. Here, we suggest that a common driver, wave exposure, links larval and demersal connectivity across the seascape. To study the effect of linked connectivities on fish abundance at reefs, we parameterize a demographic model for The Bahamas seascape using maps of habitats, empirically forced models of wave exposure, and spatially realistic three-dimensional hydrological models of larval dispersal. The integrated empirical-modeling approach enabled us to study linked connectivity on a scale not currently possible with purely empirical studies. We find that sheltered environments not only provide greater nursery habitat for juvenile fish but also confer higher retention on larvae spawned at adjacent reefs, creating a synergistic increase in fish abundance. Uniting connectivity paradigms to consider all life stages simultaneously can help explain the evolution of nursery habitat use and simplifies conservation advice: reserves in sheltered environments have desirable characteristics for biodiversity conservation and can support local fisheries through adult spillover. © 2016 by the Ecological Society of America.
Krueger, Robert F.; Markon, Kristian E.
2008-01-01
Research on psychopathology is at a historical crossroads. New technologies offer the promise of lasting advances in our understanding of the causes of human psychological suffering. Making the best use of these technologies, however, requires an empirically accurate model of psychopathology. Much current research is framed by the model of psychopathology portrayed in current versions of the Diagnostic and Statistical Manual of Mental Disorders (DSM; American Psychiatric Association, 2000). Although the modern DSMs have been fundamental in advancing psychopathology research, recent research also challenges some assumptions made in the DSM—for example, the assumption that all forms of psychopathology are well conceived of as discrete categories. Psychological science has a critical role to play in working through the implications of this research and the challenges it presents. In particular, behavior-genetic, personality, and quantitative-psychological research perspectives can be melded to inform the development of an empirically based model of psychopathology that would constitute an evolution of the DSM. PMID:18392116
Improved structural pricing model for the fair market price of Sukuk Ijarah in Indonesia
NASA Astrophysics Data System (ADS)
Rosadi, D.; Muslim
2017-12-01
Shariah financial products are currently developing in the Indonesian financial market. One of the most important products is the Sukuk, commonly referred to as a "sharia-compliant" bond. The types of Sukuk that have been widely traded in Indonesia until now are Sukuk Ijarah and Sukuk Mudharabah. In [1], we discussed various models for the price of the fixed non-callable Sukuk Ijarah and provided empirical studies using data from the Indonesian bond market. We found that the structural model considered in [1] cannot model the market price empirically well. In this paper, we consider an improved model and show that it performs well for modelling the fair market price of Sukuk Ijarah.
Base drag prediction on missile configurations
NASA Technical Reports Server (NTRS)
Moore, F. G.; Hymer, T.; Wilcox, F.
1993-01-01
New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.
Vu, Duy; Lomi, Alessandro; Mascia, Daniele; Pallotti, Francesca
2017-06-30
The main objective of this paper is to introduce and illustrate relational event models, a new class of statistical models for the analysis of time-stamped data with complex temporal and relational dependencies. We outline the main differences between recently proposed relational event models and more conventional network models based on the graph-theoretic formalism typically adopted in empirical studies of social networks. Our main contribution involves the definition and implementation of a marked point process extension of currently available models. According to this approach, the sequence of events of interest is decomposed into two components: (a) event time and (b) event destination. This decomposition transforms the problem of selecting the event destination in relational event models into a conditional multinomial logistic regression problem. The main advantages of this formulation are the possibility of controlling for the effect of event-specific data and a significant reduction in the estimation time of currently available relational event models. We demonstrate the empirical value of the model in an analysis of interhospital patient transfers within a regional community of health care organizations. We conclude with a discussion of how the models we present help to overcome some of the limitations of statistical models for networks that are currently available. Copyright © 2017 John Wiley & Sons, Ltd.
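A minimal sketch of the destination component: a conditional multinomial (softmax) choice over candidate receivers. The candidate statistics and coefficients below are illustrative stand-ins for estimated quantities:

```python
import numpy as np

def destination_probs(x_candidates, beta):
    """Conditional multinomial logit: P(destination j | event, sender) is a
    softmax over linear predictors of candidate-specific statistics."""
    u = x_candidates @ beta                 # linear predictor per candidate
    e = np.exp(u - u.max())                 # numerically stabilized softmax
    return e / e.sum()

# 4 candidate receiving hospitals, 2 statistics (e.g., inertia, reciprocity).
x = np.array([[3.0, 1.0], [0.0, 2.0], [1.0, 0.0], [0.5, 0.5]])
beta = np.array([0.8, 0.3])                 # illustrative effects
print(destination_probs(x, beta))           # probabilities sum to 1
```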
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth system across a range of situations, including situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth system models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed here to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model known as Dynamic Climatology is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively, and can quantify the models' ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics-free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models that are not available from (inter)comparisons restricted to complex models alone. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. In weather forecasting this role is filled by the climatological distribution, and it can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models, and would clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance, an estimate likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
Senço, Natasha M; Huang, Yu; D'Urso, Giordano; Parra, Lucas C; Bikson, Marom; Mantovani, Antonio; Shavitt, Roseli G; Hoexter, Marcelo Q; Miguel, Eurípedes C; Brunoni, André R
2015-07-01
Neuromodulation techniques for obsessive-compulsive disorder (OCD) treatment have expanded with greater understanding of the brain circuits involved. Transcranial direct current stimulation (tDCS) might be a potential new treatment for OCD, although the optimal montage is unclear. Our aim was to perform a systematic review of meta-analyses of repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS) trials for OCD, in order to identify brain stimulation targets for future tDCS trials and to support the empirical evidence with computational head modeling analysis. Systematic reviews of rTMS and DBS trials on OCD were searched in PubMed/MEDLINE. For the tDCS computational analysis, we employed head models with the goal of optimally targeting current delivery to structures of interest. Only three references matched our eligibility criteria. We simulated four different electrode montages and analyzed current direction and intensity. Although DBS, rTMS, and tDCS are not directly comparable, and our theoretical model, based on DBS and rTMS targets, needs empirical validation, we found that the tDCS montage with the cathode over the pre-supplementary motor area and an extra-cephalic anode seems to activate most of the areas related to OCD.
NASA Astrophysics Data System (ADS)
Tonitto, C.; Gurwick, N. P.
2012-12-01
Policy initiatives to reduce greenhouse gas (GHG) emissions have promoted the development of agricultural management protocols to increase SOC storage and reduce GHG emissions. We review approaches for quantifying N2O flux from agricultural landscapes. We summarize the temporal and spatial extent of observations across representative soil classes, climate zones, cropping systems, and management scenarios. We review applications of simulation and empirical modeling approaches and compare validation outcomes across modeling tools. Subsequently, we review current model application in agricultural management protocols. In particular, we compare approaches adapted for compliance with the California Global Warming Solutions Act and the Alberta Climate Change and Emissions Management Act, and those used by the American Carbon Registry. In the absence of regional data to drive model development, policies that require GHG quantification often use simple empirical models based on highly aggregated data of N2O flux as a function of applied N - Tier 1 models according to the IPCC categorization. As participants in the development of protocols that could be used in carbon offset markets, we observed that stakeholders outside of the biogeochemistry community favored outcomes from simulation modeling (Tier 3) rather than empirical modeling (Tier 2). In contrast, scientific advisors were more accepting of outcomes based on statistical approaches that rely on local observations, and their views sometimes swayed policy practitioners over the course of policy development. Both Tier 2 and Tier 3 approaches have been implemented in current policy development, and it is important that the strengths and limitations of both approaches, in the face of available data, be well understood by those drafting and adopting policies and protocols. The reliability of all models is contingent on sufficient observations for model development and validation. Simulation models applied without site calibration generally yield poor validation results, a point that particularly needs to be emphasized during policy development. For cases where sufficient calibration data are available, simulation models have demonstrated the ability to capture seasonal patterns of N2O flux. The reliability of statistical models likewise depends on data availability; because soil moisture is a significant driver of N2O flux, the best outcomes occur when empirical models are applied to systems with relevant soil classification and climate. The structure of current carbon offset protocols is not well aligned with a budgetary approach to GHG accounting. Current protocols credit field-scale reductions in N2O flux resulting from reduced fertilizer use, but do not award farmers credit for reductions in CO2 emissions resulting from reduced production of synthetic N fertilizer. Achieving the greatest GHG emission reductions through reduced synthetic N production and reduced landscape N saturation requires a re-envisioning of the agricultural landscape to include cropping systems with legume and manure N sources. The current focus on on-farm GHG sources restricts credits to simple reductions of N applied in conventional systems rather than encouraging cropping systems that promote higher recycling and retention of N.
Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat
2017-01-01
To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models, applied in a Bayesian context, to overcome some of the difficulties in estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and varying negative effects of N deposition on species richness. The method was then applied to national-scale plant species richness data from mountain hay meadows and (sub)alpine scrub sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, and aspect, as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha^-1 yr^-1 for (sub)alpine scrub, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing the critical load range for mountain hay meadows to 10-15 kg N ha^-1 yr^-1. Copyright © 2016 Elsevier Ltd. All rights reserved.
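A minimal sketch of the change-point idea on simulated data: species richness is flat below an N-deposition threshold and declines above it, and the threshold is recovered by least squares over a grid. The study's Bayesian implementation and covariate adjustments are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
ndep = rng.uniform(0, 40, 300)                       # N deposition [kg N/ha/yr]
cp_true = 12.0
rich = 30 - 0.8 * np.maximum(ndep - cp_true, 0) + rng.normal(0, 2, 300)

def sse_for_cp(cp):
    """Least-squares fit of a flat-then-declining response for a given cp."""
    X = np.column_stack([np.ones_like(ndep), np.maximum(ndep - cp, 0)])
    coef, res, *_ = np.linalg.lstsq(X, rich, rcond=None)
    return res[0] if res.size else np.inf

grid = np.linspace(1, 39, 381)
best = grid[np.argmin([sse_for_cp(c) for c in grid])]
print(f"estimated critical load (change point) ~ {best:.1f} kg N ha^-1 yr^-1")
```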
Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling
NASA Astrophysics Data System (ADS)
Mitrović, Marija; Tadić, Bosiljka
2012-11-01
We present an analysis of empirical data and agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, such as Blogs and Diggs. We consider a dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text to determine the positive or negative valence (attractiveness or aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. An agent-based model is then introduced to simulate the dynamics and to capture the emergence of emotional behaviors and communities. The agents are linked to posts on a bipartite network whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed, and by an agent's action on a post its current emotions are transferred to the post. The model rules and key parameters are inferred from the empirical data to ensure realistic values and mutual consistency. The model assumes that emotional arousal over posts drives agent actions. The simulations are performed for the case of a constant flux of agents, and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, comparable with those in the empirical system of popular posts. In view of purely emotion-driven agent actions, this type of comparison provides a quantitative measure of the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms that relate post popularity to the emotion dynamics and the prevalence of negative emotions (critique). We also demonstrate how the community structure is tuned by varying a relevant parameter in the model. All data used in this work are fully anonymized.
An empirical model of human aspiration in low-velocity air using CFD investigations.
Anthony, T Renée; Anderson, Kimberly R
2015-01-01
Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head at low air velocities, to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model relating particle size to the aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model estimating human aspiration efficiency, with particle size and the breathing and freestream velocities as predictor variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of the empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low-velocity experiments. While a linear relationship between particle size and aspiration is reported in calm-air studies, the CFD simulations identified a more reasonable fit using the square of the particle aerodynamic diameter, which better captures the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration, to inform low-velocity modifications to the inhalable particle sampling criterion.
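A minimal sketch of the regression form described above, with efficiency regressed on the square of aerodynamic diameter plus the two velocities; the data and recovered coefficients are synthetic, whereas the paper's model is fitted to CFD-simulated efficiencies:

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.uniform(5, 100, 400)            # aerodynamic diameter [micrometers]
u_free = rng.uniform(0.1, 0.4, 400)     # freestream velocity [m/s]
u_breath = rng.uniform(1.0, 4.0, 400)   # breathing velocity [m/s]
eff = (1.0 - 8e-5 * d**2 + 0.1 * u_breath - 0.2 * u_free
       + rng.normal(0, 0.02, 400))      # toy aspiration efficiency

# Regress efficiency on d^2 (not d), matching the curve's decline toward
# zero for large particles, plus the two velocities.
X = np.column_stack([np.ones_like(d), d**2, u_breath, u_free])
coef, *_ = np.linalg.lstsq(X, eff, rcond=None)
print("intercept, d^2, breathing, freestream:", np.round(coef, 5))
```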
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background: Despite TMS's wide adoption, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled, and empirical validation of such models is limited and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, in order of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all three coil models in the motor cortex using a computational FEM model. Biot-Savart models of discretized wires were used to approximate the three coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field. Modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to make reliable predictions of neuronal activation. PMID:28640923
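A minimal Biot-Savart sketch of the discretized-wire approach mentioned in the methods: the field of a single idealized circular winding turn, checked against the analytic center-field value mu0*I/(2R). The radius, current, and segment count are illustrative assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def loop_field(point, radius=0.035, current=1.0, nseg=720):
    """B at `point` from a circular loop in the z=0 plane, summing the
    Biot-Savart law over short straight segments."""
    phi = np.linspace(0, 2 * np.pi, nseg, endpoint=False)
    nodes = radius * np.column_stack([np.cos(phi), np.sin(phi), np.zeros(nseg)])
    dl = np.roll(nodes, -1, axis=0) - nodes           # segment vectors
    mid = nodes + 0.5 * dl                            # segment midpoints
    r = point - mid
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / rnorm**3
    return dB.sum(axis=0)

# Check against the analytic center field mu0*I/(2R):
B = loop_field(np.array([0.0, 0.0, 0.0]))
print(B[2], MU0 * 1.0 / (2 * 0.035))   # the two values should agree closely
```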
Reacting Chemistry Based Burn Model for Explosive Hydrocodes
NASA Astrophysics Data System (ADS)
Schwaab, Matthew; Greendyke, Robert; Steward, Bryan
2017-06-01
Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmat, Carene; Rougier, Esteban; Lei, Zhou
This project is in support of the Source Physics Experiment (SPE; Snelson et al., 2013), which aims to develop new seismic source models of explosions. One priority of this program is first-principles numerical modeling to validate and extend current empirical models.
A Theoretical Model of Children's Storytelling Using Physically-Oriented Technologies (SPOT)
ERIC Educational Resources Information Center
Guha, Mona Leigh; Druin, Allison; Montemayor, Jaime; Chipman, Gene; Farber, Allison
2007-01-01
This paper develops a model of children's storytelling using Physically-Oriented Technology (SPOT). The SPOT model draws upon literature regarding current physical storytelling technologies and was developed using a grounded theory approach to qualitative research. This empirical work focused on the experiences of 18 children, ages 5-6, who worked…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Hsi, W; Zhao, J
2016-06-15
Purpose: The Gaussian model for lateral profiles in air is crucial for an accurate treatment planning system. The field-size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and to secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field-size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of the lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted by the double Gaussian model for protons and the single Gaussian model for carbon, with error functions, agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field-size dependence of dose for the empirical model in air was at most 0.74% for protons with the double Gaussian and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot-scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but it can provide reference values for clinical use and quality assurance.
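A minimal sketch of the double Gaussian parameterization, fitted here to a synthetic lateral profile with scipy; the study instead fits weights and sigmas to the measured field-size dependence of dose:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(x, w, s1, s2):
    """Weighted sum of two zero-mean Gaussians; w is the weight of the
    narrow (core) component, 1-w that of the wide (halo) component."""
    g1 = np.exp(-x**2 / (2 * s1**2)) / (np.sqrt(2 * np.pi) * s1)
    g2 = np.exp(-x**2 / (2 * s2**2)) / (np.sqrt(2 * np.pi) * s2)
    return w * g1 + (1 - w) * g2

x = np.linspace(-50, 50, 401)                        # lateral offset [mm]
y = double_gauss(x, 0.9, 5.0, 15.0)                  # toy core + halo profile
y += np.random.default_rng(3).normal(0, 1e-5, x.size)
(w, s1, s2), _ = curve_fit(double_gauss, x, y, p0=[0.8, 4.0, 20.0])
print(f"weight={w:.3f}, core sigma={s1:.2f} mm, halo sigma={s2:.2f} mm")
```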
Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow
NASA Astrophysics Data System (ADS)
Narsipur, Shreyas
Unsteady aerodynamics has been a topic of research since the late 1930s and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and flow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of unsteady flow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970s, the push to develop low-order models to solve unsteady flow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, the semi-empirical models require several empirical constants and curve fits derived from existing experimental and computational results to be effective analysis tools. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall-type flow problems, with a focus on accurately modeling the unsteady flow physics while reducing empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model, to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at adding LEV-shedding capabilities to the LVE algorithm and combining them with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to the theory focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time-lag parameter tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and flow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Including LEV-shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the flow morphology of the LEV, to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers, and to relate it to the critical leading-edge suction parameter (LESP) used to control the initiation and termination of LEV shedding in the low-order model.
The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compare very well with CFD and experimental solutions, both in terms of aerodynamic loads and vortex flow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research successfully models the physics of unsteady flow, restricting the number of empirical coefficients to just two while predicting the aerodynamic forces and flow patterns in a simple and precise manner.
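As a rough illustration of the LESP-based shedding criterion, the sketch below evaluates a quasi-steady leading-edge suction proxy for a pitching flat plate and flags LEV shedding when the proxy exceeds a critical value. The proxy, motion, and critical value are illustrative assumptions; the thesis computes the LESP from the full unsteady vortex-element solution:

```python
import numpy as np

U, c = 10.0, 0.2                       # freestream speed [m/s], chord [m]
t = np.linspace(0, 1, 500)             # one second of motion
alpha = np.deg2rad(25) * np.sin(2 * np.pi * 2 * t)   # 2 Hz pitch motion
alpha_dot = np.gradient(alpha, t)

# Quasi-steady leading-edge suction proxy (pitch axis assumed at mid-chord).
lesp = alpha + alpha_dot * (c / 2) / (2 * U)
lesp_crit = 0.25                       # illustrative critical value

shedding = np.abs(lesp) > lesp_crit    # True while an LEV would be shed
print(f"LEV shedding during {shedding.mean() * 100:.0f}% of the cycle")
```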
GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output
2002-03-26
and less than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network... profiles from 38 upper-air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) has determined that the error... Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and
Data Retention and Anonymity Services
NASA Astrophysics Data System (ADS)
Berthold, Stefan; Böhme, Rainer; Köpsell, Stefan
The recently introduced legislation on data retention to aid prosecuting cyber-related crime in Europe also affects the achievable security of systems for anonymous communication on the Internet. We argue that data retention requires a review of existing security evaluations against a new class of realistic adversary models. In particular, we present theoretical results and first empirical evidence for intersection attacks by law enforcement authorities. The reference architecture for our study is the anonymity service AN.ON, from which we also collect empirical data. Our adversary model reflects an interpretation of the current implementation of the EC Directive on Data Retention in Germany.
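A minimal illustration of the intersection-attack logic with toy user sets; a real adversary would intersect retained connection records across many observed actions of the same pseudonym:

```python
# Each observation lists the users online when a pseudonymous action occurred;
# intersecting the sets shrinks the anonymity set toward the true sender.
observations = [
    {"alice", "bob", "carol", "dave"},
    {"alice", "carol", "erin"},
    {"alice", "carol", "frank", "bob"},
    {"alice", "grace", "carol"},
]

suspects = set.intersection(*observations)
print("remaining anonymity set:", suspects)   # {'alice', 'carol'}
```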
Development of reliable pavement models.
DOT National Transportation Integrated Search
2011-05-01
The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...
Nevers, Meredith B.; Whitman, Richard L.
2011-01-01
Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or to beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine whether emerging monitoring approaches could effectively reduce the risk of illness exposure by minimizing management errors. We examined four monitoring approaches of increasing refinement (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) at 14 Chicago beaches using historical monitoring and hydrometeorological data, and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and that combining monitoring approaches may expand beach access.
DOT National Transportation Integrated Search
1997-05-01
Current pavement design procedures are based principally on empirical approaches. The current trend toward developing more mechanistic-empirical type pavement design methods led Minnesota to develop the Minnesota Road Research Project (Mn/ROAD), a lo...
Predicting the Magnetic Properties of ICMEs: A Pragmatic View
NASA Astrophysics Data System (ADS)
Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.
2017-12-01
The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.
On the Adequacy of Current Empirical Evaluations of Formal Models of Categorization
ERIC Educational Resources Information Center
Wills, Andy J.; Pothos, Emmanuel M.
2012-01-01
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts.…
Maria C. Mateo-Sanchez; Niko Balkenhol; Samuel Cushman; Trinidad Perez; Ana Dominguez; Santiago Saura
2015-01-01
Resistance models provide a key foundation for landscape connectivity analyses and are widely used to delineate wildlife corridors. Currently, there is no general consensus regarding the most effective empirical methods to parameterize resistance models, but habitat data (species' presence data and related habitat suitability models) and genetic data are the...
Ignition behavior of live California chaparral leaves
J.D. Engstrom; J.K Butler; S.G. Smith; L.L. Baxter; T.H. Fletcher; D.R. Weise
2004-01-01
Current forest fire models are largely empirical correlations based on data from beds of dead vegetation. Improvement in model capabilities is sought by developing models of the combustion of live fuels. A facility was developed to determine the combustion behavior of small samples of live fuels, consisting of a flat-flame burner on a moveable platform. Qualitative and...
Ewuoso, Cornelius
2017-09-29
Empirical studies have now established that many patients make clinical decisions based on models other than the Anglo-American model of truth-telling and patient autonomy. Some scholars also add that current medical ethics frameworks and recent proposals for enhancing communication in the health professional-patient relationship have not adequately accommodated these models. In certain clinical contexts where health professionals and patients are motivated by significant cultural and religious values, these current frameworks cannot prevent communication breakdown, which can, in turn, jeopardize patient care, cause undue distress to a patient, or negatively impact his/her relationship with the community. These empirical studies recommend that additional frameworks be developed around other models of truth-telling, frameworks that take very seriously the significant value differences that sometimes exist between health professionals and patients, as well as patients' cultural/religious values and relational capacities. This paper contributes towards the development of one. Specifically, this study proposes a framework for truth-telling developed around an African model of truth-telling by drawing insights from the communitarian concept of ootọ́ amongst the Yoruba people of southwest Nigeria. I am optimistic that if this model is incorporated into current medical ethics codes and curricula, it will significantly enhance health professional-patient communication. © 2017 John Wiley & Sons Ltd.
An empirical model of the quiet daily geomagnetic field variation
Yamazaki, Y.; Yumoto, K.; Cardinal, M.G.; Fraser, B.J.; Hattori, P.; Kakinami, Y.; Liu, J.Y.; Lynn, K.J.W.; Marshall, R.; McNamara, D.; Nagatsuma, T.; Nikiforov, V.M.; Otadoy, R.E.; Ruhimat, M.; Shevtsov, B.M.; Shiokawa, K.; Abe, S.; Uozumi, T.; Yoshikawa, A.
2011-01-01
An empirical model of the quiet daily geomagnetic field variation has been constructed based on geomagnetic data obtained from 21 stations along the 210 Magnetic Meridian of the Circum-pan Pacific Magnetometer Network (CPMN) from 1996 to 2007. Using the least squares fitting method for geomagnetically quiet days (Kp ≤ 2+), the quiet daily geomagnetic field variation at each station was described as a function of solar activity SA, day of year DOY, lunar age LA, and local time LT. After interpolation in latitude, the model can describe the solar-activity dependence and seasonal dependence of the solar quiet daily variations (S) and lunar quiet daily variations (L). We performed a spherical harmonic analysis (SHA) on these S and L variations to examine average characteristics of the equivalent external current systems. We found three particularly noteworthy results. First, the total current intensity of the S current system is largely controlled by solar activity, while its focus position is not significantly affected by solar activity. Second, seasonal variations of the S current intensity exhibit north-south asymmetry; the current intensity of the northern vortex shows a prominent annual variation, while the southern vortex shows a clear semiannual variation as well as an annual variation. Third, the total intensity of the L current system changes depending on solar activity and season; seasonal variations of the L current intensity show an enhancement during the December solstice, independent of the level of solar activity. Copyright 2011 by the American Geophysical Union.
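A minimal sketch of the least-squares step for a single station, fitting diurnal and semidiurnal harmonics in local time to synthetic quiet-day data; the published model additionally expands in SA, DOY, and LA and interpolates in latitude:

```python
import numpy as np

rng = np.random.default_rng(4)
lt = np.tile(np.arange(24), 30)                     # local time, 30 quiet days
sq = 20 * np.sin(2 * np.pi * lt / 24) + 8 * np.sin(4 * np.pi * lt / 24 + 0.5)
obs = sq + rng.normal(0, 3, lt.size)                # toy H-component variation [nT]

# Design matrix of diurnal (m=1) and semidiurnal (m=2) harmonics in LT.
X = np.column_stack([np.sin(2 * np.pi * m * lt / 24) for m in (1, 2)] +
                    [np.cos(2 * np.pi * m * lt / 24) for m in (1, 2)])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("harmonic coefficients [nT]:", np.round(coef, 2))
```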
Moustafa, Ahmed A.; Wufong, Ella; Servatius, Richard J.; Pang, Kevin C. H.; Gluck, Mark A.; Myers, Catherine E.
2013-01-01
A recurrent-network model provides a unified account of the hippocampal region in mediating the representation of temporal information in classical eyeblink conditioning. Much empirical research is consistent with a general conclusion that delay conditioning (in which the conditioned stimulus CS and unconditioned stimulus US overlap and co-terminate) is independent of the hippocampal system, while trace conditioning (in which the CS terminates before US onset) depends on the hippocampus. However, recent studies show that, under some circumstances, delay conditioning can be hippocampal-dependent and trace conditioning can be spared following hippocampal lesion. Here, we present an extension of our prior trial-level models of hippocampal function and stimulus representation that can explain these findings within a unified framework. Specifically, the current model includes adaptive recurrent collateral connections that aid in the representation of intra-trial temporal information. With this model, as in our prior models, we argue that the hippocampus is not specialized for conditioned response timing, but rather is a general-purpose system that learns to predict the next state of all stimuli given the current state of variables encoded by activity in recurrent collaterals. As such, the model correctly predicts that hippocampal involvement in classical conditioning should be critical not only when there is an intervening trace interval, but also when there is a long delay between CS onset and US onset. Our model simulates empirical data from many variants of classical conditioning, including delay and trace paradigms in which the length of the CS, the inter-stimulus interval, or the trace interval is varied. Finally, we discuss model limitations, future directions, and several novel empirical predictions of this temporal processing model of hippocampal function and learning. PMID:23178699
The lure of rationality: Why does the deficit model persist in science communication?
Simis, Molly J; Madden, Haley; Cacciatore, Michael A; Yeo, Sara K
2016-05-01
Science communication has been historically predicated on the knowledge deficit model. Yet, empirical research has shown that public communication of science is more complex than what the knowledge deficit model suggests. In this essay, we pose four lines of reasoning and present empirical data for why we believe the deficit model still persists in public communication of science. First, we posit that scientists' training results in the belief that public audiences can and do process information in a rational manner. Second, the persistence of this model may be a product of current institutional structures. Many graduate education programs in science, technology, engineering, and math (STEM) fields generally lack formal training in public communication. We offer empirical evidence that demonstrates that scientists who have less positive attitudes toward the social sciences are more likely to adhere to the knowledge deficit model of science communication. Third, we present empirical evidence of how scientists conceptualize "the public" and link this to attitudes toward the deficit model. We find that perceiving a knowledge deficit in the public is closely tied to scientists' perceptions of the individuals who comprise the public. Finally, we argue that the knowledge deficit model is perpetuated because it can easily influence public policy for science issues. We propose some ways to uproot the deficit model and move toward more effective science communication efforts, which include training scientists in communication methods grounded in social science research and using approaches that engage community members around scientific issues. © The Author(s) 2016.
Fundamental Algorithms of the Goddard Battery Model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1985-01-01
The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) battery performance in a Low Earth Orbit (LEO) cycling regime. The model proper is still in development, but its inherent, fundamental algorithms (or methodologies) are defined. At present, the model depends closely on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been found between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve dual functions: to show the limitations of the current data base, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model and the present data base and its limitations are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology is presented.
Effect of vergence adaptation on convergence-accommodation: model simulations.
Sreenivasan, Vidhyapriya; Bobier, William R; Irving, Elizabeth L; Lakshminarayanan, Vasudevan
2009-10-01
Several theoretical control models depict the adaptation effects observed in the accommodation and vergence mechanisms of the human visual system. Two current quantitative models differ in their approach to defining adaptation and in identifying the effect of controller adaptation on their respective cross-links between the vergence and accommodative systems. Here, we compare the simulation results of these adaptation models with empirical data obtained from emmetropic adults performing a sustained near task through a +2D lens addition. Our experimental results showed an initial increase in exophoria (a divergent open-loop vergence position) and convergence-accommodation (CA) when viewing through +2D lenses. Prolonged fixation through the near-addition lenses initiated vergence adaptation, which reduced the lens-induced exophoria and produced a concurrent reduction in CA. Both models showed good agreement with empirical measures of vergence adaptation. However, only one model predicted the experimental time course of the reduction in CA. The pattern of our empirical results seems to be best described by the adaptation model in which the total vergence response is the sum of two controllers, phasic and tonic, with the output of the phasic controller providing input to the cross-link interactions.
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
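To make the flow computation concrete, here is a minimal Python sketch of a radiation-type commuting model with a scaling exponent standing in for the parameter α discussed above; the way α enters, and all names and numbers, are illustrative assumptions rather than the authors' formulation.

    import numpy as np

    def commuting_flow(T_i, m_i, n_j, s_ij, alpha=1.0):
        """Trips from origin i to destination j, radiation-model style.

        T_i   -- total trips leaving origin i
        m_i   -- population (or facility density) at origin i
        n_j   -- population at destination j
        s_ij  -- population inside the circle of radius r_ij around i,
                 excluding m_i and n_j
        alpha -- illustrative scaling exponent (alpha = 1 recovers the
                 parameter-free radiation model)
        """
        p = (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
        return T_i * p**alpha

    print(commuting_flow(T_i=1000.0, m_i=5e4, n_j=2e4, s_ij=3e5))

The appeal of this family of models is visible in the signature: only populations and trip totals are needed, with no distance-decay parameter calibrated from observed trips.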
Phenomenological aspects of the cognitive rumination construct.
Meyer, Leonardo Fernandez; Taborda, José Geraldo Vernet; da Costa, Fábio Antônio; Soares, Ana Luiza Alfaya Galego; Mecler, Kátia; Valença, Alexandre Martins
2015-01-01
To evaluate the importance of phenomenological aspects of the cognitive rumination (CR) construct in current empirical psychiatric research. We searched SciELO, Scopus, ScienceDirect, MEDLINE, OneFile (GALE), SpringerLink, Cambridge Journals and Web of Science between February and March of 2014 for studies whose title and topic included the following keywords: cognitive rumination; rumination response scale; and self-reflection. The inclusion criteria were: empirical clinical study; CR as the main object of investigation; and study that included a conceptual definition of CR. The studies selected were published in English in biomedical journals in the last 10 years. Our phenomenological analysis was based on Karl Jaspers' General Psychopathology. Most current empirical studies adopt phenomenological cognitive elements in their conceptual definitions. However, these elements do not seem to be carefully examined and are indistinctly understood as objective empirical factors that may be measured, which may contribute to misunderstandings about CR, erroneous interpretations of results and problematic theoretical models. Empirical studies fail when evaluating phenomenological aspects of the cognitive elements of the CR construct. Psychopathology and phenomenology may help define the characteristics of CR elements and may contribute to their understanding and hierarchical organization as a construct. A review of the psychopathology principles established by Jaspers may clarify some of these issues.
Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A
2012-06-01
In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably from CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a procedure similar to that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.
Empirical correlations of the performance of vapor-anode PX-series AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, L.; Merrill, J.M.; Mayberry, C.
Power systems based on AMTEC technology will be used for future NASA missions, including a Pluto-Express (PX) or Europa mission planned for approximately year 2004. AMTEC technology may also be used as an alternative to photovoltaic based power systems for future Air Force missions. An extensive development program of Alkali-Metal Thermal-to-Electric Conversion (AMTEC) technology has been underway at the Vehicle Technologies Branch of the Air Force Research Laboratory (AFRL) in Albuquerque, New Mexico since 1992. Under this program, numerical modeling and experimental investigations of the performance of the various multi-BASE tube, vapor-anode AMTEC cells have been and are being performed. Vacuum testing of AMTEC cells at AFRL determines the effects of changing the hot and cold end temperatures, T_hot and T_cold, and applied external load, R_ext, on the cell electric power output, current-voltage characteristics, and conversion efficiency. Test results have traditionally been used to provide feedback to cell designers, and to validate numerical models. The current work utilizes the test data to develop empirical correlations for cell output performance under various working conditions. Because the empirical correlations are developed directly from the experimental data, uncertainties arising from material properties that must be used in numerical modeling can be avoided. Empirical correlations of recent vapor-anode PX-series AMTEC cells have been developed. Based on AMTEC theory and the experimental data, the cell output power (as well as voltage and current) was correlated as a function of three parameters (T_hot, T_cold, and R_ext) for a given cell. Correlations were developed for different cells (PX-3C, PX-3A, PX-G3, and PX-5A), and were in good agreement with experimental data for these cells. Use of these correlations can greatly reduce the testing required to determine electrical performance of a given type of AMTEC cell over a wide range of operating conditions.
Comparison of field-aligned currents at ionospheric and magnetospheric altitudes
NASA Technical Reports Server (NTRS)
Spence, H. E.; Kivelson, M. G.; Walker, R. J.
1988-01-01
Using the empirical terrestrial magnetospheric magnetic field models of Tsyganenko and Usmanov (1982) and Tsyganenko (1987), the average field-aligned currents (FACs) in the magnetosphere were determined as a function of the Kp index. Three major model FAC systems were identified: the dayside region 1, the nightside region 1, and the nightside polar cap. The models provide information about the sources of these current systems. Mapped ionospheric model FACs are compared with low-altitude spacecraft measurements. It is found that low-altitude data can reveal either classic region 1/2 or more highly structured FAC patterns. Therefore, statistical results, whether obtained from observations or inferred from models, are expected to be averages over temporally and spatially shifting patterns.
Teaching Applied Ethics to the Righteous Mind
ERIC Educational Resources Information Center
Murphy, Peter
2014-01-01
What does current empirically informed moral psychology imply about the goals that can be realistically achieved in college-level applied ethics courses? This paper takes up this question from the vantage point of Jonathan Haidt's Social Intuitionist Model of human moral judgment. I summarize Haidt's model, and then consider a variety of…
Assessing Intelligence in Children and Youth Living in the Netherlands
ERIC Educational Resources Information Center
Hurks, Petra P. M.; Bakker, Helen
2016-01-01
In this article, we briefly describe the history of intelligence test use with children and youth in the Netherlands, explain which models of intelligence guide decisions about test use, and detail how intelligence tests are currently being used in Dutch school settings. Empirically supported and theoretical models studying the structure of human…
From Learning Object to Learning Cell: A Resource Organization Model for Ubiquitous Learning
ERIC Educational Resources Information Center
Yu, Shengquan; Yang, Xianmin; Cheng, Gang
2013-01-01
The key to implementing ubiquitous learning is the construction and organization of learning resources. While current research on ubiquitous learning has primarily focused on concept models, supportive environments and small-scale empirical research, exploring ways to organize learning resources to make them available anywhere on-demand is also…
Integrating the Demonstration Orientation and Standards-Based Models of Achievement Goal Theory
ERIC Educational Resources Information Center
Wynne, Heather Marie
2014-01-01
Achievement goal theory and thus, the empirical measures stemming from the research, are currently divided on two conceptual approaches, namely the reason versus aims-based models of achievement goals. The factor structure and predictive utility of goal constructs from the Patterns of Adaptive Learning Strategies (PALS) and the latest two versions…
ERIC Educational Resources Information Center
Grünkorn, Juliane; Upmeier zu Belzen, Annette; Krüger, Dirk
2014-01-01
Research in the field of students' understandings of models and their use in science describes different frameworks concerning these understandings. Currently, there is no conjoint framework that combines these structures and so far, no investigation has focused on whether it reflects students' understandings sufficiently (empirical evaluation).…
Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution
NASA Astrophysics Data System (ADS)
Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.
2017-12-01
The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets, CPO) can only be obtained by the VLBI technique. Their accuracy, of the order of 0.1 milliarcseconds (mas), allows comparison of the observed nutation with theoretical prediction models for a rigid Earth and constrains geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal terms of nutation, with the aim of empirically improving the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.
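As an illustration of the re-fitting step, the sketch below (Python, on synthetic numbers; the single period, noise level, and series are assumptions, not the study's data) estimates an offset, rate, and the in-phase and out-of-phase amplitudes of one nutation term from a CPO series by linear least squares.

    import numpy as np

    P = 6798.38                              # ~18.6-yr nutation period, days
    t = np.arange(0.0, 7300.0, 7.0)          # weekly epochs since J2000 (synthetic)
    rng = np.random.default_rng(0)
    dX = (0.05*np.sin(2*np.pi*t/P) + 0.02*np.cos(2*np.pi*t/P)
          + 1e-5*t + rng.normal(0.0, 0.1, t.size))   # synthetic CPO series, mas

    # design matrix: offset, rate, and one periodic term
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(2*np.pi*t/P), np.cos(2*np.pi*t/P)])
    offset, rate, a_sin, a_cos = np.linalg.lstsq(A, dX, rcond=None)[0]
    print(offset, rate, a_sin, a_cos)

In practice many periodic terms are fitted simultaneously, which simply adds more sine/cosine columns to the design matrix.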
Suppression cost forecasts in advance of wildfire seasons
Jeffrey P. Prestemon; Karen Abt; Krista Gebert
2008-01-01
Approaches for forecasting wildfire suppression costs in advance of a wildfire season are demonstrated for two lead times: fall and spring of the current fiscal year (Oct. 1–Sept. 30). Model functional forms are derived from aggregate expressions of a least cost plus net value change model. Empirical estimates of these models are used to generate advance-of-season...
Whither Causal Models in the Neuroscience of ADHD?
ERIC Educational Resources Information Center
Coghill, Dave; Nigg, Joel; Rothenberger, Aribert; Sonuga-Barke, Edmund; Tannock, Rosemary
2005-01-01
In this paper we examine the current status of the science of ADHD from a theoretical point of view. While the field has reached the point at which a number of causal models have been proposed, it remains some distance away from demonstrating the viability of such models empirically. We identify a number of existing barriers and make proposals as…
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's (NVESD) current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits describing human performance for a given time of day, spectrum, and environment. Incorporating a clutter metric into the TLS model may reduce the number of empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SMEs' rankings. The MTTV metric ranked the clutter in a manner similar to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring agreement with subjective human evaluation.
Nevers, M.B.; Whitman, R.L.
2008-01-01
To understand the fate and movement of Escherichia coli in beach water, numerous modeling studies have been undertaken, including mechanistic predictions of currents and plumes and empirical modeling based on hydrometeorological variables. Most approaches are limited in scope by nearshore currents, physical obstacles, or data limitations; few examine the issue at a larger spatial scale. Given the similarities between the variables typically included in these models, we attempted to take a broader view of E. coli fluctuations by simultaneously examining twelve beaches along 35 km of Indiana's Lake Michigan coastline that includes five point-source outfalls. The beaches had similar E. coli fluctuations, and a best-fit empirical model included two variables: wave height and an interactive term comprised of wind direction and creek turbidity. Individual-beach R² values were 0.32-0.50. Data training-set results were comparable to validation results (R² = 0.48). The amount of variation explained by the model was similar to previous reports for individual beaches. By extending the modeling approach to cover more coastline, broader-scale spatial and temporal changes in bacteria concentrations and the factors influencing them can be characterized.
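A minimal sketch of this kind of two-variable empirical model, fit by ordinary least squares on synthetic data (variable names, units, and coefficients are illustrative assumptions, not the study's data):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    wave = rng.gamma(2.0, 0.3, n)            # wave height, m (synthetic)
    wind = rng.uniform(-1.0, 1.0, n)         # onshore wind-direction component
    turb = rng.gamma(2.0, 20.0, n)           # creek turbidity, NTU (synthetic)
    log_ec = 1.0 + 0.8*wave + 0.004*wind*turb + rng.normal(0, 0.4, n)

    # regress log E. coli on wave height and the wind x turbidity interaction
    X = np.column_stack([np.ones(n), wave, wind*turb])
    beta = np.linalg.lstsq(X, log_ec, rcond=None)[0]
    resid = log_ec - X @ beta
    r2 = 1.0 - resid.var() / log_ec.var()
    print(beta, r2)

The interaction term captures the idea that turbid creek water matters most when the wind pushes the plume toward the beach.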
Lexical Processing and Organization in Bilingual First Language Acquisition: Guiding Future Research
DeAnda, Stephanie; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2016-01-01
A rich body of work in adult bilinguals documents an interconnected lexical network across languages, such that early word retrieval is language independent. This literature has yielded a number of influential models of bilingual semantic memory. However, extant models provide limited predictions about the emergence of lexical organization in bilingual first language acquisition (BFLA). Empirical evidence from monolingual infants suggests that lexical networks emerge early in development as children integrate phonological and semantic information. These findings tell us little about the interaction between two languages in the early bilingual memory. To date, an understanding of when and how languages interact in early bilingual development is lacking. In this literature review, we present research documenting lexical-semantic development across monolingual and bilingual infants. This is followed by a discussion of current models of bilingual language representation and organization and their ability to account for the available empirical evidence. Together, these theoretical and empirical accounts inform and highlight unexplored areas of research and guide future work on early bilingual memory. PMID:26866430
A robust empirical seasonal prediction of winter NAO and surface climate.
Wang, L; Ting, M; Kushner, P J
2017-03-21
A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
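The core of such an MLR forecast is simple; the sketch below shows the fit-and-predict step in Python, with placeholder predictor columns (the paper's exact index definitions are not reproduced here).

    import numpy as np

    def fit_mlr(X, y):
        """Least-squares fit of y (DJF NAO index) on autumn predictors X."""
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    def predict_mlr(beta, x_new):
        return beta[0] + np.asarray(x_new) @ beta[1:]

    # columns: standardized autumn sea-ice, stratospheric, and SST indices
    rng = np.random.default_rng(2)
    X = rng.standard_normal((30, 3))             # 30 synthetic years
    y = X @ np.array([0.6, 0.3, 0.2]) + 0.5*rng.standard_normal(30)
    beta = fit_mlr(X, y)
    print(predict_mlr(beta, X[-1]))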
Using Empirical Models for Communication Prediction of Spacecraft
NASA Technical Reports Server (NTRS)
Quasny, Todd
2015-01-01
A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, the effects on a radio-frequency link of high-energy solar events, or of the signal passing through a spacecraft's solar array, can be difficult to model and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
Testing Transitivity of Preferences on Two-Alternative Forced Choice Data
Regenwetter, Michel; Dana, Jason; Davis-Stober, Clintin P.
2010-01-01
As Duncan Luce and other prominent scholars have pointed out on several occasions, testing algebraic models against empirical data raises difficult conceptual, mathematical, and statistical challenges. Empirical data often result from statistical sampling processes, whereas algebraic theories are nonprobabilistic. Many probabilistic specifications lead to statistical boundary problems and are subject to nontrivial order-constrained statistical inference. The present paper discusses Luce's challenge for a particularly prominent axiom: transitivity. The axiom of transitivity is a central component in many algebraic theories of preference and choice. We offer the currently most complete solution to the challenge in the case of transitivity of binary preference on the theory side and two-alternative forced choice on the empirical side, explicitly for up to five, and implicitly for up to seven, choice alternatives. We also discuss the relationship between our proposed solution and weak stochastic transitivity. We recommend abandoning the latter as a model of transitive individual preferences. PMID:21833217
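For intuition, weak stochastic transitivity (the property the authors recommend abandoning as a model of transitive preference) can be checked on a matrix of 2AFC choice proportions as in this toy sketch; the matrix values are made up.

    from itertools import permutations

    def satisfies_wst(P):
        """Weak stochastic transitivity: P[a][b] >= .5 and P[b][c] >= .5
        must imply P[a][c] >= .5 for every triple of alternatives."""
        for a, b, c in permutations(range(len(P)), 3):
            if P[a][b] >= 0.5 and P[b][c] >= 0.5 and P[a][c] < 0.5:
                return False
        return True

    P = [[0.50, 0.60, 0.70],
         [0.40, 0.50, 0.55],
         [0.30, 0.45, 0.50]]
    print(satisfies_wst(P))   # True for this toy matrix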
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. The memory involves dependence between an arbitrary number of successive jumps of the process, while the waiting times between jumps are treated as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. The model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
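A toy simulation of a CTRW with one-step memory (a simplified stand-in for the paper's multi-step memory; the reversal probability and waiting-time distribution are illustrative assumptions) looks like this:

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_ctrw(n_jumps, p_rev=0.7, dx=1.0, tau=1.0):
        """One-step-memory CTRW: each jump reverses the previous one with
        probability p_rev (mimicking bid-ask bounce); waiting times are
        i.i.d. exponential variables."""
        t, x = [0.0], [0.0]
        jump = dx if rng.random() < 0.5 else -dx
        for _ in range(n_jumps):
            t.append(t[-1] + rng.exponential(tau))
            jump = -jump if rng.random() < p_rev else jump
            x.append(x[-1] + jump)
        return np.array(t), np.array(x)

    times, prices = simulate_ctrw(10000)

With p_rev above 0.5 successive price increments anticorrelate, which is what produces the negative short-lag velocity autocorrelation seen in high-frequency data.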
Tree migration detection through comparisons of historic and current forest inventories
Christopher W. Woodall; Christopher M. Oswalt; James A. Westfall; Charles H. Perry; Mark N. Nelson
2009-01-01
Changes in tree species distributions are a potential impact of climate change on forest ecosystems. The examination of tree species shifts in forests of the eastern United States largely has been limited to modeling activities with little empirical analysis of long-term forest inventory datasets. The goal of this study was to compare historic and current spatial...
Ionospheric convection inferred from interplanetary magnetic field-dependent Birkeland currents
NASA Technical Reports Server (NTRS)
Rasmussen, C. E.; Schunk, R. W.
1988-01-01
Computer simulations of ionospheric convection have been performed, combining empirical models of Birkeland currents with a model of ionospheric conductivity in order to investigate IMF-dependent convection characteristics. Birkeland currents representing conditions in the northern polar cap for a negative IMF By component are used. Two possibilities are considered: (1) the morning cell shifting into the polar cap as the IMF turns northward, with this cell and a distorted evening cell providing for sunward flow in the polar cap; and (2) the existence of a three-cell pattern when the IMF is strongly northward.
NASA Astrophysics Data System (ADS)
Simon, S.; Kabanovic, S.; Meeks, Z. C.; Neubauer, F. M.
2017-12-01
Based on the magnetic field data collected during the Cassini era, we construct an empirical model of the ambient magnetospheric field conditions along the orbit of Saturn's largest moon Titan. Observations from Cassini's close Titan flybys as well as 191 non-targeted crossings of Titan's orbit are taken into account. For each of these events we apply the classification technique of Simon et al. (2010) to categorize the ambient magnetospheric field as current sheet, lobe-like, magnetosheath, or an admixture of these regimes. Independent of Saturnian season, Titan's magnetic environment around noon local time is dominated by the perturbed fields of Saturn's broad magnetodisk current sheet. Only observations from the nightside magnetosphere reveal a slow, but steady change of the background field from southern lobe-type to northern lobe-type on a time scale of several years. This behavior is consistent with a continuous change in the curvature of the bowl-shaped magnetodisk current sheet over the course of the Saturnian year. We determine the occurrence rate of each magnetic environment category along Titan's orbit as a function of Saturnian season and local time.
Atomistic simulations of carbon diffusion and segregation in liquid silicon
NASA Astrophysics Data System (ADS)
Luo, Jinping; Alateeqi, Abdullah; Liu, Lijun; Sinno, Talid
2017-12-01
The diffusivity of carbon atoms in liquid silicon and their equilibrium distribution between the silicon melt and crystal phases are key, but unfortunately not precisely known parameters for the global models of silicon solidification processes. In this study, we apply a suite of molecular simulation tools, driven by multiple empirical potential models, to compute diffusion and segregation coefficients of carbon at the silicon melting temperature. We generally find good consistency across the potential model predictions, although some exceptions are identified and discussed. We also find good agreement with the range of available experimental measurements of segregation coefficients. However, the carbon diffusion coefficients we compute are significantly lower than the values typically assumed in continuum models of impurity distribution. Overall, we show that currently available empirical potential models may be useful, at least semi-quantitatively, for studying carbon (and possibly other impurity) transport in silicon solidification, especially if a multi-model approach is taken.
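For reference, the diffusivity such molecular simulations yield is typically extracted from the mean-square displacement via the Einstein relation; a minimal sketch follows (the array shape and the choice of fitting window are assumptions, not the authors' workflow).

    import numpy as np

    def diffusion_coefficient(positions, dt):
        """Einstein relation in 3-D: D = slope of MSD(t) / 6.

        positions -- unwrapped coordinates, shape (n_frames, n_atoms, 3)
        dt        -- time between frames
        """
        disp = positions - positions[0]
        msd = (disp**2).sum(axis=2).mean(axis=1)   # average over atoms
        t = dt * np.arange(len(msd))
        half = len(t) // 2                          # fit the late, linear regime
        slope = np.polyfit(t[half:], msd[half:], 1)[0]
        return slope / 6.0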
Toward an epistemology of clinical psychoanalysis.
Ahumada, J L
1997-01-01
Epistemology emerges from the study of the ways knowledge is gained in the different fields of scientific endeavor. Current polemics on the nature of psychoanalytic knowledge involve counterposed misconceptions of the nature of mind. On one side, clinical psychoanalysis is under siege from philosophical "hard science" stalwarts who, upholding as the unitary model of scientific knowledge the Galilean model of science built around the "well-behaved" variables of mechanics and cosmology, argue that clinical psychoanalysis does not meet empirical criteria for the validation of its claims. On the other side, its empirical character is renounced by hermeneuticists who, agreeing with "hard science" advocates on what science is, dismiss the animal nature of human beings and hold that clinical psychoanalysis is not an empirical science but a "human" interpretive one. Taking Adolf Grünbaum's critique as its referent, this paper examines how, by ignoring the differences between "exact" and observational science, the "hard science" demand for well-behaved variables misconstrues the nature of events in the realm of mind. Criteria for an epistemology fit for the facts of clinical psychoanalysis as an empirical, observational science of mind are then proposed.
Dust cyclone research in the 21st century
USDA-ARS?s Scientific Manuscript database
Research to meet the demand for ever more efficient dust cyclones continues after some eighty years. Recent trends emphasize design optimization through computational fluid dynamics (CFD) and testing design subtleties not modeled by semi-empirical equations. Improvements to current best available ...
Ionosphere-magnetosphere coupling and convection
NASA Technical Reports Server (NTRS)
Wolf, R. A.; Spiro, R. W.
1984-01-01
The following International Magnetospheric Study quantitative models of observed ionosphere-magnetosphere events are reviewed: (1) a theoretical model of convection; (2) algorithms for deducing ionospheric current and electric-field patterns from sets of ground magnetograms and ionospheric conductivity information; and (3) empirical models of ionospheric conductances and polar cap potential drop. Research into magnetic-field-aligned electric fields is reviewed, particularly magnetic-mirror effects and double layers.
Brief report: Factor structure of parenting behaviour in early adolescence.
Spithoven, Annette W M; Bijttebier, Patricia; Van Leeuwen, Karla; Goossens, Luc
2016-12-01
Researchers have traditionally relied on a tripartite model of parenting behaviour consisting of the dimensions of parental support, psychological control, and behavioural control. However, some scholars have argued for distinguishing two dimensions of behavioural control, namely reactive control and proactive control. In line with earlier work, the current study found empirical evidence for these distinct behavioural control dimensions. In addition, the study showed that the four parenting dimensions of parental support, psychological control, reactive control, and proactive control were differentially related to peer-related loneliness as well as parent-related loneliness. The current study thereby not only provides empirical evidence for the distinction between these parenting dimensions, but also shows the utility of the differentiation.
Exploring predictive performance: A reanalysis of the geospace model transition challenge
NASA Astrophysics Data System (ADS)
Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.
2017-01-01
The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.
Prediction of early summer rainfall over South China by a physical-empirical model
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen
2014-10-01
In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and represents superbly the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34 years of observations (1979-2012) reveals three physically consequential predictors. A plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical high and Okhotsk high in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than the four dynamical models' ensemble prediction skill for the 1979-2010 period (0.15). The results suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.
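The cross-validated skill quoted above is typically computed by a leave-one-year-out procedure; a generic sketch on synthetic data (not the paper's predictors) follows.

    import numpy as np

    def loocv_skill(X, y):
        """Leave-one-out cross-validated correlation between MLR
        predictions and observations."""
        n = len(y)
        preds = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            X1 = np.column_stack([np.ones(n - 1), X[keep]])
            beta = np.linalg.lstsq(X1, y[keep], rcond=None)[0]
            preds[i] = beta[0] + X[i] @ beta[1:]
        return np.corrcoef(preds, y)[0, 1]

    rng = np.random.default_rng(4)
    X = rng.standard_normal((34, 3))        # 34 years x 3 predictors
    y = X @ np.array([0.7, 0.4, 0.3]) + 0.6*rng.standard_normal(34)
    print(loocv_skill(X, y))

Holding out each year before predicting it guards against the artificial skill that in-sample fitting would otherwise report.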
North Dakota implementation of mechanistic-empirical pavement design guide (MEPDG).
DOT National Transportation Integrated Search
2014-12-01
North Dakota currently designs roads based on the AASHTO Design Guide procedure, which is based on the empirical findings of the AASHTO Road Test of the late 1950s. However, limitations of the current empirical approach have prompted AASHTO to mo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorin Zaharia; C.Z. Cheng
In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇ · (J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
Enhancing the Impact of Family Justice Centers via Motivational Interviewing: An Integrated Review.
Simmons, Catherine A; Howell, Kathryn H; Duke, Michael R; Beck, J Gayle
2016-12-01
The Family Justice Center (FJC) model is an approach to assisting survivors of intimate partner violence (IPV) that focuses on integration of services under one roof and co-location of staff members from a range of multidisciplinary agencies. Even though the FJC model is touted as a best practice strategy to help IPV survivors, empirical support for the effectiveness of this approach is scarce. The current article consolidates this small yet promising body of empirically based literature in a clinically focused review. Findings point to the importance of integrating additional resources into the FJC model to engage IPV survivors who have ambivalent feelings about whether to accept help, leave the abusive relationship, and/or participate in criminal justice processes to hold the offender accountable. One such resource, motivational interviewing (MI), holds promise in aiding IPV survivors with these decisions, but empirical investigation into how MI can be incorporated into the FJC model has yet to be published. This article, therefore, also integrates the body of literature supporting the FJC model with the body of literature supporting MI with IPV survivors. Implications for practice, policy, and research are incorporated throughout this review.
NASA Technical Reports Server (NTRS)
Herman, J. R.; Hudson, R. D.; Serafino, G.
1990-01-01
Arguments are presented showing that the basic empirical model of the solar backscatter UV (SBUV) instrument degradation used by Cebula et al. (1988) in their analysis of the SBUV data is likely to lead to an incorrect estimate of the ozone trend. A correction factor is given as a function of time and altitude that brings the SBUV data into approximate agreement with the SAGE, SME, and Dobson network ozone trends. It is suggested that the currently archived SBUV ozone data should be used with caution for periods of analysis exceeding 1 yr, since it is likely that the yearly decreases contained in the archived data are too large.
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field-aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions and compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecasting tools.
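The Poynting-flux step is a direct cross product of the two models' outputs; a minimal sketch (the field values are illustrative, and the models themselves are not reproduced):

    import numpy as np

    MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

    def dc_poynting_flux(E, dB):
        """S = (E x dB) / mu0, in W/m^2, from a modeled horizontal electric
        field E (V/m) and magnetic perturbation dB (T); the downward
        component approximates the Joule heating in steady state."""
        return np.cross(E, dB) / MU0

    E = np.array([0.02, 0.0, 0.0])    # 20 mV/m (illustrative)
    dB = np.array([0.0, 2e-7, 0.0])   # 200 nT (illustrative)
    print(dc_poynting_flux(E, dB))    # ~3.2e-3 W/m^2 in the field-aligned direction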
NASA Astrophysics Data System (ADS)
Fuchs, Richard; Prestele, Reinhard; Verburg, Peter H.
2018-05-01
The consideration of gross land changes, meaning all area gains and losses within a pixel or administrative unit (e.g. country), plays an essential role in the estimation of total land changes. Gross land changes affect the magnitude of total land changes, which feeds back to the attribution of biogeochemical and biophysical processes related to climate change in Earth system models. Global empirical studies on gross land changes are currently lacking. Whilst the relevance of gross changes for global change has been indicated in the literature, it is not accounted for in future land change scenarios. In this study, we extract gross and net land change dynamics from large-scale and high-resolution (30-100 m) remote sensing products to create a new global gross and net change dataset. Subsequently, we developed an approach to integrate our empirically derived gross and net changes with the results of future simulation models by accounting for the gross and net change addressed by the land use model and the gross and net change that is below the resolution of modelling. Based on our empirical data, we found that gross land change within 0.5° grid cells was substantially larger than net changes in all parts of the world. As 0.5° grid cells are a standard resolution of Earth system models, this leads to an underestimation of the amount of change. This finding contradicts earlier studies, which assumed gross land changes to appear in shifting cultivation areas only. Applied in a future scenario, the consideration of gross land changes led to approximately 50 % more land changes globally compared to a net land change representation. Gross land changes were most important in heterogeneous land systems with multiple land uses (e.g. shifting cultivation, smallholder farming, and agro-forestry systems). Moreover, the importance of gross changes decreased over time due to further polarization and intensification of land use. Our results serve as an empirical database for land change dynamics that can be applied in Earth system models and integrated assessment models.
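The distinction at the heart of this dataset is easy to state in code; a minimal sketch of gross versus net change within one grid cell (the areas are made-up numbers):

    import numpy as np

    def gross_and_net(gains, losses):
        """Gross change counts all area turnover; net change is what a
        coarse (e.g., 0.5 degree) representation effectively sees."""
        gross = gains.sum() + losses.sum()
        net = abs(gains.sum() - losses.sum())
        return gross, net

    gains = np.array([1.2, 0.0, 0.4])    # per-pixel area gains, km^2
    losses = np.array([0.0, 0.9, 0.5])   # per-pixel area losses, km^2
    print(gross_and_net(gains, losses))  # (3.0, 0.2): gross far exceeds net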
NASA Technical Reports Server (NTRS)
Truhlik, V.; Triskova, L.
2012-01-01
A database of electron temperature (Te) comprising most of the available LEO satellite measurements in the altitude range from 350 to 2000 km has been used for the development of a new global empirical model of Te for the International Reference Ionosphere (IRI). For the first time this will include variations with solar activity. Variations at five fixed altitude ranges centered at 350, 550, 850, 1400, and 2000 km and three seasons (summer, winter, and equinox) were represented by a system of associated Legendre polynomials (up to the 8th order) in terms of magnetic local time and the previously introduced invdip latitude. The solar activity variations of Te are represented by a correction term to the Te global pattern, derived from the empirical latitudinal profiles of Te for day and night (Truhlik et al., 2009a). Comparisons of the new Te model with data and with the IRI 2007 Te model show that the new model agrees well with the data, generally within standard deviation limits, and that it performs better than the current IRI Te model.
South, Susan C; Hamdi, Nayla R; Krueger, Robert F
2017-02-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative Gene × Environment interaction models have the potential to elucidate not only when genetic and environmental influences on a phenotype might differ, but also why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology: diathesis-stress, bioecological, differential susceptibility, and social control. In the current article, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically informative plausible accounts of how phenotypes related to social inequality (physical health and cognition) might relate to these theoretical models.
VMT Mix Modeling for Mobile Source Emissions Forecasting: Formulation and Empirical Application
DOT National Transportation Integrated Search
2000-05-01
The purpose of the current report is to propose and implement a methodology for obtaining improved link-specific vehicle miles of travel (VMT) mix values compared to those obtained from existing methods. Specifically, the research is developing a fra...
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
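Two of the skill measures commonly used in such rankings are root-mean-square error and prediction efficiency; a generic sketch (the challenge's exact score definitions are not reproduced here):

    import numpy as np

    def rmse(model, obs):
        return np.sqrt(np.mean((np.asarray(model) - np.asarray(obs))**2))

    def prediction_efficiency(model, obs):
        """1 - SSE / variance of the observations; 1 is a perfect forecast,
        0 means no better than predicting the observed mean."""
        model, obs = np.asarray(model), np.asarray(obs)
        return 1.0 - np.sum((model - obs)**2) / np.sum((obs - obs.mean())**2)

Because the two scores weight errors differently, a model can rank well on one and poorly on the other, which is exactly the spread in rankings the study reports.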
NASA Astrophysics Data System (ADS)
Sanchez-Mejia, Z. M.; Papuga, S. A.
2013-12-01
In semiarid regions, where water resources are limited and precipitation dynamics are changing, understanding the land surface-atmosphere interactions that regulate the coupled soil moisture-precipitation system is key for resource management and planning. We present a modeling approach to study soil moisture and albedo controls on planetary boundary layer height (PBLh). Using data from the Santa Rita Creosote Ameriflux site and Tucson Airport atmospheric soundings, we developed empirical relationships between soil moisture, albedo, and PBLh, and show that at least 50% of the variation in PBLh can be explained by soil moisture and albedo. We then used a stochastically driven two-layer bucket model of soil moisture dynamics, together with these empirical relationships, to model PBLh. We explored soil moisture dynamics under three different mean annual precipitation regimes (current, increased, and decreased) to evaluate the influence of soil moisture on land surface-atmosphere processes. While these precipitation regimes are simple, they represent future precipitation regimes that can influence the two soil layers in our conceptual framework. For instance, an increase in annual precipitation could impact deep soil moisture and atmospheric processes if precipitation events remain intense. We observed that the response of soil moisture, albedo, and PBLh will depend not only on changes in annual precipitation, but also on the frequency and intensity of that change. We argue that because albedo and soil moisture data are readily available at multiple temporal and spatial scales, developing empirical relationships that can be used in land surface-atmosphere applications is of great value.
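A minimal sketch of a stochastically driven two-layer bucket model of this kind (all rate constants and the rainfall statistics are illustrative assumptions, not the study's calibration):

    import numpy as np

    rng = np.random.default_rng(5)

    def run_buckets(n_days, p_rain=0.2, mean_depth=8.0, s_max=20.0,
                    k_shallow=0.15, k_deep=0.02, leak=0.3):
        """Shallow layer fills from random rainfall and leaks to the deep
        layer; exponential depletion stands in for ET and drainage."""
        s1, s2 = 5.0, 10.0                       # initial storages, mm
        out = np.empty((n_days, 2))
        for d in range(n_days):
            rain = rng.exponential(mean_depth) if rng.random() < p_rain else 0.0
            s1 = min(s1 + rain, s_max)           # shallow bucket, capped
            drain = leak * s1                    # percolation to deep layer
            s1 -= drain + k_shallow * s1
            s2 += drain - k_deep * s2
            out[d] = s1, s2
        return out

    moisture = run_buckets(365)

Changing p_rain and mean_depth separately is what lets frequency and intensity of precipitation be varied independently, as in the scenarios above.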
First-Principles-Driven Model-Based Optimal Control of the Current Profile in NSTX-U
NASA Astrophysics Data System (ADS)
Ilhan, Zeki; Barton, Justin; Wehner, William; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan
2014-10-01
Regulation in time of the toroidal current profile is one of the main challenges toward the realization of the next-step operational goals for NSTX-U. A nonlinear, control-oriented, physics-based model describing the temporal evolution of the current profile is obtained by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. In this work, the proposed model is embedded into the control design process to synthesize a time-variant, linear-quadratic-integral, optimal controller capable of regulating the safety factor profile around a desired target profile while rejecting disturbances. Neutral beam injectors and the total plasma current are used as actuators to shape the current profile. The effectiveness of the proposed controller in regulating the safety factor profile in NSTX-U is demonstrated via closed-loop predictive simulations carried out in PTRANSP. Supported by PPPL.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1991-01-01
Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.
Measurement of positive direct current corona pulse in coaxial wire-cylinder gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Han, E-mail: hanyin1986@gmail.com; Zhang, Bo, E-mail: shizbcn@mail.tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn
In this paper, a system is designed and developed to measure the positive corona current in coaxial wire-cylinder gaps. The characteristic parameters of the corona current pulses, such as the amplitude, rise time, half-wave time, and repetition frequency, are statistically analyzed, and a new set of empirical formulas is derived by numerical fitting. The influence of space charges on corona currents is tested by using three corona cages with different radii. A numerical method is used to solve a simplified ion-flow model to explain the influence of the space charges. Based on the statistical results, a stochastic model is developed to simulate the corona pulse trains. This model is verified by comparing the simulated frequency-domain responses with the measured ones.
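A sketch of such a stochastic pulse-train generator (the double-exponential pulse shape and every distribution parameter below are illustrative placeholders, not the paper's fitted formulas):

    import numpy as np

    rng = np.random.default_rng(6)

    def pulse(t, amp, tau_r=30e-9, tau_f=150e-9):
        """Double-exponential corona pulse with rise/fall time constants."""
        return amp * (np.exp(-t/tau_f) - np.exp(-t/tau_r)) * (t >= 0)

    def pulse_train(duration=1e-4, rate=1e5, fs=1e9):
        t = np.arange(0.0, duration, 1.0/fs)
        i = np.zeros_like(t)
        t0 = rng.exponential(1.0/rate)
        while t0 < duration:
            amp = rng.lognormal(np.log(5e-3), 0.3)   # ~mA amplitudes
            i += pulse(t - t0, amp)
            t0 += rng.exponential(1.0/rate)           # repetition statistics
        return t, i

    t, i = pulse_train()

An FFT of the synthetic train can then be compared against measured frequency-domain responses, which is the verification route the abstract describes.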
a Physical Parameterization of Snow Albedo for Use in Climate Models.
NASA Astrophysics Data System (ADS)
Marshall, Susan Elaine
The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within ±3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal-cycle, fixed-hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine. However, this parameterization offers greater predictability for climate change experiments outside the range of current snow conditions because it is physically based and not tuned to current empirical results.
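For intuition only, the qualitative grain-size dependence at the core of such a parameterization can be sketched as below; the square-root form and all coefficients are made-up placeholders, not the SNOALB formulas.

    import numpy as np

    def snow_albedo(r_um, a_vis=0.95, b_vis=0.003, a_nir=0.75, b_nir=0.009):
        """Visible and near-infrared band albedos decaying roughly with the
        square root of effective grain radius r_um (microns)."""
        return a_vis - b_vis*np.sqrt(r_um), a_nir - b_nir*np.sqrt(r_um)

    print(snow_albedo(100.0))    # fresh, fine-grained snow: high albedo
    print(snow_albedo(1000.0))   # old, coarse-grained snow: lower albedo

The near-infrared band is far more sensitive to grain growth than the visible band, which is why a single broadband albedo hides most of the physics.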
NASA Astrophysics Data System (ADS)
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin
2016-04-01
The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications such as site-specific (e.g., critical facility) hazard analysis, ground motions obtained from GMPEs need to be adjusted/corrected to the particular site condition under investigation. This study presents a complete framework for developing a response spectral GMPE within which the adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process: the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvto) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to that described in Edwards and Faeh (2013). Comparison of the median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.
Survey of current situation in radiation belt modeling
NASA Technical Reports Server (NTRS)
Fung, Shing F.
2004-01-01
The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancement in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we review the status of modern radiation belt modeling.
NASA Technical Reports Server (NTRS)
Gong, J.; Wu, D. L.
2014-01-01
Ice water path (IWP) and cloud top height (ht) are two of the key variables in determining cloud radiative and thermodynamic properties in climate models. Large uncertainty remains among IWP measurements from satellite sensors, in large part due to the assumptions made for cloud microphysics in these retrievals. In this study, we develop a fast algorithm to retrieve IWP from the 157, 183.3+/-3 and 190.3 GHz radiances of the Microwave Humidity Sounder (MHS) such that the MHS cloud ice retrieval is consistent with CloudSat IWP measurements. This retrieval is obtained by constraining the empirical forward models between collocated and coincident measurements of CloudSat IWP and MHS cloud-induced radiance depression (Tcir) at these channels. The empirical forward model is represented by a lookup table (LUT) of Tcir-IWP relationships as a function of ht and the frequency channel. With ht simultaneously retrieved, the IWP is found to be more accurate. The useful range of the MHS IWP retrieval is between 0.5 and 10 kg/sq m, and agrees well with CloudSat in terms of the normalized probability density function (PDF). Compared to the empirical model, current operational radiative transfer models (RTMs) still have significant uncertainties in characterizing the observed Tcir-IWP relationships. Therefore, the empirical LUT method developed here remains an effective approach to retrieving ice cloud properties from the MHS-like microwave channels.
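A schematic of the LUT inversion described above, with invented axis values standing in for the published table: the retrieval interpolates the table to the observed cloud-top height and then inverts the monotonic Tcir-IWP curve.

```python
import numpy as np

ht_axis  = np.array([6.0, 10.0, 14.0])           # cloud-top height [km] (placeholder)
iwp_axis = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # IWP [kg/m^2] (placeholder)
# Tcir [K] for each (ht, IWP); grows with IWP and with cloud-top height
tcir_lut = np.array([[2., 5., 10., 22., 35.],
                     [3., 7., 14., 30., 48.],
                     [4., 9., 18., 38., 60.]])

def retrieve_iwp(tcir_obs, ht_obs):
    # Tcir vs IWP curve at the observed cloud-top height
    row = np.array([np.interp(ht_obs, ht_axis, tcir_lut[:, j])
                    for j in range(len(iwp_axis))])
    return np.interp(tcir_obs, row, iwp_axis)    # invert the monotonic curve

print(retrieve_iwp(tcir_obs=20.0, ht_obs=12.0))  # ~2-3 kg/m^2 with this toy table
```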
ERIC Educational Resources Information Center
Serenko, Alexander
2011-01-01
The purpose of this project is to empirically investigate several antecedents and consequences of student satisfaction (SS) with Canadian university music programmes as well as to measure students' level of programme satisfaction. For this, the American Customer Satisfaction Model was tested through a survey of 276 current Canadian music students.…
ERIC Educational Resources Information Center
Kajonius, Petri J.
2017-01-01
Research is currently testing how the new maladaptive personality inventory for DSM (PID-5) and the well-established common Five-Factor Model (FFM) together can serve as an empirical and theoretical foundation for clinical psychology. The present study investigated the official short version of the PID-5 together with a common short version of…
Developing the Next Generation NATO Reference Mobility Model
2016-06-27
UNCLASSIFIED: Distribution Statement A. Approved for public release; distribution is unlimited (#27992). Vehicle dynamics model... and numerical resolution, for use in vehicle design, acquisition and operational mobility planning. An open architecture was established... the current empirical methods for simulating vehicle and suspension designs. Industry-wide shortfall with tire dynamics and soft-soil behavior.
Wife Abuse and the Wife Abuser: Review and Recommendations.
ERIC Educational Resources Information Center
Carden, Ann D.
1994-01-01
Reviews clinical, theoretical, and empirical literature on wife abuse/abusers. Presents historical and contextual information, overview of domestic violence, prevalence data, and descriptions of evolution and current status of public and professional awareness and response. Proposes integrative model for understanding etiologic, dynamic, and…
Performance Analysis of a Ring Current Model Driven by Global MHD
NASA Astrophysics Data System (ADS)
Falasca, A.; Keller, K. A.; Fok, M.; Hesse, M.; Gombosi, T.
2003-12-01
Effectively modeling the high-energy particles in Earth's inner magnetosphere has the potential to improve safety in both manned and unmanned spacecraft. One model of this environment is the Fok Ring Current Model. This model can utilize as inputs both solar wind data, and empirical ionospheric electric field and magnetic field models. Alternatively, we have a procedure which allows the model to be driven by outputs from the BATS-R-US global MHD model. By using in-situ satellite data we will compare the predictive capability of this model in its original stand-alone form, to that of the model when driven by the BATS-R-US Global Magnetosphere Model. As a basis for comparison we use the April 2002 and May 2003 storms where suitable LANL geosynchronous data are available.
DeAnda, Stephanie; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2016-06-01
A rich body of work in adult bilinguals documents an interconnected lexical network across languages, such that early word retrieval is language independent. This literature has yielded a number of influential models of bilingual semantic memory. However, extant models provide limited predictions about the emergence of lexical organization in bilingual first language acquisition (BFLA). Empirical evidence from monolingual infants suggests that lexical networks emerge early in development as children integrate phonological and semantic information. These findings tell us little about the interaction between 2 languages in early bilingual memory. To date, an understanding of when and how languages interact in early bilingual development is lacking. In this literature review, we present research documenting lexical-semantic development across monolingual and bilingual infants. This is followed by a discussion of current models of bilingual language representation and organization and their ability to account for the available empirical evidence. Together, these theoretical and empirical accounts inform and highlight unexplored areas of research and guide future work on early bilingual memory.
Physics-based Control-oriented Modeling of the Current Profile Evolution in NSTX-Upgrade
NASA Astrophysics Data System (ADS)
Ilhan, Zeki; Barton, Justin; Shi, Wenyu; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan
2013-10-01
The operational goals for the NSTX-Upgrade device include non-inductive sustainment of high-β plasmas, realization of the high performance equilibrium scenarios with neutral beam heating, and achievement of longer pulse durations. Active feedback control of the current profile is proposed to enable these goals. Motivated by the coupled, nonlinear, multivariable, distributed-parameter plasma dynamics, the first step towards feedback control design is the development of a physics-based, control-oriented model for the current profile evolution in response to non-inductive current drives and heating systems. For this purpose, the nonlinear magnetic-diffusion equation is coupled with empirical models for the electron density, electron temperature, and non-inductive current drives (neutral beams). The resulting first-principles-driven, control-oriented model is tailored for NSTX-U based on the PTRANSP predictions. Main objectives and possible challenges associated with the use of the developed model for control design are discussed. This work was supported by PPPL.
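A heavily simplified sketch of the kind of model described: a schematic cylindrical magnetic-diffusion equation for the poloidal flux, with Spitzer-like resistivity from an assumed temperature profile and an assumed Gaussian non-inductive current source. The real tokamak geometry factors and the paper's empirical profile models are omitted; all numbers are placeholders.

```python
import numpy as np

n, dt, mu0 = 50, 1e-4, 4e-7 * np.pi
rho = np.linspace(0.0, 1.0, n)                     # normalized minor radius
drho = rho[1] - rho[0]
psi = np.zeros(n)                                  # poloidal flux profile
te = 1.0e3 * (1.0 - 0.9 * rho ** 2)                # assumed Te profile [eV]
eta = 1.65e-9 / (te / 1.0e3) ** 1.5                # Spitzer-like resistivity [Ohm m]
j_ni = 1.0e6 * np.exp(-((rho - 0.3) / 0.2) ** 2)   # assumed NBI-driven current [A/m^2]

for _ in range(1000):
    dpsi = np.gradient(psi, drho)
    lap = np.gradient(rho * dpsi, drho)            # d/drho (rho dpsi/drho)
    lap[1:] /= rho[1:]
    lap[0] = lap[1]                                # regularize the magnetic axis
    psi += dt * ((eta / mu0) * lap + eta * j_ni)   # resistive diffusion + NI source
    psi[-1] = 0.0                                  # fixed edge flux (schematic)

print(psi.max())
```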
NASA Astrophysics Data System (ADS)
Chuang, Hsueh-Hua
The purpose of this dissertation is to develop an iterative model for the analysis of the current distribution in vertical-cavity surface-emitting lasers (VCSELs) using a circuit network modeling approach. This iterative model divides the VCSEL structure into numerous annular elements and uses a circuit network consisting of resistors and diodes. The measured sheet resistance of the p-distributed Bragg reflector (DBR), the measured sheet resistance of the layers under the oxide layer, and two empirical adjustable parameters are used as inputs to the iterative model to determine the resistance of each resistor. The two empirical values are related to the anisotropy of the resistivity of the p-DBR structure. The spontaneous current, stimulated current, and surface recombination current are accounted for by the diodes. The lateral carrier transport in the quantum well region is analyzed using drift and diffusion currents. The optical gain is calculated as a function of wavelength and carrier density from fundamental principles. The predicted threshold current densities for these VCSELs match the experimentally measured current densities over the wavelength range of 0.83 μm to 0.86 μm with an error of less than 5%. This model includes the effects of the resistance of the p-DBR mirrors, the oxide current-confining layer and spatial hole burning. Our model shows that higher sheet resistance under the oxide layer reduces the threshold current, but also reduces the current range over which single transverse mode operation occurs. The spatial hole burning profile depends on the lateral drift and diffusion of carriers in the quantum wells but is dominated by the voltage drop across the p-DBR region. To my knowledge, for the first time, the drift current and the diffusion current are treated separately. Previous work uses an ambipolar approach, which underestimates the total charge transferred in the quantum well region, especially under the oxide region. However, the total result of the drift current and the diffusion current is less significant than the Ohmic current, especially in the cavity region. This simple iterative model is applied to commercially available oxide-confined VCSELs. The simulation results show excellent agreement with experimentally measured voltage-current curves (within 3.7% for a 10 μm and within 4% for a 5 μm diameter VCSEL) and light-current curves (within 2% for a 10 μm and within 9% for a 5 μm diameter VCSEL) and provides insight into the detailed distributions of current and voltage within a VCSEL. This difference between the theoretically calculated results and the measured results is less than the variation shown in the data sheets for production VCSELs.
Landslide Hazard Probability Derived from Inherent and Dynamic Determinants
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan
2016-04-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach combines an empirical inherent landslide probability with a numerical dynamic probability, generated by combining routed recharge from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model run in a Monte Carlo simulation. Landslide hazard mapping is advanced by adjusting the dynamic model of stability with an empirically-based scalar representing the inherent stability of the landscape, creating a probabilistic quantitative measure of geohazard prediction at a 30-m resolution. Climatology, soil, and topography control the dynamic nature of hillslope stability and the empirical information further improves the discriminating ability of the integrated model. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex, a rugged terrain with nearly 2,700 m (9,000 ft) of vertical relief, covering 2757 sq km (1064 sq mi) in northern Washington State, U.S.A.
A semi-empirical model for the formation and depletion of the high burnup structure in UO2
Pizzocri, D.; Cappia, F.; Luzzi, L.; ...
2017-01-31
In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
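The assumed exponential grain-size reduction can be written compactly as below; the notation (initial grain size a_0, asymptotic restructured size a_inf, burnup constant bu_0) is a plausible reading of the abstract, not the paper's own symbols.

```latex
% Hypothetical form of the assumed exponential grain-size reduction with
% local effective burnup bu (symbols illustrative, not the paper's):
\begin{equation}
  a(\mathrm{bu}) \;=\; a_{\infty} \;+\; \left(a_{0} - a_{\infty}\right)
  \exp\!\left(-\,\mathrm{bu}/\mathrm{bu}_{0}\right)
\end{equation}
```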
NASA Astrophysics Data System (ADS)
Mandache, C.; Khan, M.; Fahr, A.; Yanishevsky, M.
2011-03-01
Probability of detection (PoD) studies are broadly used to determine the reliability of specific nondestructive inspection procedures, as well as to provide data for damage tolerance life estimations and calculation of inspection intervals for critical components. They require inspections on a large set of samples, which makes these statistical assessments time-consuming and costly. Physics-based numerical simulations of nondestructive testing inspections could be used as a cost-effective alternative to empirical investigations. They realistically predict the inspection outputs as functions of the input characteristics related to the test piece, transducer and instrument settings, which are subsequently used to partially substitute and/or complement inspection data in PoD analysis. This work focuses on the numerical modelling aspects of eddy current testing for the bolt hole inspections of wing box structures typical of the Lockheed Martin C-130 Hercules and P-3 Orion aircraft, found in the air force inventory of many countries. Boundary element-based numerical modelling software was employed to predict the eddy current signal responses when varying inspection parameters related to probe characteristics, crack geometry and test piece properties. Two demonstrator exercises were used for eddy current signal prediction when lowering the driver probe frequency and changing the material's electrical conductivity, followed by subsequent discussions and examination of the implications of using simulated data in the PoD analysis. Despite some simplifying assumptions, the modelled eddy current signals were found to provide similar results to the actual inspections. It is concluded that physics-based numerical simulations have the potential to partially substitute or complement inspection data required for PoD studies, reducing the cost, time, effort and resources necessary for a full empirical PoD assessment.
Signs of universality in the structure of culture
NASA Astrophysics Data System (ADS)
Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego
2017-11-01
Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been done so far. It is clear that an important role in driving these dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits are fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated, from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short-term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in-between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that these dynamics might be characterized by criticality and involve mechanisms beyond social influence.
"La Clave Profesional": Validation of a Vocational Guidance Instrument
ERIC Educational Resources Information Center
Mudarra, Maria J.; Lázaro Martínez, Ángel
2014-01-01
Introduction: The current study demonstrates empirical and cultural validity of "La Clave Profesional" (Spanish adaptation of Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…
EVALUATION AND ANALYSIS OF MICROSCALE FLOW AND TRANSPORT DURING REMEDIATION
The design of in-situ remediation is currently based on a description at the macroscopic scale. Phenomena at the pore and pore-network scales are typically lumped in terms of averaged quantities, using empirical or ad hoc expressions. These models cannot address fundamental rem...
ENERGY IMBALANCE UNDERLYING THE DEVELOPMENT OF CHILDHOOD OBESITY IN HISPANIC CHILDREN
USDA-ARS?s Scientific Manuscript database
Childhood obesity arises from dysregulation of energy balance; however, the energetics for the development of childhood obesity are poorly delineated. We therefore developed a mathematical model based on empirical data and current understanding of energy balance to predict the total energy cost of w...
Multicultural Counseling Training: Past, Present, and Future Directions.
ERIC Educational Resources Information Center
Abreu, Jose M.; Chung, Ruth H. Gim; Atkinson, Donald R.
2000-01-01
Provides a selective review of the multicultural counseling training (MCT) literature. A brief historical account of multicultural counseling is followed by three other sections detailing current models of MCT, conceptualization of training objectives, and empirical research. Highlights and discusses critical issues for the present and future…
THE CURRENT STATUS OF RESEARCH AND THEORY IN HUMAN PROBLEM SOLVING.
ERIC Educational Resources Information Center
DAVIS, GARY A.
PROBLEM-SOLVING THEORIES IN THREE AREAS - TRADITIONAL (STIMULUS-RESPONSE) LEARNING, COGNITIVE-GESTALT APPROACHES, AND COMPUTER AND MATHEMATICAL MODELS - WERE SUMMARIZED. RECENT EMPIRICAL STUDIES (1960-65) ON PROBLEM SOLVING WERE CATEGORIZED ACCORDING TO TYPE OF BEHAVIOR ELICITED BY PARTICULAR PROBLEM-SOLVING TASKS. ANAGRAM,…
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases.
Comparing mechanistic and empirical approaches to modeling the thermal niche of almond
NASA Astrophysics Data System (ADS)
Parker, Lauren E.; Abatzoglou, John T.
2017-09-01
Delineating locations that are thermally viable for cultivating high-value crops can help to guide land use planning, agronomics, and water management. Three modeling approaches were used to identify the potential distribution and key thermal constraints on almond cultivation across the southwestern United States (US), including two empirical species distribution models (SDMs)—one using commonly used bioclimatic variables (traditional SDM) and the other using more physiologically relevant climate variables (nontraditional SDM)—and a mechanistic model (MM) developed using published thermal limitations from field studies. While models showed comparable results over the majority of the domain, including over existing croplands with high almond density, the MM suggested the greatest potential for the geographic expansion of almond cultivation, with frost susceptibility and insufficient heat accumulation being the primary thermal constraints in the southwestern US. The traditional SDM over-predicted almond suitability in locations shown by the MM to be limited by frost, whereas the nontraditional SDM showed greater agreement with the MM in these locations, indicating that incorporating physiologically relevant variables in SDMs can improve predictions. Finally, opportunities for geographic expansion of almond cultivation under current climatic conditions in the region may be limited, suggesting that increasing production may rely on agronomical advances and densifying current almond plantations in existing locations.
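A toy sketch of the mechanistic thermal-suitability logic described above; the thresholds are invented placeholders, not the published almond limits from field studies.

```python
import numpy as np

def thermally_suitable(gdd, frost_days, chill_hours):
    """gdd: growing degree-days; frost_days: spring frosts/yr; chill_hours: winter chill."""
    enough_heat  = gdd >= 2500           # assumed heat-accumulation requirement
    frost_safe   = frost_days <= 2       # assumed bloom-time frost tolerance
    enough_chill = chill_hours >= 400    # assumed dormancy chilling requirement
    return enough_heat & frost_safe & enough_chill

# gridded inputs (e.g., from downscaled climate data); values invented
gdd   = np.array([[2600, 2400], [3100, 2900]])
frost = np.array([[1, 4], [0, 2]])
chill = np.array([[500, 600], [450, 300]])
print(thermally_suitable(gdd, frost, chill))
```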
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
[Personalised treatment of disorders in the use of alcohol and nicotine].
Dom, G; van den Brink, W; Schellekens, A
There is an increasing interest in personalised treatment based on the individual characteristics of the patient in the field of addiction care. This article summarises the present state of staging and profiling possibilities within addiction care, drawing on a literature review that highlights current scientific findings, and proposes a theoretical model. There are currently too few studies to allow for a fully data-driven model. However, research identifying biomarkers is growing and some clinically implementable findings can be put forward. A personalised approach in addiction care holds promise, but there is an urgent need for better and larger datasets to empirically support models aimed at clinical use.
On the need and use of models to explore the role of economic confidence: a survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprigg, James A.; Paez, Paul J.; Hand, Michael S.
2005-04-01
Empirical studies suggest that consumption is more sensitive to current income than suggested under the permanent income hypothesis, which raises questions regarding expectations for future income, risk aversion, and the role of economic confidence measures. This report surveys a body of fundamental economic literature as well as burgeoning computational modeling methods to support efforts to better anticipate cascading economic responses to terrorist threats and attacks. This is a three-part survey to support the incorporation of models of economic confidence into agent-based microeconomic simulations. We first review broad underlying economic principles related to this topic. We then review the economic principle of confidence and related empirical studies. Finally, we provide a brief survey of efforts and publications related to agent-based economic simulation.
Transport modeling of L- and H-mode discharges with LHCD on EAST
NASA Astrophysics Data System (ADS)
Li, M. H.; Ding, B. J.; Imbeaux, F.; Decker, J.; Zhang, X. J.; Kong, E. H.; Zhang, L.; Wei, W.; Shan, J. F.; Liu, F. K.; Wang, M.; Xu, H. D.; Yang, Y.; Peysson, Y.; Basiuk, V.; Artaud, J.-F.; Yuynh, P.; Wan, B. N.
2013-04-01
High-confinement (H-mode) discharges with lower hybrid current drive (LHCD) as the only heating source have been obtained on EAST. In this paper, an empirical transport model of mixed Bohm/gyro-Bohm type for electron and ion heat transport was first calibrated against a database of three L-mode shots on EAST. The electron and ion temperature profiles are well reproduced in predictive modeling with the calibrated model coupled to the CRONOS suite of codes. CRONOS calculations with experimental profiles are also performed for electron power balance analysis. In addition, the time evolution of LHCD is calculated by the C3PO/LUKE code, including current diffusion, and the results are compared with experimental observations.
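For orientation, a mixed Bohm/gyro-Bohm model combines two diffusivities with calibrated weights. The sketch below uses the standard scalings, with invented coefficients standing in for the constants tuned against the L-mode database; the exact profile factors of the published model are omitted.

```python
def chi_mixed(te_eV, b_tesla, rho_s_over_a, c_bohm=0.05, c_gyrobohm=5.0):
    """Mixed Bohm/gyro-Bohm heat diffusivity [m^2/s] (schematic).

    Standard scalings: chi_Bohm ~ T/(16 e B); chi_gyroBohm ~ (rho_s/a) T/(e B).
    With T in eV and B in tesla, T/(e B) is numerically te_eV / b_tesla.
    The two coefficients play the role of the calibrated constants.
    """
    chi_bohm = te_eV / (16.0 * b_tesla)
    chi_gyrobohm = rho_s_over_a * te_eV / b_tesla
    return c_bohm * chi_bohm + c_gyrobohm * chi_gyrobohm

print(chi_mixed(te_eV=800.0, b_tesla=2.0, rho_s_over_a=0.003))
```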
Modeling Traffic on the Web Graph
NASA Astrophysics Data System (ADS)
Meiss, Mark R.; Gonçalves, Bruno; Ramasco, José J.; Flammini, Alessandro; Menczer, Filippo
Analysis of aggregate and individual Web requests shows that PageRank is a poor predictor of traffic. We use empirical data to characterize properties of Web traffic not reproduced by Markovian models, including both aggregate statistics such as page and link traffic, and individual statistics such as entropy and session size. As no current model reconciles all of these observations, we present an agent-based model that explains them through realistic browsing behaviors: (1) revisiting bookmarked pages; (2) backtracking; and (3) seeking out novel pages of topical interest. The resulting model can reproduce the behaviors we observe in empirical data, especially heterogeneous session lengths, reconciling the narrowly focused browsing patterns of individual users with the extreme variance in aggregate traffic measurements. We can thereby identify a few salient features that are necessary and sufficient to interpret Web traffic data. Beyond the descriptive and explanatory power of our model, these results may lead to improvements in Web applications such as search and crawling.
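A toy agent implementing the three browsing behaviors named above (bookmark revisits, backtracking, and novelty seeking); the probabilities and the small example graph are invented for illustration.

```python
import random

def browse(graph, start, steps=100, p_bookmark=0.1, p_back=0.2):
    bookmarks, history, page = {start}, [start], start
    visits = []
    for _ in range(steps):
        r = random.random()
        if r < p_bookmark:                        # (1) revisit a bookmarked page
            page = random.choice(sorted(bookmarks))
        elif r < p_bookmark + p_back and len(history) > 1:
            history.pop()                         # (2) backtrack
            page = history[-1]
            visits.append(page)
            continue
        else:                                     # (3) follow a link, preferring novelty
            links = graph.get(page, [])
            novel = [p for p in links if p not in history] or links
            if novel:
                page = random.choice(novel)
            if random.random() < 0.05:
                bookmarks.add(page)               # occasionally bookmark it
        history.append(page)
        visits.append(page)
    return visits

web = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["a"]}
print(browse(web, "a", steps=10))
```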
Cortical Substrate of Haptic Representation
1993-08-24
experience and data from primates, we have developed computational models of short-term active memory. Such models may have technological interest... neurobiological work on primate memory. It is on that empirical work that our current theoretical efforts are founded. Our future physiological research... Academy of Sciences, New York, vol. 608, pp. 318-329, 1990. J.M. Fuster - Behavioral electrophysiology of the prefrontal cortex of the primate. Progress
Thermal Analysis of the PediaFlow pediatric ventricular assist device.
Gardiner, Jeffrey M; Wu, Jingchun; Noh, Myounggyu D; Antaki, James F; Snyder, Trevor A; Paden, David B; Paden, Brad E
2007-01-01
Accurate modeling of heat dissipation in pediatric intracorporeal devices is crucial in avoiding tissue and blood thermotrauma. Thermal models of new Maglev ventricular assist device (VAD) concepts for the PediaFlow VAD are developed by incorporating empirical heat transfer equations with thermal finite element analysis (FEA). The models assume three main sources of waste heat generation: copper motor windings, active magnetic thrust bearing windings, and eddy currents generated within the titanium housing due to the two-pole motor. Waste heat leaves the pump by convection into blood passing through the pump and conduction through surrounding tissue. Coefficients of convection are calculated and assigned locally along fluid path surfaces of the three-dimensional pump housing model. FEA thermal analysis yields a three-dimensional temperature distribution for each of the three candidate pump models. Thermal impedances from the motor and thrust bearing windings to tissue and blood contacting surfaces are estimated based on maximum temperature rise at respective surfaces. A new updated model for the chosen pump topology is created incorporating computational fluid dynamics with empirical fluid and heat transfer equations. This model represents the final geometry of the first generation prototype, incorporates eddy current heating, and has 60 discrete convection regions. Thermal analysis is performed at nominal and maximum flow rates, and temperature distributions are plotted. Results suggest that the pump will not exceed a temperature rise of 2 degrees C during normal operation.
NASA Astrophysics Data System (ADS)
Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.
2015-06-01
This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
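For reference, the stepping-stone estimator referred to above has the standard form below, where the beta_k are the power-posterior temperatures (a textbook identity, stated here for orientation rather than taken from the paper).

```latex
% Stepping-stone estimate of the marginal likelihood p(y), using K power
% posteriors p_k \propto p(y \mid \theta)^{\beta_k} p(\theta) with
% 0 = \beta_0 < \beta_1 < \dots < \beta_K = 1:
\begin{equation}
  \widehat{p(y)} \;=\; \prod_{k=1}^{K}
  \frac{1}{N}\sum_{i=1}^{N}
  p\!\left(y \mid \theta_{k-1,i}\right)^{\beta_k-\beta_{k-1}},
  \qquad \theta_{k-1,i} \sim p_{k-1}(\theta \mid y).
\end{equation}
```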
Sheehan, D V; Sheehan, K H
1982-08-01
The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u respectively. A 1-D position sensitive Parallel Grid Avalanche Counter detector (PGAC) was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exists for equilibrium charge state distributions of heavy ions with 19 ≲ Zp, Zt ≲ 54 (Zp and Zt are the projectile's and target's atomic numbers, respectively). Hence the success of semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in the energy region of interest. A number of semi-empirical models from the literature were evaluated in this study regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width and the distribution shape, to come up with a more reliable model. We discuss this new "combinatorial" prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
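Semi-empirical CSD prescriptions are typically compared to data by assuming a simple distribution shape around a model mean charge state. A minimal sketch, with placeholder mean and width values rather than any specific published prescription:

```python
import numpy as np

def gaussian_csd(z_projectile, qbar, d):
    """Equilibrium charge-state fractions as a discretized Gaussian.

    qbar, d -- mean charge state and width supplied by a semi-empirical model.
    """
    q = np.arange(z_projectile + 1)                # possible charge states 0..Z
    f = np.exp(-((q - qbar) ** 2) / (2 * d ** 2))  # Gaussian shape
    return q, f / f.sum()                          # normalized fractions

# e.g. a Ni beam (Z=28) at ~2 MeV/u; qbar and d are assumed values
q, f = gaussian_csd(28, qbar=17.2, d=1.6)
for qi, fi in zip(q, f):
    if fi > 0.01:
        print(f"q={qi:2d}  fraction={fi:.3f}")
```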
Linking Mechanisms of Work-Family Conflict and Segmentation
ERIC Educational Resources Information Center
Michel, Jesse S.; Hargis, Michael B.
2008-01-01
Despite the abundance of work and family research, few studies have compared the linking mechanisms specified in theoretical models of work-family conflict and segmentation. Accordingly, the current study provides a greater degree of empirical clarity concerning the interplay of work and family by directly examining the indirect effects of…
ERIC Educational Resources Information Center
Sheepway, Lyndal; Lincoln, Michelle; McAllister, Sue
2014-01-01
Background: Speech-language pathology students gain experience and clinical competency through clinical education placements. However, currently little empirical information exists regarding how competency develops. Existing research about the effectiveness of placement types and models in developing competency is generally descriptive and based…
Shen, Kunling; Xiong, Tengbin; Tan, Seng Chuen; Wu, Jiuhong
2016-01-01
Influenza is a common viral respiratory infection that causes epidemics and pandemics in the human population. Oseltamivir is a neuraminidase inhibitor, a new class of antiviral therapy for influenza. Although its efficacy and safety have been established, there is uncertainty regarding whether influenza-like illness (ILI) in children is best managed by oseltamivir at the onset of illness, and its cost-effectiveness in children has not been studied in China. The objective was to evaluate the cost-effectiveness of post rapid influenza diagnostic test (RIDT) treatment with oseltamivir and empiric treatment with oseltamivir, compared with no antiviral therapy, for children with ILI. We developed a decision-analytic model based on previously published evidence to simulate and evaluate 1-year potential clinical and economic outcomes associated with three strategies for managing children presenting with symptoms of influenza. Model inputs were derived from the literature and expert opinion on clinical practice and research in China. Outcome measures included costs and quality-adjusted life years (QALYs). All the interventions were compared with incremental cost-effectiveness ratios (ICERs). In the base case analysis, empiric treatment with oseltamivir consistently produced the greatest gains in QALYs. When compared with no antiviral therapy, the empiric treatment with oseltamivir strategy is very cost-effective, with an ICER of RMB 4,438. When compared with post RIDT treatment with oseltamivir, the empiric treatment with oseltamivir strategy is dominant. Probabilistic sensitivity analysis projected a 100% probability that empiric oseltamivir treatment would be considered a very cost-effective strategy compared to no antiviral therapy, according to the WHO recommendations for cost-effectiveness thresholds. The same was concluded with 99% probability for empiric oseltamivir treatment compared to post RIDT treatment with oseltamivir. In the current Chinese health system setting, our modelling-based simulation analysis suggests empiric treatment with oseltamivir to be a cost-saving and very cost-effective strategy for managing children with ILI.
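The decision rule underlying such comparisons is the incremental cost-effectiveness ratio; a toy sketch with invented costs and QALYs (not the study's inputs):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: cost per QALY gained."""
    dq = qaly_new - qaly_ref
    if dq <= 0 and cost_new >= cost_ref:
        return float("inf")            # dominated: costlier with no QALY gain
    if dq >= 0 and cost_new <= cost_ref:
        return float("-inf")           # dominant: cheaper and at least as good
    return (cost_new - cost_ref) / dq

strategies = {                         # hypothetical (cost RMB, QALY) pairs
    "post-RIDT oseltamivir": (380.0, 0.9415),
    "empiric oseltamivir":   (350.0, 0.9420),
}
ref = (300.0, 0.9400)                  # "no antiviral therapy" reference
for name, (c, q) in strategies.items():
    print(name, "ICER vs no antiviral:", round(icer(c, q, *ref), 1))
```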
The evolution of cooperative breeding in the African cichlid fish, Neolamprologus pulcher.
Wong, Marian; Balshine, Sigal
2011-05-01
The conundrum of why subordinate individuals assist dominants at the expense of their own direct reproduction has received much theoretical and empirical attention over the last 50 years. During this time, birds and mammals have taken centre stage as model vertebrate systems for exploring why helpers help. However, fish have great potential for enhancing our understanding of the generality and adaptiveness of helping behaviour because of the ease with which they can be experimentally manipulated under controlled laboratory and field conditions. In particular, the freshwater African cichlid, Neolamprologus pulcher, has emerged as a promising model species for investigating the evolution of cooperative breeding, with 64 papers published on this species over the past 27 years. Here we clarify current knowledge pertaining to the costs and benefits of helping in N. pulcher by critically assessing the existing empirical evidence. We then provide a comprehensive examination of the evidence pertaining to four key hypotheses for why helpers might help: (1) kin selection; (2) pay-to-stay; (3) signals of prestige; and (4) group augmentation. For each hypothesis, we outline the underlying theory, address the appropriateness of N. pulcher as a model species and describe the key predictions and associated empirical tests. For N. pulcher, we demonstrate that the kin selection and group augmentation hypotheses have received partial support. One of the key predictions of the pay-to-stay hypothesis has failed to receive any support despite numerous laboratory and field studies; thus as it stands, the evidence for this hypothesis is weak. There have been no empirical investigations addressing the key predictions of the signals of prestige hypothesis. By outlining the key predictions of the various hypotheses, and highlighting how many of these remain to be tested explicitly, our review can be regarded as a roadmap in which potential paths for future empirical research into the evolution of cooperative breeding are proposed. Overall, we clarify what is currently known about cooperative breeding in N. pulcher, address discrepancies among studies, caution against incorrect inferences that have been drawn over the years and suggest promising avenues for future research in fishes and other taxonomic groups.
An empirical model of the tidal currents in the Gulf of the Farallones
Steger, J.M.; Collins, C.A.; Schwing, F.B.; Noble, M.; Garfield, N.; Steiner, M.T.
1998-01-01
Candela et al. (1990, 1992) showed that tides in an open ocean region can be resolved using velocity data from a ship-mounted ADCP. We use their method to build a spatially varying model of the tidal currents in the Gulf of the Farallones, an area of complicated bathymetry where the tidal velocities in some parts of the region are weak compared to the mean currents. We describe the tidal fields for the M2, S2, K1, and O1 constituents and show that this method is sensitive to the model parameters and the quantity of input data. In areas with complex bathymetry and tidal structures, a large amount of spatial data is needed to resolve the tides. A method of estimating the associated errors inherent in the model is described.
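The harmonic-analysis core of such a fit can be sketched as an ordinary least-squares problem with the known constituent frequencies; the velocity series below is synthetic, and the constituent periods are the standard M2/S2/K1/O1 values.

```python
import numpy as np

periods_h = {"M2": 12.4206, "S2": 12.0, "K1": 23.9345, "O1": 25.8193}
omega = {k: 2 * np.pi / p for k, p in periods_h.items()}  # rad/hour

t = np.arange(0, 30 * 24, 0.5)                    # 30 days, half-hour steps [h]
u = (0.1 + 0.25 * np.cos(omega["M2"] * t - 1.0)   # synthetic M2-dominated current
     + 0.05 * np.random.randn(t.size))            # noise

# design matrix: mean plus cos/sin pair per constituent
cols = [np.ones_like(t)]
for w in omega.values():
    cols += [np.cos(w * t), np.sin(w * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

for i, name in enumerate(omega):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{name}: amplitude {np.hypot(a, b):.3f} m/s")  # M2 should recover ~0.25
```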
Jet Aeroacoustics: Noise Generation Mechanism and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher
1998-01-01
This report covers the third-year research effort of the project. The research work focused on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting from the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model. Thus the theory is self-contained. Extensive comparisons between the computed noise spectra of the theory and experimental measurements have been carried out. The parameters include jet Mach number from 0.3 to 2.0 and temperature ratio from 1.0 to 4.8. Excellent agreement is found in spectrum shape, noise intensity and directivity. It is envisaged that the theory will supersede all semi-empirical and totally empirical jet noise prediction methods in current use.
D'Elia, Jesse; Haig, Susan M.; Johnson, Matthew J.; Marcot, Bruce G.; Young, Richard
2015-01-01
Ecological niche models can be a useful tool to identify candidate reintroduction sites for endangered species but have been infrequently used for this purpose. In this paper, we (1) develop activity-specific ecological niche models (nesting, roosting, and feeding) for the critically endangered California condor (Gymnogyps californianus) to aid in reintroduction planning in California, Oregon, and Washington, USA, (2) test the accuracy of these models using empirical data withheld from model development, and (3) integrate model results with information on condor movement ecology and biology to produce predictive maps of reintroduction site suitability. Our approach, which disentangles niche models into activity-specific components, has applications for other species where it is routinely assumed (often incorrectly) that individuals fulfill all requirements for life within a single environmental space. Ecological niche models conformed to our understanding of California condor ecology, had good predictive performance when tested with data withheld from model development, and aided in the identification of several candidate reintroduction areas outside of the current distribution of the species. Our results suggest there are large unoccupied regions of the California condor’s historical range that have retained ecological features similar to currently occupied habitats, and thus could be considered for future reintroduction efforts. Combining our activity-specific ENMs with ground reconnaissance and information on other threat factors that could not be directly incorporated into empirical ENMs will ultimately improve our ability to select successful reintroduction sites for the California condor.
Non-suicidal self-injury in eating disordered patients: a test of a conceptual model.
Muehlenkamp, Jennifer J; Claes, Laurence; Smits, Dirk; Peat, Christine M; Vandereycken, Walter
2011-06-30
A theoretical model explaining the high co-occurrence of non-suicidal self-injury (NSSI) in eating disordered populations as resulting from childhood traumatic experiences, low self-esteem, psychopathology, dissociation, and body dissatisfaction was previously proposed but not empirically tested. The current study empirically evaluated the fit of this proposed model within a sample of 422 young adult females (mean age=21.60; S.D.=6.27) consecutively admitted to an inpatient treatment unit for eating disorders. Participants completed a packet of questionnaires within a week of admission. Structural equation modeling procedures showed the model provided a good fit to the data, accounting for 15% of the variance in NSSI. Childhood trauma appears to have an indirect relationship to NSSI that is likely to be expressed via relationships to low self-esteem, psychopathology, body dissatisfaction, and dissociation. It appears that dissociation and body dissatisfaction may be particularly salient factors to consider in both understanding and treating NSSI within an eating disordered population.
Advanced solar irradiances applied to satellite and ionospheric operational systems
NASA Astrophysics Data System (ADS)
Tobiska, W. Kent; Schunk, Robert; Eccles, Vince; Bouwer, Dave
Satellite and ionospheric operational systems require solar irradiances in a variety of time scales and spectral formats. We describe the development of a system using operational grade solar irradiances that are applied to empirical thermospheric density models and physics-based ionospheric models used by operational systems that require a space weather characterization. The SOLAR2000 (S2K) and SOLARFLARE (SFLR) models developed by Space Environment Technologies (SET) provide solar irradiances from the soft X-rays (XUV) through the Far Ultraviolet (FUV) spectrum. The irradiances are provided as integrated indices for the JB2006 empirical atmosphere density models and as line/band spectral irradiances for the physics-based Ionosphere Forecast Model (IFM) developed by the Space Environment Corporation (SEC). We describe the integration of these irradiances in historical, current epoch, and forecast modes through the Communication Alert and Prediction System (CAPS). CAPS provides real-time and forecast HF radio availability for global and regional users and global total electron content (TEC) conditions.
Models of Solar Wind Structures and Their Interaction with the Earth's Space Environment
NASA Astrophysics Data System (ADS)
Watermann, J.; Wintoft, P.; Sanahuja, B.; Saiz, E.; Poedts, S.; Palmroth, M.; Milillo, A.; Metallinou, F.-A.; Jacobs, C.; Ganushkina, N. Y.; Daglis, I. A.; Cid, C.; Cerrato, Y.; Balasis, G.; Aylward, A. D.; Aran, A.
2009-11-01
The discipline of “Space Weather” is built on the scientific foundation of solar-terrestrial physics but with a strong orientation toward applied research. Models describing the solar-terrestrial environment are therefore at the heart of this discipline, for both physical understanding of the processes involved and establishing predictive capabilities of the consequences of these processes. Depending on the requirements, purely physical models, semi-empirical or empirical models are considered to be the most appropriate. This review focuses on the interaction of solar wind disturbances with geospace. We cover interplanetary space, the Earth’s magnetosphere (with the exception of radiation belt physics), the ionosphere (with the exception of radio science), the neutral atmosphere and the ground (via electromagnetic induction fields). Space weather relevant state-of-the-art physical and semi-empirical models of the various regions are reviewed. They include models for interplanetary space, its quiet state and the evolution of recurrent and transient solar perturbations (corotating interaction regions, coronal mass ejections, their interplanetary remnants, and solar energetic particle fluxes). Models of coupled large-scale solar wind-magnetosphere-ionosphere processes (global magnetohydrodynamic descriptions) and of inner magnetosphere processes (ring current dynamics) are discussed. Achievements in modeling the coupling between magnetospheric processes and the neutral and ionized upper and middle atmospheres are described. Finally we mention efforts to compile comprehensive and flexible models from selections of existing modules applicable to particular regions and conditions in interplanetary space and geospace.
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and description of current practice to assess seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Relevant importance is given on the validation and verification of the models by comparing synthetic results with observed data and empirical models.
Empirically evaluating decision-analytic models.
Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J
2010-08-01
Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
A Semi-empirical Model of the Stratosphere in the Climate System
NASA Astrophysics Data System (ADS)
Sodergren, A. H.; Bodeker, G. E.; Kremser, S.; Meinshausen, M.; McDonald, A.
2014-12-01
Chemistry climate models (CCMs) currently used to project changes in Antarctic ozone are extremely computationally demanding. CCM projections are uncertain due to lack of knowledge of future emissions of greenhouse gases (GHGs) and ozone depleting substances (ODSs), as well as parameterizations within the CCMs that have weakly constrained tuning parameters. While projections should be based on an ensemble of simulations, this is not currently possible due to the complexity of the CCMs. An inexpensive but realistic approach to simulate changes in stratospheric ozone, and its coupling to the climate system, is needed as a complement to CCMs. A simple climate model (SCM) can be used as a fast emulator of complex atmospheric-ocean climate models. If such an SCM includes a representation of stratospheric ozone, the evolution of the global ozone layer can be simulated for a wide range of GHG and ODS emissions scenarios. MAGICC is an SCM used in previous IPCC reports. In the current version of the MAGICC SCM, stratospheric ozone changes depend only on equivalent effective stratospheric chlorine (EESC). In this work, MAGICC is extended to include an interactive stratospheric ozone layer using a semi-empirical model of ozone responses to CO2 and EESC, with changes in ozone affecting the radiative forcing in the SCM. To demonstrate the ability of our new, extended SCM to generate projections of global changes in ozone, tuning parameters from 19 coupled atmosphere-ocean general circulation models (AOGCMs) and 10 carbon cycle models (to create an ensemble of 190 simulations) have been used to generate probability density functions of the dates of return of stratospheric column ozone to 1960 and 1980 levels for different latitudes.
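A minimal sketch of a semi-empirical ozone response of the kind described: a column-ozone anomaly regressed on EESC and CO2, then used to diagnose a return date. All coefficients and pathways below are invented placeholders, not the calibrated MAGICC extension.

```python
def ozone_anomaly_du(eesc_ppt, co2_ppm, eesc_ref=1200.0, co2_ref=330.0):
    k_eesc = -0.030  # DU per ppt EESC above reference (assumed)
    k_co2 = +0.050   # DU per ppm CO2 above reference (assumed)
    return k_eesc * (eesc_ppt - eesc_ref) + k_co2 * (co2_ppm - co2_ref)

# return-to-reference diagnostic: first year the anomaly recovers to zero
years = range(2000, 2101)
eesc = [3000 - 15 * (y - 2000) for y in years]  # toy declining EESC path [ppt]
co2 = [370 + 2 * (y - 2000) for y in years]     # toy rising CO2 path [ppm]
recovered = next(y for y, e, c in zip(years, eesc, co2)
                 if ozone_anomaly_du(e, c) >= 0.0)
print("return year:", recovered)
```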
Assessing the bioaccumulation potential of ionizable organic ...
The objective of the present study is to review current knowledge regarding the bioaccumulation potential of IOCs, with a focus on the availability of empirical data for fish. Aspects of the bioaccumulation potential of IOCs in fish that can be characterized relatively well include the pH-dependence of gill uptake and elimination, uptake in the gut, and sorption to phospholipids (membrane-water partitioning). Key challenges include the lack of empirical data for biotransformation and binding in plasma. Fish possess a diverse array of proteins which may transport IOCs across cell membranes. Except in a few cases, however, the significance of this transport for uptake and accumulation of environmental contaminants is unknown. Two case studies are presented. The first describes modeled effects of pH and biotransformation on bioconcentration of organic acids and bases, while the second employs an updated model to investigate factors responsible for accumulation of perfluoroalkylated acids (PFAA). The PFAA case study is notable insofar as it illustrates the likely importance of membrane transporters in the kidney and highlights the potential value of read across approaches. Recognizing the current need to perform bioaccumulation hazard assessments and ecological and exposure risk assessment for IOCs, we provide a tiered strategy that progresses (as needed) from conservative assumptions (models and associated data) to more sophisticated models requiring chemical-speci
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 Mw 7.1 Kumamoto, Japan, earthquake and an Mw 5.3 aftershock of the 2016 Mw 7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
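For reference, the generalized Brune-type source spectrum that underlies such fits, where setting n = 2 recovers the ω-square model mentioned above:

```latex
% Omega_0: long-period spectral plateau, f_c: corner frequency,
% n: high-frequency falloff rate.
\dot{u}(f) \;=\; \frac{\Omega_0}{1 + \left(f/f_c\right)^{n}}
```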
Do People Use the Shortest Path? An Empirical Test of Wardrop’s First Principle
Zhu, Shanjiang; Levinson, David
2015-01-01
Most recent route choice models, following either the random utility maximization or rule-based paradigm, require explicit enumeration of feasible routes. The quality of model estimation and prediction is sensitive to the appropriateness of the consideration set. However, few empirical studies of revealed route characteristics have been reported in the literature. This study evaluates the widely applied shortest path assumption by evaluating routes followed by residents of the Minneapolis—St. Paul metropolitan area. Accurate Global Positioning System (GPS) and Geographic Information System (GIS) data were employed to reveal routes people used over an eight- to thirteen-week period. Most people did not choose the shortest path. Using three weeks of that data, we find that current route choice set generation algorithms do not reveal the majority of paths that individuals took. Findings from this study may guide future efforts in building better route choice models. PMID:26267756
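A minimal sketch of the shortest-path test on a toy network (hypothetical nodes and travel times; the study itself used GPS traces on the real GIS network):

```python
# Compare a GPS-observed route against the network shortest path and
# measure their edge overlap.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("home", "a", 4.0), ("home", "b", 5.0), ("a", "work", 6.0),
    ("b", "work", 4.0), ("a", "b", 2.0),
], weight="travel_time")

observed = ["home", "a", "work"]  # route revealed by GPS traces
shortest = nx.shortest_path(G, "home", "work", weight="travel_time")

obs_edges = set(zip(observed, observed[1:]))
sp_edges = set(zip(shortest, shortest[1:]))
overlap = len(obs_edges & sp_edges) / len(obs_edges)
print(shortest, f"overlap = {overlap:.0%}")  # here the traveler deviates
```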
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
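For contrast with the auxiliary-parameter method, here is the kind of conventional edge-toggle Metropolis sampler the paper improves upon, for a small ERGM with edge and triangle statistics (parameter values are illustrative):

```python
import math, random

N = 20                                  # number of nodes
theta_edge, theta_tri = -2.0, 0.2       # illustrative ERGM parameters
adj = [[0] * N for _ in range(N)]

def delta_stats(i, j):
    """Change in (edges, triangles) if edge (i, j) is added."""
    tri = sum(adj[i][k] and adj[j][k] for k in range(N) if k not in (i, j))
    return 1, tri

random.seed(1)
for _ in range(50_000):
    i, j = random.sample(range(N), 2)
    d_edge, d_tri = delta_stats(i, j)
    sign = 1 if adj[i][j] == 0 else -1  # toggling off negates the change
    log_ratio = sign * (theta_edge * d_edge + theta_tri * d_tri)
    if random.random() < math.exp(min(0.0, log_ratio)):
        adj[i][j] = adj[j][i] = 1 - adj[i][j]

print("sampled edge count:", sum(map(sum, adj)) // 2)
```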
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports) that largely result from an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
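A toy accumulator race of the kind the paradigm implies (drift rates stand in for lexical activations; all values hypothetical):

```python
import random

def race(drifts, threshold=1.0, noise=0.1, dt=0.01):
    """First accumulator to reach threshold wins; returns (index, latency)."""
    x = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, v in enumerate(drifts):
            x[i] += v * dt + noise * random.gauss(0.0, dt ** 0.5)
            if x[i] >= threshold:
                return i, t

random.seed(0)
winner, rt = race([0.9, 0.6, 0.3])  # target word vs. two competitors
print(f"selected candidate {winner} at t = {rt:.2f}")
```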
De Vries, Rowen J; Marsh, Steven
2015-11-08
Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997 ± 0.022 to 1σ of the predicted values of electron backscatter. The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.
Current Results and Proposed Activities in Microgravity Fluid Dynamics
NASA Technical Reports Server (NTRS)
Polezhaev, V. I.
1996-01-01
The Institute for Problems in Mechanics' laboratory for mathematical and physical modelling in fluid mechanics develops models, methods, and software for analysis of fluid flow, instability analysis, direct numerical modelling and semi-empirical models of turbulence, as well as experimental research and verification of these models and their applications in technological fluid dynamics, microgravity fluid mechanics, geophysics, and a number of engineering problems. This paper presents an overview of the results in microgravity fluid dynamics research during the last two years. Nonlinear problems of weakly compressible and compressible fluid flows are discussed.
Ethics in Neuroscience Graduate Training Programs: Views and Models from Canada
ERIC Educational Resources Information Center
Lombera, Sofia; Fine, Alan; Grunau, Ruth E.; Illes, Judy
2010-01-01
Consideration of the ethical, social, and policy implications of research has become increasingly important to scientists and scholars whose work focuses on brain and mind, but limited empirical data exist on the education in ethics available to them. We examined the current landscape of ethics training in neuroscience programs, beginning with the…
ERIC Educational Resources Information Center
De Clute, Shannon M.
2012-01-01
The purpose of this study was to test the applicability of working alliance theory (Bordin, 1979; Castonguay, Constantino, & Grosse Holtforth, 2006) and interpersonal influence theory (Strong, 1968) as ways to articulate an empirically informed model of student-teacher relationships in order to extend the current body of knowledge on effective…
Placing Parent Education in Conceptual and Empirical Context.
ERIC Educational Resources Information Center
Dunst, Carl J.
1999-01-01
This response to Mahoney et al. (EC 623 392), although agreeing that parent education needs to be reemphasized, disagrees with the reasons offered for why parent education is not a more explicit focus of current early-intervention efforts. Alternative approaches, such as family-centered practices and family support, are described. A model that…
ERIC Educational Resources Information Center
Bradshaw, Catherine P.; Mitchell, Mary M.; O'Brennan, Lindsey M.; Leaf, Philip J.
2010-01-01
Although there is increasing awareness of the overrepresentation of ethnic minority students--particularly Black students--in disciplinary actions, the extant research has rarely empirically examined potential factors that may contribute to these disparities. The current study used a multilevel modeling approach to examine factors at the child…
ERIC Educational Resources Information Center
Eteokleous, Nikleia; Pavlou, Victoria; Tsolakidis, Simos
2015-01-01
As a way to respond to the contemporary challenges for promoting multiliteracies and multimodality in education, the current study proposes a theoretical framework--the multiliteracies model--in identifying, developing and evaluating multimodal material. The article examines, first theoretically and then empirically, the promotion of…
Mindfulness and Behavioral Parent Training: Commentary
ERIC Educational Resources Information Center
Eyberg, Sheila M.; Graham-Pole, John R.
2005-01-01
We review the description of mindfulness-based parent training (MBPT) and the argument that mindfulness practice offers a way to bring behavioral parent training (BPT) in line with current empirical knowledge. The strength of the proposed MBPT model is the attention it draws to process issues in BPT. We suggest, however, that it may not be…
The Social Stress Model of Substance Abuse among Childbearing-Age Women: A Review of the Literature.
ERIC Educational Resources Information Center
Lindenberg, Cathy Strachan; And Others
1994-01-01
This article synthesizes current empirical evidence for the interaction between stress level, stress modification, and drug abuse. The authors analyze 13 research studies of women; and they profile consistencies and inconsistencies in the findings, provide critiques of key methodological issues, and examine implications for future research,…
ERIC Educational Resources Information Center
Mattern, Krista D.; Marini, Jessica P.; Shaw, Emily J.
2015-01-01
Throughout the college retention literature, there is a recurring theme that students leave college for a variety of reasons making retention a difficult phenomenon to model. In the current study, cluster analysis techniques were employed to investigate whether multiple empirically based profiles of nonreturning students existed to more fully…
Immigrant Youth in Canadian Health Promoting Schools: A Literature Review
ERIC Educational Resources Information Center
Nyika, Lawrence; McPherson, Charmaine; Murray-Orr, Anne
2017-01-01
In this essay, we review empirical, theoretical, and substantial grey literature in relation to immigrant youth and health promoting schools (HPS). We examine the health promotion concept to consider how it may inform the HPS model. Using Canada as an example, we examine current immigrant youth demographics and define several key terms including…
Use of model-predicted “transference ratios” is currently under consideration by the US EPA in the formulation of a Secondary National Ambient Air Quality Standard for oxidized nitrogen and oxidized sulfur. This term is an empirical parameter defined for oxidized sulfur (TS) as th...
How to Promote Innovative Behavior at Work? The Role of Justice and Support within Organizations
ERIC Educational Resources Information Center
Young, Linn D.
2012-01-01
To provide a more developed research model of innovation in organizations, we reconsidered current thinking about the effects of organizational justice on innovative behavior at work. We investigated the mediating role of perceived organizational support (POS) between the two constructs. As hypothesized, empirical results showed that justice…
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
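The equation at the heart of FPK, as named in the abstract; φ(x, t) is the trait density and V(x) the macroevolutionary landscape (for this diffusion the stationary density is proportional to exp(−2V(x)/σ²)):

```latex
\frac{\partial \phi}{\partial t}
  \;=\; \frac{\partial}{\partial x}\!\left[ \frac{dV}{dx}\,\phi \right]
  \;+\; \frac{\sigma^{2}}{2}\,\frac{\partial^{2} \phi}{\partial x^{2}}
```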
An analytical model of iceberg drift
NASA Astrophysics Data System (ADS)
Eisenman, I.; Wagner, T. J. W.; Dell, R.
2017-12-01
Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
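The rule-of-thumb and its small-iceberg limit can be summarized as follows (the precise expression for γ in the derivation also involves the iceberg, ocean, and air densities; the form below, with bulk drag coefficients Ca and Cw, is a simplified rendering that yields the observed magnitude):

```latex
\vec{v}_{\mathrm{ib}} \;\approx\; \vec{v}_{\mathrm{ocean}} + \gamma\,\vec{v}_{\mathrm{wind}},
\qquad
\gamma \;\sim\; \sqrt{\frac{\rho_{a} C_{a}}{\rho_{w} C_{w}}} \;\approx\; 0.02
```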
New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.
Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C
2016-01-15
Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. Copyright © 2015 Ihekwaba et al.
Carreón, Gustavo; Gershenson, Carlos; Pineda, Luis A
2017-01-01
The equal headway instability-the fact that a configuration with regular time intervals between vehicles tends to be volatile-is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system's data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger's inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems.
Snelson, Catherine M.; Abbott, Robert E.; Broome, Scott T.; ...
2013-07-02
A series of chemical explosions, called the Source Physics Experiments (SPE), is being conducted under the auspices of the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to develop a new, more physics-based paradigm for nuclear test monitoring. Currently, monitoring relies on semi-empirical models to discriminate explosions from earthquakes and to estimate key parameters such as yield. While these models have been highly successful monitoring established test sites, there is concern that future tests could occur in media and at scale depths of burial outside of our empirical experience. This is highlighted by North Korean tests, which exhibit poor performance of a reliable discriminant, mb:Ms (Selby et al., 2012), possibly due to source emplacement and differences in seismic responses for nascent and established test sites. The goal of SPE is to replace these semi-empirical relationships with numerical techniques grounded in a physical basis and thus applicable to any geologic setting or depth.
Cultural Accommodation of Substance Abuse Treatment for Latino Adolescents
Burrow-Sanchez, Jason; Martinez, Charles; Hops, Hyman; Wrona, Megan
2011-01-01
Collaborating with community stakeholders is an often suggested step when integrating cultural variables into psychological treatments for members of ethnic minority groups. However, there is a dearth of literature describing how to accomplish this process within the context of substance abuse treatment studies. This paper describes a qualitative study conducted through a series of focus groups with stakeholders in the Latino community. Data from focus groups were used by researchers to guide the integration of cultural variables into an empirically-supported substance abuse treatment for Latino adolescents currently being evaluated for efficacy. A model for culturally accommodating empirically-supported treatments for ethnic minority participants is also described. PMID:21888499
The Syllogism of Neuro-Economics.
Padoa-Schioppa, Camillo
2008-01-01
If Neuroscience is to contribute to Economics, it will do so by the way of Psychology. Neural data can and do lead to better psychological theories, and psychological insights can and do lead to better economic models. Hence, Neuroscience can in principle contribute to Economics. Whether it actually will do so is an empirical question and the jury is still out. Economics currently faces theoretical and empirical challenges analogous to those faced by Physics at the turn of the 20th century and ultimately addressed by quantum theory. If "quantum Economics" emerges in the coming decades, it may well be founded on such concepts as cognitive processes and brain activity.
Empirical modeling of an alcohol expectancy memory network using multidimensional scaling.
Rather, B C; Goldman, M S; Roehrich, L; Brannick, M
1992-02-01
Risk-related antecedent variables can be linked to later alcohol consumption by memory processes, and alcohol expectancies may be one relevant memory content. To advance research in this area, it would be useful to apply current memory models such as semantic network theory to explain drinking decision processes. We used multidimensional scaling (MDS) to empirically model a preliminary alcohol expectancy semantic network, from which a theoretical account of drinking decision making was generated. Subanalyses (PREFMAP) showed how individuals with differing alcohol consumption histories may have had different association pathways within the expectancy network. These pathways may have, in turn, influenced future drinking levels and behaviors while the person was under the influence of alcohol. All individuals associated positive/prosocial effects with drinking, but heavier drinkers indicated arousing effects as their highest probability associates, whereas light drinkers expected sedation. An important early step in this MDS modeling process is the determination of iso-meaning expectancy adjective groups, which correspond to theoretical network nodes.
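A sketch of the MDS step on a hypothetical dissimilarity matrix for a handful of expectancy adjectives (the study's stimuli and distances are not reproduced here):

```python
import numpy as np
from sklearn.manifold import MDS

words = ["happy", "talkative", "aroused", "sleepy", "dizzy"]
# Symmetric dissimilarities (0 = identical meaning); illustrative only.
D = np.array([
    [0.0, 0.3, 0.5, 0.9, 0.8],
    [0.3, 0.0, 0.4, 0.8, 0.7],
    [0.5, 0.4, 0.0, 0.9, 0.6],
    [0.9, 0.8, 0.9, 0.0, 0.4],
    [0.8, 0.7, 0.6, 0.4, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for w, (x, y) in zip(words, coords):
    print(f"{w:10s} ({x:+.2f}, {y:+.2f})")  # nearby words ~ associated nodes
```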
A Critique of Sociocultural Values in PBIS.
Wilson, Alyssa N
2015-05-01
Horner and Sugai provide lessons learned from their work with disseminating the Positive Behavioral Interventions and Support (PBIS) model. While PBIS represents an empirical school-wide approach for maladaptive student behaviors, the model appears to have limitations regarding sociocultural values and behavioral data collection practices. The current paper provides an overview of three identified areas for improvement and outlines how administrators using PBIS can incorporate acceptance and mindfulness-based intervention procedures to address the discussed limitations.
Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming
2015-06-01
Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the Typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important for leading to significant fluctuation of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at a lead time of 0- (end of April), 1-, and 2-month, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) the tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow enhanced Philippine Sea anticyclonic anomalies and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increasing Taiwan Meiyu rainfall. Note that the identified precursors at various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability with a significant cross-validated temporal correlation coefficient skill of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead time, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.
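In outline, such a physical-empirical model is a regression of rainfall on the precursor tendencies, scored by cross-validation; a schematic sketch with stand-in data (the paper's predictor time series are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((34, 3))   # three precursor tendencies, 1979-2012
y = X @ np.array([0.8, 0.5, 0.4]) + 0.5 * rng.standard_normal(34)

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
skill = np.corrcoef(y, pred)[0, 1]
print(f"cross-validated correlation skill: {skill:.2f}")
```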
NASA Astrophysics Data System (ADS)
Naif, Samer
2018-01-01
Electrical conductivity soundings provide important constraints on the thermal and hydration state of the mantle. Recent seafloor magnetotelluric surveys have imaged the electrical conductivity structure of the oceanic upper mantle over a variety of plate ages. All regions show high conductivity (0.02 to 0.2 S/m) at 50 to 150 km depths that cannot be explained with a sub-solidus dry mantle regime without unrealistic temperature gradients. Instead, the conductivity observations require either a small amount of water stored in nominally anhydrous minerals or the presence of interconnected partial melts. This ambiguity leads to dramatically different interpretations on the origin of the asthenosphere. Here, I apply the damp peridotite solidus together with plate cooling models to determine the amount of H2O needed to induce dehydration melting as a function of depth and plate age. Then, I use the temperature and water content estimates to calculate the electrical conductivity of the oceanic mantle with a two-phase mixture of olivine and pyroxene from several competing empirical conductivity models. This represents the maximum potential conductivity of sub-solidus oceanic mantle at the limit of hydration. The results show that partial melt is required to explain the subset of the high conductivity observations beneath young seafloor, irrespective of which empirical model is applied. In contrast, the end-member empirical models predict either nearly dry (<20 wt ppm H2O) or slightly damp (<200 wt ppm H2O) asthenosphere for observations of mature seafloor. Since the former estimate is too dry compared with geochemical constraints from mid-ocean ridge basalts, this suggests the effect of water on mantle conductivity is less pronounced than currently predicted by the conductive end-member empirical model.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to those of the physical model, and the statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
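For concreteness, the standard Feddes-type transpiration reduction function that these empirical RWU models share (threshold pressure heads h1-h4 are illustrative; uptake at depth z is then S(z) = α(h)·β(z)·Tp, with β(z) the normalized root distribution and Tp the potential transpiration):

```python
def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Piecewise-linear reduction factor; pressure heads h in cm (negative)."""
    if h >= h1 or h <= h4:
        return 0.0                    # too wet or too dry: no uptake
    if h3 <= h <= h2:
        return 1.0                    # optimal range
    if h > h2:
        return (h1 - h) / (h1 - h2)   # wet-side ramp
    return (h - h4) / (h3 - h4)       # dry-side ramp

print(feddes_alpha(-5.0), feddes_alpha(-100.0), feddes_alpha(-5000.0))
```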
NASA Astrophysics Data System (ADS)
Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.
2015-11-01
Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state, with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
Parrott, Andrew C; Downey, Luke A; Roberts, Carl A; Montgomery, Cathy; Bruno, Raimondo; Fox, Helen C
2017-08-01
The purpose of this article is to debate current understandings about the psychobiological effects of recreational 3,4-methylenedioxymethamphetamine (MDMA or 'ecstasy') and recommend theoretically driven topics for future research. Recent empirical findings, especially those from novel topic areas, were reviewed. Potential causes for the high variance often found in group findings were also examined. The first empirical reports into psychobiological and psychiatric aspects from the early 1990s concluded that regular users demonstrated some selective psychobiological deficits, for instance worse declarative memory or heightened depression. More recent research has covered a far wider range of psychobiological functions, and deficits have emerged in aspects of vision, higher cognitive skill, neurohormonal functioning, and foetal developmental outcomes. However, variance levels are often high, indicating that while some recreational users develop problems, others are less affected. Potential reasons for this high variance are debated. An explanatory model based on multi-factorial causation is then proposed. A number of theoretically driven research topics are suggested, in order to empirically investigate the potential causes for these diverse psychobiological deficits. Future neuroimaging studies should study the practical implications of any serotonergic and/or neurohormonal changes, using a wide range of functional measures.
Practical Applications for Earthquake Scenarios Using ShakeMap
NASA Astrophysics Data System (ADS)
Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.
2001-12-01
In planning and coordinating emergency response, utilities, local government, and other organizations are best served by conducting training exercises based on realistic earthquake situations: ones they are most likely to face. Scenario earthquakes can fill this role; they can be generated for any geologically plausible earthquake or for actual historic earthquakes. ShakeMap Web pages now display selected earthquake scenarios (www.trinet.org/shake/archive/scenario/html) and more events will be added as they are requested and produced. We will discuss the methodology and provide practical examples where these scenarios are used directly for risk reduction. Given a selected event, we have developed tools to make it relatively easy to generate a ShakeMap earthquake scenario using the following steps: 1) Assume a particular fault or fault segment will (or did) rupture over a certain length, 2) Determine the magnitude of the earthquake based on assumed rupture dimensions, 3) Estimate the ground shaking at all locations in the chosen area around the fault, and 4) Represent these motions visually by producing ShakeMaps and generating ground motion input for loss estimation modeling (e.g., FEMA's HAZUS). At present, ground motions are estimated using empirical attenuation relationships that give peak ground motions for rock conditions. We then correct the amplitude at each location based on the local site soil (NEHRP) conditions, as we do in the general ShakeMap interpolation scheme. Finiteness is included explicitly, but directivity enters only through the empirical relations. Although current ShakeMap earthquake scenarios are empirically based, substantial improvements in numerical ground motion modeling have been made in recent years. However, loss estimation tools, HAZUS for example, typically require relatively high frequency (3 Hz) input for predicting losses, above the range of frequencies successfully modeled to date. Achieving full-synthetic ground motion estimates that substantially improve over empirical relations at these frequencies will require developing cost-effective numerical tools for proper theoretical inclusion of known complex ground motion effects. Current efforts underway must continue in order to obtain site, basin, and deeper crustal structure, and to characterize and test 3D earth models (including attenuation and nonlinearity). In contrast, longer period synthetics (>2 sec) are currently being generated in a deterministic fashion to include 3D and shallow site effects, an improvement on empirical estimates alone. As progress is made, we will naturally incorporate such advances into the ShakeMap scenario earthquake and processing methodology. Our scenarios are currently used heavily in emergency response planning and loss estimation. Primary users include city, county, state and federal government agencies (e.g., the California Office of Emergency Services, FEMA, the County of Los Angeles) as well as emergency response planners and managers for utilities, businesses, and other large organizations. We have found the scenarios are also of fundamental interest to many in the media and the general community interested in the nature of the ground shaking likely experienced in past earthquakes as well as effects of rupture on known faults in the future.
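The empirical attenuation relationships in step 3 generically take a form like the following (the ci are regression coefficients, M magnitude, R source-to-site distance, and S the site term applied from the NEHRP soil correction; this is a schematic shape, not a specific published relation):

```latex
\ln Y \;=\; c_1 + c_2\,M + c_3 \ln R + c_4\,R + S
```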
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrinec, S.M.; Russell, C.T.
1995-06-01
The shape of the dayside magnetopause has been studied from both a theoretical and an empirical perspective for several decades. Early theoretical studies of the magnetopause shape assumed an inviscid interaction and normal pressure balance along the entire boundary, with the interior magnetic field and magnetopause currents being solved self-consistently and iteratively, using the Biot-Savart Law. The derived shapes are complicated, due to asymmetries caused by the nature of the dipole field and the direction of flow of the solar wind. These models contain a weak field region or cusp through which the solar wind has direct access to the ionosphere. More recent MHD model results have indicated that the closed magnetic field lines of the dayside magnetosphere can be dragged tailward of the terminator plane, so that there is no direct access of the magnetosheath to the ionosphere. Most empirical studies have assumed that the magnetopause can be approximated by a simple conic section with a specified number of coefficients, which are determined by least squares fits to spacecraft crossing positions. Thus most empirical models resemble more the MHD models than the more complex shape of the Biot-Savart models. In this work, the authors examine empirically the effect of the cusp regions on the shape of the dayside magnetopause, and they test the accuracy of these models. They find that during periods of northward IMF, crossings of the magnetopause that are close to one of the cusp regions are observed at distances closer to Earth than crossings in the equatorial plane. This result is consistent with the results of the inviscid Biot-Savart models and suggests that the magnetopause is less viscous than is assumed in many MHD models. 28 refs., 4 figs., 1 tab.
Heuristics for the Hodgkin-Huxley system.
Hoppensteadt, Frank
2013-09-01
Hodgkin and Huxley (HH) discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data. Due to the complexity of this model, a variety of heuristics, including relaxation oscillator circuits and integrate-and-fire models, have been used to investigate activity in neurons, and these simpler models have been successful in suggesting experiments and explaining observations. Connections between most of the simpler models had not been made clear until recently. Shown here are connections between these heuristics and the full HH model. In particular, we study a new model (Type III circuit): It includes the van der Pol-based models; it can be approximated by a simple integrate-and-fire model; and it creates voltages and currents that correspond, respectively, to the h and V components of the HH system. Copyright © 2012 Elsevier Inc. All rights reserved.
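A minimal leaky integrate-and-fire heuristic of the kind discussed (parameters are illustrative, not fitted HH values):

```python
dt, T = 0.1, 100.0                       # time step and duration, ms
tau_m, v_rest, v_th, v_reset = 10.0, -65.0, -50.0, -70.0   # ms, mV
R, I = 10.0, 2.0                         # membrane resistance, drive current

v, spikes = v_rest, []
for step in range(int(T / dt)):
    v += dt * (-(v - v_rest) + R * I) / tau_m   # leaky integration
    if v >= v_th:                               # threshold crossing
        spikes.append(step * dt)
        v = v_reset                             # reset after the "spike"

print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")
```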
Theorizing Land Cover and Land Use Changes: The Case of Tropical Deforestation
NASA Technical Reports Server (NTRS)
Walker, Robert
2004-01-01
This article addresses land-cover and land-use dynamics from the perspective of regional science and economic geography. It first provides an account of the so-called spatially explicit model, which has emerged in recent years as a key empirical approach to the issue. The article uses this discussion as a springboard to evaluate the potential utility of von Thuenen to the discourse on land-cover and land-use change. After identifying shortcomings of current theoretical approaches to land use in mainly urban models, the article filters a discussion of deforestation through the lens of bid-rent and assesses its effectiveness in helping us comprehend the destruction of tropical forest in the Amazon basin. The article considers the adjustments that would have to be made to existing theory to make it more useful to the empirical issues.
Transaction costs and sequential bargaining in transferable discharge permit markets.
Netusil, N R; Braden, J B
2001-03-01
Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
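A toy version of the discrete, bilateral, sequential trading process (marginal costs and the transaction cost are hypothetical; in each random encounter a trade happens only when the cost gap covers the transaction cost):

```python
import random

random.seed(42)
mc = [random.uniform(5.0, 50.0) for _ in range(20)]  # marginal abatement costs
t_cost = 8.0                                         # cost per executed trade

savings, trades = 0.0, 0
for _ in range(500):                                 # sequential random pairings
    hi_cost, lo_cost = random.sample(range(len(mc)), 2)
    gap = mc[hi_cost] - mc[lo_cost]
    if gap > t_cost:
        savings += gap - t_cost                      # net gain from this trade
        trades += 1
        mc[hi_cost] -= 0.5                           # crude rising/falling
        mc[lo_cost] += 0.5                           # marginal-cost adjustment

print(f"{trades} trades executed, net savings = {savings:.1f}")
```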
Computer modelling of cyclic deformation of high-temperature materials. Progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duesbery, M.S.; Louat, N.P.
1992-11-16
Current methods of lifetime assessment leave much to be desired. Typically, the expected life of a full-scale component exposed to a complex environment is based upon empirical interpretations of measurements performed on microscopic samples in controlled laboratory conditions. Extrapolation to the service component is accomplished by scaling laws which, if used at all, are empirical; little or no attention is paid to synergistic interactions between the different components of the real environment. With the increasingly hostile conditions which must be faced in modern aerospace applications, improvement in lifetime estimation is mandated by both cost and safety considerations. This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism.
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters (or closure models) for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, are non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a non-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASA/Sunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals. These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including thermal dispersion and an estimate of tortuosity effects), and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when incorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.
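The closure parameters listed above feed the standard two-temperature (thermal non-equilibrium) energy equations for a porous medium, shown here in their textbook form; φ is porosity, and the h_sf a_sf term exchanges heat between the gas and the matrix:

```latex
\phi\,(\rho c_p)_f \left( \frac{\partial T_f}{\partial t}
    + \vec{u}\cdot\nabla T_f \right)
  = \nabla\cdot\!\left( k_{f,\mathrm{eff}}\,\nabla T_f \right)
    + h_{sf}\,a_{sf}\,(T_s - T_f)

(1-\phi)\,(\rho c)_s\, \frac{\partial T_s}{\partial t}
  = \nabla\cdot\!\left( k_{s,\mathrm{eff}}\,\nabla T_s \right)
    - h_{sf}\,a_{sf}\,(T_s - T_f)
```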
A new UK fission yield evaluation UKFY3.7
NASA Astrophysics Data System (ADS)
Mills, Robert William
2017-09-01
The JEFF neutron-induced and spontaneous fission product yield evaluation is currently unchanged from JEFF-3.1.1, also known by its UK designation UKFY3.6A. It is based upon experimental data combined with empirically fitted mass, charge and isomeric state models which are then adjusted within the experimental and model uncertainties to conform to the physical constraints of the fission process. A new evaluation has been prepared for JEFF, called UKFY3.7, that incorporates new experimental data and replaces the current empirical models (multi-Gaussian fits of the mass distribution and the Wahl Zp model for the charge distribution, combined with parameter extrapolation) with predictions from GEF. The GEF model has the advantage that one set of parameters allows the prediction of many different fissioning nuclides at different excitation energies, unlike previous models where each fissioning nuclide at a specific excitation energy had to be fitted individually to the relevant experimental data. The new UKFY3.7 evaluation, submitted for testing as part of JEFF-3.3, is described alongside initial results of testing. In addition, initial ideas for future developments allowing inclusion of new measurement types and changing from any neutron spectrum type to true neutron energy dependence are discussed. Also, a method is proposed to propagate uncertainties of fission product yields based upon the experimental data that underlies the fission yield evaluation, with the covariance terms determined from the evaluated cumulative and independent yields combined with the experimental uncertainties on the cumulative yield measurements.
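For reference, the multi-Gaussian mass-distribution form that GEF replaces (each fission mode i contributes weight Ni, centroid Ai, and width σi to the yield at mass A):

```latex
Y(A) \;=\; \sum_{i} \frac{N_i}{\sigma_i \sqrt{2\pi}}
  \exp\!\left( -\frac{(A - A_i)^2}{2\sigma_i^2} \right)
```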
A pore-network model for foam formation and propagation in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharabaf, H.; Yortsos, Y.C.
1996-12-31
We present a pore-network model, based on a pores-and-throats representation of the porous medium, to simulate the generation and mobilization of foams in porous media. The model allows for various parameters or processes, empirically treated in current models, to be quantified and interpreted. Contrary to previous works, we also consider a dynamic (invasion) in addition to a static process. We focus on the properties of the displacement, the onset of foam flow and mobilization, the foam texture and the sweep efficiencies obtained. The model simulates an invasion process, in which gas invades a porous medium occupied by a surfactant solution. The controlling parameter is the snap-off probability, which in turn determines the foam quality for various size distributions of pores and throats. For the front to advance, the applied pressure gradient needs to be sufficiently high to displace a series of lamellae along a minimum capillary resistance (threshold) path. We determine this path using a novel algorithm. The fraction of flowing lamellae, Xf, and consequently the fraction of trapped lamellae, Xt, which are currently treated empirically, are also calculated. The model allows the delineation of conditions under which high-quality (strong) or low-quality (weak) foams form. In either case, the sweep efficiencies in displacements in various media are calculated. In particular, the invasion by foam of low permeability layers during injection in a heterogeneous system is demonstrated.
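A toy invasion sketch capturing the two ingredients named above, lowest-entry-pressure invasion and a snap-off probability that sets foam texture (lattice, pressures, and p_snap are all hypothetical):

```python
import heapq, random

random.seed(3)
N, p_snap = 30, 0.4
entry_p = {(i, j): random.random() for i in range(N) for j in range(N)}

# Gas enters from the left column and always invades the accessible
# site with the lowest capillary entry pressure.
frontier = [(entry_p[(i, 0)], (i, 0)) for i in range(N)]
heapq.heapify(frontier)
invaded, lamellae = set(), 0
while frontier:
    p, (i, j) = heapq.heappop(frontier)
    if (i, j) in invaded:
        continue
    invaded.add((i, j))
    if random.random() < p_snap:   # snap-off generates a lamella
        lamellae += 1
    if j == N - 1:                 # breakthrough at the outlet: stop
        break
    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
        if 0 <= ni < N and 0 <= nj < N and (ni, nj) not in invaded:
            heapq.heappush(frontier, (entry_p[(ni, nj)], (ni, nj)))

print(f"sweep = {len(invaded) / N**2:.0%}, lamellae = {lamellae}")
```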
ERIC Educational Resources Information Center
Huang, Wenhao David; Johnson, Tristan E.; Han, Seung-Hyun Caleb
2013-01-01
Colleges and universities have begun to understand the instructional potential of digital game-based learning (DGBL) due to digital games' immersive features. These features, however, might overload learners as excessive motivational and cognitive stimuli, thus impeding intended learning. Current research, however, lacks empirical evidence to…
ERIC Educational Resources Information Center
Truong, Thang Dinh; Hallinger, Philip
2017-01-01
The need for empirical research on leadership in educational organizations across more diverse national settings frames the purpose of the current study of principal leadership in Vietnam. To date, the international literature on school leadership in Vietnam is virtually a blank slate. This research aimed at describing how Vietnamese school…
Mechanisms of Reference Frame Selection in Spatial Term Use: Computational and Empirical Studies
ERIC Educational Resources Information Center
Schultheis, Holger; Carlson, Laura A.
2017-01-01
Previous studies have shown that multiple reference frames are available and compete for selection during the use of spatial terms such as "above." However, the mechanisms that underlie the selection process are poorly understood. In the current paper we present two experiments and a comparison of three computational models of selection…
ERIC Educational Resources Information Center
Collins, Christopher S.; Liu, Min
2014-01-01
The authors investigate whether Greek affiliation and living in Greek housing significantly influence college students' health-related behaviors. In addition, based on the findings, this study provides some important implications about the current practice of Greek society in higher education. The authors empirically tested a path model using…
Moreau, Didier; Artaud, J. F.; Ferron, John R.; ...
2015-05-01
This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0-D scaling laws and 1.5-D ordinary differential equations. A number of open loop simulations were performed, in which the heating and current drive (H&CD) sources were randomly modulated around the typical values of a reference AT discharge on DIII-D. Using these simulated data, a two-time-scale state space model was obtained for the coupled evolution of the poloidal flux profile and βN parameter, and a controller was synthesized based on the near-optimal ARTAEMIS algorithm [D. Moreau et al., Nucl. Fusion 53 (2013) 063020]. The paper discusses the results of closed-loop nonlinear simulations, using this controller for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about ten seconds, despite large disturbances applied to the feedforward powers and plasma parameters. The effectiveness of the control algorithm is thus demonstrated for long pulse and steady state high-βN AT discharges. Its robustness with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.
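The ARTAEMIS algorithm itself is not reproduced here; the toy sketch below only illustrates the generic step of identifying a linear discrete-time response model x[k+1] = A x[k] + B u[k] by least squares from data in which the actuators are randomly modulated, as in the open-loop METIS runs. All matrices and noise levels are made up.

```python
# Least-squares identification of a linear state-space response model from
# randomly modulated actuator data (u plays the role of the H&CD powers).
import numpy as np

rng = np.random.default_rng(1)
nx, nu, N = 3, 2, 500
A_true = np.array([[0.95, 0.02, 0.0], [0.0, 0.90, 0.05], [0.01, 0.0, 0.85]])
B_true = rng.normal(0, 0.1, (nx, nu))

u = rng.normal(0, 1, (N, nu))                 # random actuator modulation
x = np.zeros((N + 1, nx))
for k in range(N):
    x[k + 1] = A_true @ x[k] + B_true @ u[k] + rng.normal(0, 1e-3, nx)

# Stack regressors z[k] = [x[k]; u[k]] and solve x[k+1] = [A B] z[k]
Z = np.hstack([x[:-1], u])                    # shape (N, nx + nu)
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta[:nx].T, Theta[nx:].T
print(np.max(np.abs(A_hat - A_true)))         # small identification error
```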
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
Bryant, Fred B
2016-12-01
This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods, advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data.
Reconceptualizing Native Women's Health: An “Indigenist” Stress-Coping Model
Walters, Karina L.; Simoni, Jane M.
2002-01-01
This commentary presents an “indigenist” model of Native women's health, a stress-coping paradigm that situates Native women's health within the larger context of their status as a colonized people. The model is grounded in empirical evidence that traumas such as the “soul wound” of historical and contemporary discrimination among Native women influence health and mental health outcomes. The preliminary model also incorporates cultural resilience, including identity, enculturation, spiritual coping, and traditional healing practices as moderators. Current epidemiological data on Native women's general health and mental health are reconsidered within the framework of this model.
Modeling specific action potentials in the human atria based on a minimal single-cell model.
Richter, Yvonne; Lind, Pedro G; Maass, Philipp
2018-01-01
We present an effective method to model empirical action potentials of specific patients in the human atria, based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic currents are introduced, each governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width and shape. Possible applications for supporting clinical diagnosis are pointed out.
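A schematic of the fitting step only: the surrogate waveform below is not the Bueno-Orovio minimal model; it merely shows how characteristic time scales can be recovered from an empirical trace by nonlinear optimization.

```python
# Fit the upstroke and repolarization time scales of an AP-like template
# to a noisy "empirical" trace (all shapes and numbers are illustrative).
import numpy as np
from scipy.optimize import curve_fit

def ap_template(t, tau_up, tau_rep, amp):
    return amp * (1 - np.exp(-t / tau_up)) * np.exp(-t / tau_rep)

t = np.linspace(0, 300, 600)                        # time in ms
true = ap_template(t, 1.5, 80.0, 110.0)             # synthetic "patient" AP
data = true + np.random.default_rng(0).normal(0, 1.0, t.size)

popt, _ = curve_fit(ap_template, t, data, p0=(1.0, 50.0, 100.0))
print("fitted time scales (ms):", popt[:2])
```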
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S
2013-01-01
Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for the determination of molecular mixing ratios. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross-band relative absorption (CoBRA). The current validation studies rely upon well-verified and established theoretical and experimental databases, satellite data assimilations, and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature sensitivity uncertainties, which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility to closely match cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using the data from numerical weather prediction (NWP) models.
Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D
Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration - in which models inform empirical research, which in turn informs models (Fig. 1) - is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical-model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical-model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical-model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical-model integration?
2016-06-01
The author performed an empirical study, centered around a survey of United States Marine Corps (USMC) and United States Navy (USN) personnel, on the site customization of existing models, and recommends that more studies be performed to determine the best way forward for additive manufacturing (AM) within the USMC and USN. Subject terms: 3D printing, additive manufacturing.
Mechanism-based modeling of solute strengthening: application to thermal creep in Zr alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos; Wen, Wei; Capolungo, Laurent
2017-08-01
This report focuses on the development of a physics-based thermal creep model aiming to predict the behavior of Zr alloy under reactor accident conditions. The current models used for this kind of simulation are mostly empirical in nature, based generally on fits to the experimental steady-state creep rates under different temperature and stress conditions, an approach with the following limitations. First, reactor accident conditions, such as RIA and LOCA, usually take place over short times and involve only the primary, not the steady-state, creep stage. Moreover, the empirical models cannot cover the conditions from normal operation to accident environments. For example, Kombaiah and Murty [1,2] recently reported a transition between the low (n~4) and high (n~9) power law creep regimes in Zr alloys depending on the applied stress. Capturing such behavior requires an accurate description of the mechanisms involved in the process. Therefore, a mechanism-based model that accounts for the time evolution of the microstructure is more appropriate and reliable for this kind of simulation.
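For concreteness, the kind of empirical steady-state fit the report criticizes is typically a Norton power law; the sketch below uses placeholder constants, not fitted Zr-alloy values.

```python
# Norton power-law creep: eps_dot = A * sigma^n * exp(-Q / (R * T)).
# The stress exponent n shifting from ~4 to ~9 (Kombaiah and Murty) marks
# the regime transition a mechanism-based model must capture.
import numpy as np

R = 8.314              # gas constant, J/(mol K)

def creep_rate(sigma_MPa, T_K, A=1e-6, n=4.0, Q=270e3):
    # A, n, Q are illustrative placeholders, not Zr-alloy constants
    return A * sigma_MPa**n * np.exp(-Q / (R * T_K))

for s in (50, 100, 200):
    print(s, "MPa ->", creep_rate(s, 600.0), "1/s")
```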
Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation
De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan
2017-01-01
In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements.
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimation of the burden of tuberculosis (TB). We aimed to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case with previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.
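A back-of-envelope sketch of the estimation logic under strong simplifying assumptions (all numbers illustrative, not the study's inputs): prevalent smear-positive cases generate the annual risk of infection, and incidence follows from prevalence divided by mean disease duration.

```python
# Simplified link between ARTI, prevalence, and incidence of smear-positive TB.
arti = 0.015          # annual risk of tuberculous infection (1.5%/yr, assumed)
prev = 250e-5         # smear-positive prevalence (250 per 100 000, assumed)
duration = 2.0        # mean infectious duration in years (assumed)

c = arti / prev                      # annual infections per prevalent case
incidence = prev / duration          # smear-positive incidence per person-year
print(f"{c:.0f} infections/case/yr, incidence {incidence * 1e5:.0f} per 100 000")
```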
Cashin, Cheryl; Phuong, Nguyen Khanh; Shain, Ryan; Oanh, Tran Thi Mai; Thuy, Nguyen Thi
2015-01-01
Vietnam is currently considering a revision of its 2008 Health Insurance Law, including the regulation of provider payment methods. This study uses a simple spreadsheet-based micro-simulation model to analyse the potential impacts of different provider payment reform scenarios on resource allocation across health care providers in three provinces in Vietnam, as well as on the total expenditure of the provincial branches of the public health insurance agency (Provincial Social Security [PSS]). The results show that currently more than 50% of PSS spending is concentrated at the provincial level, with less than half at the district level. There is also a high degree of financial risk borne by district hospitals under the current fund-holding arrangement. Results of the simulation model show that several alternative scenarios for provider payment reform could improve the current payment system by reducing the high financial risk currently borne by district hospitals without dramatically shifting the current level and distribution of PSS expenditure. The results of the simulation analysis provided an empirical basis for health policy-makers in Vietnam to assess different provider payment reform options and make decisions about new models to support health system objectives.
Testing a new Free Core Nutation empirical model
NASA Astrophysics Data System (ADS)
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
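A sketch of the sliding-window estimation as we understand it (synthetic data; the 400-day window and day-by-day shift follow the abstract, and the FCN period is held fixed at its approximate retrograde value of about -430 days):

```python
# Fit A*cos(phi) + B*sin(phi), phi = 2*pi*t/P with fixed period P, to a
# celestial pole offset series inside each sliding window; amplitude and
# phase then vary from window to window.
import numpy as np

P = -430.2                                  # fixed FCN period in days (approx.)
t = np.arange(0.0, 7000.0)                  # observation epochs, days
rng = np.random.default_rng(2)
dX = 0.1 * np.cos(2 * np.pi * t / P + 0.3) + rng.normal(0, 0.05, t.size)

window = 400
for t0 in range(0, t.size - window, 1000):  # print every 1000th window
    sel = slice(t0, t0 + window)
    phi = 2 * np.pi * t[sel] / P
    M = np.column_stack([np.cos(phi), np.sin(phi)])
    (a, b), *_ = np.linalg.lstsq(M, dX[sel], rcond=None)
    print(t0, "amplitude:", np.hypot(a, b), "phase:", np.arctan2(-b, a))
```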
Equilibrium charge state distributions of Ni, Co, and Cu beams in molybdenum foil at 2 MeV/u
NASA Astrophysics Data System (ADS)
Gastis, Panagiotis; Perdikakis, George; Robertson, Daniel; Bauder, Will; Skulski, Michael; Collon, Phillipe; Anderson, Tyler; Ostdiek, Karen; Aprahamian, Ani; Lu, Wenting; Almus, Robert
2015-10-01
The charge states of heavy ions are important for the study of nuclear reactions in inverse kinematics when electromagnetic recoil mass spectrometers are used. The passage of recoil products through a material, like the windows of gas cells or charge state boosters, results in a charge state distribution (CSD) at the exit. This distribution must be known for the extraction of any cross section, since only a few charge states can be transmitted through a magnetic separator for a given setting. The calculation of CSDs for heavy ions is challenging: currently we rely on semi-empirical models with unknown accuracy for ion/target combinations in the Z > 20 region. In the present study, the CSDs of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm molybdenum foil were measured. The beam energies were 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u for 60Ni, 59Co, and 63Cu, respectively. The results of this study mainly check the accuracy of the semi-empirical models used by the program LISE++ for calculating CSDs for ion/target combinations of Z > 20. In addition, other empirical models for calculating mean charge states were compared and checked.
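One widely used semi-empirical parameterization of the equilibrium mean charge state is the Nikolaev-Dmitriev formula; a sketch follows, with the caveat that its accuracy in the Z > 20 regime is exactly what such measurements test, so the output should be treated as indicative only.

```python
# Nikolaev-Dmitriev (1968) mean equilibrium charge state:
#   q_mean = Z * [1 + (v / (Z^0.45 * v'))^(-1/k)]^(-k),  v' = 3.6e6 m/s, k = 0.6
import math

def mean_charge_nikolaev_dmitriev(Z, energy_MeV_per_u):
    # non-relativistic ion velocity from energy per nucleon (adequate at 2 MeV/u)
    v = 2.998e8 * math.sqrt(2 * energy_MeV_per_u / 931.494)
    v_prime, k, alpha = 3.6e6, 0.6, 0.45
    return Z * (1 + (v / (Z**alpha * v_prime)) ** (-1 / k)) ** (-k)

for name, Z, E in (("60Ni", 28, 1.84), ("59Co", 27, 2.09), ("63Cu", 29, 2.11)):
    print(name, round(mean_charge_nikolaev_dmitriev(Z, E), 1))
```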
Kessler, Sudha Kilaru; Minhas, Preet; Woods, Adam J.; Rosen, Alyssa; Gorman, Casey; Bikson, Marom
2013-01-01
Transcranial direct current stimulation (tDCS) is being widely investigated in adults as a therapeutic modality for brain disorders involving abnormal cortical excitability or disordered network activity. Interest is also growing in studying tDCS in children. Limited empirical studies in children suggest that tDCS is well tolerated and may have a similar safety profile as in adults. However, in electrotherapy as in pharmacotherapy, dose selection in children requires special attention, and simple extrapolation from adult studies may be inadequate. Critical aspects of dose adjustment include 1) differences in neurophysiology and disease, and 2) variation in brain electric fields for a specified dose due to gross anatomical differences between children and adults. In this study, we used high-resolution MRI-derived finite element modeling simulations of two healthy children, ages 8 years and 12 years, and three healthy adults with varying head size to compare differences in electric field intensity and distribution. Multiple conventional and high-definition tDCS montages were tested. Our results suggest that on average, children will be exposed to higher peak electrical fields for a given applied current intensity than adults, but there is likely to be overlap between adults with smaller head size and children. In addition, exposure is montage specific. Variations in peak electrical fields were seen between the two pediatric models, despite comparable head size, suggesting that the relationship between neuroanatomic factors and bioavailable current dose is not trivial. In conclusion, caution is advised in using higher tDCS doses in children until 1) further modeling studies in a larger group shed light on the range of exposure possible by applied dose and age and 2) further studies correlate bioavailable dose estimates from modeling studies with empirically tested physiologic effects, such as modulation of motor evoked potentials after stimulation.
The Earth's magnetosphere modeling and ISO standard
NASA Astrophysics Data System (ADS)
Alexeev, I.
The empirical model developed by Tsyganenko (T96) is constructed by minimizing the rms deviation from the large magnetospheric database of Fairfield et al. (1994), which contains Earth's magnetospheric magnetic field measurements accumulated during many years. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The latest version of the Tsyganenko model has been constructed for a geomagnetic storm time interval. This version is based on a more accurate and physically consistent approach in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of the solar wind and IMF parameters. The same method has been used previously for paraboloid model construction. This method is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems that are the main contributors to the external magnetospheric magnetic field: magnetopause currents, ring current and tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation…
Paraboloid magnetospheric magnetic field model and the status of the model as an ISO standard
NASA Astrophysics Data System (ADS)
Alexeev, I.
A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The empirical model developed by Tsyganenko (T96) is constructed by minimizing the rms deviation from the large magnetospheric database. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. That is the reason why the paraboloid magnetospheric model is constructed with a more accurate and physically consistent approach, in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of the solar wind and IMF parameters. Such an approach is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems that are the main contributors to the external magnetospheric magnetic field: magnetopause currents, ring current and tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation for each of these large-scale current systems in the magnetosphere with a…
Al-Badriyeh, Daoud; Liew, Danny; Stewart, Kay; Kong, David C M
2009-01-01
A major randomized clinical trial, evaluating voriconazole versus liposomal amphotericin B (LAMB) as empirical therapy in febrile neutropenia, recommended voriconazole as a suitable alternative to LAMB. The current study sought to investigate the health economic impact of using voriconazole and LAMB for febrile neutropenia in Australia. A decision analytic model was constructed to capture the downstream consequences of empirical antifungal therapy with each agent. The main outcomes were: success, breakthrough fungal infection, persistent baseline fungal infection, persistent fever, premature discontinuation and death. Underlying transition probabilities and treatment patterns were derived directly from trial data. Resource use was estimated using an expert panel. Cost inputs were obtained from the latest Australian representative published sources. The perspective adopted was that of the Australian hospital. Uncertainty and sensitivity analyses were undertaken via Monte Carlo simulation. Compared with voriconazole, LAMB was associated with a net cost saving of AU$1422 (2.9%) per patient. A similar trend was observed with the cost per death prevented and per successful treatment. LAMB dominated voriconazole, as it resulted in higher efficacy and lower costs. The results were most sensitive to the duration of therapy and the alternative therapy used after discontinuations. In the uncertainty analysis, LAMB had a 99.8% chance of costing less than voriconazole. In this study, which used the current standard five-component endpoint to assess the impact of empirical antifungal therapy, LAMB was associated with cost savings relative to voriconazole.
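The mechanics of the Monte Carlo uncertainty analysis can be sketched as follows; the cost distributions are invented placeholders, so the printed probability depends entirely on the assumed spreads and will not reproduce the study's 99.8%.

```python
# Probabilistic sensitivity analysis: sample per-patient costs of each
# strategy and count how often LAMB costs less than voriconazole.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
cost_vori = rng.gamma(shape=50, scale=1000, size=n)          # AU$, assumed
cost_lamb = rng.gamma(shape=50, scale=1000, size=n) - 1422   # mean saving from trial
print("P(LAMB cheaper) =", np.mean(cost_lamb < cost_vori))
```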
Development of self-control in children aged 3 to 9 years: Perspective from a dual-systems model
Tao, Ting; Wang, Ligang; Fan, Chunlei; Gao, Wenbin
2014-01-01
The current study tested a set of interrelated theoretical propositions based on a dual-systems model of self-control. Data were collected from 2135 children aged 3 to 9 years. The results suggest that (a) there was positive growth in good self-control, whereas poor control remained relatively stable; and (b) girls performed better than boys on tests of good self-control. The results are discussed in terms of their implications for a dual-systems model of self-control theory and future empirical work.
NASA Astrophysics Data System (ADS)
Vaccaro, S. R.
2011-09-01
The voltage dependence of the ionic and gating currents of a K channel depends on the activation barriers of a voltage sensor, with a potential function that may be derived from the principal electrostatic forces on an S4 segment in an inhomogeneous dielectric medium. By variation of the parameters of a voltage-sensing domain model consistent with x-ray structures and biophysical data, the lowest frequency of the survival probability of each stationary state, derived from a solution of the Smoluchowski equation, provides a good fit to the voltage dependence of the slowest time constant of the ionic current in a depolarized membrane, and the gating current exhibits a rising phase that precedes an exponential relaxation. For each depolarizing potential, the calculated time dependence of the survival probabilities of the closed states of an alpha-helical S4 sensor is in accord with an empirical model of the ionic and gating currents recorded during the activation process.
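A generic illustration (not the paper's sensor geometry or potential): discretizing a 1-D Smoluchowski equation as a detailed-balance hopping matrix and extracting the slowest nonzero relaxation rate, the quantity compared with the slowest ionic-current time constant. A double-well potential stands in for the sensor's activation barrier.

```python
# Discretize dp/dt = D d/dx (dp/dx + U'(x) p / kT) on a grid as a
# nearest-neighbour rate matrix obeying detailed balance, then read off
# the slowest nonzero relaxation rate from its spectrum.
import numpy as np

n, L, D, kT = 400, 4.0, 1.0, 1.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
U = (x**2 - 1.0) ** 2 / kT             # double-well potential in kT units (illustrative)

W = np.zeros((n, n))
for i in range(n - 1):
    # hopping rates between sites i and i+1, detailed balance built in
    W[i + 1, i] = D / h**2 * np.exp(-(U[i + 1] - U[i]) / 2)
    W[i, i + 1] = D / h**2 * np.exp(-(U[i] - U[i + 1]) / 2)
W -= np.diag(W.sum(axis=0))            # columns sum to zero (probability conserved)

rates = np.sort(np.real(np.linalg.eigvals(-W)))
print("slowest relaxation rate:", rates[1])   # rates[0] ~ 0 is the equilibrium mode
```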
The Past, Present and Future of Geodemographic Research in the United States and United Kingdom
Singleton, Alexander D.; Spielman, Seth E.
2014-01-01
This article presents an extensive comparative review of the emergence and application of geodemographics in both the United States and United Kingdom, situating them as an extension of earlier empirically driven models of urban socio-spatial structure. The empirical and theoretical basis for this generalization technique is also considered. Findings demonstrate critical differences in both the application and development of geodemographics between the United States and United Kingdom resulting from their diverging histories, variable data economies, and availability of academic or free classifications. Finally, current methodological research is reviewed, linking this discussion prospectively to the changing spatial data economy in both the United States and United Kingdom.
Increasing the relevance of GCM simulations for Climate Services
NASA Astrophysics Data System (ADS)
Smith, L. A.; Suckling, E.
2012-12-01
The design and interpretation of model simulations for climate services differ significantly from experimental design for the advancement of the fundamental research on predictability that underpins it. Climate services consider the sources of the best information available today; this calls for a frank evaluation of model skill against statistical benchmarks defined by empirical models. The fact that physical simulation models are thought to provide the only reliable method for extrapolating into conditions not previously observed has no bearing on whether or not today's simulation models outperform empirical models. Evidence on the length scales at which today's simulation models fail to outperform empirical benchmarks is presented; it is illustrated that this occurs even on global scales in decadal prediction. At all timescales considered thus far (as of July 2012), predictions based on simulation models are improved by blending with the output of statistical models. Blending is shown to be more interesting in the climate context than in the weather context, where blending with a history-based climatology is straightforward. As GCMs improve and as the Earth's climate moves further from that of the last century, the skill of simulation models and their relevance to climate services are expected to increase. Examples from both seasonal and decadal forecasting will be used to discuss a third approach that may increase the role of current GCMs more quickly. Specifically, aspects of the experimental design in previous hindcast experiments are shown to hinder the use of GCM simulations for climate services. Alternative designs are proposed.
Jet-induced ground effects on a parametric flat-plate model in hover
NASA Technical Reports Server (NTRS)
Wardwell, Douglas A.; Hange, Craig E.; Kuhn, Richard E.; Stewart, Vearl R.
1993-01-01
The jet-induced forces generated on short takeoff and vertical landing (STOVL) aircraft when in close proximity to the ground can have a significant effect on aircraft performance. Therefore, accurate predictions of these aerodynamic characteristics are highly desirable. Empirical procedures for estimating jet-induced forces during the vertical/short takeoff and landing (V/STOL) portions of the flight envelope are currently limited in accuracy. The jet-induced force data presented here add significantly to the current STOVL configuration database. Further development of empirical prediction methods for jet-induced forces, to provide more configuration diversity and improved overall accuracy, depends on the viability of this STOVL database. The database may also be used to validate computational fluid dynamics (CFD) analysis codes. The hover data obtained at the NASA Ames Jet Calibration and Hover Test (JCAHT) facility for a parametric flat-plate model are presented. The model tested was designed to allow variations in planform aspect ratio, number of jets, nozzle shape, and jet location. There were 31 different planform/nozzle configurations tested. Each configuration had numerous pressure taps installed to measure the pressures on the undersurface of the model. All pressure data, along with the balance jet-induced lift and pitching-moment increments, are tabulated. For selected runs, pressure data are presented in the form of contour plots that show lines of constant pressure coefficient on the model undersurface. Nozzle-thrust calibrations and jet flow-pressure survey information are also provided.
NASA Technical Reports Server (NTRS)
Bilitza, D.; Reinisch, B.; Gallagher, D.; Huang, X.; Truhlik, V.; Nsumei, P.
2007-01-01
The goal of this LWS tools effort is the development of a new data-based F-region TOpside and PLAsmasphere (TOPLA) model for the electron density (Ne) and temperature (Te) for inclusion in the International Reference Ionosphere (IRI) model, using newly available satellite data and models for these regions. The IRI model is the de facto international standard for the specification of ionospheric parameters and is currently being considered as an ISO Technical Specification for the ionosphere. Our effort is directed towards improving the topside part of the model and extending it into the plasmasphere. Specifically, we are planning to overcome the following shortcomings of the current IRI topside model: (1) overestimation of densities above 700 km by a factor of 2 and more, (3) unrealistically steep density profiles at high latitudes during very high solar activities, (4) no solar cycle variations and no semi-annual variations for the electron temperature, (5) discontinuities or unphysical gradients when merging with plasmaspheric models. We will report on first accomplishments and on the current status of the project.
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues.
Detection and quantification of flow consistency in business process models.
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara
2018-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
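One possible quantification, purely illustrative and not one of the paper's three metrics: score flow consistency as the largest fraction of sequence-flow arcs pointing in a single axis direction of the layout, given the (x, y) positions of the diagram elements.

```python
# Hypothetical flow-consistency score for a laid-out process model.
def flow_consistency(pos, edges):
    bins = {"right": 0, "left": 0, "down": 0, "up": 0}
    for a, b in edges:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        if abs(dx) >= abs(dy):
            bins["right" if dx >= 0 else "left"] += 1
        else:
            bins["down" if dy >= 0 else "up"] += 1
    return max(bins.values()) / max(1, len(edges))

pos = {"start": (0, 0), "task1": (1, 0), "gw": (2, 0), "task2": (2, 1), "end": (3, 0)}
edges = [("start", "task1"), ("task1", "gw"), ("gw", "task2"),
         ("gw", "end"), ("task2", "end")]
print(flow_consistency(pos, edges))   # 0.8: four of five arcs point rightwards
```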
Revisiting competition in a classic model system using formal links between theory and data.
Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J
2012-09-01
Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
The tell-tale look: viewing time, preferences, and prices.
Gunia, Brian C; Murnighan, J Keith
2015-01-01
Even the simplest choices can prompt decision-makers to balance their preferences against other, more pragmatic considerations like price. Thus, discerning people's preferences from their decisions creates theoretical, empirical, and practical challenges. The current paper addresses these challenges by highlighting some specific circumstances in which the amount of time that people spend examining potential purchase items (i.e., viewing time) can in fact reveal their preferences. Our model builds from the gazing literature, in a purchasing context, to propose that the informational value of viewing time depends on prices. Consistent with the model's predictions, four studies show that when prices are absent or moderate, viewing time provides a signal that is consistent with a person's preferences and purchase intentions. When prices are extreme or consistent with a person's preferences, however, viewing time is a less reliable predictor of either. Thus, our model highlights a price-contingent "viewing bias," shedding theoretical, empirical, and practical light on the psychology of preferences and visual attention, and identifying a readily observable signal of preference.
Samuel, Douglas B.; Widiger, Thomas A.
2008-01-01
Theory and research have suggested that the personality disorders contained within the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) can be understood as maladaptive variants of the personality traits included within the five-factor model (FFM). The current meta-analysis of FFM personality disorder research both replicated and extended the 2004 work of Saulsman and Page (The five-factor model and personality disorder empirical literature: A meta-analytic review. Clinical Psychology Review, 23, 1055-1085) through a facet-level analysis that provides a more specific and nuanced description of each DSM-IV-TR personality disorder. The empirical FFM profiles generated for each personality disorder were generally congruent at the facet level with hypothesized FFM translations of the DSM-IV-TR personality disorders. However, notable exceptions to the hypotheses did occur and even some findings that were consistent with FFM theory could be said to be instrument specific.
Torrence, Nicole D.; John, Samantha E.; Gavett, Brandon E.; O'Bryant, Sid E.
2016-01-01
The original factor structure of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) has received little empirical support, but at least eight alternative factor structures have been identified in the literature. The current study used confirmatory factor analysis to compare the original RBANS model with eight alternatives, which were adjusted to include a general factor. Participant data were obtained from Project FRONTIER, an epidemiological study of rural health, and comprised 341 adults (229 women, 112 men) with a mean age of 61.2 years (SD = 12.1) and mean education of 12.4 years (SD = 3.3). A bifactor version of the model proposed by Duff and colleagues provided the best fit to the data (CFI = 0.98; root-mean-square error of approximation = 0.07), but required further modification to produce appropriate factor loadings. The results support the inclusion of a general factor and provide partial replication of the Duff and colleagues RBANS model.
ERIC Educational Resources Information Center
Weissberg, Roger P., Ed.; Gullotta, Thomas P., Ed.; Hampton, Robert L., Ed.; Ryan, Bruce A., Ed.; Adams, Gerald R., Ed.
Young people are facing greater risks to their current and future health and social development, as shown by involvement of younger and younger children in risk-taking behaviors. This volume emphasizes developmentally and contextually appropriate prevention service delivery models and identifies state-of-the-art, empirically based strategies to…
ERIC Educational Resources Information Center
Brausch, Amy M.; Gutierrez, Peter M.
2009-01-01
There is much empirical literature on factors for adolescent suicide risk, but body image and disordered eating are rarely included in these models. In the current study, disordered eating and body image were examined as risk factors for suicide ideation since these factors are prevalent in adolescence, particularly for females. It was…
Lavender, Jason M.; Wonderlich, Stephen A.; Engel, Scott G.; Gordon, Kathryn H.; Kaye, Walter H.; Mitchell, James E.
2015-01-01
Several existing conceptual models and psychological interventions address or emphasize the role of emotion dysregulation in eating disorders. The current article uses Gratz and Roemer’s (2004) multidimensional model of emotion regulation and dysregulation as a clinically relevant framework to review the extant literature on emotion dysregulation in anorexia nervosa (AN) and bulimia nervosa (BN). Specifically, the dimensions reviewed include: (1) the flexible use of adaptive and situationally appropriate strategies to modulate the duration and/or intensity of emotional responses, (2) the ability to successfully inhibit impulsive behavior and maintain goal-directed behavior in the context of emotional distress, (3) awareness, clarity, and acceptance of emotional states, and (4) the willingness to experience emotional distress in the pursuit of meaningful activities. The current review suggests that both AN and BN are characterized by broad emotion regulation deficits, with difficulties in emotion regulation across the four dimensions found to characterize both AN and BN, although a small number of more specific difficulties may distinguish the two disorders. The review concludes with a discussion of the clinical implications of the findings, as well as a summary of limitations of the existing empirical literature and suggestions for future research.
The spectral basis of optimal error field correction on DIII-D
Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...
2014-04-28
Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real time.
Internet-based system for simulation-based medical planning for cardiovascular disease.
Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A
2003-06-01
Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
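A sketch of the comparison (parameters illustrative; carrying the Black-Scholes drift over to the exponential model is an assumption of this sketch, not a statement of the paper's calibration):

```python
# Price a European call under Black-Scholes and under a model in which
# log-returns at expiry follow an exponential (Laplace) distribution,
# via numerical integration of the discounted payoff.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 100.0, 0.1, 0.3, 0.1

# Black-Scholes call
d1 = (np.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Laplace-distributed log-returns with matching variance (2*b^2 = sigma^2*T)
b = sigma * np.sqrt(T / 2)
mu = (r - sigma**2 / 2) * T                        # drift carried over (assumed)
x = np.linspace(-1.5, 1.5, 20001)                  # log-return grid
pdf = np.exp(-np.abs(x - mu) / b) / (2 * b)
payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
lap = np.exp(-r * T) * np.trapz(payoff * pdf, x)

print(f"Black-Scholes: {bs:.3f}   exponential model: {lap:.3f}")
```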
In-depth porosity control of mesoporous silicon layers by an anodization current adjustment
NASA Astrophysics Data System (ADS)
Lascaud, J.; Defforge, T.; Certon, D.; Valente, D.; Gautier, G.
2017-12-01
The formation of thick mesoporous silicon layers in P+-type substrates leads to an increase in porosity from the surface to the interface with silicon. Adjusting the current density during the electrochemical etching of porous silicon is an intuitive way to control the in-depth porosity of the layer. The duration and the current density of the anodization were varied to empirically model porosity variations with layer thickness and build a database. Current density profiles were then extracted from the model in order to etch layers with controlled in-depth porosity. As a proof of principle, an 80 μm-thick porous silicon multilayer was synthesized with porosities decreasing from 55% to 35%. The results show that the assessment of the in-depth porosity could be significantly enhanced by taking into account the pure chemical etching of the layer in the hydrofluoric acid-based electrolyte.
Cheerleading and Cynicism of Effective Mentoring in Current Empirical Research
ERIC Educational Resources Information Center
Crutcher, Paul A.; Naseem, Samina
2016-01-01
This article presents the results of a review of current empirical research of effective practices in teacher mentoring. Compiling literature published since 2000 in peer-reviewed journals, we examine arguments for mentoring practices to improve teacher candidate and novice teacher experiences and skills. The emergent "effective"…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busuioc, A.; Storch, H. von; Schnur, R.
Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2×CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
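A schematic of the downscaling step on synthetic data, using scikit-learn's CCA as a stand-in for the paper's canonical correlation regression; all dimensions and fields are invented.

```python
# CCA links large-scale SLP anomalies to station precipitation; the fitted
# linear map is then applied to GCM-simulated SLP to produce regional scenarios.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n, n_grid, n_sta = 120, 50, 10            # seasons, SLP gridpoints, stations
slp = rng.normal(size=(n, n_grid))
truth = rng.normal(size=(n_grid, n_sta))
precip = slp @ truth * 0.1 + rng.normal(size=(n, n_sta))

cca = CCA(n_components=3).fit(slp, precip)
slp_gcm = rng.normal(size=(20, n_grid))   # stand-in for 2xCO2 GCM output
precip_scenario = cca.predict(slp_gcm)    # downscaled station precipitation
print(precip_scenario.shape)
```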
The effect of fiscal policy on diet, obesity and chronic disease: a systematic review.
Thow, Anne Marie; Jan, Stephen; Leeder, Stephen; Swinburn, Boyd
2010-08-01
To assess the effect of food taxes and subsidies on diet, body weight and health through a systematic review of the literature. We searched the English-language published and grey literature for empirical and modelling studies on the effects of monetary subsidies or taxes levied on specific food products on consumption habits, body weight and chronic conditions. Empirical studies dealt with an actual tax, while modelling studies predicted outcomes based on a hypothetical tax or subsidy. Twenty-four studies met the inclusion criteria: 13 were from the peer-reviewed literature and 11 were published online. There were 8 empirical and 16 modelling studies. Nine studies assessed the impact of taxes on food consumption only, 5 on consumption and body weight, 4 on consumption and disease and 6 on body weight only. In general, taxes and subsidies influenced consumption in the desired direction, with larger taxes being associated with more significant changes in consumption, body weight and disease incidence. However, studies that focused on a single target food or nutrient may have overestimated the impact of taxes by failing to take into account shifts in consumption to other foods. The quality of the evidence was generally low. Almost all studies were conducted in high-income countries. Food taxes and subsidies have the potential to contribute to healthy consumption patterns at the population level. However, current evidence is generally of low quality and the empirical evaluation of existing taxes is a research priority, along with research into the effectiveness and differential impact of food taxes in developing countries.
Drainage investment and wetland loss: an analysis of the national resources inventory data
Douglas, Aaron J.; Johnson, Richard L.
1994-01-01
The United States Soil Conservation Service (SCS) conducts a survey for the purpose of establishing an agricultural land use database. This survey is called the National Resources Inventory (NRI) database. The complex NRI land classification system, in conjunction with the quantitative information gathered by the survey, has numerous applications. The current paper uses the wetland area data gathered by the NRI in 1982 and 1987 to examine empirically the factors that generate wetland loss in the United States. The cross-section regression models listed here use the quantity of wetlands, the stock of drainage capital, the realty value of farmland and drainage costs to explain most of the cross-state variation in wetland loss rates. Wetlands preservation efforts by federal agencies assume that pecuniary economic factors play a decisive role in wetland drainage. The empirical models tested in the present paper validate this assumption.
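A minimal sketch of such a cross-state regression with fabricated data; only the variable list comes from the abstract, and the coefficients below carry no empirical meaning.

```python
# OLS of wetland loss on wetland stock, drainage capital, farmland realty
# value, and drainage cost across states (all numbers synthetic).
import numpy as np

rng = np.random.default_rng(5)
n_states = 48
X = np.column_stack([
    np.ones(n_states),                      # intercept
    rng.lognormal(3, 1, n_states),          # quantity of wetlands
    rng.lognormal(2, 1, n_states),          # stock of drainage capital
    rng.lognormal(7, 0.5, n_states),        # realty value of farmland
    rng.lognormal(4, 0.3, n_states),        # drainage cost
])
beta_true = np.array([0.0, 0.02, 0.05, 0.001, -0.01])
loss = X @ beta_true + rng.normal(0, 0.5, n_states)

beta_hat, *_ = np.linalg.lstsq(X, loss, rcond=None)
print(beta_hat)
```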
NASA Astrophysics Data System (ADS)
Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard
2014-03-01
The semi-empirical tight-binding model developed in Part I [Hegde et al., J. Appl. Phys. 115, 123703 (2014)] is applied to metal transport problems of current relevance in Part II. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on the electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in the conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation, since it shows the least reduction in conductance with confinement and responds most favorably to compressive strain.
Psychosocial interventions in bipolar disorder: a review.
Lolich, María; Vázquez, Gustavo H; Alvarez, Lina M; Tamayo, Jorge M
2012-01-01
Multiple psychosocial interventions for bipolar disorder have been proposed in recent years. Therefore, we consider that a critical review of empirically validated models would be useful. A review of the literature was conducted in Medline/PubMed for articles published during 2000-2010 that respond to the combination of "bipolar disorder" with the following key words: "psychosocial intervention", "psychoeducational intervention" and "psychotherapy". Cognitive-behavioral, psychoeducational, systematic care models, interpersonal and family therapy interventions were found to be empirically validated. All of them reported significant improvements in therapeutic adherence and in the patients' functionality. Although there are currently several validated psychosocial interventions for treating bipolar disorder, their efficacy needs to be specified in relation to more precise variables such as clinical type, comorbid disorders, stages or duration of the disease. Taking into account these clinical features would enable a proper selection of the most adequate intervention according to the patient's specific characteristics.
Species coextinctions and the biodiversity crisis.
Koh, Lian Pin; Dunn, Robert R; Sodhi, Navjot S; Colwell, Robert K; Proctor, Heather C; Smith, Vincent S
2004-09-10
To assess the coextinction of species (the loss of a species upon the loss of another), we present a probabilistic model, scaled with empirical data. The model examines the relationship between coextinction levels (proportion of species extinct) of affiliates and their hosts across a wide range of coevolved interspecific systems: pollinating Ficus wasps and Ficus, parasites and their hosts, butterflies and their larval host plants, and lycaenid butterflies and their host ants. Applying a nomographic method based on mean host specificity (number of host species per affiliate species), we estimate that 6,300 affiliate species are "coendangered" with host species currently listed as endangered. Current extinction estimates need to be recalibrated by taking species coextinctions into account.
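A minimal sketch of the coextinction logic above, in Python: an affiliate is counted as "coendangered" when all of its hosts are endangered. The host counts, endangered fraction and population sizes are invented parameters for illustration, not the paper's calibrated values.

```python
# Monte Carlo sketch of coextinction under random host endangerment.
import numpy as np

rng = np.random.default_rng(1)
n_hosts, n_affiliates = 1000, 5000
frac_endangered = 0.10
host_specificity = rng.poisson(1.5, size=n_affiliates) + 1  # hosts per affiliate

endangered = rng.random(n_hosts) < frac_endangered
coendangered = 0
for s in host_specificity:
    hosts = rng.choice(n_hosts, size=s, replace=False)      # this affiliate's hosts
    if endangered[hosts].all():                             # all hosts endangered
        coendangered += 1

print(f"{coendangered} of {n_affiliates} affiliates coendangered "
      f"({100 * coendangered / n_affiliates:.1f}%)")
```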
Modeling of Current Consumption in 802.15.4/ZigBee Sensor Motes
Casilari, Eduardo; Cano-García, Jose M.; Campos-Garrido, Gonzalo
2010-01-01
Battery consumption is a key aspect in the performance of wireless sensor networks. One of the most promising technologies for this type of networks is 802.15.4/ZigBee. This paper presents an empirical characterization of battery consumption in commercial 802.15.4/ZigBee motes. This characterization is based on the measurement of the current that is drained from the power source under different 802.15.4 communication operations. The measurements permit the definition of an analytical model to predict the maximum, minimum and mean expected battery lifetime of a sensor networking application as a function of the sensor duty cycle and the size of the sensed data. PMID:22219671
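The duty-cycle lifetime model lends itself to a short sketch. The Python below is a minimal illustration under assumed numbers: the per-state currents and durations are placeholders rather than the measured values from the paper, and a real 802.15.4 mote has more states than the four shown.

```python
# Duty-cycle averaged current draw and expected battery lifetime (illustrative).
battery_mAh = 2500.0

# (current in mA, seconds per cycle) for each mote state -- placeholder figures
states = {
    "sleep":    (0.005, 59.0),
    "wakeup":   (5.0,    0.2),
    "tx_frame": (35.0,   0.5),
    "rx_ack":   (28.0,   0.3),
}

cycle_s = sum(t for _, t in states.values())
avg_mA = sum(i * t for i, t in states.values()) / cycle_s   # time-weighted mean
lifetime_h = battery_mAh / avg_mA
active_pct = 100 * (cycle_s - states["sleep"][1]) / cycle_s
print(f"duty cycle {active_pct:.1f}% active, mean current {avg_mA:.3f} mA, "
      f"lifetime ~ {lifetime_h / 24:.0f} days")
```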
Welding current and melting rate in GMAW of aluminium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandey, S.; Rao, U.R.K.; Aghakhani, M.
1996-12-31
Studies on GMAW of aluminium and its alloy 5083 revealed that the welding current and melting rate were affected by any change in wire feed rate, arc voltage, nozzle-to-plate distance, welding speed and torch angle. Empirical models have been presented to determine accurately the welding current and melting rate for any set of these parameters. These results can be utilized for determining accurately the heat input into the workpiece, from which reliable predictions can be made about the mechanical and the metallurgical properties of a welded joint. The analysis of the model also helps in providing vital information about the static V-I characteristics of the welding power source. The models were developed using a two-level fractional factorial design. The adequacy of the model was tested by the use of the analysis of variance technique, and the significance of the coefficients was tested by Student's t-test. The estimated and observed values of the welding current and melting rate have been shown on a scatter diagram, and the interaction effects of different parameters involved have been presented in graphical forms.
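Fitting an empirical model on a two-level factorial design reduces to ordinary least squares on coded factors. The Python sketch below is illustrative only: the response values are synthetic, and just three factors and one interaction are shown rather than the paper's full parameter set.

```python
# Least-squares fit of a linear model on a 2^3 factorial design (synthetic data).
import numpy as np

# Coded settings (+/-1): wire feed rate, arc voltage, nozzle-to-plate distance
X = np.array([[s1, s2, s3] for s1 in (-1, 1) for s2 in (-1, 1) for s3 in (-1, 1)])

# Synthetic "welding current" response with noise (invented coefficients)
y = 180 + 25 * X[:, 0] + 10 * X[:, 1] - 5 * X[:, 2] \
    + 4 * X[:, 0] * X[:, 1] + np.random.default_rng(2).normal(0, 1.5, 8)

# Design matrix: intercept, main effects, one two-factor interaction
A = np.column_stack([np.ones(8), X, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients (b0, b1..b3, b12):", coef.round(2))
```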
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; the 1-D regenerator models used in Stirling design employ non-equilibrium formulations and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
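One way to picture the idea is to scale the theoretical least-squares covariance by the average weighted residual variance, so that unmodeled errors visible in the residuals inflate the reported uncertainty. The Python sketch below does exactly that on a synthetic linear problem; the scaling form and the toy measurement setup are illustrative assumptions, not NASA's exact formulation.

```python
# Residual-scaled ("empirical") covariance vs. theoretical covariance (toy problem).
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 4                                   # measurements, state size
H = rng.standard_normal((m, n))                 # linearized measurement matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
sigma = 0.1
W = np.eye(m) / sigma**2                        # assumed measurement weights

# Observations with an unmodeled bias the estimator does not know about
z = H @ x_true + rng.normal(0, sigma, m) + 0.05

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
r = z - H @ x_hat                               # measurement residuals
P_theory = np.linalg.inv(H.T @ W @ H)           # maps assumed errors only
P_emp = (r @ W @ r / m) * P_theory              # inflated by actual residuals
print("trace theory:", np.trace(P_theory), " trace empirical:", np.trace(P_emp))
```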
Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier
2016-02-22
The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon based solid-state optical imagers such as CIS or Charged Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
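The mapping step can be sketched as a Monte Carlo calculation: draw peak currents from an assumed PCD, attenuate them with range to each sensor, and call a flash detected when enough sites see it. In the Python below, the site layout, the lognormal PCD, the 1/r attenuation, the threshold and the four-site solution rule are all illustrative assumptions.

```python
# Monte Carlo detection-efficiency estimate for a sparse sensor network (toy model).
import numpy as np

rng = np.random.default_rng(4)
sites = np.array([[0, 0], [200, 0], [0, 200], [200, 200], [100, 100]])  # km
threshold = 0.1          # minimum attenuated amplitude a site can register
n_sites_needed = 4       # e.g. a time-of-arrival solution needs four sites

def detection_efficiency(x, y, n_draws=2000):
    peak = rng.lognormal(mean=3.0, sigma=0.6, size=n_draws)   # assumed PCD, kA
    r = np.hypot(sites[:, 0] - x, sites[:, 1] - y)            # km to each site
    signal = peak[:, None] / np.maximum(r, 1.0)               # 1/r attenuation
    return np.mean((signal > threshold).sum(axis=1) >= n_sites_needed)

for xy in [(100, 100), (300, 300), (600, 100)]:
    print(xy, f"DE = {detection_efficiency(*xy):.2f}")
```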
Was Newton right? A search for non-Newtonian behavior of weak-field gravity
NASA Astrophysics Data System (ADS)
Boynton, Paul; Moore, Michael; Newman, Riley; Berg, Eric; Bonicalzi, Ricco; McKenney, Keven
2014-06-01
Empirical tests of Einstein's metric theory of gravitation, even in the non-relativistic, weak-field limit, could play an important role in judging theory-driven extensions of the current Standard Model of fundamental interactions. Guided by Galileo's work and his own experiments, Newton formulated a theory of gravity in which the force of attraction between two bodies is independent of composition and proportional to the inertia of each, thereby transparently satisfying Galileo's empirically informed conjecture regarding the Universality of Free Fall. Similarly, Einstein honored the manifest success of Newton's theory by assuring that the linearized equations of GTR matched the Newtonian formalism under "classical" conditions. Each of these steps, however, was explicitly an approximation raised to the status of principle. Perhaps, at some level, Newtonian gravity does not accurately describe the physical interaction between uncharged, unmagnetized, macroscopic bits of ordinary matter. What if Newton were wrong? Detecting any significant deviation from Newtonian behavior, no matter how small, could provide new insights and possibly reveal new physics. In the context of physics as an empirical science, for us this yet unanswered question constitutes sufficient motivation to attempt precision measurements of the kind described here. In this paper we report the current status of a project to search for violation of the Newtonian inverse square law of gravity.
NASA Astrophysics Data System (ADS)
Crighton, David G.
1991-08-01
Current understanding of airframe noise was reviewed as represented by experiment at model and full scale, by theoretical modeling, and by empirical correlation models. The principal component sources are associated with the trailing edges of wing and tail, deflected trailing edge flaps, flap side edges, leading edge flaps or slats, undercarriage gear elements, gear wheel wells, fuselage and wing boundary layers, and panel vibration, together with many minor protrusions like radio antennas and air conditioning intakes which may contribute significantly to perceived noise. There are also possibilities for interactions between the various mechanisms. With current engine technology, the principal airframe noise mechanisms dominate only at low frequencies, typically less than 1 kHz and often much lower, but further reduction of turbomachinery noise in particular may make airframe noise the principal element of approach noise at frequencies in the sensitive range.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
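Third-order Hermite interpolation, the mathematical building block named above, fixes a cubic by the function values and derivatives at the two interval ends. The Python sketch below shows only that machinery; the endpoint charge and slope values are placeholders, since the actual model derives them from MOSFET physics not reproduced here.

```python
# Cubic Hermite interpolation between two endpoint values and derivatives.
import numpy as np

def hermite3(t, q0, q1, dq0, dq1):
    """Cubic Hermite basis on t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * q0 + h10 * dq0 + h01 * q1 + h11 * dq1

# Placeholder inversion-charge endpoints and slopes between source and drain
t = np.linspace(0.0, 1.0, 5)
q = hermite3(t, q0=1.0, q1=0.2, dq0=-1.5, dq1=-0.4)
print(q.round(3))
```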
Improving Control of Tuberculosis in Low-Burden Countries: Insights from Mathematical Modeling
White, Peter J.; Abubakar, Ibrahim
2016-01-01
Tuberculosis control and elimination remains a challenge for public health even in low-burden countries. New technology and novel approaches to case-finding, diagnosis, and treatment are causes for optimism but they need to be used cost-effectively. This in turn requires improved understanding of the epidemiology of TB and analysis of the effectiveness and cost-effectiveness of different interventions. We describe the contribution that mathematical modeling can make to understanding epidemiology and control of TB in different groups, guiding improved approaches to public health interventions. We emphasize that modeling is not a substitute for collecting data but rather is complementary to empirical research, helping determine the key questions to address to maximize the public-health impact of research, helping to plan studies, and making maximal use of available data, particularly from surveillance and observational studies. We provide examples of how modeling and related empirical research inform policy and discuss how a combination of these approaches can be used to address current questions of key importance, including use of whole-genome sequencing, screening and treatment for latent infection, and combating drug resistance. PMID:27199896
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
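The comparison can be sketched by fitting a lognormal to a volatility sample and checking quantiles in both tails. The Python below uses synthetic volatilities and SciPy's lognorm fit; the Hull and White pdf, which has no ready-made form in scipy.stats, is not reproduced.

```python
# Fit a lognormal pdf to a volatility sample and compare tail quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
vol = rng.lognormal(mean=-1.6, sigma=0.35, size=5000)  # stand-in volatilities

shape, loc, scale = stats.lognorm.fit(vol, floc=0.0)   # fix location at zero
for q in (0.05, 0.5, 0.95, 0.995):
    emp = np.quantile(vol, q)
    fit = stats.lognorm.ppf(q, shape, loc=loc, scale=scale)
    print(f"quantile {q:>5}: empirical {emp:.3f}  lognormal {fit:.3f}")
```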
Towards a universal model for carbon dioxide uptake by plants
Wang, Han; Prentice, I. Colin; Keenan, Trevor F.; ...
2017-09-04
Gross primary production (GPP) - the uptake of carbon dioxide (CO2) by leaves, and its conversion to sugars by photosynthesis - is the basis for life on land. Earth System Models (ESMs) incorporating the interactions of land ecosystems and climate are used to predict the future of the terrestrial sink for anthropogenic CO2. ESMs require accurate representation of GPP. However, current ESMs disagree on how GPP responds to environmental variations, suggesting a need for a more robust theoretical framework for modelling. Here we focus on a key quantity for GPP, the ratio of leaf-internal to external CO2 (χ). χ is tightly regulated and depends on environmental conditions, but is represented empirically and incompletely in today's models. We show that a simple evolutionary optimality hypothesis predicts specific quantitative dependencies of χ on temperature, vapour pressure deficit and elevation; and that these same dependencies emerge from an independent analysis of empirical χ values, derived from a worldwide dataset of >3,500 leaf stable carbon isotope measurements. A single global equation embodying these relationships then unifies the empirical light-use efficiency model with the standard model of C3 photosynthesis, and successfully predicts GPP measured at eddy-covariance flux sites. This success is notable given the equation's simplicity and broad applicability across biomes and plant functional types. Finally, it provides a theoretical underpinning for the analysis of plant functional coordination across species and emergent properties of ecosystems, and a potential basis for the reformulation of the controls of GPP in next-generation ESMs.
Vujanovic, Anka A; Meyer, Thomas D; Heads, Angela M; Stotts, Angela L; Villarreal, Yolanda R; Schmitz, Joy M
2017-07-01
The co-occurrence of depression and substance use disorders (SUD) is highly prevalent and associated with poor treatment outcomes for both disorders. As compared to individuals suffering from either disorder alone, individuals with both conditions are likely to endure a more severe and chronic clinical course with worse treatment outcomes. Thus, current practice guidelines recommend treating these co-occurring disorders simultaneously. The overarching aims of this narrative are two-fold: (1) to provide an updated review of the current empirical status of integrated psychotherapy approaches for SUD and depression comorbidity, based on models of traditional cognitive-behavioral therapy (CBT) and newer third-wave CBT approaches, including acceptance- and mindfulness-based interventions and behavioral activation (BA); and (2) to propose a novel theoretical framework for transdiagnostic CBT for SUD-depression, based upon empirically grounded psychological mechanisms underlying this highly prevalent comorbidity. Traditional CBT approaches for the treatment of SUD-depression are well-studied. Despite advances in the development and evaluation of various third-wave psychotherapies, more work needs to be done to evaluate the efficacy of such approaches for SUD-depression. Informed by this summary of the evidence, we propose a transdiagnostic therapy approach that aims to integrate treatment elements found in empirically supported CBT-based interventions for SUD and depression. By targeting shared cognitive-affective processes underlying SUD-depression, transdiagnostic treatment models have the potential to offer a novel clinical approach to treating this difficult-to-treat comorbidity and relevant, co-occurring psychiatric disturbances, such as posttraumatic stress.
Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia
2015-01-01
Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized controlled trials, cohort studies, and feasibility studies. Random- and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, the I2 statistic, and between-study τ2. Incorporating these parameters and the direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration, without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less per FN episode than the empirical approach (95% credible interval -$291.88 to $418.65). However, the cost difference was influenced by relatively small changes in the costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in the cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, the development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
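Pooled relative risks of the kind quoted above are typically computed by inverse-variance weighting on the log-RR scale. The Python sketch below shows the fixed-effect version with invented study inputs; a random-effects model would additionally estimate the between-study τ2 mentioned in the abstract.

```python
# Fixed-effect inverse-variance pooling of relative risks (invented study data).
import numpy as np

rr = np.array([0.55, 0.40, 0.62, 0.35])          # hypothetical study RRs
ci_upper = np.array([0.90, 0.75, 1.10, 0.80])    # hypothetical upper 95% CIs

log_rr = np.log(rr)
se = (np.log(ci_upper) - log_rr) / 1.96          # back out standard errors
w = 1.0 / se**2                                  # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```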
Economic selection indexes for Hereford and Braford cattle raised in southern Brazil.
Costa, R F; Teixeira, B B M; Yokoo, M J; Cardoso, F F
2017-07-01
Economic selection indexes (EI) are considered the best way to select the most profitable animals for specific production systems. Nevertheless, in Brazil, few genetic evaluation programs deliver such indexes to their breeders. The aims of this study were to determine the breeding goals (BG) and economic values (EV, in US$) for typical beef cattle production systems in southern Brazil, to propose EI aimed to maximize profitability, and to compare the proposed EI with the currently used empirical index. Bioeconomic models were developed to characterize 3 typical production systems, identifying traits of economic impact and their respective EV. The first was called the calf-crop system and included the birth rate (BR), direct weaning weight (WWd), and mature cow weight (MCW) as selection goals. The second system was called the full-cycle system, and its breeding goals were BR, WWd, MCW, and carcass weight (CW). Finally, the third was called the stocking and finishing system, which had WWd and CW as breeding goals. To generate the EI, we adopted the selection criteria currently measured and used in the empirical index of PampaPlus, which is the genetic evaluation program of the Brazilian Hereford and Braford Association. The comparison between the EI and the current PampaPlus index was made by the aggregated genetic-economic gain per generation (Δ). Therefore, for each production system an index was developed using the derived economic weights, and it was compared with the current empirical index. The relative importance (RI) of BR, WWd, and MCW for the calf-crop system was 68.03%, 19.35%, and 12.62%, respectively. For the full-cycle system, the RI for BR, WWd, MCW, and CW was 69.63%, 7.31%, 5.01%, and 18.06%, respectively. For the stocking and finishing production system, the RI for WWd and CW was 34.20% and 65.80%, respectively. The Δ for the calf-crop system were US$6.12 and US$4.36, using the proposed economic and empirical indexes, respectively. Respective values were US$19.87 and US$18.22 for the full-cycle system and US$20.52 and US$18.52 in the stocking and finishing system. The efficiency of the proposed EI had low sensitivity to changes in the values of the economic and genetic parameters. The 3 EI generated higher Δ when using the proposed economic weights compared to the Δ provided by the PampaPlus index, suggesting the use of the proposed EI to obtain greater economic profitability relative to the current empirical PampaPlus index.
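At its core, an economic selection index is a weighted sum of estimated breeding values, with the economic values as weights. The Python below is a minimal sketch: the trait list follows the calf-crop breeding goals named above, but the economic values and EBVs are invented, not the derived PampaPlus weights.

```python
# Rank candidate sires by an economic selection index (invented weights and EBVs).
import numpy as np

traits = ["BR", "WWd", "MCW"]                   # calf-crop breeding goals
econ_value = np.array([3.2, 0.45, -0.12])       # assumed US$ per unit of each trait

ebv = np.array([                                # rows: candidate sires
    [1.5, 12.0, 8.0],
    [0.8, 20.0, -5.0],
    [2.1, 5.0, 15.0],
])

index = ebv @ econ_value                        # index = sum of EV-weighted EBVs
ranking = np.argsort(index)[::-1]
for i in ranking:
    print(f"sire {i}: index {index[i]:+.2f} US$")
```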
Scaling rules for the final decline to extinction
Griffen, Blaine D.; Drake, John M.
2009-01-01
Space–time scaling rules are ubiquitous in ecological phenomena. Current theory postulates three scaling rules that describe the duration of a population's final decline to extinction, although these predictions have not previously been empirically confirmed. We examine these scaling rules across a broader set of conditions, including a wide range of density-dependent patterns in the underlying population dynamics. We then report on tests of these predictions from experiments using the cladoceran Daphnia magna as a model. Our results support two predictions that: (i) the duration of population persistence is much greater than the duration of the final decline to extinction and (ii) the duration of the final decline to extinction increases with the logarithm of the population's estimated carrying capacity. However, our results do not support a third prediction that the duration of the final decline scales inversely with population growth rate. These findings not only support the current standard theory of population extinction but also introduce new empirical anomalies awaiting a theoretical explanation. PMID:19141422
NASA Astrophysics Data System (ADS)
Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team
2018-04-01
Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the VX model better predicts the electron and ion temperature profiles. It is found that the SAT0 model works well at high IP (>0.76 MA), while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.
Mathematics for understanding disease.
Bies, R R; Gastonguay, M R; Schwartz, S L
2008-06-01
The application of mathematical models to reflect the organization and activity of biological systems can be viewed as a continuum of purpose. At the far left of the continuum is solely the prediction of biological parameter values, wherein an understanding of the underlying biological processes is irrelevant to the purpose. At the far right of the continuum are mathematical models whose purpose is a precise understanding of those biological processes. No models in present use fall at either end of the continuum. Without question, however, the emphasis with regard to purpose has been on prediction, e.g., clinical trial simulation and empirical disease progression modeling. Clearly the model that ultimately incorporates a universal understanding of biological organization will also precisely predict biological events, giving the continuum the logical form of a tautology. Currently that goal lies at an immeasurable distance. Nonetheless, the motive here is to urge movement in the direction of that goal. The distance traveled toward understanding naturally depends upon the nature of the scientific question posed with respect to comprehending and/or predicting a particular disease process. A move toward mathematical models implies a move away from static empirical modeling and toward models that focus on systems biology, wherein modeling entails the systematic study of the complex pattern of organization inherent in biological systems.
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all tests cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.
Counselor Training: Empirical Findings and Current Approaches
ERIC Educational Resources Information Center
Buser, Trevor J.
2008-01-01
The literature on counselor training has included attention to cognitive and interpersonal skill development and has reported on empirical findings regarding the relationship of training with client outcomes. This article reviews the literature on each of these topics and discusses empirical and theoretical underpinnings of recently developed…
De Vries, Martine; Van Leeuwen, Evert
2010-11-01
In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached.
GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data
NASA Technical Reports Server (NTRS)
VanHelleputte, Tom; Visser, Pieter
2007-01-01
Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high-sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes all these missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, these measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS. It is extensively tested with data of the CHAMP and GRACE satellites. As accelerometer observations can typically be affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and a bias factor for each of the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factor are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and the empirical accelerations estimated when using these models. When replacing the non-conservative force models with accelerometer observations and still estimating empirical accelerations, a good orbit precision is achieved: 100 days of GRACE B data processing results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits. This shows a slightly better consistency compared to the case when using force models. A purely dynamic orbit, without estimating empirical accelerations and thus only adjusting six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in-orbit calibrated accelerometer observations can be used to validate the modelled accelerations and estimated empirical accelerations computed with the GHOST tools. In the along-track direction they show the best resemblance, with a mean correlation coefficient of 93% for the same period. In the radial and normal directions the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible: the observations during these days show fluctuations that the modelled and empirical accelerations cannot follow.
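The a priori bias computation described above amounts to aligning the mean of the raw accelerometer series with the mean of the modeled non-conservative accelerations plus the estimated empirical accelerations. The one-axis Python sketch below uses synthetic series and assumes the linear calibration a_cal = scale * a_meas + bias; the numbers are not from CHAMP or GRACE.

```python
# A priori accelerometer bias from mean alignment with force models (one axis, synthetic).
import numpy as np

rng = np.random.default_rng(6)
n = 86400                                                # 1 day at 1 Hz
a_model = -2e-7 + 5e-8 * np.sin(np.linspace(0, 30, n))   # modeled drag, m/s^2
a_emp = rng.normal(0, 1e-9, n)                           # estimated empirical accel.
true_bias, true_scale = 3e-6, 0.98                       # unknown to the estimator
a_meas = (a_model - true_bias) / true_scale + rng.normal(0, 2e-9, n)

scale0 = 1.0                                             # nominal a priori scale
bias0 = np.mean(a_model + a_emp) - scale0 * np.mean(a_meas)
print(f"a priori bias: {bias0:.3e} m/s^2 (true bias folded in: {true_bias:.1e})")
```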
Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco
2007-01-01
The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, YaLing; Tao, Wen -Quan
The electrode of a vanadium redox flow battery generally is a carbon fibre-based porous medium, in which important physicochemical processes occur. In this work, pore-scale simulations are performed to study complex multiphase flow and reactive transport in the electrode by using the lattice Boltzmann method (LBM). Four hundred fibrous electrodes with different fibre diameters and porosities are reconstructed. Both the permeability and diffusivity of the reconstructed electrodes are predicted and compared with empirical relationships in the literature. The reactive surface area of the electrodes is also evaluated, and it is found that the existing empirical relationship overestimates the reactive surface under lower porosities. Further, a pore-scale electrochemical reaction model is developed to study the effects of fibre diameter and porosity on electrolyte flow, V(II)/V(III) transport, and the electrochemical reaction at the electrolyte-fibre surface. Finally, the evolution of bubble clusters generated by the side reaction is studied by adopting a LB multiphase flow model. Effects of porosity, fibre diameter, gas saturation and solid surface wettability on the average bubble diameter and the reduction of reactive surface area due to coverage of bubbles on the solid surface are investigated in detail. It is found that the gas coverage ratio is always lower than that adopted in the continuum model in the literature. Furthermore, the current pore-scale studies successfully reveal the complex multiphase flow and reactive transport processes in the electrode, and the simulation results can be further upscaled to improve the accuracy of the current continuum-scale models.
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
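The prior-to-posterior update is easiest to see with a conjugate pair, where the posterior is available in closed form and no Monte Carlo integration is needed. The Python sketch below uses a Beta-Binomial example; the prior pseudo-counts stand in for the "scientific information available to a researcher", and all numbers are invented.

```python
# Beta-Binomial conjugate update: prior + data -> closed-form posterior.
from scipy import stats

a_prior, b_prior = 4, 16         # prior belief: success rate near 0.2
successes, failures = 27, 73     # current data

posterior = stats.beta(a_prior + successes, b_prior + failures)
print(f"posterior mean {posterior.mean():.3f}, "
      f"95% interval ({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")
```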
Are personality differences in a small iteroparous mammal maintained by a life-history trade-off?
Dammhahn, Melanie
2012-01-01
Despite increasing interest, animal personality is still a puzzling phenomenon. Several theoretical models have been proposed to explain intraindividual consistency and interindividual variation in behaviour, which have been primarily supported by qualitative data and simulations. Using an empirical approach, I tested predictions of one main life-history hypothesis, which posits that consistent individual differences in behaviour are favoured by a trade-off between current and future reproduction. Data on life-history were collected for individuals of a natural population of grey mouse lemurs (Microcebus murinus). Using open-field and novel-object tests, I quantified variation in activity, exploration and boldness for 117 individuals over 3 years. I found systematic variation in boldness between individuals of different residual reproductive value. Young males with low current but high expected future fitness were less bold than older males with high current fecundity, and males might increase in boldness with age. Females have low variation in assets and in boldness with age. Body condition was not related to boldness and only explained marginal variation in exploration. Overall, these data indicate that a trade-off between current and future reproduction might maintain personality variation in mouse lemurs, and thus provide empirical support of this life-history trade-off hypothesis. PMID:22398164
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates for annual infections per smear-positive TB case using previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
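The bookkeeping behind such estimates can be sketched with a static equilibrium argument: annual infections equal ARTI times population, so infections per smear-positive case-year follow from prevalence, and incidence follows from prevalence divided by an assumed duration of disease. The Python below is far simpler than the paper's transmission-dynamic model, and every number in it is an illustrative assumption.

```python
# Back-of-envelope TB incidence from ARTI and prevalence (static equilibrium sketch).
population = 1_000_000
arti = 0.015                     # annual risk of tuberculous infection (assumed)
prev_sp = 250 / 100_000          # smear-positive prevalence (assumed)
duration_years = 2.0             # assumed mean duration of infectiousness

cases = prev_sp * population
infections_per_case = arti * population / cases      # infections per case-year
incidence_per_100k = 100_000 * prev_sp / duration_years
print(f"{infections_per_case:.1f} infections per case-year, "
      f"incidence ~ {incidence_per_100k:.0f} per 100,000 per year")
```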
Curtis, Evan T; Jamieson, Randall K
2018-04-01
Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.
ERIC Educational Resources Information Center
Porfeli, Erik J.; Richard, George V.; Savickas, Mark L.
2010-01-01
An empirical measurement model for interest inventory construction uses internal criteria whereas an inductive measurement model uses external criteria. The empirical and inductive measurement models are compared and contrasted and then two models are assessed through tests of the effectiveness and economy of scales for the Medical Specialty…
Bridging process-based and empirical approaches to modeling tree growth
Harry T. Valentine; Annikki Makela
2005-01-01
The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...
Temperament, Speech and Language: An Overview
Conture, Edward G.; Kelly, Ellen M.; Walden, Tedra A.
2013-01-01
The purpose of this article is to discuss definitional and measurement issues as well as empirical evidence regarding temperament, especially with regard to children's (a)typical speech and language development. Although all ages are considered, there is a predominant focus on children. Evidence from considerable empirical research lends support to the association between temperament, childhood development and social competence. With regard to communication disorders, extant literature suggests that at least certain elements of temperament (e.g., attention regulation, inhibitory control) are associated with the presence of certain communication disorders. However, the precise nature of this association remains unclear. Three possible accounts of the association between temperament and speech-language disorder are presented. One, the disability model (i.e., certain disorders impact psychological processes leading to changes in these processes, personality, etc., Roy & Bless, 2000a) suggests speech-language disorders may lead to or cause changes in psychological or temperamental characteristics. The disability account cannot be categorically refuted based on currently available research findings. The (pre)dispositional or vulnerability model (i.e., certain psychological processes directly cause the disorder or indirectly modify the course or expression of the disorder, Roy & Bless, 2000a) suggests that psychological or temperamental characteristics may lead to or cause changes in speech-language disorders. The vulnerability account has received some empirical support with regard to stuttering and voice disorders but has not received widespread empirical testing for most speech-language disorders. A third, interaction account, suggests that “disability” and “vulnerability” may both impact communication disorders in a complex, dynamically-changing manner, a possibility that must await further empirical study. Suggestions for future research directions are provided. PMID:23273707
Measuring water fluxes in forests: The need for integrative platforms of analysis
Ward, Eric J.
2016-08-09
To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers and the minutia of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build into their models complexity that matches this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. In addition, this creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.
Bagby, R Michael; Widiger, Thomas A
2018-01-01
The Five-Factor Model (FFM) is a dimensional model of general personality structure, consisting of the domains of neuroticism (or emotional instability), extraversion versus introversion, openness (or unconventionality), agreeableness versus antagonism, and conscientiousness (or constraint). The FFM is arguably the most commonly researched dimensional model of general personality structure. However, a notable limitation of existing measures of the FFM has been a lack of coverage of its maladaptive variants. A series of self-report inventories has been developed to assess for the maladaptive personality traits that define Diagnostic and Statistical Manual of Mental Disorders (fifth edition; DSM-5) Section II personality disorders (American Psychiatric Association [APA], 2013) from the perspective of the FFM. In this paper, we provide an introduction to this Special Section, presenting the rationale and empirical support for these measures and placing them in the historical context of the recent revision to the APA diagnostic manual. This introduction is followed by 5 papers that provide further empirical support for these measures and address current issues within the personality assessment literature. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Berman, A. L.
1976-01-01
In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
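The mapping step described above can be sketched in a few lines. The snippet below is a minimal illustration assuming a simple 1/sin(E) mapping function and placeholder coefficients; the JPL day/night wet model coefficients and empirical elevation function are not given in the abstract:

    import math

    def zenith_refraction_m(p_hpa, e_hpa, temp_k, day=True):
        # Dry (hydrostatic) term: standard form, ~2.3 m at sea-level pressure.
        dry = 0.002277 * p_hpa
        # Wet term: proportional to surface water vapor pressure and inversely
        # proportional to surface temperature, with hypothetical day/night constants.
        k = 0.05 if day else 0.07
        wet = k * e_hpa / temp_k
        return dry + wet

    def refraction_at_elevation(zenith_m, elev_deg):
        # Simple 1/sin(E) mapping; the actual model maps via an empirically
        # derived function of elevation instead.
        return zenith_m / math.sin(math.radians(elev_deg))

    print(refraction_at_elevation(zenith_refraction_m(1013.0, 10.0, 290.0), 15.0))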
Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns
NASA Astrophysics Data System (ADS)
Jia, Shenli; Li, Rui; Li, Xingwen
2009-10-01
Wall ablation is a key process in the capillary plasma generator of Electrothermal-Chemical (ETC) guns, and its characteristics directly determine the generator's performance. In the present article, this ablation process is theoretically studied. Mathematical models currently in wide use for describing the process are analyzed and compared: a recently developed kinetic model, which accounts for the unsteady state in the plasma-wall transition region by dividing it into two sub-layers (a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer); a model based on the Langmuir law; and a simplified model widely used for the arc-wall interaction process in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The same bulk plasma state and parameters are assumed when analyzing and comparing the models, so that only the differences caused by the models themselves are considered. Finally, the ablation rate is calculated with each method and the differences are discussed.
ERIC Educational Resources Information Center
Fierro, Catriel; Ostrovsky, Ana Elisa; Di Doménico, María Cristina
2018-01-01
This study is an empirical analysis of the field's current state in Argentinian universities. Bibliometric parameters were used to retrieve the total listed texts (N = 797) of eight undergraduate history courses' syllabi from Argentina's most populated public university psychology programs. Then, professors in charge of the selected courses (N =…
Modeling of outgassing and matrix decomposition in carbon-phenolic composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1993-01-01
A new release rate equation to model the phase change of water to steam in composite materials was derived from the theory of molecular diffusion and equilibrium moisture concentration. The new model is dependent on internal pressure, the microstructure of the voids and channels in the composite materials, and the diffusion properties of the matrix material. Hence, it is more fundamental and accurate than the empirical Arrhenius rate equation currently in use. The model was mathematically formalized and integrated into the thermostructural analysis code CHAR. Parametric studies varying several parameters were performed. Comparisons to Arrhenius and straight-line models show that the new model produces physically realistic results under all conditions.
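For contrast, the empirical Arrhenius rate form that the new model replaces can be sketched in a few lines; the pre-exponential factor and activation energy below are hypothetical placeholders, not values from the paper:

    import math

    R_GAS = 8.314  # J/(mol K)

    def arrhenius_release_rate(temp_k, a_factor=1.0e6, e_act=6.0e4):
        # Empirical Arrhenius form: rate = A * exp(-Ea / (R T)).
        # a_factor (A) and e_act (Ea, J/mol) are illustrative values only.
        return a_factor * math.exp(-e_act / (R_GAS * temp_k))

    for t in (300.0, 500.0, 800.0):
        print(t, arrhenius_release_rate(t))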
Theory, modeling, and integrated studies in the Arase (ERG) project
NASA Astrophysics Data System (ADS)
Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa
2018-02-01
Understanding the underlying mechanisms of the drastic variations of the near-Earth space environment (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand geospace variations with a focus on relativistic electron acceleration and loss processes. To achieve this goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis that combines different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project, together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of radiation belt electrons, the GEMSIS-RB and RBW models, the CIMI model with the global MHD simulation REPPU, the GEMSIS-RC model, the plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools that support integrated studies with various kinds of data are also briefly introduced.
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over more simple empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
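The point-to-production-rate conversion that such a model enables can be illustrated with the simplest limiting case, a spherically symmetric constant-velocity outflow (a Haser-type model without photodestruction); the empirical model replaces the uniform 1/r^2 dependence below with its local-time and declination parameterization. The outflow speed here is an assumed value:

    import math

    def production_rate(n_local_m3, r_m, v_gas=700.0):
        # Spherically symmetric coma: n(r) = Q / (4 pi r^2 v),
        # so a point density measurement inverts to Q = 4 pi r^2 v n.
        return 4.0 * math.pi * r_m**2 * v_gas * n_local_m3

    # e.g. a density of 1e13 m^-3 measured 30 km from the nucleus
    print("Q =", production_rate(1.0e13, 30.0e3), "molecules/s")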
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
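The thermodynamic relation at the heart of converting computed electrostatic energies into pKa shifts can be sketched as follows; this is the generic textbook relation, not the MEAD implementation, and the example numbers are illustrative:

    def pka_shift(ddg_kcal_mol, temp_k=298.15):
        # dpKa = ddG / (2.303 R T), with ddG the electrostatic
        # free-energy penalty for deprotonation (kcal/mol).
        r_kcal = 1.987e-3  # gas constant in kcal/(mol K)
        return ddg_kcal_mol / (2.303 * r_kcal * temp_k)

    # A +1.4 kcal/mol penalty raises a model-compound cysteine pKa of ~8.3 by ~1 unit
    print(8.3 + pka_shift(1.4))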
The halo current in ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Pautasso, G.; Giannone, L.; Gruber, O.; Herrmann, A.; Maraschek, M.; Schuhbeck, K. H.; ASDEX Upgrade Team
2011-04-01
Due to the complexity of the phenomena involved, a self-consistent physical model for the prediction of the halo current is not available. Therefore the ITER specifications of the spatial distribution and evolution of the halo current rely on empirical assumptions. This paper presents the results of an extensive analysis of the halo current measured in ASDEX Upgrade, with particular emphasis on the evolution of the halo region, on the magnitude and time history of the halo current, and on the structure and duration of its toroidal and poloidal asymmetries. The effective length of the poloidal path of the halo current in the vessel is found to be rather insensitive to plasma parameters. Large values of the toroidally averaged halo current are observed in both vertical displacement events and centred disruptions but last only a small fraction of the current quench; they typically coincide with a large but short-lived MHD event.
An oilspill trajectory analysis model with a variable wind deflection angle
Samuels, W.B.; Huang, N.E.; Amstutz, D.E.
1982-01-01
The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
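A minimal sketch of one trajectory step as described above: the wind-driven drift (3.5% of wind speed, rotated clockwise) is added to the surface current. The variable-deflection formula below is a hypothetical stand-in for the empirical wind-speed relation used in the model:

    import math

    def rotate_cw(vx, vy, angle_deg):
        # Clockwise rotation (Northern Hemisphere Coriolis deflection).
        a = math.radians(-angle_deg)
        return (vx * math.cos(a) - vy * math.sin(a),
                vx * math.sin(a) + vy * math.cos(a))

    def variable_deflection(wind_speed):
        # Hypothetical decreasing angle with wind speed, standing in for
        # the empirical formula; the constant model would return 20.0.
        return max(5.0, 25.0 - 1.5 * wind_speed)

    def drift_step(wind, current):
        wx, wy = wind
        angle = variable_deflection(math.hypot(wx, wy))
        dx, dy = rotate_cw(0.035 * wx, 0.035 * wy, angle)
        return (dx + current[0], dy + current[1])

    print(drift_step((10.0, 0.0), (0.1, 0.05)))  # net surface drift vector, m/s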
Modeling local chemistry in PWR steam generator crevices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millett, P.J.
1997-02-01
Over the past two decades steam generator corrosion damage has been a major cost to PWR owners. Crevices and occluded regions create thermal-hydraulic conditions where aggressive impurities can become highly concentrated, promoting localized corrosion of the tubing and support structure materials. The type of corrosion varies depending on the local conditions, with stress corrosion cracking being the phenomenon of most current concern. A major goal of the EPRI research in this area has been to develop models of the concentration process and resulting crevice chemistry conditions. These models may then be used to predict crevice chemistry based on knowledge of bulk chemistry, thereby allowing the operator to control corrosion damage. Rigorous deterministic models have not yet been developed; however, empirical approaches have shown promise and are reflected in current versions of the industry-developed secondary water chemistry guidelines.
Development of a Conceptual Chum Salmon Emergence Model for Ives Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, Christopher J.; Geist, David R.; Arntzen, Evan V.
2011-02-09
The objective of the study described herein was to develop a conceptual model of chum salmon emergence that was based on empirical water temperature of the riverbed and river in specific locations where chum salmon spawn in the Ives Island area. The conceptual model was developed using water temperature data that have been collected in the past and are currently being collected in the Ives Island area. The model will be useful to system operators who need to estimate the complete distribution of chum salmon emergence (first emergence through final emergence) in order to balance chum salmon redd protection and power system operation.
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
NASA Astrophysics Data System (ADS)
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will replace the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following earlier investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
NASA Astrophysics Data System (ADS)
Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek
2017-12-01
The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinder similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new, empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ˜ 25,000), Y-band (˜1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in Teff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < Teff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.
The effect of fiscal policy on diet, obesity and chronic disease: a systematic review
Jan, Stephen; Leeder, Stephen; Swinburn, Boyd
2010-01-01
Abstract Objective To assess the effect of food taxes and subsidies on diet, body weight and health through a systematic review of the literature. Methods We searched the English-language published and grey literature for empirical and modelling studies on the effects of monetary subsidies or taxes levied on specific food products on consumption habits, body weight and chronic conditions. Empirical studies dealt with an actual tax, while modelling studies predicted outcomes based on a hypothetical tax or subsidy. Findings Twenty-four studies met the inclusion criteria: 13 were from the peer-reviewed literature and 11 were published on line. There were 8 empirical and 16 modelling studies. Nine studies assessed the impact of taxes on food consumption only, 5 on consumption and body weight, 4 on consumption and disease and 6 on body weight only. In general, taxes and subsidies influenced consumption in the desired direction, with larger taxes being associated with more significant changes in consumption, body weight and disease incidence. However, studies that focused on a single target food or nutrient may have overestimated the impact of taxes by failing to take into account shifts in consumption to other foods. The quality of the evidence was generally low. Almost all studies were conducted in high-income countries. Conclusion Food taxes and subsidies have the potential to contribute to healthy consumption patterns at the population level. However, current evidence is generally of low quality and the empirical evaluation of existing taxes is a research priority, along with research into the effectiveness and differential impact of food taxes in developing countries. PMID:20680126
Superconducting cosmic string loops as sources for fast radio bursts
NASA Astrophysics Data System (ADS)
Cao, Xiao-Feng; Yu, Yun-Wei
2018-01-01
The cusp burst radiation of superconducting cosmic string (SCS) loops is thought to be a possible origin of observed fast radio bursts. Using the model-predicted radiation spectrum and the redshift- and energy-dependent event rate, we fit the observational redshift and energy distributions of 21 Parkes fast radio bursts and constrain the model parameters. It is found that the model can basically be consistent with the observations if the current on the SCS loops has a present value of ~10^16 μ17^(9/10) esu s^-1 and evolves with redshift as an empirical power law ~(1 + z)^(-1.3), where μ17 = μ/(10^17 g cm^-1) is the string tension.
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation; it is computed using the concave-convex procedure, and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
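A minimal simulation sketch of the data structure involved, assuming a linear conditional-quantile model for the failure time; the M-estimation, concave-convex, and subsampling steps of the paper are not reproduced, and the coefficients are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(0.0, 1.0, n)

    # Failure time whose median (tau = 0.5 quantile) is 1.0 + 2.0 * x,
    # generated with median-zero errors.
    t = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

    # Current status observation: only the monitoring time c and the
    # indicator delta = 1{t <= c} are observed, never t itself.
    c = rng.uniform(0.5, 4.0, n)      # observation times, independent of t
    delta = (t <= c).astype(int)

    print("fraction failed by observation time:", delta.mean())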
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
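A minimal sketch of the core idea, treating points on the lasso path as a discrete model space. Here the candidate models are weighted with a BIC approximation to the posterior model probability rather than the paper's Markov chain Monte Carlo model composition sampler, and all data are synthetic:

    import numpy as np
    from sklearn.linear_model import LinearRegression, lars_path

    rng = np.random.default_rng(1)
    n, p = 100, 20
    x = rng.normal(size=(n, p))
    beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
    y = x @ beta_true + rng.normal(size=n)

    # Each point on the lasso path defines a candidate support set (a model).
    _, _, coefs = lars_path(x, y, method="lasso")
    supports = {tuple(np.flatnonzero(c)) for c in coefs.T if np.any(c)}

    bics, betas = [], []
    for s in supports:
        xs = x[:, list(s)]
        fit = LinearRegression().fit(xs, y)
        rss = float(np.sum((y - fit.predict(xs)) ** 2))
        bics.append(n * np.log(rss / n) + len(s) * np.log(n))
        b = np.zeros(p); b[list(s)] = fit.coef_
        betas.append(b)

    # Model-averaged coefficients with BIC-based weights.
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min())); w /= w.sum()
    print(np.round(np.array(betas).T @ w, 2))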
Patients' mental models and adherence to outpatient physical therapy home exercise programs.
Rizzo, Jon
2015-05-01
Within physical therapy, patient adherence usually relates to attending appointments, following advice, and/or undertaking prescribed exercise. Similar to findings for general medical adherence, patient adherence to physical therapy home exercise programs (HEP) is estimated between 35 and 72%. Adherence to HEPs is a multifactorial and poorly understood phenomenon, with no consensus regarding a common theoretical framework that best guides empirical or clinical efforts. Mental models, a construct used to explain behavior and decision-making in the social sciences, may serve as this framework. Mental models comprise an individual's tacit thoughts about how the world works. They include assumptions about new experiences and expectations for the future based on implicit comparisons between current and past experiences. Mental models play an important role in decision-making and guiding actions. This professional theoretical article discusses empirical research demonstrating relationships among mental models, prior experience, and adherence decisions in medical and physical therapy contexts. Specific issues related to mental models and physical therapy patient adherence are discussed, including the importance of articulation of patients' mental models, assessment of patients' mental models that relate to exercise program adherence, discrepancy between patient and provider mental models, and revision of patients' mental models in ways that enhance adherence. The article concludes with practical implications for physical therapists and recommendations for further research to better understand the role of mental models in physical therapy patient adherence behavior.
A Social-Ecological Framework of Theory, Assessment, and Prevention of Suicide
Cramer, Robert J.; Kapusta, Nestor D.
2017-01-01
The juxtaposition of increasing suicide rates with continued calls for suicide prevention efforts begs for new approaches. Grounded in the Centers for Disease Control and Prevention (CDC) framework for tackling health issues, this personal views work integrates relevant suicide risk/protective factor, assessment, and intervention/prevention literatures. Based on these components of suicide risk, we articulate a Social-Ecological Suicide Prevention Model (SESPM) which provides an integration of general and population-specific risk and protective factors. We also use this multi-level perspective to provide a structured approach to understanding current theories and intervention/prevention efforts concerning suicide. Following similar multi-level prevention efforts in interpersonal violence and Human Immunodeficiency Virus (HIV) domains, we offer recommendations for social-ecologically informed suicide prevention theory, training, research, assessment, and intervention programming. Although the SESPM calls for further empirical testing, it provides a suitable backdrop for tailoring of current prevention and intervention programs to population-specific needs. Moreover, the multi-level model shows promise to move suicide risk assessment forward (e.g., development of multi-level suicide risk algorithms or structured professional judgments instruments) to overcome current limitations in the field. Finally, we articulate a set of characteristics of social-ecologically based suicide prevention programs. These include the need to address risk and protective factors with the strongest degree of empirical support at each multi-level layer, incorporate a comprehensive program evaluation strategy, and use a variety of prevention techniques across levels of prevention. PMID:29062296
Structure of High Latitude Currents in Magnetosphere-Ionosphere Models
NASA Astrophysics Data System (ADS)
Wiltberger, M.; Rigler, E. J.; Merkin, V.; Lyon, J. G.
2017-03-01
Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model we examine the structure of the high latitude field-aligned current patterns. Each resolution was run for the entire Whole Heliosphere Interval which contained two high speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases the currents become more intense and narrow. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths also results in the cross polar cap potential (CPCP) pattern being concentrated at higher latitudes. Current-voltage relationships between the R1 currents and the CPCP are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
The Journey to Interprofessional Collaborative Practice: Are We There Yet?
Golom, Frank D; Schreck, Janet Simon
2018-02-01
Interprofessional collaborative practice (IPCP) is a service delivery approach that seeks to improve health care outcomes and the patient experience while simultaneously decreasing health care costs. The current article reviews the core competencies and current trends associated with IPCP, including challenges faced by health care practitioners when working on interprofessional teams. Several conceptual frameworks and empirically supported interventions from the fields of organizational psychology and organization development are presented to assist health care professionals in transitioning their teams to a more interprofessionally collaborative, team-based model of practice. Copyright © 2017 Elsevier Inc. All rights reserved.
Use of Research for Transforming Youth Agencies
ERIC Educational Resources Information Center
Baizerman, Michael; Rence, Emily; Johnson, Sean
2013-01-01
Current philosophy and practice urge, even require for funding, that programs be empirically based and grounded in empirically proven emerging, promising, or best practices. In most of the human services, including youth programs, services, and practices, this requirement is a goal as well as an ideal. Empirical research and evaluation can be used…
Bringing Science to Bear: An Empirical Assessment of the Comprehensive Soldier Fitness Program
ERIC Educational Resources Information Center
Lester, Paul B.; McBride, Sharon; Bliese, Paul D.; Adler, Amy B.
2011-01-01
This article outlines the U.S. Army's effort to empirically validate and assess the Comprehensive Soldier Fitness (CSF) program. The empirical assessment includes four major components. First, the CSF scientific staff is currently conducting a longitudinal study to determine if the Master Resilience Training program and the Comprehensive…
Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics
NASA Technical Reports Server (NTRS)
Kenny, Robert Jeremy
2009-01-01
NASA's models for predicting lift-off acoustics of launch vehicles are currently being updated using several numerical and empirical inputs. One empirical input comes from free-field acoustic data measured at three Space Shuttle Reusable Solid Rocket Motor (RSRM) static firings. The measurements were collected by a joint collaboration between NASA - Marshall Space Flight Center, Wyle Labs, and ATK Launch Systems. For the first time NASA measured large-thrust solid rocket motor plume acoustics for evaluation of both noise sources and acoustic radiation properties. Over sixty acoustic free-field measurements were taken over the three static firings to support evaluation of acoustic radiation near the rocket plume, far-field acoustic radiation patterns, plume acoustic power efficiencies, and apparent noise source locations within the plume. At approximately 67 m off nozzle centerline and 70 m downstream of the nozzle exit plane, the measured overall sound pressure level of the RSRM was 155 dB. Peak overall levels in the far field were over 140 dB at 300 m and 50-deg off of the RSRM thrust centerline. The successful collaboration has yielded valuable data that are being implemented into NASA's lift-off acoustic models, which will then be used to update predictions for Ares I and Ares V liftoff acoustic environments.
Scaling Dissolved Nutrient Removal in River Networks: A Comparative Modeling Investigation
NASA Astrophysics Data System (ADS)
Ye, Sheng; Reisinger, Alexander J.; Tank, Jennifer L.; Baker, Michelle A.; Hall, Robert O.; Rosi, Emma J.; Sivapalan, Murugesu
2017-11-01
Along the river network, water, sediment, and nutrients are transported, cycled, and altered by coupled hydrological and biogeochemical processes. Our current understanding of the rates and processes controlling the cycling and removal of dissolved inorganic nutrients in river networks is limited due to a lack of empirical measurements in large (nonwadeable) rivers. The goal of this paper was to develop a coupled hydrological and biogeochemical process model to simulate nutrient uptake at the network scale during summer base flow conditions. The model was parameterized with literature values from headwater streams, and empirical measurements made in 15 rivers with varying hydrological, biological, and topographic characteristics, to simulate nutrient uptake at the network scale. We applied the coupled model to 15 catchments, describing patterns in uptake for three different solutes to determine the role of rivers in network-scale nutrient cycling. Model simulation results, constrained by empirical data, suggested that rivers contributed proportionally more to nutrient removal than headwater streams given the fraction of their length represented in a network. In addition, the variability of nutrient removal patterns among catchments differed among solutes and, as expected, was influenced by nutrient concentration and discharge. Net ammonium uptake was not significantly correlated with any environmental descriptor. In contrast, net daily nitrate removal was linked to suspended chlorophyll a (an indicator of primary producers) and land use characteristics. Finally, suspended sediment characteristics and agricultural land use were correlated with net daily removal of soluble reactive phosphorus, likely reflecting abiotic sorption dynamics. Rivers are understudied relative to streams, and our model suggests that rivers can contribute more to network-scale nutrient removal than would be expected based upon their representative fraction of network channel length.
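The reach-scale building block of such a network model can be written with standard nutrient-spiraling quantities: the fraction of incoming load removed in a reach depends on uptake velocity, wetted width, reach length, and discharge. The parameter values below are illustrative, not those of the paper:

    import math

    def fraction_removed(vf, width, length, discharge):
        # First-order removal over one reach: F = 1 - exp(-vf * w * L / Q),
        # with vf the uptake velocity (m/s), w the width (m), L the reach
        # length (m), and Q the discharge (m^3/s).
        return 1.0 - math.exp(-vf * width * length / discharge)

    print(fraction_removed(5e-6, 2.0, 1000.0, 0.05))   # headwater reach: ~18% removed
    print(fraction_removed(5e-6, 50.0, 1000.0, 50.0))  # river reach: ~0.5% removed
    # A river removes a smaller fraction per reach, but because it carries the
    # accumulated load of the whole network its absolute removal can be large.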
Current-voltage characteristics of dc corona discharges in air between coaxial cylinders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yuesheng, E-mail: yueshengzheng@fzu.edu.cn; Zhang, Bo, E-mail: shizbcn@tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn
This paper presents the experimental measurement and numerical analysis of the current-voltage characteristics of dc corona discharges in air between coaxial cylinders. The current-voltage characteristics for both positive and negative corona discharges were measured within a specially designed corona cage. The measured results were then fitted by different empirical formulae and analyzed by the fluid model. The current-voltage characteristics between coaxial cylinders can be expressed as I = C(U − U0)^m, where m is within the range 1.5-2.0, which is similar to the point-plane electrode system. The ionization region has no significant effect on the current-voltage characteristic under a low corona current, while it affects the distribution for the negative corona under a high corona current. The surface onset fields and ion mobilities are also discussed in detail.
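Fitting measured characteristics with the empirical form quoted above is straightforward; the sketch below uses synthetic data standing in for corona-cage measurements, with an assumed onset voltage of 20 kV and exponent 1.8:

    import numpy as np
    from scipy.optimize import curve_fit

    def corona_iv(u, c, u0, m):
        # Empirical corona law: I = C * (U - U0)^m above onset, 0 below.
        return c * np.clip(u - u0, 0.0, None) ** m

    u = np.linspace(25.0, 80.0, 40)  # applied voltage, kV
    rng = np.random.default_rng(2)
    i_meas = corona_iv(u, 2.0e-3, 20.0, 1.8) * (1.0 + 0.02 * rng.normal(size=u.size))

    popt, _ = curve_fit(corona_iv, u, i_meas, p0=[1e-3, 15.0, 1.7])
    print("C = %.2e, U0 = %.1f kV, m = %.2f" % tuple(popt))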
Use of transport models for wildfire behavior simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linn, R.R.; Harlow, F.H.
1998-01-01
Investigators have attempted to describe the behavior of wildfires for over fifty years. Current models for numerical description are mainly algebraic and based on statistical or empirical ideas. The authors have developed a transport model called FIRETEC. The use of transport formulations connects the propagation rates to the full conservation equations for energy, momentum, species concentrations, mass, and turbulence. In this paper, highlights of the model formulation and results are described. The goal of the FIRETEC model is to describe the most probable average behavior of wildfires in a wide variety of conditions. FIRETEC represents the essence of the combination of many small-scale processes without resolving each process in complete detail.
Modeling the effect of topical oxygen therapy on wound healing
NASA Astrophysics Data System (ADS)
Agyingi, Ephraim; Ross, David; Maggelakis, Sophia
2011-11-01
Oxygen supply is a critical element for the healing of wounds. Clinical investigations have shown that topical oxygen therapy (TOT) increases the healing rate of wounds. The reason behind TOT increasing the healing rate of a wound remains unclear and hence current protocols are empirical. In this paper we present a mathematical model of wound healing that we use to simulate the application of TOT in the treatment of cutaneous wounds. At the core of our model is an account of the initiation of angiogenesis by macrophage-derived growth factors. The model is expressed as a system of reaction-diffusion equations. We present results of simulations for a version of the model with one spatial dimension.
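A minimal one-dimensional sketch of the kind of reaction-diffusion system involved: a single growth-factor field with diffusion, production by macrophages inside the wound, and first-order decay. All parameters and units are illustrative; the paper's model couples several such fields, including oxygen:

    import numpy as np

    # Explicit finite differences for c_t = D c_xx + s(x) - k c
    nx, nt = 101, 5000
    dx, dt = 0.01, 1e-4
    d_coef, decay = 1e-3, 0.5
    x = np.linspace(0.0, 1.0, nx)
    source = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)  # production in the wound

    c = np.zeros(nx)
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (d_coef * lap + source - decay * c)
        c[0], c[-1] = c[1], c[-2]  # no-flux boundaries

    print("peak growth-factor concentration:", round(float(c.max()), 3))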
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Massey, Steven J.; Abdol-Hamid, Khaled S.; Frink, Neal T.
1999-01-01
USM3D is a widely-used unstructured flow solver for simulating inviscid and viscous flows over complex geometries. The current version (version 5.0) of USM3D, however, does not have advanced turbulence models to accurately simulate complicated flows. We have implemented two modified versions of the original Jones and Launder k-epsilon two-equation turbulence model and the Girimaji algebraic Reynolds stress model in USM3D. Tests have been conducted for two flat plate boundary layer cases, a RAE2822 airfoil and an ONERA M6 wing. The results are compared with those of empirical formulae, theoretical results and the existing Spalart-Allmaras one-equation model.
Body Topography Parcellates Human Sensory and Motor Cortex.
Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S
2017-07-01
The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogenous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogenous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.
Leder, Helmut; Nadal, Marcos
2014-11-01
About a decade ago, psychology of the arts started to gain momentum owing to a number of drives: technological progress improved the conditions under which art could be studied in the laboratory, neuroscience discovered the arts as an area of interest, and new theories offered a more comprehensive look at aesthetic experiences. Ten years ago, Leder, Belke, Oeberst, and Augustin (2004) proposed a descriptive information-processing model of the components that integrate an aesthetic episode. This theory offered explanations for modern art's large number of individualized styles, innovativeness, and for the diverse aesthetic experiences it can stimulate. In addition, it described how information is processed over the time course of an aesthetic episode, within and over perceptual, cognitive and emotional components. Here, we review the current state of the model, and its relation to the major topics in empirical aesthetics today, including the nature of aesthetic emotions, the role of context, and the neural and evolutionary foundations of art and aesthetics. © 2014 The British Psychological Society.
Alcohol Use and Suicidal Behaviors among Adults: A Synthesis and Theoretical Model
Lamis, Dorian A.; Malone, Patrick S.
2012-01-01
Suicidal behavior and alcohol use are major public health concerns in the United States; however the association between these behaviors has received relatively little empirical attention. The relative lack of research in this area may be due in part to the absence of theory explaining the alcohol use-suicidality link in the general adult population. The present article expands upon Conner, McCloskey, and Duberstein’s (2008) model of suicide in individuals with alcoholism and proposes a theoretical framework that can be used to explain why a range of adult alcohol users may engage in suicidal behaviors. Guided by this model, we review and evaluate the evidence on the associations among several constructs that may contribute to suicidal behaviors in adult alcohol consumers. The current framework should inform future research and facilitate further empirical analyses on the interactive effects among risk factors that may contribute to suicidal behaviors. Once the nature of these associations is better understood among alcohol using adults, more effective suicide prevention programs may be designed and implemented. PMID:23243500
NASA Astrophysics Data System (ADS)
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
2015-12-01
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
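The "black box" abstraction described above might look like the following minimal sketch; the class and method names are hypothetical illustrations, not AtModWeb's actual code:

    from abc import ABC, abstractmethod

    class AtmosphericModel(ABC):
        # Uniform wrapper so the web front end can drive any empirical
        # model without knowing its internals (hypothetical interface).
        outputs = {}  # e.g. {"Tn": "K", "rho": "kg/m^3"}

        @abstractmethod
        def run(self, lats, lons, alts, when, drivers):
            # drivers: dict of activity indices (e.g. F10.7, Ap), either
            # looked up for the date or specified manually by the user.
            # Returns {variable_name: array}.
            ...

    class MSISModel(AtmosphericModel):
        outputs = {"Tn": "K", "rho": "kg/m^3"}

        def run(self, lats, lons, alts, when, drivers):
            # A real implementation would call an NRLMSISE-00 binding here.
            raise NotImplementedError

    # Adding the IRI or another model is just another subclass; the plotting
    # front end only ever sees the AtmosphericModel interface.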
How good are indirect tests at detecting recombination in human mtDNA?
White, Daniel James; Bryant, David; Gemmell, Neil John
2013-07-08
Empirical proof of human mitochondrial DNA (mtDNA) recombination in somatic tissues was obtained in 2004; however, a lack of irrefutable evidence exists for recombination in human mtDNA at the population level. Our inability to demonstrate convincingly a signal of recombination in population data sets of human mtDNA sequence may be due, in part, to the ineffectiveness of current indirect tests. Previously, we tested some well-established indirect tests of recombination (linkage disequilibrium vs. distance using D′ and r², Homoplasy Test, Pairwise Homoplasy Index, Neighborhood Similarity Score, and Max χ²) on sequence data derived from the only empirically confirmed case of human mtDNA recombination thus far and demonstrated that some methods were unable to detect recombination. Here, we assess the performance of these six well-established tests and explore what characteristics specific to human mtDNA sequence may affect their efficacy by simulating sequence under various parameters with levels of recombination (ρ) that vary around an empirically derived estimate for human mtDNA (population parameter ρ = 5.492). No test performed infallibly under any of our scenarios, and error rates varied across tests, whereas detection rates increased substantially with ρ values > 5.492. Under a model of evolution that incorporates parameters specific to human mtDNA, including rate heterogeneity, population expansion, and ρ = 5.492, successful detection rates are limited to a range of 7-70% across tests with an acceptable level of false-positive results: the neighborhood similarity score incompatibility test performed best overall under these parameters. Population growth seems to have the greatest impact on recombination detection probabilities across all models tested, likely due to its impact on sequence diversity. The implications of our findings on our current understanding of mtDNA recombination in humans are discussed. PMID:23665874
NASA Astrophysics Data System (ADS)
Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu
2016-04-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model become significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
Tsareva, Daria A; Osolodkin, Dmitry I; Shulga, Dmitry A; Oliferenko, Alexander A; Pisarev, Sergey A; Palyulin, Vladimir A; Zefirov, Nikolay S
2011-03-14
Two fast empirical charge models, the Kirchhoff Charge Model (KCM) and Dynamic Electronegativity Relaxation (DENR), were previously developed in our laboratory for widespread use in drug design research. Both models are based on the electronegativity relaxation principle (Adv. Quantum Chem. 2006, 51, 139-156) and parameterized against ab initio dipole/quadrupole moments and molecular electrostatic potentials, respectively. As 3D QSAR studies comprise one of the most important fields of applied molecular modeling, they naturally became the first topic on which to test our charges and thus, indirectly, the assumptions underlying the charge model theories in a case study. Here these charge models are used in CoMFA and CoMSIA methods and tested on five glycogen synthase kinase 3 (GSK-3) inhibitor datasets, relevant to our current studies, and one steroid dataset. For comparison, eight other charge models, ab initio through semiempirical and empirical, were tested on the same datasets. A comprehensive analysis was carried out, covering correlation and cross-validation, charge robustness and predictability, and the visual interpretability of the generated 3D contour maps. As a result, our new electronegativity relaxation-based models both showed stable results, which in conjunction with other benefits discussed render them suitable for building reliable 3D QSAR models. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simpson-Southward, Chloe; Waller, Glenn; Hardy, Gillian E
2017-11-01
Clinical supervision for psychotherapies is widely used in clinical and research contexts. Supervision is often assumed to ensure therapy adherence and positive client outcomes, but there is little empirical research to support this contention. Regardless, there are numerous supervision models, but it is not known how consistent their recommendations are. This review aimed to identify which aspects of supervision are consistent across models, and which are not. A content analysis of 52 models revealed 71 supervisory elements. Models focus more on supervisee learning and/or development (88.46%), but less on emotional aspects of work (61.54%) or managerial or ethical responsibilities (57.69%). Most models focused on the supervisee (94.23%) and supervisor (80.77%), rather than the client (48.08%) or monitoring client outcomes (13.46%). Finally, none of the models were clearly or adequately empirically based. Although we might expect clinical supervision to contribute to positive client outcomes, the existing models have limited client focus and are inconsistent. Therefore, it is not currently recommended that one should assume that the use of such models will ensure consistent clinician practice or positive therapeutic outcomes. There is little evidence for the effectiveness of supervision. There is a lack of consistency in supervision models. Services need to assess whether supervision is effective for practitioners and patients. Copyright © 2017 John Wiley & Sons, Ltd.
Structure of high latitude currents in magnetosphere-ionosphere models
NASA Astrophysics Data System (ADS)
Wiltberger, M. J.; Lyon, J.; Merkin, V. G.; Rigler, E. J.
2016-12-01
Using three resolutions of the Lyon-Fedder-Mobarry global magnetosphere-ionosphere model (LFM) and the Weimer 2005 empirical model, the structure of the high latitude field-aligned current patterns is examined. Each LFM resolution was run for the entire Whole Heliosphere Interval (WHI), which contained two high-speed solar wind streams and modest interplanetary magnetic field strengths. Average states of the field-aligned current (FAC) patterns for 8 interplanetary magnetic field clock angle directions are computed using data from these runs. Generally speaking, the patterns obtained agree well with results from the Weimer 2005 model computed using the solar wind and IMF conditions that correspond to each bin. As the simulation resolution increases the currents become more intense and confined. A machine learning analysis of the FAC patterns shows that the ratio of Region 1 (R1) to Region 2 (R2) currents decreases as the simulation resolution increases. This brings the simulation results into better agreement with observational predictions and the Weimer 2005 model results. The increase in R2 current strengths in the model also results in a better shielding of the mid- and low-latitude ionosphere from the polar cap convection, also in agreement with observations. Current-voltage relationships between the R1 strength and the cross-polar cap potential (CPCP) are quite similar at the higher resolutions, indicating the simulation is converging on a common solution. We conclude that LFM simulations are capable of reproducing the statistical features of FAC patterns.
The Comprehensive Inner Magnetosphere-Ionosphere Model
NASA Technical Reports Server (NTRS)
Fok, M.-C.; Buzulukova, N. Y.; Chen, S.-H.; Glocer, A.; Nagai, T.; Valek, P.; Perez, J. D.
2014-01-01
Simulation studies of the Earth's radiation belts and ring current are very useful in understanding the acceleration, transport, and loss of energetic particles. Recently, the Comprehensive Ring Current Model (CRCM) and the Radiation Belt Environment (RBE) model were merged to form a Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model. CIMI solves for many essential quantities in the inner magnetosphere, including ion and electron distributions in the ring current and radiation belts, plasmaspheric density, Region 2 currents, convection potential, and precipitation in the ionosphere. It incorporates whistler mode chorus and hiss wave diffusion of energetic electrons in energy, pitch angle, and cross terms. CIMI thus represents a comprehensive model that considers the effects of the ring current and plasmasphere on the radiation belts. We have performed a CIMI simulation for the storm on 5-9 April 2010 and then compared our results with data from the Two Wide-angle Imaging Neutral-atom Spectrometers and Akebono satellites. We identify the dominant energization and loss processes for the ring current and radiation belts. We find that the interactions with the whistler mode chorus waves are the main cause of the flux increase of MeV electrons during the recovery phase of this particular storm. When a self-consistent electric field from the CRCM is used, the enhancement of MeV electrons is higher than when an empirical convection model is applied. We also demonstrate how CIMI can be a powerful tool for analyzing and interpreting data from the new Van Allen Probes mission.
NASA Astrophysics Data System (ADS)
Danov, Dimitar
2008-02-01
The statistical field-aligned current (FAC) distribution has been demonstrated by [Iijima, T., Potemra, T.A., 1976. The amplitude distribution of field-aligned currents at northern high latitudes observed by Triad. Journal of Geophysical Research 81(13), 2165-2174] and many other authors. The large-scale (LS) FACs have been described by different empirical/statistical models [Feldstein, Ya.I., Levitin, A.E., 1986. Solar wind control of electric fields and currents in the ionosphere. Journal of Geomagnetism and Geoelectricity 38, 1143; Papitashvili, V.O., Rich, F.J., Heinemann, M.A., Hairston, M.R., 1999. Parameterization of the Defense Meteorological Satellite Program ionospheric electrostatic potentials by the interplanetary magnetic field strength and direction. Journal of Geophysical Research 104, 177-184; Papitashvili, V.O., Christiansen, F., Neubert, T., 2002. A new model of field-aligned currents derived from high-precision satellite magnetic field data. Geophysical Research Letters 29(14), 1683, doi:10.1029/2001GL014207; Tsyganenko, N.A., 2001. A model of the near magnetosphere with a dawn-dusk asymmetry (I. Mathematical structure). Journal of Geophysical Research 107(A8), doi:10.1029/2001JA000219; Weimer, D.R., 1996a. A new model for prediction of ionospheric electric potentials as a function of the IMF. In: Snowmass'96 Online Poster Session; Weimer, D.R., 1996b. Substorm influence on the ionospheric convection patterns. In: Snowmass'96 Online Poster Session; Weimer, D.R., 2001. Maps of ionospheric field-aligned currents as a function of the interplanetary magnetic field derived from Dynamic Explorer 2 data. Journal of Geophysical Research 106, 12,889-12,902; Weimer, D.R., 2005. Improved ionospheric electrodynamic models and application to calculating Joule heating rates. Journal of Geophysical Research 110, A05306, doi:10.1029/2004JA010884]. In the present work, we compare two cases of LS FAC obtained from magnetic field measurements onboard the Intercosmos Bulgaria-1300 satellite with three models: two empirical [Tsyganenko, N.A., 2001. A model of the near magnetosphere with a dawn-dusk asymmetry (I. Mathematical structure). Journal of Geophysical Research 107(A8), doi:10.1029/2001JA000219; Weimer, D.R., 2005. Improved ionospheric electrodynamic models and application to calculating Joule heating rates. Journal of Geophysical Research 110, A05306, doi:10.1029/2004JA010884] and one MHD simulation run at the Community Coordinated Modeling Center (CCMC) [Toth, G., et al., 2005. Space weather modeling framework: a new tool for the space science community. Journal of Geophysical Research 110, A12226, doi:10.1029/2005JA011126]. We found that the position of the measured FAC is close to the positions predicted by the models, but the measured density can be greater than the model FAC densities. We discuss the possible reasons for the observed discrepancy between the measured and modeled FACs.
Empirical Relationships from Regional Infrasound Signals
NASA Astrophysics Data System (ADS)
Negraru, P. T.; Golden, P.
2011-12-01
Two years of infrasound observations from well-controlled explosive sources were collected at two arrays located within the so-called "Zone of Silence" or "Shadow Zone" to investigate the long-term atmospheric effects on signal propagation. The first array (FNIAR) is located north of Fallon, NV, 154 km from the munitions disposal facility outside Hawthorne, NV, while the second array (DNIAR) is located near Mercury, NV, approximately 293 km southeast of the detonation site. Based on celerity values, approximately 80% of the observed arrivals at FNIAR are considered stratospheric (celerities below 300 m/s), while 20% of them propagated in tropospheric waveguides with celerities of 330-345 m/s. Although there is considerable scatter in the celerity values, two seasonal effects were observed in both years: (1) a gradual decrease in celerity from summer to winter (July/January period) and (2) an increase in celerity values starting in April. In the winter months celerity values can be extremely variable, and we have observed signals with celerities as low as 240 m/s. In contrast, at DNIAR we observe much stronger seasonal variations. In winter months we have observed tropospheric, stratospheric, and thermospheric arrivals, while in summer mostly tropospheric and slower thermospheric arrivals dominate. This interpretation is consistent with the seasonal variation of the stratospheric winds and was confirmed by ray tracing with G2S models. In addition, we discuss how the observed infrasound arrivals can be used to improve ground truth estimation methods (location, origin times, and yield). For instance, an empirical wind parameter derived from G2S models suggests that the differences in celerity values observed at the two arrays can be explained by changes in wind conditions. We have also started improving location algorithms that take into account empirical celerity models derived from celerity/wind plots.
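Celerity, the quantity on which this classification rests, is simply the great-circle range divided by total travel time. A minimal sketch, with band edges loosely based on the values quoted above, not the authors' actual classification rules:

```python
def celerity(range_km, travel_time_s):
    """Celerity (m/s): great-circle range divided by total travel time."""
    return range_km * 1000.0 / travel_time_s

def classify_arrival(c):
    """Rough waveguide classification by celerity band.

    Band edges are illustrative, loosely based on the values quoted in the
    abstract (tropospheric ~330-345 m/s, stratospheric below ~300 m/s);
    real studies tune these to the local climatology.
    """
    if c >= 330.0:
        return "tropospheric"
    if 240.0 <= c < 310.0:
        return "stratospheric"
    if c < 240.0:
        return "thermospheric"
    return "unclassified"

# Example: an arrival at FNIAR (154 km from the source) 540 s after the shot.
c = celerity(154.0, 540.0)
print(round(c, 1), classify_arrival(c))  # 285.2 stratospheric
```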
Trophic interaction modifications: an empirical and theoretical framework.
Terry, J Christopher D; Morris, Rebecca J; Bonsall, Michael B
2017-10-01
Consumer-resource interactions are often influenced by other species in the community. At present these 'trophic interaction modifications' are rarely included in ecological models despite demonstrations that they can drive system dynamics. Here, we advocate and extend an approach that has the potential to unite and represent this key group of non-trophic interactions by emphasising the change to trophic interactions induced by modifying species. We highlight the opportunities this approach brings in comparison to frameworks that coerce trophic interaction modifications into pairwise relationships. To establish common frames of reference and explore the value of the approach, we set out a range of metrics for the 'strength' of an interaction modification which incorporate increasing levels of contextual information about the system. Through demonstrations in three-species model systems, we establish that these metrics capture complementary aspects of interaction modifications. We show how the approach can be used in a range of empirical contexts; we identify experiments with multiple levels of modifier species, and the distributions of modifications in networks, as specific gaps in current understanding. The trophic interaction modification approach we propose can motivate and unite empirical and theoretical studies of system dynamics, providing a route to confront ecological complexity. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
Korth, Haje; Tsyganenko, Nikolai A; Johnson, Catherine L; Philpott, Lydia C; Anderson, Brian J; Al Asad, Manar M; Solomon, Sean C; McNutt, Ralph L
2015-06-01
Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT R_M^3, where R_M is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross-tail current sheet. The cross-tail current is described by a disk-shaped current near the planet and a sheet current at larger (≳5 R_M) antisunward distances. The tail currents are constrained by minimizing the root-mean-square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause.
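For orientation, the internal-field term quoted above (a spin-aligned dipole of 190 nT R_M^3, offset 479 km northward) can be evaluated directly. The sketch below does only that, with an assumed Mercury-like (southward) moment orientation, and ignores all external magnetopause and tail contributions.

```python
import numpy as np

R_M = 2439.7e3          # Mercury radius in meters
B0 = 190.0              # dipole coefficient from the abstract, nT * R_M^3
Z_OFFSET = 479e3 / R_M  # northward dipole offset in units of R_M

def offset_dipole_field(pos_rm, b0=B0, dz=Z_OFFSET):
    """Magnetic field (nT) of a spin-axis-aligned dipole offset northward.

    pos_rm: position in planet-centered coordinates, units of R_M, z along
    the spin axis. The moment is taken along -z (an assumed Mercury-like
    polarity); external magnetopause/tail fields are deliberately omitted.
    """
    r = np.asarray(pos_rm, dtype=float) - np.array([0.0, 0.0, dz])
    rmag = np.linalg.norm(r)
    m = np.array([0.0, 0.0, -b0])           # moment in nT * R_M^3
    rhat = r / rmag
    return (3.0 * rhat * np.dot(m, rhat) - m) / rmag**3

# Field at the surface above the north pole (r = 1 R_M along the spin axis):
print(offset_dipole_field([0.0, 0.0, 1.0]))
```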
NASA Iced Aerodynamics and Controls Current Research
NASA Technical Reports Server (NTRS)
Addy, Gene
2009-01-01
This slide presentation reviews the state of current research in aircraft aerodynamics and control under icing conditions by the Aviation Safety Program, part of the Integrated Resilient Aircraft Controls Project (IRAC). Included in the presentation is an overview of the modeling efforts. The objective of the modeling is to develop experimental and computational methods to model and predict aircraft response during adverse flight conditions, including icing. The aircraft icing modeling effort includes Ice-Contaminated Aerodynamics Modeling, which examines the effects of ice contamination on aircraft aerodynamics together with CFD modeling of ice-contaminated aircraft aerodynamics, and Advanced Ice Accretion Process Modeling, which examines the physics of ice accretion and works on computational modeling of ice accretions. The IRAC testbed, a Generic Transport Model (GTM), and its use in investigating the effects of icing on its aerodynamics are also reviewed. This work has led to a more thorough understanding of icing physics and ice accretion for airframes, to theoretical and empirical models of both, to advanced 3D ice accretion prediction codes, to CFD methods for iced aerodynamics, and to a better understanding of aircraft iced aerodynamics and its effects on control surface effectiveness.
NASA Astrophysics Data System (ADS)
de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia
Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least-absolute-deviation-based) neural networks in measuring the liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios, and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved by considering the flexible non-linear structures provided by neural networks.
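The robustness idea behind least-absolute-deviation (LAD) fitting can be shown with a minimal linear example: the L1 loss pays a constant marginal penalty per unit of residual, so a few gross outliers do not drag the fit. A toy subgradient-descent sketch; the paper's robust neural networks are far more elaborate, and all data and parameters here are invented.

```python
import numpy as np

def fit_lad_linear(x, y, lr=0.01, epochs=5000):
    """Fit y ~ a*x + b by minimizing mean absolute deviation (LAD).

    Subgradient descent with a decaying step; the L1 loss makes the fit
    robust to the heavy-tailed outliers common in financial ratio data.
    """
    a, b = 0.0, 0.0
    for t in range(epochs):
        step = lr / np.sqrt(t + 1.0)
        s = np.sign(y - (a * x + b))
        a += step * np.mean(s * x)
        b += step * np.mean(s)
    return a, b

# Synthetic example: current assets vs. current liabilities with outliers.
rng = np.random.default_rng(1)
liabilities = rng.uniform(1.0, 10.0, 200)
assets = 1.3 * liabilities + rng.laplace(0.0, 0.5, 200)  # current ratio ~1.3
assets[:5] += 30.0                                       # a few gross outliers
a, b = fit_lad_linear(liabilities, assets)
print(round(a, 2), round(b, 2))  # slope stays near 1.3 despite the outliers
```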
Kuo, Ben Ch; Kwantes, Catherine T
2014-01-01
Despite the prevalence and popularity of research on positive and negative affect within the field of psychology, there is currently little research on affect that examines cultural variables and includes participants of diverse cultural and ethnic backgrounds. To the authors' knowledge, no empirical studies have comprehensively examined predictive models of positive and negative affect based specifically on multiple psychosocial, acculturation, and coping variables as predictors with any sample population. Therefore, the purpose of the present study was to test the predictive power of perceived stress, social support, bidirectional acculturation (i.e., Canadian acculturation and heritage acculturation), religious coping, and cultural coping (i.e., collective, avoidance, and engagement coping) in explaining positive and negative affect in a multiethnic sample of 301 undergraduate students in Canada. Two hierarchical multiple regressions were conducted, one for each affect as the dependent variable, with the above-described predictors. The results supported the hypotheses and showed the two overall models to be significant in predicting affect of both kinds. Specifically, a higher level of positive affect was predicted by a lower level of perceived stress, less use of religious coping, and more use of engagement coping in dealing with stress by the participants. A higher level of negative affect, however, was predicted by a higher level of perceived stress and more use of avoidance coping in responding to stress. The current findings highlight the value and relevance of empirically examining the stress-coping-adaptation experiences of diverse populations from an affective conceptual framework, particularly with the inclusion of positive affect. Implications and recommendations for advancing future research and theoretical work in this area are considered and presented.
Stopping Distances: An Excellent Example of Empirical Modelling.
ERIC Educational Resources Information Center
Lawson, D. A.; Tabor, J. H.
2001-01-01
Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…
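The underlying empirical model is the classic two-term stopping distance: reaction (thinking) distance plus braking distance, d = v*t_r + v^2/(2a). A short sketch with illustrative parameter values; the article itself fits such models to published stopping-distance tables.

```python
def stopping_distance(speed_kmh, reaction_time_s=1.0, decel_ms2=6.5):
    """Total stopping distance (m) = reaction distance + braking distance.

    d = v*t_r + v**2 / (2*a). The reaction time and braking deceleration
    are illustrative defaults, not values from the cited article.
    """
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_time_s + v**2 / (2.0 * decel_ms2)

for s in (30, 50, 70, 90, 110):
    print(f"{s:>3} km/h -> {stopping_distance(s):5.1f} m")
```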
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
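Tree balance in studies of this kind is typically summarized by an index such as the Colless imbalance; the abstract does not say which statistic was used, so the sketch below simply shows the Colless index on nested-tuple trees as one standard choice.

```python
def colless(tree):
    """Colless imbalance of a binary tree given as nested 2-tuples.

    Leaves are any non-tuple object. Returns (n_leaves, index), where the
    index sums |n_left - n_right| over all internal nodes; larger values
    mean a less balanced tree.
    """
    if not isinstance(tree, tuple):
        return 1, 0
    nl, il = colless(tree[0])
    nr, ir = colless(tree[1])
    return nl + nr, il + ir + abs(nl - nr)

balanced = (("a", "b"), ("c", "d"))
caterpillar = ((("a", "b"), "c"), "d")
print(colless(balanced)[1], colless(caterpillar)[1])  # 0 and 3
```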
Velayos, Fernando S; Kahn, James G; Sandborn, William J; Feagan, Brian G
2013-06-01
Patients with Crohn's disease who become unresponsive to therapy with tumor necrosis factor antagonists are managed initially with either empiric dose escalation or testing-based strategies. The comparative cost effectiveness of these 2 strategies is unknown. We investigated whether a testing-based strategy is more cost effective than an empiric dose-escalation strategy. A decision analytic model that simulated 2 cohorts of patients with Crohn's disease compared outcomes for the 2 strategies over a 1-year time period. The incremental cost-effectiveness ratio of the empiric strategy was expressed as cost per quality-adjusted life-year (QALY) gained, compared with the testing-based strategy. We performed 1-way, probabilistic, and prespecified secondary analyses. The testing strategy yielded similar QALYs compared with the empiric strategy (0.801 vs 0.800, respectively) but was less expensive ($31,870 vs $37,266, respectively). In sensitivity analyses, the incremental cost-effectiveness ratio of the empiric strategy ranged from $500,000 to more than $5 million per QALY gained. Similar rates of remission (63% vs 66%) and response (28% vs 26%) were achieved through differential use of available interventions. The testing-based strategy resulted in a higher percentage of surgeries (48% vs 34%) and lower percentage use of high-dose biological therapy (41% vs 54%). A testing-based strategy is a cost-effective alternative to the current strategy of empiric dose escalation for managing patients with Crohn's disease who have lost responsiveness to infliximab. The basis for this difference is lower cost at similar outcomes. Copyright © 2013 AGA Institute. Published by Elsevier Inc. All rights reserved.
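The comparison above reduces to an incremental cost-effectiveness ratio (ICER), with the special case of dominance when one strategy is both cheaper and at least as effective. A worked check using the base-case numbers quoted in the abstract:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of 'new' vs. 'ref' ($ per QALY)."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost <= 0 and d_qaly >= 0:
        return "new strategy dominates (cheaper, at least as effective)"
    if d_cost >= 0 and d_qaly <= 0:
        return "new strategy dominated (costlier, no more effective)"
    return d_cost / d_qaly

# Base-case values from the abstract: empiric escalation vs. testing-based.
print(icer(37266, 0.800, 31870, 0.801))
# -> dominated: empiric escalation costs $5,396 more and yields 0.001
#    fewer QALYs than the testing-based strategy.
```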
Empirical approaches to the study of language evolution.
Fitch, W Tecumseh
2017-02-01
The study of language evolution, and human cognitive evolution more generally, has often been ridiculed as unscientific, but in fact it differs little from many other disciplines that investigate past events, such as geology or cosmology. Well-crafted models of language evolution make numerous testable hypotheses, and if the principles of strong inference (simultaneous testing of multiple plausible hypotheses) are adopted, there is an increasing amount of relevant data allowing empirical evaluation of such models. The articles in this special issue provide a concise overview of current models of language evolution, emphasizing the testable predictions that they make, along with overviews of the many sources of data available to test them (emphasizing comparative, neural, and genetic data). The key challenge facing the study of language evolution is not a lack of data, but rather a weak commitment to hypothesis-testing approaches and strong inference, exacerbated by the broad and highly interdisciplinary nature of the relevant data. This introduction offers an overview of the field, and a summary of what needed to evolve to provide our species with language-ready brains. It then briefly discusses different contemporary models of language evolution, followed by an overview of different sources of data to test these models. I conclude with my own multistage model of how different components of language could have evolved.
Studying aerodynamic drag for modeling the kinematical behavior of CMEs
NASA Astrophysics Data System (ADS)
Temmer, M.; Vrsnak, B.; Moestl, C.; Zic, T.; Veronig, A. M.; Rollett, T.
2013-12-01
With the SECCHI instrument suite aboard STEREO, coronal mass ejections (CMEs) can be observed from multiple vantage points during their entire propagation from the Sun to 1 AU. The propagation behavior of CMEs in interplanetary space is mainly influenced by the ambient solar wind flow. CMEs that are faster than the ambient solar wind are decelerated, whereas slower ones are accelerated until the CME speed is adjusted to the solar wind speed. On a statistical basis, empirical models that take into account the drag force acting on CMEs are able to describe the observed kinematical behavior. For several well-observed CME events we derive the kinematical evolution by combining remote sensing and in situ data. The observed kinematical behavior is compared to results from current empirical and numerical propagation models. For this we mainly use the drag-based model (DBM) as well as the MHD model ENLIL. We aim to determine the distance range over which the solar wind drag force dominates CME propagation and to quantify differences between model results. This work has received funding from the FWF: V195-N16, and the European Commission FP7 Projects eHEROES (284461, www.eheroes.eu) and COMESEP (263252, www.comesep.eu).
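The drag-based model referred to above rests on the equation of motion dv/dt = -gamma*(v - w)*|v - w|, where w is the ambient solar wind speed and gamma the drag parameter. A minimal numerical sketch; the launch speed, gamma, and w below are illustrative, not the event values of this study.

```python
def dbm_propagate(v0_kms=800.0, r0_rsun=20.0, w_kms=400.0,
                  gamma_per_km=2e-8, dt_s=300.0):
    """Integrate the drag-based model dv/dt = -gamma*(v-w)*|v-w| to 1 AU.

    gamma in km^-1, speeds in km/s; returns (transit_time_days,
    arrival_speed_kms). Forward Euler is adequate at this step size.
    """
    RSUN_KM, AU_KM = 6.957e5, 1.496e8
    r, v, t = r0_rsun * RSUN_KM, v0_kms, 0.0
    while r < AU_KM:
        dv = v - w_kms
        v -= gamma_per_km * dv * abs(dv) * dt_s
        r += v * dt_s
        t += dt_s
    return t / 86400.0, v

days, v_arr = dbm_propagate()
print(f"transit: {days:.2f} days, arrival speed: {v_arr:.0f} km/s")
# A CME launched at 800 km/s into a 400 km/s wind arrives in roughly
# 2.5 days at about 550 km/s for this choice of gamma.
```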
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high-performance polycrystalline ceramic fibers are currently being considered as reinforcement for high-temperature ceramic matrix composites. However, under mechanical loading above 800 °C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high-performance polycrystalline ceramic fibers are currently being considered as reinforcement for high-temperature ceramic matrix composites. However, under mechanical loading above 800 °C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary-stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Schawo, Saskia J; van Eeren, Hester; Soeteman, Djira I; van der Veldt, Marie-Christine; Noom, Marc J; Brouwer, Werner; Busschbach, Jan J V; Hakkaart, Leona
2012-12-01
Many interventions initiated within and financed from the health care sector are not primarily aimed at improving health. This poses important questions regarding the operationalisation of economic evaluations in such contexts. We investigated whether assessing cost-effectiveness using state-of-the-art methods commonly applied in health care evaluations is feasible and meaningful when evaluating interventions aimed at reducing youth delinquency. A probabilistic Markov model was constructed to create a framework for assessing the cost-effectiveness of systemic interventions in delinquent youth. For illustrative purposes, Functional Family Therapy (FFT), a systemic intervention aimed at improving family functioning and, primarily, reducing delinquent activity in youths, was compared to Treatment as Usual (TAU). "Criminal activity free years" (CAFYs) were introduced as the central outcome measure. Criminal activity may be based on, for example, police contacts or committed crimes. In the absence of extensive data, and for illustrative purposes, the current study based criminal activity on the available literature on recidivism. Furthermore, a literature search was performed to deduce the model's structure and parameters. Common cost-effectiveness methodology could be applied to interventions for youth delinquency. Model characteristics and parameters were derived from the literature and ongoing trial data. The model resulted in an estimate of incremental costs per CAFY and included long-term effects. Illustrative model results point towards dominance of FFT compared to TAU. Using a probabilistic model and the CAFY outcome measure to assess the cost-effectiveness of systemic interventions aimed at reducing delinquency is feasible. However, the model structure is limited to three states, and the CAFY measure was defined rather crudely. Moreover, as the model parameters are retrieved from the literature, the model results are illustrative in the absence of empirical data. The current model provides a framework to assess the cost-effectiveness of systemic interventions while taking into account parameter uncertainty and long-term effectiveness. The framework could be used to assess the cost-effectiveness of systemic interventions alongside (clinical) trial data. Consequently, it is suitable to inform reimbursement decisions, since the value for money of systemic interventions can be demonstrated using a decision analytic model. Future research could focus on testing the current model with extensive empirical data, improving the outcome measure, and finding appropriate values for that outcome.
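A three-state Markov cohort model of the kind described can be sketched in a few lines; the state labels and transition probabilities below are invented placeholders, not values from the study, and serve only to show how CAFYs accumulate over the model horizon.

```python
import numpy as np

# States (hypothetical labels): 0 = crime-free, 1 = criminally active,
# 2 = detained/institutionalized. Transition probabilities per one-year
# cycle are invented placeholders, not values from the study.
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.55, 0.15],
              [0.20, 0.30, 0.50]])

def cafy(p_transition, start_state=1, years=10):
    """Expected criminal-activity-free years over the model horizon.

    A cohort starts in `start_state`; each cycle it is redistributed by
    the transition matrix, and a year spent in state 0 counts as 1 CAFY.
    """
    dist = np.zeros(3)
    dist[start_state] = 1.0
    total = 0.0
    for _ in range(years):
        dist = dist @ p_transition
        total += dist[0]
    return total

print(round(cafy(P), 2))  # expected CAFYs for an initially active youth
```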
VMF3/GPT3: refined discrete and empirical troposphere mapping functions
NASA Astrophysics Data System (ADS)
Landskron, Daniel; Böhm, Johannes
2018-04-01
Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from the zenith direction down to the observed elevation angle, so-called mapping functions. This paper presents a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1), which is henceforth referred to as Vienna Mapping Functions 3 (VMF3). It is designed in such a way as to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of 3°. Ray-traced delays from the ray-tracer RADIATE serve as the basis for the calculation of new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when highest precision is required, VMF3 is preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a 5° × 5° as well as a 1° × 1° global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques applied to the respective (discrete) VMF3 data. In addition, GPT3 also contains a set of meteorological quantities which are adopted as they stand from their predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a series of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
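VMF1 and VMF3 share the standard three-coefficient continued-fraction form of the mapping function. The sketch below evaluates that form; the coefficient values are order-of-magnitude placeholders for a hydrostatic mapping function, not actual VMF3 grid values.

```python
import math

def mapping_function(elev_deg, a, b, c):
    """Three-term continued-fraction mapping function (Herring form).

    mf(e) = (1 + a/(1 + b/(1 + c))) / (sin e + a/(sin e + b/(sin e + c)))
    This is the functional form shared by VMF1/VMF3; the coefficients are
    supplied externally (e.g., from gridded products).
    """
    se = math.sin(math.radians(elev_deg))
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    bot = se + a / (se + b / (se + c))
    return top / bot

# Placeholder hydrostatic coefficient magnitudes, not real grid values:
a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
for e in (90, 30, 10, 5, 3):
    print(f"{e:>2} deg -> mf = {mapping_function(e, a, b, c):6.3f}")
# mf is 1 at zenith and grows to roughly 10 near 5 deg elevation,
# which is why low-elevation accuracy is the demanding case.
```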
O'Keefe, Victoria M; Wingate, LaRicka R; Tucker, Raymond P; Rhoades-Kerswill, Sarah; Slish, Meredith L; Davidson, Collin L
2014-01-01
American Indians (AIs) experience increased suicide rates compared with other groups in the United States. However, no past studies have examined AI suicide by way of a recent empirically supported theoretical model of suicide. The current study investigated whether AI suicidal ideation can be predicted by two components: thwarted belongingness and perceived burdensomeness, from the Interpersonal-Psychological Theory of Suicide (T. E. Joiner, 2005, Why people die by suicide. Cambridge, MA: Harvard University Press). One hundred seventy-one AIs representing 27 different tribes participated in an online survey. Hierarchical regression analyses showed that perceived burdensomeness significantly predicted suicidal ideation above and beyond demographic variables and depressive symptoms; however, thwarted belongingness did not. Additionally, the two-way interaction between thwarted belongingness and perceived burdensomeness significantly predicted suicidal ideation. These results provide initial support for continued research on the components of the Interpersonal-Psychological Theory of Suicide, an empirically supported theoretical model of suicide, to predict suicidal ideation among AI populations.
Sex chromosome evolution: historical insights and future perspectives
Nordén, Anna K.
2017-01-01
Many separate-sexed organisms have sex chromosomes controlling sex determination. Sex chromosomes often have reduced recombination, specialized (frequently sex-specific) gene content, dosage compensation and heteromorphic size. Research on sex determination and sex chromosome evolution has increased over the past decade and is today a very active field. However, some areas within the field have not received as much attention as others. We therefore believe that a historic overview of key findings and empirical discoveries will put current thinking into context and help us better understand where to go next. Here, we present a timeline of important conceptual and analytical models, as well as empirical studies that have advanced the field and changed our understanding of the evolution of sex chromosomes. Finally, we highlight gaps in our knowledge so far and propose some specific areas within the field that we recommend a greater focus on in the future, including the role of ecology in sex chromosome evolution and new multilocus models of sex chromosome divergence. PMID:28469017
Use of multiscale zirconium alloy deformation models in nuclear fuel behavior analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, Robert; Tomé, Carlos; Liu, Wenfeng
Accurate prediction of cladding mechanical behavior is a key aspect of modeling nuclear fuel behavior, especially for conditions of pellet-cladding interaction (PCI), reactivity-initiated accidents (RIA), and loss of coolant accidents (LOCA). Current approaches to fuel performance modeling rely on empirical models for cladding creep, growth, and plastic deformation, which are limited to the materials and conditions for which the models were developed. CASL has endeavored to improve upon this approach by incorporating a microstructurally based, atomistically informed, zirconium alloy mechanical deformation analysis capability into the BISON-CASL engineering-scale fuel performance code. Specifically, the viscoplastic self-consistent (VPSC) polycrystal plasticity modeling approach, developed by Lebensohn and Tomé [2], has been coupled with BISON-CASL to represent the mechanistic material processes controlling the deformation behavior of the cladding. A critical component of VPSC is the representation of the crystallographic orientation of the grains within the matrix material and the ability to account for the role of texture on deformation. The multiscale modeling of cladding deformation mechanisms allowed by VPSC far exceeds the functionality of the typical semi-empirical constitutive models employed in nuclear fuel behavior codes to model irradiation growth and creep, thermal creep, or plasticity. This paper describes the implementation of an interface between VPSC and BISON-CASL and provides initial results utilizing the coupled functionality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dierauf, Timothy; Kurtz, Sarah; Riley, Evan
This paper provides a recommended method for evaluating the AC capacity of a photovoltaic (PV) generating station. It also presents companion guidance on setting the facility's capacity guarantee value. This is a principles-based approach that incorporates fundamental plant design parameters such as loss factors, module coefficients, and inverter constraints. The method has been used to prove contract guarantees for over 700 MW of installed projects. The method is transparent, and the results are deterministic. In contrast, current industry practices incorporate statistical regression where the empirical coefficients may only characterize the collected data. Though these methods may work well when extrapolation is not required, there are other situations where the empirical coefficients may not adequately model actual performance. The proposed Fundamentals Approach method provides consistent results even where regression methods start to lose fidelity.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head models. We compare different approaches to this problem, from dividing the M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
Health lifestyle theory and the convergence of agency and structure.
Cockerham, William C
2005-03-01
This article utilizes the agency-structure debate as a framework for constructing a health lifestyle theory. No such theory currently exists, yet the need for one is underscored by the fact that many daily lifestyle practices involve considerations of health outcomes. An individualist paradigm has influenced concepts of health lifestyles in several disciplines, but this approach neglects the structural dimensions of such lifestyles and has limited applicability to the empirical world. The direction of this article is to present a theory of health lifestyles that includes considerations of both agency and structure, with an emphasis upon restoring structure to its appropriate position. The article begins by defining agency and structure, followed by presentation of a health lifestyle model and the theoretical and empirical studies that support it.
Why Psychology Cannot be an Empirical Science.
Smedslund, Jan
2016-06-01
The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.
ERIC Educational Resources Information Center
Grov, Christian; Bimbi, David S.; Nanin, Jose E.; Parsons, Jeffrey T.
2006-01-01
Reported rates of recreational drug use among gay and bisexual men are currently rising. Although there has been much empirical research documenting current trends in drug use among gay and bisexual men, little research has empirically contrasted differential rates across urban epicenters, while even less has addressed racial or ethnic variation…
Modeling the Earth's magnetospheric magnetic field confined within a realistic magnetopause
NASA Technical Reports Server (NTRS)
Tsyganenko, N. A.
1995-01-01
Empirical data-based models of the magnetospheric magnetic field have been widely used during recent years. However, the existing models (Tsyganenko, 1987, 1989a) have three serious deficiencies: (1) an unstable de facto magnetopause, (2) a crude parametrization by the K_p index, and (3) inaccuracies in the equatorial magnetotail B_z values. This paper describes a new approach to the problem; the essential new features are (1) a realistic shape and size of the magnetopause, based on fits to a large number of observed crossings (allowing a parametrization by the solar wind pressure), (2) fully controlled shielding of the magnetic field produced by all magnetospheric current systems, (3) new flexible representations for the tail and ring currents, and (4) a new directional criterion for fitting the model field to spacecraft data, providing improved accuracy for field line mapping. Results are presented from initial efforts to create models assembled from these modules and calibrated against spacecraft data sets.
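The key ingredient, a magnetopause whose size scales with solar wind pressure, can be illustrated with the later but closely related Shue et al. functional form r(theta) = r0 * (2/(1 + cos theta))^alpha. This is shown only as an illustration of a pressure-parameterized boundary, not the shape actually fitted in this paper.

```python
import math

def shue_magnetopause(theta_deg, pdyn_npa=2.0, bz_nt=0.0):
    """Magnetopause radius (Earth radii) vs. solar zenith angle theta.

    Shue et al. (1997/1998)-type functional form, used here purely to
    illustrate pressure parametrization; the Tsyganenko (1995) model fits
    its own magnetopause shape to observed crossings.
    """
    r0 = (10.22 + 1.29 * math.tanh(0.184 * (bz_nt + 8.14))) \
        * pdyn_npa ** (-1.0 / 6.6)                 # subsolar standoff
    alpha = (0.58 - 0.007 * bz_nt) * (1.0 + 0.024 * math.log(pdyn_npa))
    th = math.radians(theta_deg)
    return r0 * (2.0 / (1.0 + math.cos(th))) ** alpha

for th in (0, 45, 90):
    print(f"theta={th:>2} deg: r = {shue_magnetopause(th):5.2f} Re")
# Doubling the dynamic pressure shrinks the whole boundary via the
# r0 ~ Pdyn^(-1/6.6) scaling.
```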
Why Be a Shrub? A Basic Model and Hypotheses for the Adaptive Values of a Common Growth Form
Götmark, Frank; Götmark, Elin; Jensen, Anna M.
2016-01-01
Shrubs are multi-stemmed short woody plants, more widespread than trees, important in many ecosystems, neglected in ecology compared to herbs and trees, but currently in focus due to their global expansion. We present a novel model based on scaling relationships and four hypotheses to explain the adaptive significance of shrubs, including a review of the literature with a test of one hypothesis. Our model describes advantages for a small shrub compared to a small tree with the same above-ground woody volume, based on larger cross-sectional stem area, larger area of photosynthetic tissue in bark and stem, larger vascular cambium area, larger epidermis (bark) area, and larger area for sprouting, and faster production of twigs and canopy. These components form our Hypothesis 1 that predicts higher growth rate for a small shrub than a small tree. This prediction was supported by available relevant empirical studies (14 publications). Further, a shrub will produce seeds faster than a tree (Hypothesis 2), multiple stems in shrubs insure future survival and growth if one or more stems die (Hypothesis 3), and three structural traits of short shrub stems improve survival compared to tall tree stems (Hypothesis 4)—all hypotheses have some empirical support. Multi-stemmed trees may be distinguished from shrubs by more upright stems, reducing bending moment. Improved understanding of shrubs can clarify their recent expansion on savannas, grasslands, and alpine heaths. More experiments and other empirical studies, followed by more elaborate models, are needed to understand why the shrub growth form is successful in many habitats. PMID:27507981
Biological Aging - Criteria for Modeling and a New Mechanistic Model
NASA Astrophysics Data System (ADS)
Pletcher, Scott D.; Neuhauser, Claudia
To stimulate interaction and collaboration across scientific fields, we introduce a minimum set of biological criteria that theoretical models of aging should satisfy. We review results of several recent experiments that examined changes in age-specific mortality rates caused by genetic and environmental manipulation. The empirical data from these experiments is then used to test mathematical models of aging from several different disciplines, including molecular biology, reliability theory, physics, and evolutionary biology/population genetics. We find that none of the current models are consistent with all of the published experimental findings. To provide an example of how our criteria might be applied in practice, we develop a new conceptual model of aging that is consistent with our observations.
Chen, Li; He, YaLing; Tao, Wen -Quan; ...
2017-07-21
The electrode of a vanadium redox flow battery generally is a carbon fibre-based porous medium, in which important physicochemical processes occur. In this work, pore-scale simulations are performed to study complex multiphase flow and reactive transport in the electrode by using the lattice Boltzmann method (LBM). Four hundred fibrous electrodes with different fibre diameters and porosities are reconstructed. Both the permeability and diffusivity of the reconstructed electrodes are predicted and compared with empirical relationships in the literature. The reactive surface area of the electrodes is also evaluated, and it is found that the existing empirical relationship overestimates the reactive surface at lower porosities. Further, a pore-scale electrochemical reaction model is developed to study the effects of fibre diameter and porosity on electrolyte flow, V(II)/V(III) transport, and the electrochemical reaction at the electrolyte-fibre surface. Finally, the evolution of bubble clusters generated by the side reaction is studied by adopting a LB multiphase flow model. Effects of porosity, fibre diameter, gas saturation, and solid surface wettability on the average bubble diameter and on the reduction of reactive surface area due to coverage of bubbles on the solid surface are investigated in detail. It is found that the gas coverage ratio is always lower than that adopted in the continuum models in the literature. Furthermore, the current pore-scale studies successfully reveal the complex multiphase flow and reactive transport processes in the electrode, and the simulation results can be further upscaled to improve the accuracy of current continuum-scale models.
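Typical empirical baselines for such pore-scale predictions are the Bruggeman correction for effective diffusivity and a Kozeny-Carman-type relation for the permeability of fibrous media; the sketch below evaluates both. The paper does not specify its exact correlations, so the forms and constants here are common literature choices, not the ones used in the study.

```python
def bruggeman_diffusivity(d_bulk, porosity, exponent=1.5):
    """Effective diffusivity via the Bruggeman relation D_eff = D * eps**1.5.

    A common benchmark for porous electrodes; the exponent 1.5 is the
    classical value, though fibrous media often deviate from it.
    """
    return d_bulk * porosity ** exponent

def kozeny_carman_permeability(fiber_diameter_m, porosity, kck=4.28):
    """Kozeny-Carman-type permeability for a fibrous medium (m^2).

    K = d_f**2 * eps**3 / (16 * kck * (1 - eps)**2); the Kozeny constant
    kck is an assumed typical value for fiber beds, not one from the paper.
    """
    return (fiber_diameter_m ** 2 * porosity ** 3
            / (16.0 * kck * (1.0 - porosity) ** 2))

eps, d_f = 0.75, 10e-6  # illustrative electrode porosity and fiber diameter
print(f"D_eff/D = {bruggeman_diffusivity(1.0, eps):.3f}")
print(f"K = {kozeny_carman_permeability(d_f, eps):.3e} m^2")
```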
A thermodynamic and theoretical view for enzyme regulation.
Zhao, Qinyi
2015-01-01
Precise regulation is fundamental to the proper functioning of enzymes in a cell. Current views of this regulation, such as allosteric regulation and dynamic contributions to enzyme regulation, are experimental models and substantially empirical. Here we propose a theoretical and thermodynamic model of enzyme regulation. The main idea is that enzyme regulation proceeds via regulation of the abundance of the active conformation in the reaction buffer. The theoretical foundation, experimental evidence, and experimental criteria to test our model are discussed and reviewed. We conclude that the basic principles of enzyme regulation are the laws of protein thermodynamics, and that regulation can be analyzed using the concept of the distribution curve of active conformations of enzymes.
A modeling technique for STOVL ejector and volume dynamics
NASA Technical Reports Server (NTRS)
Drummond, C. K.; Barankiewicz, W. S.
1990-01-01
New models for thrust-augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take-Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust-augmenting ejector, the analysis required a new relationship for the transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.
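The treatment of time-dependent volume integrals can be stated via the Reynolds transport theorem for a control volume whose surface S(t) moves with velocity u_s; the following is the generic identity, not the paper's exact formulation:

\[
\frac{d}{dt}\int_{V(t)} \rho\phi \, dV
= \int_{V(t)} \frac{\partial(\rho\phi)}{\partial t}\, dV
+ \oint_{S(t)} \rho\phi\, \mathbf{u}_s\cdot\mathbf{n}\, dS ,
\]

so that the balance law for the fluid quantity \(\rho\phi\) in a deforming control volume reads

\[
\frac{d}{dt}\int_{V(t)} \rho\phi \, dV
+ \oint_{S(t)} \rho\phi\,(\mathbf{u}-\mathbf{u}_s)\cdot\mathbf{n}\, dS
= \text{(sources)} .
\]

For a fixed volume, \(\mathbf{u}_s = 0\) and the conventional control-volume form is recovered; the time-dependent-volume terms are what the ejector and duct models above retain.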
[Masculinity and femininity scales: current state of the art].
Fernández, Juan; Quiroga, María A; Del Olmo, Isabel; Rodríguez, Antonio
2007-08-01
A theoretical and empirical review of masculinity and femininity scales was carried out after 30 years of their existence. The hypotheses to be tested were: (a) multidimensionality versus bidimensionality; (b) inadequate percentage of variance accounted for (less than 50%); (c) inconsistency between factor structure and the dualistic model. A total of 618, 200, and 287 students took part in the three studies, respectively. Factorial analyses (PAF) were performed. Results support multidimensionality, an unsatisfactory percentage of variance accounted for, and a lack of congruence between the obtained factors and the dualistic model. All these data were analysed within the context of the twofold sex and gender reality model.
Strong regularities in world wide web surfing
Huberman; Pirolli; Pitkow; Lukose
1998-04-03
One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
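The surfing model is easy to simulate: a page's perceived value follows a random walk, and the user keeps clicking while the value stays above a threshold. The parameters below are invented; the paper derives the resulting visit-depth distribution analytically (an inverse Gaussian).

```python
import random

def surf_depth(start_value=1.0, drift=-0.1, sigma=1.0,
               threshold=0.0, max_steps=10_000):
    """Pages visited before the value random walk falls below threshold.

    Each click perturbs the current page's perceived value by a Gaussian
    step; all parameter values here are illustrative, not fitted ones.
    """
    value, pages = start_value, 1
    while value > threshold and pages < max_steps:
        value += random.gauss(drift, sigma)
        pages += 1
    return pages

random.seed(42)
depths = sorted(surf_depth() for _ in range(100_000))
print("median depth:", depths[50_000])
print("95th percentile:", depths[95_000])
# The distribution is strongly right-skewed: most sessions are short, but
# a heavy tail of long sessions remains, consistent with the inverse
# Gaussian law derived in the paper.
```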
NASA Technical Reports Server (NTRS)
Connor, Hyunju K.; Zesta, Eftyhia; Fedrizzi, Mariangel; Shi, Yong; Raeder, Joachim; Codrescu, Mihail V.; Fuller-Rowell, Tim J.
2016-01-01
The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Geospace General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that was preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at approximately 400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset, which in turn effectively heat the thermosphere and cause the neutral density increase at 400 km altitude.
Detailed characteristics of intermittent current pulses due to positive corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yang, E-mail: liuyangwuh520@sina.com; Cui, Xiang; Lu, Tiebing
In order to get detailed characteristics of intermittent current pulses due to positive corona, such as the repetition rate of burst-pulse trains, the peak value ratio of the primary pulse to the secondary pulse, the number of pulses per burst, and the interval of the secondary pulses, a systematic study was carried out in a coaxial conductor-cylinder electrode system with the conductor electrode being set with a discharge point. Empirical formulae for the number of pulses per burst and the interval of the secondary pulses are first presented. A theoretical model based on the motion of the space-charge clouds is proposed. Analysis with the model gives explanations of the experimental results and reveals some new insights into the physical mechanism of positive intermittent corona.
A side-by-side comparison of CPV module and system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew; Marion, Bill; Kurtz, Sarah
A side-by-side comparison is made between concentrator photovoltaic module and system direct-current aperture efficiency data, with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss, 3.7% relative, is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct-current aperture efficiency with a root-mean-square error of 2.3% relative.
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
Over the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g., precipitation, sediment-trapping dams, pasture, terracing) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method and the soil and water conservation method, are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method, used extensively in hydrological research, can also be classified as an empirical method, as it can be mathematically shown to be equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. Conceptual models are usually lumped models (e.g., the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the runoff and sediment load simulations of distributed models (e.g., the Digital Yellow Integrated Model and the Geomorphology-Based Hydrological Model) have usually been less satisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. In addition, we put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin from the perspectives of input data, model structure, and output. The framework is then applied to the Huangfuchuan River.
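Of the empirical methods named above, the elasticity method is the most compact: it estimates eps = (dQ/Q)/(dP/P), the percent change in runoff per percent change in precipitation. The sketch below uses the common nonparametric median estimator of Sankarasubramanian et al. on synthetic data; it illustrates the technique, not this study's calculations.

```python
import numpy as np

def precipitation_elasticity(p, q):
    """Nonparametric precipitation elasticity of streamflow.

    eps = median[ (Q_t - mean(Q)) / (P_t - mean(P)) * mean(P)/mean(Q) ];
    eps = 2 means a 1% precipitation change produces roughly a 2% runoff
    change.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    pm, qm = p.mean(), q.mean()
    return float(np.median((q - qm) / (p - pm) * pm / qm))

# Synthetic annual series where runoff responds super-proportionally to P.
rng = np.random.default_rng(7)
precip = rng.normal(450.0, 60.0, 50)                       # mm/yr
runoff = 0.0004 * precip ** 2 + rng.normal(0.0, 5.0, 50)   # mm/yr
print(round(precipitation_elasticity(precip, runoff), 2))  # close to 2
```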
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Sheng; Li, Hongyi; Huang, Maoyi
2014-07-21
Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top-down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage–discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field-scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
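The core of such a top-down recession analysis can be sketched in a few lines: assuming a power-law storage–discharge relationship, a recession limb obeys -dQ/dt = a Q^b, so the coefficient and exponent follow from a log-log regression. The synthetic hydrograph below is a stand-in for a recession curve extracted from observed streamflow.

```python
import numpy as np

# Synthetic recession limb (m^3/s, daily); real analyses extract many
# such limbs from observed streamflow records.
rng = np.random.default_rng(0)
Q = 10.0 * np.exp(-0.08 * np.arange(30)) + 0.05 * rng.normal(size=30)
Q = np.clip(Q, 1e-3, None)

dQdt = np.diff(Q)                  # per day
Qmid = 0.5 * (Q[1:] + Q[:-1])      # midpoint discharge for each step
mask = dQdt < 0                    # keep strictly receding steps

# Fit log(-dQ/dt) = log(a) + b*log(Q)  ->  storage-discharge parameters a, b
b, loga = np.polyfit(np.log(Qmid[mask]), np.log(-dQdt[mask]), 1)
print(f"a = {np.exp(loga):.4f}, b = {b:.2f}")
```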
NASA Astrophysics Data System (ADS)
Cargile, P. A.; Stassun, K. G.; Mathieu, R. D.
2008-02-01
We report the discovery of a pre-main-sequence (PMS), low-mass, double-lined, spectroscopic, eclipsing binary in the Orion star-forming region. We present our observations, including radial velocities derived from optical high-resolution spectroscopy, and present an orbit solution that permits the determination of precise empirical masses for both components of the system. We find that Par 1802 is composed of two equal-mass (0.39 +/- 0.03, 0.40 +/- 0.03 M⊙) stars in a circular, 4.7 day orbit. There is strong evidence, such as the system exhibiting strong Li lines and a center-of-mass velocity consistent with cluster membership, that this system is a member of the Orion star-forming region and quite possibly the Orion Nebula Cluster, and therefore has an age of only a few million years. As there are currently only a few empirical mass and radius measurements for low-mass, PMS stars, this system presents an interesting test for the predictions of current theoretical models of PMS stellar evolution.
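A hedged sketch of the mass determination: for a double-lined system, the standard relation M1 sin^3 i = 1.036149e-7 (1 - e^2)^(3/2) (K1 + K2)^2 K2 P (masses in solar units, K in km/s, P in days) yields both masses once the two radial-velocity semi-amplitudes are measured, with eclipses pinning down the inclination. The period is taken from the abstract, but K1, K2, and the inclination below are invented placeholders chosen to land near the published ~0.4 M⊙ values.

```python
import numpy as np

# Double-lined spectroscopic-binary masses, assuming a circular orbit (e = 0).
P_days = 4.674          # orbital period (from the abstract)
K1, K2 = 58.0, 59.0     # RV semi-amplitudes in km/s (illustrative placeholders)
inc = np.radians(88.0)  # inclination from the eclipse geometry (assumed)

# M1 sin^3 i = 1.036149e-7 * (1-e^2)^(3/2) * (K1+K2)^2 * K2 * P  [Msun]
const = 1.036149e-7
M1 = const * (K1 + K2) ** 2 * K2 * P_days / np.sin(inc) ** 3
M2 = const * (K1 + K2) ** 2 * K1 * P_days / np.sin(inc) ** 3
print(f"M1 = {M1:.2f} Msun, M2 = {M2:.2f} Msun")
```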
Perera, Harsha N
2016-01-01
Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological model of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.
NASA Astrophysics Data System (ADS)
Sudibyo, Aziz, N.
2016-02-01
One of the available methods to mitigate surface roughening in cobalt electrodeposition is magneto-electrodeposition (MED) in the presence of an additive electrolyte. A semi-empirical equation for the limiting current under a magnetic field was developed for cobalt MED with boric acid as the additive electrolyte. The equation captures the effects of the electrode area (A), the concentration of the electroactive species (C), the diffusion coefficient of the electroactive species (D), the kinematic viscosity of the electrolyte (v), the magnetic field strength (B), and the number of electrons involved in the redox process (n). The presence of boric acid led to a decrease in the limiting current, but the acid was found useful as a buffer to avoid the local pH rise caused by the parallel hydrogen evolution reaction (HER).
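The paper's specific equation is not reproduced here, but the general workflow for obtaining such a semi-empirical law can be sketched: assume a power-law dependence on each controlling variable and recover the exponents by log-log regression. The example below fits only the magnetic-field exponent; the 1/3 scaling baked into the synthetic data is purely an assumption to make the example self-contained, not the paper's result.

```python
import numpy as np

# Recover the magnetic-field exponent of an assumed law I_L = k * B**alpha
# from synthetic "measurements" by log-log regression.
rng = np.random.default_rng(1)
B = np.array([0.1, 0.2, 0.4, 0.8, 1.6])                          # field, T
I_L = 2.5 * B ** (1 / 3) * (1 + 0.02 * rng.normal(size=B.size))  # mA, synthetic

alpha, logk = np.polyfit(np.log(B), np.log(I_L), 1)
print(f"fitted exponent alpha = {alpha:.3f}, prefactor k = {np.exp(logk):.2f} mA")
```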
Macroscopic neural mass model constructed from a current-based network model of spiking neurons.
Umehara, Hiroaki; Okada, Masato; Teramae, Jun-Nosuke; Naruse, Yasushi
2017-02-01
Neural mass models (NMMs) are efficient frameworks for describing macroscopic cortical dynamics including electroencephalogram and magnetoencephalogram signals. Originally, these models were formulated on an empirical basis of synaptic dynamics with relatively long time constants. By clarifying the relations between NMMs and the dynamics of microscopic structures such as neurons and synapses, we can better understand cortical and neural mechanisms from a multi-scale perspective. In a previous study, NMMs were analytically derived by averaging the equations of synaptic dynamics over the neurons in the population and further averaging the equations of the membrane-potential dynamics. However, the averaging of synaptic current assumes that the neuron membrane potentials are nearly time invariant and that they remain at sub-threshold levels to retain the conductance-based model. This approximation limits the NMM to the non-firing state. In the present study, we propose a new derivation of an NMM in which the synaptic current is approximated as independent of the membrane potential, thus adopting a current-based model. Our proposed model releases the constraint of a nearly constant membrane potential. We confirm that the obtained model reduces to the previous model in the non-firing situation and that it reproduces the temporal mean values and relative power spectral densities of the average membrane potentials of the spiking neurons. It is further shown that the existing NMM properly models the averaged dynamics over individual neurons even if they are spiking in the populations.
On the optically thick winds of Wolf-Rayet stars
NASA Astrophysics Data System (ADS)
Gräfener, G.; Owocki, S. P.; Grassitelli, L.; Langer, N.
2017-12-01
Context. The classical Wolf-Rayet (WR) phase is believed to mark the end stage of the evolution of massive stars with initial masses higher than 25M⊙. Stars in this phase expose their stripped cores with the products of H- or He-burning at their surface. They develop strong, optically thick stellar winds that are important for the mechanical and chemical feedback of massive stars, and that determine whether the most massive stars end their lives as neutron stars or black holes. The winds of WR stars are currently not well understood, and their inclusion in stellar evolution models relies on uncertain empirical mass-loss relations. Aims: We investigate theoretically the mass-loss properties of H-free WR stars of the nitrogen sequence (WN stars). Methods: We connected stellar structure models for He stars with wind models for optically thick winds and assessed the degree to which these two types of models can simultaneously fulfil their respective sonic-point conditions. Results: Fixing the outer wind law and terminal wind velocity ν∞, we obtain unique solutions for the mass-loss rates of optically thick, radiation-driven winds of WR stars in the phase of core He-burning. The resulting mass-loss relations as a function of stellar parameters agree well with previous empirical relations. Furthermore, we encounter stellar mass limits below which no continuous solutions exist. While these mass limits agree with observations of WR stars in the Galaxy, they contradict observations in the LMC. Conclusions: While our results in particular confirm the slope of often-used empirical mass-loss relations, they imply that only part of the observed WN population can be understood in the framework of the standard assumptions of a smooth transonic flow and compact stellar core. This means that alternative approaches such as a clumped and inflated wind structure or deviations from the diffusion limit at the sonic point may have to be invoked. Qualitatively, the existence of mass limits for the formation of WR-type winds may be relevant for the non-detection of low-mass WR stars in binary systems, which are believed to be progenitors of Type Ib/c supernovae. The sonic-point conditions derived in this work may provide a possibility to include optically thick winds in stellar evolution models in a more physically motivated form than in current models.
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
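One plausible reading of the criterion can be sketched as follows: evaluate each regression term at the balance load capacities, express its magnitude as a percentage of the summed term magnitudes, and flag terms below the 0.05% threshold. The coefficients and capacities below are invented, and the paper's exact normalization may differ; only the threshold value is taken from the abstract.

```python
# Hedged sketch of a percent-contribution screen for regression-model terms.
capacities = {"N": 1000.0, "S": 500.0}  # load capacities (lbs), assumed
coeffs = {"N": 2.0e-3, "S": 1.5e-4, "N*S": 3.0e-9, "N^2": 1.0e-12}  # assumed

# Each term evaluated at the load capacities.
term_value = {
    "N":   coeffs["N"]   * capacities["N"],
    "S":   coeffs["S"]   * capacities["S"],
    "N*S": coeffs["N*S"] * capacities["N"] * capacities["S"],
    "N^2": coeffs["N^2"] * capacities["N"] ** 2,
}
total = sum(abs(v) for v in term_value.values())

THRESHOLD = 0.05  # percent, the empirical cutoff quoted in the abstract
for name, v in term_value.items():
    pct = 100.0 * abs(v) / total
    print(f"{name:4s}: {pct:8.5f} %  ->  {'keep' if pct > THRESHOLD else 'drop'}")
```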
Mechanistic-empirical design concepts for continuously reinforced concrete pavements in Illinois.
DOT National Transportation Integrated Search
2009-04-01
The Illinois Department of Transportation (IDOT) currently has an existing jointed plain concrete pavement : (JPCP) design based on mechanistic-empirical (M-E) principles. However, their continuously reinforced concrete : pavement (CRCP) design proce...
Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M
2014-07-01
The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the two. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from two different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models are mixed, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
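The curvilinearity claim itself reduces to a quadratic regression, sketched below on synthetic data; a reliably negative quadratic coefficient is the inverted-U signature at issue. In the study, the interesting part is that the sign and detectability of this coefficient depend on whether the trait scores come from a dominance or an ideal point scoring model; the sketch does not reproduce that scoring step.

```python
import numpy as np

# Synthetic data: 'theta' stands for conscientiousness scores, 'perf' for
# job performance, generated with a built-in inverted-U by assumption.
rng = np.random.default_rng(2)
theta = rng.normal(size=500)
perf = 0.5 * theta - 0.3 * theta ** 2 + rng.normal(scale=0.8, size=500)

# Regress performance on theta and theta^2; inspect the quadratic term.
X = np.column_stack([np.ones_like(theta), theta, theta ** 2])
b, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(f"linear = {b[1]:.2f}, quadratic = {b[2]:.2f} (negative => inverted U)")
```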
Great expectations: top-down attention modulates the costs of clutter and eccentricity.
Steelman, Kelly S; McCarley, Jason S; Wickens, Christopher D
2013-12-01
An experiment and modeling effort examined interactions between bottom-up and top-down attentional control in visual alert detection. Participants performed a manual tracking task while monitoring peripheral display channels for alerts of varying salience, eccentricity, and spatial expectancy. Spatial expectancy modulated the influence of salience and eccentricity; alerts in low-probability locations engendered higher miss rates, longer detection times, and larger costs of visual clutter and eccentricity, indicating that top-down attentional control offset the costs of poor bottom-up stimulus quality. Data were compared to the predictions of a computational model of scanning and noticing that incorporates bottom-up and top-down sources of attentional control. The model accounted well for the overall pattern of miss rates and response times, predicting each of the observed main effects and interactions. Empirical results suggest that designers should expect the costs of poor bottom-up visibility to be greater for low expectancy signals, and that the placement of alerts within a display should be determined based on the combination of alert expectancy and response priority. Model fits suggest that the current model can serve as a useful tool for exploring a design space as a precursor to empirical data collection and for generating hypotheses for future experiments. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions.
Fox, Naomi J; Marion, Glenn; Davidson, Ross S; White, Piran C L; Hutchings, Michael R
2012-03-06
Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed.
NASA Astrophysics Data System (ADS)
Wu, M.; Ahmadein, M.; Kharicha, A.; Ludwig, A.; Li, J. H.; Schumacher, P.
2012-07-01
Empirical knowledge about the formation of the as-cast structure, mostly obtained before the 1980s, has revealed two critical issues: one is the origin of the equiaxed crystals; the other is the competing growth of the columnar and equiaxed structures and the columnar-to-equiaxed transition (CET). Unfortunately, the application of empirical knowledge to predict and control the as-cast structure was very limited, as the flow and crystal transport were not considered. Therefore, a 5-phase mixed columnar-equiaxed solidification model was recently proposed by the current authors based on modeling the multiphase transport phenomena. The motivation of the recent work is to determine and evaluate the necessary modeling parameters, and to validate the mixed columnar-equiaxed solidification model by comparison with laboratory castings. In this regard an experimental method was recommended for in-situ determination of the nucleation parameters. Additionally, some classical experiments on Al-Cu ingots were conducted, and the as-cast structural information including distinct columnar and equiaxed zones, macrosegregation, and grain size distribution was analysed. The final simulation results exhibited good agreement with experiments in the case of high pouring temperature, but disagreement in the case of low pouring temperature. The reasons for the disagreement are discussed.
NASA Astrophysics Data System (ADS)
Smith, L. A.
2007-12-01
We question the relevance of climate-model-based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher-resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect; nevertheless the space and time scales on which they provide decision-support-relevant information are expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions. Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate; this may help to explain the reluctance of experts to provide information on "parameter uncertainty." Probability statements about the real world are always conditioned on some information set; they may well be conditioned on "False," making them of little value to a rational decision maker. In other instances, they may be conditioned on physical assumptions not held by any of the modellers whose model output is being cast as a probability distribution. Our models will improve a great deal in the next decades, and our insight into the likely climate fifty years hence will improve: maintaining the credibility of the science and the coherence of science-based decision support, as our models improve, requires a clear statement of our current limitations. What evidence do we have that today's state-of-the-art models provide decision-relevant probability forecasts? What space and time scales do we currently have quantitative, decision-relevant information on for 2050? 2080?
HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors
NASA Astrophysics Data System (ADS)
Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.
2015-09-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low-background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low-background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analogous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical, observationally constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.
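The logic of such a pixel-based correction can be sketched as a forward-model inversion: given a calibrated model of how readout traps trail charge, iterate the image estimate until trailing it reproduces the observation. The toy trailing kernel below is a stand-in for the empirically constrained WFC3 trap model, not the actual pipeline algorithm.

```python
import numpy as np

def trail(col, frac=0.05):
    """Toy readout model: defer a fixed fraction of each pixel's charge
    one pixel along the column (stand-in for the calibrated trap model)."""
    out = col.copy()
    moved = frac * col
    out -= moved
    out[1:] += moved[:-1]
    return out

observed = trail(np.array([0., 0., 100., 0., 0.]))  # a trailed point source

# Fixed-point iteration: x <- x + (observed - trail(x)), so that
# trail(x) converges to the observation and x to the untrailed image.
x = observed.copy()
for _ in range(10):
    x += observed - trail(x)
print(np.round(x, 2))  # charge restored to the source pixel
```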
Implementation of Advanced Two Equation Turbulence Models in the USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Massey, Steven J.; Abdol-Hamid, Khaled S.
2000-01-01
USM3D is a widely used unstructured flow solver for simulating inviscid and viscous flows over complex geometries. The current version (version 5.0) of USM3D, however, does not have advanced turbulence models to accurately simulate complicated flows. We have implemented two modified versions of the original Jones and Launder k-epsilon "two-equation" turbulence model and the Girimaji algebraic Reynolds stress model in USM3D. Tests have been conducted for three flat plate boundary layer cases, an RAE2822 airfoil, and an ONERA M6 wing. The results are compared with those from direct numerical simulation, empirical formulae, theoretical results, and the existing Spalart-Allmaras one-equation model.
Can sexual selection theory inform genetic management of captive populations? A review
Chargé, Rémi; Teplitsky, Céline; Sorci, Gabriele; Low, Matthew
2014-01-01
Captive breeding for conservation purposes presents a serious practical challenge because several conflicting genetic processes (i.e., inbreeding depression, random genetic drift and genetic adaptation to captivity) need to be managed in concert to maximize captive population persistence and reintroduction success probability. Because current genetic management is often only partly successful in achieving these goals, it has been suggested that management insights may be found in sexual selection theory (in particular, female mate choice). We review the theoretical and empirical literature and consider how female mate choice might influence captive breeding in the context of current genetic guidelines for different sexual selection theories (i.e., direct benefits, good genes, compatible genes, sexy sons). We show that while mate choice shows promise as a tool in captive breeding under certain conditions, for most species, there is currently too little theoretical and empirical evidence to provide any clear guidelines that would guarantee positive fitness outcomes and avoid conflicts with other genetic goals. The application of female mate choice to captive breeding is in its infancy and requires a goal-oriented framework based on the needs of captive species management, so researchers can make honest assessments of the costs and benefits of such an approach, using simulations, model species and captive animal data. PMID:25553072
Two-Dimensional Analysis of Conical Pulsed Inductive Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, A. K.; Polzin, K. A.; Emsellem, G. D.
2011-01-01
A model of the maximum achievable exhaust velocity of a conical theta-pinch pulsed inductive thruster is presented. A semi-empirical formula relating coil inductance to both axial and radial current sheet location is developed and incorporated into a circuit model coupled to a momentum equation to evaluate the effect of coil geometry on the axial directed kinetic energy of the exhaust. Inductance measurements as a function of the axial and radial displacement of simulated current sheets from four coils of different geometries are fit to a two-dimensional expression to allow the calculation of the Lorentz force at any relevant averaged current sheet location. This relation for two-dimensional inductance, along with an estimate of the maximum possible change in gas-dynamic pressure as the current sheet accelerates into downstream propellant, enables the expansion of a one-dimensional circuit model to two dimensions. The results of this two-dimensional model indicate that radial current sheet motion acts to rapidly decouple the current sheet from the driving coil, leading to losses in axial kinetic energy 10-50 times larger than estimates of the maximum available energy in the compressed propellant. The decreased available energy in the compressed propellant compared with that of other inductive plasma propulsion concepts suggests that a recovery of the directed axial kinetic energy of the exhaust is unlikely, and that radial compression of the current sheet leads to a loss in exhaust velocity for the operating conditions considered here.
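The circuit-model step described above rests on the inductance gradient: once an inductance surface L(z, r) has been fit, the axial Lorentz force on the current sheet follows from F_z = (1/2) I^2 dL/dz. The sketch below uses an invented polynomial inductance surface and an illustrative current merely to show the evaluation; it is not the paper's fitted expression.

```python
def L_coil(z, r, L0=1.0e-6, kz=8.0e-6, kr=4.0e-6, kzr=2.0e-4):
    """Toy inductance surface (H) vs. axial z and radial r sheet position (m).
    Coefficients are placeholders for a fitted two-dimensional expression."""
    return L0 + kz * z + kr * r + kzr * z * r

def axial_force(I, z, r, dz=1e-5):
    """F_z = 0.5 * I**2 * dL/dz, with dL/dz by central difference."""
    dLdz = (L_coil(z + dz, r) - L_coil(z - dz, r)) / (2 * dz)
    return 0.5 * I ** 2 * dLdz

# Illustrative peak current and sheet position.
print(f"F_z = {axial_force(I=50e3, z=5e-3, r=2e-2):.1f} N")
```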
Modeling Requirements for Cohort and Register IT.
Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred
2016-01-01
The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks like cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The aim was to make transparent the complex relationships between requirements, which are described as use cases in a given text catalog; analyzing and modeling the requirements was intended to yield a better understanding and optimization of the catalog. There were two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current-state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Then methods of enterprise architecture planning (EAP) were used to model the extracted information. To work on objective a), questionnaires were developed by utilizing the model. They were used for semi-structured interviews, whose results were evaluated via qualitative content analysis. Afterwards the current state was modeled. Objective b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As a result of objective a), current-state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg reference model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current-state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg reference model. Modeling the current state and generating reports from the model, which could be used as requirements specifications for bids, is supported, too.
Update in Infectious Diseases 2017.
Candel, F J; Peñuelas, M; Lejárraga, C; Emilov, T; Rico, C; Díaz, I; Lázaro, C; Viñuela-Prieto, J M; Matesanz, M
2017-09-01
Antimicrobial resistance in complex models of continuous infection is a current issue. The Update 2017 course addresses microbiological, epidemiological and clinical aspects useful for a current approach to infectious diseases. During the last year, guides on the approach to nosocomial pneumonia, recommendations for the management of yeast and filamentous fungal infections, review papers on the empirical approach to peritonitis, and extensive guidelines on stewardship have been published. HIV infection is being treated earlier and more intensively. The addition of molecular biology, spectrometry and immunology to the traditional techniques of staining and culture achieves a better and faster microbiological diagnosis. Finally, the management of infection is increasingly integrated, assessing non-antibiotic aspects of treatment.
An investigation of the mentalization-based model of borderline pathology in adolescents.
Quek, Jeremy; Bennett, Clair; Melvin, Glenn A; Saeedi, Naysun; Gordon, Michael S; Newman, Louise K
2018-07-01
According to mentalization-based theory, transgenerational transmission of mentalization from caregiver to offspring is implicated in the pathogenesis of borderline personality disorder (BPD). Recent research has demonstrated an association between hypermentalizing (excessive, inaccurate mental state reasoning) and BPD, indicating the particular relevance of this form of mentalizing dysfunction to the transgenerational mentalization-based model. As yet, no study has empirically assessed a transgenerational mentalization-based model of BPD. The current study sought firstly to test the mentalization-based model, and additionally, to determine the form of mentalizing dysfunction in caregivers (e.g., hypo- or hypermentalizing) most relevant to a hypermentalizing model of BPD. Participants were a mixed sample of adolescents with BPD and a sample of non-clinical adolescents, and their respective primary caregivers (n = 102; 51 dyads). Using an ecologically valid measure of mentalization, mediational analyses were conducted to examine the relationships between caregiver mentalizing, adolescent mentalizing, and adolescent borderline features. Findings demonstrated that adolescent mentalization mediated the effect of caregiver mentalization on adolescent borderline personality pathology. Furthermore, results indicated that hypomentalizing in caregivers was related to adolescent borderline personality pathology via an effect on adolescent hypermentalizing. Results provide empirical support for the mentalization-based model of BPD, and suggest the indirect influence of caregiver mentalization on adolescent borderline psychopathology. Results further indicate the relevance of caregiver hypomentalizing to a hypermentalizing model of BPD. Copyright © 2018 Elsevier Inc. All rights reserved.
Towards a feminist empowerment model of forgiveness psychotherapy.
McKay, Kevin M; Hill, Melanie S; Freedman, Suzanne R; Enright, Robert D
2007-03-01
In recent years Enright and Fitzgibbon's (2000) process model of forgiveness therapy has received substantial theoretical and empirical attention. However, both the process model of forgiveness therapy and the social-cognitive developmental model on which it is based have received criticism from feminist theorists. The current paper considers feminist criticisms of forgiveness therapy and uses a feminist lens to identify potential areas for growth. Specifically, Worell and Remer's (2003) model of synthesizing feminist ideals into existing theory was consulted, areas of bias within the forgiveness model of psychotherapy were identified, and strategies for restructuring areas of potential bias were introduced. Further, the authors consider unique aspects of forgiveness therapy that can potentially strengthen existing models of feminist therapy. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Implementation and local calibration of the MEPDG transfer functions in Wyoming.
DOT National Transportation Integrated Search
2015-11-01
The Wyoming Department of Transportation (WYDOT) currently uses the empirical AASHTO Design for Design of : Pavement Structures as their standard pavement design procedure. WYDOT plans to transition to the Mechanistic : Empirical Pavement Design Guid...
Computer assisted analysis of research-based teaching method in English newspaper reading teaching
NASA Astrophysics Data System (ADS)
Jie, Zheng
2017-06-01
In recent years, the teaching of English newspaper reading has been developing rapidly. However, the teaching effect of the existing course is not ideal. This paper applies the research-based teaching model to English newspaper reading teaching, investigates the current situation in higher vocational colleges, and analyzes the problems. It designs a teaching model for English newspaper reading and carries out computer-assisted empirical research. The results show that the teaching mode can use knowledge and ability to stimulate learners' interest and comprehensively improve their ability to read newspapers.
The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona
NASA Astrophysics Data System (ADS)
Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph
2018-06-01
Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the "freeze-in" distance (R_f) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of R_f for Fe10+ and Fe13+ derived from multi-wavelength imaging observations of the corresponding Fe XI (Fe10+) 789.2 nm and Fe XIV (Fe13+) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze in at different heliocentric distances. In polar coronal holes (CHs), R_f is around 1.45 R⊙ for Fe10+ and below 1.25 R⊙ for Fe13+. Along open field lines in streamer regions, R_f ranges from 1.4 to 2 R⊙ for Fe10+ and from 1.5 to 2.2 R⊙ for Fe13+. These first empirical R_f values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R⊙.
Risk transfer modeling among hierarchically associated stakeholders in development of space systems
NASA Astrophysics Data System (ADS)
Henkle, Thomas Grove, III
This research develops an empirically derived cardinal model that prescribes the handling and transfer of risks between organizations with hierarchical relationships. Descriptions of mission risk events, risk attitudes, and conditions for risk transfer are determined for client and underwriting entities associated with the acquisition, production, and deployment of space systems. The hypothesis anticipates that large client organizations should be able to assume larger dollar-value risks of a program than smaller organizations, even though many current risk transfer arrangements via space insurance violate this hypothesis. A literature survey covers conventional and current risk assessment methods, current techniques used in the satellite industry for complex system development, cardinal risk modeling, and relevant aspects of utility theory. Data gathered from the open literature on demonstrated launch vehicle and satellite in-orbit reliability, annual space insurance premiums and losses, and ground fatalities and range damage associated with satellite launch activities are presented. Empirically derived models are developed for the risk attitudes of space system clients and third-party underwriters associated with satellite system development and deployment. Two application topics for risk transfer are examined: the client-underwriter relationship on assumption or transfer of risks associated with first-year mission success, and statutory risk transfer agreements between space insurance underwriters and the US government to promote growth in both the commercial client and underwriting industries. Results indicate that client entities with wealth at least an order of magnitude above satellite project costs should retain risks to first-year mission success despite present trends. Furthermore, large client entities such as the US government should never pursue risk transfer via insurance under previously demonstrated probabilities of mission success; potential savings may reasonably exceed multiple tens of millions of dollars per space project. Additional results indicate that current US government statutory arrangements on risk sharing with underwriting entities appear reasonable with respect to stated objectives. This research combines aspects of multiple disciplines including risk management, decision theory, utility theory, and systems architecting. It also demonstrates the development of a more general theory for prescribing risk transfer criteria between distinct but hierarchically associated entities involved in complex system development, with applicability to a variety of technical domains.
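The retain-versus-insure logic in such cardinal models can be sketched with exponential utility, under which a wealthier (higher risk tolerance) client values the retained risk close to its expected loss and therefore declines an actuarially unfair premium, while a smaller organization does not. All figures below are illustrative assumptions, not values from the study's data.

```python
import numpy as np

def certainty_equivalent(outcomes, probs, R):
    """CE of a random payoff X under exponential utility U(x) = -exp(-x/R),
    where R is the organization's risk tolerance (same units as x)."""
    eu = np.sum(probs * -np.exp(-outcomes / R))
    return -R * np.log(-eu)

loss, p_fail = 250e6, 0.08          # replacement cost and first-year failure
outcomes = np.array([0.0, -loss])   # probability, both illustrative
probs = np.array([1 - p_fail, p_fail])

premium = 0.12 * loss               # quoted premium (illustrative)
for R in (100e6, 10e9):             # small operator vs. large client wealth
    ce = certainty_equivalent(outcomes, probs, R)
    print(f"R = {R:.0e}: CE of retaining = {ce/1e6:8.1f} M$, "
          f"insure = {premium < -ce}")
```

With these numbers, the small operator's certainty equivalent of retaining the risk is far worse than the premium, while the large entity's is not, which is the direction of the study's headline result.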
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
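The distinction at stake is easy to see in simulation: with within-trial diffusion noise, RT varies from trial to trial at a fixed drift rate, whereas switching the noise off yields a deterministic growth model with a single fixed RT. The sketch below is a generic first-passage simulation written for this note, not the authors' fitting code; all parameter values are illustrative.

```python
import numpy as np

def ddm_rt(drift, sigma, a=1.0, dt=1e-3, rng=np.random.default_rng(3)):
    """Simulate one two-boundary diffusion trial; return (RT, hit upper?)."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x > 0

rts = [ddm_rt(drift=1.0, sigma=1.0)[0] for _ in range(200)]
print(f"diffusion: mean RT = {np.mean(rts):.2f} s, sd = {np.std(rts):.2f} s")
print(f"deterministic: RT = {ddm_rt(drift=1.0, sigma=0.0)[0]:.2f} s every trial")
```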
Tong, Wing-Chiu; Choi, Cecilia Y.; Karche, Sanjay; Holden, Arun V.; Zhang, Henggui; Taggart, Michael J.
2011-01-01
Uterine contractions during labor are discretely regulated by rhythmic action potentials (AP) of varying duration and form that serve to determine calcium-dependent force production. We have employed a computational biology approach to develop a fuller understanding of the complexity of excitation-contraction (E-C) coupling of uterine smooth muscle cells (USMC). Our overall aim is to establish a mathematical platform of sufficient biophysical detail to quantitatively describe known uterine E-C coupling parameters and thereby inform future empirical investigations of physiological and pathophysiological mechanisms governing normal and dysfunctional labors. From published and unpublished data we construct mathematical models for fourteen ionic currents of USMCs: Ca2+ currents (L- and T-type), a Na+ current, a hyperpolarization-activated current, three voltage-gated K+ currents, two Ca2+-activated K+ currents, a Ca2+-activated Cl- current, a non-specific cation current, the Na+-Ca2+ exchanger, the Na+-K+ pump, and a background current. The magnitudes and kinetics of each current system in a spindle-shaped single cell with a specified surface area:volume ratio are described by differential equations, in terms of maximal conductances, electrochemical gradients, voltage-dependent activation/inactivation gating variables, and temporal changes in intracellular Ca2+ computed from known fluxes. These quantifications are validated by the reconstruction of the individual experimental ionic currents obtained under voltage clamp. Phasic contraction is modeled in relation to the time constant of the changing intracellular Ca2+ concentration. This integrated model is validated by its reconstruction of the different USMC AP configurations (spikes, plateau and bursts of spikes), of the change from bursting to plateau-type AP produced by estradiol, and of simultaneous experimental recordings of spontaneous AP, intracellular Ca2+ and phasic force. In summary, our advanced mathematical model provides a powerful tool to investigate the physiological ionic mechanisms underlying the genesis of uterine electrical E-C coupling of labor and parturition. This will furnish the evolution of descriptive and predictive quantitative models of myometrial electrogenesis at the whole cell and tissue levels. PMID:21559514
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle-life models based only on equivalent circuits, we attempt to couple a conventional empirical capacity-loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate the sudden capacity drop at the end of cycling that is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile with a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle-life color maps under different C-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function is a promising platform for predicting the long-term cycle lives of large-format LIB cells under various operating conditions.
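The shape such a coupling produces can be sketched as a smooth power-law fade multiplied by a depletion function that collapses capacity once the electrolyte is exhausted. The functional forms and constants below are assumptions made for illustration, not the paper's fitted model or parameters.

```python
import numpy as np

def relative_capacity(n, A=0.004, z=0.55, n_dep=1800.0, w=60.0):
    """Toy cycle-life curve: gradual power-law fade times a sigmoid
    electrolyte-depletion factor that triggers the end-of-life drop."""
    fade = 1.0 - A * n ** z                             # SEI-growth-like loss
    depletion = 1.0 / (1.0 + np.exp((n - n_dep) / w))   # ~1 until depletion
    return fade * depletion

for n in (100, 500, 1000, 1500, 1800, 2000):
    print(f"cycle {n:5d}: relative capacity = {relative_capacity(n):.3f}")
```

Running this shows the characteristic knee: capacity declines slowly for most of the life and then drops abruptly near the assumed depletion cycle count.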
Three empirical air-to-leaf models for estimating grass concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (abbreviated dioxins and furans) from air concentrations of these compounds are described and tested against two field data sets. All are empirical in th...
NASA Astrophysics Data System (ADS)
Slade, Holly Claudia
Hydrogenated amorphous silicon thin film transistors (TFTs) are now well established as switching elements for a variety of applications in the lucrative electronics market, such as active matrix liquid crystal displays, two-dimensional imagers, and position-sensitive radiation detectors. These applications necessitate the development of accurate characterization and simulation tools. The main goal of this work is the development of a semi-empirical, analytical model for the DC and AC operation of an amorphous silicon TFT for use in a manufacturing facility to improve yield and maintain process control. The model is physically based, so that the parameters scale with gate length and can be easily related back to the material and device properties. To accomplish this, extensive experimental data and 2D simulations are used to observe and quantify non-crystalline effects in the TFTs. In particular, due to the disorder in the amorphous network, localized energy states exist throughout the band gap and affect all regimes of TFT operation. These localized states trap most of the free charge, causing a gate-bias-dependent field effect mobility above threshold, a power-law dependence of the current on gate bias below threshold, very low leakage currents, and severe frequency dispersion of the TFT gate capacitance. Additional investigations of TFT instabilities reveal the importance of changes in the density of states and/or back channel conduction due to bias and thermal stress. In the above-threshold regime, the model is similar to the crystalline MOSFET model, considering the drift component of free charge. This approach uses the field effect mobility to take into account the trap states and must utilize the correct definition of threshold voltage. In the below-threshold regime, the density of deep states is taken into account. The leakage current is modeled empirically, and the parameters are temperature dependent up to 150 °C. The capacitance of the TFT can be modeled using a transmission line model, which is implemented using a small-signal circuit with access resistors in series with the source and drain capacitances. This correctly reproduces the frequency dispersion in the TFT. Automatic parameter extraction routines are provided and are used to test the robustness of the model on a variety of devices from different research laboratories. The results demonstrate excellent agreement, showing that the model is suitable for device design, scaling, and implementation in the manufacturing process.
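A minimal sketch of the above-threshold piece, assuming an RPI-style form in which the field-effect mobility rises as a power of the gate overdrive, mu_fet = mu_band (V_gt / V_aa)^gamma, evaluated here in the linear regime only. All parameter values are illustrative, not extracted from real devices, and the full model described above also covers the below-threshold, leakage, and capacitance regimes omitted here.

```python
import numpy as np

def id_linear(Vgs, Vds, Vt=2.0, mu_band=1.0, gamma=0.4, Vaa=10.0,
              Ci=1.7e-8, W=100e-4, L=10e-4):
    """Linear-regime drain current (A); cm-based units, so Ci is F/cm^2
    and W, L are channel width/length in cm. Values are placeholders."""
    Vgt = np.maximum(Vgs - Vt, 1e-12)           # gate overdrive
    mu_fet = mu_band * (Vgt / Vaa) ** gamma     # trap-limited mobility
    return (W / L) * mu_fet * Ci * Vgt * Vds

for Vgs in (4.0, 8.0, 12.0):
    print(f"Vgs = {Vgs:4.1f} V: Id = {id_linear(Vgs, Vds=0.1) * 1e9:7.2f} nA")
```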
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spatial working memory capacity predicts bias in estimates of location.
Crawford, L Elizabeth; Landy, David; Salthouse, Timothy A
2016-09-01
Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on working memory capacity limits and spatial memory bias to generate the prediction that those with lower working memory capacity will show greater bias in memory of the location of a single item. Reanalyzing data from a large study of cognitive aging, we find support for this prediction. Fitting separate models to individuals' data revealed a surprising variety of strategies. Some were consistent with Bayesian models of spatial category use; however, roughly half of participants biased estimates outward in a way not predicted by current models, and others seemed to combine these strategies. These analyses highlight the importance of studying individuals when developing general models of cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
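The Bayesian category-use account referred to above can be sketched as precision weighting: the reported location blends the noisy memory trace with a category prototype, so a noisier trace (by hypothesis, lower working memory capacity) pulls estimates further toward the prototype. The numbers below are toy values chosen only to show the direction of the effect.

```python
def estimate(trace, prototype, sigma_mem, sigma_cat):
    """Precision-weighted blend of a memory trace and a category prototype."""
    w = sigma_cat**2 / (sigma_cat**2 + sigma_mem**2)  # weight on the trace
    return w * trace + (1 - w) * prototype

true_loc, prototype = 20.0, 45.0   # degrees within a quadrant (toy values)
for sigma_mem in (5.0, 15.0):      # high- vs. low-capacity individual
    # On average the trace equals the true location, so we pass it directly.
    est = estimate(true_loc, prototype, sigma_mem, sigma_cat=20.0)
    print(f"sigma_mem = {sigma_mem:4.1f}: estimate = {est:.1f}, "
          f"bias toward prototype = {est - true_loc:+.1f}")
```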
Model for toroidal velocity in H-mode plasmas in the presence of internal transport barriers
NASA Astrophysics Data System (ADS)
Chatthong, B.; Onjun, T.; Singhsomroje, W.
2010-06-01
A model for predicting toroidal velocity in H-mode plasmas in the presence of internal transport barriers (ITBs) is developed using an empirical approach. In this model, it is assumed that the toroidal velocity is directly proportional to the local ion temperature. This model is implemented in the BALDUR integrated predictive modelling code so that simulations of ITB plasmas can be carried out self-consistently. In these simulations, a combination of a semi-empirical mixed Bohm/gyro-Bohm (mixed B/gB) core transport model that includes ITB effects and NCLASS neoclassical transport is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The combination of the mixed B/gB core transport model with ITB effects, together with the pedestal and toroidal velocity models, is used to simulate the time evolution of plasma current, temperature and density profiles of 10 JET optimized shear discharges. It is found that the simulations can reproduce an ITB formation in these discharges. Statistical analyses including root mean square error (RMSE) and offset are used to quantify the agreement. It is found that the averaged RMSE and offset among these discharges are about 24.59% and -0.14%, respectively.
Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling
Ye, Hao; Beamish, Richard J.; Glaser, Sarah M.; Grant, Sue C. H.; Hsieh, Chih-hao; Richards, Laura J.; Schnute, Jon T.; Sugihara, George
2015-01-01
It is well known that current equilibrium-based models fall short as predictive descriptions of natural ecosystems, and particularly of fisheries systems that exhibit nonlinear dynamics. For example, model parameters assumed to be fixed constants may actually vary in time, models may fit well to existing data but lack out-of-sample predictive skill, and key driving variables may be misidentified due to transient (mirage) correlations that are common in nonlinear systems. With these frailties, it is somewhat surprising that static equilibrium models continue to be widely used. Here, we examine empirical dynamic modeling (EDM) as an alternative that dispenses with imposed model equations and that accommodates both nonequilibrium dynamics and nonlinearity. Using time series from nine stocks of sockeye salmon (Oncorhynchus nerka) from the Fraser River system in British Columbia, Canada, we perform, for the first time to our knowledge, a real-data comparison of contemporary fisheries models with equivalent EDM formulations that explicitly use spawning stock and environmental variables to forecast recruitment. We find that EDM models produce more accurate and precise forecasts, and unlike extensions of the classic Ricker spawner–recruit equation, they show significant improvements when environmental factors are included. Our analysis demonstrates the strategic utility of EDM for incorporating environmental influences into fisheries forecasts and, more generally, for providing insight into how environmental factors can operate in forecast models, thus paving the way for equation-free mechanistic forecasting to be applied in management contexts. PMID:25733874
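The core EDM step, simplex projection, is compact enough to sketch: time-delay embed the series, find the E+1 nearest library neighbors of the current state, and forecast the next value as their distance-weighted future average. This bare-bones version, written for this note on a synthetic series, omits the cross-validated embedding-dimension selection and S-map extensions used in practice.

```python
import numpy as np

def simplex_forecast(x, E=2):
    """One-step simplex projection forecast from the last embedded state."""
    # Delay embedding: row t is (x[t], x[t-1], ..., x[t-E+1]).
    states = np.column_stack([x[E - 1 - k: len(x) - k] for k in range(E)])
    lib, query = states[:-1], states[-1]   # futures exist for library states
    futures = x[E:]                        # value one step after each lib state

    d = np.linalg.norm(lib - query, axis=1)
    nn = np.argsort(d)[:E + 1]             # E+1 nearest neighbors
    w = np.exp(-d[nn] / max(d[nn][0], 1e-12))  # weights relative to nearest
    return np.sum(w * futures[nn]) / np.sum(w)

t = np.arange(200)
x = np.sin(0.3 * t) + 0.05 * np.random.default_rng(4).normal(size=200)
print(f"forecast: {simplex_forecast(x):+.3f}, "
      f"noise-free continuation: {np.sin(0.3 * 200):+.3f}")
```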
Issues in benchmarking human reliability analysis methods : a literature review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lois, Erasmia; Forester, John Alan; Tran, Tuan Q.
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Krieger, Janice L; Krok-Schoen, Jessica L; Dailey, Phokeng M; Palmer-Wackerly, Angela L; Schoenberg, Nancy; Paskett, Electra D; Dignan, Mark
2017-07-01
Distributed cognition occurs when cognitive and affective schemas are shared between two or more people during interpersonal discussion. Although extant research focuses on distributed cognition in decision making between health care providers and patients, studies show that caregivers are also highly influential in the treatment decisions of patients. However, there is little empirical data describing how and when families exert influence. The current article addresses this gap by examining decisional support in the context of cancer randomized clinical trial (RCT) decision making. Data are drawn from in-depth interviews with rural, Appalachian cancer patients ( N = 46). Analysis of transcript data yielded empirical support for four distinct models of health decision making. The implications of these findings for developing interventions to improve the quality of treatment decision making and overall well-being are discussed.
NASA Astrophysics Data System (ADS)
Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid
2017-03-01
The current paper first presents an empirical correlation, obtained by curve fitting, for estimating the thermal conductivity enhancement of MgO-water nanofluid from experimental results. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
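A sketch of the described network is straightforward with a standard library: two inputs (temperature and volume fraction), one hidden layer of 7 neurons, one output. The training data below are synthetic stand-ins for the measurements, and the assumed response surface is invented purely to make the example runnable.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training data standing in for the measurements.
rng = np.random.default_rng(5)
T = rng.uniform(20, 60, 200)        # temperature, deg C
phi = rng.uniform(0.0, 0.03, 200)   # MgO volume fraction
y = 1.0 + 8.0 * phi + 0.004 * (T - 20) * np.sqrt(phi)  # assumed enhancement ratio

# One hidden layer of 7 neurons, matching the configuration in the abstract.
# (In practice the inputs would be scaled before training.)
X = np.column_stack([T, phi])
model = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0)
model.fit(X, y)
print("predicted enhancement:", model.predict([[45.0, 0.02]])[0].round(3))
```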
The Use of Empirical Data Sources in HRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce Hallbert; David Gertman; Julie Marble
This paper presents a review of available information related to human performance to support Human Reliability Analysis (HRA) performed for nuclear power plants (NPPs). A number of data sources are identified as potentially useful. These include NPP licensee event reports (LERs), augmented inspection team (AIT) reports, operator requalification data, results from the literature in experimental psychology, and the Aviation Safety Reporting System (ASRS). The paper discusses how utilizing such information improves our capability to model and quantify human performance. In particular, the paper discusses how information related to performance shaping factors (PSFs) can be extracted from empirical data to determine their effect sizes, their relative effects, as well as their interactions. The paper concludes that appropriate use of existing sources can help address some of the important issues we are currently facing in HRA.
Cooper, Myra J
2005-06-01
Important developments have taken place in cognitive theory of eating disorders (EDs) (and also in other disorders) since the review paper published by M.J. Cooper in 1997. The relevant empirical database has also expanded. Nevertheless, cognitive therapy for anorexia nervosa and bulimia nervosa, although helpful to many patients, leaves much to be desired. The current paper reviews the relevant empirical evidence collected, and the theoretical revisions that have been made to cognitive models of eating disorders, since 1997. The status and limitations of these developments are considered, including whether or not they meet the criteria for "good" theory. New theoretical developments relevant to cognitive explanations of eating disorders (second generation theories) are then presented, and the preliminary evidence that supports these is briefly reviewed. The lack of integration between cognitive theories of EDs and risk (vulnerability) factor research is noted, and a potential model that unites the two is outlined. The implications of the review for future research and the development of cognitive theory in eating disorders are then discussed. These include the need for study of cognitive constructs not yet fully integrated (or indeed not yet applied clinically) into current theories and the need for cognitive theories of eating disorders to continue to evolve (as they have indeed done since 1997) in order to fully integrate such constructs. Treatment studies incorporating these new developments also urgently need to be undertaken.
Cowden, Tracy L; Cummings, Greta G
2012-07-01
We describe a theoretical model of staff nurses' intentions to stay in their current positions. The global nursing shortage and high nursing turnover rate demand evidence-based retention strategies. Inconsistent study outcomes indicate a need for testable theoretical models of intent to stay that build on previously published models, are reflective of current empirical research and identify causal relationships between model concepts. The model draws on two systematic reviews of electronic databases covering English-language articles published between 1985 and 2011. This complex, testable model expands on previous models and includes nurses' affective and cognitive responses to work and their effects on nurses' intent to stay. The concepts of desire to stay, job satisfaction, joy at work, and moral distress are included in the model to capture the emotional response of nurses to their work environments. The influence of leadership is integrated within the model. A causal understanding of clinical nurses' intent to stay and the effects of leadership on the development of that intention will facilitate the development of effective retention strategies internationally. Testing theoretical models is necessary to confirm previous research outcomes and to identify plausible sequences of the development of behavioral intentions. Increased understanding of the causal influences on nurses' intent to stay should lead to strategies that may result in higher retention rates and numbers of nurses willing to work in the health sector. © 2012 Blackwell Publishing Ltd.
The Role of Empirical Research in Bioethics
Kon, Alexander A.
2010-01-01
There has long been tension between bioethicists whose work focuses on classical philosophical inquiry and those who perform empirical studies on bioethical issues. While many have argued that empirical research merely illuminates current practices and cannot inform normative ethics, others assert that research-based work has significant implications for refining our ethical norms. In this essay, I present a novel construct for classifying empirical research in bioethics into four hierarchical categories: Lay of the Land, Ideal Versus Reality, Improving Care, and Changing Ethical Norms. Through explaining these four categories and providing examples of publications in each stratum, I define how empirical research informs normative ethics. I conclude by demonstrating how philosophical inquiry and empirical research can work cooperatively to further normative ethics. PMID:19998120
Calibration of the MEPDG transfer functions in Georgia : task order 2 report.
DOT National Transportation Integrated Search
2015-03-28
The Georgia Department of Transportation (GDOT) currently uses the empirical 1972 AASHTO Interim Guide for Design of Pavement Structures as their standard pavement design procedure. However, GDOT plans to transition to the Mechanistic Empirical P...
Particle tracing modeling of ion fluxes at geosynchronous orbit
Brito, Thiago V.; Woodroffe, Jesse; Jordanova, Vania K.; ...
2017-10-31
The initial results of a coupled MHD/particle tracing method to evaluate particle fluxes in the inner magnetosphere are presented. This setup is capable of capturing the earthward particle acceleration process resulting from dipolarization events in the tail region of the magnetosphere. During the period of study, the MHD code was able to capture a dipolarization event, and the particle tracing algorithm was able to capture the effects of these disturbances and calculate proton fluxes in the nightside geosynchronous orbit region. The simulation captured dispersionless injections as well as the energy dispersion signatures that are frequently observed by satellites at geosynchronous orbit. Currently, ring current models rely on Maxwellian-type distributions based on either empirical flux values or sparse satellite data for their boundary conditions close to geosynchronous orbit. In spite of some differences in intensity and timing, the setup presented here is able to capture substorm injections, which represents an improvement: a way of coupling these ring current models with MHD codes through the use of boundary conditions.
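The particle-tracing ingredient of such a setup can be illustrated in isolation. The sketch below advances a proton with the standard Boris scheme, but through a static dipole field rather than the time-dependent MHD fields the study uses; all particle parameters are illustrative.

```python
# Boris pusher for a 10 keV equatorial proton in a static dipole field (E = 0).
# A stand-in for the MHD-field tracing described above, so the sketch is
# self-contained.
import numpy as np

RE = 6.371e6            # Earth radius [m]
B0 = 3.12e-5            # equatorial surface field [T]
q, m = 1.602e-19, 1.673e-27

def dipole_B(r):
    """Dipole field with moment along -z (Earth-like): +z field at the equator."""
    x, y, z = r
    rr = np.linalg.norm(r)
    k = B0 * RE**3
    return np.array([-3*k*x*z, -3*k*y*z, k*(rr**2 - 3*z**2)]) / rr**5

def boris_step(r, v, dt):
    """Advance (r, v) by dt using the Boris rotation."""
    t = (q * dt / (2*m)) * dipole_B(r)
    s = 2*t / (1 + t @ t)
    v_prime = v + np.cross(v, t)
    v_new = v + np.cross(v_prime, s)
    return r + v_new * dt, v_new

E_kin = 10e3 * 1.602e-19                   # 10 keV in joules
speed = np.sqrt(2 * E_kin / m)
r = np.array([6.6 * RE, 0.0, 0.0])         # start at geosynchronous distance
v = np.array([0.0, speed, 0.0])
for _ in range(20000):                     # 200 s at dt = 10 ms
    r, v = boris_step(r, v, 1e-2)
print("final position [RE]:", r / RE)
```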
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
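The structure of such an empirical model, fitted coefficients interpolated piecewise-linearly in heliocentric distance, can be sketched compactly. The functional form and coefficient values below are invented placeholders, not the fitted 67P model.

```python
# Sketch: fit coefficients of a simple coma distribution at a few heliocentric
# distances, then interpolate them piecewise-linearly in r_h. The 1/r^2 falloff
# with a day/night asymmetry term is a deliberately simplified stand-in.
import numpy as np

def coma_density(r, sza, A, B):
    """Hypothetical form: n = (A / r^2) * (1 + B*cos(sza))."""
    return (A / r**2) * (1.0 + B * np.cos(sza))

rh_grid = np.array([1.3, 2.0, 3.0])       # heliocentric distances [AU]
A_grid = np.array([5e11, 1.2e11, 2e10])   # invented fitted amplitudes
B_grid = np.array([0.8, 0.6, 0.4])        # invented asymmetry coefficients

def model(r, sza, rh):
    A = np.interp(rh, rh_grid, A_grid)    # piecewise-linear in r_h
    B = np.interp(rh, rh_grid, B_grid)
    return coma_density(r, sza, A, B)

# density 30 km from the nucleus, subsolar direction, at 1.5 AU
print(model(30e3, 0.0, 1.5))
```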
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be taken when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
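The bias mechanism described here is easy to reproduce in a small Monte Carlo: generate land-use choices from a logit model, contaminate the observed labels with a known misclassification rate, and refit naively. The setup below (statsmodels, one covariate, invented magnitudes) only illustrates the attenuation; it is not the correction method the paper develops.

```python
# Naive logit under label misclassification: the estimated slope shrinks
# toward zero as the error rate grows, even at 1% error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, beta = 50_000, 1.5
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.2 + beta * x)))    # true choice probabilities
y = rng.binomial(1, p)

X = sm.add_constant(x)
for err in (0.00, 0.01, 0.10):                 # misclassification rates
    flip = rng.random(n) < err
    y_obs = np.where(flip, 1 - y, y)           # flipped "satellite" labels
    fit = sm.Logit(y_obs, X).fit(disp=0)
    print(f"error rate {err:.0%}: estimated beta = {fit.params[1]:.3f} (true {beta})")
```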
NASA Technical Reports Server (NTRS)
Cunningham, Ronan A.; McManus, Hugh L.
1996-01-01
It has previously been demonstrated that simple coupled reaction-diffusion models can approximate the aging behavior of PMR-15 resin subjected to different oxidative environments. Based on empirically observed phenomena, a model coupling chemical reactions, both thermal and oxidative, with diffusion of oxygen into the material bulk should allow simulation of the aging process. Through preliminary modeling techniques such as this it has become apparent that accurate analytical models cannot be created until the phenomena which cause the aging of these materials are quantified. An experimental program is currently underway to quantify all of the reaction/diffusion related mechanisms involved. The following contains a summary of the experimental data which has been collected through thermogravimetric analyses of neat PMR-15 resin, along with analytical predictions from models based on the empirical data. Thermogravimetric analyses were carried out in a number of different environments - nitrogen, air and oxygen. The nitrogen provides data for the purely thermal degradation mechanisms while those in air provide data for the coupled oxidative-thermal process. The intent here is to effectively subtract the nitrogen atmosphere data (assumed to represent only thermal reactions) from the air and oxygen atmosphere data to back-figure the purely oxidative reactions. Once purely oxidative (concentration dependent) reactions have been quantified it should then be possible to quantify the diffusion of oxygen into the material bulk.
Billieux, Joël; Philippot, Pierre; Schmid, Cécile; Maurage, Pierre; De Mol, Jan; Van der Linden, Martial
2015-01-01
Dysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance. The addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry
2009-01-01
In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.
Chang, Ivan; Baldi, Pierre
2013-05-15
Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models suffer from limitations to the steady-state domain, lack empirical validation or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from: http://www.igb.uci.edu/tools/sb/metabolic-modeling. Contact: pfbaldi@ics.uci.edu. Supplementary data are available at Bioinformatics online.
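What a mass-action "micro-model" of elementary enzyme-state transitions looks like in practice can be shown with the smallest possible example, the textbook single-substrate cycle E + S ⇌ ES → E + P, rather than one of the framework's oxidoreductase mechanisms. Rate constants and concentrations are invented.

```python
# Mass-action ODEs for E + S <-> ES -> E + P, integrated numerically.
# A minimal stand-in for the richer micro-models the framework composes.
import numpy as np
from scipy.integrate import solve_ivp

k1, k_1, k2 = 1e6, 1.0, 10.0     # binding, unbinding, catalysis (invented)

def rhs(t, y):
    E, S, ES, P = y
    v_bind = k1 * E * S - k_1 * ES   # net binding flux
    v_cat = k2 * ES                  # catalytic flux
    return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

sol = solve_ivp(rhs, (0.0, 5.0), [1e-6, 1e-3, 0.0, 0.0], method="LSODA")
print("product at t = 5 s:", sol.y[3, -1])
# In the steady-state limit this micro-model reduces to the familiar
# Michaelis-Menten macro-model: v = k2*E_tot*S/(Km + S), Km = (k_1 + k2)/k1.
```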
Conceptual Modeling in Systems Biology Fosters Empirical Findings: The mRNA Lifecycle
Dori, Dov; Choder, Mordechai
2007-01-01
One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, way beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to concurrently model the structure and behavior of biological systems. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems both diagrammatically and textually, formally and intuitively, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated consequently while maintaining consistency. PMID:17849002
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
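The residual-sampling idea generalizes beyond PV modeling and can be sketched with a two-stage stand-in for the paper's six-model chain; the component models and residual pools below are invented placeholders.

```python
# Propagate uncertainty through a chain of models by sampling each model's
# empirical residual distribution at every Monte Carlo draw.
import numpy as np

rng = np.random.default_rng(2)
resid_poa = rng.normal(0, 20, 500)   # stand-in residual pool, POA model [W/m^2]
resid_pwr = rng.normal(0, 1.5, 500)  # stand-in residual pool, power model [kW]

def poa_model(ghi):                  # placeholder transposition model
    return 1.1 * ghi

def power_model(poa):                # placeholder irradiance-to-power model
    return 0.25 * poa

ghi = 800.0                          # measured GHI [W/m^2]
outputs = []
for _ in range(10_000):
    poa = poa_model(ghi) + rng.choice(resid_poa)     # sample stage-1 residual
    pwr = power_model(poa) + rng.choice(resid_pwr)   # sample stage-2 residual
    outputs.append(pwr)
outputs = np.array(outputs)
print(f"predicted power: {outputs.mean():.1f} kW +/- {outputs.std():.1f} kW")
```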
Why Nuclear Forensics Needs New Plasma Chemistry Data
NASA Astrophysics Data System (ADS)
Rose, T.; Armstrong, M.; Chernov, A.; Crowhurst, J.; Dai, Z.; Knight, K.; Koroglu, B.; Radousky, H.; Stavrou, E.; Weisz, D.; Zaug, J.; Azer, M.; Finko, M.; Curreli, D.
2016-10-01
The mechanisms that control the distribution of radionuclides in fallout after a nuclear detonation are not adequately constrained. Current capabilities for assessing post-detonation scenarios often rely on empirical observations and approximations. Deeper insight into chemical condensation requires a coupled experimental, theoretical, and modeling approach. The behavior of uranium during plasma condensation is perplexing. Two independent methods are being developed to investigate gas phase uranium chemistry and speciation during plasma condensation: (1) laser-induced breakdown spectroscopy and (2) a unique steady-state ICP flow reactor. Both methods use laser absorption spectroscopy to obtain in situ data for vapor phase molecular species as they form. We are developing a kinetic model to describe the relative abundance of uranium species in the evolving plasma. Characterization of the uranium-oxygen system will be followed by other chemical components, including 'carrier' materials such as silica. The goal is to develop a semi-empirical model to describe the chemical fractionation of uranium during fallout formation. Prepared by LLNL under Contract DE-AC52-07NA27344. This project was sponsored in part by the Department of Defense, Defense Threat Reduction Agency, under Grant Number HDTRA1-16-1-0020.
Mechanistic materials modeling for nuclear fuel performance
Tonks, Michael R.; Andersson, David; Phillpot, Simon R.; ...
2017-03-15
Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). In this paper, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. Finally, this mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
Wood, Alexander
2004-01-01
This interim report describes an alternative approach for evaluating the efficacy of using mercury (Hg) offsets to improve water quality. Hg-offset programs may allow dischargers facing higher pollution-control costs to meet their regulatory obligations by making more cost-effective pollutant-reduction decisions. Efficient Hg management requires methods to translate that science and economics into a regulatory decision framework. This report documents the work in progress by the U.S. Geological Survey's Western Geographic Science Center in collaboration with Stanford University toward developing this decision framework to help managers, regulators, and other stakeholders decide whether offsets can cost-effectively meet the Hg total maximum daily load (TMDL) requirements in the Sacramento River watershed. Two key approaches being considered are: (1) a probabilistic approach that explicitly incorporates scientific uncertainty, cost information, and value judgments; and (2) a quantitative approach that captures uncertainty in testing the feasibility of Hg offsets. Current fate- and transport-process models commonly attempt to predict chemical transformations and transport pathways deterministically. However, the physical, chemical, and biologic processes controlling the fate and transport of Hg in aquatic environments are complex and poorly understood. Deterministic models of Hg environmental behavior contain large uncertainties, reflecting this lack of understanding. The uncertainty in these underlying physical processes may produce similarly large uncertainties in the decisionmaking process. However, decisions about control strategies are still being made despite the large uncertainties in current Hg loadings, the relations between total Hg (HgT) loading and methylmercury (MeHg) formation, and the relations between control efforts and Hg content in fish. The research presented here focuses on an alternative analytical approach to the current use of safety factors and deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Board's Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms.
This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decisionmaking process. The various sources of uncertainty are propagated as decision risk that allows decisionmakers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is on going. As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer
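The Monte Carlo flavor of such a Bayesian-network analysis can be sketched by propagating an uncertain HgT loading through an uncertain MeHg-conversion relation and an uncertain mitigation-cost relation; every distribution and coefficient below is an invented placeholder, not a Cache Creek value.

```python
# Propagate input uncertainty to decision-relevant quantities by sampling.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
hgt_load = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)  # HgT [kg/yr]
mehg_frac = rng.beta(2.0, 50.0, size=n)        # uncertain conversion fraction
mehg = hgt_load * mehg_frac                    # MeHg formed [kg/yr]
cost_per_kg = rng.normal(1.0e5, 2.0e4, size=n) # mitigation cost [$/kg reduced]
cost = 0.2 * hgt_load * cost_per_kg            # cost of a 20% load cut

print(f"P(MeHg > 2 kg/yr) = {np.mean(mehg > 2.0):.2f}")
print(f"median mitigation cost = ${np.median(cost)/1e6:.1f}M "
      f"(90% interval {np.percentile(cost, 5)/1e6:.1f}-"
      f"{np.percentile(cost, 95)/1e6:.1f}M)")
```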
Psychological Vulnerability to Completed Suicide: A Review of Empirical Studies.
ERIC Educational Resources Information Center
Conner, Kenneth R.; Duberstein, Paul R.; Conwell, Yeates; Seidlitz, Larry; Caine, Eric D.
2001-01-01
This article reviews empirical literature on psychological vulnerability to completed suicide. Five constructs have been consistently associated with completed suicide: impulsivity/aggression; depression; anxiety; hopelessness; and self-consciousness/social disengagement. Current knowledge of psychological vulnerability could inform social…
NASA Technical Reports Server (NTRS)
Martinovic, Zoran N.; Cerro, Jeffrey A.
2002-01-01
This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design such as the application of empirical weight estimation or application of classical engineering design equations and criteria on one dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.
A note on two-dimensional asymptotic magnetotail equilibria
NASA Technical Reports Server (NTRS)
Voigt, Gerd-Hannes; Moore, Brian D.
1994-01-01
In order to understand, on the fluid level, the structure, the time evolution, and the stability of current sheets, such as the magnetotail plasma sheet in Earth's magnetosphere, one has to consider magnetic field configurations that are in magnetohydrodynamic (MHD) force equilibrium. Any reasonable MHD current sheet model has to be two-dimensional, at least in an asymptotic sense (B_z/B_x = ε ≪ 1). The necessary two-dimensionality is described by a rather arbitrary function f(x). We utilize the free function f(x) to construct two-dimensional magnetotail equilibria that are 'equivalent' to current sheets in empirical three-dimensional models. We obtain a class of asymptotic magnetotail equilibria ordered with respect to the magnetic disturbance index Kp. For low Kp values the two-dimensional MHD equilibria reflect some of the realistic, observation-based aspects of three-dimensional models. For high Kp values the three-dimensional models do not fit the asymptotic MHD equilibria, which is indicative of their inconsistency with the assumed pressure function. This, in turn, implies that high magnetic activity levels of the real magnetosphere might be ruled by thermodynamic conditions different from local thermodynamic equilibrium.
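For readers unfamiliar with the stretched-tail expansion, the standard asymptotic equilibrium theory (not quoted from this abstract) can be summarized in two relations: a 2D equilibrium with flux function A and pressure p(A) obeys the Grad-Shafranov equation, and the ordering B_z/B_x = ε ≪ 1 reduces force balance, at leading order, to 1D pressure balance across the sheet, with the slow x-dependence carried by the free function.

```latex
% Hedged sketch in conventional notation; a(x) denotes the slow envelope
% playing the role of the free function f(x) above.
\nabla^2 A = -\mu_0 \frac{dp}{dA},
\qquad
p(x,z) + \frac{B_x^2(x,z)}{2\mu_0} = P(x) + O(\epsilon^2)
```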
Chiesa, Marco; Cirasola, Antonella; Williams, Riccardo; Nassisi, Valentina; Fonagy, Peter
2017-04-01
Although several studies have highlighted the relationship between attachment states of mind and personality disorders, their findings have not been consistent, possibly due to the application of the traditional taxonomic classification model of attachment. A more recently developed dimensional classification of attachment representations, including more specific aspects of trauma-related representations, may have advantages. In this study, we compare specific associations and predictive power of the categorical attachment and dimensional models applied to 230 Adult Attachment Interview transcripts obtained from personality disordered and nonpsychiatric subjects. We also investigate the role that current levels of psychiatric distress may have in the prediction of PD. The results showed that both models predict the presence of PD, with the dimensional approach doing better in discriminating overall diagnosis of PD. However, both models are less helpful in discriminating specific PD diagnostic subtypes. Current psychiatric distress was found to be the most consistent predictor of PD capturing a large share of the variance and obscuring the role played by attachment variables. The results suggest that attachment parameters correlate with the presence of PD alone and have no specific associations with particular PD subtypes when current psychiatric distress is taken into account.
Variability of bed drag on cohesive beds under wave action
Safak, Ilgar
2016-01-01
Drag force at the bed acting on water flow is a major control on water circulation and sediment transport. Bed drag has been thoroughly studied in sandy waters, but less so in muddy coastal waters. The variation of bed drag on a muddy shelf is investigated here using field observations of currents, waves, and sediment concentration collected during moderate wind and wave events. To estimate bottom shear stress and the bed drag coefficient, an indirect empirical method of logarithmic fitting to current velocity profiles (log-law), a bottom boundary layer model for combined wave-current flow, and a direct method that uses turbulent fluctuations of velocity are used. The overestimation by the log-law is significantly reduced by taking turbulence suppression due to sediment-induced stratification into account. The best agreement between the model and the direct estimates is obtained by using a hydraulic roughness of 10^-4 m in the model. Direct estimate of bed drag on the muddy bed is found to have a decreasing trend with increasing current speed, and is estimated to be around 0.0025 in conditions where wave-induced flow is relatively weak. Bed drag shows an increase (up to fourfold) with increasing wave energy. These findings can be used to test the bed drag parameterizations in hydrodynamic and sediment transport models and the skills of these models in predicting flows in muddy environments.
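The indirect log-law method mentioned above reduces to a linear fit of measured speed against the logarithm of height. A sketch with synthetic profile values, assuming u(z) = (u*/κ) ln(z/z0) and a drag coefficient referenced to 1 m:

```python
# Fit u(z) = (u*/kappa) * ln(z/z0), recover friction velocity u* and roughness
# length z0, then form Cd = (u*/u_ref)^2 at a reference height.
import numpy as np

kappa = 0.41
z = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # heights above bed [m]
u = np.array([0.18, 0.21, 0.24, 0.27, 0.30])    # synthetic current speeds [m/s]

slope, intercept = np.polyfit(np.log(z), u, 1)  # u = slope*ln(z) + intercept
u_star = kappa * slope
z0 = np.exp(-intercept / slope)
z_ref = 1.0                                      # reference height [m]
u_ref = slope * np.log(z_ref) + intercept
Cd = (u_star / u_ref) ** 2
print(f"u* = {u_star:.4f} m/s, z0 = {z0:.2e} m, Cd(1 m) = {Cd:.4f}")
```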
Modelling innovation performance of European regions using multi-output neural networks.
Hajek, Petr; Henriques, Roberto
2017-01-01
Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.
NASA Astrophysics Data System (ADS)
Michelini, Fabienne; Crépieux, Adeline; Beltako, Katawoura
2017-05-01
We discuss some thermodynamic aspects of energy conversion in electronic nanosystems able to convert light energy into electrical or/and thermal energy using the non-equilibrium Green’s function formalism. In a first part, we derive the photon energy and particle currents inside a nanosystem interacting with light and in contact with two electron reservoirs at different temperatures. Energy conservation is verified, and radiation laws are discussed from electron non-equilibrium Green’s functions. We further use the photon currents to formulate the rate of entropy production for steady-state nanosystems, and we recast this rate in terms of efficiency for specific photovoltaic-thermoelectric nanodevices. In a second part, a quantum dot based nanojunction is closely examined using a two-level model. We show analytically that the rate of entropy production is always positive, but we find numerically that it can reach negative values when the derived particle and energy currents are empirically modified as it is usually done for modeling realistic photovoltaic systems.
Examining depletion theories under conditions of within-task transfer.
Brewer, Gene A; Lau, Kevin K H; Wingert, Kimberly M; Ball, B Hunter; Blais, Chris
2017-07-01
In everyday life, mental fatigue can be detrimental across many domains including driving, learning, and working. Given the importance of understanding and accounting for the deleterious effects of mental fatigue on behavior, a growing body of literature has studied the role of motivational and executive control processes in mental fatigue. In typical laboratory paradigms, participants complete a task that places demand on these self-control processes and are later given a subsequent task. Generally speaking, decrements to subsequent task performance are taken as evidence that the initial task created mental fatigue through the continued engagement of motivational and executive functions. Several models have been developed to account for negative transfer resulting from this "ego depletion." In the current study, we provide a brief literature review, specify current theoretical approaches to ego-depletion, and report an empirical test of current models of depletion. Across 4 experiments we found minimal evidence for executive control depletion along with strong evidence for motivation mediated ego depletion. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Applying the cell-based coagulation model in the management of critical bleeding.
Ho, K M; Pavey, W
2017-03-01
The cell-based coagulation model was proposed 15 years ago, yet has not been applied commonly in the management of critical bleeding. Nevertheless, this alternative model may better explain the physiological basis of current coagulation management during critical bleeding. In this article we describe the limitations of the traditional coagulation protein cascade and standard coagulation tests, and explain the potential advantages of applying the cell-based model in current coagulation management strategies. The cell-based coagulation model builds on the traditional coagulation model and explains many recent clinical observations and research findings related to critical bleeding unexplained by the traditional model, including the encouraging results of using empirical 1:1:1 fresh frozen plasma:platelets:red blood cells transfusion strategy, and the use of viscoelastic and platelet function tests in patients with critical bleeding. From a practical perspective, applying the cell-based coagulation model also explains why new direct oral anticoagulants are effective systemic anticoagulants even without affecting activated partial thromboplastin time or the International Normalized Ratio in a dose-related fashion. The cell-based coagulation model represents the most cohesive scientific framework on which we can understand and manage coagulation during critical bleeding.
Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2002-01-01
Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)
An empirical and model study on automobile market in Taiwan
NASA Astrophysics Data System (ADS)
Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren
2006-03-01
We have done an empirical investigation of the automobile market in Taiwan, including the development of the possession rates of the companies in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model for describing the competition between the companies is suggested based on the empirical study. In the model each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). Then the companies play games in order to obtain more possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process, which qualitatively and quantitatively agrees with our empirical investigation results.
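A toy rendition of the kind of competition dynamics this abstract describes might look like the following; the update rule and all parameters are invented for illustration and are not the paper's model.

```python
# Each company has a fixed long-term factor and a fluctuating short-term
# factor; market share is reallocated each round in proportion to overall
# competitiveness, with incumbency advantage entering through current share.
import numpy as np

rng = np.random.default_rng(4)
long_term = np.array([1.2, 1.0, 0.8])        # technology / capital / scale
share = np.array([1 / 3, 1 / 3, 1 / 3])      # initial possession rates

for year in range(25):
    short_term = rng.uniform(0.8, 1.2, 3)    # management / service / ads
    fitness = long_term * short_term * share
    share = fitness / fitness.sum()
print("possession rates after 25 rounds:", np.round(share, 3))
```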
Ogburn, Sarah E.; Calder, Eliza S
2017-01-01
High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. 
For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.
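Of the models compared above, the ΔH/L energy cone is simple enough to state as a one-line calculation: an assumed mobility ratio ΔH/L converts the vertical drop into a runout length. The numbers below are illustrative, not Soufrière Hills values.

```python
# Energy-cone runout: L = dH / (dH/L). Sweeping the mobility ratio spans the
# range typically calibrated from flow databases such as FlowDat.
drop_height = 900.0                 # dH: vertical drop from collapse point [m]
for ratio in (0.1, 0.2, 0.3):       # assumed dH/L mobility ratios
    runout = drop_height / ratio    # horizontal runout [m]
    print(f"dH/L = {ratio:.1f}  ->  runout = {runout / 1000:.1f} km")
```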
A Comparison of Combustor-Noise Models
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
The present status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) for current-generation (N) turbofan engines is summarized. Several semi-empirical models for turbofan combustor noise are discussed, including best methods for near-term updates to ANOPP. An alternate turbine-transmission factor will appear as a user selectable option in the combustor-noise module GECOR in the next release. The three-spectrum model proposed by Stone et al. for GE turbofan-engine combustor noise is discussed and compared with ANOPP predictions for several relevant cases. Based on the results presented herein and in their report, it is recommended that the application of this fully empirical combustor-noise prediction method be limited to situations involving only General-Electric turbofan engines. Long-term needs and challenges for the N+1 through N+3 time frame are discussed. Because the impact of other propulsion-noise sources continues to be reduced due to turbofan design trends, advances in noise-mitigation techniques, and expected aircraft configuration changes, the relative importance of core noise is expected to greatly increase in the future. The noise-source structure in the combustor, including the indirect one, and the effects of the propagation path through the engine and exhaust nozzle need to be better understood. In particular, the acoustic consequences of the expected trends toward smaller, highly efficient gas-generator cores and low-emission fuel-flexible combustors need to be fully investigated since future designs are quite likely to fall outside of the parameter space of existing (semi-empirical) prediction tools.
Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target
Goschy, Harriet; Müller, Hermann Joseph
2013-01-01
Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using an empirical and modeling approach of the visual search distractor paradigm. For the empirical part, we manipulated salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
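The stochastic-selection account tested here can be simulated directly: give target and distractor noisy selection times whose means scale inversely with salience, and count the trials on which the distractor finishes first. Distributions and parameters below are invented.

```python
# Capture probability from overlapping selection-time distributions: even a
# distractor less salient than the target captures attention on some trials.
import numpy as np

rng = np.random.default_rng(3)

def selection_times(salience, n):
    # higher salience -> earlier selection on average, with trial-level noise
    return rng.gamma(shape=4.0, scale=50.0 / salience, size=n)

n = 100_000
for s_distractor in (0.8, 1.0, 1.5):    # relative to target salience 1.0
    t_target = selection_times(1.0, n)
    t_distr = selection_times(s_distractor, n)
    p_capture = np.mean(t_distr < t_target)
    print(f"distractor salience {s_distractor:.1f}x -> "
          f"capture on {p_capture:.1%} of trials")
```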
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis that utilizes the advantages of EBEs while overcoming their shortcomings; this approach not only markedly reduces the run time for population PK analysis but also provides more accurate covariate tests.
Bayesian modelling of lung function data from multiple-breath washout tests.
Mahar, Robert K; Carlin, John B; Ranganathan, Sarath; Ponsonby, Anne-Louise; Vuillermin, Peter; Vukcevic, Damjan
2018-05-30
Paediatric respiratory researchers have widely adopted the multiple-breath washout (MBW) test because it allows assessment of lung function in unsedated infants and is well suited to longitudinal studies of lung development and disease. However, a substantial proportion of MBW tests in infants fail current acceptability criteria. We hypothesised that a model-based approach to analysing the data, in place of traditional simple empirical summaries, would enable more efficient use of these tests. We therefore developed a novel statistical model for infant MBW data and applied it to 1197 tests from 432 individuals from a large birth cohort study. We focus on Bayesian estimation of the lung clearance index, the most commonly used summary of lung function from MBW tests. Our results show that the model provides an excellent fit to the data and shed further light on statistical properties of the standard empirical approach. Furthermore, the modelling approach enables the lung clearance index to be estimated by using tests with different degrees of completeness, something not possible with the standard approach. Our model therefore allows previously unused data to be used rather than discarded, as well as routine use of shorter tests without significant loss of precision. Beyond our specific application, our work illustrates a number of important aspects of Bayesian modelling in practice, such as the importance of hierarchical specifications to account for repeated measurements and the value of model checking via posterior predictive distributions. Copyright © 2018 John Wiley & Sons, Ltd.
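For contrast with the Bayesian model, the conventional empirical lung clearance index (LCI) computation can be sketched with an idealized, perfectly mixed lung; the volumes are invented, and real, inhomogeneous lungs yield higher LCI values than this ideal case.

```python
# Idealized washout: a fixed fractional tracer turnover per breath, with the
# standard 1/40th-of-start end-point; LCI = cumulative expired volume / FRC.
frc = 0.20            # functional residual capacity [L] (invented)
vt = 0.05             # effective tidal volume reaching the lung [L] (invented)
c = 1.0               # normalised tracer concentration
cumulative_expired = 0.0
breath = 0
while c > 1.0 / 40.0:                 # standard 2.5% end-point
    c *= frc / (frc + vt)             # dilution per breath (perfect mixing)
    cumulative_expired += vt
    breath += 1
lci = cumulative_expired / frc
print(f"end-point reached after {breath} breaths; LCI = {lci:.1f}")
```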
Alladin, Assen; Sabatini, Linda; Amundson, Jon K
2007-04-01
This paper briefly surveys the trend of and controversy surrounding empirical validation in psychotherapy. Empirical validation of hypnotherapy has paralleled the practice of validation in psychotherapy and the professionalization of clinical psychology, in general. This evolution in determining what counts as evidence for bona fide clinical practice has gone from theory-driven clinical approaches in the 1960s and 1970s through critical attempts at categorization of empirically supported therapies in the 1990s on to the concept of evidence-based practice in 2006. Implications of this progression in professional psychology are discussed in the light of hypnosis's current quest for validation and empirical accreditation.
Kim, Tae-Ho; Yang, Chan-Su; Oh, Jeong-Hwan; Ouchi, Kazuo
2014-01-01
The purpose of this study is to investigate the effects of the wind drift factor, under the strong tidal conditions of the western coastal area of Korea, on the movement of oil slicks caused by the Hebei Spirit oil spill accident in 2007. The movement of oil slicks was computed using a simple simulation model based on an empirical formula that is a function of surface current, wind speed, and the wind drift factor. For the simulation, the Environmental Fluid Dynamics Code (EFDC) model and Automatic Weather System (AWS) data were used to generate the tidal and wind fields, respectively. Simulation results were then compared with 5 sets of spaceborne optical and synthetic aperture radar (SAR) data. It was found that the highest matching rate between the simulation results and satellite imagery was obtained with different values of the wind drift factor, and that, to first order, this factor was linearly proportional to the wind speed. Based on these results, a new modified empirical formula was proposed for forecasting the movement of oil slicks in coastal areas. PMID:24498094
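A hedged sketch of the empirical advection formula described above: the additive current-plus-wind-drift form is the standard one, but the specific velocities and the baseline drift factor of 0.03 are assumptions, and the study's point is that the factor itself varies roughly linearly with wind speed.

```python
# Slick velocity = surface current + (wind drift factor) * wind velocity.
import numpy as np

def advect_slick(pos, u_current, u_wind, alpha, dt):
    """pos, u_current, u_wind: 2-vectors (m, m/s); alpha: dimensionless
    wind drift factor (classically ~0.03); dt: time step (s)."""
    u_slick = u_current + alpha * u_wind
    return pos + u_slick * dt

pos = np.array([0.0, 0.0])
u_tide = np.array([0.5, -0.2])     # tidal current, e.g. from an EFDC run
u_wind = np.array([8.0, 3.0])      # wind, e.g. from AWS observations
alpha = 0.03                       # assumed baseline drift factor
for _ in range(24):                # 24 hourly steps
    pos = advect_slick(pos, u_tide, u_wind, alpha, dt=3600.0)
print(pos / 1000.0, "km displacement after 24 h")
```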
The Tell-Tale Look: Viewing Time, Preferences, and Prices
Gunia, Brian C.; Murnighan, J. Keith
2015-01-01
Even the simplest choices can prompt decision-makers to balance their preferences against other, more pragmatic considerations like price. Thus, discerning people’s preferences from their decisions creates theoretical, empirical, and practical challenges. The current paper addresses these challenges by highlighting some specific circumstances in which the amount of time that people spend examining potential purchase items (i.e., viewing time) can in fact reveal their preferences. Our model builds from the gazing literature, in a purchasing context, to propose that the informational value of viewing time depends on prices. Consistent with the model’s predictions, four studies show that when prices are absent or moderate, viewing time provides a signal that is consistent with a person’s preferences and purchase intentions. When prices are extreme or consistent with a person’s preferences, however, viewing time is a less reliable predictor of either. Thus, our model highlights a price-contingent “viewing bias,” shedding theoretical, empirical, and practical light on the psychology of preferences and visual attention, and identifying a readily observable signal of preference. PMID:25581382
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft-dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.
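The aircraft-dependent initial conditions named above follow from textbook wake-vortex relations for an elliptically loaded wing; the sketch below is not the APA/TDP source code, and the aircraft numbers are illustrative.

```python
# Initial vortex spacing b0 = (pi/4) * span, circulation from the lift
# balance M*g = rho * V * Gamma0 * b0, and mutual-induction descent speed
# w0 = Gamma0 / (2*pi*b0).
import math

def initial_vortex_state(mass_kg, span_m, speed_mps, rho=1.225, g=9.81):
    b0 = math.pi / 4.0 * span_m                     # pair separation (m)
    gamma0 = mass_kg * g / (rho * speed_mps * b0)   # circulation (m^2/s)
    w0 = gamma0 / (2.0 * math.pi * b0)              # descent speed (m/s)
    return b0, gamma0, w0

# e.g. a heavy transport on approach (illustrative numbers)
b0, gamma0, w0 = initial_vortex_state(mass_kg=250e3, span_m=60.0,
                                      speed_mps=75.0)
print(f"b0 = {b0:.1f} m, Gamma0 = {gamma0:.0f} m^2/s, w0 = {w0:.2f} m/s")
```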
Generic Sensor Failure Modeling for Cooperative Systems.
Jäger, Georg; Zug, Sebastian; Casimiro, António
2018-03-20
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance, thereby promising maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
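An illustrative sketch, not the paper's processing chain, of extracting a simple distance-dependent failure model from empirical sensor data; the GP2D12-like noise law is invented for the example.

```python
# Bin reference distances and fit the error mean/spread per bin, yielding
# a crude distance-dependent Gaussian failure model.
import numpy as np

rng = np.random.default_rng(1)
ref = rng.uniform(0.1, 0.8, 2000)                 # true distances (m)
meas = ref + rng.normal(0.0, 0.005 + 0.02 * ref)  # toy GP2D12-like noise

bins = np.linspace(0.1, 0.8, 8)
idx = np.digitize(ref, bins)
model = {}
for b in range(1, len(bins)):
    err = meas[idx == b] - ref[idx == b]
    model[(bins[b - 1], bins[b])] = (err.mean(), err.std())

for span, (mu, sigma) in model.items():
    print(f"{span[0]:.2f}-{span[1]:.2f} m: bias={mu:+.4f}, sd={sigma:.4f}")
```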
Progress toward a circulation atlas for application to coastal water siting problems
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Gordon, H. H.
1978-01-01
Circulation data needed to resolve coastal siting problems are assembled from historical hydrographic and remote sensing studies in the form of a Circulation Atlas. Empirical data are used instead of numerical model simulations to achieve fine resolution and to include fronts and convergence zones. Eulerian and Lagrangian data are collected, transformed, and combined into trajectory maps and current vector maps as a function of tidal phase and wind vector. Initial Atlas development is centered on the Elizabeth River, Hampton Roads, Virginia.
NASA Astrophysics Data System (ADS)
Xu, Shiluo; Niu, Ruiqing
2018-02-01
Every year, landslides pose huge threats to thousands of people in China, especially in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most landslide displacement prediction models that have been proposed are static; landslides, however, are dynamic systems. In this paper, the total accumulated displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information. These factors include the cumulative precipitation during the previous month, the cumulative precipitation during a two-month period, the reservoir level during the current month, the change in the reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. When using one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield 10.686 mm and 8.237 mm, respectively; the Elman network yields 6.579 mm. When using multi-step-ahead prediction, LSTM obtains an RMSE of 8.648 mm, while SVR, BP, and the Elman network obtain 13.418 mm, 13.014 mm, and 13.370 mm, respectively. The predicted results indicate that, to some extent, the dynamic model (LSTM) achieves more accurate results than the static models (SVR and BP). LSTM even outperforms the Elman network, which is also a dynamic method.
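A compact sketch of the decomposition-plus-LSTM scheme described above, under simplifying assumptions: a logistic S-curve stands in for the EMD trend, the LSTM sees only lagged displacement rather than the six triggering factors, the synthetic series is invented, and TensorFlow/Keras is used as one possible LSTM implementation.

```python
# Split displacement into a trend and a periodic residual, fit an S-curve
# to the trend, and train a small LSTM on lagged windows of the residual.
import numpy as np
from scipy.optimize import curve_fit
import tensorflow as tf

t = np.arange(60, dtype=float)                      # months
disp = 200 / (1 + np.exp(-0.15 * (t - 30))) + 5 * np.sin(2 * np.pi * t / 12)

# Trend component via a logistic S-curve (EMD supplies it in the paper)
s_curve = lambda t, K, k, t0: K / (1 + np.exp(-k * (t - t0)))
(K, k, t0), _ = curve_fit(s_curve, t, disp, p0=(200, 0.1, 30))
periodic = disp - s_curve(t, K, k, t0)

# Periodic component: one-step-ahead LSTM on a 12-month window
w = 12
X = np.stack([periodic[i:i + w] for i in range(len(periodic) - w)])[..., None]
y = periodic[w:]
model = tf.keras.Sequential([tf.keras.layers.LSTM(16, input_shape=(w, 1)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)
pred = model.predict(X, verbose=0).ravel()
print(f"in-sample RMSE of periodic term: {np.sqrt(np.mean((pred - y)**2)):.2f} mm")
```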
Environmentally-induced discharge transient coupling to spacecraft
NASA Technical Reports Server (NTRS)
Viswanathan, R.; Barbay, G.; Stevens, N. J.
1985-01-01
The Hughes SCREENS (Space Craft Response to Environments of Space) technique was applied to generic spin-stabilized and 3-axis stabilized spacecraft models. It involved NASCAP modeling of surface charging and lumped-element modeling of transients coupling into a spacecraft. A discharge at the antenna of the spinner produced a differential voltage of approx. 400 V between the antenna and the spun shelf and a current of 12 A; a discharge at the solar panels of the 3-axis stabilized spacecraft produced approx. 3 kV and 0.3 A. A typical interface circuit response was analyzed to show that the transients would couple into the spacecraft system through ground points, which are the most vulnerable. A compilation and review was performed on 15 years of available data on electron and ion current collection phenomena. Empirical models were developed to match these data and were compared with flight data from the PIX-1 and PIX-2 missions. It was found that large space power systems would float negative and discharge if operated at or above 300 V. Several recommendations are given to improve the models and to apply them to large space systems.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed, by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing interrelated outcomes with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage yields mean and covariance parameter estimates that differ from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from an MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
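A sketch of the second-stage association idea using the separate-models shortcut discussed above; the data are illustrative, and statsmodels fits each univariate mixed model rather than a joint MGLMM.

```python
# Fit a random-intercept model per outcome, extract the empirical Bayes
# predictors, and correlate them in a second stage.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
n_subj, n_rep = 60, 5
u = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n_subj)
rows = []
for i in range(n_subj):
    for _ in range(n_rep):
        rows.append({"id": i,
                     "y1": 10 + u[i, 0] + rng.normal(0, 1),
                     "y2": 5 + u[i, 1] + rng.normal(0, 1)})
df = pd.DataFrame(rows)

eb = {}
for out in ("y1", "y2"):
    fit = smf.mixedlm(f"{out} ~ 1", df, groups=df["id"]).fit()
    eb[out] = np.array([fit.random_effects[i].iloc[0] for i in range(n_subj)])

r, p = stats.pearsonr(eb["y1"], eb["y2"])
print(f"second-stage correlation of EB predictors: r = {r:.2f} (p = {p:.2g})")
# Shrinkage attenuates r relative to the latent correlation (0.6), the
# kind of discrepancy the joint MGLMM approach is designed to avoid.
```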
Combining Empirical and Stochastic Models for Extreme Floods Estimation
NASA Astrophysics Data System (ADS)
Zemzami, M.; Benaabidate, L.
2013-12-01
Hydrological models can be classified as physical, mathematical or empirical; the latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models, and such conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are ungauged. This is true even in developed countries, where gauging networks have continued to decline as a result of a lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models is needed that can generate the required data and use them to estimate flows. The estimation of design floods illustrates the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was established to estimate the design flood in the Anseghmir catchments, Morocco. This composite model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method proved a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes, etc.). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes, and design floods for different return periods.
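A hedged sketch of a Gradex-type extrapolation, one common reading of the method named above; the discharge record, rainfall gradex, and pivot return period are all assumed values.

```python
# Fit a Gumbel law to observed floods by moments; beyond a pivot return
# period, let the quantile grow with the rainfall gradex (the slope of
# the rainfall Gumbel plot, expressed here in discharge units).
import numpy as np

def gumbel_quantile(loc, scale, T):
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

q = np.array([120., 95., 160., 140., 180., 110., 200., 150., 130., 170.])
scale_q = np.sqrt(6.0) / np.pi * q.std(ddof=1)   # Gumbel moment estimates
loc_q = q.mean() - 0.5772 * scale_q

gradex_rain = 35.0      # rainfall gradex in discharge units (assumed)
T_pivot = 10.0          # pivot return period (years, assumed)

def design_flood(T):
    if T <= T_pivot:
        return gumbel_quantile(loc_q, scale_q, T)
    u_T = -np.log(-np.log(1 - 1 / T))
    u_p = -np.log(-np.log(1 - 1 / T_pivot))
    return gumbel_quantile(loc_q, scale_q, T_pivot) + gradex_rain * (u_T - u_p)

for T in (10, 100, 1000):
    print(f"T = {T:>4} yr: Q = {design_flood(T):.0f} m^3/s")
```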
Hawwa, Ahmed F; Collier, Paul S; Millership, Jeff S; McCarthy, Anthony; Dempsey, Sid; Cairns, Carole; McElnay, James C
2008-01-01
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: The cytotoxic effects of 6-mercaptopurine (6-MP) were found to be due to drug-derived intracellular metabolites (mainly 6-thioguanine nucleotides and to some extent 6-methylmercaptopurine nucleotides) rather than the drug itself. Current empirical dosing methods for oral 6-MP result in highly variable drug and metabolite concentrations and hence variability in treatment outcome. WHAT THIS STUDY ADDS: The first population pharmacokinetic model has been developed for 6-MP active metabolites in paediatric patients with acute lymphoblastic leukaemia, and the potential demographic and genetically controlled factors that could lead to interpatient pharmacokinetic variability among this population have been assessed. The model shows a large reduction in interindividual variability of pharmacokinetic parameters when body surface area and thiopurine methyltransferase polymorphism are incorporated into the model as covariates. The developed model offers a more rational dosing approach for 6-MP than the traditional empirical method (based on body surface area) by combining it with pharmacogenetically guided dosing based on thiopurine methyltransferase genotype. AIMS: To investigate the population pharmacokinetics of 6-mercaptopurine (6-MP) active metabolites in paediatric patients with acute lymphoblastic leukaemia (ALL) and examine the effects of various genetic polymorphisms on the disposition of these metabolites. METHODS: Data were collected prospectively from 19 paediatric patients with ALL (n = 75 samples, 150 concentrations) who received 6-MP maintenance chemotherapy (titrated to a target dose of 75 mg m⁻² day⁻¹). All patients were genotyped for polymorphisms in three enzymes involved in 6-MP metabolism. Population pharmacokinetic analysis was performed with the nonlinear mixed-effects modelling program (NONMEM) to determine the population mean parameter estimate of clearance for the active metabolites. RESULTS: The developed model revealed considerable interindividual variability (IIV) in the clearance of 6-MP active metabolites [6-thioguanine nucleotides (6-TGNs) and 6-methylmercaptopurine nucleotides (6-mMPNs)]. Body surface area explained a significant part of 6-TGN clearance IIV when incorporated in the model (IIV reduced from 69.9 to 29.3%). The most influential covariate examined, however, was thiopurine methyltransferase (TPMT) genotype, which resulted in the greatest reduction in the model's objective function (P < 0.005) when incorporated as a covariate affecting the fractional metabolic transformation of 6-MP into 6-TGNs. The other genetic covariates tested were not statistically significant and therefore were not included in the final model. CONCLUSIONS: The developed pharmacokinetic model (if successful at external validation) would offer a more rational dosing approach for 6-MP than the traditional empirical method, since it combines the current practice of using body surface area in 6-MP dosing with pharmacogenetically guided dosing based on TPMT genotype. PMID:18823306
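A hypothetical sketch of the covariate structure described in the abstract; every parameter name and value below is illustrative rather than a fitted estimate.

```python
# Hypothetical covariate structure (illustrative thetas, not the fitted
# NONMEM values): BSA scales apparent 6-TGN clearance; a reduced-activity
# TPMT genotype raises the fraction of 6-MP transformed into 6-TGNs.
def individual_parameters(bsa_m2, tpmt_deficient,
                          theta_cl=1.0, f_tgn_wild=0.2, theta_tpmt=2.0):
    cl_tgn = theta_cl * (bsa_m2 / 1.73)      # clearance scaled by BSA
    f_tgn = f_tgn_wild * (theta_tpmt if tpmt_deficient else 1.0)
    return cl_tgn, min(f_tgn, 1.0)

for bsa, tpmt in [(0.8, False), (0.8, True), (1.5, False)]:
    cl, f = individual_parameters(bsa, tpmt)
    print(f"BSA={bsa} m^2, TPMT variant={tpmt}: CL={cl:.2f} (rel.), F_TGN={f:.2f}")
```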
An exponential decay model for mediation.
Fritz, Matthew S
2014-10-01
Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models, including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.
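A minimal sketch of the exponential-decay trajectory underlying the mediation model discussed above; the full model embeds this curve in a latent growth mediation framework, and the data here are simulated.

```python
# Exponential decay: value moves from `initial` toward `asymptote` at
# speed `rate`; each parameter has a direct substantive reading, the
# advantage over quadratic terms noted in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, asymptote, initial, rate):
    return asymptote + (initial - asymptote) * np.exp(-rate * t)

t = np.linspace(0, 10, 11)
rng = np.random.default_rng(3)
y = exp_decay(t, asymptote=2.0, initial=10.0, rate=0.6) \
    + rng.normal(0, 0.3, t.size)

params, _ = curve_fit(exp_decay, t, y, p0=(1.0, 8.0, 0.5))
print("asymptote, initial, rate:", np.round(params, 2))
```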
NASA Astrophysics Data System (ADS)
Bell, M. D.; Walker, J. T.
2017-12-01
Atmospheric deposition of nitrogen compounds is determined using a variety of measurement and modeling methods. These values are then used to calculate fluxes to the ecosystem, which can then be linked to ecological responses. But for these data to be used outside the system in which they were developed, it is necessary to understand how the deposition estimates relate to one another. Therefore, we first identified sources of "bulk" deposition data and compared methods, reliability of data, and consistency of results with one another. Then we examined the variation within photochemical models that are used by federal agencies to evaluate national trends. Finally, we identified some best practices for researchers to consider if their assessment is intended for use at broader scales. Empirical measurements used in this assessment include passive collection of atmospheric molecules, throughfall deposition of precipitation, snowpack measurements, and biomonitors such as lichen. The three most common photochemical models used to model deposition within the United States are CMAQ, CAMx, and TDep (which uses empirical data to refine modeled values). These models all use meteorological and emission data to estimate deposition at local, regional, or national scales. We identified the range of uncertainty that exists within the types of deposition measurements and how these vary over space and time. Uncertainty is assessed by comparing deposition estimates from differing collection methods and by comparing modeled estimates to empirical deposition data. Each collection method has benefits and drawbacks that need to be taken into account if the results are to be extended outside the research area. Comparing field-measured values to modeled values highlights the importance of each in the greater goal of understanding current conditions and trends within deposition patterns in the US. While models work well at larger scales, they cannot replicate the local heterogeneity that exists at a site. Often, each researcher has a favorite method of analysis, but if the data cannot be related to other efforts, it becomes harder to apply them to broader policy considerations.
Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.
1984-03-01
Research Report CCS 481, Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates, by P. Brockett and B. Golany, March 1984. ... applying these tests to the U.S. dollar to Japanese yen foreign exchange rates. Conclusions and discussion are given in Section VI.
NASA Astrophysics Data System (ADS)
Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.
2017-12-01
The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known whether this nascent capability can translate into skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first is an empirical, observationally based system for estimating the plasma characteristics. Its magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a locally cylindrical flux rope geometry around Earth's trajectory; the remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfvén Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long-lead-time predictions of dB/dt are compared to model results driven by L1 solar wind observations, and both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes. Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.
Spatial correlation of auroral zone geomagnetic variations
NASA Astrophysics Data System (ADS)
Jackel, B. J.; Davalos, A.
2016-12-01
Magnetic field perturbations in the auroral zone are produced by a combination of distant ionospheric and local ground induced currents. Spatial and temporal structure of these currents is scientifically interesting and can also have a significant influence on critical infrastructure. Ground-based magnetometer networks are an essential tool for studying these phenomena, with the existing complement of instruments in Canada providing extended local time coverage. In this study we examine the spatial correlation between magnetic field observations over a range of scale lengths. Principal component and canonical correlation analysis are used to quantify relationships between multiple sites. Results could be used to optimize network configurations, validate computational models, and improve methods for empirical interpolation.
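An illustrative sketch, with an assumed data layout, of the two analyses named above: principal components of one station's three-component perturbations, and the leading canonical correlation between two stations computed from orthonormal bases of the centered data.

```python
# PCA via SVD of centered data; canonical correlations are the singular
# values of Qa^T Qb for orthonormal bases Qa, Qb of the centered data.
import numpy as np

rng = np.random.default_rng(4)
common = rng.normal(size=(5000, 1))                 # shared current signature
A = common @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(5000, 3))
B = common @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(5000, 3))

# PCA of station A (columns = Bx, By, Bz perturbations)
Ac = A - A.mean(0)
_, s, _ = np.linalg.svd(Ac, full_matrices=False)
print("variance fractions:", np.round(s**2 / np.sum(s**2), 2))

def orthobasis(X):
    Xc = X - X.mean(0)
    u, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return u  # orthonormal basis of the centered column space

r = np.linalg.svd(orthobasis(A).T @ orthobasis(B), compute_uv=False)
print("leading canonical correlation:", round(float(r[0]), 2))
```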
Locally adaptive, spatially explicit projection of US population for 2030 and 2050.
McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L
2015-02-03
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of the projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood of future population change. Population projections of county-level numbers were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service-area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
NASA Astrophysics Data System (ADS)
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
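A sketch of the spectral model at issue, fit here by ordinary least squares rather than the study's Bayesian hierarchical scheme (the spectrum and noise are simulated): fixing the falloff rate n = 2 when the true value is shallower inflates the recovered corner frequency, the bias described above.

```python
# Omega-type source spectrum S(f) = Omega0 / (1 + (f/fc)^n).
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, n):
    return omega0 / (1.0 + (f / fc) ** n)

f = np.logspace(-1, 1, 200)                  # Hz
rng = np.random.default_rng(5)
obs = source_spectrum(f, 1.0, 1.0, 1.6) * np.exp(rng.normal(0, 0.1, f.size))

# free falloff rate
(_, fc_free, n_free), _ = curve_fit(source_spectrum, f, obs,
                                    p0=(1.0, 0.5, 2.0))
# falloff rate fixed at 2
(_, fc_fixed), _ = curve_fit(lambda f, w, fc: source_spectrum(f, w, fc, 2.0),
                             f, obs, p0=(1.0, 0.5))
print(f"free n: fc = {fc_free:.2f} (n = {n_free:.2f}); "
      f"n fixed at 2: fc = {fc_fixed:.2f}")
# With n fixed at 2, fc is biased high, as described in the abstract.
```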
Lee, Won Hee; Lisanby, Sarah H; Laine, Andrew F; Peterchev, Angel V
2013-01-01
This study examines the characteristics of the electric field induced in the brain by electroconvulsive therapy (ECT) with individualized current amplitude. The electric field induced by bilateral (BL), bifrontal (BF), right unilateral (RUL), and frontomedial (FM) ECT electrode configurations was computed in anatomically realistic finite element models of four nonhuman primates (NHPs). We generated maps of the electric field strength relative to an empirical neural activation threshold, and determined the stimulation strength and focality at fixed current amplitude and at individualized current amplitudes corresponding to seizure threshold (ST) measured in the anesthetized NHPs. The results show less variation in brain volume stimulated above threshold with individualized current amplitudes (16-36%) compared to fixed current amplitude (30-62%). Further, the stimulated brain volume at amplitude-titrated ST is substantially lower than that for ECT with conventional fixed current amplitudes. Thus individualizing the ECT stimulus current could compensate for individual anatomical variability and result in more focal and uniform electric field exposure across different subjects compared to the standard clinical practice of using high, fixed current for all patients.
Reduction of magnetorheological damper stiffness by incorporating an eddy current damper
NASA Astrophysics Data System (ADS)
Asghar Maddah, Ali; Hojjat, Yousef; Reza Karafi, Mohammad; Reza Ashory, Mohammad
2017-05-01
In this paper, a hybrid damper is developed to achieve lower stiffness than magnetorheological dampers. The hybrid damper consists of an eddy current damper (ECD) and a magnetorheological damper (MRD). The aim of this research is to reduce the stiffness of MRDs while providing equal damping force, which is done by adding a passive eddy current damper to a semi-active MRD. ECDs are contactless dampers that show an almost purely viscous damping behavior without increasing the stiffness of a system, whereas MRDs increase the damping and stiffness of a system simultaneously when a magnetic field is applied. The damping of each part is studied theoretically and experimentally, and a semi-empirical model is developed to explain the viscoelastic behavior of the damper. The experimental results showed that the hybrid damper is able to dissipate as much energy as an MRD while its stiffness is 12% lower at zero excitation current.
Research-Based Implementation of Peer Instruction: A Literature Review
Vickrey, Trisha; Rosploch, Kaitlyn; Rahmanian, Reihaneh; Pilarz, Matthew; Stains, Marilyne
2015-01-01
Current instructional reforms in undergraduate science, technology, engineering, and mathematics (STEM) courses have focused on enhancing adoption of evidence-based instructional practices among STEM faculty members. These practices have been empirically demonstrated to enhance student learning and attitudes. However, research indicates that instructors often adapt rather than adopt practices, unknowingly compromising their effectiveness. Thus, there is a need to raise awareness of the research-based implementation of these practices, develop fidelity of implementation protocols to understand adaptations being made, and ultimately characterize the true impact of reform efforts based on these practices. Peer instruction (PI) is an example of an evidence-based instructional practice that consists of asking students conceptual questions during class time and collecting their answers via clickers or response cards. Extensive research has been conducted by physics and biology education researchers to evaluate the effectiveness of this practice and to better understand the intricacies of its implementation. PI has also been investigated in other disciplines, such as chemistry and computer science. This article reviews and summarizes these various bodies of research and provides instructors and researchers with a research-based model for the effective implementation of PI. Limitations of current studies and recommendations for future empirical inquiries are also provided. PMID:25713095
Brookman-Frazee, Lauren; Stahmer, Aubyn; Baker-Ericzen, Mary J.; Tsai, Katherine
2012-01-01
Empirical support exists for parent training/education (PT/PE) interventions for children with disruptive behavior disorders (DBD) and autism spectrum disorders (ASD). While the models share common roots, current approaches have largely developed independently and the research findings have been disseminated in two different literature traditions: mental health and developmental disabilities. Given that these populations often have overlapping clinical needs and are likely to receive services in similar settings, efforts to integrate the knowledge gained in the disparate literature may be beneficial. This article provides a systematic overview of the current (1995–2005) empirical research on PT/PE for children with DBD and ASD; attending to factors for cross-fertilization. Twenty-two ASD and 38 DBD studies were coded for review. Literature was compared in three main areas: (1) research methodology, (2) focus of PT/PE intervention, and (3) PT/PE procedures. There was no overlap in publication outlets between the studies for the two populations. Results indicate that there are opportunities for cross-fertilization in the areas of (1) research methodology, (2) intervention targets, and (3) format of parenting interventions. The practical implications of integrating these two highly related areas of research are identified and discussed. PMID:17053963
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process, mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances over various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaic (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study open the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
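For reference, a sketch of the textbook isotropic transposition model used as a baseline above; the empirical and radiative-transfer models replace the sky-diffuse term, and the input irradiances and albedo here are illustrative.

```python
# POA irradiance = beam + isotropic sky diffuse + ground-reflected terms.
import math

def poa_isotropic(dni, dhi, ghi, tilt_deg, incidence_deg, albedo=0.2):
    beta = math.radians(tilt_deg)
    aoi = math.radians(incidence_deg)
    beam = dni * max(math.cos(aoi), 0.0)
    sky_diffuse = dhi * (1.0 + math.cos(beta)) / 2.0
    ground = ghi * albedo * (1.0 - math.cos(beta)) / 2.0
    return beam + sky_diffuse + ground

# clear-sky example (W/m^2, illustrative numbers)
poa = poa_isotropic(dni=850, dhi=100, ghi=700, tilt_deg=35, incidence_deg=20)
print(f"POA = {poa:.0f} W/m^2")
```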
Realising the Mass Public Benefit of Evidence-Based Psychological Therapies: The IAPT Program
Clark, David M
2018-01-01
Empirically supported psychological therapies have been developed for many mental health conditions. However, in most countries only a small proportion of the public benefit from these advances. The English Improving Access to Psychological Therapies (IAPT) program aims to bridge the gap between research and practice by training over 10,500 new psychological therapists in empirically supported treatments and deploying them in new services for the treatment of depression and anxiety disorders. Currently IAPT treats over 560,000 patients per year, obtains clinical outcome data on 98.5% of these individuals and places this information in the public domain. Around 50% of patients treated in IAPT services recover and two-thirds show worthwhile benefits. The clinical and economic arguments on which IAPT is based are presented, along with details of the service model, how the program was implemented, and recent findings about service organization. Limitations and future directions are outlined. PMID:29350997
van den Oord, Ad; van Witteloostuijn, Arjen
2017-01-01
Technological change as an evolutionary process is currently not well understood. To increase our understanding, we build upon theory from organizational ecology to develop a model of endogenous technological growth and determine to what extent the pattern of technological growth can be attributed to the structural or systemic characteristics of the technology itself. Through an empirical investigation of patent data in the biotechnology industry from 1976 to 2003, we find that a technology's internal (i.e., density and diversity) ecological characteristics have a positive effect on its growth rate, while the niche's external characteristics of crowding and status have a negative effect on its growth rate. Hence, applying theory from organizational ecology increases our understanding of technological change as an evolutionary process. We discuss the implications of our findings for the study of technological growth and evolution, and suggest avenues for further research. PMID:28081570
Bynion, Teah-Marie; Blumenthal, Heidemarie; Bilsky, Sarah A; Cloutier, Renee M; Leen-Feldner, Ellen W
2017-10-01
Social anxiety is the most common anxiety disorder among youth; theoretical and empirical work suggests specific parenting behaviors may be relevant. However, findings are inconsistent, particularly in terms of maternal as compared to paternal effects. In the current study, we evaluated the indirect effects of perceived psychological control on the relation between anxious rearing behaviors and child social anxiety among 112 community-recruited girls (ages 12-15 years). In addition to self-report, adolescent participants completed a laboratory-based social stress task. In line with hypotheses, results indicated indirect effects of psychological control on the relation between anxious rearing behaviors and child social anxiety in maternal but not paternal models. Findings are discussed in terms of their theoretical and empirical implications for clarifying the role of parental relations in adolescent social anxiety. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Health risk perception and betel chewing behavior--the evidence from Taiwan.
Chen, Chiang-Ming; Chang, Kuo-Liang; Lin, Lin; Lee, Jwo-Leun
2013-11-01
In this study, we provide an empirical examination of the interaction between people's health risk perception and betel chewing. We hypothesized that better knowledge of the possible health risks would reduce both the number of individuals who currently chew betel and the likelihood that those who do not yet chew betel will begin the habit. We constructed a simultaneous equation model with a Bayesian two-stage approach to control for the endogeneity between betel chewing and risk perception. Using a national survey of 26,684 observations in Taiwan, our results indicated that better health knowledge reduced the possibility that people would become betel chewers. We also found that, in general, betel chewers have poorer health risk perception than the rest of the population. Overall, the empirical evidence suggests that health authorities could reduce the odds of people becoming betel chewers by improving public knowledge of betel chewing's harmful effects. © 2013 Elsevier Ltd. All rights reserved.
Uncoupling of reading and IQ over time: empirical evidence for a definition of dyslexia.
Ferrer, Emilio; Shaywitz, Bennett A; Holahan, John M; Marchione, Karen; Shaywitz, Sally E
2010-01-01
Developmental dyslexia is defined as an unexpected difficulty in reading in individuals who otherwise possess the intelligence and motivation considered necessary for fluent reading, and who also have had reasonable reading instruction. Identifying factors associated with normative and impaired reading development has implications for diagnosis, intervention, and prevention. We show that in typical readers, reading and IQ development are dynamically linked over time. Such mutual interrelationships are not perceptible in dyslexic readers, which suggests that reading and cognition develop more independently in these individuals. To our knowledge, these findings provide the first empirical demonstration of a coupling between cognition and reading in typical readers and a developmental uncoupling between cognition and reading in dyslexic readers. This uncoupling was the core concept of the initial description of dyslexia and remains the focus of the current definitional model of this learning disability.
A User's Guide for the Differential Reduced Ejector/Mixer Analysis "DREA" Program. 1.0
NASA Technical Reports Server (NTRS)
DeChant, Lawrence J.; Nadell, Shari-Beth
1999-01-01
A system of analytical and numerical two-dimensional mixer/ejector nozzle models that require minimal empirical input has been developed and programmed for use in conceptual and preliminary design. This report contains a user's guide describing the operation of the computer code, DREA (Differential Reduced Ejector/mixer Analysis), that contains these mathematical models. This program is currently being adopted by the Propulsion Systems Analysis Office at the NASA Glenn Research Center. A brief summary of the DREA method is provided, followed by detailed descriptions of the program input and output files. Sample cases demonstrating the application of the program are presented.
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.
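For contrast with the paper's theoretical formula, here is a sketch of the kind of semi-quantitative rule it improves upon; the half-maximum-wavelength near-field heuristic is a commonly quoted field rule, not the paper's derived result.

```python
# Heuristic: pick the nearest source-receiver offset as a multiple of the
# longest wavelength to be resolved, to avoid near-field contamination.
def min_source_offset(v_phase_max, f_min, factor=0.5):
    """v_phase_max: highest phase velocity of interest (m/s);
    f_min: lowest frequency of interest (Hz); factor: heuristic multiple
    of the maximum wavelength (0.5 is a widely quoted value)."""
    wavelength_max = v_phase_max / f_min
    return factor * wavelength_max

print(f"nearest offset >= {min_source_offset(400.0, 5.0):.0f} m")
```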
Jet-Surface Interaction - High Aspect Ratio Nozzle Test: Test Summary
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
The Jet-Surface Interaction High Aspect Ratio Nozzle Test was conducted in the Aero-Acoustic Propulsion Laboratory at the NASA Glenn Research Center in the fall of 2015. There were four primary goals specified for this test: (1) extend the current noise database for rectangular nozzles to higher aspect ratios, (2) verify data previously acquired at small scale with data from a larger model, (3) acquire jet-surface interaction noise data suitable for creating and verifying empirical noise models, and (4) investigate the effect of nozzle septa on the jet-mixing and jet-surface interaction noise. These slides give a summary of the test with representative results for each goal.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.
1975-01-01
An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.
Zeng, Liang; Proctor, Robert W; Salvendy, Gavriel
2011-06-01
This research is intended to empirically validate a general model of creative product and service development proposed in the literature. A current research gap inspired construction of a conceptual model to capture fundamental phases and pertinent facilitating metacognitive strategies in the creative design process. The model also depicts the mechanism by which design creativity affects consumer behavior. The validity and assets of this model have not yet been investigated. Four laboratory studies were conducted to demonstrate the value of the proposed cognitive phases and associated metacognitive strategies in the conceptual model. Realistic product and service design problems were used in creativity assessment to ensure ecological validity. Design creativity was enhanced by explicit problem analysis, whereby one formulates problems from different perspectives and at different levels of abstraction. Remote association in conceptual combination spawned more design creativity than did near association. Abstraction led to greater creativity in conducting conceptual expansion than did specificity, which induced mental fixation. Domain-specific knowledge and experience enhanced design creativity, indicating that design can be of a domain-specific nature. Design creativity added integrated value to products and services and positively influenced customer behavior. The validity and value of the proposed conceptual model is supported by empirical findings. The conceptual model of creative design could underpin future theory development. Propositions advanced in this article should provide insights and approaches to facilitate organizations pursuing product and service creativity to gain competitive advantage.
Selection of fire spread model for Russian fire behavior prediction system
Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov
2010-01-01
Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for the modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...
Carotene Degradation and Isomerization during Thermal Processing: A Review on the Kinetic Aspects.
Colle, Ines J P; Lemmens, Lien; Knockaert, Griet; Van Loey, Ann; Hendrickx, Marc
2016-08-17
Kinetic models are important tools for process design and optimization to balance desired and undesired reactions taking place in complex food systems during food processing and preservation. This review covers the state of the art of kinetic models available to describe the heat-induced conversion of carotenoids, in particular lycopene and β-carotene. First, relevant properties of these carotenoids are discussed. Second, some general aspects of kinetic modeling are introduced, including both empirical single-response modeling and mechanism-based multi-response modeling. The merits of multi-response modeling for simultaneously describing carotene degradation and isomerization are demonstrated. The future challenge in this research field lies in extending the current multi-response models to better approximate the real reaction pathway and in integrating kinetic models with mass transfer models in the case of reactions in multiphase food systems.
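A minimal multi-response sketch in the spirit described above; the rate constants are assumed, not fitted values. All-trans carotene isomerizes reversibly to a cis form while both degrade irreversibly, and in a real study the k's would be fit jointly to both measured responses.

```python
# Coupled first-order isomerization/degradation network solved as ODEs.
import numpy as np
from scipy.integrate import solve_ivp

k_iso, k_rev, k_deg = 0.05, 0.02, 0.01   # 1/min, illustrative at one T

def rates(t, y):
    trans, cis = y
    return [-(k_iso + k_deg) * trans + k_rev * cis,
            k_iso * trans - (k_rev + k_deg) * cis]

sol = solve_ivp(rates, (0, 120), [1.0, 0.0], t_eval=np.linspace(0, 120, 5))
for t, tr, ci in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.0f} min  trans={tr:.2f}  cis={ci:.2f}")
# Fitting the k's jointly to trans and cis data is the multi-response
# approach; a single-response first-order fit to total carotene would
# hide the isomerization pathway.
```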