Sample records for empirically derived relation

  1. Empirical ionization fractions in the winds and the determination of mass-loss rates for early-type stars

    NASA Technical Reports Server (NTRS)

    Lamers, H. J. G. L. M.; Gathier, R.; Snow, T. P.

    1980-01-01

From a study of the UV lines in the spectra of 25 stars from O4 to B1, empirical relations were derived between the mean density in the wind and the ionization fractions of O VI, N V, Si IV, and the excited C III (2p 3P0) level. Using these empirical relations, a simple relation was derived between the mass-loss rate and the column density of any of these four ions. This relation can be used for a simple determination of the mass-loss rate of stars from O4 to B1.

  2. Increasing Functional Communication in Non-Speaking Preschool Children: Comparison of PECS and VOCA

    ERIC Educational Resources Information Center

    Bock, Stacey Jones; Stoner, Julia B.; Beck, Ann R.; Hanley, Laurie; Prochnow, Jessica

    2005-01-01

    For individuals who have complex communication needs and for the interventionists who work with them, the collection of empirically derived data that support the use of an intervention approach is critical. The purposes of this study were to continue building an empirically derived base of support for, and to compare the relative effectiveness of…

  3. Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors

    NASA Technical Reports Server (NTRS)

    1991-01-01

The estimation of potato yields in the Columbia Basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach to yield estimation was used: (1) simulation models, and (2) purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes; in particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to some spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI (Global Vegetation Index) curve.

  4. Perspectives on empirical approaches for ocean color remote sensing of chlorophyll in a changing climate.

    PubMed

    Dierssen, Heidi M

    2010-10-05

    Phytoplankton biomass and productivity have been continuously monitored from ocean color satellites for over a decade. Yet, the most widely used empirical approach for estimating chlorophyll a (Chl) from satellites can be in error by a factor of 5 or more. Such variability is due to differences in absorption and backscattering properties of phytoplankton and related concentrations of colored-dissolved organic matter (CDOM) and minerals. The empirical algorithms have built-in assumptions that follow the basic precept of biological oceanography--namely, oligotrophic regions with low phytoplankton biomass are populated with small phytoplankton, whereas more productive regions contain larger bloom-forming phytoplankton. With a changing world ocean, phytoplankton composition may shift in response to altered environmental forcing, and CDOM and mineral concentrations may become uncoupled from phytoplankton stocks, creating further uncertainty and error in the empirical approaches. Hence, caution is warranted when using empirically derived Chl to infer climate-related changes in ocean biology. The Southern Ocean is already experiencing climatic shifts and shows substantial errors in satellite-derived Chl for different phytoplankton assemblages. Accurate global assessments of phytoplankton will require improved technology and modeling, enhanced field observations, and ongoing validation of our "eyes in space."

  5. Determining the non-inferiority margin for patient reported outcomes.

    PubMed

    Gerlinger, Christoph; Schmelter, Thomas

    2011-01-01

One of the cornerstones of any non-inferiority trial is the choice of the non-inferiority margin delta. This threshold of clinical relevance is very difficult to determine, and in practice, delta is often "negotiated" between the sponsor of the trial and the regulatory agencies. However, for patient-reported, or more precisely patient-observed, outcomes, the patients' minimal clinically important difference (MCID) can be determined empirically by relating the treatment effect, for example, a change on a 100-mm visual analogue scale, to the patient's satisfaction with the change. This MCID can then be used to define delta. We used an anchor-based approach with non-parametric discriminant analysis and ROC analysis, and a distribution-based approach with Norman's half standard deviation rule, to determine delta in three examples: endometriosis-related pelvic pain measured on a 100-mm visual analogue scale, facial acne measured by lesion counts, and hot flush counts. For each of these examples, all three methods yielded quite similar results. In two of the cases, the empirically derived MCIDs were smaller than or similar to the deltas used previously in non-inferiority trials, and in the third case, the empirically derived MCID was used to derive a responder definition that was accepted by the FDA. In conclusion, for patient-observed endpoints, the delta can be derived empirically. In our view, this is a better approach than asking the clinician for a "nice round number" for delta, such as 10, 50%, π, e, or i. Copyright © 2011 John Wiley & Sons, Ltd.
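
    The anchor-based ROC approach summarized above can be sketched as follows. The data, the binary satisfaction anchor, and the helper name `mcid_from_roc` are hypothetical illustrations under stated assumptions, not the study's actual data or code:

    ```python
    # Minimal sketch of an anchor-based ROC approach to estimating an MCID:
    # pick the change-score cutoff that best separates patients who report
    # satisfaction with their change (the "anchor") from those who do not.
    # All data and names below are illustrative only.

    def mcid_from_roc(changes, satisfied):
        """Return the change-score cutoff maximizing Youden's J = TPR - FPR."""
        pos = sum(satisfied)
        neg = len(satisfied) - pos
        best_j, best_cut = -1.0, None
        for cut in sorted(set(changes)):
            tp = sum(1 for c, s in zip(changes, satisfied) if s and c >= cut)
            fp = sum(1 for c, s in zip(changes, satisfied) if not s and c >= cut)
            j = tp / pos - fp / neg
            if j > best_j:
                best_j, best_cut = j, cut
        return best_cut

    # Hypothetical VAS pain reductions (mm) and a binary satisfaction anchor.
    changes   = [30, 25, 40, 35, 20, 5, 10, 15, 8, 12]
    satisfied = [1,  1,  1,  1,  1,  0, 0,  0,  0, 0]

    mcid = mcid_from_roc(changes, satisfied)
    print(mcid)  # the cutoff that separates these two groups perfectly is 20
    ```

    With real data the groups overlap, and the Youden-optimal cutoff trades sensitivity against specificity rather than achieving perfect separation.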

  6. Prediction of maximum earthquake intensities for the San Francisco Bay region

    USGS Publications Warehouse

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log(Distance in km). For sites on other geologic units, intensity increments derived with respect to this empirical relation correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log(AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
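
    As a rough illustration, the two empirical relations quoted in the abstract can be combined to predict a site intensity from fault distance and geologic unit. The coefficients are those reported above; the function names are illustrative:

    ```python
    import math

    # Baseline empirical 1906-intensity relation for Franciscan Formation
    # sites (from the abstract): Intensity = 2.69 - 1.90 * log10(distance_km).
    def franciscan_intensity(distance_km):
        return 2.69 - 1.90 * math.log10(distance_km)

    # Average intensity increments for the geologic units, derived with
    # respect to the baseline relation (values quoted in the abstract).
    INCREMENT = {
        "granite": -0.29,
        "Franciscan Formation": 0.19,
        "Great Valley Sequence": 0.64,
        "Santa Clara Formation": 0.82,
        "alluvium": 1.34,
        "bay mud": 2.43,
    }

    def predicted_intensity(distance_km, unit):
        """Baseline distance relation plus the unit's average increment."""
        return franciscan_intensity(distance_km) + INCREMENT[unit]

    # At 10 km the baseline term is 2.69 - 1.90 = 0.79; bay mud adds 2.43.
    print(round(predicted_intensity(10.0, "bay mud"), 2))  # 3.22
    ```

    The spread between granite (-0.29) and bay mud (+2.43) shows why site geology dominates the predicted maximum intensity map.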

  7. Empirical Development of an MMPI Subscale for the Assessment of Combat-Related Posttraumatic Stress Disorder.

    ERIC Educational Resources Information Center

    Keane, Terence M.; And Others

    1984-01-01

Developed empirically based criteria for use of the Minnesota Multiphasic Personality Inventory (MMPI) to aid in the assessment and diagnosis of Posttraumatic Stress Disorder (PTSD) in patients (N=200). Analysis based on an empirically derived decision rule correctly classified 74 percent of the patients in each group. (LLL)

  8. Potential relative increment (PRI): a new method to empirically derive optimal tree diameter growth

    Treesearch

    Don C Bragg

    2001-01-01

    Potential relative increment (PRI) is a new method to derive optimal diameter growth equations using inventory information from a large public database. Optimal growth equations for 24 species were developed using plot and tree records from several states (Michigan, Minnesota, and Wisconsin) of the North Central US. Most species were represented by thousands of...

  9. Pedagogising the University: On Higher Education Policy Implementation and Its Effects on Social Relations

    ERIC Educational Resources Information Center

    Stavrou, Sophia

    2016-01-01

    This paper aims at providing a theoretical and empirical discussion on the concept of pedagogisation which derives from the hypothesis of a new era of "totally pedagogised society" in Basil Bernstein's work. The article is based on empirical research on higher education policy, with a focus on the implementation of curriculum change…

  10. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
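
    A minimal sketch of the Kuramoto phase-oscillator model described above might look as follows. The network size, all-to-all coupling matrix, coupling strength, and Euler integration are illustrative assumptions, not the study's actual settings (which used a 66-node structural connectome):

    ```python
    import numpy as np

    # Kuramoto model: each node i is a phase oscillator coupled through an
    # adjacency matrix A (all-to-all here, as a stand-in for structural
    # connectivity): dtheta_i/dt = omega_i + (K/n) * sum_j A_ij * sin(theta_j - theta_i)
    rng = np.random.default_rng(0)
    n = 10
    A = np.ones((n, n)) - np.eye(n)          # illustrative all-to-all coupling
    omega = rng.normal(1.0, 0.1, n)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    K, dt = 0.5, 0.01

    for _ in range(5000):                    # simple Euler integration
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K / n * coupling)

    # Order parameter r in [0, 1]: r near 1 means the oscillators synchronize.
    r = abs(np.exp(1j * theta).mean())
    print(round(r, 3))
    ```

    Simulated functional connectivity is then typically computed from the correlations of the oscillators' time series, after which graph measures can be compared between empirical and simulated networks.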

  11. The Interface between Research on Individual Difference Variables and Teaching Practice: The Case of Cognitive Factors and Personality

    ERIC Educational Resources Information Center

    Biedron, Adriana; Pawlak, Miroslaw

    2016-01-01

    While a substantial body of empirical evidence has been accrued about the role of individual differences in second language acquisition, relatively little is still known about how factors of this kind can mediate the effects of instructional practices as well as how empirically-derived insights can inform foreign language pedagogy, both with…

  12. Relational frame theory: A new paradigm for the analysis of social behavior

    PubMed Central

    Roche, Bryan; Barnes-Holmes, Yvonne; Barnes-Holmes, Dermot; Stewart, Ian; O'Hora, Denis

    2002-01-01

    Recent developments in the analysis of derived relational responding, under the rubric of relational frame theory, have brought several complex language and cognitive phenomena within the empirical reach of the experimental analysis of behavior. The current paper provides an outline of relational frame theory as a new approach to the analysis of language, cognition, and complex behavior more generally. Relational frame theory, it is argued, also provides a suitable paradigm for the analysis of a wide variety of social behavior that is mediated by language. Recent empirical evidence and theoretical interpretations are provided in support of the relational frame approach to social behavior. PMID:22478379

  13. Empirical relations between large wood transport and catchment characteristics

    NASA Astrophysics Data System (ADS)

    Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre

    2017-04-01

The transport of vast amounts of large wood (LW) in watercourses can considerably aggravate hazardous situations during flood events, and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors that are difficult to assess, and the prediction of transported LW volumes is challenging. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The database underlying existing empirical wood load equations is, however, limited. The objective of the present study is to test and refine existing empirical equations, and to derive new relationships that reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics, such as catchment size, forested area, forested stream length, water discharge, sediment load, or Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (the effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models of potential LW recruitment.

  14. The Empirical Derivation of Equations for Predicting Subjective Textual Information. Final Report.

    ERIC Educational Resources Information Center

    Kauffman, Dan; And Others

    A study was made to derive an equation for predicting the "subjective" textual information contained in a text of material written in the English language. Specifically, this investigation describes, by a mathematical equation, the relationship between the "subjective" information content of written textual material and the relative number of…

  15. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.

  16. A possible closure relation for heat transport in the solar wind

    NASA Technical Reports Server (NTRS)

    Feldman, W. C.; Asbridge, J. R.; Bame, S. J.; Gosling, J. T.; Lemons, D. S.

    1979-01-01

The objective of the present paper is to search for an empirical closure relation for solar wind heat transport that applies on a microscopic scale. This task is approached by using the quasi-linear wave-particle formalism proposed by Perkins (1973) as a guide to derive an equation relating the relative drift speed between the core-electron and proton populations to local bulk flow conditions. The resulting relationship, containing one free parameter, is found to provide a good characterization of Los Alamos Imp electron data measured during the period from March 1971 through August 1974. An empirical closure relation is implied by this result because of the observed proportionality between heat flux and relative drift speed.

  17. Producing and Recognizing Analogical Relations

    ERIC Educational Resources Information Center

    Lipkens, Regina; Hayes, Steven C.

    2009-01-01

    Analogical reasoning is an important component of intelligent behavior, and a key test of any approach to human language and cognition. Only a limited amount of empirical work has been conducted from a behavior analytic point of view, most of that within Relational Frame Theory (RFT), which views analogy as a matter of deriving relations among…

  18. An Attempt to Derive the epsilon Equation from a Two-Point Closure

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Cheng, Y.; Howard, A. M.

    2010-01-01

The goal of this paper is to derive the equation for the turbulence dissipation rate epsilon for a shear-driven flow. In 1961, Davydov used a one-point closure model to derive the epsilon equation from first principles, but the final result contained undetermined terms and thus lacked predictive power. In 1987 and again in 2001, attempts were made to derive the epsilon equation from first principles using a two-point closure, but both methods relied on a phenomenological assumption. The standard practice has thus been to employ a heuristic form of the equation that contains three empirical ingredients: two constants, c(sub 1 epsilon) and c(sub 2 epsilon), and a diffusion term D(sub epsilon). In this work, a two-point closure is employed, yielding the following results: 1) the empirical constants are replaced by c(sub 1), c(sub 2), which are now functions of Kappa and epsilon; 2) c(sub 1) and c(sub 2) are not independent, because a general relation between the two, valid for any Kappa and epsilon, is derived; 3) c(sub 1), c(sub 2) become constant with values close to the empirical values c(sub 1 epsilon), c(sub 2 epsilon) (i.e., homogeneous flows); and 4) the empirical form of the diffusion term D(sub epsilon) is no longer needed, because it is substituted by the Kappa-epsilon dependence of c(sub 1), c(sub 2), which plays the role of the diffusion, together with the diffusion of the turbulent kinetic energy D(sub Kappa), which now enters the new equation (i.e., inhomogeneous flows). Thus, the three empirical ingredients c(sub 1 epsilon), c(sub 2 epsilon), D(sub epsilon) are replaced by a single function c(sub 1)(Kappa, epsilon) or c(sub 2)(Kappa, epsilon), plus a D(sub Kappa) term. Three tests of the new equation for epsilon are presented: one concerning channel flow and two concerning the shear-driven planetary boundary layer (PBL).

  19. Stellar Diameters and Temperatures. III. Main-sequence A, F, G, and K Stars: Additional High-precision Measurements and Empirical Relations

    NASA Astrophysics Data System (ADS)

    Boyajian, Tabetha S.; von Braun, Kaspar; van Belle, Gerard; Farrington, Chris; Schaefer, Gail; Jones, Jeremy; White, Russel; McAlister, Harold A.; ten Brummelaar, Theo A.; Ridgway, Stephen; Gies, Douglas; Sturmann, Laszlo; Sturmann, Judit; Turner, Nils H.; Goldfinger, P. J.; Vargas, Norm

    2013-07-01

Based on CHARA Array measurements, we present the angular diameters of 23 nearby, main-sequence stars, ranging from spectral types A7 to K0, 5 of which are exoplanet host stars. We derive linear radii, effective temperatures, and absolute luminosities of the stars using Hipparcos parallaxes and measured bolometric fluxes. The new data are combined with previously published values to create an Angular Diameter Anthology of measured angular diameters to main-sequence stars (luminosity classes V and IV). This compilation consists of 125 stars with diameter uncertainties of less than 5%, ranging in spectral type from A to M. The large quantity of empirical data is used to derive color-temperature relations for an assortment of color indices in the Johnson (B V R_J I_J J H K), Cousins (R_C I_C), Kron (R_K I_K), Sloan (griz), and WISE (W3 W4) photometric systems. These relations have an average standard deviation of ~3% and are valid for stars with spectral types A0-M4. To derive even more accurate relations for Sun-like stars, we also determined these temperature relations omitting early-type stars (T_eff > 6750 K) that may have biased luminosity estimates because of rapid rotation; for this subset the dispersion is only ~2.5%. We find effective temperatures in agreement within a couple of percent between the interferometrically characterized sample of main-sequence stars and those derived via the infrared flux method and spectroscopic analysis.

  20. An Empirical Human Controller Model for Preview Tracking Tasks.

    PubMed

    van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max

    2016-11-01

    Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.

  1. Integrated empirical ethics: loss of normativity?

    PubMed

    van der Scheer, Lieke; Widdershoven, Guy

    2004-01-01

    An important discussion in contemporary ethics concerns the relevance of empirical research for ethics. Specifically, two crucial questions pertain, respectively, to the possibility of inferring normative statements from descriptive statements, and to the danger of a loss of normativity if normative statements should be based on empirical research. Here we take part in the debate and defend integrated empirical ethical research: research in which normative guidelines are established on the basis of empirical research and in which the guidelines are empirically evaluated by focusing on observable consequences. We argue that in our concrete example normative statements are not derived from descriptive statements, but are developed within a process of reflection and dialogue that goes on within a specific praxis. Moreover, we show that the distinction in experience between the desirable and the undesirable precludes relativism. The normative guidelines so developed are both critical and normative: they help in choosing the right action and in evaluating that action. Finally, following Aristotle, we plead for a return to the view that morality and ethics are inherently related to one another, and for an acknowledgment of the fact that moral judgments have their origin in experience which is always related to historical and cultural circumstances.

  2. EFFECTIVE USE OF SEDIMENT QUALITY GUIDELINES: WHICH GUIDELINE IS RIGHT FOR ME?

    EPA Science Inventory

A bewildering array of sediment quality guidelines has been developed, but fortunately they mostly fall into two families: empirically derived and theoretically derived. The empirically derived guidelines use large databases of concurrent sediment chemistry and biological effe...

  3. Effects of Career-Related Continuous Learning: A Case Study

    ERIC Educational Resources Information Center

    Rowold, Jens; Hochholdinger, Sabine; Schilling, Jan

    2008-01-01

    Purpose: Although proposed from theory, the assumption that career-related continuous learning (CRCL) has a positive impact on subsequent job performance has not been tested empirically. The present study aims to close this gap in the literature. A model is derived from theory that predicts a positive impact of CRCL, learning climate, and initial…

  4. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.

  5. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    NASA Astrophysics Data System (ADS)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.

  6. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. 
Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
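
    The MCS procedure described above is straightforward to reproduce in any environment that supplies Gaussian pseudo-random numbers. The sketch below uses Python rather than a spreadsheet, and the functional relationship and numerical values are illustrative assumptions, not those of the article:

    ```python
    import numpy as np

    # Monte Carlo propagation of uncertainty through a functional relationship
    # y = a * x, where x is a measured quantity and a is an empirically derived
    # 'constant' carrying its own standard uncertainty (values illustrative).
    rng = np.random.default_rng(42)
    n = 200_000

    x = rng.normal(10.0, 0.1, n)   # measured input: mean 10.0, u(x) = 0.1
    a = rng.normal(2.0, 0.02, n)   # empirical constant: 2.0, u(a) = 0.02

    y = a * x
    u_mc = y.std(ddof=1)           # MCS estimate of the combined uncertainty

    # First-order GUM propagation for comparison:
    # u(y)^2 = x^2 * u(a)^2 + a^2 * u(x)^2 = 100*0.0004 + 4*0.01 = 0.08
    u_gum = (10.0**2 * 0.02**2 + 2.0**2 * 0.1**2) ** 0.5
    print(round(u_mc, 3), round(u_gum, 3))
    ```

    For this near-linear case the two estimates agree closely; the advantage of MCS is that it needs no partial derivatives and also yields the full output distribution, not just its standard deviation.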

  7. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. 
Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
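    The Monte Carlo step this record describes carries over directly from a spreadsheet to any scripting environment. A minimal sketch in Python rather than Excel (the measurand, the lambda, and all numerical values are illustrative, not taken from the article):

```python
import random
import statistics

def mcs_uncertainty(f, inputs, n=100_000, seed=1):
    """Propagate uncertainty through the functional relationship f by
    Monte Carlo simulation.  `inputs` is a list of (value, standard
    uncertainty) pairs, each sampled from a normal distribution -- the
    usual choice when IQC data supply the uncertainty estimates.
    Returns the mean and combined standard uncertainty of the output."""
    rng = random.Random(seed)
    outputs = [f(*(rng.gauss(mu, u) for mu, u in inputs))
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical measurand y = k * x1 / x2, where the empirically derived
# constant k carries its own standard uncertainty:
mean, u_c = mcs_uncertainty(lambda k, x1, x2: k * x1 / x2,
                            [(1.23, 0.02), (100.0, 2.0), (50.0, 1.0)])
```

    Because the constant k is sampled alongside the measured inputs, its uncertainty propagates into u_c exactly as the abstract requires, with no partial derivatives needed.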

  8. Regionalization of subsurface stormflow parameters of hydrologic models: Derivation from regional analysis of streamflow recession curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Sheng; Li, Hongyi; Huang, Maoyi

    2014-07-21

    Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of a physical basis in these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top–down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law storage–discharge relationship is closely related to catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field-scale soil hydraulic properties suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
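    The recession analysis behind such parameterizations is commonly cast as fitting the power law -dQ/dt = aQ^b to recession limbs by least squares in log-log space. A bare-bones sketch of that fitting step (a simplified Brutsaert–Nieber-style analysis; the synthetic series and parameter values are made up for illustration):

```python
import math

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b to a streamflow recession limb by ordinary
    least squares on log-transformed values, using the midpoint
    discharge of each time step as the abscissa."""
    xs, ys = [], []
    for q0, q1 in zip(q, q[1:]):
        dq = (q0 - q1) / dt            # -dQ/dt, positive on a recession
        if dq > 0:
            xs.append(math.log((q0 + q1) / 2))
            ys.append(math.log(dq))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic recession generated with a = 0.05, b = 1.5:
q, series = 10.0, []
for _ in range(60):
    series.append(q)
    q -= 0.05 * q ** 1.5
a, b = fit_recession(series)   # recovers roughly a ~ 0.05, b ~ 1.5
```

    The fitted coefficient a and exponent b are exactly the quantities the study then regresses against climate, soil, and topographic descriptors.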

  9. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    NASA Astrophysics Data System (ADS)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct. The locations in which equations are used should have comparable characteristics to the locations from which such equations have been derived. To overcome this barrier, in this work, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location without the need of adapting or using empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generates two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. 
The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical formulas.
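    The alignment step at the core of this method can be illustrated with classic dynamic time warping; DDTW, as used by PolyWaTT, applies the same recursion to local derivative estimates rather than raw values. A minimal sketch (the two level series below are invented, with the downstream pulse lagging the upstream one):

```python
def dtw(s, t, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping between two series, returning the
    accumulated alignment cost.  DDTW runs the same recursion on
    derivative estimates of s and t instead of the raw samples."""
    INF = float("inf")
    n, m = len(s), len(t)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(s[i - 1], t[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A downstream level series that lags the upstream one by two samples
# aligns at zero cost, which is how the warping path exposes travel time:
up   = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0]
down = [0, 0, 0, 0, 1, 3, 5, 3, 1, 0]
cost = dtw(up, down)
```

    In the full method, the offsets along the warping path (restricted to perceptually important points) supply the travel-time samples to which the polynomial estimator is fitted.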

  10. FACTORS AFFECTING DRY DEPOSITION OF SO2 ON FORESTS AND GRASSLANDS

    EPA Science Inventory

    Deposition velocities for SO2 over forests and grasslands are derived through a mass conservation approach using established empirical relations descriptive of the atmospheric transport of a gaseous contaminant above and within a vegetational canopy. Of particular interest are si...

  11. Using Landsat to provide potato production estimates to Columbia Basin farmers and processors

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A summary of project activities relative to the estimation of potato yields in the Columbia Basin is given. Oregon State University is using a two-pronged approach to yield estimation, one using simulation models and the other using purely empirical models. The simulation modeling approach has used satellite observations to determine key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Two empirical modeling approaches are illustrated. One relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI curve.
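    The second empirical approach amounts to regressing tuber yield on the season-long integral of the vegetation index. A sketch of the integration step (the sampling dates, GVI values, and any regression coefficients fitted afterwards are illustrative, not project data):

```python
def integral_under_curve(days, vi):
    """Trapezoidal integral of a vegetation-index time series over the
    growing season; the day-of-year sampling need not be uniform."""
    return sum((d1 - d0) * (v0 + v1) / 2
               for (d0, v0), (d1, v1) in zip(zip(days, vi),
                                             zip(days[1:], vi[1:])))

# Illustrative GVI samples across a season; a linear model
# yield = c0 + c1 * area would then be fitted to observed field yields.
days = [120, 150, 180, 210, 240, 270]
gvi  = [0.1, 0.4, 0.7, 0.8, 0.5, 0.2]
area = integral_under_curve(days, gvi)
```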

  12. On Allometry Relations

    NASA Astrophysics Data System (ADS)

    West, Damien; West, Bruce J.

    2012-07-01

    There are a substantial number of empirical relations that began with the identification of a pattern in data; were shown to have a terse power-law description; were interpreted using existing theory; reached the level of "law" and were given a name; only to subsequently fade away when it proved impossible to connect the "law" with a larger body of theory and/or data. Various forms of allometry relations (ARs) have followed this path. The ARs in biology are nearly two hundred years old, and those in ecology, geophysics, physiology and other areas of investigation are not that much younger. In general, if X is a measure of the size of a complex host network and Y is a property of a complex subnetwork embedded within the host network, a theoretical AR exists between the two when Y = aX^b. We emphasize that the reductionistic models of AR interpret X and Y as dynamic variables, albeit the ARs themselves are explicitly time independent even though in some cases the parameter values change over time. On the other hand, the phenomenological models of AR are based on the statistical analysis of data and interpret X and Y as averages to yield the empirical AR: ⟨Y⟩ = a⟨X⟩^b. Modern explanations of AR begin with the application of fractal geometry and fractal statistics to scaling phenomena. The detailed application of fractal geometry to the explanation of theoretical ARs in living networks is slightly more than a decade old, and although well received it has not been universally accepted. An alternate perspective is given by the empirical AR that is derived using linear regression analysis of fluctuating data sets. We emphasize that the theoretical and empirical ARs are not the same and review theories "explaining" AR from both the reductionist and statistical fractal perspectives. The probability calculus is used to systematically incorporate both views into a single modeling strategy. 
We conclude that the empirical AR is entailed by the scaling behavior of the probability density, which is derived using the probability calculus.
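    The linear-regression derivation of the empirical AR mentioned above is just ordinary least squares on log-transformed data. A minimal sketch (the synthetic data use a Kleiber-like exponent of 0.75 and lognormal scatter, purely for illustration):

```python
import math
import random

def fit_allometry(x, y):
    """Estimate a and b of the empirical allometry relation
    <Y> = a * <X>**b by least squares on log-transformed data."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Fluctuating data scattered around Y = 2 * X**0.75:
rng = random.Random(0)
xs = [10 ** rng.uniform(0, 4) for _ in range(500)]
ys = [2 * x ** 0.75 * math.exp(rng.gauss(0, 0.1)) for x in xs]
a, b = fit_allometry(xs, ys)   # b recovered near 0.75
```

    Note that this recovers the parameters of the empirical AR between averages, not those of the theoretical dynamic AR, which is precisely the distinction the abstract draws.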

  13. The Association Between Sexual Motives and Sexual Satisfaction: Gender Differences and Categorical Comparisons

    PubMed Central

    Ahrold, Tierney K.; Meston, Cindy M.

    2010-01-01

    Past research suggests that sexual satisfaction may be partially dependent on sexual motives (the reasons people have sex). The primary goal of this study was to determine which of a wide range of empirically derived sexual motives were related to sexual satisfaction, and whether gender differences existed in these relationships. Examining data from 544 undergraduate participants (93 men, 451 women), we found that certain types of motives predicted levels of sexual satisfaction for both genders. However, a greater number of motive categories were related to satisfaction for women than for men, and sexual motives were a more consistent predictor of satisfaction in general for women than for men. We also found that empirical categories of motives predicted more variance in satisfaction ratings than did previously used theoretical categories. These findings suggest that a wide range of sexual motives are related to sexual satisfaction, that these connections may be moderated by gender, and that empirically-constructed categories of motives may be the most effective tool for studying this link. PMID:20967494

  14. Patterns of Alcohol Use and Consequences Among Empirically Derived Sexual Minority Subgroups

    PubMed Central

    Talley, Amelia E.; Sher, Kenneth J.; Steinley, Douglas; Wood, Phillip K.; Littlefield, Andrew K.

    2012-01-01

    Objective: The current study develops an empirically determined classification of sexual orientation developmental patterns based on participants’ annual reports of self-identifications, sexual attractions, and sexual behaviors during the first 4 years of college. A secondary aim of the current work was to examine trajectories of alcohol involvement among identified subgroups. Method: Data were drawn from a subsample of a longitudinal study of incoming first-time college students at a large, public university (n = 2,068). Longitudinal latent class analysis was used to classify sexual minority participants into empirically derived subgroups based on three self-reported facets of sexual orientation. Multivariate repeated-measures analyses were conducted to examine how trajectories of alcohol involvement varied by sexual orientation class membership. Results: Four unique subclasses of sexual orientation developmental patterns were identified for males and females: one consistently exclusively heterosexual group and three sexual minority groups. Despite generally similar alcohol use patterns among subclasses, certain sexual minority subgroups reported elevated levels of alcohol-related negative consequences and maladaptive motivations for use throughout college compared with their exclusively heterosexual counterparts. Conclusions: Elevations in coping and conformity motivations for alcohol use were seen among those subgroups that also evidenced heightened negative alcohol-related consequences. Implications and limitations of the current work are discussed. PMID:22333337

  15. Universal properties of galactic rotation curves and a first principles derivation of the Tully-Fisher relation

    NASA Astrophysics Data System (ADS)

    O'Brien, James G.; Chiarelli, Thomas L.; Mannheim, Philip D.

    2018-07-01

    In a recent paper McGaugh, Lelli, and Schombert showed that in an empirical plot of the observed centripetal accelerations in spiral galaxies against those predicted by the Newtonian gravity of the luminous matter in those galaxies the data points occupied a remarkably narrow band. While one could summarize the mean properties of the band by drawing a single mean curve through it, by fitting the band with the illustrative conformal gravity theory with fits that fill out the width of the band we show here that the width of the band is just as physically significant. We show that at very low luminous Newtonian accelerations the plot can become independent of the luminous Newtonian contribution altogether, but still be non-trivial due to the contribution of matter outside of the galaxies (viz. the rest of the visible universe). We present a new empirical plot of the difference between the observed centripetal accelerations and the luminous Newtonian expectations as a function of distance from the centers of galaxies, and show that at distances greater than 10 kpc the plot also occupies a remarkably narrow band, one even close to constant. Using the conformal gravity theory we provide a first principles derivation of the empirical Tully-Fisher relation.

  16. Derivation of Einstein-Cartan theory from general relativity

    NASA Astrophysics Data System (ADS)

    Petti, Richard

    2015-04-01

    General relativity cannot describe exchange of classical intrinsic angular momentum and orbital angular momentum. Einstein-Cartan theory fixes this problem in the least invasive way. In the late 20th century, the consensus view was that Einstein-Cartan theory requires inclusion of torsion without adequate justification, it has no empirical support (though it doesn't conflict with any known evidence), it solves no important problem, and it complicates gravitational theory with no compensating benefit. In 1986 the author published a derivation of Einstein-Cartan theory from general relativity, with no additional assumptions or parameters. Starting without torsion, Poincaré symmetry, classical or quantum spin, or spinors, it derives torsion and its relation to spin from a continuum limit of general relativistic solutions. The present work makes the case that this computation, combined with supporting arguments, constitutes a derivation of Einstein-Cartan theory from general relativity, not just a plausibility argument. This paper adds more and simpler explanations, more computational details, correction of a factor of 2, discussion of limitations of the derivation, and discussion of some areas of gravitational research where Einstein-Cartan theory is relevant.

  17. Empirical conversion of the vertical profile of reflectivity from Ku-band to S-band frequency

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Hong, Yang; Qi, Youcun; Wen, Yixin; Zhang, Jian; Gourley, Jonathan J.; Liao, Liang

    2013-02-01

    This paper presents an empirical method for converting reflectivity from Ku-band (13.8 GHz) to S-band (2.8 GHz) for several hydrometeor species, which facilitates the incorporation of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements into quantitative precipitation estimation (QPE) products from the U.S. Next-Generation Radar (NEXRAD). The development of empirical dual-frequency relations is based on theoretical simulations, which have assumed appropriate scattering and microphysical models for liquid and solid hydrometeors (raindrops, snow, and ice/hail). Particle phase, shape, orientation, and density (especially for snow particles) have been considered in applying the T-matrix method to compute the scattering amplitudes. Gamma particle size distribution (PSD) is utilized to model the microphysical properties in the ice region, melting layer, and raining region of precipitating clouds. The variability of PSD parameters is considered to study the characteristics of dual-frequency reflectivity, especially the variations in radar dual-frequency ratio (DFR). The empirical relations between DFR and Ku-band reflectivity have been derived for particles in different regions within the vertical structure of precipitating clouds. The reflectivity conversion using the proposed empirical relations has been tested using real data collected by TRMM-PR and a prototype polarimetric WSR-88D (Weather Surveillance Radar 88 Doppler) radar, KOUN. The processing and analysis of collocated data demonstrate the validity of the proposed empirical relations and substantiate their practical significance for reflectivity conversion, which is essential to the TRMM-based vertical profile of reflectivity correction approach in improving NEXRAD-based QPE.
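    An empirical DFR relation of this kind reduces, in application, to adding a fitted polynomial in Ku-band reflectivity. A sketch of that conversion step (the polynomial coefficients below are placeholders, not the fits derived in the paper, which differ by hydrometeor region):

```python
def ku_to_s(z_ku, coeffs):
    """Convert Ku-band reflectivity (dBZ) to S-band by adding a
    polynomial dual-frequency ratio DFR(Z_Ku).  `coeffs` are the
    polynomial coefficients in increasing order of power; separate
    fits would apply to rain, snow, and melting-layer particles."""
    dfr = sum(c * z_ku ** i for i, c in enumerate(coeffs))
    return z_ku + dfr

# Hypothetical rain-region coefficients (constant, linear, quadratic):
z_s = ku_to_s(30.0, coeffs=(0.1, 0.01, 0.0005))
```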

  18. Policy trends and reforms in the German DRG-based hospital payment system.

    PubMed

    Klein-Hitpaß, Uwe; Scheller-Kreinsen, David

    2015-03-01

    A central structural point in all DRG-based hospital payment systems is the conversion of relative weights into actual payments. In this context policy makers need to address (amongst other things) (a) how the price level of DRG-payments is changed from one period to the next and (b) whether and how hospital payments based on DRGs are to be differentiated beyond patient characteristics, e.g. by organizational, regional or state-level factors. Both policy problems can be, and in international comparison often are, empirically addressed. In Germany relative weights are derived from a highly sophisticated empirical cost calculation, whereas the annual changes of DRG-based payments (base rates) as well as the differentiation of DRG-based hospital payments beyond patient characteristics are not empirically addressed. Rather, a complex set of regulations and quasi-market negotiations is applied. There were over the last decade also timid attempts to foster the use of empirical data to address these points. However, these reforms failed to increase the fairness, transparency and rationality of the mechanism to convert relative weights into actual DRG-based hospital payments. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Gravity Tides Extracted from Relative Gravimeter Data by Combining Empirical Mode Decomposition and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Hongjuan; Guo, Jinyun; Kong, Qiaoli; Chen, Xiaodong

    2018-04-01

    The static observation data from a relative gravimeter contain noise and signals such as gravity tides. This paper focuses on the extraction of the gravity tides from static relative gravimeter data, for the first time applying the combined method of empirical mode decomposition (EMD) and independent component analysis (ICA), called the EMD-ICA method. The experimental results from CG-5 gravimeter (SCINTREX Limited, Ontario, Canada) data show that the gravity tide time series derived by EMD-ICA are consistent with the theoretical reference (the Longman formula), and the RMS of their differences reaches only 4.4 μGal. The time series of the gravity tides derived by EMD-ICA have a strong correlation with the theoretical time series, with a correlation coefficient greater than 0.997. The accuracy of the gravity tides estimated by EMD-ICA is comparable to the theoretical model and is slightly higher than that of independent component analysis (ICA) alone. EMD-ICA could overcome the limitation of ICA having to process multiple observations, and slightly improves the extraction accuracy and reliability of gravity tides from relative gravimeter data compared to that estimated with ICA.

  20. A root-mean-square pressure fluctuations model for internal flow applications

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1985-01-01

    A transport equation for the root-mean-square pressure fluctuations of turbulent flow is derived from the time-dependent momentum equation for incompressible flow. Approximate modeling of this transport equation is included to relate terms with higher order correlations to the mean quantities of turbulent flow. Three empirical constants are introduced in the model. Two of the empirical constants are estimated from homogeneous turbulence data and wall pressure fluctuations measurements. The third constant is determined by comparing the results of large eddy simulations for a plane channel flow and an annulus flow.

  1. An Empirical Method for deriving RBE values associated with Electrons, Photons and Radionuclides

    DOE PAGES

    Bellamy, Michael B; Puskin, J.; Eckerman, Keith F.; ...

    2015-01-01

    There is substantial evidence to justify using relative biological effectiveness (RBE) values greater than one for low-energy electrons and photons. But, in the field of radiation protection, radiation associated with low linear energy transfer (LET) has been assigned a radiation weighting factor w_R of one. This value may be suitable for radiation protection but, for risk considerations, it is important to evaluate the potential elevated biological effectiveness of radiation to improve the quality of risk estimates. RBE values between 2 and 3 for tritium are implied by several experimental measurements. Additionally, elevated RBE values have been found for other similar low-energy radiation sources. In this work, RBE values are derived for electrons based upon the fractional deposition of absorbed dose of energies less than a few keV. Using this empirical method, RBE values were also derived for monoenergetic photons and 1070 radionuclides from ICRP Publication 107 for which photons and electrons are the primary emissions.
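    A method based on fractional dose deposition reduces, schematically, to a dose-weighted average of per-bin RBE values. A sketch of that combining step (the two-bin split, its dose fractions, and the bin RBE values are invented placeholders, not values derived in the paper):

```python
def effective_rbe(dose_fractions, bin_rbe):
    """Dose-fraction-weighted RBE for a radiation field: each energy
    bin contributes in proportion to its share of the absorbed dose."""
    total = sum(dose_fractions)
    return sum(f * r for f, r in zip(dose_fractions, bin_rbe)) / total

# Hypothetical electron spectrum: 30% of absorbed dose deposited below
# a few keV (elevated RBE), 70% at higher energies (RBE of one):
rbe = effective_rbe([0.3, 0.7], [2.5, 1.0])
```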

  2. Application of Stein and related parametric empirical Bayes estimators to the nuclear plant reliability data system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J.R.; Heger, A.S.; Koen, B.V.

    1984-04-01

    This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or through a parametric class of prior distributions.

  3. An Empirical Derivation of Hierarchies of Propositions Related to Ten of Piaget's Sixteen Binary Operations

    ERIC Educational Resources Information Center

    Benefield, K. Elaine; Capie, William

    1976-01-01

    A group of students from grades four through twelve were tested on ten binary operations in four truth conditions. It was found that propositional operations which had greater inclusiveness or breadth of concepts were more difficult to comprehend. (MLH)

  4. Mapping Perceptions of Lupus Medication Decision-Making Facilitators: The Importance of Patient Context.

    PubMed

    Qu, Haiyan; Shewchuk, Richard M; Alarcón, Graciela; Fraenkel, Liana; Leong, Amye; Dall'Era, Maria; Yazdany, Jinoos; Singh, Jasvinder A

    2016-12-01

    Numerous factors can impede or facilitate patients' medication decision-making and adherence to physicians' recommendations. Little is known about how patients and physicians jointly view issues that affect the decision-making process. Our objective was to derive an empirical framework of patient-identified facilitators to lupus medication decision-making from key stakeholders (including 15 physicians, 5 patients/patient advocates, and 8 medical professionals) using a patient-centered cognitive mapping approach. We used nominal group patient panels to identify facilitators to lupus treatment decision-making. Stakeholders independently sorted the identified facilitators (n = 98) based on their similarities and rated the importance of each facilitator in patient decision-making. Data were analyzed using multidimensional scaling and hierarchical cluster analysis. A cognitive map was derived that represents an empirical framework of facilitators for lupus treatment decisions from multiple stakeholders' perspectives. The facilitator clusters were 1) hope for a normal/healthy life, 2) understand benefits and effectiveness of taking medications, 3) desire to minimize side effects, 4) medication-related data, 5) medication effectiveness for "me," 6) family focus, 7) confidence in physician, 8) medication research, 9) reassurance about medication, and 10) medication economics. Consideration of how different stakeholders perceive the relative importance of lupus medication decision-making clusters is an important step toward improving patient-physician communication and effective shared decision-making. The empirically derived framework of medication decision-making facilitators can be used as a guide to develop a lupus decision aid that focuses on improving physician-patient communication. © 2016, American College of Rheumatology.
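    The hierarchical cluster analysis step in such cognitive mapping can be illustrated with a bare-bones single-linkage procedure on a precomputed distance matrix; in the study, distances reflected how rarely stakeholders co-sorted two facilitators. A sketch (the four-item toy matrix is invented):

```python
def single_linkage(dist, k):
    """Greedy single-linkage agglomerative clustering on a symmetric
    distance matrix, repeatedly merging the two closest clusters
    (closest = smallest pairwise item distance) until k remain."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return [sorted(c) for c in clusters]

# Four toy facilitators: items 0 and 1 are usually sorted together,
# as are items 2 and 3.
groups = single_linkage([[0, 1, 9, 8],
                         [1, 0, 9, 9],
                         [9, 9, 0, 2],
                         [8, 9, 2, 0]], k=2)
```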

  5. Derivation of occupational exposure levels (OELs) of low-toxicity isometric biopersistent particles: How can the kinetic lung overload paradigm be used for improved inhalation toxicity study design and OEL-derivation?

    PubMed

    Pauluhn, Jürgen

    2014-12-20

    Convincing evidence suggests that poorly soluble low-toxicity particles (PSP) exert two unifying major modes of action (MoA), one of which appears to be deposition-related and acute, whilst the other is retention-related and occurs with particle accumulation in the lung and associated persistent inflammation. Either MoA has its study- and cumulative dose-specific adverse outcome and metric. Modeling procedures were applied to better understand to what extent protocol variables may predetermine any specific outcome of a study. The results from modeled and empirical studies served as the basis for deriving OELs from modeled and empirically confirmed directions. This analysis demonstrates that the accumulated retained particle displacement volume was the most prominent unifying denominator linking the pulmonary retained volumetric particle dose to inflammogenicity and toxicity. However, conventional study design may not always be appropriate to unequivocally discriminate the surface thermodynamics-related acute adversity from the cumulative retention volume-related chronic adversity. Thus, in the absence of kinetically designed studies, it may become increasingly challenging to differentiate substance-specific deposition-related acute effects from the more chronic retained cumulative dose-related effects. It is concluded that the degree of dissolution of particles in the pulmonary environment is generally underestimated, and may contribute to toxicity through decreased particle size and the associated changes in the thermodynamics and kinetics of dissolution. Accordingly, acute deposition-related outcomes become an important secondary variable within the pulmonary microenvironment. In turn, lung-overload related chronic adversities seem to be better described by the particle volume metric. 
This analysis supports the concept that 'self-validating', hypothesis-based computational study design delivers the highest level of unifying information required for the risk characterization of PSP. In demonstrating that the PSP under consideration is truly following the generic PSP-paradigm, this higher level of mechanistic information reduces the potential uncertainty involved with OEL derivation.

  6. Writing System Variation and Its Consequences for Reading and Dyslexia

    ERIC Educational Resources Information Center

    Daniels, Peter T.; Share, David L.

    2018-01-01

    Most current theories of reading and dyslexia derive from a relatively narrow empirical base: research on English and a handful of other European alphabets. Furthermore, the two dominant theoretical frameworks for describing cross-script diversity--orthographic depth and psycholinguistic grain size theory--are also deeply entrenched in Anglophone…

  7. Maternal Ratings of Attention Problems in ADHD: Evidence for the Existence of a Continuum

    ERIC Educational Resources Information Center

    Lubke, Gitta H.; Hudziak, James J.; Derks, Eske M.; van Bijsterveldt, Toos C. E. M.; Boomsma, Dorret I.

    2009-01-01

    Objective: To investigate whether items assessing attention problems provide evidence of quantitative differences or categorically distinct subtypes of attention problems (APs) and to investigate the relation of empirically derived latent classes to "DSM-IV" diagnoses of subtypes of attention-deficit/hyperactivity disorder (ADHD), for…

  8. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  9. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Overall, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  10. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  11. Increasing Chemical Space Coverage by Combining Empirical and Computational Fragment Screens

    PubMed Central

    2015-01-01

Most libraries for fragment-based drug discovery are restricted to 1,000–10,000 compounds, but over 500,000 fragments are commercially available and potentially accessible by virtual screening. Whether this larger set would increase chemotype coverage, and whether a computational screen can pragmatically prioritize them, is debated. To investigate this question, a 1281-fragment library was screened by nuclear magnetic resonance (NMR) against AmpC β-lactamase, and hits were confirmed by surface plasmon resonance (SPR). Nine hits with novel chemotypes were confirmed biochemically with KI values from 0.2 to low mM. We also computationally docked 290,000 purchasable fragments with chemotypes unrepresented in the empirical library, finding 10 that had KI values from 0.03 to low mM. Though less novel than those discovered by NMR, the docking-derived fragments filled chemotype holes in the empirical library. Crystal structures of nine of the fragments in complex with AmpC β-lactamase revealed new binding sites and explained the relatively high affinity of the docking-derived fragments. The existence of chemotype holes is likely a general feature of fragment libraries, as calculation suggests that representing the fragment substructures of even known biogenic molecules would demand a library of at least 32,000 fragments. Combining computational and empirical fragment screens enables the discovery of unexpected chemotypes, here by the NMR screen, while capturing chemotypes missing from the empirical library and tailored to the target, with little extra cost in resources. PMID:24807704

  12. Towards a unified understanding of event-related changes in the EEG: the firefly model of synchronization through cross-frequency phase modulation.

    PubMed

    Burgess, Adrian P

    2012-01-01

    Although event-related potentials (ERPs) are widely used to study sensory, perceptual and cognitive processes, it remains unknown whether they are phase-locked signals superimposed upon the ongoing electroencephalogram (EEG) or result from phase-alignment of the EEG. Previous attempts to discriminate between these hypotheses have been unsuccessful but here a new test is presented based on the prediction that ERPs generated by phase-alignment will be associated with event-related changes in frequency whereas evoked-ERPs will not. Using empirical mode decomposition (EMD), which allows measurement of narrow-band changes in the EEG without predefining frequency bands, evidence was found for transient frequency slowing in recognition memory ERPs but not in simulated data derived from the evoked model. Furthermore, the timing of phase-alignment was frequency dependent with the earliest alignment occurring at high frequencies. Based on these findings, the Firefly model was developed, which proposes that both evoked and induced power changes derive from frequency-dependent phase-alignment of the ongoing EEG. Simulated data derived from the Firefly model provided a close match with empirical data and the model was able to account for i) the shape and timing of ERPs at different scalp sites, ii) the event-related desynchronization in alpha and synchronization in theta, and iii) changes in the power density spectrum from the pre-stimulus baseline to the post-stimulus period. The Firefly Model, therefore, provides not only a unifying account of event-related changes in the EEG but also a possible mechanism for cross-frequency information processing.

  13. Towards a Unified Understanding of Event-Related Changes in the EEG: The Firefly Model of Synchronization through Cross-Frequency Phase Modulation

    PubMed Central

    Burgess, Adrian P.

    2012-01-01

    Although event-related potentials (ERPs) are widely used to study sensory, perceptual and cognitive processes, it remains unknown whether they are phase-locked signals superimposed upon the ongoing electroencephalogram (EEG) or result from phase-alignment of the EEG. Previous attempts to discriminate between these hypotheses have been unsuccessful but here a new test is presented based on the prediction that ERPs generated by phase-alignment will be associated with event-related changes in frequency whereas evoked-ERPs will not. Using empirical mode decomposition (EMD), which allows measurement of narrow-band changes in the EEG without predefining frequency bands, evidence was found for transient frequency slowing in recognition memory ERPs but not in simulated data derived from the evoked model. Furthermore, the timing of phase-alignment was frequency dependent with the earliest alignment occurring at high frequencies. Based on these findings, the Firefly model was developed, which proposes that both evoked and induced power changes derive from frequency-dependent phase-alignment of the ongoing EEG. Simulated data derived from the Firefly model provided a close match with empirical data and the model was able to account for i) the shape and timing of ERPs at different scalp sites, ii) the event-related desynchronization in alpha and synchronization in theta, and iii) changes in the power density spectrum from the pre-stimulus baseline to the post-stimulus period. The Firefly Model, therefore, provides not only a unifying account of event-related changes in the EEG but also a possible mechanism for cross-frequency information processing. PMID:23049827

  14. Predicting Individual Tree and Shrub Species Distributions with Empirically Derived Microclimate Surfaces in a Complex Mountain Ecosystem in Northern Idaho, USA

    NASA Astrophysics Data System (ADS)

    Holden, Z.; Cushman, S.; Evans, J.; Littell, J. S.

    2009-12-01

The resolution of current climate interpolation models limits our ability to adequately account for temperature variability in complex mountainous terrain. We empirically derive 30 meter resolution models of June-October day and nighttime temperature and April nighttime vapor pressure deficit (VPD) using hourly data from 53 Hobo dataloggers stratified by topographic setting in mixed conifer forests near Bonners Ferry, ID. 66% of the variability in average June-October daytime temperature is explained by three variables (elevation, relative slope position, and topographic roughness) derived from 30 meter digital elevation models. 69% of the variability in nighttime temperatures among stations is explained by elevation, relative slope position, and topographic dissection (450 meter window). 54% of the variability in April nighttime VPD is explained by elevation, soil wetness, and the NDVIc derived from Landsat. We extract temperature and VPD predictions at 411 intensified Forest Inventory and Analysis (FIA) plots. We use these variables with soil wetness and solar radiation indices derived from a 30 meter DEM to predict the presence and absence of 10 common forest tree species and 25 shrub species. Classification accuracies range from 87% for Pinus ponderosa to > 97% for most other tree species. Shrub model accuracies are also high, with greater than 90% accuracy for the majority of species. Species distribution models based on the physical variables that drive species occurrence, rather than their topographic surrogates, will eventually allow us to predict potential future distributions of these species with warming climate at fine spatial scales.

  15. Empirically Derived Profiles of Teacher Stress, Burnout, Self-Efficacy, and Coping and Associated Student Outcomes

    ERIC Educational Resources Information Center

    Herman, Keith C.; Hickmon-Rosa, Jal'et; Reinke, Wendy M.

    2018-01-01

    Understanding how teacher stress, burnout, coping, and self-efficacy are interrelated can inform preventive and intervention efforts to support teachers. In this study, we explored these constructs to determine their relation to student outcomes, including disruptive behaviors and academic achievement. Participants in this study were 121 teachers…

  16. Introducing Scale Analysis by Way of a Pendulum

    ERIC Educational Resources Information Center

    Lira, Ignacio

    2007-01-01

    Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…

  17. Empirically Derived Optimal Growth Equations For Hardwoods and Softwoods in Arkansas

    Treesearch

    Don C. Bragg

    2002-01-01

Accurate growth projections are critical to reliable forest models, and ecologically based simulators can improve silvicultural predictions because of their sensitivity to change and their capacity to produce long-term forecasts. Potential relative increment (PRI) optimal diameter growth equations for loblolly pine, shortleaf pine, sweetgum, and white oak were fit to...

  18. Tracking Real-Time Neural Activation of Conceptual Knowledge Using Single-Trial Event-Related Potentials

    ERIC Educational Resources Information Center

    Amsel, Ben D.

    2011-01-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed…

  19. Globalisation, Globalism and Cosmopolitanism as an Educational Ideal

    ERIC Educational Resources Information Center

    Papastephanou, Marianna

    2005-01-01

    In this paper, I discuss globalisation as an empirical reality that is in a complex relation to its corresponding discourse and in a critical distance from the cosmopolitan ideal. I argue that failure to grasp the distinctions between globalisation, globalism, and cosmopolitanism derives from mistaken identifications of the Is with the Ought and…

  20. Study of galaxies in the Lynx-Cancer void - VII. New oxygen abundances

    NASA Astrophysics Data System (ADS)

    Pustilnik, S. A.; Perepelitsyna, Y. A.; Kniazev, A. Y.

    2016-11-01

We present new or improved oxygen abundances (O/H) for the nearby Lynx-Cancer void updated galaxy sample. They are obtained via SAO 6-m telescope spectroscopy (25 objects), or derived from Sloan Digital Sky Survey spectra (14 galaxies, of which seven had previously unknown O/H values). For eight galaxies with the detected [O III] λ4363 line, O/H values are derived via the direct (Te) method. For the remaining objects, O/H was estimated via semi-empirical and empirical methods. For all accumulated O/H data for 81 galaxies of this void (with 40 of them derived via the Te method), their relation `O/H versus MB' is compared with that for similar late-type galaxies from denser environments (the Local Volume `reference sample'). We confirm our previous conclusion derived for a subsample of 48 objects: void galaxies show systematically reduced O/H for the same luminosity with respect to the reference sample, on average by 0.2 dex, or by a factor of ~1.6. Moreover, we confirm the fraction of ~20 per cent of strong outliers, with O/H two to four times lower than the typical values for the `reference' sample. The new data are consistent with the conclusion on the slower evolution of the main void galaxy population. We obtained the Hα velocity for the faint optical counterpart of the most gas-rich (M(H I)/LB = 25) void object J0723+3624, confirming its connection with the respective H I blob. For the similar extremely gas-rich dwarf J0706+3020, we give a tentative O/H ~ (O/H)⊙/45. In Appendix A, we present the results of calibration of the semi-empirical method by Izotov & Thuan and of the empirical calibrators by Pilyugin & Thuan and Yin et al. on a sample of ~150 galaxies from the literature with O/H measured by the Te method.

  1. The observational and empirical thermospheric CO2 and NO power do not exhibit power-law behavior; an indication of their reliability

    NASA Astrophysics Data System (ADS)

    Varotsos, C. A.; Efstathiou, M. N.

    2018-03-01

In this paper we investigate the evolution of the energy emitted by CO2 and NO from the Earth's thermosphere on a global scale using both observational and empirically derived data. We first analyze the daily power observations of CO2 and NO received from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) equipment on the NASA Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite for the entire period 2002-2016. We then perform the same analysis on the empirical daily power emitted by CO2 and NO that was derived recently from the infrared energy budget of the thermosphere during 1947-2016. The tool used for the analysis of both the observational and empirical datasets is detrended fluctuation analysis, which tests whether the power emitted by CO2 and by NO from the thermosphere exhibits power-law behavior. The results obtained from both observational and empirical data do not support power-law behavior. This conclusion shows that the empirically derived data are characterized by the same intrinsic properties as the observational ones, thus supporting their reliability.
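    Detrended fluctuation analysis, the tool applied to both datasets above, can be sketched generically (order-1 detrending; an illustrative implementation, not the authors' code):

```python
import numpy as np

def dfa(x, scales):
    """Order-1 detrended fluctuation analysis.

    Returns the fluctuation function F(n) for each window size n.
    Power-law behavior F(n) ~ n**alpha appears as a straight line in
    log-log space; its absence is what the study reports.
    """
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        sq = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# Sanity check on synthetic data: white noise gives alpha near 0.5.
rng = np.random.default_rng(0)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(rng.standard_normal(10_000), scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```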

  2. Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

    ERIC Educational Resources Information Center

    Christie, Christina A.

    2011-01-01

    Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…

  3. An empirical method for deriving RBE values associated with electrons, photons and radionuclides.

    PubMed

    Bellamy, M; Puskin, J; Hertel, N; Eckerman, K

    2015-12-01

    There is substantial evidence to justify using relative biological effectiveness (RBE) values of >1 for low-energy electrons and photons. But, in the field of radiation protection, radiation associated with low linear energy transfer has been assigned a radiation weighting factor wR of 1. This value may be suitable for radiation protection but, for risk considerations, it is important to evaluate the potential elevated biological effectiveness of radiation to improve the quality of risk estimates. RBE values between 2 and 3 for tritium are implied by several experimental measurements. Additionally, elevated RBE values have been found for other similar low-energy radiation sources. In this work, RBE values are derived for electrons based upon the fractional deposition of absorbed dose of energies less than a few kiloelectron volts. Using this empirical method, RBE values were also derived for monoenergetic photons and 1070 radionuclides from ICRP Publication 107 for which photons and electrons are the primary emissions. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. Semi-empirical model for retrieval of soil moisture using RISAT-1 C-Band SAR data over a sub-tropical semi-arid area of Rewari district, Haryana (India)

    NASA Astrophysics Data System (ADS)

    Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.

    2018-03-01

We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°RH), the difference of the circular vertical and horizontal backscattering coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height (RMS_height). We examined the performance of FRS-1 in retrieving SM under a wheat crop at the tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data, rather than using an existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived from a 5.35 GHz (C-band) image of RISAT-1 and to RMS_height. The roughness component, expressed as RMS_height, showed a good positive correlation with σ°RV − σ°RH (R² = 0.65). By considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMS_height), an SEM was developed in which predicted volumetric SM depends on σ°RH, σ°RV − σ°RH, and RMS_height. This SEM showed R² of 0.87, adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈ 1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S_d²) = 0.004. The developed SEM performed better in estimating SM than the Topp empirical model, which is based only on σ°. By using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.
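    A least-squares fit of the three-predictor model described above, with RMSE and NSE computed as in the validation, can be sketched as follows; the coefficients and synthetic data are purely illustrative, not the paper's values:

```python
import numpy as np

def fit_sem(sigma_rh, sigma_diff, rms_height, sm_obs):
    """Fit SM = a0 + a1*sigma0_RH + a2*(sigma0_RV - sigma0_RH) + a3*RMS_height
    by ordinary least squares; returns coefficients and fitted values."""
    X = np.column_stack([np.ones_like(sigma_rh), sigma_rh, sigma_diff, rms_height])
    coef, *_ = np.linalg.lstsq(X, sm_obs, rcond=None)
    return coef, X @ coef

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def nse(obs, pred):
    """Nash-Sutcliffe efficiency; 1 means a perfect match."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Synthetic demonstration data (dB for backscatter, cm for roughness).
rng = np.random.default_rng(1)
s_rh = rng.uniform(-20.0, -10.0, 40)
s_diff = rng.uniform(1.0, 6.0, 40)
r_h = rng.uniform(0.5, 2.5, 40)
sm = 0.4 + 0.01 * s_rh + 0.02 * s_diff + 0.03 * r_h + rng.normal(0.0, 0.01, 40)
coef, pred = fit_sem(s_rh, s_diff, r_h, sm)
```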

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorin Zaharia; C.Z. Cheng

In this paper, we study whether the magnetic field of the T96 empirical model can be in force balance with an isotropic plasma pressure distribution. Using the field of T96, we obtain values for the pressure P by solving a Poisson-type equation ∇²P = ∇ · (J × B) in the equatorial plane, and 1-D profiles on the Sun-Earth axis by integrating ∇P = J × B. We work in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. Our results lead to the conclusion that the T96 model field cannot be in equilibrium with an isotropic pressure. We also analyze in detail the computation of Birkeland currents using the Vasyliunas relation and the T96 field, which yields unphysical results, again indicating the lack of force balance in the empirical model. The underlying reason for the force imbalance is likely the fact that the derivatives of the least-square fitted model B are not accurate predictions of the actual magnetospheric field derivatives. Finally, we discuss a possible solution to the problem of lack of force balance in empirical field models.
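    The 1-D profile step, integrating ∇P = J × B along the Sun-Earth axis, amounts to cumulative quadrature of the force density; a minimal sketch with a synthetic profile (in the paper this quantity comes from the T96 field):

```python
import numpy as np

def integrate_pressure(x, force_density, p_boundary):
    """Integrate dP/dx = f(x) from the boundary value at x[0],
    using the cumulative trapezoidal rule."""
    increments = 0.5 * (force_density[1:] + force_density[:-1]) * np.diff(x)
    return p_boundary + np.concatenate(([0.0], np.cumsum(increments)))

# Synthetic example: constant force density 2 gives a linear pressure profile.
x = np.linspace(0.0, 1.0, 101)
f = np.full_like(x, 2.0)
P = integrate_pressure(x, f, 1.0)  # P(x) = 1 + 2x
```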

  6. Deriving Multidimensional Poverty Indicators: Methodological Issues and an Empirical Analysis for Italy

    ERIC Educational Resources Information Center

    Coromaldi, Manuela; Zoli, Mariangela

    2012-01-01

    Theoretical and empirical studies have recently adopted a multidimensional concept of poverty. There is considerable debate about the most appropriate degree of multidimensionality to retain in the analysis. In this work we add to the received literature in two ways. First, we derive indicators of multiple deprivation by applying a particular…

  7. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling.

    PubMed

    Koopmeiners, Joseph S; Feng, Ziding

    2011-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
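    The empirical ROC curve and its area can be computed directly from case and control samples; a generic fixed-sample sketch, not the sequential machinery analyzed in the paper:

```python
import numpy as np

def empirical_roc(cases, controls):
    """Empirical ROC points: for each observed threshold c,
    FPR = P(control > c) and TPR = P(case > c)."""
    thresholds = np.unique(np.concatenate([cases, controls]))
    fpr = np.array([(controls > c).mean() for c in thresholds])
    tpr = np.array([(cases > c).mean() for c in thresholds])
    return fpr, tpr

def empirical_auc(cases, controls):
    """Mann-Whitney form of the AUC: P(case > control), ties counted half."""
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Toy biomarker values: 8 clear wins and 1 tie out of 9 pairs.
cases = np.array([2.0, 3.0, 4.0])
controls = np.array([0.0, 1.0, 2.0])
fpr, tpr = empirical_roc(cases, controls)
auc = empirical_auc(cases, controls)  # 8.5/9 for this toy data
```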

  8. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2013-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313

  9. Exploratory and Confirmatory Factor Analyses of the Structured Interview for Disorders of Extreme Stress

    ERIC Educational Resources Information Center

    Scoboria, Alan; Ford, Julian; Lin, Hsiu-ju; Frisman, Linda

    2008-01-01

    Two studies were conducted to provide the first empirical examination of the factor structure of a revised version of the clinically derived Structured Interview for Disorders of Extreme Stress, a structured interview designed to assess associated features of posttraumatic stress disorder (PTSD) thought to be related to early onset, interpersonal,…

  10. Studying the Value of Library and Information Services: A Taxonomy of Users Assessments.

    ERIC Educational Resources Information Center

    Kantor, Paul B.; Saracevic, Tefko

    1995-01-01

    Describes the development of a taxonomy of the value of library services based on users' assessments from five large research libraries. Highlights include empirical and derived taxonomy, replicability of the study, reasons for using the library, how library services are related to time and money, and a theory of value. (LRW)

  11. The Teaching-Research Gestalt: The Development of a Discipline-Based Scale

    ERIC Educational Resources Information Center

    Duff, Angus; Marriott, Neil

    2017-01-01

    This paper reports the development and empirical testing of a model of the factors that influence the teaching-research nexus. No prior work has attempted to create a measurement model of the nexus. The conceptual model is derived from 19 propositions grouped into four sets of factors relating to: rewards, researchers, curriculum, and students.…

  12. Profiles of Social and Coping Resources in Families of Children with Autism Spectrum Disorder: Relations to Parent and Child Outcomes

    ERIC Educational Resources Information Center

    Zaidman-Zait, Anat; Mirenda, Pat; Szatmari, Peter; Duku, Eric; Smith, Isabel M.; Vaillancourt, Tracy; Volden, Joanne; Waddell, Charlotte; Bennett, Teresa; Zwaigenbaum, Lonnie; Elsabaggh, Mayada; Georgiades, Stelios

    2018-01-01

This study described empirically derived profiles of parents' personal and social coping resources in a sample of 207 families of children diagnosed with autism spectrum disorder. Latent Profile Analysis identified four family profiles based on socioeconomic risk, coping strategy utilization, family functioning, available social supports, and…

  13. Association of a Mixed Anxiety-Depression Syndrome and Symptoms of Major Depressive Disorder during Adolescence.

    ERIC Educational Resources Information Center

    Gerhardt, Cynthia A.; Compas, Bruce E.; Connor, Jennifer K.; Achenbach, Thomas M.

    1999-01-01

    Examined the relations between an empirically derived syndrome of Anxiety-Depression and an analogue measure of Major Depressive Disorder (MDD) in a longitudinal study of a nationally representative sample of 3,154 adolescents. Analyses indicated moderate correspondence between scores on the syndrome and symptoms of the MDD analogue. (SLD)

  14. Latent Personality Profiles and the Relations with Psychopathology and Psychopathic Traits in Detained Adolescents

    ERIC Educational Resources Information Center

    Decuyper, Mieke; Colins, Olivier F.; De Clercq, Barbara; Vermeiren, Robert; Broekaert, Eric; Bijttebier, Patricia; Roose, Annelore; De Fruyt, Filip

    2013-01-01

    The present study constructed empirically derived subtypes of adolescent offenders based on general traits and examined their associations with psychopathology and psychopathic traits. The sample included 342 detained minors (172 boys and 170 girls; mean age 15.85 years, SD = 1.07) recruited in various Youth Detention Centers across the Flemish…

  15. Comment on "Classification of aerosol properties derived from AERONET direct sun data" by Gobbi et al. (2007)

    NASA Astrophysics Data System (ADS)

    O'Neill, N. T.

    2010-10-01

It is pointed out that the graphical aerosol classification method of Gobbi et al. (2007) can be interpreted as a manifestation of fundamental analytical relations whose existence depends on the simple assumption that the optical effects of aerosols are essentially bimodal in nature. The families of contour lines in their "Ada" curvature space are essentially empirical and discretized illustrations of analytical parabolic forms in (α, α') space (the space formed by the continuously differentiable Angstrom exponent and its spectral derivative).

  16. Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.

    PubMed

    Gamsjäger, Ernst; Wiessner, Manfred

    2018-01-01

Thermodynamic data of various crystalline solids are assessed from low temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients, resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients keep their meaning with respect to physics. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K, so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description of data at ultra-low temperatures (0-20 K) is requested.
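    The Debye-Einstein description above evaluates the heat capacity as a weighted sum of a Debye integral and Einstein terms; a minimal evaluation sketch assuming SciPy, with illustrative weights (a_d, a_e) and an entropy helper showing how S(T) follows from the fitted curve:

```python
import numpy as np
from scipy.integrate import quad

R = 8.314462618  # molar gas constant, J/(mol K)

def debye_cv(T, theta_d):
    """Debye heat capacity (per mole of atoms) at temperature T."""
    u = theta_d / T
    integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 0.0, u)
    return 9.0 * R * integral / u**3

def einstein_cv(T, theta_e):
    """Einstein heat capacity (per mole of atoms) at temperature T."""
    u = theta_e / T
    return 3.0 * R * u**2 * np.exp(u) / np.expm1(u)**2

def model_cp(T, a_d, theta_d, a_e, theta_e):
    """Weighted Debye + Einstein combination; weights are illustrative."""
    return a_d * debye_cv(T, theta_d) + a_e * einstein_cv(T, theta_e)

def entropy(T, *params):
    """Standard entropy S(T) = integral of Cp/T from ~0 to T."""
    val, _ = quad(lambda t: model_cp(t, *params) / t, 1e-3, T, limit=200)
    return val

# Both terms recover the Dulong-Petit limit 3R at high temperature.
```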

  17. Interaction of Hurricane Katrina with Optically Complex Water in the Gulf of Mexico: Interpretation Using Satellite-Derived Inherent Optical Properties and Chlorophyll Concentration

    DTIC Science & Technology

    2009-04-01

Shelf, and into the Gulf of Mexico, empirically derived chl a increases were observed in the Tortugas Gyre circulation feature, and in adjacent waters. ...hurricane interaction also influenced the Tortugas Gyre, a recognized circulation feature in the southern Gulf of Mexico induced by the flow of the

  18. An Empirically Derived Taxonomy for Personality Diagnosis: Bridging Science and Practice in Conceptualizing Personality

    PubMed Central

    Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.

    2013-01-01

    Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534

  19. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground-motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong-motion records obtained in Japan. In the derivation, statistical considerations are used in the selection of both the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.

  20. Examining the Stability of "DSM-IV" and Empirically Derived Eating Disorder Classification: Implications for "DSM-5"

    ERIC Educational Resources Information Center

    Peterson, Carol B.; Crow, Scott J.; Swanson, Sonja A.; Crosby, Ross D.; Wonderlich, Stephen A.; Mitchell, James E.; Agras, W. Stewart; Halmi, Katherine A.

    2011-01-01

    Objective: The purpose of this investigation was to derive an empirical classification of eating disorder symptoms in a heterogeneous eating disorder sample using latent class analysis (LCA) and to examine the longitudinal stability of these latent classes (LCs) and the stability of DSM-IV eating disorder (ED) diagnoses. Method: A total of 429…

  1. Student Response to Faculty Instruction (SRFI): An Empirically Derived Instrument to Measure Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Beitzel, Brian D.

    2013-01-01

    The Student Response to Faculty Instruction (SRFI) is an instrument designed to measure the student perspective on courses in higher education. The SRFI was derived from decades of empirical studies of student evaluations of teaching. This article describes the development of the SRFI and its psychometric attributes demonstrated in two pilot study…

  2. The Rayleigh-Taylor instability in a self-gravitating two-layer viscous sphere

    NASA Astrophysics Data System (ADS)

    Mondal, Puskar; Korenaga, Jun

    2018-03-01

The dispersion relation of the Rayleigh-Taylor instability in spherical geometry is of profound importance in the context of the Earth's core formation. Here we present a complete derivation of this dispersion relation for a self-gravitating two-layer viscous sphere. Such a relation is, however, obtained through the solution of a complex transcendental equation, and it is difficult to gain physical insight directly from the transcendental equation itself. We thus also derive an empirical formula to compute the growth rate, by combining Monte Carlo sampling of the relevant model parameter space with linear regression. Our analysis indicates that the growth rate of the Rayleigh-Taylor instability is most sensitive to the viscosity of the inner layer in a physical setting that is most relevant to core formation.
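
The empirical-formula procedure this abstract describes (Monte Carlo sampling of the parameter space followed by linear regression) can be sketched as follows. The stand-in growth-rate scaling and parameter ranges are assumptions made for illustration, not the solution of the actual transcendental equation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Log-uniform Monte Carlo samples of model parameters (illustrative ranges)
log_mu_in  = rng.uniform(18, 22, n)   # log10 inner-layer viscosity (Pa s)
log_mu_out = rng.uniform(19, 21, n)   # log10 outer-layer viscosity (Pa s)
log_drho   = rng.uniform(2, 3, n)     # log10 density contrast (kg/m^3)

# Stand-in growth rate: a Stokes-like power law, NOT the paper's solution
log_sigma = log_drho - 0.9 * log_mu_in - 0.1 * log_mu_out + rng.normal(0, 0.02, n)

# Linear regression of log growth rate on the log parameters
X = np.column_stack([np.ones(n), log_mu_in, log_mu_out, log_drho])
coef, *_ = np.linalg.lstsq(X, log_sigma, rcond=None)
# The coefficient with the largest magnitude among the viscosities flags
# the parameter the growth rate is most sensitive to (here, the inner layer).
```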

  3. Jackknife variance of the partial area under the empirical receiver operating characteristic curve.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-04-01

Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (the partial area, or partial AUC) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
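
The two quantities involved here, the area under the empirical ROC staircase restricted to a clinically relevant FPR range and its jackknife variance, can be sketched as below. This is a generic pooled leave-one-subject-out jackknife on synthetic scores, not the closed-form expression derived in the paper.

```python
import numpy as np

def empirical_pauc(neg, pos, fpr_max=0.3):
    """Area under the empirical ROC staircase for FPR in [0, fpr_max]."""
    thresh = np.unique(np.concatenate([neg, pos]))[::-1]   # descending thresholds
    fpr = np.concatenate([[0.0], [(neg >= t).mean() for t in thresh]])
    tpr = np.concatenate([[0.0], [(pos >= t).mean() for t in thresh]])
    keep = fpr <= fpr_max
    f = np.append(fpr[keep], fpr_max)
    tp = np.append(tpr[keep], tpr[keep][-1])   # staircase is flat until the next jump
    return float(np.sum(np.diff(f) * (tp[1:] + tp[:-1]) / 2.0))  # trapezoid rule

def jackknife_variance(neg, pos, fpr_max=0.3):
    """Pooled leave-one-subject-out jackknife variance of the partial AUC."""
    lo = [empirical_pauc(np.delete(neg, i), pos, fpr_max) for i in range(len(neg))]
    lo += [empirical_pauc(neg, np.delete(pos, j), fpr_max) for j in range(len(pos))]
    lo = np.asarray(lo)
    n = lo.size
    return float((n - 1) / n * np.sum((lo - lo.mean()) ** 2))

rng = np.random.default_rng(2)
neg = rng.normal(0.0, 1.0, 60)   # scores of actually-negative cases
pos = rng.normal(1.0, 1.0, 60)   # scores of actually-positive cases
pa = empirical_pauc(neg, pos)
var = jackknife_variance(neg, pos)
```

The brute-force jackknife above recomputes the partial AUC once per subject; the paper's analytical expression avoids exactly that computational burden.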

  4. What makes people leave their food? The interaction of personal and situational factors leading to plate leftovers in canteens.

    PubMed

    Lorenz, Bettina Anne-Sophie; Hartmann, Monika; Langen, Nina

    2017-09-01

In order to provide a basis for the reduction of food losses, our study analyzes individual food choice, eating and leftover behavior in a university canteen by consideration of personal, social and environmental determinants. Based on an extended literature review, a structural equation model is derived and empirically tested for a sample of 343 students. The empirical estimates support the derived model with a good overall model fit and sufficient R² values for dependent variables. Hence, our results provide evidence for a general significant impact of behavioral intention and related personal and social determinants as well as for the relevance of environmental/situational determinants such as portion sizes and palatability of food for plate leftovers. Moreover, we find that environmental and personal determinants are interrelated and that the impact of different determinants is relative to perceived time constraints during a visit to the university canteen. Accordingly, we conclude that simple measures to decrease avoidable food waste may take effect via complex and interrelated behavioral structures and that future research should focus on these effects to understand and change food leftover behavior.

  5. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
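
The nearest-neighbor idea in this abstract, picking the restoration technique that minimized OCR error on the most similar labeled document, can be sketched as follows; the document features, candidate techniques, and error values are all synthetic placeholders.

```python
import numpy as np

# Toy version of "map document features -> restoration technique".
rng = np.random.default_rng(4)
train_x = rng.normal(size=(50, 3))          # document features (e.g. noise statistics)
train_err = rng.random((50, 3))             # OCR error of 3 candidate techniques per doc
best_technique = train_err.argmin(axis=1)   # label = error-minimizing technique

def select_technique(x):
    """Return the technique chosen for the nearest labeled document."""
    nn = np.argmin(np.linalg.norm(train_x - x, axis=1))   # 1-nearest neighbor
    return int(best_technique[nn])

choice = select_technique(train_x[7])       # a training point is its own nearest neighbor
```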

  6. Empirically Derived Combinations of Tools and Clinical Cutoffs: An Illustrative Case with a Sample of Culturally/Linguistically Diverse Children

    ERIC Educational Resources Information Center

    Oetting, Janna B.; Cleveland, Lesli H.; Cope, Robert F., III

    2008-01-01

    Purpose: Using a sample of culturally/linguistically diverse children, we present data to illustrate the value of empirically derived combinations of tools and cutoffs for determining eligibility in child language impairment. Method: Data were from 95 4- and 6-year-olds (40 African American, 55 White; 18 with language impairment, 77 without) who…

  7. Are Women Over-Represented in Dead-End Jobs? A Swedish Study Using Empirically Derived Measures of Dead-End Jobs

    ERIC Educational Resources Information Center

    Bihagen, Erik; Ohls, Marita

    2007-01-01

    It has been claimed that women experience fewer career opportunities than men do mainly because they are over-represented in "Dead-end Jobs" (DEJs). Using Swedish panel data covering 1.1 million employees with the same employer in 1999 and 2003, measures of DEJ are empirically derived from analyses of wage mobility. The results indicate…

  8. Diagnostic Classification of Eating Disorders in Children and Adolescents: How Does DSM-IV-TR Compare to Empirically-Derived Categories?

    ERIC Educational Resources Information Center

    Eddy, Kamryn T.; Le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.

    2010-01-01

    Objective: The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA), and to compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method: Eating disorder symptom data collected from 401 youth (aged 7 through 19…

  9. Semi-empirical airframe noise prediction model

    NASA Technical Reports Server (NTRS)

    Hersh, A. S.; Putnam, T. W.; Lasagna, P. L.; Burcham, F. W., Jr.

    1976-01-01

    A semi-empirical maximum overall sound pressure level (OASPL) airframe noise model was derived. The noise radiated from aircraft wings and flaps was modeled by using the trailing-edge diffracted quadrupole sound theory derived by Ffowcs Williams and Hall. The noise radiated from the landing gear was modeled by using the acoustic dipole sound theory derived by Curle. The model was successfully correlated with maximum OASPL flyover noise measurements obtained at the NASA Dryden Flight Research Center for three jet aircraft - the Lockheed JetStar, the Convair 990, and the Boeing 747 aircraft.

  10. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector-correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
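
The semiempirical combination described above follows the usual small-field formalism (Alfonso et al.), in which the correction factor is a Monte Carlo dose-to-water ratio divided by a measured reading ratio. The numerical values below are illustrative, not taken from the study.

```python
# Small-field detector correction factor, kQclin,Qmsr(fclin,fmsr):
#   k = [D_w(f_clin)/D_w(f_msr)]_MonteCarlo / [M(f_clin)/M(f_msr)]_measured
def correction_factor(dose_ratio_mc, reading_ratio_meas):
    return dose_ratio_mc / reading_ratio_meas

# Illustrative numbers: a diode over-responding by a few percent in a small
# field reads "too high", so its correction factor falls below unity.
k = correction_factor(dose_ratio_mc=0.62, reading_ratio_meas=0.645)
```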

  11. Optical assessment of colored dissolved organic matter and its related parameters in dynamic coastal water systems

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy; Varunan, Theenathayalan; Nagendra Jaiganesh, S. N.; Sahay, Arvind; Chauhan, Prakash

    2016-06-01

Prediction of the curve of the absorption coefficient of colored dissolved organic matter (CDOM) and differentiation between marine and terrestrially derived CDOM pools in coastal environments are hampered by a high degree of variability in the composition and concentration of CDOM, uncertainties in retrieved remote sensing reflectance and the weak signal-to-noise ratio of space-borne instruments. In the present study, a hybrid model is presented along with empirical methods to remotely determine the amount and type of CDOM in coastal and inland water environments. A large set of in-situ data collected on several oceanographic cruises and field campaigns from different regional waters was used to develop empirical methods for studying the distribution and dynamics of CDOM, dissolved organic carbon (DOC) and salinity. Our validation analyses demonstrated that the hybrid model is a better descriptor of CDOM absorption spectra compared to the existing models. Additional spectral slope parameters included in the present model to differentiate between terrestrially derived and marine CDOM pools make a substantial improvement over the existing models. Empirical algorithms to derive CDOM, DOC and salinity from remote sensing reflectance data demonstrated success in retrieval of these products with significantly low mean relative percent differences from large in-situ measurements. The performance of these algorithms was further assessed using three hyperspectral HICO images acquired simultaneously with our field measurements in productive coastal and lagoon waters along the southeast coast of India. The validation match-ups of CDOM and salinity showed good agreement between HICO retrievals and field observations. Further analyses of these data showed significant temporal changes in CDOM and phytoplankton absorption coefficients with a distinct phase shift between these two products.
Healthy phytoplankton cells and macrophytes were recognized to directly contribute to the autochthonous production of colored humic-like substances in variable amounts within the lagoon system, despite CDOM content being partly derived through river run-off and wetland discharges as well as from conservative mixing of different water masses. Spatial and temporal maps of CDOM, DOC and salinity products provided an interesting insight into these CDOM dynamics and conservative behavior within the lagoon and its extension in coastal and offshore waters of the Bay of Bengal. The hybrid model and empirical algorithms presented here can be useful to assess CDOM, DOC and salinity fields and their changes in response to increasing runoff of nutrient pollution, anthropogenic activities, hydrographic variations and climate oscillations.

  12. Pleiotropy of cardiometabolic syndrome with obesity-related anthropometric traits determined using empirically derived kinships from the Busselton Health Study.

    PubMed

    Cadby, Gemma; Melton, Phillip E; McCarthy, Nina S; Almeida, Marcio; Williams-Blangero, Sarah; Curran, Joanne E; VandeBerg, John L; Hui, Jennie; Beilby, John; Musk, A W; James, Alan L; Hung, Joseph; Blangero, John; Moses, Eric K

    2018-01-01

Over two billion adults are overweight or obese and therefore at an increased risk of cardiometabolic syndrome (CMS). Obesity-related anthropometric traits genetically correlated with CMS may provide insight into CMS aetiology. The aim of this study was to utilise an empirically derived genetic relatedness matrix to calculate heritabilities and genetic correlations between CMS and anthropometric traits to determine whether they share genetic risk factors (pleiotropy). We used genome-wide single nucleotide polymorphism (SNP) data on 4671 Busselton Health Study participants. Exploiting both known and unknown relatedness, empirical kinship probabilities were estimated using these SNP data. General linear mixed models implemented in SOLAR were used to estimate narrow-sense heritabilities (h²) and genetic correlations (r_g) between 15 anthropometric and 9 CMS traits. Anthropometric traits were adjusted by body mass index (BMI) to determine whether the observed genetic correlation was independent of obesity. After adjustment for multiple testing, all CMS and anthropometric traits were significantly heritable (h² range 0.18–0.57). We identified 50 significant genetic correlations (r_g range: −0.37 to 0.75) between CMS and anthropometric traits. Five genetic correlations remained significant after adjustment for BMI [high density lipoprotein cholesterol (HDL-C) and waist-hip ratio; triglycerides and waist-hip ratio; triglycerides and waist-height ratio; non-HDL-C and waist-height ratio; insulin and iliac skinfold thickness]. This study provides evidence for the presence of potentially pleiotropic genes that affect both anthropometric and CMS traits, independently of obesity.

  13. Relationship between the Geotail spacecraft potential and the magnetospheric electron number density including the distant tail regions

    NASA Astrophysics Data System (ADS)

    Ishisaka, K.; Okada, T.; Tsuruda, K.; Hayakawa, H.; Mukai, T.; Matsumoto, H.

    2001-04-01

The spacecraft potential has been used to derive the electron number density surrounding the spacecraft in the magnetosphere and solar wind. We have investigated the correlation between the spacecraft potential of the Geotail spacecraft and the electron number density derived from the plasma waves in the solar wind and almost all the regions of the magnetosphere, except for the high-density plasmasphere, and obtained an empirical formula to show their relation. The new formula is effective in the range of spacecraft potential from a few volts up to 90 V, corresponding to the electron number density from 0.001 to 50 cm⁻³. We compared the electron number density obtained by the empirical formula with the density obtained by the plasma wave and plasma particle measurements. On occasions the density determined by plasma wave measurements in the lobe region is different from that calculated by the empirical formula. Using the difference in the densities measured by two methods, we discuss whether or not the lower cutoff frequency of the plasma waves, such as continuum radiation, indicates the local electron density near the spacecraft. Then we applied the new relation to the spacecraft potential measured by the Geotail spacecraft during the period from October 1993 to December 1995, and obtained the electron spatial distribution in the solar wind and magnetosphere, including the distant tail region. Higher electron number density is clearly observed on the dawnside than on the duskside of the magnetosphere in the distant tail beyond 100 R_E.

  14. The aging correlation (RH + t): Relative humidity (%) + temperature (deg C)

    NASA Technical Reports Server (NTRS)

    Cuddihy, E. F.

    1986-01-01

An aging correlation between corrosion lifetime and the sum of relative humidity RH (%) and temperature t (°C) has been reported in the literature. This aging correlation is a semi-log plot of corrosion lifetime on the log scale versus the summation term RH(%) + t(°C) on the linear scale. This empirical correlation was derived from observation of experimental data trends and has been referred to as an experimental law. Using electrical resistivity data of polyvinyl butyral (PVB) measured as a function of relative humidity and temperature, it was found that the electrical resistivity could be expressed as a function of the term RH(%) + t(°C). Thus, if corrosion is related to leakage current through an organic insulator, which, in turn, is a function of RH and t, then some partial theoretical validity for the correlation is indicated. This article describes the derivation of the term RH(%) + t(°C) from PVB electrical resistivity data.
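
Written out, the semi-log correlation described above has the form below, where A and B are empirical constants fitted to the observed data:

```latex
\log_{10} t_{\mathrm{corr}} = A - B\,\bigl(\mathrm{RH}(\%) + t(^{\circ}\mathrm{C})\bigr)
```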

  15. Stopping Distances: An Excellent Example of Empirical Modelling.

    ERIC Educational Resources Information Center

    Lawson, D. A.; Tabor, J. H.

    2001-01-01

    Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…

  16. Structural Patterns in Empirical Research Articles: A Cross-Disciplinary Study

    ERIC Educational Resources Information Center

    Lin, Ling; Evans, Stephen

    2012-01-01

    This paper presents an analysis of the major generic structures of empirical research articles (RAs), with a particular focus on disciplinary variation and the relationship between the adjacent sections in the introductory and concluding parts. The findings were derived from a close "manual" analysis of 433 recent empirical RAs from high-impact…

  17. Dynamics of osmosis in a porous medium.

    PubMed

    Cardoso, Silvana S S; Cartwright, Julyan H E

    2014-11-01

    We derive from kinetic theory, fluid mechanics and thermodynamics the minimal continuum-level equations governing the flow of a binary, non-electrolytic mixture in an isotropic porous medium with osmotic effects. For dilute mixtures, these equations are linear and in this limit provide a theoretical basis for the widely used semi-empirical relations of Kedem & Katchalsky (Kedem & Katchalsky 1958 Biochim. Biophys. Acta 27, 229-246 (doi:10.1016/0006-3002(58)90330-5), which have hitherto been validated experimentally but not theoretically. The above linearity between the fluxes and the driving forces breaks down for concentrated or non-ideal mixtures, for which our equations go beyond the Kedem-Katchalsky formulation. We show that the heretofore empirical solute permeability coefficient reflects the momentum transfer between the solute molecules that are rejected at a pore entrance and the solvent molecules entering the pore space; it can be related to the inefficiency of a Maxwellian demi-demon.
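
For reference, the Kedem–Katchalsky relations discussed above take the familiar linear form, with filtration coefficient L_p, reflection coefficient σ, and solute permeability ω relating volume flux J_v and solute flux J_s to the pressure and osmotic-pressure differences:

```latex
J_v = L_p\,(\Delta p - \sigma\,\Delta\pi), \qquad
J_s = \omega\,\Delta\pi + (1 - \sigma)\,\bar{c}_s\,J_v
```

It is this linearity between fluxes and driving forces that the paper shows holds only in the dilute limit.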

  18. Survival estimation and the effects of dependency among animals

    USGS Publications Warehouse

    Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.

    1995-01-01

    Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
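
The effect described above can be reproduced with a toy Monte Carlo in the same spirit: perfectly dependent pairs roughly double the empirical variance of the survival-rate estimate relative to the binomial (independence) value. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n_pairs, reps = 0.8, 100, 4000    # survival prob., pairs per sample, replicates
N = 2 * n_pairs

# Fully dependent pairs: both pair members share a single survival outcome,
# so duplicating each column leaves the sample mean unchanged.
shared = rng.random((reps, n_pairs)) < p
rate_dep = shared.mean(axis=1)

# Fully independent individuals, for comparison
indep = rng.random((reps, N)) < p
rate_ind = indep.mean(axis=1)

theory = p * (1 - p) / N             # binomial variance under independence
var_dep = rate_dep.var()             # ~2x theory when pair fates are identical
var_ind = rate_ind.var()
```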

  19. RR Lyrae Stars as High-Precision Standard Candles in the Mid-Infrared

    NASA Astrophysics Data System (ADS)

    Neeley, Jillian Rose

In this work, we provide the theoretical and empirical framework to establish RR Lyrae stars (RRL) as the anchor of a Population II distance scale. We present new theoretical period-luminosity-metallicity (PLZ) relations for RRL at Spitzer and WISE wavelengths. The PLZ relations were derived using nonlinear, time-dependent convective hydrodynamical models for a broad range in metal abundances (Z = 0.0001 to 0.0198). We also compare our theoretical relations to empirical relations derived from RRL in the field. Our theoretical PLZ relations were combined with multi-wavelength observations to simultaneously fit the distance modulus and extinction of each individual Galactic RRL in our sample. The results are consistent with trigonometric parallax measurements from the Gaia mission's first data release. This analysis has shown that when considering a sample covering a typical range of iron abundances for RRL, the metallicity spread introduces a dispersion in the PL relation on the order of 0.13 mag. However, if this metallicity component is accounted for in a PLZ relation, the dispersion is reduced to 0.02 mag at MIR wavelengths. On the empirical side, we present the analysis of five clusters from the Carnegie RR Lyrae Program (CRRP) sample (M4, NGC 3201, M5, M15, and M14). M4, the nearest and one of the most well-studied clusters, was used as a test case to develop a new data analysis pipeline for CRRP. Following the analysis of the five clusters, the resulting calibration PL relations are M[3.6] = -2.424 +/- 0.079 log P - 1.205 +/- 0.057 and M[4.5] = -2.245 +/- 0.076 log P - 1.225 +/- 0.057. The slope of the PL relations was determined from the weighted average of the cluster results, and the zero point was fixed using five Galactic RRL with geometric parallaxes measured by the Hubble Space Telescope. The dispersion of the RRL around the PL relations ranges from 0.05 mag in M4 to 0.3 mag in M14.
The resulting band-averaged distance moduli for the five clusters agree well with results in the literature. The systematic uncertainty will be greatly reduced when parallaxes of more stars become available from the Gaia mission, and we are able to use the full CRRP sample of 55 Galactic RRL to calibrate the relation.

  20. Neural activity in relation to empirically derived personality syndromes in depression using a psychodynamic fMRI paradigm

    PubMed Central

    Taubner, Svenja; Wiswede, Daniel; Kessler, Henrik

    2013-01-01

Objective: The heterogeneity between patients with depression cannot be captured adequately with existing descriptive systems of diagnosis and neurobiological models of depression. Furthermore, considering the highly individual nature of depression, the application of general stimuli in past research efforts may not capture the essence of the disorder. This study aims to identify subtypes of depression by using empirically derived personality syndromes, and to explore neural correlates of the derived personality syndromes. Materials and Methods: In the present exploratory study, an individually tailored and psychodynamically based functional magnetic resonance imaging paradigm using dysfunctional relationship patterns was presented to 20 chronically depressed patients. Results from the Shedler–Westen Assessment Procedure (SWAP-200) were analyzed by Q-factor analysis to identify clinically relevant subgroups of depression and related brain activation. Results: The principal component analysis of SWAP-200 items from all 20 patients led to a two-factor solution: “Depressive Personality” and “Emotional-Hostile-Externalizing Personality.” Both factors were used in a whole-brain correlational analysis but only the second factor yielded significant positive correlations in four regions: a large cluster in the right orbitofrontal cortex (OFC), the left ventral striatum, a small cluster in the left temporal pole, and another small cluster in the right middle frontal gyrus. Discussion: The degree to which patients with depression score high on the factor “Emotional-Hostile-Externalizing Personality” correlated with relatively higher activity in three key areas involved in emotion processing, evaluation of reward/punishment, negative cognitions, depressive pathology, and social knowledge (OFC, ventral striatum, temporal pole).
Results may contribute to an alternative description of neural correlates of depression showing differential brain activation dependent on the extent of specific personality syndromes in depression. PMID:24363644

  1. Jet Aeroacoustics: Noise Generation Mechanism and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    1998-01-01

This report covers the third year research effort of the project. The research work focussed on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting from the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model. Thus the theory is self-contained. Extensive comparisons between the computed noise spectra of the theory and experimental measurements have been carried out. The parameters include jet Mach number from 0.3 to 2.0 and temperature ratio from 1.0 to 4.8. Excellent agreement is found in the spectrum shape, noise intensity and directivity. It is envisaged that the theory would supersede all semi-empirical and totally empirical jet noise prediction methods in current use.

  2. Experimental investigation on secondary combustion characteristics of airbreathing rockets

    NASA Astrophysics Data System (ADS)

    Mano, Takeshi; Eguchi, Akihiro; Shinohara, Suetsugu; Etou, Takao; Kaneko, Yutaka; Yamamoto, Youichi; Nakagawa, Ichirou

    Empirical correlations of the secondary combustion efficiency of the airbreathing rocket were derived. From the results of a series of experiments employing a connected pipe facility, the combustion efficiency was related to dominant parameters. The feasibility of the performance prediction by one-dimensional analysis was also discussed. The analysis was found to be applicable to the flow processes in the secondary combustor, which include two-stream mixing and combustion.

  3. Alcoholics Anonymous-Related Helping and the Helper Therapy Principle

    PubMed Central

    Pagano, Maria E.; Post, Stephen G.; Johnson, Shannon M.

    2012-01-01

    The helper therapy principle (HTP) observes the helper’s health benefits derived from helping another with a shared malady. The HTP is embodied by the program of Alcoholics Anonymous as a method to diminish egocentrism as a root cause of addiction. This article reviews recent evidence of the HTP in alcohol populations, extends to populations with chronic conditions beyond addiction, and concludes with new directions of empirical inquiry. PMID:23525280

  4. An Analysis of Factor Extraction Strategies: A Comparison of the Relative Strengths of Principal Axis, Ordinary Least Squares, and Maximum Likelihood in Research Contexts That Include Both Categorical and Continuous Variables

    ERIC Educational Resources Information Center

    Coughlin, Kevin B.

    2013-01-01

    This study is intended to provide researchers with empirically derived guidelines for conducting factor analytic studies in research contexts that include dichotomous and continuous levels of measurement. This study is based on the hypotheses that ordinary least squares (OLS) factor analysis will yield more accurate parameter estimates than…

  5. ResearchMaps.org for integrating and planning research.

    PubMed

    Matiasz, Nicholas J; Wood, Justin; Doshi, Pranay; Speier, William; Beckemeyer, Barry; Wang, Wei; Hsu, William; Silva, Alcino J

    2018-01-01

    To plan experiments, a biologist needs to evaluate a growing set of empirical findings and hypothetical assertions from diverse fields that use increasingly complex techniques. To address this problem, we operationalized principles (e.g., convergence and consistency) that biologists use to test causal relations and evaluate experimental evidence. With the framework we derived, we then created a free, open-source web application that allows biologists to create research maps, graph-based representations of empirical evidence and hypothetical assertions found in research articles, reviews, and other sources. With our ResearchMaps web application, biologists can systematically reason through the research that is most important to them, as well as evaluate and plan experiments with a breadth and precision that are unlikely without such a tool.
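    A research map as described above is essentially a labelled graph of evidence. The sketch below assumes a hypothetical schema (the node and edge fields, the example entities, and the `convergence` helper are all illustrative, not the actual ResearchMaps.org data model):

```python
# Hypothetical research-map schema: nodes are entities or phenomena,
# edges are empirical or hypothetical assertions linking them.
# All names and fields here are illustrative, not from ResearchMaps.org.
research_map = {
    "nodes": {
        "A": {"label": "protein X activation"},   # hypothetical entity
        "B": {"label": "long-term memory"},
    },
    "edges": [
        {"src": "A", "dst": "B", "relation": "increases",
         "evidence": "empirical", "technique": "pharmacological"},
        {"src": "A", "dst": "B", "relation": "increases",
         "evidence": "empirical", "technique": "genetic"},
    ],
}

def convergence(m, src, dst):
    """Convergence principle: count the independent techniques
    whose edges assert the same src -> dst relation."""
    techniques = {e["technique"] for e in m["edges"]
                  if e["src"] == src and e["dst"] == dst}
    return len(techniques)

print(convergence(research_map, "A", "B"))  # 2 independent lines of evidence
```

    Principles such as convergence then become simple graph queries, which is what makes systematic reasoning over such maps tractable.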

  6. Optimum wall impedance for spinning modes: A correlation with mode cut-off ratio

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1978-01-01

    A correlating equation relating the optimum acoustic impedance for the wall lining of a circular duct to the acoustic mode cut-off ratio is presented. The optimum impedance was correlated with cut-off ratio because the cut-off ratio appears to be the fundamental parameter governing the propagation of sound in the duct. Modes with similar cut-off ratios respond in a similar way to the acoustic liner. The correlation is a semi-empirical expression developed from an empirical modification of an equation originally derived from sound propagation theory in a thin boundary layer. This correlating equation forms part of a simplified liner design method, based upon modal cut-off ratio, for multimodal noise propagation.

  7. ResearchMaps.org for integrating and planning research

    PubMed Central

    Speier, William; Beckemeyer, Barry; Wang, Wei; Hsu, William; Silva, Alcino J.

    2018-01-01

    To plan experiments, a biologist needs to evaluate a growing set of empirical findings and hypothetical assertions from diverse fields that use increasingly complex techniques. To address this problem, we operationalized principles (e.g., convergence and consistency) that biologists use to test causal relations and evaluate experimental evidence. With the framework we derived, we then created a free, open-source web application that allows biologists to create research maps, graph-based representations of empirical evidence and hypothetical assertions found in research articles, reviews, and other sources. With our ResearchMaps web application, biologists can systematically reason through the research that is most important to them, as well as evaluate and plan experiments with a breadth and precision that are unlikely without such a tool. PMID:29723213

  8. Mapping a research agenda for the science of team science

    PubMed Central

    Falk-Krzesinski, Holly J; Contractor, Noshir; Fiore, Stephen M; Hall, Kara L; Kane, Cathleen; Keyton, Joann; Klein, Julie Thompson; Spring, Bonnie; Stokols, Daniel; Trochim, William

    2012-01-01

    An increase in cross-disciplinary, collaborative team science initiatives over the last few decades has spurred interest by multiple stakeholder groups in empirical research on scientific teams, giving rise to an emergent field referred to as the science of team science (SciTS). This study employed a collaborative team science concept-mapping evaluation methodology to develop a comprehensive research agenda for the SciTS field. Its integrative mixed-methods approach combined group process with statistical analysis to derive a conceptual framework that identifies research areas of team science and their relative importance to the emerging SciTS field. The findings from this concept-mapping project constitute a lever for moving SciTS forward at theoretical, empirical, and translational levels. PMID:23223093

  9. Empirical analysis of the stress-strain relationship between hydraulic head and subsidence in the San Joaquin Valley Aquifer

    NASA Astrophysics Data System (ADS)

    Neff, K. L.; Farr, T.

    2016-12-01

    Aquifer subsidence due to groundwater abstraction poses a significant threat to aquifer sustainability and infrastructure. Preventing permanent compaction, in order to preserve aquifer storage capacity and protect infrastructure, calls for a better understanding of how compaction is related to groundwater abstraction and aquifer hydrogeology. The stress-strain relationship between hydraulic head changes and aquifer compaction has previously been observed to be hysteretic in both empirical and modeling studies. Here, subsidence data for central California's San Joaquin Valley, derived from interferometric synthetic aperture radar (InSAR) for the period 2007-2016, are examined relative to hydraulic head levels in monitoring and production wells collected by the California Department of Water Resources. Such a large, long-term data set is available for empirical analysis for the first time thanks to advances in InSAR data collection and geospatial data management. The California Department of Water Resources (DWR) funded this work to provide background and an update on subsidence in the Central Valley to support future policy. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.

  10. Evaluating the generalizability of GEP models for estimating reference evapotranspiration in distant humid and arid locations

    NASA Astrophysics Data System (ADS)

    Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal

    2017-10-01

    Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness, and heat. A modeling study of cross-station assessment between hyper-arid and humid conditions is reported here. The derived equations estimate ET0 values using temperature-, radiation-, and mass transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. Comparison of the gene expression programming (GEP)-derived equations with corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions, as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, given proper input configurations, cross-station application may be a promising alternative to locally trained models for stations with data scarcity.

  11. An Examination of the Impact of Drizzle Drops on Satellite-Retrieved Effective Particle Sizes

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Arduini, Robert F.; Young, David F.; Ayers, J, Kirk; Albrecht, Bruce A.; Sharon, Tarah; Stevens, Bjorn

    2004-01-01

    In general, cloud effective droplet radii are remotely sensed in the near-infrared using the assumption of a monomodal droplet size distribution. It has been observed in many instances, especially in relatively pristine marine environments, that cloud effective droplet radii derived from satellite data often exceed 15 μm. Comparisons of remotely sensed and in situ retrievals indicate that the former often overestimate the latter in clouds with drizzle-size droplets. To gain a better understanding of this discrepancy, this paper performs a theoretical and empirical evaluation of the impact of drizzle drops on the derived effective radius.

  12. An empirical, graphical, and analytical study of the relationship between vegetation indices. [derived from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)

    1981-01-01

    The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.
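    The conclusion that some indices are simple algebraic transforms of others can be shown in a few lines. The sketch below uses the well-known identity between the normalized difference vegetation index (NDVI) and the simple ratio (SR = NIR/Red); the reflectance values are made up for the example and are not from the study:

```python
import numpy as np

def simple_ratio(nir, red):
    """Simple ratio vegetation index: SR = NIR / Red."""
    return nir / red

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Illustrative near-infrared and red reflectances for three pixels.
nir = np.array([0.45, 0.50, 0.30])
red = np.array([0.08, 0.10, 0.15])

# NDVI is an algebraic transform of SR: NDVI = (SR - 1) / (SR + 1)
# (divide the NDVI numerator and denominator by the red reflectance),
# so the two indices carry the same information, rescaled nonlinearly.
sr = simple_ratio(nir, red)
print(np.allclose(ndvi(nir, red), (sr - 1.0) / (sr + 1.0)))  # True
```

    Ranking pixels by NDVI or by SR therefore gives the same ordering; the indices differ only in how they stretch the scale.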

  13. Personality correlates of pathological gambling derived from Big Three and Big Five personality models

    PubMed Central

    Miller, Joshua D.; MacKillop, James; Fortune, Erica E.; Maples, Jessica; Lance, Charles E.; Campbell, W. Keith; Goodie, Adam S.

    2013-01-01

    Personality traits have proven to be consistent and important factors in a variety of externalizing behaviors including addiction, aggression, and antisocial behavior. Given the comorbidity of these behaviors with pathological gambling (PG), it is important to test the degree to which PG shares these trait correlates. In a large community sample of regular gamblers (N=354; 111 with diagnoses of pathological gambling), the relations between measures of two major models of personality – Big Three and Big Five – were examined in relation to PG symptoms derived from a semi-structured diagnostic interview. Across measures, traits related to the experience of strong negative emotions were the most consistent correlates of PG, regardless of whether they were analyzed using bivariate or multivariate analyses. In several instances, however, the relations between personality and PG were moderated by demographic variables such as gender, race, and age. It will be important for future empirical work of this nature to pay closer attention to potentially important moderators of these relations. PMID:23078872

  14. Evidence-based Nursing Education - a Systematic Review of Empirical Research

    PubMed Central

    Reiber, Karin

    2011-01-01

    The project "Evidence-based Nursing Education - Preparatory Stage", funded by the Landesstiftung Baden-Württemberg within the programme Impulsfinanzierung Forschung (Funding to Stimulate Research), aims to collect information on current research concerned with nursing education and to process existing data. The results of empirical research which has already been carried out were systematically evaluated with the aim of identifying further topics, fields and matters of interest for empirical research in nursing education. In the course of the project, the available empirical studies on nursing education were scientifically analysed and systematised. The over-arching aim of the evidence-based training approach, which extends beyond the aims of this project, is the conception, organisation and evaluation of vocational training and educational processes in the caring professions on the basis of empirical data. The following contribution first provides a systematic, theoretical link to the over-arching reference framework, as the evidence-based approach is adapted from thematically related specialist fields. The research design of the project is oriented towards criteria introduced from a selection of studies and carries out a two-stage systematic review of the selected studies. As a result, the current status of research in nursing education, as well as its organisation and structure, and questions relating to specialist training and comparative education are introduced and discussed. Finally, the empirical research on nursing training is critically appraised as a complementary element in educational theory/psychology of learning and in the ethical tradition of research. This contribution aims, on the one hand, to derive and describe the methods used and to introduce the steps followed in gathering and evaluating the data; on the other hand, it is intended to give a systematic overview of empirical research work in nursing education. In order to preserve a holistic view of the research field and methods, detailed individual findings are not included. PMID:21818237

  15. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
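    The smooth estimator itself is not specified in this abstract, but its closest classical relative, Robbins' nonparametric empirical Bayes estimator for a Poisson mean, is a few lines of code. In the sketch below, the Gamma prior is used only to generate test data and is unknown to the estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many units, each with its own Poisson intensity drawn from a
# Gamma prior (unknown to the estimator), and one observed count each.
true_lam = rng.gamma(shape=2.0, scale=1.5, size=100_000)
x = rng.poisson(true_lam)

# Empirical frequencies f(k) of the observed counts.
freq = np.bincount(x, minlength=x.max() + 2) / x.size

def robbins(k):
    """Robbins' empirical Bayes estimate of E[lambda | X = k]:
    (k + 1) * f(k + 1) / f(k), using only the observed marginal f."""
    return (k + 1) * freq[k + 1] / freq[k] if freq[k] > 0 else float("nan")

# For a Gamma(shape=2, scale=1.5) prior the exact posterior mean at
# count k is 0.6 * (k + 2), so robbins(0) should land near 1.2.
print(robbins(0))
```

    The smooth estimator of the abstract addresses the main weakness visible here: the raw Robbins ratio becomes noisy at counts k where f(k) is estimated from few observations.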

  16. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a low-rank approximation expressed in terms of actual columns and rows of A. The DEIM-induced factorization is compared with CUR approximations based on leverage scores.
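    A minimal sketch of how a DEIM-induced CUR can work, assuming the standard greedy DEIM index selection applied to the leading singular vectors (this is a generic illustration of the technique, not necessarily the paper's exact algorithm):

```python
import numpy as np

def deim_indices(V):
    """Greedy DEIM index selection from a basis V (n x k):
    pick the row of largest residual magnitude at each step."""
    n, k = V.shape
    idx = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        # Interpolate column j at the already-selected rows, then
        # pick the row where the interpolation residual is largest.
        c = np.linalg.solve(V[np.ix_(idx, range(j))], V[idx, j])
        r = V[:, j] - V[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_cur(A, k):
    """Rank-k CUR approximation A ~ C @ U @ R with DEIM-selected
    columns and rows of A."""
    W, _, Zt = np.linalg.svd(A, full_matrices=False)
    col_idx = deim_indices(Zt[:k].T)   # from top-k right singular vectors
    row_idx = deim_indices(W[:, :k])   # from top-k left singular vectors
    C, R = A[:, col_idx], A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# On an exactly rank-3 matrix, the rank-3 CUR reproduces A to precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
C, U, R = deim_cur(A, 3)
print(np.linalg.norm(A - C @ U @ R))  # tiny for an exactly low-rank A
```

    Unlike an SVD, the factors C and R here are actual columns and rows of A, which keeps interpretability (e.g., which samples and features matter) at the cost of some approximation accuracy on general matrices.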

  17. Multi-scale predictive modeling of nano-material and realistic electron devices

    NASA Astrophysics Data System (ADS)

    Palaria, Amritanshu

    Among the challenges faced in further miniaturization of electronic devices, the heavy influence of the detailed atomic configuration of the materials involved, which often differs significantly from that of the bulk materials, is prominent. Device design has therefore become highly interrelated with material engineering at the atomic level. This thesis aims at outlining, with examples, a multi-scale simulation procedure that allows one to integrate material and device aspects of nano-electronic design to predict the behavior of novel devices made of novel materials. This is pursued in four parts: (1) An approach that combines a higher-time-scale reactive force field analysis with density functional theory to predict the structure of new materials is demonstrated for the first time for nanowires. Novel stable structures for very small diameter silicon nanowires are predicted. (2) Density functional theory is used to show that the new nanowire structures derived in (1) above have properties different from diamond-core wires even though the surface bonds in some may be similar to the surface of bulk silicon. (3) The electronic structure of relatively large-scale germanium sections of realistically strained Si/strained Ge/strained Si nanowire heterostructures is computed using empirical tight binding, and it is shown that the average non-homogeneous strain in these structures drives their interesting non-conventional electronic characteristics, such as hole effective masses which decrease as the wire cross-section is reduced. (4) It is shown that tight binding, though empirical in nature, is not necessarily limited to the material and atomic structure for which the parameters have been empirically derived, but that simple changes may adapt the derived parameters to new bond environments. The Si (100) surface electronic structure is obtained from bulk Si parameters.

  18. Representation and processing of mass and count nouns: a review

    PubMed Central

    Fieder, Nora; Nickels, Lyndsey; Biedermann, Britta

    2014-01-01

    Comprehension and/or production of noun phrases and sentences requires the selection of lexical-syntactic attributes of nouns. These lexical-syntactic attributes include grammatical gender (masculine/feminine/neuter), number (singular/plural) and countability (mass/count). While there has been considerable discussion regarding gender and number, relatively little attention has focused on countability. Therefore, this article reviews empirical evidence for lexical-syntactic specification of nouns for countability. This includes evidence from studies of language production and comprehension with normal speakers and case studies which assess impairments of mass/count nouns in people with acquired brain damage. Current theories of language processing are reviewed and found to be lacking specification regarding countability. Subsequently, the theoretical implications of the empirical studies are discussed in the context of frameworks derived from these accounts of language production (Levelt, 1989; Levelt et al., 1999) and comprehension (Taler and Jarema, 2006). The review concludes that there is empirical support for specification of nouns for countability at a lexical-syntactic level. PMID:24966849

  19. Empirically based assessment and taxonomy of psychopathology for ages 1½-90+ years: Developmental, multi-informant, and multicultural findings.

    PubMed

    Achenbach, Thomas M; Ivanova, Masha Y; Rescorla, Leslie A

    2017-11-01

    Originating in the 1960s, the Achenbach System of Empirically Based Assessment (ASEBA) comprises a family of instruments for assessing problems and strengths for ages 1½-90+ years. To provide an overview of the ASEBA, related research, and future directions for empirically based assessment and taxonomy. Standardized, multi-informant ratings of transdiagnostic dimensions of behavioral, emotional, social, and thought problems are hierarchically scored on narrow-spectrum syndrome scales, broad-spectrum internalizing and externalizing scales, and a total problems (general psychopathology) scale. DSM-oriented and strengths scales are also scored. The instruments and scales have been iteratively developed from assessments of clinical and population samples of hundreds of thousands of individuals. Items, instruments, scales, and norms are tailored to different kinds of informants for ages 1½-5, 6-18, 18-59, and 60-90+ years. To take account of differences between informants' ratings, parallel instruments are completed by parents, teachers, youths, adult probands, and adult collaterals. Syndromes and Internalizing/Externalizing scales derived from factor analyses of each instrument capture variations in patterns of problems that reflect different informants' perspectives. Confirmatory factor analyses have supported the syndrome structures in dozens of societies. Software displays scale scores in relation to user-selected multicultural norms for the age and gender of the person being assessed, according to ratings by each type of informant. Multicultural norms are derived from population samples in 57 societies on every inhabited continent. Ongoing and future research includes multicultural assessment of elders; advancing transdiagnostic progress and outcomes assessment; and testing higher order structures of psychopathology. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Towards a Quasi-global precipitation-induced Landslide Detection System using Remote Sensing Information

    NASA Astrophysics Data System (ADS)

    Adler, B.; Hong, Y.; Huffman, G.; Negri, A.; Pando, M.

    2006-05-01

    Landslides and debris flows are one of the most widespread natural hazards on Earth, responsible for thousands of deaths and billions of dollars in property damage per year. Currently, no system exists at either a national or a global scale to monitor or detect rainfall conditions that may trigger landslides. In this study, global landslide susceptibility is mapped using USGS GTOPO30 Digital Elevation, hydrological derivatives (slopes and wetness index etc.) from HYDRO1k data, soil type information downscaled from Digital Soil Map of the World (Sand, Loam, Silt, or Clay etc.), and MODIS land cover/use classification data. These variables are then combined with empirical landslide inventory data, if available, to derive a global landslide susceptibility map at elemental resolution of 1 x 1 km. This map can then be overlain with the driving force, namely rainfall estimates from the TRMM-based Multiple-satellite Precipitation Analysis to identify when areas with significant landslide potential receive heavy rainfall. The relations between rainfall intensity and rainstorm duration are regionally specific and often take the form of a power-law relation. Several empirical landslide-triggering Rainfall Intensity-Duration thresholds are implemented regionally using the 8-year TRMM-based precipitation with or without the global landslide susceptibility map at continuous space and time domain. Finally, the effectiveness of this system is validated by studying several recent deadly landslide/mudslide events. This study aims to build up a prototype quasi-global potential landslide warning system. Spatially-distributed landslide susceptibility maps and regional empirical rainfall intensity-duration thresholds, in combination with real-time rainfall measurements from space and rainfall forecasts from models, will be the basis for this experimental system.
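    The rainfall intensity-duration thresholds mentioned above have the generic power-law form I = a·D^(-b). As a sketch, the function below evaluates such a threshold; the default coefficients follow Caine's (1980) widely cited global threshold and are an assumed example, since regional implementations like the one described would substitute locally calibrated values:

```python
def exceeds_threshold(intensity_mm_hr, duration_hr, a=14.82, b=0.39):
    """Return True if a rainstorm exceeds a power-law landslide-triggering
    intensity-duration threshold I = a * D**(-b).

    Defaults follow Caine's (1980) global threshold (I in mm/h, D in h);
    regional thresholds would use locally calibrated a and b.
    """
    return intensity_mm_hr > a * duration_hr ** (-b)

# A short intense burst crosses the threshold; a long light rain does not.
print(exceeds_threshold(30.0, 2.0))   # True  (threshold ~11.3 mm/h at 2 h)
print(exceeds_threshold(1.0, 24.0))   # False (threshold ~4.3 mm/h at 24 h)
```

    In the system described, such a check would run per grid cell against satellite rainfall accumulations, gated by the landslide susceptibility map.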

  1. Comparisons of thermospheric density data sets and models

    NASA Astrophysics Data System (ADS)

    Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin

    During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command and calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and of pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.

  2. Derivation of the Freundlich Adsorption Isotherm from Kinetics

    ERIC Educational Resources Information Center

    Skopp, Joseph

    2009-01-01

    The Freundlich adsorption isotherm is a useful description of adsorption phenomena. It is frequently presented as an empirical equation with little theoretical basis. In fact, a variety of derivations exist. Here a new derivation is presented using the concepts of fractal reaction kinetics. This derivation provides an alternative basis for…

  3. Calibration of an M L scale for South Africa using tectonic earthquake data recorded by the South African National Seismograph Network: 2006 to 2009

    NASA Astrophysics Data System (ADS)

    Saunders, Ian; Ottemöller, Lars; Brandt, Martin B. C.; Fourie, Christoffel J. S.

    2013-04-01

    A relation to determine local magnitude (M_L) based on the original Richter definition is empirically derived from synthetic Wood-Anderson seismograms recorded by the South African National Seismograph Network. In total, 263 earthquakes in the distance range 10 to 1,000 km, representing 1,681 trace amplitudes measured in nanometers from synthesized Wood-Anderson records on the vertical channel, were considered to derive an attenuation relation appropriate for South Africa through multiple regression analysis. Additionally, station corrections were determined for 26 stations during the regression analysis, resulting in values ranging between -0.31 and 0.50. The most appropriate M_L scale for South Africa from this study satisfies the equation M_L = log10(A) + 1.149 log10(R) + 0.00063 R + 2.04 - S. The anelastic attenuation term derived from this study indicates that ground motion attenuation is significantly different from Southern California but comparable with stable continental regions.
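    The published relation evaluates directly; the sketch below simply implements the formula as stated in the abstract (the example amplitude and distance are arbitrary):

```python
import math

def local_magnitude(amplitude_nm, distance_km, station_correction=0.0):
    """South African local magnitude (Saunders et al., 2013):
    M_L = log10(A) + 1.149*log10(R) + 0.00063*R + 2.04 - S,
    where A is the synthetic Wood-Anderson trace amplitude in
    nanometers, R the distance in km, and S the station correction
    (between -0.31 and 0.50 in this study).
    """
    return (math.log10(amplitude_nm)
            + 1.149 * math.log10(distance_km)
            + 0.00063 * distance_km
            + 2.04
            - station_correction)

# Arbitrary example: a 100 nm amplitude at 100 km, no station correction:
# M_L = 2 + 2.298 + 0.063 + 2.04 = 6.401
print(local_magnitude(100.0, 100.0))
```

    The 0.00063 R term is the anelastic attenuation contribution discussed in the abstract; the 1.149 log10(R) term carries the geometrical spreading.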

  4. Reducing uncertainty and increasing consistency: technical improvements to forest carbon pool estimation using the national forest inventory of the US

    Treesearch

    C.W. Woodall; G.M. Domke; J. Coulston; M.B. Russell; J.A. Smith; C.H. Perry; S.M. Ogle; S. Healey; A. Gray

    2015-01-01

    The FIA program does not directly measure forest C stocks. Instead, a combination of empirically derived C estimates (e.g., standing live and dead trees) and models (e.g., understory C stocks related to stand age and forest type) are used to estimate forest C stocks. A series of recent refinements in FIA estimation procedures have sought to reduce the uncertainty...

  5. A review of depolarization modeling for earth-space radio paths at frequencies above 10 GHz

    NASA Technical Reports Server (NTRS)

    Bostian, C. W.; Stutzman, W. L.; Gaines, J. M.

    1982-01-01

    A review is presented of models for the depolarization, caused by scattering from raindrops and ice crystals, that limits the performance of dual-polarized satellite communication systems at frequencies above 10 GHz. The physical mechanisms of depolarization as well as theoretical formulations and empirical data are examined. Three theoretical models, the transmission, attenuation-derived, and scaling models, are described and their relative merits are considered.

  6. Chiral Nucleon-Nucleus Potentials at N3LO

    NASA Astrophysics Data System (ADS)

    Finelli, Paolo; Vorabbi, Matteo; Giusti, Carlotta

    2018-03-01

    Elastic scattering is probably one of the most relevant tools to study nuclear interactions. In this contribution we study the domain of applicability of microscopic two-body chiral potentials in the construction of an optical potential. A microscopic complex optical potential is derived and tested performing calculations on 16O at different energies. Good agreement with empirical data is obtained if a Lippmann-Schwinger cutoff at relatively high energies (above 500 MeV) is employed.

  7. Linking agent-based models and stochastic models of financial markets

    PubMed Central

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene

    2012-01-01

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086

  8. Linking agent-based models and stochastic models of financial markets.

    PubMed

    Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H Eugene

    2012-05-29

    It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.

  9. Large wood influence on stream metabolism at a reach-scale in the Assabet River, Massachusetts

    NASA Astrophysics Data System (ADS)

    David, G. C. L.; Snyder, N. P.; Rosario, G. M.

    2016-12-01

    Total stream metabolism (TSM) represents the transfer of carbon through a channel by both primary production and respiration, and thus represents the movement of energy through a watershed. Large wood (LW) creates geomorphically complex channels by diverting flows, altering shear stresses on the channel bed and banks, and promoting pool development. The increase in habitat complexity around LW is expected to increase TSM, but this change has not been directly measured. In this study, we measured changes in TSM around a LW jam in a Massachusetts river. Dissolved oxygen (DO) time series data are used to quantify gross primary production (GPP) and ecosystem respiration (ER), which equal TSM when summed. The two primary objectives of this study are to (1) assess changes in TSM around LW and (2) compare empirical methods of deriving TSM to Grace et al.'s (2015) BASE model. We hypothesized that LW would increase TSM by providing larger pools, increasing coverage for fish and macroinvertebrates, increasing organic matter accumulation, and providing a place for primary producers to anchor and grow. The Assabet River drains a 78 km2 basin in central Massachusetts that provides public water supply to 7 towns. The change in TSM at the reach scale was assessed using two YSI 6-Series Multiparameter Water Quality sondes over a 140 m long pool-riffle open meadow section. The reach included 6 pools and one LW jam. Every two weeks from July to November 2015, the sondes were moved to different pools. The sondes collected DO, temperature, depth, pH, salinity, light intensity, and turbidity at 15-minute intervals. Velocity (V) and discharge (Q) were measured weekly around the sondes and at established cross sections. Instantaneous V and Q were calculated for each sonde by modeling flows in HEC-RAS. Overall, TSM was heavily influenced by pool size and, indirectly, by the LW jam, which was associated with the largest pool. The largest error in TSM calculations is related to the empirically calculated reaeration flux (k), which represents oxygen inputs from the atmosphere. We used two well-established empirical equations to compare k values to the BASE model. The model agreed with empirically derived values during intermediate and high Q. Modeled GPP and ER diverged, sometimes by an order of magnitude, from the empirically derived results during the lowest flows.
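The single-station DO balance behind this kind of calculation can be sketched as below. This is a generic illustration of how ER and GPP can be separated from a DO time series given an empirical reaeration coefficient; the function name and discretization are hypothetical, and this is not the authors' BASE model:

```python
import numpy as np

def single_station_metabolism(do, do_sat, k_per_day, daylight, dt_days):
    """Minimal single-station open-water metabolism sketch. The DO balance is
    dDO/dt = GPP - ER + k*(DOsat - DO); at night GPP = 0, which isolates ER,
    and the daytime net change then yields GPP."""
    ddo_dt = np.gradient(do, dt_days)        # observed DO rate of change
    reaeration = k_per_day * (do_sat - do)   # empirical reaeration flux
    nep = ddo_dt - reaeration                # net ecosystem production = GPP - ER
    er = -nep[~daylight].mean()              # night-time NEP = -ER
    gpp = nep[daylight].mean() + er          # daytime NEP = GPP - ER
    return gpp, er
```

In practice k comes from one of the empirical reaeration equations mentioned in the abstract, and it is that term that dominates the error budget.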

  10. Violent Crime in Post-Civil War Guatemala: Causes and Policy Implications

    DTIC Science & Technology

    2015-03-01

    on field research and case studies in Honduras, Bolivia, and Argentina. Bailey’s Security Trap theory is comprehensive in nature and derived from... research question. The second phase uses empirical data and comparative case studies to validate or challenge selected arguments that potentially... (Figure 2: Sample Research Methodology)

  11. Galaxy and Mass Assembly (GAMA): Mid-infrared Properties and Empirical Relations from WISE

    NASA Astrophysics Data System (ADS)

    Cluver, M. E.; Jarrett, T. H.; Hopkins, A. M.; Driver, S. P.; Liske, J.; Gunawardhana, M. L. P.; Taylor, E. N.; Robotham, A. S. G.; Alpaslan, M.; Baldry, I.; Brown, M. J. I.; Peacock, J. A.; Popescu, C. C.; Tuffs, R. J.; Bauer, A. E.; Bland-Hawthorn, J.; Colless, M.; Holwerda, B. W.; Lara-López, M. A.; Leschinski, K.; López-Sánchez, A. R.; Norberg, P.; Owers, M. S.; Wang, L.; Wilkins, S. M.

    2014-02-01

    The Galaxy And Mass Assembly (GAMA) survey furnishes a deep redshift catalog that, when combined with the Wide-field Infrared Survey Explorer (WISE), allows us to explore for the first time the mid-infrared properties of >110,000 galaxies over 120 deg2 to z ≈ 0.5. In this paper we detail the procedure for producing the matched GAMA-WISE catalog for the G12 and G15 fields, in particular characterizing and measuring resolved sources; the complete catalogs for all three GAMA equatorial fields will be made available through the GAMA public releases. The wealth of multiwavelength photometry and optical spectroscopy allows us to explore empirical relations between optically determined stellar mass (derived from synthetic stellar population models) and 3.4 μm and 4.6 μm WISE measurements. Similarly, dust-corrected Hα-derived star formation rates can be compared to 12 μm and 22 μm luminosities to quantify correlations that can be applied to large samples to z < 0.5. To illustrate the applications of these relations, we use the 12 μm star formation prescription to investigate the behavior of specific star formation within the GAMA-WISE sample and underscore the ability of WISE to detect star-forming systems at z ~ 0.5. Within galaxy groups (determined by a sophisticated friends-of-friends scheme), results suggest that galaxies with a neighbor within 100 h-1 kpc have, on average, lower specific star formation rates than typical GAMA galaxies with the same stellar mass.

  12. Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations

    NASA Technical Reports Server (NTRS)

    Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.

    1981-01-01

    Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio depends on the mean density of the wind and not on the effective temperature, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically calibrated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.

  13. Seasonal forecast of St. Louis encephalitis virus transmission, Florida.

    PubMed

    Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-05-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empiric relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill-verification analyses may be applied to test the predictability of an empiric disease forecast model.

  14. Seasonal Forecast of St. Louis Encephalitis Virus Transmission, Florida

    PubMed Central

    Day, Jonathan F.; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-01-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model. PMID:15200812

  15. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, and thus to define the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
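A minimal sketch of such a Taylor-expansion metamodel, truncated at second order in the density expansion and with round placeholder values for the empirical parameters (not the paper's fitted ones):

```python
def meta_eos_energy(n, delta, n_sat=0.16,
                    E_sat=-15.8, K_sat=230.0,               # isoscalar parameters (MeV)
                    E_sym=32.0, L_sym=60.0, K_sym=-100.0):  # isovector parameters (MeV)
    """Energy per nucleon (MeV) at baryon density n (fm^-3) and isospin
    asymmetry delta, from a second-order Taylor expansion around saturation.
    Parameter values are illustrative round numbers."""
    x = (n - n_sat) / (3.0 * n_sat)
    # Symmetric matter: no linear term, since the pressure vanishes at saturation
    e_is = E_sat + 0.5 * K_sat * x**2
    # Symmetry energy S(n), expanded with slope L_sym and curvature K_sym
    e_iv = E_sym + L_sym * x + 0.5 * K_sym * x**2
    return e_is + e_iv * delta**2
```

At saturation density, symmetric matter (delta = 0) sits at E_sat, and pure neutron matter (delta = 1) at E_sat + E_sym, which is a quick consistency check on the expansion.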

  16. Against the empirical viability of the Deutsch-Wallace-Everett approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Dawid, Richard; Thébault, Karim P. Y.

    2014-08-01

    The subjective Everettian approach to quantum mechanics presented by Deutsch and Wallace fails to constitute an empirically viable theory of quantum phenomena. The decision theoretic implementation of the Born rule realized in this approach provides no basis for rejecting Everettian quantum mechanics in the face of empirical data that contradicts the Born rule. The approach of Greaves and Myrvold, which provides a subjective implementation of the Born rule as well but derives it from empirical data rather than decision theoretic arguments, avoids the problem faced by Deutsch and Wallace and is empirically viable. However, there is good reason to cast doubts on its scientific value.

  17. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
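The projection from a two-mode affiliation network to a one-mode derivative network, as described above, can be sketched as follows; the holdings data and names are toy assumptions, not the paper's dataset:

```python
from collections import defaultdict
from itertools import combinations

def one_mode_projection(affiliations):
    """Project a two-mode affiliation network (shareholder -> companies held)
    onto the companies: two companies are linked, with an integer weight, by
    the number of shareholders they share."""
    edges = defaultdict(int)
    for holder, companies in affiliations.items():
        for a, b in combinations(sorted(companies), 2):
            edges[(a, b)] += 1          # each common shareholder adds weight 1
    return dict(edges)

# Toy holdings: three hypothetical funds and three hypothetical energy companies
holdings = {"fund_A": {"OilCo", "GasCo"},
            "fund_B": {"OilCo", "GasCo", "SolarCo"},
            "fund_C": {"SolarCo"}}
proj = one_mode_projection(holdings)
print(proj)  # {('GasCo', 'OilCo'): 2, ('GasCo', 'SolarCo'): 1, ('OilCo', 'SolarCo'): 1}
```

The same construction, run on nation-level aggregates instead of companies, yields the holding-based network of nations the study analyzes.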

  18. Stochastic theory of fatigue corrosion

    NASA Astrophysics Data System (ADS)

    Hu, Haiyun

    1999-10-01

    A stochastic theory of corrosion has been constructed. The stochastic equations are described, giving the transportation corrosion rate and the fluctuation corrosion coefficient. In addition, the pit diameter distribution function, the average pit diameter and the most probable pit diameter, together with other related empirical formulae, have been derived. In order to clarify the effect of stress range on the initiation and growth behaviour of pitting corrosion, round smooth specimens were tested under cyclic loading in 3.5% NaCl solution.

  19. A Behavior-Analytic Account of Motivational Interviewing

    ERIC Educational Resources Information Center

    Christopher, Paulette J.; Dougher, Michael J.

    2009-01-01

    Several published reports have now documented the clinical effectiveness of motivational interviewing (MI). Despite its effectiveness, there are no generally accepted or empirically supported theoretical accounts of its effects. The theoretical accounts that do exist are mentalistic, descriptive, and not based on empirically derived behavioral…

  20. Classification of Marital Relationships: An Empirical Approach.

    ERIC Educational Resources Information Center

    Snyder, Douglas K.; Smith, Gregory T.

    1986-01-01

    Derives an empirically based classification system of marital relationships, employing a multidimensional self-report measure of marital interaction. Spouses' profiles on the Marital Satisfaction Inventory for samples of clinic and nonclinic couples were subjected to cluster analysis, resulting in separate five-group typologies for husbands and…

  1. Development of a detector model for generation of synthetic radiographs of cargo containers

    NASA Astrophysics Data System (ADS)

    White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.

    2008-05-01

    Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects, thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
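A minimal sketch of such an empirically calibrated detector-response chain (sensitivity gain, Gaussian PSF blur for finite spatial resolution, Poisson counting noise); all parameter values are placeholders, not the calibrated system response from the paper:

```python
import numpy as np

def detector_response(flux, gain=500.0, psf_sigma=1.5, noise_seed=0):
    """Hypothetical detector model: scale a transported uncollided-photon flux
    image by system sensitivity (gain), blur it with a separable Gaussian PSF,
    and add Poisson counting noise."""
    rng = np.random.default_rng(noise_seed)
    x = np.arange(-4, 5)
    kernel = np.exp(-x**2 / (2 * psf_sigma**2))
    kernel /= kernel.sum()                        # normalize PSF to unit sum
    img = gain * flux
    img = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)
    return rng.poisson(np.clip(img, 0, None)).astype(float)
```

In the empirical-calibration setting described above, the gain, PSF width, and noise statistics would be fitted to radiographs of well-characterized objects rather than chosen by hand.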

  2. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving the unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly stream flow measurements needed to derive a unit hydrograph are not available, so methods are needed for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics, and are usually referred to as Synthetic Unit Hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined by using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
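A gamma probability density function used as a synthetic unit hydrograph can be sketched as below; the shape and scale values are illustrative, not the PSO-optimized Citarum parameters:

```python
import math

def gamma_unit_hydrograph(t, n, k):
    """Gamma-shaped synthetic unit hydrograph ordinate at time t (hours):
    u(t) = (t/k)^(n-1) * exp(-t/k) / (Gamma(n) * k),
    with dimensionless shape n and scale k; generic sketch values only."""
    return (t / k) ** (n - 1) * math.exp(-t / k) / (math.gamma(n) * k)

# Discretize and check the hydrograph encloses ~unit runoff volume,
# with its peak at t = (n - 1) * k
dt = 0.01
ordinates = [gamma_unit_hydrograph(i * dt, n=3.0, k=2.0) for i in range(1, 20000)]
volume = sum(ordinates) * dt
```

Because the gamma density integrates to one, the hydrograph conserves the unit depth of effective rainfall by construction; translation and scaling, as in the paper, then adapt it to the observed field conditions.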

  3. Flow properties of the solar wind obtained from white light data, Ulysses observations and a two-fluid model

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai; Esser, Ruth; Guhathakurta, Madhulika; Fisher, Richard

    1995-01-01

    Using the empirical constraints provided by observations in the inner corona and in interplanetary space, we derive the flow properties of the solar wind using a two-fluid model. Density and scale-height temperatures are derived from white-light coronagraph observations on SPARTAN 201-1 and at Mauna Loa, from 1.16 to 5.5 solar radii, in the two polar coronal holes on 11-12 Apr. 1993. Interplanetary measurements of the flow speed and proton mass flux are taken from the Ulysses south polar passage. By comparing the results of the model computations that fit the empirical constraints in the two coronal hole regions, we show how line-of-sight effects influence the empirical inferences and, subsequently, the corresponding numerical results.

  4. An empirically based conceptual framework for fostering meaningful patient engagement in research.

    PubMed

    Hamilton, Clayon B; Hoens, Alison M; Backman, Catherine L; McKinnon, Annette M; McQuitty, Shanon; English, Kelly; Li, Linda C

    2018-02-01

    Patient engagement in research (PEIR) is promoted to improve the relevance and quality of health research, but has little conceptualization derived from empirical data. To address this issue, we sought to develop an empirically based conceptual framework for meaningful PEIR founded on a patient perspective. We conducted a qualitative secondary analysis of in-depth interviews with 18 patient research partners from a research centre-affiliated patient advisory board. Data analysis involved three phases: identifying the themes, developing a framework and confirming the framework. We coded and organized the data, and abstracted, illustrated, described and explored the emergent themes using thematic analysis. Directed content analysis was conducted to derive concepts from 18 publications related to PEIR to supplement, confirm or refute, and extend the emergent conceptual framework. The framework was reviewed by four patient research partners on our research team. Participants' experiences of working with researchers were generally positive. Eight themes emerged: procedural requirements, convenience, contributions, support, team interaction, research environment, feel valued and benefits. These themes were interconnected and formed a conceptual framework to explain the phenomenon of meaningful PEIR from a patient perspective. This framework, the PEIR Framework, was endorsed by the patient research partners on our team. The PEIR Framework provides guidance on aspects of PEIR to address for meaningful PEIR. It could be particularly useful when patient-researcher partnerships are led by researchers with little experience of engaging patients in research. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  5. Dairy farmers' use and non-use values in animal welfare: Determining the empirical content and structure with anchored best-worst scaling.

    PubMed

    Hansson, H; Lagerkvist, C J

    2016-01-01

    In this study, we sought to identify empirically the types of use and non-use values that motivate dairy farmers in their work relating to animal welfare of dairy cows. We also sought to identify how they prioritize between these use and non-use values. Use values are derived from productivity considerations; non-use values are derived from the wellbeing of the animals, independent of the present or future use the farmer may make of the animal. In particular, we examined the empirical content and structure of the economic value dairy farmers associate with animal welfare of dairy cows. Based on a best-worst scaling approach and data from 123 Swedish dairy farmers, we suggest that the economic value those farmers associate with animal welfare of dairy cows covers aspects of both use and non-use type, with non-use values appearing more important. Using principal component factor analysis, we were able to check unidimensionality of the economic value construct. These findings are useful for understanding why dairy farmers may be interested in considering dairy cow welfare. Such understanding is essential for improving agricultural policy and advice aimed at encouraging dairy farmers to improve animal welfare; communicating to consumers the values under which dairy products are produced; and providing a basis for more realistic assumptions when developing economic models about dairy farmers' behavior. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
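The standard best-minus-worst count score underlying best-worst scaling can be sketched as follows; the item names here are hypothetical stand-ins, not the survey's actual value statements:

```python
from collections import Counter

def best_worst_scores(choices, n_appearances):
    """Count-based best-worst scaling score per item:
    (times chosen best - times chosen worst) / times the item appeared.
    `choices` is a list of (best, worst) picks, one per choice set."""
    best = Counter(b for b, _ in choices)
    worst = Counter(w for _, w in choices)
    items = set(best) | set(worst) | set(n_appearances)
    return {i: (best[i] - worst[i]) / n_appearances[i] for i in items}

# Toy choice sets with hypothetical use/non-use value items
choices = [("animal wellbeing", "milk yield"),
           ("animal wellbeing", "labour time"),
           ("milk yield", "labour time")]
appearances = {"animal wellbeing": 3, "milk yield": 3, "labour time": 3}
scores = best_worst_scores(choices, appearances)
```

Scores near +1 mark items the respondent consistently prioritizes (here the non-use item), near -1 items consistently deprioritized, which is the ordering the study then examines with factor analysis.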

  6. On the impact of helium abundance on the Cepheid period-luminosity and Wesenheit relations and the distance ladder

    NASA Astrophysics Data System (ADS)

    Carini, R.; Brocato, E.; Raimondo, G.; Marconi, M.

    2017-08-01

    This work analyses the effect of the helium content on synthetic period-luminosity relations (PLRs) and period-Wesenheit relations (PWRs) of Cepheids and the systematic uncertainties on the derived distances that a hidden population of He-enhanced Cepheids may generate. We use new stellar and pulsation models to build a homogeneous and consistent framework to derive the Cepheid features. The Cepheid populations expected in synthetic colour-magnitude diagrams of young stellar systems (from 20 to 250 Myr) are computed in several photometric bands for Y = 0.25 and 0.35, at a fixed metallicity (Z = 0.008). The PLRs appear to be very similar in the two cases, with negligible effects (a few per cent) on distances, while the PWRs differ somewhat, with systematic uncertainties in deriving distances as high as ˜7 per cent at log P < 1.5. Statistical effects due to the number of variables used to determine the relations contribute a distance systematic error of the order of a few per cent, with values decreasing from optical to near-infrared bands. The empirical PWRs derived from multiwavelength data sets for the Large Magellanic Cloud (LMC) are in very good agreement with our theoretical PWRs obtained with a standard He content, supporting the evidence that LMC Cepheids do not show any He effect.

  7. Pre-main-sequence isochrones - II. Revising star and planet formation time-scales

    NASA Astrophysics Data System (ADS)

    Bell, Cameron P. M.; Naylor, Tim; Mayne, N. J.; Jeffries, R. D.; Littlefair, S. P.

    2013-09-01

    We have derived ages for 13 young (<30 Myr) star-forming regions and find that they are up to a factor of 2 older than the ages typically adopted in the literature. This result has wide-ranging implications, including that circumstellar discs survive longer (≃ 10-12 Myr) and that the average Class I lifetime is greater (≃1 Myr) than currently believed. For each star-forming region, we derived two ages from colour-magnitude diagrams. First, we fitted models of the evolution between the zero-age main sequence and terminal-age main sequence to derive a homogeneous set of main-sequence ages, distances and reddenings with statistically meaningful uncertainties. Our second age for each star-forming region was derived by fitting pre-main-sequence stars to new semi-empirical model isochrones. For the first time (for a set of clusters younger than 50 Myr), we find broad agreement between these two ages, and since these are derived from two distinct mass regimes that rely on different aspects of stellar physics, it gives us confidence in the new age scale. This agreement is largely due to our adoption of empirical colour-Teff relations and bolometric corrections for pre-main-sequence stars cooler than 4000 K. The revised ages for the star-forming regions in our sample are: ˜2 Myr for NGC 6611 (Eagle Nebula; M 16), IC 5146 (Cocoon Nebula), NGC 6530 (Lagoon Nebula; M 8) and NGC 2244 (Rosette Nebula); ˜6 Myr for σ Ori, Cep OB3b and IC 348; ≃10 Myr for λ Ori (Collinder 69); ≃11 Myr for NGC 2169; ≃12 Myr for NGC 2362; ≃13 Myr for NGC 7160; ≃14 Myr for χ Per (NGC 884); and ≃20 Myr for NGC 1960 (M 36).

  8. Revising Star and Planet Formation Timescales

    NASA Astrophysics Data System (ADS)

    Bell, Cameron P. M.; Naylor, Tim; Mayne, N. J.; Jeffries, R. D.; Littlefair, S. P.

    2013-07-01

    We have derived ages for 13 young (<30 Myr) star-forming regions and find that they are up to a factor of 2 older than the ages typically adopted in the literature. This result has wide-ranging implications, including that circumstellar discs survive longer (≃ 10-12 Myr) and that the average Class I lifetime is greater (≃1 Myr) than currently believed. For each star-forming region, we derived two ages from colour-magnitude diagrams. First, we fitted models of the evolution between the zero-age main sequence and terminal-age main sequence to derive a homogeneous set of main-sequence ages, distances and reddenings with statistically meaningful uncertainties. Our second age for each star-forming region was derived by fitting pre-main-sequence stars to new semi-empirical model isochrones. For the first time (for a set of clusters younger than 50 Myr), we find broad agreement between these two ages, and since these are derived from two distinct mass regimes that rely on different aspects of stellar physics, it gives us confidence in the new age scale. This agreement is largely due to our adoption of empirical colour-Teff relations and bolometric corrections for pre-main-sequence stars cooler than 4000 K. The revised ages for the star-forming regions in our sample are: 2 Myr for NGC 6611 (Eagle Nebula; M 16), IC 5146 (Cocoon Nebula), NGC 6530 (Lagoon Nebula; M 8) and NGC 2244 (Rosette Nebula); 6 Myr for σ Ori, Cep OB3b and IC 348; ≃10 Myr for λ Ori (Collinder 69); ≃11 Myr for NGC 2169; ≃12 Myr for NGC 2362; ≃13 Myr for NGC 7160; ≃14 Myr for χ Per (NGC 884); and ≃20 Myr for NGC 1960 (M 36).

  9. Tree Guidelines for Inland Empire Communities

    Treesearch

    E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao; D.R. Pittenger; D.R. Hodel

    2001-01-01

    Communities in the Inland Empire region of California contain over 8 million people, or about 25% of the state’s population. The region’s inhabitants derive great benefit from trees because compared to coastal areas, the summers are hotter and air pollution levels are higher. The region’s climate is still mild enough to grow a diverse mix of trees. The Inland Empire’s...

  10. RR Lyrae period luminosity relations with Spitzer

    NASA Astrophysics Data System (ADS)

    Neeley, Jillian R.; Marengo, Massimo; CRRP Team

    2017-01-01

    RR Lyrae variable stars have long been known to be valuable distance indicators, but only recently has a well-defined period-luminosity relationship been utilized at infrared wavelengths. In my thesis, I am combining Spitzer Space Telescope data of RR Lyrae stars obtained as part of the Carnegie RR Lyrae Program with ground-based NIR data to characterize the period-luminosity-metallicity (PLZ) relation and provide an independent Population II calibration of the cosmic distance scale. I will discuss the ongoing efforts to calibrate this relation using objects such as M4 and NGC 6441 and how the first data release from the Gaia mission impacts our findings. I will also compare my preliminary empirical relations to theoretical PLZ relations derived from stellar pulsation models.
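Fitting an empirical PLZ relation of the form M = a + b·log10(P) + c·[Fe/H] reduces to linear least squares; the sketch below recovers assumed coefficients from synthetic noise-free data and is not based on the Spitzer/CRRP measurements:

```python
import numpy as np

def fit_plz(log_p, feh, mag):
    """Least-squares fit of M = a + b*log10(P) + c*[Fe/H] to absolute
    magnitudes; returns the coefficients (a, b, c)."""
    A = np.column_stack([np.ones_like(log_p), log_p, feh])
    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coeffs

# Synthetic sample: periods and metallicities drawn at random,
# magnitudes generated from assumed illustrative coefficients
rng = np.random.default_rng(0)
log_p = rng.uniform(-0.5, 0.1, 50)
feh = rng.uniform(-2.5, 0.0, 50)
mag = -0.8 - 2.3 * log_p + 0.18 * feh
a, b, c = fit_plz(log_p, feh, mag)
```

With real photometry the magnitudes carry noise and the fitted coefficients come with covariances, but the design-matrix structure is the same.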

  11. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical algorithm, QAA, and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations obtained by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
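The conversion from retrieved absorption to chlorophyll via a fixed optical cross section, as described above, is a one-line division; the sketch below uses the 0.056 m2 mg-1 value quoted in the abstract:

```python
def chl_from_aph(aph_443, aph_star=0.056):
    """Convert phytoplankton absorption at 443 nm (m^-1) to chlorophyll
    concentration (mg m^-3) assuming a fixed optical cross section
    a*ph(443) = aph(443)/chl = 0.056 m^2 mg^-1 (the SeaDAS default
    mentioned in the abstract)."""
    return aph_443 / aph_star
```

Holding a*ph fixed is exactly the assumption the study flags: any natural variability in the cross section maps directly into bias in the chl estimate.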

  12. Geophysical evaluation of sandstone aquifers in the Reconcavo-Tucano Basin, Bahia -- Brazil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, O.A.L. de

    1993-11-01

    The upper clastic sediments in the Reconcavo-Tucano basin comprise a multilayer aquifer system of Jurassic age. Its groundwater is normally fresh down to depths of more than 1,000 m. Locally, however, there are zones producing high salinity or sulfur geothermal water. Analysis of electrical logs of more than 150 wells enabled the identification of the most typical sedimentary structures and the gross geometries for the sandstone units in selected areas of the basin. Based on this information, the thick sands are interpreted as coalescent point bars and the shales as flood plain deposits of a large fluvial environment. The resistivity logs and core laboratory data are combined to develop empirical equations relating aquifer porosity and permeability to log-derived parameters such as formation factor and cementation exponent. Temperature logs of 15 wells were useful to quantify the water leakage through semiconfining shales. The groundwater quality was inferred from spontaneous potential (SP) log deflections under control of chemical analysis of water samples. An empirical chart is developed that relates the SP-derived water resistivity to the true water resistivity within the formations. The patterns of salinity variation with depth inferred from SP logs were helpful in identifying subsurface flows along major fault zones, where extensive mixing of water is taking place. A total of 49 vertical Schlumberger resistivity soundings aid in defining aquifer structures and in extrapolating the log-derived results. Transition zones between fresh and saline waters have also been detected based on a combination of logging and surface sounding data. Ionic filtering by water leakage across regional shales, local convection and mixing along major faults, and hydrodynamic dispersion away from lateral permeability contrasts are the main mechanisms controlling the observed distributions of salinity and temperature within the basin.
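Empirical relations of the kind described, linking porosity to the log-derived formation factor and cementation exponent, are commonly of Archie form F = a / phi^m; a sketch with generic default coefficients (not the basin's fitted values):

```python
def formation_factor(rock_resistivity, water_resistivity):
    """Formation factor F = R0 / Rw from the resistivity of the fully
    water-saturated rock and of the formation water."""
    return rock_resistivity / water_resistivity

def porosity_from_formation_factor(F, a=1.0, m=2.0):
    """Invert Archie's relation F = a / phi**m for porosity; the tortuosity
    factor 'a' and cementation exponent 'm' are generic defaults here, where
    the study instead fits them from logs and core data."""
    return (a / F) ** (1.0 / m)
```

Given a log-derived F of 25, the default coefficients imply a porosity of 0.2; in the study, the same inversion is run with basin-specific a and m calibrated against core measurements.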

  13. Personality correlates of pathological gambling derived from Big Three and Big Five personality models.

    PubMed

    Miller, Joshua D; Mackillop, James; Fortune, Erica E; Maples, Jessica; Lance, Charles E; Keith Campbell, W; Goodie, Adam S

    2013-03-30

Personality traits have proved to be consistent and important factors in a variety of externalizing behaviors, including addiction, aggression, and antisocial behavior. Given the comorbidity of these behaviors with pathological gambling (PG), it is important to test the degree to which PG shares these trait correlates. In a large community sample of regular gamblers (N=354; 111 with diagnoses of pathological gambling), the relations between measures of two major models of personality - Big Three and Big Five - were examined in relation to PG symptoms derived from a semi-structured diagnostic interview. Across measures, traits related to the experience of strong negative emotions were the most consistent correlates of PG, regardless of whether they were analyzed using bivariate or multivariate analyses. In several instances, however, the relations between personality and PG were moderated by demographic variables such as gender, race, and age. It will be important for future empirical work of this nature to pay closer attention to potentially important moderators of these relations. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.

    PubMed

    Wright, Adam; Ai, Angela; Ash, Joan; Wiesen, Jane F; Hickman, Thu-Trang T; Aaron, Skye; McEvoy, Dustin; Borkowsky, Shane; Dissanayake, Pavithra I; Embi, Peter; Galanter, William; Harper, Jeremy; Kassakian, Steve Z; Ramoni, Rachel; Schreiber, Richard; Sirajuddin, Anwar; Bates, David W; Sittig, Dean F

    2018-05-01

    To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.

  15. Deriving Empirically-Based Design Guidelines for Advanced Learning Technologies that Foster Disciplinary Comprehension

    ERIC Educational Resources Information Center

    Poitras, Eric; Trevors, Gregory

    2012-01-01

    Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…

  16. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    PubMed

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.
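The discrimination step described above amounts to asking how well each decoy's residue depths track the empirical sensitivity scores. A toy sketch of that idea, scoring candidate models by the Spearman correlation between a RankScore-like vector and model-derived depths; all numbers are synthetic stand-ins, not CcdB data:

```python
import random

def ranks(xs):
    """1-based ranks of values (no ties expected for random floats)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# Hypothetical per-residue mutational-sensitivity scores (stand-in for RankScore):
rank_score = [random.random() for _ in range(50)]
# A near-native decoy yields residue depths that track the scores;
# a poor decoy yields essentially unrelated depths.
good_decoy_depth = [s + random.gauss(0, 0.1) for s in rank_score]
bad_decoy_depth = random.sample(rank_score, len(rank_score))

print(spearman(rank_score, good_decoy_depth) > spearman(rank_score, bad_decoy_depth))
```

A real pipeline would compute residue depth geometrically from each decoy's coordinates; the ranking logic is the same.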

  17. Data-driven regions of interest for longitudinal change in frontotemporal lobar degeneration.

    PubMed

    Pankov, Aleksandr; Binney, Richard J; Staffaroni, Adam M; Kornak, John; Attygalle, Suneth; Schuff, Norbert; Weiner, Michael W; Kramer, Joel H; Dickerson, Bradford C; Miller, Bruce L; Rosen, Howard J

    2016-01-01

Current research is investigating the potential utility of longitudinal measurement of brain structure as a marker of drug effect in clinical trials for neurodegenerative disease. Recent studies in Alzheimer's disease (AD) have shown that measurement of change in empirically derived regions of interest (ROIs) allows more reliable measurement of change over time compared with regions chosen a-priori based on known effects of AD on brain anatomy. Frontotemporal lobar degeneration (FTLD) is a devastating neurodegenerative disorder for which there are no approved treatments. The goal of this study was to identify an empirical ROI that maximizes the effect size for the annual rate of brain atrophy in FTLD compared with healthy age-matched controls, and to estimate the effect size and associated power estimates for a theoretical study that would use change within this ROI as an outcome measure. Eighty-six patients with FTLD were studied, including 43 who were imaged twice at 1.5 T and 43 at 3 T, along with 105 controls (37 imaged at 1.5 T and 67 at 3 T). Empirically derived maps of change were generated separately for each field strength and included the bilateral insula, dorsolateral, medial and orbital frontal, basal ganglia and lateral and inferior temporal regions. The extent of regions included in the 3 T map was larger than that in the 1.5 T map. At both field strengths, the effect sizes for imaging were larger than for any clinical measures. At 3 T, the effect size for longitudinal change measured within the empirically derived ROI was larger than the effect sizes derived from frontal lobe, temporal lobe or whole brain ROIs. The effect size derived from the data-driven 1.5 T map was smaller than at 3 T, and was not larger than the effect size derived from a-priori ROIs. It was estimated that measurement of longitudinal change using 1.5 T MR systems requires approximately a 3-fold increase in sample size to obtain effect sizes equivalent to those seen at 3 T. While the results should be confirmed in additional datasets, these results indicate that empirically derived ROIs can reduce the number of subjects needed for a longitudinal study of drug effects in FTLD compared with a-priori ROIs. Field strength may have a significant impact on the utility of imaging for measuring longitudinal change.
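The reported 3-fold sample-size penalty follows directly from the standard two-sample power approximation, since the required n per group scales as 1/d². A minimal sketch assuming α = 0.05, 80% power, and an illustrative effect size (not values from the study):

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate two-sample sample size per group: n = 2*((z_a + z_b)/d)**2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = z.inv_cdf(power)          # power quantile
    return 2 * ((z_a + z_b) / effect_size) ** 2

d_3T = 1.2              # illustrative effect size at 3 T (hypothetical)
d_15T = d_3T / sqrt(3)  # effect size reduced by sqrt(3) at 1.5 T
ratio = n_per_group(d_15T) / n_per_group(d_3T)
print(round(ratio, 2))  # 3.0: n scales as 1/d**2
```

Any effect size reduced by a factor of √3 triples the required sample, independent of the α and power chosen.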

  18. Global statistics of liquid water content and effective number density of water clouds over ocean derived from combined CALIPSO and MODIS measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.

    2007-03-01

This study presents an empirical relation that links the layer-integrated depolarization ratios, the extinction coefficients, and the effective radii of water clouds, based on Monte Carlo simulations of CALIPSO lidar observations. Combining this relation with the cloud effective radius retrieved from MODIS, the cloud liquid water content and effective number density of water clouds are estimated from CALIPSO lidar depolarization measurements. Global statistics of the cloud liquid water content and effective number density are presented.

  19. HIV-related sexual risk behavior among African American adolescent girls.

    PubMed

    Danielson, Carla Kmett; Walsh, Kate; McCauley, Jenna; Ruggiero, Kenneth J; Brown, Jennifer L; Sales, Jessica M; Rose, Eve; Wingood, Gina M; Diclemente, Ralph J

    2014-05-01

Latent class analysis (LCA) is a useful statistical tool that can be used to enhance understanding of how various patterns of combined sexual behavior risk factors may confer differential levels of HIV infection risk and to identify subtypes among African American adolescent girls. Data for this analysis were derived from baseline assessments completed prior to randomization in an HIV prevention trial. Participants were African American girls (n=701) aged 14-20 years presenting to sexual health clinics. Girls completed an audio computer-assisted self-interview, which assessed a range of variables regarding sexual history and current and past sexual behavior. Two latent classes were identified, with the probability statistics for the two groups in this model being 0.89 and 0.88, respectively. In the final multivariate model, class 1 (the "higher risk" group; n=331) was distinguished by a higher likelihood of >5 lifetime sexual partners, having sex while high on alcohol/drugs, less frequent condom use, and history of sexually transmitted diseases (STDs), when compared with class 2 (the "lower risk" group; n=370). The derived model correctly classified 85.3% of participants into the two groups and accounted for 71% of the variance in the latent HIV-related sexual behavior risk variable. The higher risk class also had worse scores on all hypothesized correlates (e.g., self-esteem, history of sexual assault or physical abuse) relative to the lower risk class. Sexual health clinics represent a unique point of access for HIV-related sexual risk behavior intervention delivery by capitalizing on contact with adolescent girls when they present for services. Four empirically supported risk factors differentiated higher versus lower HIV risk. Replication of these findings is warranted and may offer an empirical basis for parsimonious screening recommendations for girls presenting for sexual healthcare services.

  20. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  1. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  2. Quantitative structure activity relationships of some pyridine derivatives as corrosion inhibitors of steel in acidic medium.

    PubMed

    El Ashry, El Sayed H; El Nemr, Ahmed; Ragab, Safaa

    2012-03-01

Quantum chemical calculations using density functional theory (B3LYP/6-31G DFT) and semi-empirical AM1 methods were performed on ten pyridine derivatives used as corrosion inhibitors for mild steel in acidic medium to determine the relationship between molecular structure and inhibition efficiency. Quantum chemical parameters such as the total negative charge (TNC) on the molecule, the energy of the highest occupied molecular orbital (E(HOMO)), the energy of the lowest unoccupied molecular orbital (E(LUMO)) and the dipole moment (μ), as well as linear solvation energy terms, molecular volume (Vi) and dipolar-polarization (π), were correlated with the corrosion inhibition efficiencies of the ten pyridine derivatives. A possible correlation between corrosion inhibition efficiencies and structural properties was explored in order to reduce the number of compounds to be selected for testing from a library of compounds. It was found that the theoretical data support the experimental results. The results were used to predict the corrosion inhibition of 24 related pyridine derivatives.

  3. Conformational locking by design: relating strain energy with luminescence and stability in rigid metal-organic frameworks.

    PubMed

    Shustova, Natalia B; Cozzolino, Anthony F; Dincă, Mircea

    2012-12-05

    Minimization of the torsional barrier for phenyl ring flipping in a metal-organic framework (MOF) based on the new ethynyl-extended octacarboxylate ligand H(8)TDPEPE leads to a fluorescent material with a near-dark state. Immobilization of the ligand in the rigid structure also unexpectedly causes significant strain. We used DFT calculations to estimate the ligand strain energies in our and all other topologically related materials and correlated these with empirical structural descriptors to derive general rules for trapping molecules in high-energy conformations within MOFs. These studies portend possible applications of MOFs for studying fundamental concepts related to conformational locking and its effects on molecular reactivity and chromophore photophysics.

  4. A modified Lorentz theory as a test theory of special relativity

    NASA Technical Reports Server (NTRS)

    Chang, T.; Torr, D. G.; Gagnon, D. R.

    1988-01-01

Attention has been given recently to a modified Lorentz theory (MLT) that is based on the generalized Galilean transformation. Some explicit formulas within the framework of MLT, dealing with the one-way velocity of light, slow-clock transport, and the Doppler effect, are derived. A number of typical experiments are analyzed on this basis. Results indicate that the empirical equivalence between MLT and special relativity is still maintained to second-order terms. The results of previous works that predict that the MLT might be distinguished from special relativity at the third order by Doppler centrifuge tests capable of a fractional frequency detection threshold of 10^-15 are confirmed.

  5. The methane absorption spectrum near 1.73 μm (5695-5850 cm^-1): Empirical line lists at 80 K and 296 K and rovibrational assignments

    NASA Astrophysics Data System (ADS)

    Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.

    2018-07-01

    The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad between 5695 and 5850 cm^-1. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise-equivalent absorption of about α_min ≈ 1.5 × 10^-7 cm^-1. Two empirical line lists are constructed, including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to 13CH4 present in natural abundance were identified by comparison with a spectrum of pure 13CH4 recorded under the same temperature conditions. About 1700 empirical values of the lower-state energy level, Eemp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide an accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower-state rotational quantum number, Jemp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for 12CH4 in this region.
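The two-temperature method above rests on the standard Boltzmann dependence of line intensity, S(T) ∝ exp(-c2·E″/T)/Q(T), so the 80 K / 296 K intensity ratio can be inverted for the lower-state energy E″. A minimal round-trip sketch, neglecting stimulated emission and using illustrative partition sums Q (not HITRAN values):

```python
from math import exp, log

C2 = 1.4388  # second radiation constant hc/k_B, in cm*K

def intensity_ratio(E_low, T1, T2, Q1, Q2):
    """Ratio S(T1)/S(T2) for a line with lower-state energy E_low (cm^-1),
    neglecting stimulated emission: S(T) ~ exp(-C2*E_low/T) / Q(T)."""
    return (Q2 / Q1) * exp(-C2 * E_low * (1.0 / T1 - 1.0 / T2))

def lower_state_energy(R, T1, T2, Q1, Q2):
    """Invert the measured intensity ratio R for E_low (cm^-1)."""
    return -log(R * Q1 / Q2) / (C2 * (1.0 / T1 - 1.0 / T2))

# Round trip with illustrative partition sums (hypothetical values):
T1, T2, Q1, Q2 = 80.0, 296.0, 150.0, 600.0
R = intensity_ratio(200.0, T1, T2, Q1, Q2)
print(round(lower_state_energy(R, T1, T2, Q1, Q2), 6))  # recovers 200.0
```

The large 1/T1 − 1/T2 lever arm between 80 K and 296 K is what makes the derived Eemp values accurate enough to show the near-integer Jemp propensity noted above.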

  6. A review of covariate selection for non-experimental comparative effectiveness research.

    PubMed

    Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler

    2013-11-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
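The claim that adjustment for a common cause pathway removes confounding can be seen in a small simulation: a confounder Z drives both treatment and outcome while the true treatment effect is zero, so the naive contrast is biased and a Z-stratified contrast is not. All numbers are synthetic:

```python
import random
from statistics import mean

random.seed(0)

# Null treatment effect, confounded by a common cause Z:
# Z influences both treatment assignment T and outcome Y; T itself does nothing.
n = 20000
Z = [random.random() < 0.5 for _ in range(n)]
T = [random.random() < (0.8 if z else 0.2) for z in Z]
Y = [(2.0 if z else 0.0) + random.gauss(0, 1) for z in Z]

def diff_in_means(ts, ys):
    treated = [y for t, y in zip(ts, ys) if t]
    control = [y for t, y in zip(ts, ys) if not t]
    return mean(treated) - mean(control)

naive = diff_in_means(T, Y)  # biased upward: Z is a common cause
# Adjust by stratifying on Z (blocking the common-cause pathway) and pooling
adjusted = mean(
    diff_in_means([t for t, z in zip(T, Z) if z == zval],
                  [y for y, z in zip(Y, Z) if z == zval])
    for zval in (True, False)
)
print(abs(adjusted) < 0.1 < naive)  # adjustment removes the confounding bias
```

Conversely, conditioning on a collider or an instrument in the same simulation would not help and could increase bias, which is the paper's argument for attending to structural form.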

  7. A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research

    PubMed Central

    Sauer, Brian C.; Brookhart, Alan; Roy, Jason; Vanderweele, Tyler

    2014-01-01

This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research (CER), and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, while adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically-derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. PMID:24006330

  8. Global statistics of liquid water content and effective number concentration of water clouds over ocean derived from combined CALIPSO and MODIS measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.

    2007-06-01

    This study presents an empirical relation that links the volume extinction coefficients of water clouds, the layer integrated depolarization ratios measured by lidar, and the effective radii of water clouds derived from collocated passive sensor observations. Based on Monte Carlo simulations of CALIPSO lidar observations, this method combines the cloud effective radius reported by MODIS with the lidar depolarization ratios measured by CALIPSO to estimate both the liquid water content and the effective number concentration of water clouds. The method is applied to collocated CALIPSO and MODIS measurements obtained during July and October of 2006, and January 2007. Global statistics of the cloud liquid water content and effective number concentration are presented.
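The coupling of extinction and effective radius that this retrieval exploits rests on the general geometric-optics identity for water clouds, σ_ext = 3·LWC/(2·ρ_w·r_eff). A minimal sketch of that identity (not CALIPSO's specific depolarization parameterization):

```python
RHO_WATER = 1000.0  # density of liquid water, kg/m^3

def liquid_water_content(sigma_ext, r_eff):
    """LWC (kg/m^3) from extinction sigma_ext (1/m) and effective radius
    r_eff (m), via the geometric-optics relation
    sigma_ext = 3*LWC / (2*rho_w*r_eff)."""
    return (2.0 / 3.0) * RHO_WATER * sigma_ext * r_eff

# Example: extinction of 30 km^-1 and r_eff = 10 micrometers
lwc = liquid_water_content(30e-3, 10e-6)
print(lwc * 1000)  # in g/m^3
```

Given extinction from the lidar depolarization relation and r_eff from MODIS, this identity yields LWC directly, which is the combination step the abstract describes.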

  9. Comparison of data from the Scanning Multifrequency Microwave Radiometer (SMMR) with data from the Advanced Very High Resolution Radiometer (AVHRR) for terrestrial environmental monitoring - An overview

    NASA Technical Reports Server (NTRS)

    Townshend, J. R. G.; Choudhury, B. J.; Tucker, C. J.; Giddings, L.; Justice, C. O.

    1989-01-01

Comparisons between the microwave polarized difference temperature (MPDT) derived from 37 GHz band data and the normalized difference vegetation index (NDVI) derived from near-infrared and red bands, drawn from several empirical investigations, are summarized. These indicate the complementary character of the two measures in environmental monitoring. Overall, the NDVI is more sensitive to green leaf activity, whereas the MPDT appears also to be related to other elements of the above-ground biomass. Monitoring of hydrological phenomena is carried out much more effectively by the MPDT. Further work is needed to explain spectral and temporal variation in the MPDT, both through modelling and field experiments.

  10. The Barnes-Evans color-surface brightness relation: A preliminary theoretical interpretation

    NASA Technical Reports Server (NTRS)

    Shipman, H. L.

    1980-01-01

Model atmosphere calculations are used to assess whether an empirically derived relation between V-R color and surface brightness is independent of a variety of stellar parameters, including surface gravity. This relationship is used in a variety of applications, including the determination of the distances of Cepheid variables using a method based on the Baade-Wesselink method. It is concluded that the use of a main-sequence relation between V-R color and surface brightness in determining the radii of giant stars is subject to systematic errors smaller than 10% in the determination of a radius or distance for temperatures cooler than 12,000 K. The error in white dwarf radii determined from a main-sequence color-surface brightness relation is roughly 10%.

  11. Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1984-06-01

    A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.

  12. An Empirical Typology of Narcissism and Mental Health in Late Adolescence

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Aalsma, Matthew C.

    2006-01-01

    A two-step cluster analytic strategy was used in two studies to identify an empirically derived typology of narcissism in late adolescence. In Study 1, late adolescents (N=204) responded to the profile of narcissistic dispositions and measures of grandiosity (''superiority'') and idealization (''goal instability'') inspired by Kohut's theory,…

  13. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  14. Evaluating the intersection of a regional wildlife connectivity network with highways

    Treesearch

    Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth

    2013-01-01

    Reliable predictions of regional-scale population connectivity are needed to prioritize conservation actions. However, there have been few examples of regional connectivity models that are empirically derived and validated. The central goals of this paper were to (1) evaluate the effectiveness of factorial least cost path corridor mapping on an empirical...

  15. Derivation of an occupational exposure limit (OEL) for methylene chloride based on acute CNS effects and relative potency analysis.

    PubMed

    Storm, J E; Rozman, K K

    1998-06-01

The Occupational Safety and Health Administration (OSHA) methylene chloride Permissible Exposure Limit (PEL) of 25 ppm is quantitatively derived from mouse tumor results observed in a high-exposure National Toxicology Program bioassay. Because this approach depends on controversial interspecies and low-dose extrapolations, the PEL itself has stimulated heated debate. Here, an alternative safety assessment for methylene chloride is presented. It is based on an acute human lowest-observed-adverse-effect level (LOAEL) of 200 ppm for subtle central nervous system (CNS) depression. Steep, parallel exposure-response curves for anesthetic and subanesthetic CNS effects associated with compounds mechanistically and structurally related to methylene chloride are shown to support a safety factor of two to account for inter-individual variability in response. LOAEL/no-observed-adverse-effect-level (NOAEL) ratios for subtle CNS effects associated with structurally related solvents are shown to support a safety factor range of two to four to account for uncertainty in identifying a subthreshold exposure level. Anesthetic relative potencies and anesthetic/subanesthetic effect level ratios are shown to be constant for the compounds evaluated, demonstrating that subanesthetic relative potencies are also constant. Relative potencies among similarly derived occupational exposure limits (OELs) for solvents structurally related to methylene chloride are therefore used to validate the derived methylene chloride OEL range of 25-50 ppm. Because this safety assessment is based on human (rather than rodent) data and empirical (rather than theoretical) exposure-response relationships, and is supported by relative potency analysis, it is a defensible alternative to the OSHA risk assessment and should contribute positively to the debate regarding the appropriate basis and value for a methylene chloride PEL.
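The 25-50 ppm range follows arithmetically from the stated LOAEL and safety factors: a factor of 2 for inter-individual variability times a factor of 2-4 for the LOAEL-to-NOAEL uncertainty, applied to 200 ppm:

```python
LOAEL = 200.0  # ppm, acute human LOAEL for subtle CNS depression (from the abstract)
SF_VARIABILITY = 2.0             # inter-individual variability in response
SF_LOAEL_TO_NOAEL = (2.0, 4.0)   # uncertainty in locating a subthreshold level

oel_high = LOAEL / (SF_VARIABILITY * SF_LOAEL_TO_NOAEL[0])  # 200 / 4  = 50 ppm
oel_low = LOAEL / (SF_VARIABILITY * SF_LOAEL_TO_NOAEL[1])   # 200 / 8  = 25 ppm
print(oel_low, oel_high)  # 25.0 50.0
```

The combined factor of 4-8 thus reproduces the 25-50 ppm OEL range that the relative potency analysis is then used to validate.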

  16. On a New Theoretical Framework for RR Lyrae Stars. II. Mid-infrared Period–Luminosity–Metallicity Relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeley, Jillian R.; Marengo, Massimo; Trueba, Nicolas

    2017-06-01

We present new theoretical period–luminosity–metallicity (PLZ) relations for RR Lyrae stars (RRLs) at Spitzer and WISE wavelengths. The PLZ relations were derived using nonlinear, time-dependent convective hydrodynamical models for a broad range of metal abundances (Z = 0.0001–0.0198). In deriving the light curves, we tested two sets of atmospheric models and found no significant difference between the resulting mean magnitudes. We also compare our theoretical relations to empirical relations derived from RRLs in both the field and in the globular cluster M4. Our theoretical PLZ relations were combined with multi-wavelength observations to simultaneously fit the distance modulus, μ0, and extinction, AV, of both the individual Galactic RRLs and of the cluster M4. The results for the Galactic RRLs are consistent with trigonometric parallax measurements from Gaia's first data release. For M4, we find a distance modulus of μ0 = 11.257 ± 0.035 mag with AV = 1.45 ± 0.12 mag, which is consistent with measurements from other distance indicators. This analysis has shown that, when considering a sample covering a range of iron abundances, the metallicity spread introduces a dispersion in the PL relation on the order of 0.13 mag. However, if this metallicity component is accounted for in a PLZ relation, the dispersion is reduced to ∼0.02 mag at mid-infrared wavelengths.

  17. Compressional and shear-wave velocity versus depth relations for common rock types in northern California

    USGS Publications Warehouse

    Brocher, T.M.

    2008-01-01

    This article presents new empirical compressional and shear-wave velocity (Vp and Vs) versus depth relationships for the most common rock types in northern California. Vp versus depth relations were developed from borehole, laboratory, seismic refraction and tomography, and density measurements, and were converted to Vs versus depth relations using new empirical relations between Vp and Vs. The relations proposed here account for increasing overburden pressure but not for variations in other factors that can influence velocity over short distance scales, such as lithology, consolidation, induration, porosity, and stratigraphic age. Standard deviations of the misfits predicted by these relations thus provide a measure of the importance of the variability in Vp and Vs caused by these other factors. Because gabbros, greenstones, basalts, and other mafic rocks have a different Vp and Vs relationship than sedimentary and granitic rocks, the differences in Vs between these rock types at depths below 6 or 7 km are generally small. The new relations were used to derive the 2005 U.S. Geological Survey seismic velocity model for northern California employed in the broadband strong motion simulations of the 1989 Loma Prieta and 1906 San Francisco earthquakes; initial tests of the model indicate that the Vp model generally compares favorably to regional seismic tomography models but that the Vp and Vs values proposed for the Franciscan Complex may be about 5% too high.
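As an example of the kind of empirical Vp-to-Vs conversion the article describes, Brocher's (2005) widely used "regression fit" is sketched below. The polynomial coefficients are reproduced from that published fit and are not necessarily the rock-type-specific relations derived in this 2008 paper:

```python
def vs_from_vp(vp):
    """Brocher's (2005) regression fit for Vs (km/s) from Vp (km/s),
    valid for roughly 1.5 < vp < 8 km/s. Shown as a representative
    empirical Vp-Vs relation, not this paper's own fits."""
    return (0.7858 - 1.2344 * vp + 0.7949 * vp**2
            - 0.1238 * vp**3 + 0.0064 * vp**4)

# A mid-crustal Vp of 6 km/s maps to Vs of about 3.5 km/s
print(round(vs_from_vp(6.0), 2))
```

Such Vp-Vs regressions are what allow Vp-versus-depth curves, which are far better constrained by boreholes and refraction data, to be converted into the Vs-versus-depth curves needed for strong-motion simulation.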

  18. Spin-dependent sum rules connecting real and virtual Compton scattering verified

    NASA Astrophysics Data System (ADS)

    Lensky, Vadim; Pascalutsa, Vladimir; Vanderhaeghen, Marc; Kao, Chung Wen

    2017-04-01

    We present a detailed derivation of the two sum rules relating the spin polarizabilities measured in real, virtual, and doubly virtual Compton scattering. For example, the polarizability δLT, accessed in inclusive electron scattering, is related to the spin polarizability γE1E1 and the slope of the generalized polarizabilities P(M1,M1)1 − P(L1,L1)1, measured in real and virtual Compton scattering, respectively. We verify these sum rules in different variants of chiral perturbation theory, discuss their empirical verification for the proton, and consider prospects for their use in studies of the nucleon spin structure.

  19. The Origin of Gravitational Lensing: A Postscript to Einstein's 1936 Science Paper

    PubMed

    Renn; Sauer; Stachel

    1997-01-10

    Gravitational lensing, now taken as an important astrophysical consequence of the general theory of relativity, was found even before this theory was formulated but was discarded as a speculative idea without any chance of empirical confirmation. Reconstruction of some of Einstein's research notes dating back to 1912 reveals that he explored the possibility of gravitational lensing 3 years before completing his general theory of relativity. On the basis of preliminary insights into this theory, Einstein had already derived the basic features of the lensing effect. When he finally published the very same results 24 years later, it was only in response to prodding by an amateur scientist.

  20. Thermodynamic properties of semiconductor compounds studied based on Debye-Waller factors

    NASA Astrophysics Data System (ADS)

    Van Hung, Nguyen; Toan, Nguyen Cong; Ba Duc, Nguyen; Vuong, Dinh Quoc

    2015-08-01

    Thermodynamic properties of semiconductor compounds have been studied based on Debye-Waller factors (DWFs), described by the mean square displacement (MSD), which is closely related to the mean square relative displacement (MSRD). Their analytical expressions have been derived based on the statistical moment method (SMM) and the empirical many-body Stillinger-Weber potentials. Numerical results for the MSDs of GaAs, GaP, InP, and InSb, which have zinc-blende structure, are found to be in reasonable agreement with experiment and other theories. This paper shows that an element's MSD value depends on the binary semiconductor compound within which it resides.

  1. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma (<400 km) for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly larger distances from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state, empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will show the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
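For contrast with the empirical coma model, the simpler Haser model mentioned above can be sketched in a few lines: a parent species flowing out radially at constant speed and decaying photochemically. The production rate, outflow speed, and lifetime below are illustrative placeholders, not values for 67P.

```python
import math

def haser_density(r, Q=1e28, v=700.0, tau=6e4):
    """Haser-model number density (m^-3) at cometocentric distance r (m)
    for a parent species: spherically symmetric outflow at constant speed v
    (m/s) with photochemical lifetime tau (s). Q (molecules/s), v, and tau
    are illustrative placeholder values."""
    scale_length = v * tau                    # e-folding distance (m)
    return Q / (4.0 * math.pi * v * r**2) * math.exp(-r / scale_length)
```

Well inside the scale length the density falls off as 1/r²; the empirical model in the abstract adds local-time and declination structure that this spherically symmetric form cannot capture.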

  2. No breakdown of the radiatively driven wind theory in low-metallicity environments

    NASA Astrophysics Data System (ADS)

    Bouret, J.-C.; Lanz, T.; Hillier, D. J.; Martins, F.; Marcolino, W. L. F.; Depagne, E.

    2015-05-01

    We present a spectroscopic analysis of Hubble Space Telescope/Cosmic Origins Spectrograph observations of three massive stars in the low-metallicity dwarf galaxies IC 1613 and WLM. These stars were previously observed with Very Large Telescope (VLT)/X-shooter by Tramper et al., who claimed that their mass-loss rates are higher than expected from theoretical predictions for the underlying metallicity. A comparison of the far-ultraviolet (FUV) spectra with those of stars of similar spectral types/luminosity classes in the Galaxy and the Magellanic Clouds provides a direct, model-independent check of the mass-loss-metallicity relation. A quantitative spectroscopic analysis is then carried out using the non-LTE (NLTE) stellar atmosphere code CMFGEN. We derive the photospheric and wind characteristics, benefiting from a much better sensitivity of the FUV lines to wind properties than Hα. Iron and CNO abundances are measured, providing an independent check of the stellar metallicity. The spectroscopic analysis indicates that Z/Z⊙ = 1/5, similar to a Small Magellanic Cloud-type environment and higher than usually quoted for IC 1613 and WLM. The mass-loss rates are smaller than the empirical ones by Tramper et al. and those predicted by the widely used theoretical recipe by Vink et al. On the other hand, we show that the empirical, FUV-based mass-loss rates are in good agreement with those derived from mass fluxes computed by Lucy. We do not concur with Tramper et al. that there is a breakdown in the mass-loss-metallicity relation.

  3. Fire risk in San Diego County, California: A weighted Bayesian model approach

    USGS Publications Warehouse

    Kolden, Crystal A.; Weigel, Timothy J.

    2007-01-01

    Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.

  4. A protocol for the creation of useful geometric shape metrics illustrated with a newly derived geometric measure of leaf circularity.

    PubMed

    Krieger, Jonathan D

    2014-08-01

    I present a protocol for creating geometric leaf shape metrics to facilitate widespread application of geometric morphometric methods to leaf shape measurement. • To quantify circularity, I created a novel shape metric in the form of the vector between a circle and a line, termed geometric circularity. Using leaves from 17 fern taxa, I performed a coordinate-point eigenshape analysis to empirically identify patterns of shape covariation. I then compared the geometric circularity metric to the empirically derived shape space and the standard metric, circularity shape factor. • The geometric circularity metric was consistent with empirical patterns of shape covariation and appeared more biologically meaningful than the standard approach, the circularity shape factor. The protocol described here has the potential to make geometric morphometrics more accessible to plant biologists by generalizing the approach to developing synthetic shape metrics based on classic, qualitative shape descriptors.

  5. Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region

    NASA Astrophysics Data System (ADS)

    Zhou, B.; Xiao, H.

    2016-12-01

    Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-River Headwater Region (THR). These data were used to assess radiation models, comparing the performance of the Zuo model with the model recommended by FAO56 P-M. Four methods (FAO56 P-M, Priestley-Taylor, Hargreaves, and Makkink) were applied to determine daily reference evapotranspiration (ETr) for the growing season, and empirical models were built for estimating daily actual evapotranspiration (ETa) from the ETr derived by each of the four methods, using evapotranspiration derived from the Bowen ratio method on the alpine marsh meadow in this region. Comparing the performance of the four empirical models by RMSE, MAE and AI showed that all of the models can produce reasonable estimates of daily ETa on alpine marsh meadow in this region, with the FAO56 P-M and Makkink empirical models performing better than the Priestley-Taylor and Hargreaves models.
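Of the four ETr methods compared, the Hargreaves (1985) equation is simple enough to sketch directly. The temperature and radiation inputs below are illustrative; Ra must be supplied as extraterrestrial radiation in MJ m⁻² day⁻¹.

```python
def hargreaves_et0(t_mean, t_max, t_min, ra_mj):
    """Daily reference evapotranspiration ET0 (mm/day) by the Hargreaves
    (1985) equation. ra_mj is extraterrestrial radiation in MJ m-2 day-1;
    the 0.408 factor converts it to its evaporation equivalent (mm/day)."""
    return 0.0023 * (0.408 * ra_mj) * (t_mean + 17.8) * (t_max - t_min) ** 0.5
```

For example, hargreaves_et0(10.0, 18.0, 2.0, 30.0) gives roughly 3 mm/day, a plausible growing-season magnitude.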

  6. On the relationship between tumour growth rate and survival in non-small cell lung cancer.

    PubMed

    Mistry, Hitesh B

    2017-01-01

    A recurrent question within oncology drug development is predicting phase III outcome for a new treatment using early clinical data. One approach to tackle this problem has been to derive metrics from mathematical models that describe tumour size dynamics termed re-growth rate and time to tumour re-growth. They have shown to be strong predictors of overall survival in numerous studies but there is debate about how these metrics are derived and if they are more predictive than empirical end-points. This work explores the issues raised in using model-derived metric as predictors for survival analyses. Re-growth rate and time to tumour re-growth were calculated for three large clinical studies by forward and reverse alignment. The latter involves re-aligning patients to their time of progression. Hence, it accounts for the time taken to estimate re-growth rate and time to tumour re-growth but also assesses if these predictors correlate to survival from the time of progression. I found that neither re-growth rate nor time to tumour re-growth correlated to survival using reverse alignment. This suggests that the dynamics of tumours up until disease progression has no relationship to survival post progression. For prediction of a phase III trial I found the metrics performed no better than empirical end-points. These results highlight that care must be taken when relating dynamics of tumour imaging to survival and that bench-marking new approaches to existing ones is essential.

  7. Spatial and temporal patterns of xylem sap pH derived from stems and twigs of Populus deltoides L.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubrey, Doug P.; Boyles, Justin G.; Krysinsky, Laura S.

    2011-02-12

    Xylem sap pH (pHX) is critical in determining the quantity of inorganic carbon dissolved in xylem solution from gaseous [CO2] measurements. Studies of internal carbon transport have generally assumed that pHX derived from stems and twigs is similar and that pHX remains constant through time; however, no empirical studies have investigated these assumptions. If any of these assumptions are violated, potentially large errors can be introduced into calculations of dissolved CO2 in xylem and resulting estimates of internal carbon transport. We tested the validity of assumptions related to pHX in Populus deltoides L. with a series of non-manipulative experiments. The pHX derived from stems and twigs was generally similar and remained relatively constant through a diel period. The only exception was that pHX derived from lower stem sections at night was higher than that derived from twigs. The pHX derived from stems was similar on clear days when solar radiation and vapor pressure deficit (VPD) were similar, but higher on an overcast day when solar radiation and VPD were lower. Similarly, cloudy conditions immediately before an afternoon thunderstorm increased pHX derived from twigs. The pHX derived from twigs remained similar when measured on sunny afternoons between July and October. Our results suggest that common assumptions of pHX used in studies of internal carbon transport appear valid for P. deltoides, and further suggest that pHX is influenced by environmental factors, such as solar radiation and VPD, that affect transpiration rates.

  8. On the use of integrating FLUXNET eddy covariance and remote sensing data for model evaluation

    NASA Astrophysics Data System (ADS)

    Reichstein, Markus; Jung, Martin; Beer, Christian; Carvalhais, Nuno; Tomelleri, Enrico; Lasslop, Gitta; Baldocchi, Dennis; Papale, Dario

    2010-05-01

    The current FLUXNET database (www.fluxdata.org) of CO2, water and energy exchange between the terrestrial biosphere and the atmosphere contains almost 1000 site-years of data from more than 250 sites, encompassing all major biomes of the world and processed in a standardized way (1-3). In this presentation we show that the information in the data is sufficient to derive generalized empirical relationships between vegetation/respective remote sensing information, climate and the biosphere-atmosphere exchanges across global biomes. These empirical patterns are used to generate global grids of the respective fluxes and derived properties (e.g. radiation and water-use efficiencies or climate sensitivities in general, Bowen ratio, AET/PET ratio). For example, we revisit global 'text-book' numbers such as global Gross Primary Productivity (GPP), estimated since the 1970s as ca. 120 PgC (4), or global evapotranspiration (ET), estimated at ca. 65 × 10^3 km^3 yr^-1 (5) - for the first time with a more solid and direct empirical basis. Evaluation against independent data at regional to global scale (e.g. atmospheric CO2 inversions, runoff data) lends support to the validity of our almost purely empirical up-scaling approaches. Moreover, climate factors such as radiation, temperature and water balance are identified as driving factors for variations and trends of carbon and water fluxes, with distinctly different sensitivities between vegetation types. Hence, these global fields of biosphere-atmosphere exchange and the inferred relations between climate, vegetation type and fluxes should be used for evaluation or benchmarking of climate models or their land-surface components, while overcoming scale issues inherent in classical point-to-grid-cell comparisons. 1. M. Reichstein et al., Global Change Biology 11, 1424 (2005). 2. D. Baldocchi, Australian Journal of Botany 56, 1 (2008). 3. D. Papale et al., Biogeosciences 3, 571 (2006). 4. D. E. Alexander, R. W. Fairbridge, Encyclopedia of Environmental Science (Springer, Heidelberg, 1999), pp. 741. 5. T. Oki, S. Kanae, Science 313, 1068 (Aug 25, 2006).

  9. A Grounded Theory of Sexual Minority Women and Transgender Individuals' Social Justice Activism.

    PubMed

    Hagen, Whitney B; Hoover, Stephanie M; Morrow, Susan L

    2018-01-01

    Psychosocial benefits of activism include increased empowerment, social connectedness, and resilience. Yet sexual minority women (SMW) and transgender individuals with multiple oppressed statuses and identities are especially prone to oppression-based experiences, even within minority activist communities. This study sought to develop an empirical model to explain the diverse meanings of social justice activism situated in SMW and transgender individuals' social identities, values, and experiences of oppression and privilege. Using a grounded theory design, 20 SMW and transgender individuals participated in initial, follow-up, and feedback interviews. The most frequent demographic identities were queer or bisexual, White, middle-class women with advanced degrees. The results indicated that social justice activism was intensely relational, replete with multiple benefits, yet rife with experiences of oppression from within and outside of activist communities. The empirically derived model shows the complexity of SMW and transgender individuals' experiences, meanings, and benefits of social justice activism.

  10. New Physical Algorithms for Downscaling SMAP Soil Moisture

    NASA Astrophysics Data System (ADS)

    Sadeghi, M.; Ghafari, E.; Babaeian, E.; Davary, K.; Farid, A.; Jones, S. B.; Tuller, M.

    2017-12-01

    The NASA Soil Moisture Active Passive (SMAP) mission provides new means for estimation of surface soil moisture at the global scale. However, for many hydrological and agricultural applications the spatial resolution of SMAP is too low. To address this scale issue we fused SMAP data with MODIS observations to generate soil moisture maps at 1-km spatial resolution. In the course of this study we improved several existing empirical algorithms and introduced a new physical approach for downscaling SMAP data. The universal triangle/trapezoid model was applied to relate soil moisture to optical/thermal observations such as NDVI, land surface temperature and surface reflectance. These algorithms were evaluated with in situ data measured at 5-cm depth. Our results demonstrate that downscaling SMAP soil moisture data based on physical indicators of soil moisture derived from the MODIS satellite leads to higher accuracy than that achievable with empirical downscaling algorithms. Keywords: soil moisture, microwave data, downscaling, MODIS, triangle/trapezoid model.
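The universal triangle/trapezoid idea can be sketched as follows: a pixel's land surface temperature is normalized between an NDVI-dependent dry edge and wet edge to yield a soil-moisture proxy. The linear-edge parameterization and all numbers here are illustrative assumptions, not the study's fitted edges.

```python
def trapezoid_soil_moisture(lst, ndvi, dry_edge, wet_edge):
    """Normalized soil-moisture proxy from the LST-NDVI trapezoid:
    0 at the dry edge, 1 at the wet edge. dry_edge and wet_edge are
    (intercept, slope) pairs giving LST (K) as a linear function of NDVI;
    the linear edges and all values are illustrative assumptions."""
    lst_dry = dry_edge[0] + dry_edge[1] * ndvi
    lst_wet = wet_edge[0] + wet_edge[1] * ndvi
    w = (lst_dry - lst) / (lst_dry - lst_wet)
    return min(max(w, 0.0), 1.0)              # clamp to the trapezoid
```

Applied per MODIS pixel, such a proxy supplies the fine-scale spatial pattern used to redistribute the coarse SMAP retrieval.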

  11. Research on the Effects of Alcohol and Sexual Arousal on Sexual Risk in Men who have Sex with Men: Implications for HIV Prevention Interventions

    PubMed Central

    Simons, Jeffrey S.

    2017-01-01

    The purpose of this paper was to describe and appraise the research evidence on the effects of acute alcohol intoxication and sexual arousal on sexual risk behaviors in men who have sex with men (MSM) and to examine its implications for design of HIV prevention interventions that target MSM. Toward that end, the paper begins with a discussion of research on sexual arousal in men and alcohol and their acute effects on sexual behaviors. This is followed by a review of empirical evidence on the combined acute effects of alcohol and sexual arousal in heterosexual men (the large majority of studies) and then in MSM. The empirical evidence and related theoretical developments then are integrated to derive implications for developing effective HIV prevention interventions that target MSM. PMID:26459332

  12. Definitions and Conceptual Dimensions of Responsible Research and Innovation: A Literature Review.

    PubMed

    Burget, Mirjam; Bardone, Emanuele; Pedaste, Margus

    2017-02-01

    The aim of this study is to provide a discussion on the definitions and conceptual dimensions of Responsible Research and Innovation (RRI) based on findings from the literature. The study presents the outcomes of a literature review of 235 RRI-related articles, selected from the EBSCO and Google Scholar databases with regard to the definitions and dimensions of RRI. The results indicated that while administrative definitions were widely quoted in the reviewed literature, they were not substantially elaborated further. Academic definitions were mostly derived from the institutional definitions; however, more empirical studies should be conducted in order to give a broader empirical basis to the development of the concept. The current study brings out four distinct conceptual dimensions of RRI that appeared in the reviewed literature: inclusion, anticipation, responsiveness and reflexivity. Two emerging conceptual dimensions were also added: sustainability and care.

  13. Estimating individual influences of behavioral intentions: an application of random-effects modeling to the theory of reasoned action.

    PubMed

    Hedeker, D; Flay, B R; Petraitis, J

    1996-02-01

    Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
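The empirical Bayes idea described — shrinking an individual's estimate toward the population value in proportion to its reliability — can be sketched for the simplest case of a repeatedly measured mean. The variance components are assumed known here, whereas the article estimates them from the full sample.

```python
def eb_estimate(y_i, mu, tau2, sigma2):
    """Empirical Bayes (shrinkage) estimate of one individual's mean from
    repeated measures y_i: the individual sample mean is pulled toward the
    population mean mu. tau2 = between-person variance, sigma2 =
    within-person variance (both assumed known here for simplicity)."""
    n = len(y_i)
    ybar = sum(y_i) / n
    weight = tau2 / (tau2 + sigma2 / n)   # reliability: more data -> less shrinkage
    return weight * ybar + (1.0 - weight) * mu
```

With four observations all equal to 2.0, population mean 0, and unit variances, the weight is 0.8 and the estimate 1.6: the individual's data dominate, but are still pulled 20% of the way back toward the population mean.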

  14. Geometric Mechanics for Continuous Swimmers on Granular Material

    NASA Astrophysics Data System (ADS)

    Dai, Jin; Faraji, Hossein; Schiebel, Perrin; Gong, Chaohui; Travers, Matthew; Hatton, Ross; Goldman, Daniel; Choset, Howie; Biorobotics Lab Collaboration; Laboratory for Robotics and Applied Mechanics (LRAM) Collaboration; Complex Rheology and Biomechanics Lab Collaboration

    Animal experiments have shown that Chionactis occipitalis (N = 10) undulating effectively on granular substrates exhibits a particular set of waveforms that can be approximated by a sinusoidal variation in curvature, i.e., a serpenoid wave. Furthermore, all snakes tested used a narrow subset of the available waveform parameters, measured as a relative curvature of 5.0 ± 0.3 and a number of waves on the body of 1.8 ± 0.1. We hypothesize that the serpenoid wave with this particular choice of parameters offers a distinct benefit for locomotion on granular material. To test this hypothesis, we used a physical model (snake robot) to empirically explore the space of serpenoid motions, which is linearly spanned by two independent continuous serpenoid basis functions. The empirically derived height function map, a geometric mechanics tool for analyzing movements of cyclic gaits, showed that displacement per gait cycle increases with amplitude at small amplitudes, but reaches a peak value of 0.55 body lengths at a relative curvature of 6.0. This work shows that, with shape basis functions, geometric mechanics tools can be extended to continuous swimmers.
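A serpenoid curve of the kind used above can be generated by letting the tangent angle vary sinusoidally with arclength (Hirose's serpenoid). The amplitude parameter below is a free illustrative choice; the mapping to the paper's "relative curvature" is not reproduced here.

```python
import math

def serpenoid_shape(n_points=200, theta_max=1.0, n_waves=1.8, phase=0.0):
    """Sample (x, y) points along a unit-length serpenoid curve, whose
    tangent angle varies sinusoidally with arclength. theta_max is a free
    illustrative amplitude; n_waves matches the ~1.8 waves reported for
    the snakes."""
    ds = 1.0 / n_points
    xs, ys = [0.0], [0.0]
    for i in range(n_points):
        s = (i + 0.5) * ds                     # arclength at segment midpoint
        theta = theta_max * math.sin(2.0 * math.pi * n_waves * s + phase)
        xs.append(xs[-1] + math.cos(theta) * ds)
        ys.append(ys[-1] + math.sin(theta) * ds)
    return xs, ys
```

Sweeping the phase through a full cycle produces the gait; varying theta_max and phase spans the two-dimensional shape space explored with the robot.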

  15. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for estimating it. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. Comparison of these predicted amounts with the actual instrumentation data was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
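The empirical (Peck) method referenced above models the transverse settlement trough as a Gaussian whose area equals the ground volume lost per metre of tunnel. A minimal sketch, with parameter values that are illustrative rather than those of the Qom case study:

```python
import math

def peck_settlement(x, v_loss, diameter, i_width):
    """Surface settlement (m) at transverse offset x (m) from the tunnel
    axis, using Peck's Gaussian trough. v_loss: volume loss fraction;
    diameter: tunnel diameter (m); i_width: trough width parameter i (m)."""
    v_s = v_loss * math.pi * diameter ** 2 / 4.0      # lost volume per metre
    s_max = v_s / (math.sqrt(2.0 * math.pi) * i_width)
    return s_max * math.exp(-x ** 2 / (2.0 * i_width ** 2))
```

For 1% volume loss on a 6-m tunnel with i = 10 m, the centreline settlement comes out near 1.1 cm, the same order as the values quoted in the abstract.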

  16. Empirically-Derived, Person-Oriented Patterns of School Readiness in Typically-Developing Children: Description and Prediction to First-Grade Achievement

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Pianta, Robert C.

    2005-01-01

    School readiness assessment is a prominent feature of early childhood education. Because the construct of readiness is multifaceted, we examined children's patterns on multiple indicators previously found to be both theoretically and empirically linked to school readiness: social skill, interactions with parents, problem behavior, and performance…

  17. GPP in Loblolly Pine: A Monthly Comparison of Empirical and Process Models

    Treesearch

    Christopher Gough; John Seiler; Kurt Johnsen; David Arthur Sampson

    2002-01-01

    Monthly and yearly gross primary productivity (GPP) estimates derived from an empirical and two process based models (3PG and BIOMASS) were compared. Spatial and temporal variation in foliar gas photosynthesis was examined and used to develop GPP prediction models for fertilized nine-year-old loblolly pine (Pinus taeda) stands located in the North...

  18. Modeling of Pickup Ion Distributions in the Halley Cometo-Sheath: Empirical Rates of Ionization, Diffusion, Loss and Creation of Fast Neutral Atoms

    NASA Technical Reports Server (NTRS)

    Huddleston, D.; Neugebauer, M.; Goldstein, B.

    1994-01-01

    The shape of the velocity distribution of water-group ions observed by the Giotto ion mass spectrometer on its approach to comet Halley is modeled to derive empirical values for the rates of ionization, energy diffusion, and loss in the mid-cometosheath.

  19. Community Participation of People with an Intellectual Disability: A Review of Empirical Findings

    ERIC Educational Resources Information Center

    Verdonschot, M. M. L.; de Witte, L. P.; Reichrath, E.; Buntinx, W. H. E.; Curfs, L. M. G.

    2009-01-01

    Study design: A systematic review of the literature. Objectives: To investigate community participation of persons with an intellectual disability (ID) as reported in empirical research studies. Method: A systematic literature search was conducted for the period of 1996-2006 on PubMed, CINAHL and PSYCINFO. Search terms were derived from the…

  20. An empirical InSAR-optical fusion approach to mapping vegetation canopy height

    Treesearch

    Wayne S. Walker; Josef M. Kellndorfer; Elizabeth LaPoint; Michael Hoppus; James Westfall

    2007-01-01

    Exploiting synergies afforded by a host of recently available national-scale data sets derived from interferometric synthetic aperture radar (InSAR) and passive optical remote sensing, this paper describes the development of a novel empirical approach for the provision of regional- to continental-scale estimates of vegetation canopy height. Supported by data from the...

  1. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates.

    PubMed

    Schwindt, Adam R; Winkelman, Dana L

    2016-09-01

    Urban freshwater streams in arid climates are wastewater-effluent-dominated ecosystems particularly impacted by bioactive chemicals, including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L-1 and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males, despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and to female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
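A stochastic stage-structured model of the kind described estimates λS by drawing vital rates at each time step and averaging the log of population growth. The two-stage structure and all vital-rate distributions below are illustrative assumptions, not the study's fathead minnow estimates.

```python
import numpy as np

def stochastic_growth_rate(t_max=5000, seed=1):
    """Estimate lambda_S for a two-stage (juvenile/adult) model by drawing
    vital rates each time step and averaging log population growth. All
    vital-rate distributions are illustrative, not the study's estimates."""
    rng = np.random.default_rng(seed)
    n = np.array([0.9, 0.1])                  # stage distribution (sums to 1)
    log_growth = 0.0
    for _ in range(t_max):
        fec = max(rng.normal(4.0, 0.5), 0.0)  # recruits per adult
        s_j = rng.uniform(0.05, 0.15)         # juvenile survival/maturation
        s_a = rng.uniform(0.3, 0.5)           # adult survival
        A = np.array([[0.0, fec],
                      [s_j, s_a]])
        n = A @ n
        step = n.sum()
        log_growth += np.log(step)
        n /= step                             # renormalize to avoid overflow
    return float(np.exp(log_growth / t_max))
```

With these placeholder rates λS comes out below 1 (a declining population), mirroring the qualitative pattern the abstract reports at the lowest EE2 concentration.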

  2. “Direct” Gas-phase Metallicity in Local Analogs of High-redshift Galaxies: Empirical Metallicity Calibrations for High-redshift Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Bian, Fuyan; Kewley, Lisa J.; Dopita, Michael A.

    2018-06-01

    We study the direct gas-phase oxygen abundance using the well-detected auroral line [O III]λ4363 in the stacked spectra of a sample of local analogs of high-redshift galaxies. These local analogs share the same location as z ∼ 2 star-forming galaxies on the [O III]λ5007/Hβ versus [N II]λ6584/Hα Baldwin–Phillips–Terlevich diagram. This type of analog has the same ionized interstellar medium (ISM) properties as high-redshift galaxies. We establish empirical metallicity calibrations between the direct gas-phase oxygen abundances (7.8< 12+{log}({{O}}/{{H}})< 8.4) and the N2 (log([N II]λ6584/Hα))/O3N2 (log(([O III]λ5007/Hβ)/([N II]λ6584/Hα))) indices in our local analogs. We find significant systematic offsets between the metallicity calibrations for our local analogs of high-redshift galaxies and those derived from the local H II regions and a sample of local reference galaxies selected from the Sloan Digital Sky Survey (SDSS). The N2 and O3N2 metallicities will be underestimated by 0.05–0.1 dex relative to our calibration, if one simply applies the local metallicity calibration in previous studies to high-redshift galaxies. Local metallicity calibrations also cause discrepancies of metallicity measurements in high-redshift galaxies using the N2 and O3N2 indicators. In contrast, our new calibrations produce consistent metallicities between these two indicators. We also derive metallicity calibrations for R23 (log(([O III]λλ4959,5007+[O II]λλ3726,3729)/Hβ)), O32(log([O III]λλ4959,5007/[O II]λλ3726,3729)), {log}([O III]λ5007/Hβ), and log([Ne III]λ3869/[O II]λ3727) indices in our local analogs, which show significant offset compared to those in the SDSS reference galaxies. By comparing with MAPPINGS photoionization models, the different empirical metallicity calibration relations in the local analogs and the SDSS reference galaxies can be shown to be primarily due to the change of ionized ISM conditions. 
Assuming that temperature structure variations are minimal and ISM conditions do not change dramatically from z ∼ 2 to z ∼ 5, these empirical calibrations can be used to measure relative metallicities in galaxies with redshifts up to z ∼ 5.0 in ground-based observations.
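The N2 and O3N2 indices quoted in this abstract are simple logarithms of emission-line flux ratios and can be computed directly from measured fluxes. A minimal sketch (function names and example fluxes are illustrative; converting these indices to 12 + log(O/H) requires the calibration coefficients from the paper itself, which are not reproduced here):

```python
import numpy as np

def n2_index(f_nii_6584, f_halpha):
    """N2 = log10([N II]λ6584 / Hα), as defined in the abstract."""
    return np.log10(f_nii_6584 / f_halpha)

def o3n2_index(f_oiii_5007, f_hbeta, f_nii_6584, f_halpha):
    """O3N2 = log10( ([O III]λ5007/Hβ) / ([N II]λ6584/Hα) )."""
    return np.log10((f_oiii_5007 / f_hbeta) / (f_nii_6584 / f_halpha))
```

For example, a spectrum with [N II]λ6584/Hα = 0.1 has N2 = -1.0; a calibration then maps that index value to an oxygen abundance.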

  3. Permeability Estimation Directly From Logging-While-Drilling Induced Polarization Data

    NASA Astrophysics Data System (ADS)

    Fiandaca, G.; Maurya, P. K.; Balbarini, N.; Hördt, A.; Christiansen, A. V.; Foged, N.; Bjerg, P. L.; Auken, E.

    2018-04-01

    In this study, we present the prediction of permeability from time domain spectral induced polarization (IP) data, measured in boreholes on undisturbed formations using the El-log logging-while-drilling technique. We collected El-log data and hydraulic properties on unconsolidated Quaternary and Miocene deposits in boreholes at three locations at a field site in Denmark, characterized by different electrical water conductivity and chemistry. The high vertical resolution of the El-log technique matches the lithological variability at the site, minimizing ambiguity in the interpretation originating from resolution issues. The permeability values were computed from IP data using a laboratory-derived empirical relationship presented in a recent study for saturated unconsolidated sediments, without any further calibration. A very good correlation, within 1 order of magnitude, was found between the IP-derived permeability estimates and those derived using grain size analyses and slug tests, with similar depth trends and permeability contrasts. Furthermore, the effect of water conductivity on the IP-derived permeability estimations was found negligible in comparison to the permeability uncertainties estimated from the inversion and the laboratory-derived empirical relationship.

  4. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.

  5. Modeling conflict and error in the medial frontal cortex.

    PubMed

    Mayer, Andrew R; Teshiba, Terri M; Franco, Alexandre R; Ling, Josef; Shane, Matthew S; Stephen, Julia M; Jung, Rex E

    2012-12-01

    Despite intensive study, the role of the dorsal medial frontal cortex (dMFC) in error monitoring and conflict processing remains actively debated. The current experiment manipulated conflict type (stimulus conflict only or stimulus and response selection conflict) and utilized a novel modeling approach to isolate error and conflict variance during a multimodal numeric Stroop task. Specifically, hemodynamic response functions resulting from two statistical models that either included or isolated variance arising from relatively few error trials were directly contrasted. Twenty-four participants completed the task while undergoing event-related functional magnetic resonance imaging on a 1.5-Tesla scanner. Response times monotonically increased based on the presence of pure stimulus or stimulus and response selection conflict. Functional results indicated that dMFC activity was present during trials requiring response selection and inhibition of competing motor responses, but absent during trials involving pure stimulus conflict. A comparison of the different statistical models suggested that relatively few error trials contributed to a disproportionate amount of variance (i.e., activity) throughout the dMFC, but particularly within the rostral anterior cingulate gyrus (rACC). Finally, functional connectivity analyses indicated that an empirically derived seed in the dorsal ACC/pre-SMA exhibited strong connectivity (i.e., positive correlation) with prefrontal and inferior parietal cortex but was anti-correlated with the default-mode network. An empirically derived seed from the rACC exhibited the opposite pattern, suggesting that sub-regions of the dMFC exhibit different connectivity patterns with other large scale networks implicated in internal mentations such as daydreaming (default-mode) versus the execution of top-down attentional control (fronto-parietal). Copyright © 2011 Wiley Periodicals, Inc.

  6. 17 CFR 240.15c3-1f - Optional market and credit risk requirements for OTC derivatives dealers (Appendix F to 17 CFR...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... charges. An OTC derivatives dealer shall provide a description of all statistical models used for pricing... controls over those models, and a statement regarding whether the firm has developed its own internal VAR models. If the OTC derivatives dealer's VAR model incorporates empirical correlations across risk...

  7. Second-order optical effects in several pyrazolo-quinoline derivatives

    NASA Astrophysics Data System (ADS)

    Makowska-Janusik, M.; Gondek, E.; Kityk, I. V.; Wisła, J.; Sanetra, J.; Danel, A.

    2004-11-01

Using optical poling of several pyrazolo-quinoline (PAQ) derivatives, we have found a sufficiently high second-order optical susceptibility at a wavelength of 1.76 μm, varying in the range 0.9-2.8 pm/V. Quantum chemical simulations of the UV absorption for molecules that are isolated, solvated, and incorporated into polymethacrylate (PMMA) polymer films have shown that the PM3 method is the best among the semi-empirical methods for simulating the optical properties. The calculated hyperpolarizabilities show a good correlation with the experimentally measured susceptibilities obtained from optical poling. We have found that the experimental susceptibility depends on the linear molecular polarizability and on photoinduced changes of the molecular dipole moment. This is clearly seen for the PAQ4-PAQ6 molecules, which possess halogen atoms with relatively large polarizabilities.

  8. UV-vis, IR and 1H NMR spectroscopic studies of some mono- and bis-azo-compounds based on 2,7-dihydroxynaphthalene and aniline derivatives

    NASA Astrophysics Data System (ADS)

    Issa, Raafat M.; Fayed, Tarek A.; Awad, Mohammed K.; El-Kony, Sanaa M.

    2005-12-01

    The absorption spectra of mono- and bis-azo-derivatives obtained by coupling the diazonium salts of aromatic amines and 2,7-dihydroxynaphthalene have been studied in six organic solvents. The different absorption bands have been assigned and the effect of solvents on the charge transfer band is also discussed. The diagnostic IR spectral bands and 1H NMR signals are assigned and discussed in relation to molecular structure. Also, semi-empirical molecular orbital calculations using the atom superposition and electron delocalization molecular orbital (ASED-MO) theory have been performed to investigate the molecular and electronic structures of these compounds. According to these calculations, an intramolecular hydrogen bonding is essential for stabilization of such molecules.

  9. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.

  10. Quantitative reconstruction of cross-sectional dimensions and hydrological parameters of gravelly fluvial channels developed in a forearc basin setting under a temperate climatic condition, central Japan

    NASA Astrophysics Data System (ADS)

    Shibata, Kenichiro; Adhiperdana, Billy G.; Ito, Makoto

    2018-01-01

    Reconstructions of the dimensions and hydrological features of ancient fluvial channels, such as bankfull depth, bankfull width, and water discharges, have used empirical equations developed from compiled data-sets, mainly from modern meandering rivers, in various tectonic and climatic settings. However, the application of the proposed empirical equations to an ancient fluvial succession should be carefully examined with respect to the tectonic and climatic settings of the objective deposits. In this study, we developed empirical relationships among the mean bankfull channel depth, bankfull channel depth, drainage area, bankfull channel width, mean discharge, and bankfull discharge using data from 24 observation sites of modern gravelly rivers in the Kanto region, central Japan. Some of the equations among these parameters are different from those proposed by previous studies. The discrepancies are considered to reflect tectonic and climatic settings of the present river systems, which are characterized by relatively steeper valley slope, active supply of volcaniclastic sediments, and seasonal precipitation in the Kanto region. The empirical relationships derived from the present study can be applied to modern and ancient gravelly fluvial channels with multiple and alternate bars, developed in convergent margin settings under a temperate climatic condition. The developed empirical equations were applied to a transgressive gravelly fluvial succession of the Paleogene Iwaki Formation, Northeast Japan as a case study. Stratigraphic thicknesses of bar deposits were used for estimation of the bankfull channel depth. In addition, some other geomorphological and hydrological parameters were calculated using the empirical equations developed by the present study. 
The results indicate that the Iwaki Formation fluvial deposits were formed by a fluvial system that was represented by the dimensions and discharges of channels similar to those of the middle to lower reaches of the modern Kuji River, northern Kanto region. In addition, no distinct temporal changes in paleochannel dimensions and discharges were observed in an overall transgressive Iwaki Formation fluvial system. This implies that a rise in relative sea level did not affect the paleochannel dimensions within a sequence stratigraphic framework.

  11. Development and validation of empirical indices to assess the insulinemic potential of diet and lifestyle

    PubMed Central

    Tabung, Fred K.; Wang, Weike; Fung, Teresa T.; Hu, Frank B.; Smith-Warner, Stephanie A.; Chavarro, Jorge E.; Fuchs, Charles S.; Willett, Walter C.; Giovannucci, Edward L.

    2017-01-01

    The glycemic and insulin indices assess postprandial glycemic and insulin response to foods respectively, which may not reflect the long-term effects of diet on insulin response. We developed and evaluated the validity of four empirical indices to assess the insulinemic potential of usual diets and lifestyles, using dietary, lifestyle and biomarker data from the Nurses’ Health Study (NHS, n=5,812 for hyperinsulinemia, n=3,929 for insulin resistance). The four indices were: the empirical dietary index for hyperinsulinemia (EDIH) and empirical lifestyle index for hyperinsulinemia (ELIH); empirical dietary index for insulin resistance (EDIR) and empirical lifestyle index for insulin resistance (ELIR). We entered 39 food frequency questionnaire-derived food groups in stepwise linear regression models and defined indices as the patterns most predictive of fasting plasma C-peptide, for the hyperinsulinemia pathway (EDIH and ELIH); and of the triglyceride/high density lipoprotein-cholesterol (TG/HDL) ratio, for the insulin resistance pathway (EDIR and ELIR). We evaluated the validity of indices in two independent samples from NHS-II and Health Professionals Follow-up Study (HPFS) using multivariable-adjusted linear regression analyses to calculate relative concentrations of biomarkers. EDIH is comprised of 18 food groups; 13 were positively associated with C-peptide, five inversely. EDIR is comprised of 18 food groups; ten were positively associated with TG/HDL and eight inversely. Lifestyle indices had fewer dietary components, and included BMI and physical activity as components. In the validation samples, all indices significantly predicted biomarker concentrations, e.g., the relative concentrations (95%CI) of the corresponding biomarkers comparing extreme index quintiles in HPFS were: EDIH, 1.29(1.22, 1.37); ELIH, 1.78(1.68, 1.88); EDIR, 1.44(1.34, 1.55); ELIR, 2.03(1.89, 2.19); all P-trend<0.0001. 
The robust associations of these novel hypothesis-driven indices with insulin response biomarker concentrations suggest their usefulness in assessing the ability of whole diets and lifestyles to stimulate and/or sustain insulin secretion. PMID:27821188
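The stepwise linear regression used to build these indices can be sketched as a greedy forward selection that repeatedly adds the predictor (here, a food group) giving the largest drop in residual sum of squares. This is a generic illustration, not the NHS modeling code; the actual study's stopping rule and significance criteria are omitted:

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: repeatedly add the column of X that
    most reduces the residual sum of squares of an OLS fit."""
    n, p = X.shape
    selected = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected
```

Applied to 39 food-group columns with fasting C-peptide (or TG/HDL) as the response, the procedure would return the groups most predictive of the biomarker.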

  12. Validation of a new plasmapause model derived from CHAMP field-aligned current signatures

    NASA Astrophysics Data System (ADS)

    Heilig, Balázs; Darrouzet, Fabien; Vellante, Massimo; Lichtenberger, János; Lühr, Hermann

    2014-05-01

Recently, a new model for the plasmapause location in the equatorial plane was introduced based on magnetic field observations made by the CHAMP satellite in the topside ionosphere (Heilig and Lühr, 2013). The related signals are medium-scale field-aligned currents (MSFAC) with scale sizes of some 10 km. An empirical model for the MSFAC boundary was developed as a function of Kp and MLT. The MSFAC model was then compared to in situ plasmapause observations from IMAGE RPI. By accounting for the systematic displacement resulting from this comparison, and by taking into account the diurnal variation and Kp dependence of the residuals, an empirical model of the plasmapause location based on MSFAC measurements from CHAMP was constructed. As a first step toward validation of the new plasmapause model, we used in situ (Van Allen Probes/EMFISIS, Cluster/WHISPER) and ground-based (EMMA) plasma density observations. Preliminary results show good agreement in general between the model and the observations. Some observed differences stem from the different definitions of the plasmapause. A more detailed validation of the method can take place as soon as SWARM and VAP data become available. Heilig, B., and H. Lühr (2013) New plasmapause model derived from CHAMP field-aligned current signatures, Ann. Geophys., 31, 529-539, doi:10.5194/angeo-31-529-2013

  13. Empirical effective temperatures and bolometric corrections for early-type stars

    NASA Technical Reports Server (NTRS)

    Code, A. D.; Bless, R. C.; Davis, J.; Brown, R. H.

    1976-01-01

    An empirical effective temperature for a star can be found by measuring its apparent angular diameter and absolute flux distribution. The angular diameters of 32 bright stars in the spectral range O5f to F8 have recently been measured with the stellar interferometer at Narrabri Observatory, and their absolute flux distributions have been found by combining observations of ultraviolet flux from the Orbiting Astronomical Observatory (OAO-2) with ground-based photometry. In this paper, these data have been combined to derive empirical effective temperatures and bolometric corrections for these 32 stars.

  14. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that the rates of decline of economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  15. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

Summary Determining thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper, analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always feasible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387

  16. A theoretical and empirical review of the death-thought accessibility concept in terror management research.

    PubMed

    Hayes, Joseph; Schimel, Jeff; Arndt, Jamie; Faucher, Erik H

    2010-09-01

    Terror management theory (TMT) highlights the motivational impact of thoughts of death in various aspects of everyday life. Since its inception in 1986, research on TMT has undergone a slight but significant shift from an almost exclusive focus on the manipulation of thoughts of death to a marked increase in studies that measure the accessibility of death-related cognition. Indeed, the number of death-thought accessibility (DTA) studies in the published literature has grown substantially in recent years. In light of this increasing reliance on the DTA concept, the present article is meant to provide a comprehensive theoretical and empirical review of the literature employing this concept. After discussing the roots of DTA, the authors outline the theoretical refinements to TMT that have accompanied significant research findings associated with the DTA concept. Four distinct categories (mortality salience, death association, anxiety-buffer threat, and dispositional) are derived to organize the reviewed DTA studies, and the theoretical implications of each category are discussed. Finally, a number of lingering empirical and theoretical issues in the DTA literature are discussed with the aim of stimulating and focusing future research on DTA specifically and TMT in general.

  17. A robust empirical seasonal prediction of winter NAO and surface climate.

    PubMed

    Wang, L; Ting, M; Kushner, P J

    2017-03-21

    A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
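The MLR forecast described here amounts to an ordinary least-squares fit of the DJF-mean NAO index on a few autumn predictors. A hedged sketch with synthetic stand-ins for the sea-ice, stratospheric-circulation, and sea-surface-temperature predictors (all data below are simulated for illustration, not the study's observations or coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 36

# Hypothetical standardized autumn predictors (synthetic stand-ins):
sea_ice = rng.standard_normal(n_years)   # autumn sea-ice concentration index
strat = rng.standard_normal(n_years)     # lower-stratospheric circulation index
sst = rng.standard_normal(n_years)       # sea-surface temperature index

# Synthetic DJF NAO index with a known linear dependence plus noise
nao = 0.6 * sea_ice - 0.3 * strat + 0.2 * sst + 0.1 * rng.standard_normal(n_years)

# Multiple linear regression via least squares (intercept + 3 predictors)
X = np.column_stack([np.ones(n_years), sea_ice, strat, sst])
coef, *_ = np.linalg.lstsq(X, nao, rcond=None)
pred = X @ coef  # hindcast of the DJF NAO index
```

In practice the fitted coefficients would be estimated in a cross-validated hindcast over the observational record before issuing a seasonal outlook.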

  18. The relative effectiveness of empirical and physical models for simulating the dense undercurrent of pyroclastic flows under different emplacement conditions

    USGS Publications Warehouse

    Ogburn, Sarah E.; Calder, Eliza S

    2017-01-01

    High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. 
VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.

  19. Effects of Active Learning Classrooms on Student Learning: A Two-Year Empirical Investigation on Student Perceptions and Academic Performance

    ERIC Educational Resources Information Center

    Chiu, Pit Ho Patrio; Cheng, Shuk Han

    2017-01-01

    Recent studies on active learning classrooms (ACLs) have demonstrated their positive influence on student learning. However, most of the research evidence is derived from a few subject-specific courses or limited student enrolment. Empirical studies on this topic involving large student populations are rare. The present work involved a large-scale…

  20. An Empirical Method for Deriving Grade Equivalence for University Entrance Qualifications: An Application to A Levels and the International Baccalaureate

    ERIC Educational Resources Information Center

    Green, Francis; Vignoles, Anna

    2012-01-01

    We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…

  1. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
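The Monte Carlo branch of PTHA described above can be illustrated by simulating runup heights from a synthetic source model and converting the exceedance fraction into an annual probability under a Poissonian event rate. The rate and runup distribution below are arbitrary placeholders for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical annual source rate and a synthetic runup model (assumptions):
rate = 0.05  # tsunamigenic events per year at the site
runups = rng.lognormal(mean=0.5, sigma=0.8, size=n_sim)  # simulated runup, m

def annual_exceedance(threshold):
    """P(at least one event exceeds `threshold` in a year), assuming
    Poissonian event occurrence with the given annual rate."""
    p_exceed = np.mean(runups > threshold)
    return 1.0 - np.exp(-rate * p_exceed)
```

Evaluating `annual_exceedance` over a grid of thresholds traces out the hazard curve that a site-specific PTHA would compare against the empirical runup catalog.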

  2. Regressed relations for forced convection heat transfer in a direct injection stratified charge rotary engine

    NASA Technical Reports Server (NTRS)

    Lee, Chi M.; Schock, Harold J.

    1988-01-01

    Currently, the heat transfer equation used in the rotary combustion engine (RCE) simulation model is taken from piston engine studies. These relations have been empirically developed by the experimental input coming from piston engines whose geometry differs considerably from that of the RCE. The objective of this work was to derive equations to estimate heat transfer coefficients in the combustion chamber of an RCE. This was accomplished by making detailed temperature and pressure measurements in a direct injection stratified charge (DISC) RCE under a range of conditions. For each specific measurement point, the local gas velocity was assumed equal to the local rotor tip speed. Local physical properties of the fluids were then calculated. Two types of correlation equations were derived and are described in this paper. The first correlation expresses the Nusselt number as a function of the Prandtl number, Reynolds number, and characteristic temperature ratio; the second correlation expresses the forced convection heat transfer coefficient as a function of fluid temperature, pressure and velocity.
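A correlation of the first type described here expresses the Nusselt number as a power law in the Reynolds and Prandtl numbers and a characteristic temperature ratio, from which a heat transfer coefficient follows as h = Nu·k/L. A generic sketch (the default exponents below follow the familiar Dittus-Boelter form and are placeholders, not the coefficients fitted in this work):

```python
def nusselt(re, pr, t_ratio, a=0.023, b=0.8, c=0.4, d=0.0):
    """Generic power-law correlation Nu = a * Re^b * Pr^c * (T1/T2)^d.
    The constants a, b, c, d are placeholders to be fitted from data."""
    return a * re**b * pr**c * t_ratio**d

def htc(nu, k, l_char):
    """Forced-convection heat transfer coefficient h = Nu * k / L,
    with thermal conductivity k and characteristic length L."""
    return nu * k / l_char
```

In a study like this one, the constants would be regressed from the in-chamber temperature and pressure measurements rather than taken from pipe-flow defaults.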

  3. Synthesis and photophysical properties of halogenated derivatives of (dibenzoylmethanato)boron difluoride

    NASA Astrophysics Data System (ADS)

    Kononevich, Yuriy N.; Surin, Nikolay M.; Sazhnikov, Viacheslav A.; Svidchenko, Evgeniya A.; Aristarkhov, Vladimir M.; Safonov, Andrei A.; Bagaturyants, Alexander A.; Alfimov, Mikhail V.; Muzafarov, Aziz M.

    2017-03-01

    A series of (dibenzoylmethanato)boron difluoride (BF2DBM) derivatives with a halogen atom in one of the phenyl rings at the para-position were synthesized and used to elucidate the effects of changing the attached halogen atom on the photophysical properties of BF2DBM. The room-temperature absorption and fluorescence maxima of fluoro-, chloro-, bromo- and iodo-substituted derivatives of BF2DBM in THF are red-shifted by about 2-10 nm relative to the corresponding peaks of the parent BF2DBM. The fluorescence quantum yields of the halogenated BF2DBMs (except the iodinated derivative) are larger than that of the unsubstituted BF2DBM. All the synthesized compounds are able to form fluorescent exciplexes with benzene and toluene (emission maxima at λem = 433 and 445 nm, respectively). The conformational structure and electronic spectral properties of halogenated BF2DBMs have been modeled by DFT/TDDFT calculations at the PBE0/SVP level of theory. The structure and fluorescence spectra of exciplexes were calculated using the CIS method with empirical dispersion correction.

  4. The use of interest rate swaps by nonprofit organizations: evidence from nonprofit health care providers.

    PubMed

    Stewart, Louis J; Trussel, John

    2006-01-01

    Although the use of derivatives, particularly interest rate swaps, has grown explosively over the past decade, derivative financial instrument use by nonprofits has received only limited attention in the research literature. Because little is known about the risk management activities of nonprofits, the impact of these instruments on the ability of nonprofits to raise capital may have significant public policy implications. The primary motivation of this study is to determine the types of derivatives used by nonprofits and estimate the frequency of their use among these organizations. Our study also extends contemporary finance theory by an empirical examination of the motivation for interest rate swap usage among nonprofits. Our empirical data came from 193 large nonprofit health care providers that issued debt to the public between 2000 and 2003. We used a univariate analysis and a multivariate analysis relying on logistic regression models to test alternative explanations of interest rate swaps usage by nonprofits, finding that more than 45 percent of our sample, 88 organizations, used interest rate swaps with an aggregate notional value in excess of $8.3 billion. Our empirical tests indicate the primary motive for nonprofits to use interest rate derivatives is to hedge their exposure to interest rate risk. Although these derivatives are a useful risk management tool, under conditions of falling bond market interest rates these derivatives may also expose a nonprofit swap user to the risk of a material unscheduled termination payment. Finally, we found considerable diversity in the informativeness of footnote disclosure among sample organizations that used interest rate swaps. Many nonprofits did not disclose these risks in their financial statements. 
In conclusion, we find financial managers in large nonprofits commonly use derivative financial instruments as risk management tools, but the use of interest rate swaps by nonprofits may expose them to other risks that are not adequately disclosed in their financial statements.
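
    The termination-payment exposure described in this record can be illustrated with a simplified mark-to-market calculation for a pay-fixed, receive-floating swap. This is a minimal sketch assuming a flat yield curve, annual periods, and hypothetical rates and notional; it is not a valuation from the study.

```python
def pay_fixed_swap_value(notional, fixed_rate, flat_rate, years):
    """Mark-to-market of a pay-fixed/receive-floating interest rate swap
    under a flat yield curve: each remaining annual period is worth the
    rate differential, discounted at the current market rate."""
    value = 0.0
    for t in range(1, years + 1):
        df = 1.0 / (1.0 + flat_rate) ** t      # discount factor for year t
        value += (flat_rate - fixed_rate) * notional * df
    return value

# Market rates fall from the 5% contracted fixed leg to 3%:
# the swap becomes a liability, so early termination triggers a payment.
mtm = pay_fixed_swap_value(100e6, fixed_rate=0.05, flat_rate=0.03, years=10)
```

    When the market rate falls below the contracted fixed rate the value is negative, which is precisely the unscheduled termination-payment risk the abstract highlights.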

  5. Deriving local demand for stumpage from estimates of regional supply and demand.

    Treesearch

    Kent P. Connaughton; Gerard A. Majerus; David H. Jackson

    1989-01-01

    The local (Forest-level or local-area) demand for stumpage can be derived from estimates of regional supply and demand. The derivation of local demand is justified when the local timber economy is similar to the regional timber economy; a simple regression of local on nonlocal prices can be used as an empirical test of similarity between local and regional economies....
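
    The similarity test mentioned in this record is a simple regression of local on nonlocal stumpage prices. A minimal sketch with hypothetical price series (the data, and reading a slope near one as evidence of similar markets, are illustrative assumptions, not the study's figures):

```python
def ols_fit(x, y):
    """Ordinary least squares fit of y = a + b*x, returning (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical quarterly stumpage prices ($/mbf): regional vs. local
regional = [210.0, 225.0, 198.0, 240.0, 260.0]
local = [205.0, 221.0, 190.0, 238.0, 255.0]
intercept, slope = ols_fit(regional, local)  # slope near 1 suggests similar markets
```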

  6. Aperture-free star formation rate of SDSS star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Duarte Puertas, S.; Vilchez, J. M.; Iglesias-Páramo, J.; Kehrig, C.; Pérez-Montero, E.; Rosales-Ortega, F. F.

    2017-03-01

    Large area surveys with a high number of galaxies observed have undoubtedly marked a milestone in the understanding of several properties of galaxies, such as star-formation history, morphology, and metallicity. However, in many cases, these surveys provide fluxes from fixed small apertures (e.g. fibre), which cover a scant fraction of the galaxy, compelling us to use aperture corrections to study the global properties of galaxies. In this work, we derive the current total star formation rate (SFR) of Sloan Digital Sky Survey (SDSS) star-forming galaxies, using an empirically based aperture correction of the measured Hα flux for the first time, thus minimising the uncertainties associated with reduced apertures. All the Hα fluxes have been extinction-corrected using the Hα/Hβ ratio free from aperture effects. The total SFR for 210 000 SDSS star-forming galaxies has been derived applying pure empirical Hα and Hα/Hβ aperture corrections based on the Calar Alto Legacy Integral Field Area (CALIFA) survey. We find that, on average, the aperture-corrected SFR is 0.65 dex higher than the SDSS fibre-based SFR. The relation between the SFR and stellar mass for SDSS star-forming galaxies (SFR-M⋆) has been obtained, together with its dependence on extinction and Hα equivalent width. We compare our results with those obtained in previous works and examine the behaviour of the derived SFR in six redshift bins, over the redshift range 0.005 ≤ z ≤ 0.22. The SFR-M⋆ sequence derived here is in agreement with selected observational studies based on integral field spectroscopy of individual galaxies as well as with the predictions of recent theoretical models of disc galaxies.
A table of the aperture-corrected fluxes and SFR for 210 000 SDSS star-forming galaxies and related relevant data are available only at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A71
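
    The 0.65 dex mean offset reported above corresponds to a multiplicative factor of 10^0.65 (about 4.5). A minimal sketch of applying such an offset (the fibre SFR value below is hypothetical):

```python
def aperture_corrected_sfr(fiber_sfr, dex_offset=0.65):
    """Scale a fibre-based SFR by an offset expressed in dex
    (1 dex = one factor of 10)."""
    return fiber_sfr * 10.0 ** dex_offset

sfr_total = aperture_corrected_sfr(1.2)  # Msun/yr for a hypothetical fibre SFR
```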

  7. An analytical model of iceberg drift

    NASA Astrophysics Data System (ADS)

    Eisenman, I.; Wagner, T. J. W.; Dell, R.

    2017-12-01

    Icebergs transport freshwater from glaciers and ice shelves, releasing the freshwater into the upper ocean thousands of kilometers from the source. This influences ocean circulation through its effect on seawater density. A standard empirical rule-of-thumb for estimating iceberg trajectories is that they drift at the ocean surface current velocity plus 2% of the atmospheric surface wind velocity. This relationship has been observed in empirical studies for decades, but it has never previously been physically derived or justified. In this presentation, we consider the momentum balance for an individual iceberg, which includes nonlinear drag terms. Applying a series of approximations, we derive an analytical solution for the iceberg velocity as a function of time. In order to validate the model, we force it with surface velocity and temperature data from an observational state estimate and compare the results with iceberg observations in both hemispheres. We show that the analytical solution reduces to the empirical 2% relationship in the asymptotic limit of small icebergs (or strong winds), which approximately applies for typical Arctic icebergs. We find that the 2% value arises due to a term involving the drag coefficients for water and air and the densities of the iceberg, ocean, and air. In the opposite limit of large icebergs (or weak winds), which approximately applies for typical Antarctic icebergs with horizontal length scales greater than about 12 km, we find that the 2% relationship is not applicable and that icebergs instead move with the ocean current, unaffected by the wind. The two asymptotic regimes can be understood by considering how iceberg size influences the relative importance of the wind and ocean current drag terms compared with the Coriolis and pressure gradient force terms in the iceberg momentum balance.
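
    The empirical rule of thumb discussed in this record is simple enough to state in code. A minimal sketch of the small-iceberg limit (the forcing values below are hypothetical):

```python
def iceberg_velocity(u_ocean, v_ocean, u_wind, v_wind, wind_factor=0.02):
    """Empirical rule of thumb: iceberg drift = ocean surface current
    plus ~2% of the surface wind. This is the small-iceberg (or strong
    wind) asymptotic limit of the momentum-balance model; large icebergs
    instead move with the ocean current (wind_factor -> 0)."""
    return (u_ocean + wind_factor * u_wind,
            v_ocean + wind_factor * v_wind)

# Hypothetical forcing: 0.2 m/s eastward current, 10 m/s eastward wind
u, v = iceberg_velocity(0.2, 0.0, 10.0, 0.0)  # u = 0.4 m/s eastward
```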

  8. Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures

    NASA Astrophysics Data System (ADS)

    Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav

    2017-07-01

    The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
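
    A flood loss (depth-damage) function maps water depth to a fractional building loss. The saturating curve below is a generic illustrative form with hypothetical parameters; it is not the calibrated FLF-IT function or its confidence limits:

```python
import math

def loss_ratio(depth_m, max_ratio=0.6, k=0.8):
    """Illustrative depth-damage curve (hypothetical parameters):
    loss ratio rises with water depth and saturates at max_ratio."""
    if depth_m <= 0:
        return 0.0
    return max_ratio * (1.0 - math.exp(-k * depth_m))

def absolute_damage(depth_m, building_value):
    """Absolute damage = loss ratio times replacement value."""
    return loss_ratio(depth_m) * building_value
```

    Calibration against empirical damage records, as done for FLF-IT, would fit the curve parameters and quantify the uncertainty band around them.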

  9. Antiferromagnetic coupling between rare earth ions and semiquinones in a series of 1:1 complexes.

    PubMed

    Caneschi, Andrea; Dei, Andrea; Gatteschi, Dante; Poussereau, Sandrine; Sorace, Lorenzo

    2004-04-07

    We use the strategy of diamagnetic substitution for obtaining information on the crystal field effects in paramagnetic rare earth ions using the homologous series of compounds with the diamagnetic tropolonato ligand, Ln(Trp)(HBPz(3))(2), and the paramagnetic semiquinone ligand, Ln(DTBSQ)(HBPz(3))(2), (DTBSQ = 3,5-di-tert-butylsemiquinonato, Trp = tropolonate, HBPz(3)= hydrotrispyrazolylborate) for Ln = Sm(iii), Eu(iii), Gd(iii), Tb(iii), Dy(iii), Ho(iii), Er(iii) or Yb(iii). The X-ray crystal structure of a new form of tropolonate derivative is presented, which shows, as expected, a marked similarity with the structure of the semiquinonate derivative. The Ln(Trp)(HBPz(3))(2) derivatives were then used as a reference for the qualitative determination of crystal field effects in the exchange coupled semiquinone derivatives. Through magnetisation and susceptibility measurements this empirical diamagnetic substitution method evidenced for Er(iii), Tb(iii), Dy(iii) and Yb(iii) derivatives a dominating antiferromagnetic coupling. The increased antiferromagnetic contribution compared to other radical-rare earth metal complexes formed by nitronyl nitroxide ligands may be related to the increased donor strength of the semiquinone ligand.

  10. Irrigation water demand: A meta-analysis of price elasticities

    NASA Astrophysics Data System (ADS)

    Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.

    2006-01-01

    Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
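
    A price elasticity of demand relates the percent change in water use to the percent change in price; the midpoint (arc) formula is a common estimator. The quantities below are hypothetical, chosen only to illustrate an inelastic response:

```python
def arc_price_elasticity(q0, q1, p0, p1):
    """Midpoint-formula (arc) price elasticity of demand:
    percent change in quantity over percent change in price."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2.0)
    pct_p = (p1 - p0) / ((p0 + p1) / 2.0)
    return pct_q / pct_p

# Hypothetical: price rises from $10 to $11 per acre-foot,
# seasonal water use falls from 100 to 95.2 units
e = arc_price_elasticity(100.0, 95.2, 10.0, 11.0)  # negative and inelastic (|e| < 1)
```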

  11. Manipulating the Gradient

    ERIC Educational Resources Information Center

    Gaze, Eric C.

    2005-01-01

    We introduce a cooperative learning, group lab for a Calculus III course to facilitate comprehension of the gradient vector and directional derivative concepts. The lab is a hands-on experience allowing students to manipulate a tangent plane and empirically measure the effect of partial derivatives on the direction of optimal ascent. (Contains 7…

  12. Learners with Dyslexia: Exploring Their Experiences with Different Online Reading Affordances

    ERIC Educational Resources Information Center

    Chen, Chwen Jen; Keong, Melissa Wei Yin; Teh, Chee Siong; Chuah, Kee Man

    2015-01-01

    To date, empirically derived guidelines for designing accessible online learning environments for learners with dyslexia are still scarce. This study aims to explore the learning experience of learners with dyslexia when reading passages using different online reading affordances to derive some guidelines for dyslexia-friendly online text. The…

  13. Development and system identification of a light unmanned aircraft for flying qualities research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, M.E.; Andrisani, D. II

    This paper describes the design, construction, flight testing and system identification of a light weight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators. These parameters were then estimated from flight data using a maximum likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results. However, the control derivatives determined by the empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of light weight remotely piloted aircraft compared to full size manned aircraft. It was shown that light weight remotely piloted aircraft require much faster short period dynamics to achieve level I flying qualities in an up-and-away flight task.
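
    Under Gaussian output noise, maximum likelihood identification of a linear model reduces to least squares. A minimal sketch for a hypothetical one-state discrete pitch-rate model q[k+1] = a*q[k] + b*delta[k] (the model form, parameter values, and input sequence are illustrative, not the paper's short-period model):

```python
def identify_ab(q, delta):
    """Least-squares estimate of (a, b) in q[k+1] = a*q[k] + b*delta[k],
    which is the ML estimate under Gaussian output noise. Solves the
    2x2 normal equations directly via Cramer's rule."""
    sqq = sdd = sqd = sqy = sdy = 0.0
    for k in range(len(q) - 1):
        sqq += q[k] * q[k]
        sdd += delta[k] * delta[k]
        sqd += q[k] * delta[k]
        sqy += q[k] * q[k + 1]
        sdy += delta[k] * q[k + 1]
    det = sqq * sdd - sqd * sqd
    a = (sqy * sdd - sdy * sqd) / det
    b = (sqq * sdy - sqd * sqy) / det
    return a, b

# Simulate noiseless data from known a=0.9, b=0.5, then re-identify
delta = [1.0, -1.0, 0.5, 0.0, 1.0, -0.5]   # hypothetical elevator inputs
q = [1.0]
for d in delta:
    q.append(0.9 * q[-1] + 0.5 * d)
a_hat, b_hat = identify_ab(q, delta)       # recovers 0.9 and 0.5 exactly
```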

  14. Estimating the Cross-Shelf Export of Riverine Materials: Part 1. General Relationships From an Idealized Numerical Model

    NASA Astrophysics Data System (ADS)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.
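
    The SP number compares a latitude-dependent plume width scale to the local shelf width. The sketch below is only an illustrative ratio in that spirit, taking the plume width as a multiple of the internal Rossby radius; the exact SP definition in Sharples et al. (2017) differs, and g_prime, h, shelf_width, and the width multiplier are all assumed values:

```python
import math

def coriolis(lat_deg, omega=7.2921e-5):
    """Coriolis parameter f = 2*omega*sin(latitude)."""
    return 2.0 * omega * math.sin(math.radians(lat_deg))

def plume_to_shelf_ratio(lat_deg, g_prime=0.1, h=10.0,
                         shelf_width=50e3, scale=4.0):
    """Illustrative plume-size-to-shelf-width ratio: plume width taken
    as `scale` internal Rossby radii, sqrt(g'h)/f (assumed values)."""
    rossby = math.sqrt(g_prime * h) / coriolis(lat_deg)
    return scale * rossby / shelf_width

# Low-latitude plumes are wider (small f), favouring cross-shelf export
r_low, r_high = plume_to_shelf_ratio(10.0), plume_to_shelf_ratio(60.0)
```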

  15. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon, J. M.; Weiss, C.

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The spectral functions on the two-pion cut at $t > 4 M_\pi^2$ are constructed using the elastic unitarity relation and an $N/D$ representation. $\chi$EFT is used to calculate the real functions $J_\pm^1(t) = f_\pm^1(t)/F_\pi(t)$ (ratios of the complex $\pi\pi \rightarrow N \bar N$ partial-wave amplitudes and the timelike pion FF), which are free of $\pi\pi$ rescattering. Rescattering effects are included through the empirical timelike pion FF $|F_\pi(t)|^2$. The method allows us to compute the isovector EM spectral functions up to $t \sim 1$ GeV$^2$ with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at $t = 0$ (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-$Q^2$ FF data is achieved up to $\sim$0.5 GeV$^2$ for $G_E$, and up to $\sim$0.2 GeV$^2$ for $G_M$. Our results can be used to guide the analysis of low-$Q^2$ elastic scattering data and the extraction of the proton charge radius.

  16. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    DOE PAGES

    Alarcon, J. M.; Weiss, C.

    2018-05-08

    We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The spectral functions on the two-pion cut at $t > 4 M_\pi^2$ are constructed using the elastic unitarity relation and an $N/D$ representation. $\chi$EFT is used to calculate the real functions $J_\pm^1(t) = f_\pm^1(t)/F_\pi(t)$ (ratios of the complex $\pi\pi \rightarrow N \bar N$ partial-wave amplitudes and the timelike pion FF), which are free of $\pi\pi$ rescattering. Rescattering effects are included through the empirical timelike pion FF $|F_\pi(t)|^2$. The method allows us to compute the isovector EM spectral functions up to $t \sim 1$ GeV$^2$ with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at $t = 0$ (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-$Q^2$ FF data is achieved up to $\sim$0.5 GeV$^2$ for $G_E$, and up to $\sim$0.2 GeV$^2$ for $G_M$. Our results can be used to guide the analysis of low-$Q^2$ elastic scattering data and the extraction of the proton charge radius.

  17. Empirically derived dietary patterns and incident type 2 diabetes mellitus: a systematic review and meta-analysis on prospective observational studies.

    PubMed

    Maghsoudi, Zahra; Ghiasvand, Reza; Salehi-Abargouei, Amin

    2016-02-01

    To systematically review prospective cohort studies about the association between dietary patterns and type 2 diabetes mellitus (T2DM) incidence, and to quantify the effects using a meta-analysis. Databases such as PubMed, ISI Web of Science, SCOPUS and Google Scholar were searched up to 15 January 2015. Cohort studies which tried to examine the association between empirically derived dietary patterns and incident T2DM were selected. The relative risks (RR) and their 95 % confidence intervals for diabetes among participants with highest v. lowest adherence to derived dietary patterns were incorporated into meta-analysis using random-effects models. Ten studies (n = 404 528) were enrolled in the systematic review and meta-analysis; our analysis revealed that adherence to the 'healthy' dietary patterns significantly reduced the risk of T2DM (RR=0·86; 95 % CI 0·82, 0·90), while the 'unhealthy' dietary patterns adversely affected diabetes risk (RR=1·30; 95 % CI 1·18, 1·43). Subgroup analysis showed that unhealthy dietary patterns in which foods with high phytochemical content were also loaded did not significantly increase T2DM risk (RR=1·06; 95 % CI 0·87, 1·30). 'Healthy' dietary patterns containing vegetables, fruits and whole grains can lower diabetes risk by 14 %. Consuming higher amounts of red and processed meats, high-fat dairy and refined grains in the context of 'unhealthy' dietary patterns will increase diabetes risk by 30 %; while including foods with high phytochemical content in these patterns can modify this effect.
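
    Random-effects pooling of study-level relative risks is commonly done with the DerSimonian-Laird estimator on the log-RR scale, recovering each study's standard error from its 95 % CI width. A sketch with hypothetical study values (illustration only, not the review's data):

```python
import math

def pool_random_effects(rrs, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of relative risks.
    SE of log(RR) is recovered from the 95% CI: (ln hi - ln lo)/(2*1.96)."""
    y = [math.log(r) for r in rrs]
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
          for lo, hi in zip(ci_los, ci_his)]
    w = [1.0 / s ** 2 for s in se]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)              # between-study variance
    wr = [1.0 / (s ** 2 + tau2) for s in se]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return math.exp(pooled)

# Hypothetical study-level RRs with 95% CIs (not the review's ten studies)
rr = pool_random_effects([0.80, 0.90, 0.85],
                         [0.70, 0.80, 0.72],
                         [0.92, 1.01, 1.00])
```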

  18. Examining spring and autumn phenology in a temperate deciduous urban woodlot

    NASA Astrophysics Data System (ADS)

    Yu, Rong

    This dissertation is an intensive phenological study in a temperate deciduous urban woodlot over six consecutive years (2007-2012). It explores three important topics related to spring and autumn phenology, as well as ground and remote sensing phenology. First, it examines key climatic factors influencing spring and autumn phenology by conducting phenological observations four days a week and recording daily microclimate measurements. Second, it investigates the differences in phenological responses between an urban woodlot and a rural forest by employing comparative basswood phenological data. Finally, it bridges ground visual phenology and remote sensing derived phenological changes by using the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) derived from the Moderate Resolution Imaging Spectro-radiometer (MODIS). The primary outcomes are as follows: 1) empirical spatial regression models for two dominant tree species - basswood and white ash - have been built and analyzed to detect spatial patterns and possible causes of phenological change; the results show that local urban settings significantly affect phenology; 2) empirical phenological progression models have been built for each species and the community as a whole to examine how phenology develops in spring and autumn; the results indicate that the critical factor influencing spring phenology is AGDD (accumulated growing degree-days) and for autumn phenology, ACDD (accumulated chilling degree-days) and day length; and 3) satellite derived phenological changes have been compared with ground visual community phenology in both spring and autumn seasons, and the results confirm that both NDVI and EVI depict vegetation dynamics well and therefore have corresponding phenological meanings.
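
    AGDD, the critical spring predictor identified above, is conventionally computed as the running sum of daily mean temperature above a base threshold. A minimal sketch (the 5 °C base and the week of temperatures are assumed illustration values):

```python
def accumulated_gdd(daily_min_max, t_base=5.0):
    """Accumulated growing degree-days: sum of daily mean temperature
    above a base threshold (t_base in deg C; 5 C is an assumed value).
    ACDD for autumn is the analogous sum below a chilling threshold."""
    agdd = 0.0
    for tmin, tmax in daily_min_max:
        agdd += max(0.0, (tmin + tmax) / 2.0 - t_base)
    return agdd

# Hypothetical early-spring week of (Tmin, Tmax) pairs in deg C
week = [(-2, 6), (0, 10), (3, 13), (5, 15), (4, 12), (6, 16), (2, 8)]
total = accumulated_gdd(week)  # -> 17.0 degree-days
```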

  19. Measuring Community Resilience to Coastal Hazards along the Northern Gulf of Mexico

    PubMed Central

    Lam, Nina S. N.; Reams, Margaret; Li, Kenan; Li, Chi; Mata, Lillian P.

    2016-01-01

    The abundant research examining aspects of social-ecological resilience, vulnerability, and hazards and risk assessment has yielded insights into these concepts and suggested the importance of quantifying them. Quantifying resilience is complicated by several factors including the varying definitions of the term applied in the research, difficulties involved in selecting and aggregating indicators of resilience, and the lack of empirical validation for the indices derived. This paper applies a new model, called the resilience inference measurement (RIM) model, to quantify resilience to climate-related hazards for 52 U.S. counties along the northern Gulf of Mexico. The RIM model uses three elements (exposure, damage, and recovery indicators) to denote two relationships (vulnerability and adaptability), and employs both K-means clustering and discriminant analysis to derive the resilience rankings, thus enabling validation and inference. The results yielded a classification accuracy of 94.2% with 28 predictor variables. The approach is theoretically sound and can be applied to derive resilience indices for other study areas at different spatial and temporal scales. PMID:27499707
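
    The RIM model's first step groups counties by K-means before discriminant analysis assigns resilience rankings. A minimal 1-D K-means sketch on hypothetical recovery scores (the actual analysis clusters on 28 predictor variables and validates with discriminant analysis):

```python
def kmeans(points, k=2, iters=50):
    """Minimal K-means for 1-D points; seeds with the extreme values
    when k == 2. Returns the final centers and cluster memberships."""
    centers = [min(points), max(points)] if k == 2 else list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical county recovery scores: two clearly separated groups
scores = [0.1, 0.15, 0.2, 0.7, 0.75, 0.8]
centers, clusters = kmeans(scores)
```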

  20. Measuring Community Resilience to Coastal Hazards along the Northern Gulf of Mexico.

    PubMed

    Lam, Nina S N; Reams, Margaret; Li, Kenan; Li, Chi; Mata, Lillian P

    2016-02-01

    The abundant research examining aspects of social-ecological resilience, vulnerability, and hazards and risk assessment has yielded insights into these concepts and suggested the importance of quantifying them. Quantifying resilience is complicated by several factors including the varying definitions of the term applied in the research, difficulties involved in selecting and aggregating indicators of resilience, and the lack of empirical validation for the indices derived. This paper applies a new model, called the resilience inference measurement (RIM) model, to quantify resilience to climate-related hazards for 52 U.S. counties along the northern Gulf of Mexico. The RIM model uses three elements (exposure, damage, and recovery indicators) to denote two relationships (vulnerability and adaptability), and employs both K-means clustering and discriminant analysis to derive the resilience rankings, thus enabling validation and inference. The results yielded a classification accuracy of 94.2% with 28 predictor variables. The approach is theoretically sound and can be applied to derive resilience indices for other study areas at different spatial and temporal scales.

  1. The core mass-radius relation for giants - A new test of stellar evolution theory

    NASA Technical Reports Server (NTRS)

    Joss, P. C.; Rappaport, S.; Lewis, W.

    1987-01-01

    It is demonstrated here that the measurable properties of systems containing degenerate dwarfs can be used as a direct test of the core mass-radius relation for moderate-mass giants if the final stages of the loss of the envelope of the progenitor giant occurred via stable critical lobe overflow. This relation directly probes the internal structure of stars at a relatively advanced evolutionary state and is only modestly influenced by adjustable parameters. The measured properties of six binary systems, including such diverse systems as Sirius and Procyon and two millisecond pulsars, are utilized to derive constraints on the empirical core mass-radius relation, and the constraints are compared to the theoretical relation. The possibility that the final stages of envelope ejection of the giant progenitor of Sirius B occurred via critical lobe overflow in historical times is considered.

  2. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates

    USGS Publications Warehouse

    Schwindt, Adam R.; Winkelman, Dana L.

    2016-01-01

    Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L⁻¹ and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
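
    A stochastic stage-structured model projects abundance with randomly perturbed vital rates, and the stochastic growth rate is the exponential of the long-run average log growth. The sketch below uses a hypothetical 2-stage (juvenile, adult) matrix and perturbation size, not the study's fathead minnow estimates:

```python
import math
import random

def stochastic_growth_rate(mean_matrix, cv=0.1, years=2000, seed=1):
    """Stochastic growth rate lambda_S of a 2-stage matrix model:
    exp of the average log growth of total abundance, with each vital
    rate perturbed independently every year (Gaussian noise, truncated
    at zero; illustrative choice of perturbation)."""
    rng = random.Random(seed)
    n = [0.5, 0.5]                      # juvenile, adult fractions
    log_growth = 0.0
    for _ in range(years):
        m = [[max(0.0, e * (1.0 + rng.gauss(0.0, cv))) for e in row]
             for row in mean_matrix]
        n = [m[0][0] * n[0] + m[0][1] * n[1],
             m[1][0] * n[0] + m[1][1] * n[1]]
        total = n[0] + n[1]
        log_growth += math.log(total)
        n = [n[0] / total, n[1] / total]   # renormalise to avoid overflow
    return math.exp(log_growth / years)

# Hypothetical vital rates: fecundity 1.2, juvenile survival 0.4, adult survival 0.6
lam = stochastic_growth_rate([[0.0, 1.2], [0.4, 0.6]])  # lambda_S > 1: growth
```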

  3. Regionalization of subsurface stormflow parameters of hydrologic models: Up-scaling from physically based numerical simulations at hillslope scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Melkamu; Ye, Sheng; Li, Hongyi

    2014-07-19

    Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of physical basis of common parameterizations precludes a priori estimation (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained on the basis of analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of climatic aridity index.
This suggests a possible codependence of climate, soils, vegetation and topographic properties, and suggests that subsurface flow parameterization needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes, and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
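
    The empirical side of the comparison above analyzes streamflow recession curves, commonly by fitting the power law -dQ/dt = a*Q^b in log-log space. A sketch that recovers b = 1 from a synthetic linear-reservoir recession (the synthetic series and rate constant are illustrative):

```python
import math

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a*Q^b by least squares in log-log space from a
    receding discharge series, using midpoint discharge for each step."""
    xs, ys = [], []
    for k in range(len(q) - 1):
        dq = (q[k] - q[k + 1]) / dt          # -dQ/dt, positive in recession
        qm = 0.5 * (q[k] + q[k + 1])         # midpoint discharge
        if dq > 0:
            xs.append(math.log(qm))
            ys.append(math.log(dq))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic linear reservoir (b = 1, rate constant 0.1 per day)
q_syn = [10.0 * math.exp(-0.1 * k) for k in range(30)]
a, b = fit_recession(q_syn)   # b very close to 1
```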

  4. Investigating the relation between the geometric properties of river basins and the filtering parameters for regional land hydrology applications using GRACE models

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2016-04-01

    This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the gaussianity of GRACE equivalent water height (EWH) over the selected basins. The EWH kurtosis was selected as a metric of gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have also been developed for sub-regional scale basins.
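
    Gaussian smoothing of GRACE harmonics is conventionally applied as degree-dependent spectral weights. The Jekeli-style recursion sketched below is the standard form in the GRACE literature (it is not spelled out in this abstract, and the recursion is numerically stable only for moderate degrees):

```python
import math

def gaussian_weights(radius_km, n_max, a_km=6371.0):
    """Degree-dependent weights of the Gaussian averaging filter for
    spherical-harmonic fields (Jekeli-style recursion):
      b    = ln2 / (1 - cos(r/a))
      W_0  = 1
      W_1  = (1 + e^{-2b}) / (1 - e^{-2b}) - 1/b
      W_n1 = -((2n+1)/b) * W_n + W_{n-1}
    Returns weights for degrees 0..n_max."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / a_km))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for n in range(1, n_max):
        w.append(-(2.0 * n + 1.0) / b * w[n] + w[n - 1])
    return w

w = gaussian_weights(300.0, 30)   # 300 km smoothing radius, degrees 0..30
```

    Larger radii damp high degrees more strongly, which is why the optimal radius trades spatial resolution against stripe-error suppression.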

  5. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    Earth Grid (EASE-Grid). The NED raster elevation data of one arc-second resolution (30 m) over the continental US are derived from multiple satellites ... empirical and ME distributions, yet ensuring computational efficiency. Instead of computing empirical histograms from large amounts of data, only some

  6. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  7. Regionally Adaptable Ground Motion Prediction Equation (GMPE) from Empirical Models of Fourier and Duration of Ground Motion

    NASA Astrophysics Data System (ADS)

    Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin

    2016-04-01

    The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications like site-specific (e.g., critical-facility) hazard analysis, ground motions obtained from the GMPEs need to be adjusted/corrected to the particular site or site condition under investigation. This study presents a complete framework for developing a response spectral GMPE within which the issue of adjusting ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process: the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvt) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to the one described in Edwards and Faeh (2013). Comparison of the median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.

  8. A Global Classification of Contemporary Fire Regimes

    NASA Astrophysics Data System (ADS)

    Norman, S. P.; Kumar, J.; Hargrove, W. W.; Hoffman, F. M.

    2014-12-01

    Fire regimes provide a sensitive indicator of changes in climate and human use, as the concept includes fire extent, season, frequency, and intensity. Fires that occur outside the distribution of one or more aspects of a fire regime may affect ecosystem resilience. However, global-scale data related to these varied aspects of fire regimes are highly inconsistent due to incomplete or inconsistent reporting. In this study, we derive a globally applicable approach to characterizing similar fire regimes using long geophysical time series, namely MODIS hotspots since 2000. K-means non-hierarchical clustering was used to generate empirically based groups that minimize within-cluster variability. Satellite-based fire detections are known to have shortcomings, including under-detection caused by obscuring smoke, clouds or dense canopy cover and by rapid spread rates, as often occur with flashy fuels or during extreme weather. Nevertheless, the resulting regions are free from preconceptions, and the empirical, data-mining approach used on this relatively uniform data source allows the region structures to emerge from the data themselves. Comparing such an empirical classification to expectations from climate, phenology, land use or development-based models can help us interpret the similarities and differences among places and how they provide different indicators of changes of concern. Classifications can help identify ahead of time where large, infrequent mega-fires are likely to occur, such as in the boreal forest and portions of the interior US West, and where fire reports are incomplete, such as in less industrialized countries.
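    The K-means step named above can be illustrated with a minimal Lloyd's-algorithm sketch; this is a generic implementation operating on per-cell fire-regime attribute vectors, not the study's actual code.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means (Lloyd's algorithm): assign each point to the
    nearest centroid, then recompute centroids, iterating until the
    within-cluster variance stops decreasing."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # New centroid = mean of the points assigned to it
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

    In a fire-regime application, each row of `X` would hold one grid cell's attribute vector (e.g. fire frequency, seasonality, intensity), and the returned labels define the empirically derived regime classes.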

  9. Far-field tsunami magnitude determined from ocean-bottom pressure gauge data around Japan

    NASA Astrophysics Data System (ADS)

    Baba, T.; Hirata, K.; Kaneda, Y.

    2003-12-01

    Tsunami magnitude is the most fundamental parameter for scaling tsunamigenic earthquakes. According to Abe (1979), the tsunami magnitude, Mt, is empirically related to the crest-to-trough amplitude, H, of the far-field tsunami wave in meters (Mt = logH + 9.1). Here we investigate the far-field tsunami magnitude using ocean-bottom pressure gauge data. Recent ocean-bottom pressure measurements provide more precise tsunami data with a high signal-to-noise ratio. The Japan Marine Science and Technology Center is monitoring ocean-bottom pressure fluctuations using two submarine cables at depths of 1500-2400 m. These geophysical observatory systems are located off Cape Muroto, Southwest Japan, and off Hokkaido, Northern Japan. The ocean-bottom pressure data recorded with the Muroto and Hokkaido systems have been collected continuously since March 1997 and October 1999, respectively. Over the period from March 1997 to June 2003, we observed four far-field tsunami signals, generated by earthquakes, on ocean-bottom pressure records. These far-field tsunamis were generated by the 1998 Papua New Guinea eq. (Mw 7.0), 1999 Vanuatu eq. (Mw 7.2), 2001 Peru eq. (Mw 8.4) and 2002 Papua New Guinea eq. (Mw 7.6). A maximum amplitude of about 30 mm was recorded for the tsunami from the 2001 Peru earthquake. Direct application of Abe's empirical relation to ocean-bottom pressure gauge data underestimates tsunami magnitudes by about an order of magnitude. This is because Abe's empirical relation was derived only from tsunami amplitudes at coastal tide gauges, where the tsunami is amplified by the shoaling of the topography and by reflection at the coastline. These effects do not apply to offshore tsunamis in deep oceans. In general, amplification due to shoaling near the coastline is governed by Green's law, in which the tsunami amplitude is proportional to h^{-1/4}, where h is the water depth. The wave amplitude is also doubled by reflection at the fixed edge (coastline). Hence, we introduce a water-depth term and a reflection coefficient of 2 into Abe's original empirical relation to correct the tsunami amplitude for the open ocean and obtain Mt = log(2H/h^{-1/4}) + 9.1, where h is the depth of the ocean-bottom pressure gauge. The modified empirical relation produces tsunami magnitudes close to those determined using tide gauges.
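    Abe's relation and the depth-corrected variant above can be written out directly; the function names are mine, and the modified formula follows the abstract's Mt = log(2H/h^{-1/4}) + 9.1 with base-10 logarithms assumed.

```python
import math

def abe_mt_coastal(H):
    """Abe (1979): tsunami magnitude from a coastal tide-gauge
    crest-to-trough amplitude H in metres, Mt = log10(H) + 9.1."""
    return math.log10(H) + 9.1

def abe_mt_offshore(H, h):
    """Depth-corrected variant for an ocean-bottom pressure gauge at
    depth h (metres): a reflection coefficient of 2 and Green's-law
    shoaling (amplitude proportional to h**-0.25) are applied first,
    giving Mt = log10(2 * H / h**-0.25) + 9.1."""
    return math.log10(2.0 * H / h ** -0.25) + 9.1
```

    For the 30 mm (0.03 m) amplitude of the 2001 Peru tsunami recorded at roughly 2400 m depth, the corrected relation gives Mt of about 8.7, the same order as the event's Mw 8.4, whereas applying the coastal relation directly to the offshore amplitude gives only about 7.6.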

  10. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and Cβ chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force-field geometry-optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant-temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD from the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins.
In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases the large structural change may be due to force-field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches, such as QM/MM or linear-scaling approaches, or aid in interpreting protein structural dynamics from QM-derived chemical shifts.

  11. A method to integrate descriptive and experimental field studies at the level of data and empirical concepts

    PubMed Central

    Bijou, Sidney W.; Peterson, Robert F.; Ault, Marion H.

    1968-01-01

    It is the thesis of this paper that data from descriptive and experimental field studies can be interrelated at the level of data and empirical concepts if both sets are derived from frequency-of-occurrence measures. The methodology proposed for a descriptive field study is predicated on three assumptions: (1) The primary data of psychology are the observable interactions of a biological organism and environmental events, past and present. (2) Theoretical concepts and laws are derived from empirical concepts and laws, which in turn are derived from the raw data. (3) Descriptive field studies describe interactions between behavioral and environmental events; experimental field studies provide information on their functional relationships. The ingredients of a descriptive field investigation using frequency measures consist of: (1) specifying in objective terms the situation in which the study is conducted, (2) defining and recording behavioral and environmental events in observable terms, and (3) measuring observer reliability. Field descriptive studies following the procedures suggested here would reveal interesting new relationships in the usual ecological settings and would also provide provocative cues for experimental studies. On the other hand, field-experimental studies using frequency measures would probably yield findings that would suggest the need for describing new interactions in specific natural situations. PMID:16795175

  12. The application of Signalling Theory to health-related trust problems: The example of herbal clinics in Ghana and Tanzania.

    PubMed

    Hampshire, Kate; Hamill, Heather; Mariwah, Simon; Mwanga, Joseph; Amoako-Sakyi, Daniel

    2017-09-01

    In contexts where healthcare regulation is weak and levels of uncertainty high, how do patients decide whom and what to trust? In this paper, we explore the potential for using Signalling Theory (ST, a form of Behavioural Game Theory) to investigate health-related trust problems under conditions of uncertainty, using the empirical example of 'herbal clinics' in Ghana and Tanzania. Qualitative, ethnographic fieldwork was conducted over an eight-month period (2015-2016) in eight herbal clinics in Ghana and ten in Tanzania, including semi-structured interviews with herbalists (N = 18) and patients (N = 68), plus detailed ethnographic observations and twenty additional key informant interviews. The data were used to explore four ST-derived predictions, relating to herbalists' strategic communication ('signalling') of their trustworthiness to patients, and patients' interpretation of those signals. Signalling Theory is shown to provide a useful analytical framework, allowing us to go beyond the primary trust problem addressed by other researchers - cataloguing observable indicators of trustworthiness - and providing tools for tackling the trickier secondary trust problem, where the trustworthiness of those indicators must be ascertained. Signalling Theory also enables a basis for comparative work between different empirical contexts that share the underlying condition of uncertainty. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Which helper behaviors and intervention styles are related to better short-term outcomes in telephone crisis intervention? Results from a Silent Monitoring Study of Calls to the U.S. 1-800-SUICIDE Network.

    PubMed

    Mishara, Brian L; Chagnon, François; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cécile; Campbell, Julie K; Berman, Alan

    2007-06-01

    A total of 2,611 calls to 14 helplines were monitored to observe helper behaviors, caller characteristics, and changes during the calls. The relationships between intervention characteristics and call outcomes are reported for 1,431 crisis calls. Empathy and respect, as well as factor-analytically derived scales of supportive approach, good contact, and collaborative problem solving, were significantly related to positive outcomes; active listening was not. We recommend recruiting helpers with these characteristics, developing standardized training in those methods that are empirically shown to be effective, and further research relating short-term outcomes to long-term effects.

  14. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  15. Large deviation function for a driven underdamped particle in a periodic potential

    NASA Astrophysics Data System (ADS)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.

  16. The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona

    NASA Astrophysics Data System (ADS)

    Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph

    2018-06-01

    Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the “freeze-in” distance (Rf) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of Rf for Fe10+ and Fe13+ derived from multi-wavelength imaging observations of the corresponding Fe XI (Fe10+) 789.2 nm and Fe XIV (Fe13+) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze in at different heliocentric distances. In polar coronal holes (CHs) Rf is around 1.45 R⊙ for Fe10+ and below 1.25 R⊙ for Fe13+. Along open field lines in streamer regions, Rf ranges from 1.4 to 2 R⊙ for Fe10+ and from 1.5 to 2.2 R⊙ for Fe13+. These first empirical Rf values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R⊙.

  17. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    ERIC Educational Resources Information Center

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to run the times of various other algorithms. (JN)
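    The run-time behaviour discussed in this record can be reproduced by counting comparisons in a straightforward bubble sort; this is a generic sketch, not the article's pictorial tool.

```python
import random

def bubble_sort(items):
    """Plain bubble sort (no early-exit); returns the sorted list and
    the number of comparisons, which is always n*(n-1)/2."""
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):  # the tail a[n-1-i:] is already sorted
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# Doubling n roughly quadruples the work -- the empirical signature of O(n^2).
for n in (100, 200, 400):
    _, c = bubble_sort(random.sample(range(n), n))
    assert c == n * (n - 1) // 2
```

    Timing the same loop over increasing n, or plotting the comparison counts, yields the quadratic run-time curve that the article derives empirically.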

  18. The contribution of executive functions to emergent mathematic skills in preschool children.

    PubMed

    Espy, Kimberly Andrews; McDiarmid, Melanie M; Cwik, Mary F; Stalets, Melissa Meade; Hamby, Arlena; Senn, Theresa E

    2004-01-01

    Mathematical ability is related both to activation of the prefrontal cortex in neuroimaging studies of adults and to executive functions in school-age children. The purpose of this study was to determine whether executive functions are related to emergent mathematical proficiency in preschool children. Preschool children (N = 96) were administered an executive function battery that was reduced empirically to working memory (WM), inhibitory control (IC), and shifting abilities by calculating composite scores derived from principal component analysis. Both WM and IC predicted early arithmetic competency, with the observed relations robust after controlling statistically for child age, maternal education, and child vocabulary. Only IC accounted for unique variance in mathematical skills after the contributions of the other executive functions were controlled statistically as well. Specific executive functions are thus related to emergent mathematical proficiency in this age range. Longitudinal studies using structural equation modeling are necessary to better characterize these ontogenetic relations.
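    The composite-score construction mentioned above (principal component analysis over standardized battery scores) can be sketched generically; `pca_composite_scores` is an illustrative name, and this is not the study's exact procedure.

```python
import numpy as np

def pca_composite_scores(X, n_components=3):
    """Project standardized scores onto the leading principal
    components of the correlation matrix, yielding one composite
    score per component for each participant (row of X)."""
    # Standardize each measure (column) to zero mean, unit variance
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Eigen-decomposition of the correlation matrix
    C = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    # Keep the components with the largest eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return Z @ vecs[:, order]
```

    With three retained components, each participant's row of scores would correspond to composites such as the WM, IC, and shifting factors described in the abstract.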

  19. Systematic review of empiricism and theory in domestic minor sex trafficking research.

    PubMed

    Twis, Mary K; Shelton, Beth Anne

    2018-01-01

    Empiricism and the application of human behavior theory to inquiry are regarded as markers of high-quality research. Unfortunately, scholars have noted that there are many gaps in theory and empiricism within the human trafficking literature, calling into question the legitimacy of policies and practices that are derived from the available data. To date, there has not been an analysis of the extent to which empirical methods and human behavior theory have been applied to domestic minor sex trafficking (DMST) research as a subcategory of human trafficking inquiry. To fill this gap in the literature, this systematic review was designed to assess the degree to which DMST publications are a) empirical, and b) apply human behavior theory to inquiry. This analysis also focuses on answering research questions related to patterns within DMST study data sources, and patterns of human behavior theory application. The results of this review indicate that a minority of sampled DMST publications are empirical, a minority of those articles that were empirical apply a specific human behavior theory within the research design and reporting of results, a minority of articles utilize data collected directly from DMST victims, and that there are no discernible patterns in the application of human behavior theory to DMST research. This research note suggests that DMST research is limited by the same challenges as the larger body of human trafficking scholarship. Based upon these overarching findings, specific recommendations are offered to DMST researchers who are committed to enhancing the quality of DMST scholarship.

  20. Cost-effectiveness evaluation of voriconazole versus liposomal amphotericin B as empirical therapy for febrile neutropenia in Australia.

    PubMed

    Al-Badriyeh, Daoud; Liew, Danny; Stewart, Kay; Kong, David C M

    2009-01-01

    A major randomized clinical trial, evaluating voriconazole versus liposomal amphotericin B (LAMB) as empirical therapy in febrile neutropenia, recommended voriconazole as a suitable alternative to LAMB. The current study sought to investigate the health economic impact of using voriconazole and LAMB for febrile neutropenia in Australia. A decision analytic model was constructed to capture downstream consequences of empirical antifungal therapy with each agent. The main outcomes were: success, breakthrough fungal infection, persistent baseline fungal infection, persistent fever, premature discontinuation and death. Underlying transition probabilities and treatment patterns were derived directly from trial data. Resource use was estimated using an expert panel. Cost inputs were obtained from the latest Australian representative published sources. The perspective adopted was that of the Australian hospital. Uncertainty and sensitivity analyses were undertaken via the Monte Carlo simulation. Compared with voriconazole, LAMB was associated with a net cost saving of AU$1422 (2.9%) per patient. A similar trend was observed with the cost per death prevented and successful treatment. LAMB dominated voriconazole as it resulted in higher efficacy and lower costs when compared with voriconazole. The results were most sensitive to the duration of therapy and the alternative therapy used post discontinuations. In uncertainty analysis, LAMB had 99.8% chance of costing less than voriconazole. In this study, which used the current standard five component endpoint to assess the impact of empirical antifungal therapy, LAMB was associated with cost savings relative to voriconazole.
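    The decision-analytic model described above works by weighting each downstream outcome's cost by its transition probability; the sketch below uses purely hypothetical probabilities and costs, not the trial-derived inputs of the study.

```python
def expected_cost(branches):
    """Expected cost of one treatment arm of a decision tree.
    branches: list of (probability, cost) pairs over mutually
    exclusive outcomes (success, breakthrough infection, ...)."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "outcome probabilities must sum to 1"
    return sum(p * c for p, c in branches)

# Hypothetical illustration only -- not the study's inputs.
voriconazole = [(0.3, 40000.0), (0.7, 52000.0)]
lamb = [(0.3, 39000.0), (0.7, 50000.0)]
saving = expected_cost(voriconazole) - expected_cost(lamb)
```

    In the actual analysis the Monte Carlo step would repeatedly redraw the probabilities and costs from their uncertainty distributions and recompute the saving, giving the reported probability that LAMB costs less.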

  1. Byzantine maritime trade in southern Jordan: The evidence from Port of Aila ('Aqaba).

    NASA Astrophysics Data System (ADS)

    Al-Nasarat, Mohammed

    Eusebius of Caesarea, in the Onomasticon, said: "Ailath (Aila) is situated at the extremity of Palestine between the southern desert and the Red Sea where cargo was transported by ship from both Egypt and India". There is no doubt that the port of Aila ('Aqaba) was important for sea trade during the Byzantine period and in ancient times. Aila acquired significance in Byzantine commerce and seafaring according to information derived from Byzantine historians, documents, pilgrims' accounts and archaeological excavations. This paper focuses on Byzantine maritime trade at the port of Aila during the period between the fourth and seventh centuries A.D., its importance in the flourishing trade of southern Jordan, and its relations with other major trade centers such as Gaza, Alexandria and Ethiopia. It appears that the port of Aila played a major role in the economy of the Byzantine Empire and in international trade, as attested in the accounts of historians and pilgrims who visited the area during this period, and in archaeological excavations which revealed that Aila was at least a transit point and perhaps even a production site for fish sauce or related products in the Byzantine period.

  2. Emotional intelligence: a review of the literature with specific focus on empirical and epistemological perspectives.

    PubMed

    Akerjordet, Kristin; Severinsson, Elisabeth

    2007-08-01

    The aim of this literature review was to evaluate and discuss previous research on emotional intelligence with specific focus on empirical and epistemological perspectives. The concept of emotional intelligence is derived from extensive research and theory about thoughts, feelings and abilities that, prior to 1990, were considered to be unrelated phenomena. Today, emotional intelligence attracts growing interest worldwide, contributing to critical reflection as well as to various educational, health and occupational outcomes. Systematic review. The findings revealed that the epistemological tradition of natural science is the most frequently used and that, therefore, few articles related to humanistic sciences or philosophical perspectives were found. There is no agreement as to whether emotional intelligence is an individual ability, non-cognitive skill, capability or competence. One important finding is that, regardless of the theoretical framework used, researchers agree that emotional intelligence embraces emotional awareness in relation to self and others, professional efficiency and emotional management. There have been some interesting theoretical frameworks that relate emotional intelligence to stress and mental health within different contexts. Emotional learning and maturation processes, i.e. personal growth and development in the area of emotional intelligence, are central to professional competence. There is no doubt that the research on emotional intelligence is scarce and still at the developmental stage. Clinical questions pertaining to the nursing profession should be developed with focus on personal qualities of relevance to nursing practice. Different approaches are needed in order to further expand the theoretical, empirical and philosophical foundation of this important and enigmatic concept. Emotional intelligence may have implications for health promotion and quality of working life within nursing. 
Emotional intelligence seems to lead to more positive attitudes, greater adaptability, improved relationships and increased orientation towards positive values.

  3. First-order approximation for the pressure-flow relationship of spontaneously contracting lymphangions.

    PubMed

    Quick, Christopher M; Venugopal, Arun M; Dongaonkar, Ranjeet M; Laine, Glen A; Stewart, Randolph H

    2008-05-01

    To return lymph to the great veins of the neck, it must be actively pumped against a pressure gradient. Mean lymph flow in a portion of a lymphatic network has been characterized by an empirical relationship (P(in) - P(out) = -P(p) + R(L)Q(L)), where P(in) - P(out) is the axial pressure gradient and Q(L) is mean lymph flow. R(L) and P(p) are empirical parameters characterizing the effective lymphatic resistance and pump pressure, respectively. The relation of these global empirical parameters to the properties of lymphangions, the segments of a lymphatic vessel bounded by valves, has been problematic. Lymphangions have a structure like blood vessels but cyclically contract like cardiac ventricles; they are characterized by a contraction frequency (f) and the slopes of the end-diastolic pressure-volume relationship [minimum value of resulting elastance (E(min))] and end-systolic pressure-volume relationship [maximum value of resulting elastance (E(max))]. Poiseuille's law provides a first-order approximation relating the pressure-flow relationship to the fundamental properties of a blood vessel. No analogous formula exists for a pumping lymphangion. We therefore derived an algebraic formula predicting lymphangion flow from fundamental physical principles and known lymphangion properties. Quantitative analysis revealed that lymph inertia and resistance to lymph flow are negligible and that lymphangions act like a series of interconnected ventricles. For a single lymphangion, P(p) = P(in) (E(max) - E(min))/E(min) and R(L) = E(max)/f. The formula was tested against a validated, realistic mathematical model of a lymphangion and found to be accurate. Predicted flows were within the range of flows measured in vitro. The present work therefore provides a general solution that makes it possible to relate fundamental lymphangion properties to lymphatic system function.
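    The single-lymphangion formulas quoted above translate directly into code; the helper names and units are mine, and the flow expression simply inverts the empirical relation P_in - P_out = -P_p + R_L * Q_L.

```python
def pump_pressure(P_in, E_max, E_min):
    """P_p = P_in * (E_max - E_min) / E_min for a single lymphangion."""
    return P_in * (E_max - E_min) / E_min

def lymphatic_resistance(E_max, f):
    """R_L = E_max / f, with f the contraction frequency."""
    return E_max / f

def mean_lymph_flow(P_in, P_out, E_max, E_min, f):
    """Solve P_in - P_out = -P_p + R_L * Q_L for the mean flow Q_L."""
    P_p = pump_pressure(P_in, E_max, E_min)
    R_L = lymphatic_resistance(E_max, f)
    return (P_in - P_out + P_p) / R_L
```

    Note the analogy the authors draw: P_p plays the role of a ventricular pump pressure set by the elastance ratio, while R_L scales inversely with contraction frequency, so faster-contracting lymphangions present less effective resistance.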

  4. Are cross-cultural comparisons of norms on death anxiety valid?

    PubMed

    Beshai, James A

    2008-01-01

    Cross-cultural comparisons of norms derived from research on Death Anxiety are valid as long as they provide existential validity. Existential validity is not empirically derived like construct validity. It is an understanding of being human unto death. It is the realization that death is imminent. It is the inner sense that provides a responder to death anxiety scales with a valid expression of his or her sense of the prospect of dying. It can be articulated in a life review by a disclosure of one's ontology. This article calls upon psychologists who develop death anxiety scales to disclose their presuppositions about death before administering a questionnaire. By disclosing his or her ontology, a psychologist provides a means of disclosing his or her intentionality in responding to the items. This humanistic paradigm allows for an interactive participation between investigator and subject. Lester, Templer, and Abdel-Khalek (2006-2007) enriched psychology with significant empirical data on several correlates of death anxiety. But all scientists, especially psychologists, will always have alternative interpretations of the same empirical fact pattern. Empirical data are limited by the problem of affirming the consequent. A phenomenology of language and communication makes existential validity a necessary step toward a broader understanding of the meaning of death anxiety.

  5. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations, or Celestial Pole Offsets (CPO), can only be obtained by the VLBI technique. An accuracy of the order of 0.1 milliarcseconds (mas) allows the observed nutation to be compared with theoretical predictions for a rigid Earth and geophysical parameters describing the Earth's interior to be constrained. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal nutation terms, in an attempt to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.

  6. A study of the longevity and operational reliability of Goddard Spacecraft, 1960-1980

    NASA Technical Reports Server (NTRS)

    Shockey, E. F.

    1981-01-01

    Compiled data regarding the design lives and the lifetimes actually achieved by 104 orbiting satellites launched by the Goddard Space Flight Center between 1960 and 1980 are analyzed. Historical trends over the entire 21-year period are reviewed, and the more recent data are subjected to an examination of several key parameters. An empirical reliability function is derived and compared with various mathematical models. Data from related studies are also discussed. The results provide insight into the reliability history of Goddard spacecraft and guidance for estimating the reliability of future programs.

  7. Deriving Two-Dimensional Ocean Wave Spectra and Surface Height Maps from the Shuttle Imaging Radar (SIR-B)

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1986-01-01

    Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier-filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.

  8. Gold and palladium minerals (including empirical PdCuBiSe3) from the former Roter Bär mine, St. Andreasberg, Harz Mountains, Germany: a result of low-temperature, oxidising fluid overprint

    NASA Astrophysics Data System (ADS)

    Cabral, Alexandre Raphael; Ließmann, Wilfried; Lehmann, Bernd

    2015-10-01

    At Roter Bär, a former underground mine in the polymetallic deposits of St. Andreasberg in the middle-Harz vein district, Germany, native gold and palladium minerals occur very locally in clausthalite-hematite pockets a few millimetres across in carbonate veinlets. The native gold is a Au-Ag intermetallic compound, and the palladium minerals are characterised as mertieite-II [Pd8(Sb,As)3] and empirical PdCuBiSe3 with some S. The latter coexists with bohdanowiczite (AgBiSe2), a mineral that is stable below 120 °C. The geological setting of Roter Bär, underneath a post-Variscan unconformity, and its hematite-selenide-gold association suggest that oxidising hydrothermal brines of low temperature were instrumental to the Au-Pd mineralisation. The Roter Bär Au-Pd mineralisation can be explained by Permo-Triassic, red-bed-derived brines in the context of post-Variscan, unconformity-related fluid overprint.

  9. Development and Pilot of the Caregiver Strategies Inventory.

    PubMed

    Kirby, Anne V; Little, Lauren M; Schultz, Beth; Watson, Linda R; Zhang, Wanqing; Baranek, Grace T

    2016-01-01

    Children with autism spectrum disorder often demonstrate unusual behavioral responses to sensory stimuli (i.e., sensory features). To manage everyday activities, caregivers may implement strategies to address these features during family routines. However, investigation of specific strategies used by caregivers is limited by the lack of empirically developed measures. In this study, we describe the development and pilot results of the Caregiver Strategies Inventory (CSI), a supplement to the Sensory Experiences Questionnaire Version 3.0 (SEQ 3.0; Baranek, 2009) that measures caregivers' strategies in response to their children's sensory features. Three conceptually derived and empirically grounded strategy types were tested: cognitive-behavioral, sensory-perceptual, and avoidance. Results indicated that the CSI demonstrated good internal consistency and that strategy use was related to child age and cognition. Moreover, parent feedback after completing the CSI supported its utility and social validity. The CSI may be used alongside the SEQ 3.0 to facilitate a family-centered approach to assessment and intervention planning. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  10. Simulating sunflower canopy temperatures to infer root-zone soil water potential

    NASA Technical Reports Server (NTRS)

    Choudhury, B. J.; Idso, S. B.

    1983-01-01

    A soil-plant-atmosphere model for sunflower (Helianthus annuus L.), together with clear sky weather data for several days, is used to study the relationship between canopy temperature and root-zone soil water potential. Considering the empirical dependence of stomatal resistance on insolation, air temperature and leaf water potential, a continuity equation for water flux in the soil-plant-atmosphere system is solved for the leaf water potential. The transpirational flux is calculated using Monteith's combination equation, while the canopy temperature is calculated from the energy balance equation. The simulation shows that, at high soil water potentials, canopy temperature is determined primarily by air and dew point temperatures. These results agree with an empirically derived linear regression equation relating canopy-air temperature differential to air vapor pressure deficit. The model predictions of leaf water potential are also in agreement with observations, indicating that measurements of canopy temperature together with a knowledge of air and dew point temperatures can provide a reliable estimate of the root-zone soil water potential.
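The empirically derived linear relation between the canopy-air temperature differential and the air vapour pressure deficit mentioned above can be sketched numerically. The intercept and slope below are illustrative placeholders, not the sunflower regression coefficients from the study:

```python
import math

def svp_kpa(t_c):
    """Saturation vapour pressure (kPa) via the Tetens formula."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def canopy_air_diff(t_air_c, rh_frac, a=1.5, b=-2.0):
    """Empirical linear baseline Tc - Ta = a + b * VPD (deg C).

    a and b are ILLUSTRATIVE intercept/slope values, not the coefficients
    from the study; VPD is the air vapour pressure deficit in kPa.
    """
    vpd = svp_kpa(t_air_c) * (1.0 - rh_frac)
    return a + b * vpd

# A well-watered canopy on a warm, dry afternoon: large VPD drives the
# canopy below air temperature through transpirational cooling.
dT = canopy_air_diff(t_air_c=30.0, rh_frac=0.3)
```

At saturation (VPD = 0) the differential reduces to the intercept; as the air dries, transpiration cools the canopy below air temperature, which is the behaviour the regression captures.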

  11. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to the simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters and demonstrate it experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio

    We introduce a novel non-local ingredient for the construction of exchange density functionals: the reduced Hartree parameter, which is invariant under the uniform scaling of the density and represents the exact exchange enhancement factor for one- and two-electron systems. The reduced Hartree parameter is used together with the conventional meta-generalized gradient approximation (meta-GGA) semilocal ingredients (i.e., the electron density, its gradient, and the kinetic energy density) to construct a new generation exchange functional, termed u-meta-GGA. This u-meta-GGA functional is exact for the exchange of any one- and two-electron systems, is size-consistent and non-empirical, satisfies the uniform density scaling relation, and recovers the modified gradient expansion derived from the semiclassical atom theory. For atoms, ions, jellium spheres, and molecules, it shows a good accuracy, being often better than meta-GGA exchange functionals. Our construction validates the use of the reduced Hartree ingredient in exchange-correlation functional development, opening the way to an additional rung in the Jacob’s ladder classification of non-empirical density functionals.

  13. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour involves both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Fortunately, it admits a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522

  14. Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.

    PubMed

    Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L

    2016-05-10

    Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.

  15. A view not to be missed: Salient scene content interferes with cognitive restoration

    PubMed Central

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  17. Runway drainage characteristics related to tire friction performance

    NASA Technical Reports Server (NTRS)

    Yager, Thomas J.

    1991-01-01

    The capability of a runway pavement to rapidly drain water buildup during periods of precipitation is crucial to minimize tire hydroplaning potential and maintain adequate aircraft ground operational safety. Test results from instrumented aircraft, ground friction measuring vehicles, and NASA Langley's Aircraft Landing Dynamics Facility (ALDF) track have been summarized to indicate the adverse effects of pavement wetness conditions on tire friction performance. Water drainage measurements under a range of rainfall rates have been evaluated for several different runway surface treatments, including the transversely grooved and longitudinally ground concrete surfaces at the Space Shuttle Landing Facility (SLF) runway at NASA Kennedy Space Center in Florida. The major parameters influencing drainage rates and the extent of flooding/drying conditions are identified. Existing drainage test data are compared to a previously derived empirical relationship, and the need for some modification is indicated. The scope of future NASA Langley research directed toward improving empirical relationships to properly define runway drainage capability and, consequently, enhance aircraft ground operational safety is given.

  18. Essays on energy derivatives pricing and financial risk management =

    NASA Astrophysics Data System (ADS)

    Madaleno, Mara Teresa da Silva

    This thesis consists of an introductory chapter (essay I) and five further empirical essays on electricity markets and CO2 spot price behaviour, derivatives pricing analysis, and hedging. Essay I presents the structure of the thesis and the functioning and characteristics of electricity markets, as well as the types of products traded, which are analyzed in the following essays. In the second essay we conduct an empirical study of co-movements in electricity markets using wavelet analysis, discussing long-term dynamics and market integration. Essay three examines hedging performance and multiscale relationships in the German electricity spot and futures markets, also using wavelet analysis. We concentrate the investigation on the relationship between coherence evolution and hedge-ratio analysis, in a time-frequency-scale approach, between spot and futures prices, which conditions the effectiveness of the hedging strategy. Essays four, five, and six are interrelated, both with one another and with the two preceding essays, given the nature of the commodity analyzed: CO2 emission allowances, traded in electricity markets. Relationships between electricity prices, primary energy fuel prices, and carbon dioxide permits are analyzed in essay four. The efficiency of the European market for allowances is examined taking market heterogeneity into account. Essay five analyzes stylized statistical properties of the recently traded asset, CO2 emission allowances, for spot and futures returns, and examines the relation linking convenience yield and risk premium for the German European Energy Exchange (EEX) between October 2005 and October 2009. The study was conducted through empirical estimation of the CO2 allowance risk premium, the convenience yield, and their relation. Futures prices are examined from an ex-post perspective, showing evidence of a significant negative risk premium, or equivalently a positive forward premium. Finally, essay six analyzes the hedging effectiveness of emission allowance futures, providing evidence that utility gains increase with the investor's preference over risk. Deregulation of electricity markets has led to higher uncertainty in electricity prices, and with these essays we try to shed new light on structuring, pricing, and hedging in such markets.

  19. A scaling law for the critical current of Nb3Sn strands based on strong-coupling theory of superconductivity

    NASA Astrophysics Data System (ADS)

    Oh, Sangjun; Kim, Keeman

    2006-02-01

    We study the transition temperature Tc, the thermodynamic critical field Bc, and the upper critical field Bc2 of Nb3Sn within the Eliashberg theory of strongly coupled superconductors, using the Einstein spectrum α²(ω)F(ω) = λ⟨ω²⟩^(1/2) δ(ω − ⟨ω²⟩^(1/2)). The strain dependences of λ(ε) and ⟨ω²⟩^(1/2)(ε) are introduced from the empirical strain dependence of Tc(ε) for three model cases. It is found that the empirical relation Tc(ε)/Tc(0) = [Bc2(4.2 K, ε)/Bc2(4.2 K, 0)]^(1/w) (w ≈ 3) is mainly due to low-energy phonon-mode softening. We derive analytic expressions for the strain and temperature dependences of Bc(T, ε) and Bc2(T, ε) and the Ginzburg-Landau parameter κ(T, ε) from the numerical calculation results. The Summers refinement of the temperature dependence of κ(T) deviates from our calculation results. We propose a unified scaling law for flux pinning in Nb3Sn strands in the form of the Kramer model with the analytic expressions for Bc2(T, ε) and κ(T, ε) derived in this work. It is shown that the proposed scaling law gives a reasonable fit to the reported data with only eight fitting parameters.
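As a minimal numerical sketch, the quoted empirical relation between the Tc ratio and the Bc2 ratio can be inverted to propagate a strain-induced Tc depression into a Bc2 depression (assuming w = 3, the approximate value given in the abstract):

```python
def bc2_strain_ratio(tc_ratio, w=3.0):
    """Invert the empirical relation
    Tc(eps)/Tc(0) = [Bc2(4.2 K, eps)/Bc2(4.2 K, 0)]^(1/w)
    to obtain the upper-critical-field ratio; w ~ 3 per the abstract."""
    return tc_ratio ** w

# A 2% strain-induced depression of Tc implies roughly a 6% depression of Bc2.
ratio = bc2_strain_ratio(0.98)
```

The cube amplifies the Tc sensitivity: small strain effects on Tc translate into roughly three times larger fractional changes in Bc2.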

  20. How Do Theories of Cognition and Consciousness in Ancient Indian Thought Systems Relate to Current Western Theorizing and Research?

    PubMed Central

    Sedlmeier, Peter; Srinivas, Kunchapudi

    2016-01-01

    Unknown to most Western psychologists, ancient Indian scriptures contain very rich, empirically derived psychological theories that are, however, intertwined with religious and philosophical content. This article represents our attempt to extract the psychological theory of cognition and consciousness from a prominent ancient Indian thought system: Samkhya-Yoga. We derive rather broad hypotheses from this approach that may complement and extend Western mainstream theorizing. These hypotheses address an ancient personality theory, the effects of practicing the applied part of Samkhya-Yoga on normal and extraordinary cognition, as well as different ways of perceiving reality. We summarize empirical evidence collected (mostly without reference to the Indian thought system) in diverse fields of research that allows for making judgments about the hypotheses, and suggest more specific hypotheses to be examined in future research. We conclude that the existing evidence for the (broad) hypotheses is substantial but that there are still considerable gaps in theory and research to be filled. Theories of cognition contained in the ancient Indian systems have the potential to modify and complement existing Western mainstream accounts of cognition. In particular, they might serve as a basis for arriving at more comprehensive theories for several research areas that, so far, lack strong theoretical grounding, such as meditation research or research on aspects of consciousness. PMID:27014150

  1. Validation of Simplified Urban-Canopy Aerodynamic Parametrizations Using a Numerical Simulation of an Actual Downtown Area

    NASA Astrophysics Data System (ADS)

    Ramirez, N.; Afshari, Afshin; Norford, L.

    2018-07-01

    A steady-state Reynolds-averaged Navier-Stokes computational fluid dynamics (CFD) investigation of boundary-layer flow over a major portion of downtown Abu Dhabi is conducted. The results are used to derive the shear stress and characterize the logarithmic region for eight sub-domains, which overlap and are overlaid in the streamwise direction. They are characterized by a high frontal area index initially, which decreases significantly beyond the fifth sub-domain. The plan area index is relatively stable throughout the domain. For each sub-domain, the estimated local roughness length and displacement height derived from the CFD results are compared to prevalent empirical formulations. We further validate and tune a mixing-length model proposed by Coceal and Belcher (Q J R Meteorol Soc 130:1349-1372, 2004). Finally, the in-canopy wind-speed attenuation is analysed as a function of fetch. It is shown that, while there is some room for improvement in Macdonald's empirical formulations (Boundary-Layer Meteorol 97:25-45, 2000), Coceal and Belcher's mixing model in combination with the resolution method of Di Sabatino et al. (Boundary-Layer Meteorol 127:131-151, 2008) can provide a robust estimation of the average wind speed in the logarithmic region. Within the roughness sublayer, a properly parametrized Cionco exponential model is shown to be quite accurate.

  2. Chronic Fatigue Syndrome and Myalgic Encephalomyelitis: Toward An Empirical Case Definition

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Evans, Meredyth; Jantke, Rachel; Williams, Yolonda; Furst, Jacob; Vernon, Suzanne D.

    2015-01-01

    Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve reliability. In the present study, several methods (i.e., continuous symptom scores, and theoretically and empirically derived symptom cutoff scores) were used to identify the core symptoms best differentiating patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms that have good sensitivity and specificity, including fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition. PMID:26029488
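The sensitivity and specificity used to screen candidate core symptoms can be sketched as follows; the scores, labels, and cutoff below are invented for illustration and are not data from the study:

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Sensitivity and specificity of a symptom-score cutoff.

    scores: symptom severity per respondent; labels: True for patients,
    False for controls; a score >= cutoff classifies a respondent as a case.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if not y and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if not y and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy data: four patients, four controls.
scores = [90, 80, 75, 60, 40, 30, 55, 20]
labels = [True, True, True, True, False, False, False, False]
sens, spec = sensitivity_specificity(scores, labels, cutoff=50)
```

Sweeping the cutoff over the observed score range and picking the value that best balances the two quantities is one way such empirically derived cutoffs are chosen.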

  3. Comparison of modelled and empirical atmospheric propagation data

    NASA Technical Reports Server (NTRS)

    Schott, J. R.; Biegel, J. D.

    1983-01-01

    The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for the removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.

  4. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
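A toy illustration of the Gibbs sampler framework with a Rao-Blackwellized free energy estimate, using a discrete λ over two harmonic potentials in place of real alchemical end states. This is a sketch of the general idea under simplifying assumptions (kT = 1, two states), not the authors' implementation:

```python
import math
import random

def gibbs_lambda_rb(k0=1.0, k1=4.0, sweeps=20000, seed=0):
    """Gibbs sampler over (x, lambda), lambda in {0, 1}, with harmonic
    potentials U_l(x) = 0.5 * k_l * x**2 (kT = 1).

    The Rao-Blackwell estimator averages the CONDITIONAL probabilities
    P(lambda | x) instead of counting lambda visits; the free energy
    difference is then dF = -ln(<P(1|x)> / <P(0|x)>).
    """
    rng = random.Random(seed)
    lam, p0_sum, p1_sum = 0, 0.0, 0.0
    for _ in range(sweeps):
        k = k0 if lam == 0 else k1
        x = rng.gauss(0.0, 1.0 / math.sqrt(k))       # sample x | lambda
        w0 = math.exp(-0.5 * k0 * x * x)
        w1 = math.exp(-0.5 * k1 * x * x)
        p1 = w1 / (w0 + w1)                          # P(lambda = 1 | x)
        lam = 1 if rng.random() < p1 else 0          # sample lambda | x
        p0_sum += 1.0 - p1
        p1_sum += p1
    return -math.log(p1_sum / p0_sum)

# Exact answer for this toy model: dF = 0.5 * ln(k1/k0) = 0.5 * ln(4).
dF = gibbs_lambda_rb()
```

Averaging conditionals rather than 0/1 visit counts is what makes the estimator Rao-Blackwellized: it has the same expectation but lower variance.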

  5. Climate change and the collapse of the Akkadian empire: Evidence from the deep sea

    NASA Astrophysics Data System (ADS)

    Cullen, H. M.; Demenocal, P. B.; Hemming, S.; Hemming, G.; Brown, F. H.; Guilderson, T.; Sirocko, F.

    2000-04-01

    The Akkadian empire ruled Mesopotamia from the headwaters of the Tigris-Euphrates Rivers to the Persian Gulf during the late third millennium B.C. Archeological evidence has shown that this highly developed civilization collapsed abruptly near 4170 ± 150 calendar yr B.P., perhaps related to a shift to more arid conditions. Detailed paleoclimate records to test this assertion from Mesopotamia are rare, but changes in regional aridity are preserved in adjacent ocean basins. We document Holocene changes in regional aridity using mineralogic and geochemical analyses of a marine sediment core from the Gulf of Oman, which is directly downwind of Mesopotamian dust source areas and archeological sites. Our results document a very abrupt increase in eolian dust and Mesopotamian aridity, accelerator mass spectrometer radiocarbon dated to 4025 ± 125 calendar yr B.P., which persisted for ~300 yr. Radiogenic (Nd and Sr) isotope analyses confirm that the observed increase in mineral dust was derived from Mesopotamian source areas. Geochemical correlation of volcanic ash shards between the archeological site and marine sediment record establishes a direct temporal link between Mesopotamian aridification and social collapse, implicating a sudden shift to more arid conditions as a key factor contributing to the collapse of the Akkadian empire.

  6. A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new universal residual threshold for the detection of load and gage output residual outliers in wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
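The stated rule for the load-residual threshold, the empirical electrical constant times the sum of the absolute values of the first partial derivatives of a load component, can be sketched directly; the partial derivatives below are made up for illustration:

```python
def load_residual_threshold(partials, c_uVV=2.5):
    """Load-residual threshold per the described rule: the empirical
    electrical constant c_uVV (in microV/V; 2.5 for calibration/check
    loads, 0.5 for repeat points) times the sum of absolute values of
    the first partial derivatives dLoad/dOutput of one load component.

    partials: first partial derivatives of the load component with
    respect to each gage output, in load units per (microV/V).
    The numbers used below are invented for illustration.
    """
    return c_uVV * sum(abs(p) for p in partials)

# Hypothetical six-output force balance, one load component:
threshold = load_residual_threshold([120.0, 3.5, 0.8, 2.1, 0.4, 1.2])
repeat_threshold = load_residual_threshold(
    [120.0, 3.5, 0.8, 2.1, 0.4, 1.2], c_uVV=0.5)
```

Scaling the output-space constant by the partial derivatives converts a universal electrical tolerance into a per-component load tolerance.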

  7. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.

  8. Quantification of a greenhouse hydrologic cycle from equatorial to polar latitudes: The mid-Cretaceous water bearer revisited

    USGS Publications Warehouse

    Suarez, M.B.; Gonzalez, Luis A.; Ludvigson, Greg A.

    2011-01-01

    This study aims to investigate the global hydrologic cycle during the mid-Cretaceous greenhouse by utilizing the oxygen isotopic composition of pedogenic carbonates (calcite and siderite) as proxies for the oxygen isotopic composition of precipitation. The data set builds on the Aptian-Albian sphaerosiderite ??18O data set presented by Ufnar et al. (2002) by incorporating additional low latitude data including pedogenic and early meteoric diagenetic calcite ??18O. Ufnar et al. (2002) used the proxy data derived from the North American Cretaceous Western Interior Basin (KWIB) in a mass balance model to estimate precipitation-evaporation fluxes. We have revised this mass balance model to handle sphaerosiderite and calcite proxies, and to account for longitudinal travel by tropical air masses. We use empirical and general circulation model (GCM) temperature gradients for the mid-Cretaceous, and the empirically derived ??18O composition of groundwater as constraints in our mass balance model. Precipitation flux, evaporation flux, relative humidity, seawater composition, and continental feedback are adjusted to generate model calculated groundwater ??18O compositions (proxy for precipitation ??18O) that match the empirically-derived groundwater ??18O compositions to within ??0.5???. The model is calibrated against modern precipitation data sets.Four different Cretaceous temperature estimates were used: the leaf physiognomy estimates of Wolfe and Upchurch (1987) and Spicer and Corfield (1992), the coolest and warmest Cretaceous estimates compiled by Barron (1983) and model outputs from the GENESIS-MOM GCM by Zhou et al. (2008). Precipitation and evaporation fluxes for all the Cretaceous temperature gradients utilized in the model are greater than modern precipitation and evaporation fluxes. Balancing the model also requires relative humidity in the subtropical dry belt to be significantly reduced. 
As expected, calculated precipitation rates are all greater than modern precipitation rates. Calculated global average precipitation rates range from 371 mm/year to 1196 mm/year greater than modern precipitation rates. Model results support the hypothesis that increased rainout produces δ18O-depleted precipitation. Sensitivity testing of the model indicates that the amount of water vapor in the air mass, and its origin and pathway, significantly affect the oxygen isotopic composition of precipitation. Precipitation δ18O is also sensitive to seawater δ18O, and enriched tropical seawater was necessary to simulate proxy data (consistent with fossil and geologic evidence for a warmer and evaporatively enriched Tethys). Improved constraints on variables such as seawater δ18O can help improve boundary conditions for mid-Cretaceous climate simulations. © 2011 Elsevier B.V.
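The rainout effect invoked above can be illustrated with a minimal Rayleigh distillation sketch. This is a textbook approximation, not the paper's full mass balance model, and the fractionation factor below is a typical warm-temperature value assumed for illustration:

```python
def rayleigh_delta18o(delta0, f, alpha=1.0094):
    """delta18O (permil) of the vapour remaining in an air mass after
    Rayleigh rainout.

    delta0 : initial vapour delta18O (permil)
    f      : fraction of the initial vapour still in the air mass
    alpha  : liquid-vapour equilibrium fractionation factor
             (illustrative value, not taken from the paper)
    """
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# Progressive rainout (smaller f) depletes the vapour, and hence later
# precipitation, in 18O -- the effect the model results support.
```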

  9. Magnetic minerals' classification for sources of magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Kletetschka, G.; Wieczorek, M. A.

    2016-12-01

Our analysis allows interpretation of magnetic anomalies detected in meteorites, on Mars and the Moon, and on other bodies where the sources of the magnetic field can be assumed to be thermoremanent magnetization (Mtr). We show how this approach allows reconsideration of the major magnetic carriers on the Moon and Mars. Furthermore, we derive a generalized equation for estimating iron concentration from magnetizations inferred from crustal magnetic anomalies on the Moon. There is a fundamental linear relation between the magnetic efficiency of thermoremanent magnetization Mtr measured at room temperature and the level of the ambient field present at the time of acquisition. We used experimental data to derive the empirical constants for the paleofield estimate equations. Specific magnetic mineral carriers from single domain (SD) through pseudosingle domain (PSD) to multidomain (MD) states include iron, meteoritic iron, magnetite, maghemite, pyrrhotite, and hematite. The ratio Mtr/Msr is linearly proportional to the product of the magnetizing field and saturation remanence, while the proportionality constant is independent of magnetic mineralogy, domain state, or composition. We show that the level of the magnetic paleofield record relates to two types of demagnetizing field that act as a barrier against domain wall pinning during magnetic acquisition. The first type of demagnetizing field relates to the saturation magnetization constant derived from the distribution of Bohr magnetons within the crystal lattice. The second type originates from the shape of the magnetic minerals. Knowledge of the character of these demagnetizing fields is a prerequisite for paleofield estimates from rocks containing known magnetic mineralogy and magnetic shape anisotropy.
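Given the stated linear relation Mtr/Msr ∝ B·Msr with a mineralogy-independent proportionality constant, a paleofield estimate reduces to a one-line inversion. A sketch only; the constant k must come from the paper's experimental calibration, and the value used in the comment is purely illustrative:

```python
def paleofield(m_tr, m_sr, k):
    """Invert the linear relation M_tr / M_sr = k * B * M_sr for the
    ambient field B present during thermoremanent acquisition.

    m_tr : thermoremanent magnetization at room temperature
    m_sr : saturation remanence of the same sample
    k    : empirical proportionality constant (from calibration data;
           any value used here is hypothetical)
    """
    return (m_tr / m_sr) / (k * m_sr)
```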

  10. The derivation of scenic utility functions and surfaces and their role in landscape management

    Treesearch

    John W. Hamilton; Gregory J. Buhyoff; J. Douglas Wellman

    1979-01-01

    This paper outlines a methodological approach for determining relevant physical landscape features which people use in formulating judgments about scenic utility. This information, coupled with either empirically derived or rationally stipulated regression techniques, may be used to produce scenic utility functions and surfaces. These functions can provide a means for...

  11. TEACHING PHYSICS: Biking around a hollow sphere

    NASA Astrophysics Data System (ADS)

    Mak, Se-yuen; Yip, Din-yan

    1999-11-01

    The conditions required for a cyclist riding a motorbike in a horizontal circle on or above the equator of a hollow sphere are derived using concepts of equilibrium and the condition for uniform circular motion. The result is compared with an empirical analysis based on a video show. Some special cases of interest derived from the general solution are elaborated.

  12. GPS-Derived Precipitable Water Compared with the Air Force Weather Agency’s MM5 Model Output

    DTIC Science & Technology

    2002-03-26

and less than 100 sensors are available throughout Europe. While the receiver density is currently comparable to the upper-air sounding network...profiles from 38 upper-air sites throughout Europe. Based on these empirical formulae and simplifications, Bevis (1992) has determined that the error...Alaska using Bevis' (1992) empirical correlation based on 8718 radiosonde calculations over 2 years. Other studies have been conducted in Europe and

  13. On the meaning of the weighted alternative free-response operating characteristic figure of merit.

    PubMed

    Chakraborty, Dev P; Zhai, Xuetong

    2016-05-01

The free-response receiver operating characteristic (FROC) method is being increasingly used to evaluate observer performance in search tasks. Data analysis requires definition of a figure of merit (FOM) quantifying performance. While a number of FOMs have been proposed, the recommended one, namely, the weighted alternative FROC (wAFROC) FOM, is not well understood. The aim of this work is to clarify the meaning of this FOM by relating it to the empirical area under a proposed wAFROC curve. The wAFROC FOM is defined in terms of a quasi-Wilcoxon statistic that involves weights, coding the clinical importance, assigned to each lesion. A new wAFROC curve is proposed, the y-axis of which incorporates the weights, giving more credit for marking clinically important lesions, while the x-axis is identical to that of the AFROC curve. An expression is derived relating the area under the empirical wAFROC curve to the wAFROC FOM. Examples are presented with small numbers of cases showing how AFROC and wAFROC curves are affected by correct and incorrect decisions and how the corresponding FOMs credit or penalize these decisions. The wAFROC, AFROC, and inferred ROC FOMs were applied to three clinical data sets involving multiple-reader FROC interpretations in different modalities. It is shown analytically that the area under the empirical wAFROC curve equals the wAFROC FOM. This theorem is the FROC analog of a well-known theorem developed in 1975 for ROC analysis, which gave meaning to a Wilcoxon statistic based ROC FOM. A similar equivalence applies between the area under the empirical AFROC curve and the AFROC FOM. The examples show explicitly that the wAFROC FOM gives equal importance to all diseased cases, regardless of the number of lesions, a desirable statistical property not shared by the AFROC FOM. Applications to the clinical data sets show that the wAFROC FOM yields results comparable to those using the AFROC FOM.
The equivalence theorem gives meaning to the weighted AFROC FOM: it is identical to the empirical area under the weighted AFROC curve.
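The 1975 ROC-side theorem referenced above (empirical area equals a Wilcoxon statistic) is easy to state in code. This sketch covers only the unweighted ROC case; the wAFROC FOM additionally weights lesion-localization marks, which this toy function does not attempt:

```python
def wilcoxon_auc(case_ratings, control_ratings):
    """Empirical ROC area as the two-sample Wilcoxon statistic:
    the fraction of (control, case) rating pairs ranked correctly,
    counting ties as one half."""
    total = 0.0
    for c in case_ratings:
        for k in control_ratings:
            if c > k:
                total += 1.0
            elif c == k:
                total += 0.5
    return total / (len(case_ratings) * len(control_ratings))
```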

  14. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
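The thermodynamic core of such estimates is compact. A minimal sketch of the standard fusion-entropy expression, without the heat-capacity correction term the paper develops and without its compound-class regressions:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fugacity_ratio(ds_fus, t_m, t=298.15):
    """Solid/liquid fugacity ratio F = exp(-dS_fus * (T_m - T) / (R * T)).

    ds_fus : entropy of fusion, J/(mol K)
    t_m    : melting temperature, K
    t      : system temperature, K
    Heat-capacity differences between solid and liquid are ignored here;
    that contribution is what the paper's correction equation estimates.
    """
    return math.exp(-ds_fus * (t_m - t) / (R * t))
```

At the melting point the ratio is exactly 1, and it falls below 1 for solids well above ambient temperature, which is why subcooled-liquid properties must be back-calculated.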

  15. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    2017-01-01

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180∘ domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall-mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond-valence sums. Our results thus provide both theoretical and empirical guidance for future searches for ferroelectric candidates in materials of the corundum derivative family.

  16. Domain walls and ferroelectric reversal in corundum derivatives

    NASA Astrophysics Data System (ADS)

    Ye, Meng; Vanderbilt, David

    Domain walls are the topological defects that mediate polarization reversal in ferroelectrics, and they may exhibit quite different geometric and electronic structures compared to the bulk. Therefore, a detailed atomic-scale understanding of the static and dynamic properties of domain walls is of pressing interest. In this work, we use first-principles methods to study the structures of 180° domain walls, both in their relaxed state and along the ferroelectric reversal pathway, in ferroelectrics belonging to the family of corundum derivatives. Our calculations predict their orientation, formation energy, and migration energy, and also identify important couplings between polarization, magnetization, and chirality at the domain walls. Finally, we point out a strong empirical correlation between the height of the domain-wall mediated polarization reversal barrier and the local bonding environment of the mobile A cations as measured by bond valence sums. Our results thus provide both theoretical and empirical guidance to further search for ferroelectric candidates in materials of the corundum derivative family. The work is supported by ONR Grant N00014-12-1-1035.

  17. FUSION++: A New Data Assimilative Model for Electron Density Forecasting

    NASA Astrophysics Data System (ADS)

    Bust, G. S.; Comberiate, J.; Paxton, L. J.; Kelly, M.; Datta-Barua, S.

    2014-12-01

There is a continuing need within the operational space weather community, both civilian and military, for accurate, robust data assimilative specifications and forecasts of the global electron density field, as well as derived RF application product specifications and forecasts obtained from the electron density field. The spatial scales of interest range from a hundred to a few thousand kilometers horizontally (synoptic large scale structuring) and meters to kilometers (small scale structuring that causes scintillations). RF space weather applications affected by electron density variability on these scales include navigation, communication and geo-location of RF frequencies ranging from hundreds of Hz to GHz. For many of these applications, the necessary forecast time periods range from nowcasts to 1-3 hours. For more "mission planning" applications, necessary forecast times can range from hours to days. In this paper we present a new ionosphere-thermosphere (IT) specification and forecast model being developed at JHU/APL based upon the well-known data assimilation algorithms Ionospheric Data Assimilation Four Dimensional (IDA4D) and Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). This new forecast model, "Forward Update Simple IONosphere model Plus IDA4D Plus EMPIRE" (FUSION++), ingests data from observations related to electron density, winds, electric fields and neutral composition and provides improved specification and forecast of electron density. In addition, the new model provides improved specification of winds, electric fields and composition. We will present a short overview and derivation of the methodology behind FUSION++, some preliminary results using real observational sources, example derived RF application products such as HF bi-static propagation, and initial comparisons with independent data sources for validation.

  18. Mechanistic quantitative structure-activity relationship model for the photoinduced toxicity of polycyclic aromatic hydrocarbons. 2: An empirical model for the toxicity of 16 polycyclic aromatic hydrocarbons to the duckweed Lemna gibba L. G-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, X.D.; Krylov, S.N.; Ren, L.

    1997-11-01

Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification additively contribute to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production in simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation to toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation to toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation to toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.
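The additivity claim (toxicity tracking the sum of the photomodification rate and the PSF) amounts to a one-predictor regression on that summed descriptor. A generic least-squares sketch under that assumption; the variable names are illustrative, not the authors' code:

```python
def fit_line(x, y):
    """Ordinary least squares for y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

# Toxicity modelled against the summed descriptor for each PAH, e.g.:
# a, b = fit_line([pm + psf for pm, psf in zip(photomod_rates, psfs)],
#                 toxicities)
```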

  19. MiR-137-derived polygenic risk: effects on cognitive performance in patients with schizophrenia and controls.

    PubMed

    Cosgrove, D; Harold, D; Mothersill, O; Anney, R; Hill, M J; Bray, N J; Blokland, G; Petryshen, T; Richards, A; Mantripragada, K; Owen, M; O'Donovan, M C; Gill, M; Corvin, A; Morris, D W; Donohoe, G

    2017-01-24

Variants at microRNA-137 (MIR137), one of the most strongly associated schizophrenia risk loci identified to date, have been associated with poorer cognitive performance. As microRNA-137 is known to regulate the expression of ~1900 other genes, including several that are independently associated with schizophrenia, we tested whether this gene set was also associated with variation in cognitive performance. Our analysis was based on an empirically derived list of genes whose expression was altered by manipulation of MIR137 expression. This list was cross-referenced with genome-wide schizophrenia association data to construct individual polygenic scores. We then tested, in a sample of 808 patients and 192 controls, whether these risk scores were associated with altered performance on cognitive functions known to be affected in schizophrenia. A subgroup of healthy participants also underwent functional imaging during memory (n=108) and face processing tasks (n=83). Increased polygenic risk within the empirically derived miR-137 regulated gene score was associated with significantly lower performance on intelligence quotient, working memory and episodic memory. These effects were observed most clearly at a polygenic threshold of P=0.05, although significant results were observed at all three thresholds analyzed. This association was found independently for the gene set as a whole, excluding the schizophrenia-associated MIR137 SNP itself. Analysis of the spatial working memory fMRI task further suggested that increased risk score (thresholded at P=10^-5) was significantly associated with increased activation of the right inferior occipital gyrus. In conclusion, these data are consistent with emerging evidence that MIR137-associated risk for schizophrenia may relate to its broader downstream genetic effects.
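A polygenic score of the kind described is essentially a p-value-thresholded, weighted allele count. A minimal sketch; the log-odds-ratio weighting is the conventional choice and is assumed here rather than taken from the paper:

```python
def polygenic_score(allele_counts, log_odds, p_values, p_threshold=0.05):
    """Sum of risk-allele counts (0/1/2 per SNP) weighted by GWAS log
    odds ratios, restricted to SNPs whose association p-value passes
    the chosen threshold."""
    return sum(g * w
               for g, w, p in zip(allele_counts, log_odds, p_values)
               if p <= p_threshold)
```

Scores are computed per individual at each threshold (the abstract reports three), then tested against cognitive phenotypes.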

  20. A Solution to the Cosmological Problem of Relativity Theory

    NASA Astrophysics Data System (ADS)

    Janzen, Daryl

    After nearly a century of scientific investigation, the standard cosmological theory continues to have many unexplained problems, which invariably amount to one troubling statement: we know of no good reason for the Universe to appear just as it does, which is described extremely well by the flat ΛCDM cosmological model. Therefore, the problem is not that the physical model is at all incompatible with observation, but that, as our empirical results have been increasingly constrained, it has also become increasingly obvious that the Universe does not meet our prior expectations; e.g., the evidence suggests that the Universe began from a singularity of the theory that is used to describe it, and with space expanding thereafter in cosmic time, even though relativity theory is thought to imply that no such objective foliation of the spacetime continuum should reasonably exist. Furthermore, the expanding Universe is well-described as being flat, isotropic, and homogeneous, even though its shape and expansion rate are everywhere supposed to be the products of local energy-content---and the necessary prior uniform distribution, of just the right amount of matter for all three of these conditions to be met, could not have been causally determined to begin with. And finally, the empirically constrained density parameters now indicate that all of the matter that we directly observe should make up only four percent of the total, so that the dominant forms of energy in the Universe should be dark energy in the form of a cosmological constant, Λ, and cold dark matter (CDM). The most common ways of attacking these problems have been: to apply modifications to the basic physical model, e.g. 
as in the inflation and quintessence theories which strive to resolve the horizon, flatness, and cosmological constant problems; to use particle physics techniques in order to formulate the description of dark matter candidates that might fit with observations; and, in the case of the Big Bang singularity, to appeal to the need for a quantum theory of gravity. This thesis takes a very different approach to the problem, in hypothesising that, because our physical model really does appear to do a very good job of describing the observed cosmic expansion rate, and all the data indicate that our Universe might well expand precisely according to the flat ΛCDM scale-factor, it may not be the model, but our basic expectations that need to be modified in order to derive a physical theory that stands in reasonable agreement with the empirical results; i.e., that it may actually be that we need to re-examine, and rationally modify our expectations of what should theoretically be, so that we might derive a theory to explain the empirical results of cosmology, which would be based solely on reasonably acceptable first principles. Therefore, a self-consistent theory is constructed here, upon re-consideration of the cosmological foundations of relativity theory, which eventually does afford an explanation of the cosmological problem, as it provides good reason to actually expect observations in the fundamental rest-frame to be described precisely by the flat ΛCDM scale-factor which has been empirically constrained.

  1. Forest impact estimated with NOAA AVHRR and landsat TM data related to an empirical hurricane wind-field distribution

    USGS Publications Warehouse

    Ramsey, Elijah W.; Hodgson, M.E.; Sapkota, S.K.; Nelson, G.A.

    2001-01-01

    An empirical model was used to relate forest type and hurricane-impact distribution with wind speed and duration to explain the variation of hurricane damage among forest types along the Atchafalaya River basin of coastal Louisiana. Forest-type distribution was derived from Landsat Thematic Mapper image data, hurricane-impact distribution from a suite of transformed advanced very high resolution radiometer images, and wind speed and duration from a wind-field model. The empirical model explained 73%, 84%, and 87% of the impact variances for open, hardwood, and cypress-tupelo forests, respectively. These results showed that the estimated impact for each forest type was highly related to the duration and speed of extreme winds associated with Hurricane Andrew in 1992. The wind-field model projected that the highest wind speeds were in the southern basin, dominated by cypress-tupelo and open forests, while lower wind speeds were in the northern basin, dominated by hardwood forests. This evidence could explain why, on average, the impact to cypress-tupelos was more severe than to hardwoods, even though cypress-tupelos are less susceptible to wind damage. Further, examination of the relative importance of wind speed in explaining the impact severity to each forest type showed that the impact to hardwood forests was mainly related to tropical-depression to tropical-storm force wind speeds. Impacts to cypress-tupelo and open forests (a mixture of willows and cypress-tupelo) were broadly related to tropical-storm force wind speeds and by wind speeds near and somewhat in excess of hurricane force. Decoupling the importance of duration from speed in explaining the impact severity to the forests could not be fully realized. Most evidence, however, hinted that impact severity was positively related to higher durations at critical wind speeds. 
Wind-speed intervals, which were important in explaining the impact severity on hardwoods, showed that higher durations, but not the highest wind speeds, were concentrated in the northern basin, dominated by hardwoods. The extreme impacts associated with the cypress-tupelo forests in the southeast corner of the basin intersected the highest durations as well as the highest wind speeds. © 2001 Published by Elsevier Science Inc.

  2. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
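The equal-variance binormal case can be checked numerically in a few lines. A sketch assuming the standard closed form c = Φ((μ1 − μ0)/(σ√2)), which is consistent with the abstract's dependence on the standardized difference; this is an illustration, not the authors' simulation code:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binormal_c(mu0, mu1, sigma):
    """Analytical c-statistic for equal-variance binormal groups."""
    return phi((mu1 - mu0) / (sigma * math.sqrt(2.0)))

def empirical_c(cases, controls):
    """Empirical c-statistic: fraction of correctly ordered
    case/control pairs, ties counted as one half."""
    wins = sum(1.0 if c > k else (0.5 if c == k else 0.0)
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

random.seed(0)
cases = [random.gauss(1.0, 1.0) for _ in range(1000)]
controls = [random.gauss(0.0, 1.0) for _ in range(1000)]
# empirical_c(cases, controls) should match binormal_c(0.0, 1.0, 1.0)
# to within Monte Carlo error.
```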

  3. Nucleon form factors in dispersively improved chiral effective field theory. II. Electromagnetic form factors

    NASA Astrophysics Data System (ADS)

    Alarcón, J. M.; Weiss, C.

    2018-05-01

We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining chiral effective field theory (χEFT) and dispersion analysis. The spectral functions on the two-pion cut at t > 4Mπ² are constructed using the elastic unitarity relation and an N/D representation. χEFT is used to calculate the real functions J±1(t) = f±1(t)/Fπ(t) (ratios of the complex ππ → N̄N partial-wave amplitudes and the timelike pion FF), which are free of ππ rescattering. Rescattering effects are included through the empirical timelike pion FF |Fπ(t)|². The method allows us to compute the isovector EM spectral functions up to t ≈ 1 GeV² with controlled accuracy (leading order, next-to-leading order, and partial next-to-next-to-leading order). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at t = 0 (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives, which are not affected by higher-order chiral corrections and are obtained almost parameter-free in our approach, and explain their collective behavior. We estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-Q² FF data is achieved up to ≈0.5 GeV² for GE, and up to ≈0.2 GeV² for GM. Our results can be used to guide the analysis of low-Q² elastic scattering data and the extraction of the proton charge radius.
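For reference, the generic once-subtracted dispersion relation that such analyses employ has the textbook form (the paper's precise subtraction scheme may differ):

```latex
F(t) = F(0) + \frac{t}{\pi} \int_{4 M_\pi^2}^{\infty} \mathrm{d}t' \,
       \frac{\operatorname{Im} F(t')}{t' \, (t' - t - i\epsilon)}
```

with Im F(t') supplied on the two-pion cut by the spectral functions described in the abstract, and the subtraction constant F(0) fixed by the nucleon charge.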

  4. Autonomy and Authority in Public Research Organisations: Structure and Funding Factors.

    PubMed

    Cruz-Castro, Laura; Sanz-Menéndez, Luis

    2018-01-01

    This paper establishes a structural typology of the organisational configurations of public research organisations which vary in their relative internal sharing of authority between researchers and managers; we distinguish between autonomous, heteronomous and managed research organisations. We assume that there are at least two sources of legitimate authority within research organisations, one derived from formal hierarchy (organisational leadership) and another derived from the research community (professional); the balance of authority between researchers and managers is essentially structural but is empirically mediated by the funding portfolio of organisations and the corresponding endowment of resources at the disposal of leaders or researchers. Changes in the level, sources and strings of organisational and individual research funding are expected to affect the balance of internal authority in different ways depending on the organisational configuration, and to open the door to the influence of external actors in the development of research agendas.

  5. Journey to the MBH-σ relation: the fate of low-mass black holes in the Universe

    NASA Astrophysics Data System (ADS)

    Volonteri, Marta; Natarajan, Priyamvada

    2009-12-01

In this paper, we explore the establishment and evolution of the empirical correlation between black hole mass (MBH) and velocity dispersion (σ) with redshift. We trace the growth and accretion history of massive black holes (MBHs) starting from high-redshift seeds that are planted via physically motivated prescriptions. Two seeding models are explored in this work: `light seeds', derived from Population III remnants, and `heavy seeds', derived from direct gas collapse. Even though the seeds themselves do not satisfy the MBH-σ relation initially, we find that the relation can be established and maintained at all times if self-regulating accretion episodes are associated with major mergers. The massive end of the MBH-σ relation is established early, and lower mass MBHs migrate on to it as hierarchical merging proceeds. How MBHs migrate towards the relation depends critically on the seeding prescription. Light seeds initially lie well below the MBH-σ relation, and MBHs can grow via steady accretion episodes unhindered by self-regulation. In contrast, for the heavy seeding model, MBHs are initially over-massive compared to the empirical correlation, and the host haloes assemble prior to kick-starting the growth of the MBH. We find that the existence of the MBH-σ correlation is purely a reflection of the merging hierarchy of massive dark matter haloes. The slope and scatter of the relation, however, appear to be a consequence of the seeding mechanism and the self-regulation prescription. We expect flux limited active galactic nucleus surveys to select MBHs that have already migrated on to the MBH-σ relation. Similarly, the Laser Interferometer Space Antenna (LISA) is also likely to be biased towards detecting merging MBHs that preferentially inhabit the MBH-σ relation. These results are a consequence of major mergers being more common at high redshift for the most massive, biased, galaxies that host MBHs which have already migrated on to the MBH-σ relation.
We also predict the existence of a large population of low-mass `hidden' MBHs at high redshift which can easily escape detection. Additionally, we find that if MBH seeds are massive, ~10^5 Msolar, the low-mass end of the MBH-σ relation flattens towards an asymptotic value, creating a characteristic `plume'.

  6. First Postpubertal Male Same-Sex Sexual Experience in the National Health and Social Life Survey: Current Functioning in Relation to Age at Time of Experience and Partner Age.

    PubMed

    Rind, Bruce

    2017-07-17

    This study used an important data set to examine long-term adjustment and functioning in men, who as adolescents had sexual experiences with men. The data came from the National Health and Social Life Survey, which used a national probability sample (Laumann, Gagnon, Michael, & Michaels, 1994). Three perspectives were considered, which offered different predictions. From the "child sexual abuse" (CSA) paradigm, which dominates clinical, legal, and lay views, expected was robust evidence for poorer adjustment, given that intense harm is assumed to be intrinsic. From the "mainstream psychological" perspective, derived from the CSA paradigm but more scientifically based, poorer adjustment was also expected, but with less magnitude, given that minor-adult sex is seen as posing a serious risk of harm, which may not universally apply. From the "relevant-empirical" perspective, which infers response to male adolescent-adult same-sex sex from relevant prior empirical research (as opposed to clinical cases or the female experience), expected was little or no evidence for poorer adjustment. Results supported the relevant-empirical perspective. Compared to several control groups (i.e., men whose first postpubertal same-sex sex was as men with other men; men with no postpubertal same-sex sexual experience or child-adult sex), men whose first postpubertal same-sex sex was as adolescents with men were just as well adjusted in terms of health, happiness, sexual functioning, and educational and career achievement. Results are discussed in relation to cultural influences, other cultures, and comparative data from primates.

  7. Assessment of microclimate conditions under artificial shades in a ginseng field.

    PubMed

    Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo

    2016-01-01

Knowledge of microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in the 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Air temperature and relative humidity were similar under the shade and outside it. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in night-time hours under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). The effectiveness of correction schemes for an empirical LWD model differed by rainfall conditions when LWD under the shade was estimated from weather measurements outside the shade. During dew-eligible days, a correction scheme was slightly effective (10%) in reducing estimation errors under the shade, whereas a correction approach for rainfall-eligible days reduced LWD estimation errors by 17%. Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development.

  8. Assessment of microclimate conditions under artificial shades in a ginseng field

    PubMed Central

    Lee, Kyu Jong; Lee, Byun-Woo; Kang, Je Yong; Lee, Dong Yun; Jang, Soo Won; Kim, Kwang Soo

    2015-01-01

Background Knowledge of microclimate conditions under artificial shades in a ginseng field would facilitate climate-aware management of ginseng production. Methods Weather data were measured under the shade and outside the shade at two fields located in Gochang-gun and Jeongeup-si, Korea, in the 2011 and 2012 seasons to assess temperature and humidity conditions under the shade. An empirical approach was developed and validated for the estimation of leaf wetness duration (LWD) using weather measurements outside the shade as inputs to the model. Results Air temperature and relative humidity were similar under the shade and outside it. For example, temperature conditions favorable for ginseng growth, e.g., between 8°C and 27°C, occurred slightly less frequently in night-time hours under the shade (91%) than outside (92%). Humidity conditions favorable for development of a foliar disease, e.g., relative humidity > 70%, occurred slightly more frequently under the shade (84%) than outside (82%). The effectiveness of correction schemes for an empirical LWD model differed by rainfall conditions when LWD under the shade was estimated from weather measurements outside the shade. During dew-eligible days, a correction scheme was slightly effective (10%) in reducing estimation errors under the shade, whereas a correction approach for rainfall-eligible days reduced LWD estimation errors by 17%. Conclusion Weather measurements outside the shade and LWD estimates derived from these measurements would be useful as inputs for decision support systems to predict ginseng growth and disease development. PMID:26843827

  9. Raindrop Size Distribution Measurements at 4,500 m on the Tibetan Plateau During TIPEX-III

    NASA Astrophysics Data System (ADS)

    Chen, Baojun; Hu, Zhiqun; Liu, Liping; Zhang, Guifu

    2017-10-01

    As part of the third Tibetan Plateau Atmospheric Scientific Experiment field campaign, raindrop size distribution (DSD) measurements were taken with a laser optical disdrometer in Naqu, China, at 4,508 m above sea level (asl) during the summer months of 2013, 2014, and 2015. The characteristics of DSDs for five different rain rates, for two rain types (convective and stratiform), and for daytime and nighttime rains were studied. The shapes of the averaged DSDs were similar for different rain rates, and the width increased with rainfall intensity. Little difference was found in stratiform DSDs between day and night, whereas convective DSDs exhibited a significant day-night difference. Daytime convective DSDs had larger mass-weighted mean diameters (Dm) and smaller generalized intercepts (NW) than the nighttime DSDs. The constrained relations between the intercept N0 and shape μ, slope Λ and μ, and NW and Dm of gamma DSDs were derived. We also derived empirical relations between Dm and the radar reflectivity factor in the Ku and Ka bands.
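The gamma-DSD quantities named in this record can be made concrete with a short numerical sketch. The parameters below (N0, μ, Λ) are illustrative placeholders, not the TIPEX-III fits; the check is that the mass-weighted mean diameter computed from the moments reproduces the closed form Dm = (4 + μ)/Λ for a gamma DSD.

```python
import numpy as np

def gamma_dsd(d, n0, mu, lam):
    """Gamma drop size distribution N(D) = N0 * D^mu * exp(-lam * D)."""
    return n0 * d**mu * np.exp(-lam * d)

def mass_weighted_mean_diameter(d, n):
    """Dm is the ratio of the 4th to the 3rd moment of the DSD;
    on a uniform grid the grid spacing cancels in the ratio."""
    return np.sum(n * d**4) / np.sum(n * d**3)

# Illustrative (not fitted) parameters on a fine uniform diameter grid (mm)
d = np.linspace(0.01, 10.0, 5000)
n0, mu, lam = 8000.0, 2.0, 2.5
dm_numeric = mass_weighted_mean_diameter(d, gamma_dsd(d, n0, mu, lam))
dm_closed_form = (4.0 + mu) / lam   # exact Dm for a gamma DSD
```

The same moment ratio applied to a measured (non-parametric) DSD gives the Dm statistic the record compares between day and night rains.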

  10. Empirical photometric calibration of the Gaia red clump: Colours, effective temperature, and absolute magnitude

    NASA Astrophysics Data System (ADS)

    Ruiz-Dern, L.; Babusiaux, C.; Arenou, F.; Turon, C.; Lallement, R.

    2018-01-01

    Context. Gaia Data Release 1 allows the recalibration of standard candles such as the red clump stars. To use those stars, they first need to be accurately characterised. In particular, colours are needed to derive interstellar extinction. As no filter is available for the first Gaia data release and to avoid the atmosphere model mismatch, an empirical calibration is unavoidable. Aims: The purpose of this work is to provide the first complete and robust photometric empirical calibration of the Gaia red clump stars of the solar neighbourhood through colour-colour, effective temperature-colour, and absolute magnitude-colour relations from the Gaia, Johnson, 2MASS, HIPPARCOS, Tycho-2, APASS-SLOAN, and WISE photometric systems, and the APOGEE DR13 spectroscopic temperatures. Methods: We used a 3D extinction map to select low reddening red giants. To calibrate the colour-colour and the effective temperature-colour relations, we developed a MCMC method that accounts for all variable uncertainties and selects the best model for each photometric relation. We estimated the red clump absolute magnitude through the mode of a kernel-based distribution function. Results: We provide 20 colour versus G-Ks relations and the first Teff versus G-Ks calibration. We obtained the red clump absolute magnitudes for 15 photometric bands with, in particular, MKs = (-1.606 ± 0.009) and MG = (0.495 ± 0.009) + (1.121 ± 0.128)(G-Ks-2.1). We present a dereddened Gaia-TGAS HR diagram and use the calibrations to compare its red clump and its red giant branch bump with Padova isochrones. Full Table A.1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A116
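The absolute-magnitude calibration quoted above, MG = 0.495 + 1.121(G-Ks-2.1), lends itself to a quick distance sketch. The code below neglects extinction and the quoted uncertainties, so it only illustrates how such a red clump calibration is applied as a standard candle.

```python
def red_clump_abs_mag_g(g_minus_ks):
    """M_G calibration quoted in the abstract (uncertainty terms omitted)."""
    return 0.495 + 1.121 * (g_minus_ks - 2.1)

def distance_pc(apparent_g, g_minus_ks):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc);
    extinction is neglected, so this is illustrative only."""
    m_g = red_clump_abs_mag_g(g_minus_ks)
    return 10.0 ** ((apparent_g - m_g + 5.0) / 5.0)

# A clump star observed at G = 10.495 with colour G-Ks = 2.1
d_example = distance_pc(10.495, 2.1)
```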

  11. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water and in fact supersede most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.

  12. 'Nobody tosses a dwarf!' The relation between the empirical and the normative reexamined.

    PubMed

    Leget, Carlo; Borry, Pascal; de Vries, Raymond

    2009-05-01

    This article discusses the relation between empirical and normative approaches in bioethics. The issue of dwarf tossing, while admittedly unusual, is chosen as a point of departure because it challenges the reader to look with fresh eyes upon several central bioethical themes, including human dignity, autonomy, and the protection of vulnerable people. After an overview of current approaches to the integration of empirical and normative ethics, we consider five ways that the empirical and normative can be brought together to speak to the problem of dwarf tossing: prescriptive applied ethics, theoretical ethics, critical applied ethics, particularist ethics and integrated empirical ethics. We defend a position of critical applied ethics that allows for a two-way relation between empirical and normative theories. Against efforts fully to integrate the normative and the empirical into one synthesis, we propose that the two should stand in tension and relation to one another. The approach we endorse acknowledges that a social practice can and should be judged both by the gathering of empirical data and by normative ethics. Critical applied ethics uses a five stage process that includes: (a) determination of the problem, (b) description of the problem, (c) empirical study of effects and alternatives, (d) normative weighing and (e) evaluation of the effects of a decision. In each stage, we explore the perspective from both the empirical (sociological) and the normative ethical point of view. We conclude by applying our five-stage critical applied ethics to the example of dwarf tossing.

  13. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities while considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers.
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
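The "linear weighted blending" used to stitch the 14-hour solutions is not specified in detail in this record; a minimal sketch of one plausible ramp-weight scheme, assuming equal-length overlap windows, is:

```python
import numpy as np

def blend_overlap(seg_a, seg_b):
    """Linearly blend the overlapping tails of two consecutive density
    segments: the weight ramps from all seg_a to all seg_b, so the
    stitched series matches seg_a at the start of the overlap and
    seg_b at its end."""
    seg_a = np.asarray(seg_a, dtype=float)
    seg_b = np.asarray(seg_b, dtype=float)
    w = np.linspace(0.0, 1.0, len(seg_a))
    return (1.0 - w) * seg_a + w * seg_b

# Two 14-hour solutions that disagree slightly in their overlap window
overlap_a = np.full(9, 2.0)   # density from the earlier solution (arbitrary units)
overlap_b = np.full(9, 4.0)   # density from the later solution
stitched = blend_overlap(overlap_a, overlap_b)
```

Whatever the exact weights, the point of the scheme is continuity: the blended values meet each segment exactly at the ends of the overlap.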

14. Protein structure refinement using a quantum mechanics-based chemical shielding predictor

    PubMed Central

    2017-01-01

The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and Cβ chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the Cα-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for Cα and N. Conformational averaging has a relatively small effect (0.1–0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in Cα-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins.
In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches such as QM/MM or linear scaling approaches, or in interpreting protein structural dynamics from QM-derived chemical shifts. PMID:28451325

  15. Competency-Based Curriculum Development: A Pragmatic Approach

    ERIC Educational Resources Information Center

    Broski, David; And Others

    1977-01-01

    Examines the concept of competency-based education, describes an experience-based model for its development, and discusses some empirically derived rules-of-thumb for its application in allied health. (HD)

  16. Systematic approach to developing empirical interatomic potentials for III-N semiconductors

    NASA Astrophysics Data System (ADS)

    Ito, Tomonori; Akiyama, Toru; Nakamura, Kohji

    2016-05-01

A systematic approach to the derivation of empirical interatomic potentials is developed for III-N semiconductors with the aid of ab initio calculations. The parameter values of the empirical potential, based on a bond order potential, are determined by reproducing the cohesive energy differences among 3-fold coordinated hexagonal, 4-fold coordinated zinc blende and wurtzite, and 6-fold coordinated rocksalt structures in BN, AlN, GaN, and InN. The bond order p is successfully introduced as a function of the coordination number Z in the form p = a exp(-bZ^n) if Z ≤ 4 and p = (4/Z)^α if Z ≥ 4. Moreover, the energy difference between wurtzite and zinc blende structures can be successfully evaluated by considering interactions beyond the second-nearest neighbors as a function of ionicity. This approach is feasible for developing empirical interatomic potentials applicable to systems consisting of poorly coordinated atoms at surfaces and interfaces, including nanostructures.
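The piecewise bond-order form is easy to state in code. The parameter values below are placeholders for illustration, not the fitted values for BN, AlN, GaN, or InN:

```python
import math

def bond_order(z, a=1.0, b=0.05, n=2.0, alpha=1.0):
    """Piecewise bond order from the abstract's functional form:
    p = a * exp(-b * Z^n) for Z <= 4, and p = (4/Z)^alpha for Z >= 4.
    a, b, n, alpha here are placeholders, not the paper's fitted values."""
    if z <= 4:
        return a * math.exp(-b * z**n)
    return (4.0 / z) ** alpha

# Bond order weakens as coordination rises past 4 (e.g. rocksalt, Z = 6)
p_zincblende = bond_order(4)   # 4-fold coordination
p_rocksalt = bond_order(6)     # 6-fold coordination
```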

  17. Predictive and mechanistic multivariate linear regression models for reaction development

    PubMed Central

    Santiago, Celine B.; Guo, Jing-Yao

    2018-01-01

    Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711

  18. An Empirically-Derived Index of High School Academic Rigor. ACT Working Paper 2017-5

    ERIC Educational Resources Information Center

    Allen, Jeff; Ndum, Edwin; Mattern, Krista

    2017-01-01

    We derived an index of high school academic rigor by optimizing the prediction of first-year college GPA based on high school courses taken, grades, and indicators of advanced coursework. Using a large data set (n~108,000) and nominal parameterization of high school course outcomes, the high school academic rigor (HSAR) index capitalizes on…

  19. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
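Class 2 above (a mixture of independent multivariate Poissons) is simple to sample, and the sketch below shows how mixing alone induces cross-dimensional dependence even though each component factorizes; the weights and rates are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_poisson_mixture(n, weights, rates, rng=rng):
    """Draw n count vectors from a K-component mixture of independent
    multivariate Poissons. weights: (K,) mixture weights; rates: (K, D)
    per-component Poisson rates. Mixing induces positive dependence."""
    weights = np.asarray(weights)
    rates = np.asarray(rates)
    comps = rng.choice(len(weights), size=n, p=weights)
    return rng.poisson(rates[comps])

counts = sample_poisson_mixture(20000, [0.5, 0.5], [[1.0, 1.0], [10.0, 10.0]])
cross_cov = np.cov(counts.T)[0, 1]   # theoretical value is 20.25 for these rates
```

Within each component the two dimensions are independent, yet the pooled sample shows strong positive covariance, which is exactly what makes this class useful for dependent count data.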

  20. Verification of GCM-generated regional seasonal precipitation for current climate and of statistical downscaling estimates under changing climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busuioc, A.; Storch, H. von; Schnur, R.

Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2 × CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
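The regression step of such a downscaling procedure can be sketched with synthetic data. The sketch below substitutes a plain least-squares fit from SLP principal components to station precipitation for the paper's CCA-based regression, so it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-ins: 200 winters of large-scale SLP reduced to 3 principal
# components, and seasonal precipitation at 5 stations linearly linked
# to them plus noise. (The paper's actual link is a CCA-based regression.)
slp_pcs = rng.standard_normal((200, 3))
true_map = rng.standard_normal((3, 5))
precip = slp_pcs @ true_map + 0.1 * rng.standard_normal((200, 5))

# Least-squares estimate of the downscaling operator, then a hindcast
coef, *_ = np.linalg.lstsq(slp_pcs, precip, rcond=None)
downscaled = slp_pcs @ coef
rms_error = float(np.sqrt(np.mean((downscaled - precip) ** 2)))
```

The diagnostic use described in the record amounts to feeding GCM-simulated large-scale fields through such a fitted operator and comparing the result against observed station statistics.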

  1. Comparison of safety effect estimates obtained from empirical Bayes before-after study, propensity scores-potential outcomes framework, and regression model with cross-sectional data.

    PubMed

    Wood, Jonathan S; Donnell, Eric T; Porter, Richard J

    2015-02-01

    A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
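The empirical Bayes estimate referred to here is commonly written as a weighted blend of the safety performance function (SPF) prediction and the observed crash count. The sketch below assumes the w = 1/(1 + k·μ) weight of the Highway Safety Manual parameterization, which may differ from the exact form used in the paper:

```python
def eb_expected_crashes(observed, predicted, overdispersion):
    """Empirical Bayes estimate of a site's expected crash frequency:
    a weighted blend of the SPF prediction and the observed count,
    with w = 1 / (1 + k * predicted), k the negative binomial
    overdispersion parameter (assumed HSM-style parameterization)."""
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

# A site where the SPF predicts 5 crashes but 10 were observed
estimate = eb_expected_crashes(observed=10.0, predicted=5.0, overdispersion=0.2)
```

The blend shrinks noisy site observations toward the SPF prediction, which is how the EB before-after design counters regression-to-the-mean in site selection.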

  2. Analytic, empirical and delta method temperature derivatives of D-D and D-T fusion reactivity formulations, as a means of verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbrunner, James R.; Booker, Jane M.

We examine the derivatives with respect to temperature for various deuterium-tritium (D-T) and deuterium-deuterium (D-D) fusion-reactivity formulations. Langenbrunner and Makaruk [1] studied this as a means of understanding the time and temperature domain of reaction history measured in dynamic fusion experiments. Here, we consider the temperature-derivative dependence of fusion reactivity as a means of exercising and verifying the consistency of the various reactivity formulations.
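A standard way to exercise such formulations is to compare an analytic temperature derivative against a finite-difference check. The sketch below does this for a toy power-law stand-in, not an actual D-T reactivity formulation:

```python
def d_dt_central(f, t, h=1e-4):
    """Second-order central-difference approximation to df/dT."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

# Toy stand-in for a reactivity curve <sigma*v>(T): a steep power law.
# Real D-T/D-D reactivity formulations are far more involved.
def toy_reactivity(t):
    return 1e-4 * t**3.5

analytic = 3.5e-4 * 2.0**2.5            # exact d/dT of the toy curve at T = 2
numeric = d_dt_central(toy_reactivity, 2.0)
```

Agreement between the analytic and numerical derivatives, formulation by formulation, is the kind of consistency check the abstract describes.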

  3. Soil-plant transfer models for metals to improve soil screening value guidelines valid for São Paulo, Brazil.

    PubMed

    Dos Santos-Araujo, Sabrina N; Swartjes, Frank A; Versluijs, Kees W; Moreno, Fabio Netto; Alleoni, Luís R F

    2017-11-07

In Brazil, there is a lack of combined soil-plant data that could explain the influence of specific climate, soil conditions, and crop management on heavy metal uptake and accumulation by plants. As a consequence, soil-plant relationships to be used in risk assessments or for derivation of soil screening values are not available. Our objective in this study was to develop empirical soil-plant models for Cd, Cu, Pb, Ni, and Zn, in order to derive appropriate soil screening values representative of humid tropical regions such as the state of São Paulo (SP), Brazil. Soil and plant samples from 25 vegetable species in the production areas of SP were collected. The concentrations of metals found in these soil samples were relatively low; therefore, data from temperate regions were included in our study. The derived soil-plant relations performed well under SP conditions for 8 out of 10 combinations of metal and vegetable species. The bioconcentration factor (BCF) values for Cd, Cu, Ni, Pb, and Zn in lettuce and for Cd, Cu, Pb, and Zn in carrot were determined under three exposure scenarios at pH 5 and 6. The application of the soil-plant models and the BCFs proposed in this study can be an important tool to derive national soil quality criteria. However, this methodological approach includes data assessed under different climatic conditions and soil types and needs to be carefully considered.
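The bioconcentration factor used in this record is a simple ratio, and screening values can be back-calculated from it. The sketch below gives only the bare definition with hypothetical numbers; the paper's soil-plant models additionally condition on pH and other soil properties:

```python
def bioconcentration_factor(plant_conc, soil_conc):
    """BCF: metal concentration in plant tissue divided by that in soil
    (both in the same units, e.g. mg/kg)."""
    if soil_conc <= 0:
        raise ValueError("soil concentration must be positive")
    return plant_conc / soil_conc

def soil_screening_value(plant_limit, bcf):
    """Back-calculate the soil concentration at which produce would reach
    a given food-safety limit, assuming a constant BCF."""
    return plant_limit / bcf

# Hypothetical numbers, for illustration only
bcf_cd_lettuce = bioconcentration_factor(plant_conc=0.5, soil_conc=2.0)
screening = soil_screening_value(plant_limit=0.2, bcf=bcf_cd_lettuce)
```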

  4. Observability, Visualizability and the Question of Metaphysical Neutrality

    NASA Astrophysics Data System (ADS)

    Wolff, Johanna

    2015-09-01

    Theories in fundamental physics are unlikely to be ontologically neutral, yet they may nonetheless fail to offer decisive empirical support for or against particular metaphysical positions. I illustrate this point by close examination of a particular objection raised by Wolfgang Pauli against Hermann Weyl. The exchange reveals that both parties to the dispute appeal to broader epistemological principles to defend their preferred metaphysical starting points. I suggest that this should make us hesitant to assume that in deriving metaphysical conclusions from physical theories we place our metaphysical theories on a purely empirical foundation. The metaphysics within a particular physical theory may well be the result of a priori assumptions in the background, not particular empirical findings.

  5. Path integral for equities: Dynamic correlation and empirical analysis

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan

    2012-02-01

    This paper develops a model to describe the unequal time correlation between rate of returns of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model and different de-noising methods are used to capture the signals concealed in the rate of return. The detailed results of this Gaussian model show that the different stocks can have strong correlation and the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.

  6. Synthetic temperature profiles derived from Geosat altimetry: Comparison with air-dropped expendable bathythermograph profiles

    NASA Astrophysics Data System (ADS)

    Carnes, Michael R.; Mitchell, Jim L.; de Witt, P. Webb

    1990-10-01

    Synthetic temperature profiles are computed from altimeter-derived sea surface heights in the Gulf Stream region. The required relationships between surface height (dynamic height at the surface relative to 1000 dbar) and subsurface temperature are provided from regression relationships between dynamic height and amplitudes of empirical orthogonal functions (EOFs) of the vertical structure of temperature derived by de Witt (1987). Relationships were derived for each month of the year from historical temperature and salinity profiles from the region surrounding the Gulf Stream northeast of Cape Hatteras. Sea surface heights are derived using two different geoid estimates, the feature-modeled geoid and the air-dropped expendable bathythermograph (AXBT) geoid, both described by Carnes et al. (1990). The accuracy of the synthetic profiles is assessed by comparison to 21 AXBT profile sections which were taken during three surveys along 12 Geosat ERM ground tracks nearly contemporaneously with Geosat overflights. The primary error statistic considered is the root-mean-square (rms) difference between AXBT and synthetic isotherm depths. The two sources of error are the EOF relationship and the altimeter-derived surface heights. EOF-related and surface height-related errors in synthetic temperature isotherm depth are of comparable magnitude; each translates into about a 60-m rms isotherm depth error, or a combined 80 m to 90 m error for isotherms in the permanent thermocline. EOF-related errors are responsible for the absence of the near-surface warm core of the Gulf Stream and for the reduced volume of Eighteen Degree Water in the upper few hundred meters of (apparently older) cold-core rings in the synthetic profiles. 
The overall rms difference between surface heights derived from the altimeter and those computed from AXBT profiles is 0.15 dyn m when the feature-modeled geoid is used and 0.19 dyn m when the AXBT geoid is used; the portion attributable to altimeter-derived surface height errors alone is 0.03 dyn m less for each. In most cases, the deeper structure of the Gulf Stream and eddies is reproduced well by vertical sections of synthetic temperature, with largest errors typically in regions of high horizontal gradient such as across rings and the Gulf Stream front.

  7. Ideology, science, and people in Amílcar Cabral.

    PubMed

    Neves, José

    2017-01-01

    The present article contributes to the debate on how historians and social scientists perceive and understand relations between ideology and science, which are often seen as realms belonging to rival kingdoms. Following an analysis and critical positioning vis-à-vis Cabralian studies, the text examines how scholars of Cabral have portrayed his agronomic activities. It then undertakes a genealogical analysis of the Cabralian concept of people and suggests that the emergence of this concept in Cabral's discourse derives from the intersection of the development of anti-colonial nationalist thought in the former Portuguese Empire and the development of agrarian studies in metropolitan Portugal.

  8. Quality and price--impact on patient satisfaction.

    PubMed

    Pantouvakis, Angelos; Bouranta, Nancy

    2014-01-01

    The purpose of this paper is to synthesize existing quality-measurement models and apply them to healthcare by combining a Nordic service-quality model with an American service-performance model. Results are based on a questionnaire survey of 1,298 respondents. Service-quality dimensions were derived and related to satisfaction by employing a multinomial logistic model, which allows prediction and service improvement. Qualitative and empirical evidence indicates that customer satisfaction and service quality are multi-dimensional constructs, whose quality components, together with convenience and cost, influence the customer's overall satisfaction. The proposed model identifies important quality and satisfaction issues. It also allows transitions between different responses across studies to be compared.

  9. Ab initio predictions on the rotational spectra of carbon-chain carbene molecules.

    PubMed

    Maluendes, S A; McLean, A D

    1992-12-18

    We predict rotational constants for the carbon-chain molecules H2C=(C=)nC, n=3-8, using ab initio computations, observed values for the earlier members in the series, H2CCC and H2CCCC with n=1 and 2, and empirical geometry corrections derived from comparison of computation and experiment on related molecules. H2CCC and H2CCCC have already been observed by radioastronomy; higher members in the series, because of their large dipole moments, which we have calculated, are candidates for astronomical searches. Our predictions can guide searches and assist in both astronomical and laboratory detection.

  10. Ab initio predictions on the rotational spectra of carbon-chain carbene molecules

    NASA Technical Reports Server (NTRS)

    Maluendes, S. A.; McLean, A. D.; Loew, G. H. (Principal Investigator)

    1992-01-01

    We predict rotational constants for the carbon-chain molecules H2C=(C=)nC, n=3-8, using ab initio computations, observed values for the earlier members in the series, H2CCC and H2CCCC with n=1 and 2, and empirical geometry corrections derived from comparison of computation and experiment on related molecules. H2CCC and H2CCCC have already been observed by radioastronomy; higher members in the series, because of their large dipole moments, which we have calculated, are candidates for astronomical searches. Our predictions can guide searches and assist in both astronomical and laboratory detection.

  11. The wavelength dependent model of extinction in fog and haze for free space optical communication.

    PubMed

    Grabner, Martin; Kvicera, Vaclav

    2011-02-14

    The wavelength dependence of the extinction coefficient in fog and haze is investigated using Mie single-scattering theory. It is shown that the effective radius of the drop size distribution determines the slope of the log-log dependence of the extinction on wavelength in the interval between 0.2 and 2 microns. The relation between atmospheric visibility and the effective radius is derived from the empirical relationship between liquid water content and extinction. Based on these results, a model of the relationship between visibility and the extinction coefficient, with different effective radii for fog and for haze conditions, is proposed.
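    As a minimal illustration of an extinction law with this log-log wavelength dependence, the sketch below uses the classic Kruse haze formula, in which extinction scales with visibility and a wavelength power-law exponent q. This is a widely used textbook model chosen here as an assumption for illustration; it is not the effective-radius model proposed in the paper above.

```python
def kruse_extinction(visibility_km, wavelength_um):
    """Extinction coefficient (1/km) from the empirical Kruse model.

    beta(lambda) = (3.91 / V) * (lambda / 0.55 um)^(-q), with q set by
    visibility V. Shown only as an example of a log-log wavelength
    dependence, not the authors' effective-radius-based model.
    """
    if visibility_km > 50.0:
        q = 1.6
    elif visibility_km > 6.0:
        q = 1.3
    else:
        q = 0.585 * visibility_km ** (1.0 / 3.0)
    return (3.91 / visibility_km) * (wavelength_um / 0.55) ** (-q)
```

At the 550 nm reference wavelength the formula reduces to 3.91/V, and for haze the extinction decreases toward longer (e.g. near-infrared) wavelengths, which is why free-space optical links often favor longer carrier wavelengths.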

  12. Accurately Characterizing the Importance of Wave-Particle Interactions in Radiation Belt Dynamics: The Pitfalls of Statistical Wave Representations

    NASA Technical Reports Server (NTRS)

    Murphy, Kyle R.; Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.

    2016-01-01

    Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, this is assessed during geomagnetic storms using statistically averaged empirical wave models as a function of geomagnetic activity in advanced radiation belt simulations. However, statistical averages poorly characterize extreme events such as geomagnetic storms: storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle, and Kp is a poor proxy for storm-time wave power.

  13. SPRUCE S1 Bog Sphagnum CO2 Flux Measurements and Partitioning into Re and GPP

    DOE Data Explorer

    Walker, A. P. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Carter, K. R. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Nettles, W. R. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Philips, J. R. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Sebestyen, S. D. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Weston, D. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.

    2015-06-01

    This data set provides (1) the results of in-situ Sphagnum-peat hourly net ecosystem exchange (NEE) measured using a LICOR 8100 gas exchange system and (2) the component fluxes -- gross primary production (GPP) and ecosystem respiration (Re) -- derived using empirical regressions. NEE measurements were made from 6 June to 6 November 2014 and 20 March to 10 May 2015. Three 8100 chambers per dominant species (S. magellanicum or S. fallax) were placed in the S1 Bog in relatively open ground where there was no obvious hummock-hollow microtopography. The 8100 chambers were not located in the SPRUCE experimental enclosures.

  14. Downsides of an overly context-sensitive self: implications from the culture and subjective well-being research.

    PubMed

    Suh, Eunkook M

    2007-12-01

    The self becomes context sensitive in service of the need to belong. When it comes to achieving personal happiness, an identity system that derives its worth and meaning excessively from its social context puts itself in a significantly disadvantageous position. This article integrates empirical findings and ideas from the self, subjective well-being, and cross-cultural literatures and tries to offer insight into why East Asian cultural members report surprisingly low levels of happiness. The various cognitive, motivational, behavioral, and affective characteristics of the overly relation-oriented self are discussed as potential explanations. Implications for the study of self and culture are offered.

  15. Stochastic Modeling of Empirical Storm Loss in Germany

    NASA Astrophysics Data System (ADS)

    Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.

    2012-04-01

    Based on German insurance loss data for residential property we derive storm damage functions that relate daily loss with maximum gust wind speed. Over a wide range of loss, steep power-law relationships are found, with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to three. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, in daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
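    A damage function of the reported power-law form, and the sensitivity such steep exponents imply, can be sketched as follows. The scale factor is an arbitrary placeholder; only the exponent range (roughly 8 to 12) comes from the abstract.

```python
def storm_loss(gust_ms, exponent=10.0, scale=1e-12):
    """Daily loss as a power law of maximum gust speed.

    The paper reports spatially varying exponents of roughly 8 to 12;
    the scale factor here is hypothetical and for illustration only.
    """
    return scale * gust_ms ** exponent

# Steepness: at exponent 10, a 10% increase in gust speed (30 -> 33 m/s)
# multiplies the modeled loss by about 2.6 (i.e. 1.1**10).
ratio = storm_loss(33.0) / storm_loss(30.0)
```

This extreme sensitivity to wind speed is why small errors in the gust field translate into large loss-estimate uncertainty, motivating the Monte Carlo confidence bounds described above.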

  16. Bayesian exponential random graph modelling of interhospital patient referral networks.

    PubMed

    Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro

    2017-08-15

    Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Sexual orientation of female-to-male transsexuals: a comparison of homosexual and nonhomosexual types.

    PubMed

    Chivers, M L; Bailey, J M

    2000-06-01

    Homosexual and nonhomosexual (relative to genetic sex) female-to-male transsexuals (FTMs) were compared on a number of theoretically or empirically derived variables. Compared to nonhomosexual FTMs, homosexual FTMs reported greater childhood gender nonconformity, preferred more feminine partners, experienced greater sexual rather than emotional jealousy, were more sexually assertive, had more sexual partners, had a greater desire for phalloplasty, and had more interest in visual sexual stimuli. Homosexual and nonhomosexual FTMs did not differ in their overall desire for masculinizing body modifications, adult gender identity, or importance of partner social status, attractiveness, or youth. These findings indicate that FTMs are not a homogeneous group and vary in ways that may be useful in understanding the relation between sexual orientation and gender identity.

  18. A side-by-side comparison of CPV module and system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew; Marion, Bill; Kurtz, Sarah

    A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.

  19. A Spoken Language Intervention for School-Aged Boys with fragile X Syndrome

    PubMed Central

    McDuffie, Andrea; Machalicek, Wendy; Bullard, Lauren; Nelson, Sarah; Mello, Melissa; Tempero-Feigles, Robyn; Castignetti, Nancy; Abbeduto, Leonard

    2015-01-01

    Using a single case design, a parent-mediated spoken language intervention was delivered to three mothers and their school-aged sons with fragile X syndrome, the leading inherited cause of intellectual disability. The intervention was embedded in the context of shared story-telling using wordless picture books and targeted three empirically derived language support strategies. All sessions were implemented via distance video-teleconferencing. Parent education sessions were followed by 12 weekly clinician coaching and feedback sessions. Data were collected weekly during independent homework and clinician observation sessions. Relative to baseline, mothers increased their use of targeted strategies and dyads increased the frequency and duration of story-related talking. Generalized effects of the intervention on lexical diversity and grammatical complexity were observed. Implications for practice are discussed. PMID:27119214

  20. Why Von Neumann interstellar probes could not exist: nonoptical reflections on modern analytic philosophy, bad arguments, and unutilised data.

    NASA Astrophysics Data System (ADS)

    Goodall, Clive

    1993-08-01

    A decisive and lethal response to a naive radical skepticism concerning the prospects for the existence of Extraterrestrial Intelligence is derivable from core areas of Modern Analytic Philosophy. The naive skeptical view is fundamentally flawed in the way it oversimplifies certain complex issues, failing as it does to recognize a special class of conceptual problems for what they really are and mistakenly treating them instead as empirical issues. Specifically, this skepticism is based upon an untenable, oversimplified model of the 'mind-brain' relation. Moreover, independent logical considerations concerning the mind-brain relation provide evidential grounds for why we should in fact expect a priori that an Alien Intelligence will face constraints upon, and immense difficulties in, making its existence known by non-electromagnetic means.

  1. Systematic Site Characterization at Seismic Stations combined with Empirical Spectral Modeling: critical data for local hazard analysis

    NASA Astrophysics Data System (ADS)

    Michel, Clotaire; Hobiger, Manuel; Edwards, Benjamin; Poggi, Valerio; Burjanek, Jan; Cauzzi, Carlo; Kästli, Philipp; Fäh, Donat

    2016-04-01

    The Swiss Seismological Service operates one of the densest national seismic networks in the world, still rapidly expanding (see http://www.seismo.ethz.ch/monitor/index_EN). Since 2009, every newly instrumented site is characterized following an established procedure to derive realistic 1D VS velocity profiles. In addition, empirical Fourier spectral modeling is performed on the whole network for each recorded event with sufficient signal-to-noise ratio. Besides the source characteristics of the earthquakes, statistical real time analyses of the residuals of the spectral modeling provide a seamlessly updated amplification function with respect to Swiss rock conditions at every station. Our site characterization procedure is mainly based on the analysis of surface waves from passive experiments and includes cross-checks of the derived amplification functions with those obtained through spectral modeling. The systematic use of three component surface-wave analysis, allowing the derivation of both Rayleigh and Love wave dispersion curves, also contributes to the improved quality of the retrieved profiles. The results of site characterisation activities at recently installed strong-motion stations depict the large variety of possible effects of surface geology on ground motion in the Alpine context. Such effects range from de-amplification at hard-rock sites to amplification up to a factor of 15 in lacustrine sediments with respect to the Swiss reference rock velocity model. The derived velocity profiles are shown to reproduce observed amplification functions from empirical spectral modeling. Although many sites are found to exhibit 1D behavior, our procedure allows the detection and qualification of 2D and 3D effects. All data collected during the site characterization procedures in the last 20 years are gathered in a database, implementing a data model proposed for community use at the European scale through NERA and EPOS (www.epos-eu.org). 
A web stationbook derived from it can be accessed through the interface www.stations.seismo.ethz.ch.

  2. Types of Faculty Scholars in Community Colleges

    ERIC Educational Resources Information Center

    Park, Toby J.; Braxton, John M.; Lyken-Segosebe, Dawn

    2015-01-01

    This chapter describes three empirically derived types of faculty scholars in community colleges: Immersed Scholars, Scholars of Dissemination, and Scholars of Pedagogical Knowledge. This chapter discusses these types and offers a recommendation.

  3. Pluvials, Droughts, Energetics, and the Mongol Empire

    NASA Astrophysics Data System (ADS)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    The success of the Mongol Empire, the largest contiguous land empire the world has ever known, is a historical enigma. At its peak in the late 13th century, the empire influenced areas from Hungary to southern Asia and Persia. Powered by domesticated herbivores, the Mongol Empire grew at the expense of agriculturalists in Eastern Europe, Persia, and China. What environmental factors contributed to the rise of the Mongols? What factors influenced the disintegration of the empire by 1300 CE? Until now, little high-resolution environmental data have been available to address these questions. We use tree-ring records of past temperature and water to illuminate the role of energy and water in the evolution of the Mongol Empire. The study of energetics has long been applied to biological and ecological systems but has only recently become a theme in understanding modern coupled natural and human systems (CNH). Because water and energy are tightly linked in human and natural systems, studying their synergies and interactions makes it possible to integrate knowledge across disciplines and human history, yielding important lessons for modern societies. We focus on the role of energy and water in the trajectory of an empire, including its rise, development, and demise. Our research is focused on the Orkhon Valley, seat of the Mongol Empire, where recent paleoenvironmental and archeological discoveries allow high-resolution reconstructions of past human and environmental conditions for the first time. Our preliminary records indicate that the period 1210-1230 CE, the height of Chinggis Khan's reign, is one of the longest and most consistent pluvials in our tree-ring reconstruction of interannual drought. Reconstructed temperature derived from five millennium-long records from subalpine forests in Mongolia documents warm temperatures beginning in the early 1200s and ending with a plunge into cold temperatures in 1260. 
Abrupt cooling in central Mongolia at this time is consistent with a well-documented volcanic eruption that caused massive crop damage and famine throughout much of Europe. In Mongol history, this abrupt cooling also coincides with the move of the capital from Central Mongolia (Karakorum) to China (Beijing). In combination, the tree-ring records of water and temperature suggest that 1) the rise of the Mongol Empire occurred during an unusually consistent warm and wet climate and 2) the disintegration of the Empire occurred following a plunge into cold and dry conditions in Central Mongolia. These results represent the first step of a larger project integrating physical science and history to understand the role of energy in the evolution of the Mongol Empire. Using data from historic documents, ecological modeling, tree rings, and sediment cores, we will investigate whether the expansion and contraction of the empire was related to moisture and temperature availability and thus grassland productivity associated with climate change in the Orkhon Valley.

  4. River meanders and channel size

    USGS Publications Warehouse

    Williams, G.P.

    1986-01-01

    This study uses an enlarged data set to (1) compare measured meander geometry to that predicted by the Langbein and Leopold (1966) theory, (2) examine the frequency distribution of the ratio radius of curvature/channel width, and (3) derive 40 empirical equations (31 of which are original) involving meander and channel size features. The data set, part of which comes from publications by other authors, consists of 194 sites from a large variety of physiographic environments in various countries. The Langbein-Leopold sine-generated-curve theory for predicting radius of curvature agrees very well with the field data (78 sites). The ratio radius of curvature/channel width has a modal value in the range of 2 to 3, in accordance with earlier work; about one third of the 79 values are less than 2.0. The 40 empirical relations, most of which include only two variables, involve channel cross-section dimensions (bankfull area, width, and mean depth) and meander features (wavelength, bend length, radius of curvature, and belt width). These relations have very high correlation coefficients, most being in the range of 0.95-0.99. Although channel width traditionally has served as a scale indicator, bankfull cross-sectional area and mean depth also can be used for this purpose. © 1986.
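    Two-variable empirical relations of this kind are power laws fit by least squares in log-log space. The sketch below fits such a relation (meander wavelength versus channel width) to synthetic data; the coefficient and exponent used to generate the data are hypothetical, not Williams' published values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "field data" for 194 sites: wavelength as a power law of channel
# width with lognormal scatter (coefficient 11.0 and exponent 1.05 are
# hypothetical placeholders, not the published regression values).
width = rng.uniform(5.0, 200.0, size=194)                    # m
wavelength = 11.0 * width ** 1.05 * rng.lognormal(0.0, 0.1, size=194)

# Fit log10(wavelength) = log10(c) + b * log10(width) by least squares,
# recovering the power-law exponent b and coefficient c.
b, log_c = np.polyfit(np.log10(width), np.log10(wavelength), 1)
```

With scatter this small, the fit recovers the generating exponent closely, which mirrors the very high correlation coefficients (0.95-0.99) reported for the published relations.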

  5. Detonation charge size versus coda magnitude relations in California and Nevada

    USGS Publications Warehouse

    Brocher, T.M.

    2003-01-01

    Magnitude-charge size relations have important uses in forensic seismology and are used in Comprehensive Nuclear-Test-Ban Treaty monitoring. I derive empirical magnitude versus detonation-charge-size relationships for 322 detonations located by permanent seismic networks in California and Nevada. These detonations, used in 41 different seismic refraction or network calibration experiments, ranged in yield (charge size) between 25 and 10^6 kg; coda magnitudes reported for them ranged from 0.5 to 3.9. Almost all represent simultaneous (single-fired) detonations of one or more boreholes. Repeated detonations at the same shotpoint suggest that the reported coda magnitudes are repeatable, on average, to within 0.1 magnitude unit. An empirical linear regression for these 322 detonations yields M = 0.31 + 0.50 log10(weight [kg]). The detonations compiled here demonstrate that the Khalturin et al. (1998) relationship, developed mainly for data from large chemical explosions but which fits data from nuclear blasts, can be used to estimate the minimum charge size for coda magnitudes between 0.5 and 3.9. Drilling, loading, and shooting logs indicate that the explosive specification, loading method, and effectiveness of tamp are the primary factors determining the efficiency of a detonation. These records indicate that locating a detonation within the water table is neither a necessary nor a sufficient condition for an efficient shot.
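    The regression quoted in the abstract, M = 0.31 + 0.50 log10(weight [kg]), is simple to apply in both directions. The forward function is the published relation; the inverse gives the nominal (not minimum) charge size implied by a coda magnitude, since inverting a best-fit regression ignores the scatter about the line.

```python
import math

def coda_magnitude(charge_kg):
    """Coda magnitude from single-fired detonation charge size (kg),
    using the empirical regression quoted in the abstract."""
    return 0.31 + 0.50 * math.log10(charge_kg)

def nominal_charge_kg(magnitude):
    """Invert the regression: nominal charge size (kg) implied by a
    coda magnitude. A best-fit inversion, not a minimum-charge bound."""
    return 10.0 ** ((magnitude - 0.31) / 0.50)
```

For example, a 100 kg shot gives M = 1.31, and a magnitude-3.31 event corresponds nominally to a 10^6 kg charge, consistent with the yield range of the compiled detonations.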

  6. A Comprehensive Physical Impedance Model of Polymer Electrolyte Fuel Cell Cathodes in Oxygen-free Atmosphere.

    PubMed

    Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill

    2018-03-21

    Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, in which all the equivalent-circuit parameters have a direct physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximating the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each with a clear physical meaning. It was used to investigate the cathode catalyst layer and the double-layer capacitance at the interface between the ionomer/membrane and Pt-electrocatalyst. The model has demonstrated excellent results in fitting and interpretation of the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.

  7. Chlorophyll-a retrieval in the Philippine waters

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Leonardo, E. M.; Felix, M. J.

    2017-12-01

    Satellite-based monitoring of chlorophyll-a (Chl-a) concentration has been widely used for estimating plankton biomass, detecting harmful algal blooms, predicting pelagic fish abundance, and water quality assessment. Chl-a concentrations at 1 km spatial resolution can be retrieved from MODIS onboard Aqua and Terra satellites. However, with this resolution, MODIS has scarce Chl-a retrieval in coastal and inland waters, which are relevant for archipelagic countries such as the Philippines. These gaps on Chl-a retrieval can be filled by sensors with higher spatial resolution, such as the OLI of Landsat 8. In this study, assessment of Chl-a concentration derived from MODIS/Aqua and OLI/Landsat 8 imageries across the open, coastal and inland waters of the Philippines was done. Validation activities were conducted at eight different sites around the Philippines for the period October 2016 to April 2017. Water samples filtered on the field were processed in the laboratory for Chl-a extraction. In situ remote sensing reflectance was derived from radiometric measurements and ancillary information, such as bathymetry and turbidity, were also measured. Correlation between in situ and satellite-derived Chl-a concentration using the blue-green ratio yielded relatively high R2 values of 0.51 to 0.90. This is despite an observed overestimation for both MODIS and OLI-derived values, especially in turbid and coastal waters. The overestimation of Chl-a may be attributed to inaccuracies in i) remote sensing reflectance (Rrs) retrieval and/or ii) empirical model used in calculating Chl-a concentration. However, a good 1:1 correspondence between the satellite and in situ maximum Rrs band ratio was established. This implies that the overestimation is largely due to the inaccuracies from the default coefficients used in the empirical model. New coefficients were then derived from the correlation analysis of both in situ-measured Chl-a concentration and maximum Rrs band ratio. 
    This results in a significant improvement in the calculated RMSE of satellite-derived Chl-a values. Meanwhile, it was observed that the blue-green band ratio has low Chl-a predictive capability in turbid waters. A more accurate estimation was found using the NIR and red band ratios for turbid waters with covarying Chl-a concentration and low sediment load.
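    The empirical model being re-tuned here follows the standard ocean-colour band-ratio form: a polynomial in the log10 of a blue-to-green Rrs ratio yields log10 of the Chl-a concentration. The sketch below uses that general form with hypothetical placeholder coefficients, not the default or the newly derived coefficients from the study.

```python
import math

def chlorophyll_from_band_ratio(rrs_blue, rrs_green, coeffs=(0.3, -2.5, 1.0)):
    """Chl-a (mg m^-3) from a blue-green Rrs band ratio.

    Polynomial-in-log10 form follows standard ocean-colour band-ratio
    algorithms; the coefficients here are hypothetical placeholders,
    not the tuned values derived in the study above.
    """
    x = math.log10(rrs_blue / rrs_green)
    log10_chl = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10.0 ** log10_chl
```

Because only the coefficients are re-derived while the band-ratio form is retained, a good 1:1 match between satellite and in situ Rrs ratios (as reported above) is what makes this recalibration meaningful.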

  8. Fundamental Flaws In The Derivation Of Stevens' Law For Taste Within Norwich's Entropy Theory of Perception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nizami, Lance

    2010-03-01

    Norwich's Entropy Theory of Perception (1975-present) is a general theory of perception, based on Shannon's Information Theory. Among many bold claims, the Entropy Theory presents a truly astounding result: that Stevens' Law with an Index of 1, an empirical power relation of direct proportionality between perceived taste intensity and stimulus concentration, arises from theory alone. Norwich's theorizing starts with several extraordinary hypotheses. First, 'multiple, parallel receptor-neuron units' without collaterals 'carry essentially the same message to the brain', i.e. the rate-level curves are identical. Second, sensation is proportional to firing rate. Third, firing rate is proportional to the taste receptor's 'resolvable uncertainty'. Fourth, the 'resolvable uncertainty' is obtained from Shannon's Information Theory. Finally, 'resolvable uncertainty' also depends upon the microscopic thermodynamic density fluctuation of the tasted solute. Norwich proves that density fluctuation is density variance, which is proportional to solute concentration, all based on the theory of fluctuations in fluid composition from Tolman's classic physics text, 'The Principles of Statistical Mechanics'. Altogether, according to Norwich, perceived taste intensity is theoretically proportional to solute concentration. Such a universal rule for taste, one that is independent of solute identity, personal physiological differences, and psychophysical task, is truly remarkable and is well-deserving of scrutiny. Norwich's crucial step was the derivation of density variance. That step was meticulously reconstructed here. It transpires that the appropriate fluctuation is Tolman's mean-square fractional density fluctuation, not density variance as used by Norwich. Tolman's algebra yields a 'Stevens Index' of -1 rather than 1. 
    As the 'Stevens Index' empirically always exceeds zero, the Index of -1 suggests that it is risky to infer psychophysical laws of sensory response from information theory and stimulus physics while ignoring empirical biological transformations, such as sensory transduction. Indeed, it raises doubts as to whether the Entropy Theory actually describes psychophysical laws at all.

  9. Comparison of binding energies of SrcSH2-phosphotyrosyl peptides with structure-based prediction using surface area based empirical parameterization.

    PubMed Central

    Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.

    2000-01-01

    The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface area-based calculations for the prediction of the binding of the Src SH2 domain with a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR) are also investigated. Here, we take into account the effects of proton linkage on binding, and test five different surface area-based models that include different treatments for the contributions to conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (deltaCp, deltaS, deltaH, and deltaG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src SH2 domain with the range of tyrosyl phosphopeptides. Furthermore, the adoption of the different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with application and interpretation of this type of approach. 
There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters: however, the tolerance of this approach is not sufficient to make it ubiquitously applicable. PMID:11106171

  10. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. 
Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
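The inverse-modeling idea can be sketched with a far simpler stand-in: fitting a two-parameter distribution to relative soil saturation samples by maximum likelihood. The Beta form and the synthetic data below are assumptions for illustration, not the paper's analytical pdf or its Bayesian inference machinery.

```python
import numpy as np
from scipy import stats

# Stand-in for inverting an analytical soil saturation pdf: fit a Beta
# distribution (an assumption, not the paper's pdf) to relative soil
# saturation samples by maximum likelihood.
rng = np.random.default_rng(0)
observed = rng.beta(2.0, 5.0, size=500)        # synthetic "observations"

# Fix loc=0 and scale=1 so only the two shape parameters are estimated.
a, b, loc, scale = stats.beta.fit(observed, floc=0, fscale=1)
print(f"fitted shapes: a={a:.2f}, b={b:.2f}")
```

In the actual framework, the analytical pdf derived from the stochastic soil water balance replaces the Beta form, and Bayesian inference yields full posterior distributions rather than a point estimate.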

  11. Refractive Index of Alkali Halides and Its Wavelength and Temperature Derivatives.

    DTIC Science & Technology

    1975-05-01

… of CsBr … Comparison of Dispersion Equations Proposed for CsBr … Recommended Values of the Refractive Index and Its … discovery of empirical relationships which enable us to calculate dn/dT data at 293 K for some materials on which no data are available … In the present work, however, this problem was solved by our empirical discoveries, by which the unknown parameters of Eq. (19) for …

  12. Bounds on quantum confinement effects in metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Blackman, G. Neal; Genov, Dentcho A.

    2018-03-01

Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.

  13. The Mass-Luminosity-Metallicity Relation for M Dwarfs

    NASA Astrophysics Data System (ADS)

    Mann, Andrew; Dupuy, Trent; Rizzuto, Aaron; Kraus, Adam; Gaidos, Eric; Ansdell, Megan

    2018-01-01

One of the most powerful tools for stellar characterization is the mass-luminosity relation (MLR). In addition to its use for characterizing exoplanet hosts, the MLR for late-type stars is critical to measuring the stellar IMF, testing isochrones, and studies of Galactic archeology. However, existing MLRs do not fully account for metallicity effects, do not extend down to the substellar boundary, and are not precise enough to take full advantage of the impending arrival of Gaia parallaxes for millions of late-type stars. For two years we monitored 72 nearby M-dwarf astrometric binaries using adaptive optics and non-redundant aperture masking, with the goal of better constraining the MLR. We combined our astrometry with measurements from the literature and the Keck archive to measure orbits, masses, and flux ratios of all binaries in the JHK bands. In parallel, we obtained moderate-resolution NIR spectra of all binaries, from which we determined empirical metallicities for each system. We derived an updated MLR-metallicity relation that spans most of the M dwarf sequence (K5 to M7) and the metallicity range expected in the solar neighborhood (-0.5 < [M/H] < +0.4). With this we explored the role metallicity plays in the MLR. With our revised relation and Gaia-precision parallaxes, it will soon be possible to calculate empirical masses of nearby M dwarfs to better than 2%, and future studies will enable us to extend our relation to more metal-poor stars and to explore the roles of youth and evolution in the MLR for M dwarfs.

  14. Can high resolution topographic surveys provide reliable grain size estimates?

    NASA Astrophysics Data System (ADS)

    Pearson, Eleanor; Smith, Mark; Klaar, Megan; Brown, Lee

    2017-04-01

High resolution topographic surveys contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-grid scale topographic variability (or 'surface roughness') to particle grain size by deriving empirical relationships between the two. Such relationships would permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing data to drive distributed hydraulic models and revolutionising monitoring of river restoration projects. However, comparison of previous roughness-grain-size relationships shows substantial variability between field sites, and these relationships do not take into account differences in patch-scale facies. This study explains this variability by identifying the factors that influence roughness-grain-size relationships. Using 275 laboratory and field-based Structure-from-Motion (SfM) surveys, we investigate the influence of: inherent survey error; irregularity of natural gravels; particle shape; grain packing structure; sorting; and form roughness on roughness-grain-size relationships. A suite of empirical relationships is presented in the form of a decision tree which improves estimations of grain size. Results indicate that the survey technique itself is capable of providing accurate grain size estimates. By accounting for differences in patch facies, R2 improved from 0.769 to above 0.9 for certain facies. However, at present, the method is unsuitable for poorly sorted gravel patches. In future, a combination of a surface roughness proxy with photosieving techniques using SfM-derived orthophotos may offer improvements over using either technique individually.
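The basic form of such an empirical roughness-grain-size relationship can be sketched with a least-squares fit. The data and the linear form below are assumptions for illustration, not the study's measurements or fitted coefficients.

```python
import numpy as np

# Synthetic illustration (not the study's data): relate patch-scale surface
# roughness (std. dev. of detrended elevations, m) to median grain size
# D50 (m), assuming a linear control of grain size on roughness.
rng = np.random.default_rng(1)
d50 = rng.uniform(0.01, 0.10, size=120)             # grain sizes, m
roughness = 0.5 * d50 + rng.normal(0, 0.002, 120)   # assumed relationship

slope, intercept = np.polyfit(roughness, d50, 1)    # invert: D50 from roughness
predicted = slope * roughness + intercept
r2 = 1 - np.sum((d50 - predicted) ** 2) / np.sum((d50 - d50.mean()) ** 2)
print(f"D50 = {slope:.2f} * roughness + {intercept:.4f}  (R2 = {r2:.2f})")
```

The study's decision-tree approach would correspond to fitting separate relations of this kind per facies class rather than one global line.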

  15. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. Expressions for the parameters of the empirical models are derived in terms of the characteristics of the incubation time criterion, and satisfactory agreement between these expressions and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the incubation time characteristics from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and yield an effective and convenient equation for determining the yield strength over a wider range of strain rates.
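The Cowper-Symonds model named above has the standard form σ_d = σ_s[1 + (ε̇/D)^(1/q)]. A minimal sketch, using D and q values commonly quoted for mild steel rather than the paper's fitted parameters:

```python
# Cowper-Symonds dynamic yield strength: sigma_d = sigma_s * (1 + (rate/D)**(1/q)).
# D and q below are values commonly quoted for mild steel; treat them as
# illustrative, not as the paper's fitted parameters.
def cowper_symonds(sigma_static, strain_rate, D=40.4, q=5.0):
    """Dynamic yield strength, same units as sigma_static; strain_rate in 1/s."""
    return sigma_static * (1.0 + (strain_rate / D) ** (1.0 / q))

# At strain_rate == D the rate term equals 1, doubling the static strength.
print(cowper_symonds(250.0, 40.4))  # → 500.0 (MPa)
```

The paper's contribution is to express constants like D and q through the incubation-time characteristics instead of calibrating them per material and rate range.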

  16. A five-step procedure for the clinical use of the MPD in neuropsychological assessment of children.

    PubMed

    Wallbrown, F H; Fuller, G B

    1984-01-01

    Described a five-step procedure that can be used to detect organicity on the basis of children's performance on the Minnesota Percepto Diagnostic Test (MPD). The first step consists of examining the T score for rotations to determine whether it is below the cut-off score, which has been established empirically as an indicator of organicity. The second step consists of matching the examinee's configuration of error scores, separation of circle-diamond (SpCD), distortion of circle-diamond (DCD), and distortion of dots (DD), with empirically derived tables. The third step consists of considering the T score for rotations and error configuration jointly. The fourth step consists of using empirically established discriminant equations, and the fifth step involves using data from limits testing and other data sources. The clinical and empirical bases for the five-step procedure also are discussed.

  17. Empirical yield tables for Wisconsin.

    Treesearch

    Jerold T. Hahn; Joan M. Stelman

    1989-01-01

Describes the tables derived from the 1983 Forest Survey of Wisconsin and presents ways the tables can be used. These tables are broken down according to Wisconsin's five Forest Survey Units and 14 forest types.

  18. Systematic Analysis of Rocky Shore Morphology along 700km of Coastline Using LiDAR-derived DEMs

    NASA Astrophysics Data System (ADS)

    Matsumoto, H.; Dickson, M. E.; Masselink, G.

    2016-12-01

    Rock shore platforms occur along much of the world's coast and have a long history of study; however, uncertainty remains concerning the relative importance of various formative controls in different settings (e.g. wave erosion, weathering, tidal range, rock resistance, inheritance). Ambiguity is often attributed to intrinsic natural variability and the lack of preserved evidence on eroding rocky shores, but it could also be argued that previous studies are limited in scale, focusing on a small number of local sites, which restricts the potential for insights from broad, regional analyses. Here we describe a method, using LiDAR-derived digital elevation models (DEMs), for analysing shore platform morphology over an unprecedentedly wide area in which there are large variations in environmental conditions. The new method semi-automatically extracts shore platform profiles and systematically conducts morphometric analysis. We apply the method to 700 km of coast in the SW UK that is exposed to (i) highly energetic swell waves to local wind waves, (ii) macro to mega tidal ranges, and (iii) highly resistant igneous rocks to moderately hard sedimentary rocks. Computer programs are developed to estimate mean sea level, mean spring tidal range, wave height, and rock strength along the coastline. Filtering routines automatically select and remove profiles that are unsuitable for analysis. The large data-set of remaining profiles supports broad and systematic investigation of possible controls on platform morphology. Results, as expected, show wide scatter, because many formative controls are in play, but several trends exist that are generally consistent with relationships that have been inferred from local site studies. This paper will describe correlation analysis on platform morphology in relation to environmental conditions and also present a multi-variable empirical model derived from multi linear regression analysis. 
Interesting matches exist between platform gradients obtained from the field and empirical model predictions, particularly when the morphological variability found in the LiDAR-based shore platform morphology analysis is considered. These findings frame a discussion on the formative controls of rocky shore morphology.

  19. An empirically derived basis for calculating the area, rate, and distribution of water-drop impingement on airfoils

    NASA Technical Reports Server (NTRS)

    Bergrun, Norman R

    1952-01-01

    An empirically derived basis for predicting the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The concepts involved represent an initial step toward the development of a calculation technique which is generally applicable to the design of thermal ice-prevention equipment for airplane wing and tail surfaces. It is shown that sufficiently accurate estimates, for the purpose of heated-wing design, can be obtained by a few numerical computations once the velocity distribution over the airfoil has been determined. The calculation technique presented is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer.

  20. Reconstruction of Missing Pixels in Satellite Images Using the Data Interpolating Empirical Orthogonal Function (DINEOF)

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, M.

    2016-02-01

For coastal and inland waters, spatially complete and frequent satellite measurements are important for monitoring and understanding coastal biological and ecological processes and phenomena, such as diurnal variations. High-frequency images of the water diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)) derived from the Korean Geostationary Ocean Color Imager (GOCI) provide a unique opportunity to study diurnal variation of water turbidity in coastal regions of the Bohai Sea, Yellow Sea, and East China Sea. However, there are many missing pixels in the original GOCI-derived Kd(490) images due to clouds and various other reasons. The Data Interpolating Empirical Orthogonal Function (DINEOF) is a method to reconstruct missing data in geophysical datasets based on Empirical Orthogonal Functions (EOFs). In this study, DINEOF is applied to GOCI-derived Kd(490) data in the Yangtze River mouth and Yellow River mouth regions, the DINEOF-reconstructed Kd(490) data are used to fill in the missing pixels, and the spatial patterns and temporal functions of the first three EOF modes are used to investigate sub-diurnal variation due to tidal forcing. In addition, the DINEOF method is applied to data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP) satellite to reconstruct missing pixels in the daily Kd(490) and chlorophyll-a concentration images, and application examples in the Chesapeake Bay and the Gulf of Mexico are presented.
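The core DINEOF idea, reconstructing gaps by iterating a truncated EOF (SVD) decomposition until the filled values converge, can be sketched as follows. This is a toy version on a synthetic rank-1 field, not the operational GOCI processing.

```python
import numpy as np

# Minimal DINEOF-style gap filling (illustrative, not the operational code):
# iteratively reconstruct missing values with a truncated SVD until the
# filled entries converge.
def dineof_fill(X, n_modes=2, n_iter=200, tol=1e-8):
    X = np.array(X, dtype=float)
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)   # initial guess: global mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        delta = np.max(np.abs(recon[mask] - filled[mask])) if mask.any() else 0.0
        filled[mask] = recon[mask]              # update only the missing entries
        if delta < tol:
            break
    return filled

# Rank-1 test field with gaps: a spatial pattern times a time series.
t = np.linspace(0, 2 * np.pi, 30)
field = np.outer(np.sin(t), [1.0, 0.5, -0.3, 0.8])
gappy = field.copy()
gappy[3, 1] = np.nan
gappy[17, 2] = np.nan
restored = dineof_fill(gappy, n_modes=1)
print(np.max(np.abs(restored - field)))
```

The operational method additionally chooses the number of retained modes by cross-validation on deliberately withheld pixels.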

  1. Are There Subtypes of Panic Disorder? An Interpersonal Perspective

    PubMed Central

    Zilcha-Mano, Sigal; McCarthy, Kevin S.; Dinger, Ulrike; Chambless, Dianne L.; Milrod, Barbara L.; Kunik, Lauren; Barber, Jacques P.

    2015-01-01

Objective Panic disorder (PD) is associated with significant personal, social, and economic costs. However, little is known about the specific interpersonal dysfunctions that characterize the PD population. The current study systematically examined these interpersonal dysfunctions. Method The present analyses included 194 patients with PD out of a sample of 201 who were randomized to cognitive-behavioral therapy, panic-focused psychodynamic psychotherapy, or applied relaxation training. Interpersonal dysfunction was measured using the Inventory of Interpersonal Problems–Circumplex (Horowitz, Alden, Wiggins, & Pincus, 2000). Results Individuals with PD reported greater levels of interpersonal distress than a normative cohort (especially when PD was accompanied by agoraphobia), but lower levels than a cohort of patients with major depression. There was no single interpersonal profile that characterized PD patients. Symptom-based clusters (with versus without agoraphobia) could not be discriminated on core or central interpersonal problems. Rather, as revealed by cluster analysis based on the pathoplasticity framework, there were two empirically derived interpersonal clusters among PD patients, not accounted for by symptom severity and opposite in nature: domineering-intrusive and nonassertive. The empirically derived interpersonal clusters appear to be of clinical utility in predicting alliance development throughout treatment: while the domineering-intrusive cluster did not show any changes in the alliance throughout treatment, the nonassertive cluster showed significant strengthening of the alliance. Conclusions Empirically derived interpersonal clusters in PD provide clinically useful and non-redundant information about individuals with PD. PMID:26030762

  2. Is There a Dark Side to Mindfulness? Relation of Mindfulness to Criminogenic Cognitions.

    PubMed

    Tangney, June P; Dobbins, Ashley E; Stuewig, Jeffrey B; Schrader, Shannon W

    2017-10-01

    In recent years, mindfulness-based interventions have been modified for use with inmate populations, but how this might relate to specific criminogenic cognitions has not been examined empirically. Theoretically, characteristics of mindfulness should be incompatible with distorted patterns of criminal thinking, but is this in fact the case? Among both 259 male jail inmates and 516 undergraduates, mindfulness was inversely related to the Criminogenic Cognitions Scale (CCS) through a latent variable of emotion regulation. However, in the jail sample, this mediational model also showed a direct, positive path from mindfulness to CCS, with an analogous, but nonsignificant trend in the college sample. Post hoc analyses indicate that the Nonjudgment of Self scale derived from the Mindfulness Inventory: Nine Dimensions (MI:ND) largely accounts for this apparently iatrogenic effect in both samples. Some degree of self-judgment is perhaps necessary and useful, especially among individuals involved in the criminal justice system.

  3. BOND: A quantum of solace for nebular abundance determinations

    NASA Astrophysics Data System (ADS)

    Vale Asari, N.; Stasińska, G.; Morisset, C.; Cid Fernandes, R.

    2017-11-01

The abundances of chemical elements other than hydrogen and helium in a galaxy are the fossil record of its star formation history. Empirical relations such as the mass-metallicity relation are thus seen as guides for studies of the history and chemical evolution of galaxies. Those relations usually rely on nebular metallicities measured with strong-line methods, which assume that H II regions are a one- (or at most two-) parameter family in which the oxygen abundance is the driving quantity. Nature is, however, much more complex than that, and metallicities from strong lines may be strongly biased. We have developed the method BOND (Bayesian Oxygen and Nitrogen abundance Determinations) to simultaneously derive oxygen and nitrogen abundances in giant H II regions by comparing strong and semi-strong observed emission lines to a carefully-defined, finely-meshed grid of photoionization models. Our code and results are public and available at http://bond.ufsc.br.

  4. Synthesis and conformational analysis of new arylated-diphenylurea derivatives related to sorafenib drug via Suzuki-Miyaura cross-coupling reaction

    NASA Astrophysics Data System (ADS)

    Al-Masoudi, Najim A.; Essa, Ali Hashem; Alwaaly, Ahmed A. S.; Saeed, Bahjat A.; Langer, Peter

    2017-10-01

Sorafenib is a relatively new cytostatic drug approved for the treatment of renal cell and hepatocellular carcinoma. The development of new sorafenib analogues offers the possibility of generating structures of increased potency. To this end, a series of sorafenib-related arylated-diphenylurea analogues 17-31 was synthesized via the Suzuki-Miyaura coupling reaction by treating three diarylureas 2-4, bearing 3-bromo, 4-chloro and 2-iodo groups, with various arylboronic acids. Conformational analysis of the new arylated urea analogues was carried out using the semi-empirical PM7 Hamiltonian as implemented in MOPAC 2016. Our results showed that all compounds prefer the trans-trans conformation. Compound 17 was selected for calculating the torsional energy profiles for rotation around the urea bonds and was found to exist predominantly in the trans-trans conformation, with only minimal fluctuation in conformation.

  5. Predicting stellar angular diameters from V, IC, H and K photometry

    NASA Astrophysics Data System (ADS)

    Adams, Arthur D.; Boyajian, Tabetha S.; von Braun, Kaspar

    2018-01-01

    Determining the physical properties of microlensing events depends on having accurate angular sizes of the source star. Using long baseline optical interferometry, we are able to measure the angular sizes of nearby stars with uncertainties ≤2 per cent. We present empirically derived relations of angular diameters which are calibrated using both a sample of dwarfs/subgiants and a sample of giant stars. These relations are functions of five colour indices in the visible and near-infrared, and have uncertainties of 1.8-6.5 per cent depending on the colour used. We find that a combined sample of both main-sequence and evolved stars of A-K spectral types is well fitted by a single relation for each colour considered. We find that in the colours considered, metallicity does not play a statistically significant role in predicting stellar size, leading to a means of predicting observed sizes of stars from colour alone.
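Relations of this kind typically express the zero-magnitude angular diameter as a polynomial in a colour index and then scale by the apparent magnitude. A sketch with hypothetical placeholder coefficients, not the paper's calibrated values:

```python
# Sketch of a colour-based angular diameter relation of the usual form:
# log10 of the zero-magnitude angular diameter is a polynomial in (V - K),
# and the apparent diameter scales with the V magnitude. The coefficients
# below are hypothetical placeholders, not the paper's calibrated values.
coeffs = [0.54, 0.30, -0.02]   # assumed a0 + a1*(V-K) + a2*(V-K)**2

def log_theta_mas(v_mag, v_minus_k, c=coeffs):
    """log10 of the predicted angular diameter in milliarcseconds."""
    zero_mag = sum(a * v_minus_k ** i for i, a in enumerate(c))
    return zero_mag - 0.2 * v_mag   # fainter stars appear smaller

theta = 10 ** log_theta_mas(v_mag=8.0, v_minus_k=3.5)
print(f"predicted diameter: {theta:.3f} mas")
```

The -0.2·V term follows from the magnitude scale: each magnitude of dimming at fixed surface brightness reduces log θ by 0.2.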

  6. Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds

    NASA Astrophysics Data System (ADS)

    Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.

    2008-05-01

CALIPSO and CloudSat from the A-Train provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort to merge CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on cloud top height and optical thickness, which is used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not consider differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed, as will the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products.

  7. Towards an empirical ethics in care: relations with technologies in health care.

    PubMed

    Pols, Jeannette

    2015-02-01

    This paper describes the approach of empirical ethics, a form of ethics that integrates non-positivist ethnographic empirical research and philosophy. Empirical ethics as it is discussed here builds on the 'empirical turn' in epistemology. It radicalizes the relational approach that care ethics introduced to think about care between people by drawing in relations between people and technologies as things people relate to. Empirical ethics studies care practices by analysing their intra-normativity, or the ways of living together the actors within these practices strive for or bring about as good practices. Different from care ethics, what care is and if it is good is not defined beforehand. A care practice may be contested by comparing it to alternative practices with different notions of good care. By contrasting practices as different ways of living together that are normatively oriented, suggestions for the best possible care may be argued for. Whether these suggestions will actually be put to practice is, however, again a relational question; new actors need to re-localize suggestions, to make them work in new practices and fit them in with local intra-normativities with their particular routines, material infrastructures, know-how and strivings.

  8. A Theory of the von Weimarn Rules Governing the Average Size of Crystals Precipitated from a Supersaturated Solution

    NASA Technical Reports Server (NTRS)

    Barlow, Douglas A.; Baird, James K.; Su, Ching-Hua

    2003-01-01

    More than 75 years ago, von Weimarn summarized his observations of the dependence of the average crystal size on the initial relative concentration supersaturation prevailing in a solution from which crystals were growing. Since then, his empirically derived rules have become part of the lore of crystal growth. The first of these rules asserts that the average crystal size measured at the end of a crystallization increases as the initial value of the relative supersaturation decreases. The second rule states that for a given crystallization time, the average crystal size passes through a maximum as a function of the initial relative supersaturation. Using a theory of nucleation and growth due to Buyevich and Mansurov, we calculate the average crystal size as a function of the initial relative supersaturation. We confirm the von Weimarn rules for the case where the nucleation rate is proportional to the third power or higher of the relative supersaturation.

  9. Empirical-statistical downscaling of reanalysis data to high-resolution air temperature and specific humidity above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; MöLg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-06-01

    Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
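The combination of EOF compression, multiple regression, and cross-validation described above can be sketched on synthetic data. The field, station series, and mode count below are assumptions for illustration, not the Artesonraju setup.

```python
import numpy as np

# Illustrative ESD step (not the paper's implementation): compress a gridded
# predictor field with EOFs via SVD, regress a station series on the leading
# principal components, and score with leave-one-out cross-validation.
rng = np.random.default_rng(2)
n_t, n_grid = 80, 12
t = np.arange(n_t)
pattern = rng.normal(size=n_grid)                 # assumed spatial pattern
signal = np.sin(2 * np.pi * t / 20)               # shared time variability
field = np.outer(signal, pattern) + 0.3 * rng.normal(size=(n_t, n_grid))
station = signal + 0.1 * rng.normal(size=n_t)     # synthetic station record

anom = field - field.mean(axis=0)                 # anomalies for the EOFs
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :4] * s[:4]                            # leading principal components

errors = []
for k in range(n_t):                              # leave-one-out cross-validation
    keep = np.arange(n_t) != k
    A = np.column_stack([np.ones(keep.sum()), pcs[keep]])
    beta, *_ = np.linalg.lstsq(A, station[keep], rcond=None)
    errors.append(station[k] - (beta[0] + pcs[k] @ beta[1:]))
skill = 1 - np.var(errors) / np.var(station)      # skill-score-style metric
print(f"cross-validated skill: {skill:.2f}")
```

The paper's double cross-validation additionally nests the predictor selection inside the validation loop, so the reported skill is not inflated by that choice.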

  10. Mechanisms of high-temperature, solid-state flow in minerals and ceramics and their bearing on the creep behavior of the mantle

    USGS Publications Warehouse

    Kirby, S.H.; Raleigh, C.B.

    1973-01-01

The problem of applying laboratory silicate-flow data to the mantle, where conditions can be vastly different, is approached through a critical review of high-temperature flow mechanisms in ceramics and their relation to empirical flow laws. The intimate association of solid-state diffusion and high-temperature creep in pure metals is found to apply to ceramics as well. It is shown that in ceramics of moderate grain size, compared on the basis of self-diffusivity and elastic modulus, normalized creep rates compare remarkably well. This comparison is paralleled by the near-universal occurrence of similar creep-induced structures, and it is thought that the derived empirical flow laws can be associated with dislocation creep. Creep data in fine-grained ceramics, on the other hand, are found to compare poorly with theories involving the stress-directed diffusion of point defects and have not been successfully correlated by self-diffusion rates. We conclude that these fine-grained materials creep primarily by a quasi-viscous grain-boundary sliding mechanism which is unlikely to predominate in the earth's deep interior. Creep predictions for the mantle reveal that under most conditions the empirical dislocation creep behavior predominates over the mechanisms involving the stress-directed diffusion of point defects. The probable role of polymorphic transformations in the transition zone is also discussed. © 1973.

  11. Cough: are children really different to adults?

    PubMed Central

    Chang, Anne B

    2005-01-01

Worldwide, paediatricians advocate that children should be managed differently from adults. In this article, similarities and differences between children and adults related to cough are presented. Physiologically, the cough pathway is closely linked to the control of breathing (the central respiratory pattern generator). As respiratory control and associated reflexes undergo a maturation process, it is expected that the cough reflex likewise passes through developmental stages. Clinically, the 'big three' causes of chronic cough in adults (asthma, post-nasal drip and gastroesophageal reflux) are far less common causes of chronic cough in children. This has been shown repeatedly by different groups in both clinical and epidemiological studies. Therapeutically, some medications used empirically for cough in adults have little role in paediatrics. For example, anti-histamines (in particular H1 antagonists), recommended as a front-line empirical treatment of chronic cough in adults, have no effect on paediatric cough; instead, they are associated with adverse reactions and toxicity. Similarly, codeine and its derivatives, used widely for cough in adults, are not efficacious in children and are contraindicated in young children. Corticosteroids, the other front-line empirical therapy recommended for adults, are also minimally (if at all) efficacious for treating non-specific cough in children. In summary, current data support that management guidelines for paediatric cough should differ from those for adults, as the aetiological factors and treatments in children differ significantly from those in adults. PMID:16270937

  12. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, they are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that using an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been used successfully in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
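The Arrhenius form underlying such reacting-chemistry models is k = A·exp(-Ea/RT). A generic sketch with illustrative parameters, not NRL's RDX mechanism:

```python
import math

# Generic Arrhenius rate of the kind used in reacting-chemistry burn models:
# k = A * exp(-Ea / (R * T)). A and Ea below are illustrative placeholders,
# not NRL's fitted RDX values.
R = 8.314                       # gas constant, J/(mol K)

def arrhenius(T, A=1.0e13, Ea=1.9e5):
    """Rate constant (1/s) at temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

# Rates rise steeply with temperature, the behaviour a bulk empirical
# burn model has to approximate with calibrated fits.
print(arrhenius(800.0) / arrhenius(700.0))
```

This steep temperature sensitivity is exactly what empirical burn models must re-capture per material via recalibration, whereas the chemistry-based approach carries it in Ea directly.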

  13. Very empirical treatment of solvation and entropy: a force field derived from Log Po/w

    NASA Astrophysics Data System (ADS)

    Kellogg, Glen Eugene; Burnett, James C.; Abraham, Donald J.

    2001-04-01

A non-covalent interaction force field model derived from the partition coefficient of 1-octanol/water solubility is described. This model, HINT for Hydropathic INTeractions, is shown to include, in very empirical and approximate terms, all components of biomolecular associations, including hydrogen bonding, Coulombic interactions, hydrophobic interactions, entropy and solvation/desolvation. Particular emphasis is placed on: (1) demonstrating the relationship between the total empirical HINT score and free energy of association, ΔG(interaction); (2) showing that the HINT hydrophobic-polar interaction sub-score represents the energy cost of desolvation upon binding for interacting biomolecules; and (3) a new methodology for treating constrained water molecules as discrete independent small ligands. An example calculation is reported for dihydrofolate reductase (DHFR) bound with methotrexate (MTX). In that case the observed very tight binding, ΔG(interaction) ≤ -13.6 kcal mol⁻¹, is largely due to ten hydrogen bonds between the ligand and enzyme with estimated strengths ranging between -0.4 and -2.3 kcal mol⁻¹. Four water molecules bridging between DHFR and MTX contribute an additional -1.7 kcal mol⁻¹ of stability to the complex. The HINT estimate of the cost of desolvation is +13.9 kcal mol⁻¹.

  14. Application of household production theory to selected natural-resource problems in less-developed countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercer, D.E.

    The objectives are threefold: (1) to perform an analytical survey of household production theory as it relates to natural-resource problems in less-developed countries, (2) to develop a household production model of fuelwood decision making, (3) to derive a theoretical framework for travel-cost demand studies of international nature tourism. The model of household fuelwood decision making provides a rich array of implications and predictions for empirical analysis. For example, it is shown that fuelwood and modern fuels may be either substitutes or complements depending on the interaction of the gross-substitution and income-expansion effects. Therefore, empirical analysis should precede adoption of any inter-fuel substitution policies such as subsidizing kerosene. The fuelwood model also provides a framework for analyzing the conditions and factors determining entry and exit by households into the wood-burning subpopulation, a key for designing optimal household energy policies in the Third World. The international nature tourism travel cost model predicts that the demand for nature tourism is an aggregate of the demand for the individual activities undertaken during the trip.

  15. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
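A minimal sketch of the propagator-likelihood idea for a two-state Markov chain, under simplifying assumptions not taken from the paper (a one-parameter grid search with the second transition probability held fixed): snapshots drawn independently from the steady state are "propagated" one fictitious step, and the likelihood is maximal where the empirical distribution is stationary under the parameterized propagator.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_matrix(a, b):
    # Two-state chain: 0 -> 1 with probability a, 1 -> 0 with probability b
    return np.array([[1 - a, a],
                     [b, 1 - b]])

# True parameters; steady state is p = (b, a) / (a + b)
a_true, b_true = 0.3, 0.1
p_true = np.array([b_true, a_true]) / (a_true + b_true)

# Independent snapshots of the steady state (no trajectory information)
samples = rng.choice(2, size=20000, p=p_true)
p_emp = np.bincount(samples, minlength=2) / samples.size

def propagator_likelihood(a, b):
    # sum_x p_emp(x) * log( sum_y p_emp(y) P(y -> x) ):
    # log-likelihood of fictitious one-step transitions between sampled states
    P = transition_matrix(a, b)
    propagated = p_emp @ P
    return float(np.sum(p_emp * np.log(propagated)))

# Grid search over a, with b fixed at its true value purely to keep the
# illustration one-dimensional
grid = np.linspace(0.05, 0.95, 181)
a_hat = grid[np.argmax([propagator_likelihood(a, b_true) for a in grid])]
print(a_hat)  # close to a_true = 0.3
```

The maximum sits where the empirical distribution is (approximately) invariant under the propagator, which is exactly the stationarity condition the steady-state samples encode.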

  16. Source Characteristics of the Northern Longitudinal Valley, Taiwan Derived from Broadband Strong-Motion Simulation

    NASA Astrophysics Data System (ADS)

    Wen, Yi-Ying

    2018-02-01

    The 2014 ML 5.9 Fanglin earthquake occurred at the northern end of the aftershock distribution of the 2013 ML 6.4 Ruisui event and caused strong ground shaking and some damage in the northern part of the Longitudinal Valley. We carried out a strong-motion simulation of the 2014 Fanglin event in the broadband frequency range (0.4-10 Hz) using the empirical Green's function method and then integrated the source models to investigate the source characteristics of the 2013 Ruisui and 2014 Fanglin events. The results show that the strong-motion generation area of the 2013 Ruisui event is smaller than the empirical estimate for inland crustal earthquakes, whereas that of the 2014 Fanglin event is comparable with it, indicating different faulting behaviors. Furthermore, the localized high-PGV patch might be caused by radiated energy amplified by the local low-velocity structure in the northern Longitudinal Valley. Further study is required to build up knowledge of the potential seismic hazard related to moderate-to-large events in the various seismogenic areas of Taiwan.

  17. Does perceived risk influence the effects of message framing? A new investigation of a widely held notion.

    PubMed

    Van 't Riet, Jonathan; Cox, Anthony D; Cox, Dena; Zimet, Gregory D; De Bruijn, Gert-Jan; Van den Putte, Bas; De Vries, Hein; Werrij, Marieke Q; Ruiter, Robert A C

    2014-01-01

    Health-promoting messages can be framed in terms of the beneficial consequences of healthy behaviour (gain-framed messages) or the detrimental consequences of unhealthy behaviour (loss-framed messages). An influential notion holds that the perceived risk associated with the recommended behaviour determines the relative persuasiveness of gain- and loss-framed messages. This 'risk-framing hypothesis', as we call it, was derived from prospect theory, has been central to health message framing research for the last two decades, and does not cease to appeal to researchers. The present paper examines the validity of the risk-framing hypothesis. We performed six empirical studies on the interaction between perceived risk and message framing. These studies were conducted in two different countries and employed framed messages targeting skin cancer prevention and detection, physical activity, breast self-examination and vaccination behaviour. Behavioural intention served as the outcome measure. None of these studies found evidence in support of the risk-framing hypothesis. We conclude that the empirical evidence in favour of the hypothesis is weak and discuss the ramifications of this for future message framing research.

  18. More sound of church bells: Authors' correction

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kasper, Lutz; Burde, Jan-Philipp

    2016-01-01

    In the recently published article "The Sound of Church Bells: Tracking Down the Secret of a Traditional Arts and Crafts Trade," the bell frequencies have been erroneously oversimplified. The problem affects Eqs. (2) and (3), which were derived from the elementary "coffee mug model" and in which we used the speed of sound in air. However, this does not make sense from a physical point of view, since air only acts as a sound carrier, not as a sound source in the case of bells. Due to the excellent fit of the theoretical model with the empirical data, we unfortunately failed to notice this error before publication. However, all other equations, e.g., the introduction of the correction factor in Eq. (4) and the estimation of the mass in Eqs. (5) and (6) are not affected by this error, since they represent empirical models. However, it is unfortunate to introduce the speed of sound in air as a constant in Eqs. (4) and (6). Instead, we suggest the following simple rule of thumb for relating the radius of a church bell R to its humming frequency fhum:

  19. Brain Modules, Personality Layers, Planes of Being, Spiral Structures, and the Equally Implausible Distinction between TCI-R "Temperament" and "Character" Scales: A Reply to Cloninger.

    PubMed

    Farmer, Richard F; Goldberg, Lewis R

    2008-09-01

    In this reply we address comments by Cloninger (this issue) related to our report (Farmer & Goldberg, this issue) on the psychometric properties of the revised Temperament and Character Inventory (TCI-R) and a short inventory derivative, the TCI-140. Even though Cloninger's psychobiological model has undergone substantial theoretical modifications, the relevance of these changes for the evaluation and use of the TCI-R remains unclear. Aspects of TCI-R assessment also appear to be theoretically and empirically incongruent with Cloninger's assertion that TCI-R personality domains are non-linear and dynamic in nature. Several other core assumptions from the psychobiological model, including this most recent iteration, are non-falsifiable, inconsistently supported, or have no apparent empirical basis. Although researchers using the TCI and TCI-R have frequently accepted the temperament/character distinction and associated theoretical ramifications, for example, we find little overall support for the differentiation of TCI-R domains into these two basic categories. The implications of these observations for TCI-R assessment are briefly discussed.

  20. Frequency domain photothermoacoustic signal amplitude dependence on the optical properties of water: turbid polyvinyl chloride-plastisol system.

    PubMed

    Spirou, Gloria M; Mandelis, Andreas; Vitkin, I Alex; Whelan, William M

    2008-05-10

    Photoacoustic (more precisely, photothermoacoustic) signals generated by the absorption of photons can be related to the incident laser fluence rate. The dependence of frequency-domain photoacoustic (FD-PA) signals on the optical absorption coefficient (μ_a) and the effective attenuation coefficient (μ_eff) of a turbid medium [polyvinyl chloride-plastisol (PVCP)] with tissue-like optical properties was measured, and empirical relationships between these optical properties, the photoacoustic (PA) signal amplitude, and the laser fluence rate were derived for the water/PVCP system with and without optical scatterers. The measured relationships between these sample optical properties and the PA signal amplitude were found to be linear, consistent with FD-PA theory: μ_a = a(A/Φ) − b and μ_eff = c(A/Φ) + d, where Φ is the laser fluence, A is the FD-PA amplitude, and a, …, d are empirical coefficients determined from experiment using linear frequency-swept modulation and a lock-in heterodyne detection technique. This quantitative technique can easily be used to measure the optical properties of general turbid media using FD-PA.
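A sketch of how a linear calibration of this kind could be fitted and then inverted; the calibration numbers are made up for illustration and stand in for the paper's measured absorption coefficients and amplitude-to-fluence ratios.

```python
import numpy as np

# Hypothetical calibration data: known absorption coefficients mu_a (1/cm)
# and the measured ratio of FD-PA amplitude A to laser fluence Phi
mu_a = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # 1/cm
A_over_Phi = np.array([0.9, 1.4, 2.4, 3.4, 4.4])  # arbitrary units

# Fit the empirical linear relation mu_a = a*(A/Phi) - b
a, minus_b = np.polyfit(A_over_Phi, mu_a, 1)
b = -minus_b
print(a, b)

def estimate_mu_a(ratio):
    # Invert the calibration: estimate mu_a of an unknown sample from
    # its measured A/Phi ratio
    return a * ratio - b
```

The same two-point-slope procedure would apply to the μ_eff = c(A/Φ) + d relation with its own calibration set.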

  1. Status Concern and Relative Deprivation in China: Measures, Empirical Evidence and Economic and Policy Implications

    PubMed Central

    Xi, CHEN

    2017-01-01

    Status concern and feelings of relative deprivation affect individual behaviour and well-being. Traditional norms and the alarming inequality in China have made relative deprivation increasingly intense for the Chinese population. This article reviews empirical literature on China that attempts to test the relative deprivation hypothesis, and also reviews the origins and pathways of relative deprivation, compares its economic measures in the literature and summarises the scientific findings. Drawing from solid empirical evidence, the author discusses the important policy implications on redistribution, official regulations and grassroots sanctions, and relative poverty alleviation. PMID:29033479

  2. Status Concern and Relative Deprivation in China: Measures, Empirical Evidence and Economic and Policy Implications.

    PubMed

    Xi, Chen

    2016-02-01

    Status concern and feelings of relative deprivation affect individual behaviour and well-being. Traditional norms and the alarming inequality in China have made relative deprivation increasingly intense for the Chinese population. This article reviews empirical literature on China that attempts to test the relative deprivation hypothesis, and also reviews the origins and pathways of relative deprivation, compares its economic measures in the literature and summarises the scientific findings. Drawing from solid empirical evidence, the author discusses the important policy implications on redistribution, official regulations and grassroots sanctions, and relative poverty alleviation.

  3. Quantifying the uncertainties of aerosol indirect effects and impacts on decadal-scale climate variability in NCAR CAM5 and CESM1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sungsu

    2014-12-12

    The main goal of this project is to systematically quantify the major uncertainties of aerosol indirect effects due to the treatment of moist turbulent processes that drive aerosol activation, cloud macrophysics and microphysics in response to anthropogenic aerosol perturbations using CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch), who began work on Nov. 1, 2012. The first task the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is valid. It turned out that this assumption is not universally valid. Consequently, from the analysis of the LES we derived an empirical formulation relaxing the isotropic turbulence assumption used for CAM5 aerosol activation and ice nucleation, implemented it in CAM5/CESM1, tested it in single-column and global simulation modes, and examined how it changed aerosol indirect effects in CAM5/CESM1. These results were reported in the poster session of the 18th Annual CESM Workshop held in Breckenridge, CO, during Jun. 17-20, 2013. While we derived an empirical formulation from the analysis of a couple of LES cases in the first task, its general applicability was questionable because it was obtained from a limited number of LES simulations. The second task was to derive a more fundamental analytical formulation relating vertical velocity variance to TKE, starting from basic physical principles.
This was a challenging subject, but if successful it could be implemented directly in CAM5 as a practical parameterization and contribute substantially to achieving the project goal. Through intensive research over about one year, we found an appropriate mathematical formulation and worked to implement it in the CAM5 PBL and activation routines as a practical parameterized numerical code. During this process, however, the postdoc applied for and accepted a position in Sweden and left NCAR in August 2014. In Sweden, Dr. Anna Fitch is still working on this subject part-time, planning to finalize the research and write the paper in the near future.

  4. Imidazole derivatives as angiotensin II AT1 receptor blockers: Benchmarks, drug-like calculations and quantitative structure-activity relationships modeling

    NASA Astrophysics Data System (ADS)

    Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi

    2018-03-01

    We performed benchmark studies on the molecular geometry, electron properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. Then, we treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were done for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to design the relationships between molecular descriptor and the activity of imidazole derivatives. Results validate the derived QSAR model.
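The multiple-linear-regression step of a QSAR study can be sketched as follows; the descriptor values and activities below are hypothetical placeholders, not the paper's imidazole data set, and the three descriptor columns are generic stand-ins for physicochemical descriptors.

```python
import numpy as np

# Hypothetical descriptor matrix: one row per imidazole derivative,
# columns are physicochemical descriptors (e.g. logP, dipole, volume)
X = np.array([
    [2.1, 3.5, 210.0],
    [2.8, 3.1, 225.0],
    [1.9, 4.0, 198.0],
    [3.2, 2.9, 240.0],
    [2.5, 3.3, 215.0],
    [2.0, 3.8, 205.0],
])
y = np.array([6.1, 6.8, 5.7, 7.2, 6.4, 5.9])  # hypothetical activities (pIC50)

# Multiple linear regression with an intercept column:
# activity ~ c0 + c1*d1 + c2*d2 + c3*d3
X_aug = np.column_stack([np.ones(len(X)), X])
coef, _, _, _ = np.linalg.lstsq(X_aug, y, rcond=None)

# Goodness of fit (R^2) on the training set
y_pred = X_aug @ coef
ss_res = np.sum((y - y_pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(coef, r2)
```

In a real QSAR workflow the model would additionally be validated on held-out compounds (e.g. cross-validated q²) rather than judged on training R² alone.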

  5. Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles.

    PubMed

    Beller, Sieghard; Bender, Andrea

    2017-01-01

    Contemporary psychology regards itself as an empirical science, at least in most of its subfields. Theory building and development are often considered critical to the sciences, but the extent to which psychology can be cast in this way is under debate. According to those advocating a strong role of theory, studies should be designed to test hypotheses derived from theories (theory-driven) and ideally should yield findings that stimulate hypothesis formation and theory building (theory-generating). The alternative position values empirical findings over theories as the lasting legacy of science. To investigate which role theory actually plays in current research practice, we analyse references to theory in the complete set of 2,046 articles accepted for publication in Frontiers of Psychology in 2015. This sample of articles, while not representative in the strictest sense, covers a broad range of sub-disciplines, both basic and applied, and a broad range of article types, including research articles, reviews, hypothesis & theory, and commentaries. For the titles, keyword lists, and abstracts in this sample, we conducted a text search for terms related to empiricism and theory, assessed the frequency and scope of usage for six theory-related terms, and analyzed their distribution over different article types and subsections of the journal. The results indicate substantially lower frequencies of theoretical than empirical terms, with references to a specific (named) theory in less than 10% of the sample and references to any of even the most frequently mentioned theories in less than 0.5% of the sample. In conclusion, we discuss possible limitations of our study and the prospect of theoretical advancement.

  6. Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles

    PubMed Central

    Beller, Sieghard; Bender, Andrea

    2017-01-01

    Contemporary psychology regards itself as an empirical science, at least in most of its subfields. Theory building and development are often considered critical to the sciences, but the extent to which psychology can be cast in this way is under debate. According to those advocating a strong role of theory, studies should be designed to test hypotheses derived from theories (theory-driven) and ideally should yield findings that stimulate hypothesis formation and theory building (theory-generating). The alternative position values empirical findings over theories as the lasting legacy of science. To investigate which role theory actually plays in current research practice, we analyse references to theory in the complete set of 2,046 articles accepted for publication in Frontiers of Psychology in 2015. This sample of articles, while not representative in the strictest sense, covers a broad range of sub-disciplines, both basic and applied, and a broad range of article types, including research articles, reviews, hypothesis & theory, and commentaries. For the titles, keyword lists, and abstracts in this sample, we conducted a text search for terms related to empiricism and theory, assessed the frequency and scope of usage for six theory-related terms, and analyzed their distribution over different article types and subsections of the journal. The results indicate substantially lower frequencies of theoretical than empirical terms, with references to a specific (named) theory in less than 10% of the sample and references to any of even the most frequently mentioned theories in less than 0.5% of the sample. In conclusion, we discuss possible limitations of our study and the prospect of theoretical advancement. PMID:28642728

  7. Psychosocial stressors and the prognosis of major depression: a test of Axis IV

    PubMed Central

    Gilman, Stephen E.; Trinh, Nhi-Ha; Smoller, Jordan W.; Fava, Maurizio; Murphy, Jane M.; Breslau, Joshua

    2013-01-01

    Background Axis IV is for reporting “psychosocial and environmental problems that may affect the diagnosis, treatment, and prognosis of mental disorders.” No studies have examined the prognostic value of Axis IV in DSM-IV. Method We analyzed data from 2,497 participants in the National Epidemiologic Survey on Alcohol and Related Conditions with major depressive episode (MDE). We hypothesized that psychosocial stressors predict a poor prognosis of MDE. Secondarily, we hypothesized that psychosocial stressors predict a poor prognosis of anxiety and substance use disorders. Stressors were defined according to DSM-IV’s taxonomy, and empirically using latent class analysis. Results Primary support group problems, occupational problems, and childhood adversity increased the risks of depressive episodes and suicidal ideation by 20–30%. Associations of the empirically derived classes of stressors with depression were larger in magnitude. Economic stressors conferred a 1.5-fold increase in risk for a depressive episode (CI=1.2–1.9); financial and interpersonal instability conferred a 1.3-fold increased risk of recurrent depression (CI=1.1–1.6). These two classes of stressors also predicted the recurrence of anxiety and substance use disorders. Stressors were not related to suicidal ideation independent from depression severity. Conclusions Psychosocial and environmental problems are associated with the prognosis of MDE and other Axis I disorders. Though DSM-IV’s taxonomy of stressors stands to be improved, these results provide empirical support for the prognostic value of Axis IV. Future work is needed to determine the reliability of Axis IV assessments in clinical practice, and the usefulness of this information to improving the clinical course of mental disorders. PMID:22640506

  8. Secondary organic aerosol (SOA) derived from isoprene epoxydiols: Insights into formation, aging and distribution over the continental US from the DC3 and SEAC4RS campaigns

    NASA Astrophysics Data System (ADS)

    Campuzano Jost, P.; Palm, B. B.; Day, D. A.; Hu, W.; Ortega, A. M.; Jimenez, J. L.; Liao, J.; Froyd, K. D.; Pollack, I. B.; Peischl, J.; Ryerson, T. B.; St Clair, J. M.; Crounse, J.; Wennberg, P. O.; Mikoviny, T.; Wisthaler, A.; Ziemba, L. D.; Anderson, B. E.

    2014-12-01

    Isoprene-derived SOA formation has been studied extensively in the laboratory. However, it is still unclear to what extent isoprene contributes to the overall SOA burden over the southeastern US, an area with both strong isoprene emissions and large discrepancies between modeled and observed aerosol optical depth. For the low-NO isoprene oxidation pathway, the key gas-phase intermediate is believed to be isoprene epoxide (IEPOX), which can be incorporated into the aerosol phase by either sulfate ester formation (IEPOX sulfate) or direct hydrolysis. As first suggested by Robinson et al., the SOA formed by this mechanism (IEPOX-SOA) has a characteristic fragmentation pattern when analyzed by an Aerodyne Aerosol Mass Spectrometer (AMS), with enhanced relative abundance of the C5H6O+ ion (fC5H6O). Based on data from previous ground campaigns and chamber studies, we have developed an empirical method to quantify IEPOX-SOA and have applied it to data from the DC3 and SEAC4RS aircraft campaigns that sampled the SE US during the spring of 2012 and the summer of 2013. We used Positive Matrix Factorization (PMF) to extract IEPOX-SOA factors that correlate well with sampling inside or downwind of high-isoprene-emitting areas and in general agree well with the IEPOX-SOA mass predicted by the empirical expression. According to this analysis, the empirical method performs well regardless of (at times very strong) BBOA or urban OA influences. On average, 17% of SOA in the SE US boundary layer was IEPOX-SOA. Overall, the highest concentrations of IEPOX-SOA were typically found at 1-2 km AGL, several hours downwind of the isoprene source areas, with high gas-phase IEPOX present. IEPOX-SOA was also detected up to altitudes of 6 km, with a clear trend toward more aged aerosol at altitude, likely a combination of chemical aging and physical airmass mixing.
The unique instrument package aboard the NASA-DC8 allows us to examine the influence of multiple factors (aerosol acidity, aerosol water content, sulfate mass fraction, isoprene and terpene source strength) to the relative and absolute contribution of IEPOX-SOA to the total OA burden. In particular, the IEPOX-sulfate measurement from the PALMS instrument was used to estimate the relative contribution of the organosulfate channel to the IEPOX-SOA formation.

  9. Interest Rates and Coupon Bonds in Quantum Finance

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2009-09-01

    1. Synopsis; 2. Interest rates and coupon bonds; 3. Options and option theory; 4. Interest rate and coupon bond options; 5. Quantum field theory of bond forward interest rates; 6. Libor Market Model of interest rates; 7. Empirical analysis of forward interest rates; 8. Libor Market Model of interest rate options; 9. Numeraires for bond forward interest rates; 10. Empirical analysis of interest rate caps; 11. Coupon bond European and Asian options; 12. Empirical analysis of interest rate swaptions; 13. Correlation of coupon bond options; 14. Hedging interest rate options; 15. Interest rate Hamiltonian and option theory; 16. American options for coupon bonds and interest rates; 17. Hamiltonian derivation of coupon bond options; Appendixes; Glossaries; List of symbols; Reference; Index.

  10. Crippling Strength of Axially Loaded Rods

    NASA Technical Reports Server (NTRS)

    Natalis, FR

    1921-01-01

    A new empirical formula was developed that holds good for any length and any material of a rod, and agrees well with the results of extensive strength tests. To facilitate calculations, three tables are included, giving the crippling load for solid and hollow sectioned wooden rods of different thickness and length, as well as for steel tubes manufactured according to the standards of Army Air Services Inspection. Further, a graphical method of calculation of the breaking load is derived in which a single curve is employed for determination of the allowable fiber stress. Finally, the theory is discussed of the elastic curve for a rod subject to compression, according to which no deflection occurs, and the apparent contradiction of this conclusion by test results is attributed to the fact that the rods under test are not perfectly straight, or that the wall thickness and the material are not uniform. Under the assumption of an eccentric rod having a slight initial bend according to a sine curve, a simple formula for the deflection is derived, which shows a surprising agreement with test results. From this a further formula is derived for the determination of the allowable load on an eccentric rod. The resulting relations are made clearer by means of a graphical representation of the relation of the moments of the outer and inner forces to the deflection.

  11. The Development and Evaluation of Color Systems for Airborne Applications: Fundamental Visual, Perceptual, and Display Systems Considerations.

    DTIC Science & Technology

    1986-02-01

    Only fragments of the report's figure list are indexed in place of an abstract: ellipses derived from both MacAdam's empirically derived color-matching standard deviations and Stiles' line-element predictions; CIELUV color coordinates; derivation of CIE (L*, u*, v*) coordinates; three-dimensional representation of CIELUV color-difference estimates; application of CIELUV for estimating color difference on an electronic color display; and color performance envelopes and optimized...

  12. Spatial and temporal patterns of xylem sap pH derived from stems and twigs of Populus deltoides L.

    Treesearch

    Doug Aubrey; Justin Boyles; Laura Krysinsky; Robert Teskey

    2011-01-01

    Xylem sap pH (pHX) is critical in determining the quantity of inorganic carbon dissolved in xylem solution from gaseous [CO2] measurements. Studies of internal carbon transport have generally assumed that pHX derived from stems and twigs is similar and that pHX remains constant through time; however, no empirical studies have investigated these assumptions. If any of...

  13. Stellar Absorption Line Analysis of Local Star-forming Galaxies: The Relation between Stellar Mass, Metallicity, Dust Attenuation, and Star Formation Rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabran Zahid, H.; Kudritzki, Rolf-Peter; Ho, I-Ting

    We analyze the optical continuum of star-forming galaxies in the Sloan Digital Sky Survey by fitting stacked spectra with stellar population synthesis models to investigate the relation between stellar mass, stellar metallicity, dust attenuation, and star formation rate. We fit models calculated with star formation and chemical evolution histories that are derived empirically from multi-epoch observations of the stellar mass–star formation rate and the stellar mass–gas-phase metallicity relations, respectively. We also fit linear combinations of single-burst models with a range of metallicities and ages. Star formation and chemical evolution histories are unconstrained for these models. The stellar mass–stellar metallicity relations obtained from the two methods agree with the relation measured from individual supergiant stars in nearby galaxies. These relations are also consistent with the relation obtained from emission-line analysis of gas-phase metallicity after accounting for systematic offsets in the gas-phase metallicity. We measure dust attenuation of the stellar continuum and show that its dependence on stellar mass and star formation rate is consistent with previously reported results derived from nebular emission lines. However, stellar continuum attenuation is smaller than nebular emission line attenuation. The continuum-to-nebular attenuation ratio depends on stellar mass and is smaller in more massive galaxies. Our consistent analysis of stellar continuum and nebular emission lines paves the way for a comprehensive investigation of stellar metallicities of star-forming and quiescent galaxies.

  14. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
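Class 2 of the review, a mixture of independent multivariate Poissons, is easy to demonstrate: within each component the coordinates are independent, yet mixing components with different rate vectors induces positive dependence between the counts. The rate vectors and weights below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mixture components, each an independent bivariate Poisson
lam = np.array([[1.0, 2.0],    # component 0 rate vector
                [8.0, 10.0]])  # component 1 rate vector
weights = np.array([0.5, 0.5])

n = 50000
z = rng.choice(len(weights), size=n, p=weights)  # latent component labels
counts = rng.poisson(lam[z])                     # shape (n, 2)

# Marginals are overdispersed relative to a single Poisson, and the two
# coordinates are correlated purely through the shared latent component
corr = np.corrcoef(counts.T)[0, 1]
print(corr)
```

This captures why such mixtures suit data like word counts or crime statistics, where a shared latent regime (topic, neighborhood) drives all coordinates up or down together.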

  15. A Moisture Function of Soil Heterotrophic Respiration Derived from Pore-scale Mechanisms

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Todd-Brown, K. E.; Bond-Lamberty, B. P.; Bailey, V.; Liu, C.

    2017-12-01

    Soil heterotrophic respiration (HR) is an important process controlling carbon (C) flux, but its response to changes in soil water content (θ) is poorly understood. Earth system models (ESMs) use empirical moisture functions developed from specific sites to describe the HR-θ relationship in soils, introducing significant uncertainty. Generalized models derived from the mechanisms that control substrate availability and microbial respiration are thus urgently needed. Here we derive, present, and test a novel moisture function fp developed from pore-scale mechanisms. This fp encapsulates the primary physicochemical and biological processes controlling the HR response to moisture variation in soils. We tested fp against a wide range of published data for different soil types and found that fp reliably predicted diverse HR-θ relationships. A mathematical relationship between the parameters in fp and macroscopic soil properties such as porosity and organic C content was also established, enabling fp to be estimated from soil properties. Compared with the empirical moisture functions used in ESMs, the derived fp could reduce uncertainty in predicting the response of the soil organic C stock to climate change. In addition, this work is one of the first studies to upscale a mechanistic soil HR model based on pore-scale processes, thus linking pore-scale mechanisms with macroscale observations.

  16. Solar-wind predictions for the Parker Solar Probe orbit. Near-Sun extrapolations derived from an empirical solar-wind model based on Helios and OMNI observations

    NASA Astrophysics Data System (ADS)

    Venzmer, M. S.; Bothmer, V.

    2018-03-01

Context. The Parker Solar Probe (PSP; formerly Solar Probe Plus) mission will be humanity's first in situ exploration of the solar corona, with closest perihelia at 9.86 solar radii (R⊙) distance to the Sun. It will help answer hitherto unresolved questions on the heating of the solar corona and the source and acceleration of the solar wind and solar energetic particles. The scope of this study is to model the solar-wind environment for PSP's unprecedented distances in its prime mission phase during the years 2018 to 2025. The study is performed within the Coronagraphic German And US SolarProbePlus Survey (CGAUSS), which is the German contribution to the PSP mission as part of the Wide-field Imager for Solar PRobe. Aims: We present an empirical solar-wind model for the inner heliosphere which is derived from OMNI and Helios data. The German-US space probes Helios 1 and Helios 2 flew in the 1970s and observed solar wind in the ecliptic within heliocentric distances of 0.29 au to 0.98 au. The OMNI database consists of multi-spacecraft intercalibrated in situ data obtained near 1 au over more than five solar cycles. The international sunspot number (SSN) and its predictions are used to derive dependencies of the major solar-wind parameters on solar activity and to forecast their properties for the PSP mission. Methods: The frequency distributions for the solar-wind key parameters, magnetic field strength, proton velocity, density, and temperature, are represented by lognormal functions. In addition, we consider the velocity distribution's bi-component shape, consisting of a slower and a faster part. Functional relations to solar activity are compiled using the OMNI data by correlating and fitting the frequency distributions with the SSN. Further, based on the combined data set from both Helios probes, the parameters' frequency distributions are fitted with respect to solar distance to obtain power-law dependencies. 
Thus an empirical solar-wind model for the inner heliosphere confined to the ecliptic region is derived, accounting for solar activity and for solar distance through adequate shifts of the lognormal distributions. Finally, the inclusion of SSN predictions and the extrapolation down to PSP's perihelion region enables us to estimate the solar-wind environment for PSP's planned trajectory during its mission duration. Results: The CGAUSS empirical solar-wind model for PSP yields dependencies on solar activity and solar distance for the solar-wind parameters' frequency distributions. The estimated solar-wind median values for PSP's first perihelion in 2018 at a solar distance of 0.16 au are 87 nT, 340 km s-1, 214 cm-3, and 503 000 K. The estimates for PSP's first closest perihelion, occurring in 2024 at 0.046 au (9.86 R⊙), are 943 nT, 290 km s-1, 2951 cm-3, and 1 930 000 K. Since the modeled velocity and temperature values below approximately 20 R⊙ appear overestimated in comparison with existing observations, this suggests that PSP will directly measure solar-wind acceleration and heating processes below 20 R⊙ as planned.
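
The model's structure, lognormal frequency distributions whose medians are shifted with solar distance by power laws, can be sketched as follows; the r^-2 density exponent and the 1 au median used here are textbook-style placeholder values, not the fitted CGAUSS parameters:

```python
# Sketch of a power-law shift of a lognormal distribution's median with
# heliocentric distance r (in au): median(r) = median_1au * r**p.
# Exponent p = -2 (spherical expansion of density) and the 1 au median
# are illustrative assumptions only.
def extrapolate_median(median_1au, p, r_au):
    return median_1au * r_au ** p

n_median_1au = 6.0                                   # proton density at 1 au, cm^-3 (illustrative)
print(extrapolate_median(n_median_1au, -2.0, 0.16))  # 234.375 cm^-3 under the r^-2 guess
```

Even this crude scaling lands in the same range as the quoted 214 cm^-3 at 0.16 au, which is why distance power laws anchor the extrapolation.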

  17. Empirical yield tables for Michigan.

    Treesearch

    Jerold T. Hahn; Joan M. Stelman

    1984-01-01

    Describes the tables derived from the 1980 Forest Survey of Michigan and presents ways the tables can be used. These tables are broken down according to Michigan's four Forest Survey Units, 14 forest types, and 5 site-index classes.

  18. Leadership Development and Self-Development: An Empirical Study.

    ERIC Educational Resources Information Center

    McCollum, Bruce

    1999-01-01

    Describes a theory about consciousness and leadership practices derived from the Hindu Vedas. Shows how subjects who learned Transcendental Meditation as a self-development technique improved their leadership behaviors as measured by the Leadership Practices Inventory. (SK)

  19. Estimation of two ordered mean residual lifetime functions.

    PubMed

    Ebrahimi, N

    1993-06-01

    In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
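
A sketch of the first estimator's idea, assuming complete (uncensored) data and a made-up sample: the empirical MRL at t averages the exceedances over t, and the estimate is then truncated at the known lower bound:

```python
import numpy as np

# Empirical mean residual life: MRL(t) = average of (X - t) over observations
# exceeding t. When a known lower-bound function g(t) is available, the
# bounded estimator replaces values below g(t) by g(t) itself.
# The sample and the bound are invented for illustration.
def empirical_mrl(x, t):
    tail = x[x > t]
    return tail.mean() - t if tail.size else 0.0

def bounded_mrl(x, t, lower_bound):
    return max(empirical_mrl(x, t), lower_bound)

x = np.array([0.5, 1.2, 2.0, 3.5, 4.1, 6.3])
print(empirical_mrl(x, 2.0))        # mean of (3.5, 4.1, 6.3) minus 2.0, ~2.633
print(bounded_mrl(x, 2.0, 3.0))     # 3.0: truncated at the known bound
```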

  20. Statistical parameters of thermally driven turbulent anabatic flow

    NASA Astrophysics Data System (ADS)

    Hilel, Roni; Liberzon, Dan

    2016-11-01

Field measurements of thermally driven turbulent anabatic flow over a moderate slope are reported. A collocated hot-film/sonic anemometer (Combo) resolved the finer scales of the flow using a neural-network-based in situ calibration technique. Eight days of continuous measurements of wind and temperature fluctuations revealed a diurnal pattern of unstable stratification that forced the development of a highly turbulent, unidirectional upslope flow. Empirical fits of important turbulence statistics were obtained from the velocity fluctuations' time series, alongside fully resolved spectra of velocity field components and characteristic length scales. TKE and TI showed linear dependence on Re, while velocity derivative skewness and dissipation rates indicated the anisotropic nature of the flow. Empirical fits of normalized velocity fluctuation power density spectra were derived, as the spectral shapes exhibited a high level of similarity. A bursting phenomenon was detected 15% of the total time. Frequency of occurrence, spectral characteristics, and a possible generation mechanism are discussed. BSF Grant #2014075.

  1. AAPI college students' willingness to seek counseling: the role of culture, stigma, and attitudes.

    PubMed

    Choi, Na-Yeun; Miller, Matthew J

    2014-07-01

    This study tested 4 theoretically and empirically derived structural equation models of Asian, Asian American, and Pacific Islanders' willingness to seek counseling with a sample of 278 college students. The models represented competing hypotheses regarding the manner in which Asian cultural values, European American cultural values, public stigma, stigma by close others, self-stigma, and attitudes toward seeking professional help related to willingness to seek counseling. We found that Asian and European American cultural values differentially related to willingness to seek counseling indirectly through specific indirect pathways (public stigma, stigma by close others, self-stigma, and attitudes toward seeking professional help). Our results also showed that the magnitude of model-implied relationships did not vary as a function of generational status. Study limitations, future directions for research, and implications for counseling are discussed.

  2. Physics, stability, and dynamics of supply networks

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Lämmer, Stefan; Seidel, Thomas; Šeba, Pétr; Płatkowski, Tadeusz

    2004-12-01

    We show how to treat supply networks as physical transport problems governed by balance equations and equations for the adaptation of production speeds. Although the nonlinear behavior is different, the linearized set of coupled differential equations is formally related to those of mechanical or electrical oscillator networks. Supply networks possess interesting features due to their complex topology and directed links. We derive analytical conditions for absolute and convective instabilities. The empirically observed “bullwhip effect” in supply chains is explained as a form of convective instability based on resonance effects. Moreover, it is generalized to arbitrary supply networks. Their related eigenvalues are usually complex, depending on the network structure (even without loops). Therefore, their generic behavior is characterized by damped or growing oscillations. We also show that regular distribution networks possess two negative eigenvalues only, but perturbations generate a spectrum of complex eigenvalues.
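
The linear-stability test referred to above can be illustrated on a toy two-node Jacobian (not the paper's model): complex eigenvalues signal damped or growing oscillations, and a negative largest real part means (absolute) linear stability:

```python
import numpy as np

# Toy linearized supply dynamics: eigenvalues of the Jacobian J decide
# stability. Here J has eigenvalues -1 ± 0.5i: the nonzero imaginary part
# means oscillatory response, the negative real part means the oscillations
# are damped. The matrix values are illustrative only.
J = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])
eig = np.linalg.eigvals(J)
print(eig)                      # -1 ± 0.5i: complex -> damped oscillations
print(max(eig.real) < 0)        # True -> linearly stable
```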

  3. Cognitive Flexibility through Metastable Neural Dynamics Is Disrupted by Damage to the Structural Connectome.

    PubMed

    Hellyer, Peter J; Scott, Gregory; Shanahan, Murray; Sharp, David J; Leech, Robert

    2015-06-17

    Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome. Copyright © 2015 the authors 0270-6474/15/359050-14$15.00/0.
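
One widely used operationalization of metastability (which may differ in detail from the measure used in the paper) is the variance over time of the Kuramoto synchrony order parameter R(t); a toy all-to-all coupling matrix stands in here for empirically derived connectome data:

```python
import numpy as np

# Kuramoto sketch: simulate coupled phase oscillators, record the global
# synchrony R(t) = |mean(exp(i*theta))|, and take its variance over time as
# a metastability index (high variance = the system wanders between
# integrated and segregated states). All parameters are illustrative.
def kuramoto_metastability(omega, K, dt=0.05, steps=4000, seed=1):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, omega.size)
    R = np.empty(steps)
    for i in range(steps):
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + coupling)
        R[i] = np.abs(np.exp(1j * theta).mean())   # global synchrony R(t)
    return R.var()                                  # metastability index

omega = np.linspace(0.8, 1.2, 8)                    # heterogeneous frequencies
K = np.full((8, 8), 0.05)                           # toy all-to-all coupling
print(kuramoto_metastability(omega, K))
```

In the study's setting, K would be replaced by the diffusion-MRI-derived structural connectivity, and reduced metastability after TBI corresponds to a smaller index.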

  4. Universal relations for range corrections to Efimov features

    DOE PAGES

    Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...

    2015-09-09

In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.

  5. A reevaluation of spectral ratios for lunar mare TiO2 mapping

    NASA Technical Reports Server (NTRS)

    Johnson, Jeffrey R.; Larson, Stephen M.; Singer, Robert B.

    1991-01-01

The empirical relation established by Charette et al. (1974) between the 400/560-nm spectral ratio of mature mare soils and weight percent TiO2 has been used extensively to map titanium content in the lunar maria. Relative reflectance spectra of mare regions show that a reference wavelength further into the near-IR, e.g., above 700 nm, could be used in place of the 560-nm band to provide greater contrast (a greater range of ratio values) and hence a more sensitive indicator of titanium content. An analysis of 400/730-nm ratio values derived from both laboratory and telescopic relative reflectance spectra suggests that this ratio provides greater sensitivity to TiO2 content than the 400/560-nm ratio. The increased range of ratio values is manifested in higher contrast 400/730-nm ratio images compared to 400/560-nm ratio images. This potential improvement in sensitivity encourages a reevaluation of the original Charette et al. (1974) relation using the 400/730-nm ratio.

  6. Lithology-derived structure classification from the joint interpretation of magnetotelluric and seismic models

    USGS Publications Warehouse

    Bedrosian, P.A.; Maercklin, N.; Weckmann, U.; Bartov, Y.; Ryberg, T.; Ritter, O.

    2007-01-01

Magnetotelluric and seismic methods provide complementary information about the resistivity and velocity structure of the subsurface on similar scales and resolutions. No global relation, however, exists between these parameters, and correlations are often valid for only a limited target area. Independently derived inverse models from these methods can be combined using a classification approach to map geologic structure. The method employed is based solely on the statistical correlation of physical properties in a joint parameter space and is independent of theoretical or empirical relations linking electrical and seismic parameters. Regions of high correlation (classes) between resistivity and velocity can in turn be mapped back and re-examined in depth section. The spatial distribution of these classes, and the boundaries between them, provide structural information not evident in the individual models. This method is applied to a 10 km long profile crossing the Dead Sea Transform in Jordan. Several prominent classes are identified with specific lithologies in accordance with local geology. An abrupt change in lithology across the fault, together with vertical uplift of the basement, suggests the fault is sub-vertical within the upper crust. © 2007 The Authors. Journal compilation © 2007 RAS.
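
The classification approach, clustering points in the joint resistivity-velocity parameter space and mapping the class labels back to depth section, can be sketched with a plain two-class k-means on synthetic points (the paper's actual clustering method may differ):

```python
import numpy as np

# Two-class Lloyd's k-means in a joint (log-resistivity, velocity) parameter
# space. The two synthetic point clouds stand in for two lithologies; all
# values are invented for illustration.
def kmeans2(pts, iters=20):
    centers = pts[[0, -1]].astype(float)      # init from opposite ends of the stack
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)             # assign each point to nearest center
        centers = np.array([pts[labels == k].mean(axis=0) for k in (0, 1)])
    return labels, centers

rng = np.random.default_rng(1)
a = rng.normal([1.0, 3.0], 0.1, (50, 2))      # e.g. a sedimentary class
b = rng.normal([3.0, 6.0], 0.1, (50, 2))      # e.g. a basement class
labels, _ = kmeans2(np.vstack([a, b]))
print(np.unique(labels[:50]), np.unique(labels[50:]))   # [0] [1]: clean separation
```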

  7. Calculation of vortex lift effect for cambered wings by the suction analogy

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Chang, J. F.

    1981-01-01

    An improved version of Woodward's chord plane aerodynamic panel method for subsonic and supersonic flow is developed for cambered wings exhibiting edge separated vortex flow, including those with leading edge vortex flaps. The exact relation between leading edge thrust and suction force in potential flow is derived. Instead of assuming the rotated suction force to be normal to wing surface at the leading edge, new orientation for the rotated suction force is determined through consideration of the momentum principle. The supersonic suction analogy method is improved by using an effective angle of attack defined through a semi-empirical method. Comparisons of predicted results with available data in subsonic and supersonic flow are presented.

  8. CIE, Vitamin D and DNA Damage: A Synergetic Study in Thessaloniki, Greece

    NASA Astrophysics Data System (ADS)

    Zempila, Melina Maria; Taylor, Michael; Fountoulakis, Ilias; Koukouli, Maria Elissavet; Bais, Alkiviadis; Arola, Antii; van Geffen, Jos; van Weele, Michiel; van der A, Ronald; Kouremeti, Natalia; Kazadzis, Stelios; Meleti, Chariklia; Balis, Dimitrios

    2016-08-01

    The present study aims to validate different approaches for the estimation of three photobiological effective doses: the erythemal UV, the vitamin D and that for DNA damage, using high temporal resolution surface- based measurements of solar UV from 2005-2015. Data from a UV spectrophotometer, a multi-filter radiometer, and a UV radiation pyranometer that are located in Thessaloniki, Greece are used together with empirical relations, algorithms and models in order to calculate the desired quantities. In addition to the surface-based dose retrievals, OMI/Aura and the combined SCIAMACHY/Envisat and GOME/MetopA satellite products are also used in order to assess the accuracy of each method for deriving the photobiological doses.

  9. The new Kuznets cycle: a test of the Easterlin-Wachter-Wachter hypothesis.

    PubMed

    Ahlburg, D A

    1982-01-01

The aim of this paper is to evaluate the Easterlin-Wachter-Wachter model of the effect of the size of one generation on the size of the succeeding generation. An attempt is made "to identify and test empirically each component of the Easterlin-Wachter-Wachter model..., to show how the components collapse to give a closed demographic model of generation size, and to investigate the impacts of relative cohort size on the economic performance of a cohort." The models derived are then used to generate forecasts of the U.S. birth rate to the year 2050. The results provide support for the major components of the original model.

  10. PROPERTIES OF 42 SOLAR-TYPE KEPLER TARGETS FROM THE ASTEROSEISMIC MODELING PORTAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalfe, T. S.; Mathur, S.; Creevey, O. L.

    2014-10-01

Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal. We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass, and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.
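
For reference, the empirical scaling relations mentioned as the lower-precision baseline are commonly written as below; the solar calibration values are standard choices from the literature, not taken from this paper:

```python
# Standard asteroseismic scaling relations:
#   R/Rsun = (nu_max/nu_max_sun) * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^0.5
#   M/Msun = (nu_max/nu_max_sun)^3 * (dnu/dnu_sun)^-4 * (Teff/Teff_sun)^1.5
# Solar reference values (commonly used calibrations, assumed here):
NU_MAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0   # uHz, uHz, K

def scaling_radius(nu_max, dnu, teff):
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5

def scaling_mass(nu_max, dnu, teff):
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5

# Sanity check: solar inputs recover 1 R_sun and 1 M_sun.
print(scaling_radius(3090.0, 135.1, 5777.0), scaling_mass(3090.0, 135.1, 5777.0))  # 1.0 1.0
```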

  11. Curvature and the visual perception of shape: theory on information along object boundaries and the minima rule revisited.

    PubMed

    Lim, Ik Soo; Leek, E Charles

    2012-07-01

Previous empirical studies have shown that information along visual contours is concentrated in regions of high magnitude of curvature, and that, for closed contours, segments of negative curvature (i.e., concave segments) carry greater perceptual relevance than corresponding regions of positive curvature (i.e., convex segments). Lately, Feldman and Singh (2005, Psychological Review, 112, 243-252) proposed a mathematical derivation to yield information content as a function of curvature along a contour. Here, we highlight several fundamental errors in their derivation and in its associated implementation, which are problematic in both mathematical and psychological senses. Instead, we propose an alternative mathematical formulation for information measure of contour curvature that addresses these issues. Additionally, unlike in previous work, we extend this approach to 3-dimensional (3D) shape by providing a formal measure of information content for surface curvature and outline a modified version of the minima rule relating to part segmentation using curvature in 3D shape. Copyright 2012 APA, all rights reserved.

  12. Geological and geothermal investigations for HCMM-derived data. [hydrothermally altered areas in Yerington, Nevada

    NASA Technical Reports Server (NTRS)

    Lyon, R. J. P.; Prelat, A. E.; Kirk, R. (Principal Investigator)

    1981-01-01

An attempt was made to match HCMM- and U2HCMR-derived temperature data over two test sites of very local size to similar data collected in the field at nearly the same times. Results indicate that HCMM investigations using resolution cells of 500 m or so are best conducted with areally-extensive sites, rather than point observations. The excellent-quality day-VIS imagery is particularly useful for lineament studies, as is the DELTA-T imagery. Attempts to register the ground-observed temperatures (even for 0.5 sq mile targets) were unsuccessful due to excessive pixel-to-pixel noise in the HCMM data. Several computer models were explored and related to thermal parameter value changes with observed data. Unless quite complex models are used, with many parameters that can be observed (perhaps not even measured) only under remote sensing conditions (e.g., roughness, wind shear, etc.), the model outputs do not match the observed data. Empirical relationships may be the most readily studied.

  13. Lightning Scaling Laws Revisited

    NASA Technical Reports Server (NTRS)

    Boccippio, D. J.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.

  14. Transport of the Norwegian Atlantic current as determined from satellite altimetry

    NASA Technical Reports Server (NTRS)

    Pistek, Pavel; Johnson, Donald R.

    1992-01-01

    Relatively warm and salty North Atlantic surface waters flow through the Faeroe-Shetland Channel into the higher latitudes of the Nordic Seas, preserving an ice-free winter environment for much of the exterior coast of northern Europe. This flow was monitored along the Norwegian coast using Geosat altimetry on two ascending arcs during the Exact Repeat Mission in 1987-1989. Concurrent undertrack CTD surveys were used to fix a reference surface for the altimeter-derived SSH anomalies, in effect creating time series of alongtrack surface dynamic height topographies. Climatologic CTD casts were then used, with empirical orthogonal function (EOF) analysis, to derive relationships between historical surface dynamic heights and vertical temperature and salinity profiles. Applying these EOF relationships to the altimeter signals, mean transports of volume, heat, and salt were calculated at approximately 2.9 Sverdrups, 8.1 x 10 exp 11 KCal/s and 1.0 x 10 exp 8 Kg/s, respectively. Maximum transports occurred in February/March and minimum in July/August.
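
The EOF step, deriving dominant vertical structure modes from anomaly data, is standard principal-component analysis and can be sketched via an SVD on synthetic profiles (the hydrographic data here are random placeholders):

```python
import numpy as np

# Empirical orthogonal functions (EOFs) via SVD: remove the climatological
# mean, decompose the anomaly matrix, and read off spatial modes and the
# fraction of variance each mode explains. Synthetic 40 casts x 12 depth
# levels stand in for the CTD data.
rng = np.random.default_rng(0)
profiles = rng.standard_normal((40, 12))          # casts x depth levels
anom = profiles - profiles.mean(axis=0)           # anomalies about the mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                          # rows = vertical modes
variance_frac = s**2 / (s**2).sum()
print(variance_frac[:3])                           # leading modes' variance share
```

In the study, regressing surface dynamic height onto such modes provides the link from altimeter-derived heights to full T/S profiles and hence to transports.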

  15. Immune system participates in brain regeneration and restoration of reproduction in the earthworm Dendrobaena veneta.

    PubMed

    Molnar, Laszlo; Pollak, Edit; Skopek, Zuzanna; Gutt, Ewa; Kruk, Jerzy; Morgan, A John; Plytycz, Barbara

    2015-10-01

    Earthworm decerebration causes temporary inhibition of reproduction which is mediated by certain brain-derived neurohormones; thus, cocoon production is an apposite supravital marker of neurosecretory center functional recovery during brain regeneration. The core aim of the present study was to investigate aspects of the interactions of nervous and immune systems during brain regeneration in adult Dendrobaena veneta (Annelida; Oligochaeta). Surgical brain extirpation was combined, either with (i) maintenance of immune-competent coelomic cells (coelomocytes) achieved by surgery on prilocaine-anesthetized worms or (ii) prior extrusion of fluid-suspended coelomocytes by electrostimulation. Both brain renewal and cocoon output recovery were significantly faster in earthworms with relatively undisturbed coelomocyte counts compared with individuals where coelomocyte counts had been experimentally depleted. These observations provide empirical evidence that coelomocytes and/or coelomocyte-derived factors (e.g. riboflavin) participate in brain regeneration and, by implication, that there is close functional synergy between earthworm neural and immune systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Diagnostic classification of eating disorders in children and adolescents: How does DSM-IV-TR compare to empirically-derived categories?

    PubMed Central

    Eddy, Kamryn T.; le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.

    2009-01-01

Objective The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA) and compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method Eating disorder symptom data collected from 401 youth (ages 7–19; mean 15.14 ± 2.35y) seeking eating disorder treatment were included in LPA; general linear models were used to compare LP groups to DSM-IV-TR eating disorder categories on pre-treatment and outcome indices. Results Three LP groups were identified: LP1 (n=144), characterized by binge eating and purging (“Binge/purge”); LP2 (n=126), characterized by excessive exercise and extreme eating disorder cognitions (“Exercise-extreme cognitions”); and LP3 (n=131), characterized by minimal eating disorder behaviors and cognitions (“Minimal behaviors/cognitions”). Identified LPs imperfectly resembled DSM-IV-TR eating disorders. LP1 resembled bulimia nervosa; LP2 and LP3 broadly resembled anorexia nervosa with a relaxed weight criterion, differentiated by excessive exercise and severity of eating disorder cognitions. LP groups were more differentiated than the DSM-IV-TR categories across pre-treatment eating disorder and general psychopathology indices, as well as weight change at follow-up. Neither LP nor DSM-IV-TR categories predicted change in binge/purge behaviors. Validation analyses suggest these empirically-derived groups improve upon the current DSM-IV-TR categories. Conclusions In children and adolescents, revisions for DSM-V should consider recognition of patients with minimal cognitive eating disorder symptoms. PMID:20410717

  17. Annealed Scaling for a Charged Polymer

    NASA Astrophysics Data System (ADS)

    Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.

    2016-03-01

    This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. 
What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.

  18. Atmospheric density determination using high-accuracy satellite GPS data

    NASA Astrophysics Data System (ADS)

    Tingling, R.; Miao, J.; Liu, S.

    2017-12-01

Atmospheric drag is the main error source in the orbit determination and prediction of low Earth orbit (LEO) satellites; however, the empirical models used to account for the atmosphere often exhibit density errors of around 15-30%. Atmospheric density determination has thus become an important topic for atmospheric researchers. Based on the relation between the atmospheric drag force and the decay of the orbit semi-major axis, we derived atmospheric density along the trajectory of CHAMP with its Rapid Science Orbit (RSO) data. Three primary parameters are calculated, including the ratio of cross-sectional area to mass, the drag coefficient, and the decay of the semi-major axis caused by atmospheric drag. We also analyzed the sources of error and made a comparison between GPS-derived and reference density. Results on 2 Dec 2008 show that the mean error of GPS-derived density decreases from 29.21% to 9.20% when the time span adopted in the computation increases from 10 min to 50 min. Results for the whole of December indicate that when the time span meets the condition that the amplitude of the decay of the semi-major axis is much greater than its standard deviation, a density precision of 10% can be achieved.
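
A hedged sketch of the underlying drag relation: for a near-circular orbit the semi-major axis decays as da/dt = -ρ C_D (A/m) sqrt(μa), so a mean density follows from an observed decay Δa over a time span Δt. The satellite parameters below are illustrative, not the CHAMP values used in the paper:

```python
import math

# Invert the near-circular drag-decay relation for mean density:
#   rho = -delta_a / (dt * Cd * (A/m) * sqrt(mu * a))
# Cd and the area-to-mass ratio below are illustrative assumptions.
MU = 3.986004418e14            # Earth's GM, m^3/s^2

def density_from_decay(delta_a, dt, a, cd, area_over_mass):
    return -delta_a / (dt * cd * area_over_mass * math.sqrt(MU * a))

a = 6778e3                      # ~400 km altitude orbit, m
rho = density_from_decay(delta_a=-5.0, dt=3000.0, a=a,
                         cd=2.2, area_over_mass=0.004)
print(rho)                      # kg/m^3, of order 1e-12 at this altitude
```

Longer time spans average over noise in the RSO-derived semi-major axis, which is why the quoted errors shrink as the span grows from 10 min to 50 min.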

  19. Unraveling spurious properties of interaction networks with tailored random networks.

    PubMed

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures--known for their complex spatial and temporal dynamics--we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
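
The core point, that finite independent time series already yield thresholded "networks" with spurious edges, can be reproduced in a few lines (series count, length, and threshold are arbitrary choices):

```python
import numpy as np

# Independent series still show sizeable sample correlations when the record
# is short (std of r is roughly 1/sqrt(length)), so thresholding |r| produces
# edges even though there is no true interdependence.
rng = np.random.default_rng(42)
ts = rng.standard_normal((32, 200))        # 32 independent series, 200 samples
corr = np.corrcoef(ts)
np.fill_diagonal(corr, 0.0)                # ignore self-correlations
adj = np.abs(corr) > 0.15                  # threshold -> unweighted network
n_edges = adj.sum() // 2
print(n_edges)                             # nonzero despite independence
```

Comparing such networks' clustering coefficients against Erdős–Rényi surrogates is what makes them look spuriously "small-world"; the tailored random networks proposed in the paper are matched to this construction pipeline instead.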

  20. Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks

    PubMed Central

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdős–Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis. PMID:21850239

  1. Soil radium, soil gas radon and indoor radon empirical relationships to assist in post-closure impact assessment related to near-surface radioactive waste disposal.

    PubMed

    Appleton, J D; Cave, M R; Miles, J C H; Sumerling, T J

    2011-03-01

    Least squares (LS), Theil's (TS) and weighted total least squares (WTLS) regression analysis methods are used to develop empirical relationships between radium in the ground, radon in soil and radon in dwellings to assist in the post-closure assessment of indoor radon related to near-surface radioactive waste disposal at the Low Level Waste Repository in England. The data sets used are (i) estimated ²²⁶Ra in the < 2 mm fraction of topsoils (eRa226) derived from equivalent uranium (eU) from airborne gamma spectrometry data, (ii) eRa226 derived from measurements of uranium in soil geochemical samples, (iii) soil gas radon and (iv) indoor radon data. For models comparing indoor radon and (i) eRa226 derived from airborne eU data and (ii) soil gas radon data, some of the geological groupings have significant slopes. For these groupings there is reasonable agreement in slope and intercept between the three regression analysis methods (LS, TS and WTLS). Relationships between radon in dwellings and radium in the ground or radon in soil differ depending on the characteristics of the underlying geological units, with more permeable units having steeper slopes and higher indoor radon concentrations for a given radium or soil gas radon concentration in the ground. The regression models comparing indoor radon with soil gas radon have intercepts close to 5 Bq m⁻³ whilst the intercepts for those comparing indoor radon with eRa226 from airborne eU vary from about 20 Bq m⁻³ for a moderately permeable geological unit to about 40 Bq m⁻³ for highly permeable limestone, implying unrealistically high contributions to indoor radon from sources other than the ground. An intercept value of 5 Bq m⁻³ is assumed as an appropriate mean value for the UK for sources of indoor radon other than radon from the ground, based on examination of UK data. 
Comparison with published data used to derive an average indoor radon: soil ²²⁶Ra ratio shows that whereas the published data are generally clustered with no obvious correlation, the data from this study have substantially different relationships depending largely on the permeability of the underlying geology. Models for the relatively impermeable geological units plot parallel to the average indoor radon: soil ²²⁶Ra model but with lower indoor radon: soil ²²⁶Ra ratios, whilst the models for the permeable geological units plot parallel to the average indoor radon: soil ²²⁶Ra model but with higher than average indoor radon: soil ²²⁶Ra ratios. Copyright © 2010 Natural Environment Research Council. Published by Elsevier Ltd. All rights reserved.
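For readers unfamiliar with the regression methods compared in this entry, the contrast between ordinary least squares and Theil's robust slope estimator is easy to demonstrate with SciPy. The data below are synthetic, a hypothetical linear radon relationship with a few outlying dwellings, not the survey data used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)                     # e.g. soil gas radon (arbitrary units)
y = 5.0 + 2.0 * x + rng.normal(0, 0.5, x.size)  # e.g. indoor radon, true slope = 2
y[-3:] += 30.0                                  # three outlying dwellings

ls = stats.linregress(x, y)                     # ordinary least squares
ts_slope, ts_intercept, lo, hi = stats.theilslopes(y, x)  # Theil-Sen
```

Theil-Sen takes the median of all pairwise slopes, so a small fraction of outliers barely moves it, whereas the least-squares slope is pulled towards them; weighted total least squares (the third method in the abstract) additionally allows for errors in both variables.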

  2. New empirically-derived solar radiation pressure model for GPS satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y.; Kuang, D.

    2003-01-01

    Solar radiation pressure force is the second largest perturbation acting on GPS satellites, after the gravitational attraction from the Earth, Sun, and Moon. It is the largest error source in the modeling of GPS orbital dynamics.

  3. THEORETICAL METHODS FOR COMPUTING ELECTRICAL CONDITIONS IN WIRE-PLATE ELECTROSTATIC PRECIPITATORS

    EPA Science Inventory

    The paper describes a new semi-empirical, approximate theory for predicting electrical conditions. In the approximate theory, analytical expressions are derived for calculating voltage-current characteristics and electric potential, electric field, and space charge density distri...

  4. A survey and new measurements of ice vapor pressure at temperatures between 170 and 250K

    NASA Technical Reports Server (NTRS)

    Marti, James; Mauersberger, Konrad

    1993-01-01

    New measurements of ice vapor pressures at temperatures between 170 and 250 K are presented and published vapor pressure data are summarized. An empirical vapor pressure equation was derived and allows prediction of vapor pressures between 170 K and the triple point of water with an accuracy of approximately 2 percent. Predictions obtained agree, within experimental uncertainty, with the most reliable equation derived from thermodynamic principles.
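The resulting fit is usually quoted in the compact form below; the constants are the commonly cited values attributed to this work, reproduced here for illustration, so the sketch should be treated as a secondary quotation rather than the paper itself.

```python
import math

def ice_vapor_pressure(T):
    """Marti & Mauersberger (1993) empirical fit for the vapor pressure
    of ice in Pa, commonly quoted for roughly 170 K up to the triple
    point of water:  log10(p) = -2663.5 / T + 12.537."""
    return 10.0 ** (-2663.5 / T + 12.537)

p_triple = ice_vapor_pressure(273.16)  # should land near the ~611 Pa triple point
p_cold = ice_vapor_pressure(170.0)     # sub-millipascal pressures at 170 K
```

Recovering roughly the known triple-point pressure at 273.16 K is a quick sanity check on the two constants.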

  5. Extended Empirical Roadside Shadowing model from ACTS mobile measurements

    NASA Technical Reports Server (NTRS)

    Goldhirsh, Julius; Vogel, Wolfhard

    1995-01-01

    Employing multiple databases derived from land-mobile satellite measurements using the Advanced Communications Technology Satellite (ACTS) at 20 GHz, MARECS B-2 at 1.5 GHz, and helicopter measurements at 870 MHz and 1.5 GHz, the Empirical Roadside Shadowing (ERS) model has been extended. The new model, the Extended Empirical Roadside Shadowing (EERS) model, may now be employed at frequencies from UHF to 20 GHz, at elevation angles from 7 to 60 deg, and at percentages from 1 to 80 percent (0 dB fade). The EERS distributions are validated against measured ones and fade deviations associated with the model are assessed. A model is also presented for estimating the effects of foliage (or non-foliage) on 20 GHz distributions, given distributions from deciduous trees devoid of leaves (or in full foliage).

  6. A Proposed Change to ITU-R Recommendation 681

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1996-01-01

    Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.

  7. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  8. “Nobody tosses a dwarf!” The relation between the empirical and normative reexamined

    PubMed Central

    Leget, C.; Borry, P.; De Vries, R.

    2009-01-01

    This article discusses the relation between empirical and normative approaches in bioethics. The issue of dwarf tossing, while admittedly unusual, is chosen as point of departure because it challenges the reader to look upon several central bioethical themes – including human dignity, autonomy, and the protection of vulnerable people – with fresh eyes. After an overview of current approaches to the integration of empirical and normative ethics, we consider five ways that the empirical and normative can be brought together to speak to the problem of dwarf tossing: prescriptive applied ethics, theorist ethics, critical applied ethics, particularist ethics and integrated empirical ethics. We defend a position of critical applied ethics that allows for a two-way relation between empirical and normative theories. The approach we endorse acknowledges that a social practice can and should be judged by both the gathering of empirical data and by the normative ethics. Critical applied ethics uses a five stage process that includes: (a) determination of the problem, (b) description of the problem, (c) empirical study of effects and alternatives, (d) normative weighing and (e) evaluation of the effects of a decision. In each stage, we explore the perspective from both the empirical (sociological) and the normative ethical poles that, in our view, should operate as two independent focuses of the ellipse that is called bioethics. We conclude by applying our five stage critical applied ethics to the example of dwarf tossing. PMID:19338523

  9. Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1969-01-01

    A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.

  10. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    PubMed

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Analysis of the Effects of Thermal Environment on Optical Systems for Navigation Guidance and Control in Supersonic Aircraft Based on Empirical Equations

    PubMed Central

    Cheng, Xuemin; Yang, Yikang; Hao, Qun

    2016-01-01

    The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1–5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control. PMID:27763515

  12. Equation of state of dense nuclear matter and neutron star structure from nuclear chiral interactions

    NASA Astrophysics Data System (ADS)

    Bombaci, Ignazio; Logoteta, Domenico

    2018-02-01

    Aims: We report a new microscopic equation of state (EOS) of dense symmetric nuclear matter, pure neutron matter, and asymmetric and β-stable nuclear matter at zero temperature using recent realistic two-body and three-body nuclear interactions derived in the framework of chiral perturbation theory (ChPT) and including the Δ(1232) isobar intermediate state. This EOS is provided in tabular form and in parametrized form ready for use in numerical general relativity simulations of binary neutron star merging. Here we use our new EOS for β-stable nuclear matter to compute various structural properties of non-rotating neutron stars. Methods: The EOS is derived using the Brueckner-Bethe-Goldstone quantum many-body theory in the Brueckner-Hartree-Fock approximation. Neutron star properties are next computed solving numerically the Tolman-Oppenheimer-Volkov structure equations. Results: Our EOS models are able to reproduce the empirical saturation point of symmetric nuclear matter, the symmetry energy Esym, and its slope parameter L at the empirical saturation density n0. In addition, our EOS models are compatible with experimental data from collisions between heavy nuclei at energies ranging from a few tens of MeV up to several hundreds of MeV per nucleon. These experiments provide a selective test for constraining the nuclear EOS up to 4n0. Our EOS models are consistent with present measured neutron star masses and particularly with the mass M = 2.01 ± 0.04 M⊙ of the neutron stars in PSR J0348+0432.

  13. Analysis of the Effects of Thermal Environment on Optical Systems for Navigation Guidance and Control in Supersonic Aircraft Based on Empirical Equations.

    PubMed

    Cheng, Xuemin; Yang, Yikang; Hao, Qun

    2016-10-17

    The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1-5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control.

  14. A Multi-Band Analytical Algorithm for Deriving Absorption and Backscattering Coefficients from Remote-Sensing Reflectance of Optically Deep Waters

    NASA Technical Reports Server (NTRS)

    Lee, Zhong-Ping; Carder, Kendall L.

    2001-01-01

    A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm a (sub 440) is approximately 12% for a range of 0.012 - 2.1 per meter (approximately 6% for a (sub 440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a (sub 440) is very close to that using a full spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.

  15. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, J.; Winkler, J.; Christensen, D.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
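The parameter-extraction step, fitting an analytical step response to a measured absorption curve, can be sketched with `scipy.optimize.curve_fit`. The single-exponential model and its parameters below are hypothetical stand-ins for the EMPD solution, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption(t, m_inf, tau):
    """Hypothetical step response: after a square-wave jump in indoor RH,
    cumulative moisture absorbed by the materials approaches m_inf with
    time constant tau (a single-lump stand-in for the EMPD solution)."""
    return m_inf * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
t = np.linspace(0, 48, 97)                       # hours since the RH step
truth = (3.5, 10.0)                              # kg absorbed, hours (invented)
measured = absorption(t, *truth) + rng.normal(0, 0.05, t.size)

# Least-squares fit of the analytical solution to the "measured" curve.
(m_inf, tau), _ = curve_fit(absorption, t, measured, p0=(1.0, 5.0))
```

The study fits three independent EMPD parameters rather than the two shown here; the point of the sketch is only the least-squares machinery.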

  16. ON THE THREE-DIMENSIONAL STRUCTURE OF THE MASS, METALLICITY, AND STAR FORMATION RATE SPACE FOR STAR-FORMING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lara-Lopez, Maritza A.; Lopez-Sanchez, Angel R.; Hopkins, Andrew M., E-mail: mlopez@aao.gov.au

    2013-02-20

    We demonstrate that the space formed by the star formation rate (SFR), gas-phase metallicity (Z), and stellar mass (M*) can be reduced to a plane, as first proposed by Lara-Lopez et al. We study three different approaches to find the best representation of this 3D space, using a principal component analysis (PCA), a regression fit, and binning of the data. The PCA shows that this 3D space can be adequately represented in only two dimensions, i.e., a plane. We find that the plane that minimizes the χ² for all variables, and hence provides the best representation of the data, corresponds to a regression fit to the stellar mass as a function of SFR and Z, M* = f(Z, SFR). We find that the distribution resulting from the median values in bins for our data gives the highest χ². We also show that the empirical calibrations to the oxygen abundance used to derive the Fundamental Metallicity Relation have important limitations, which contribute to the apparent inconsistencies. The main problem is that these empirical calibrations do not consider the ionization degree of the gas. Furthermore, the use of the N2 index to estimate oxygen abundances cannot be applied for 12 + log(O/H) ≳ 8.8 because of the saturation of the [N II] λ6584 line in the high-metallicity regime. Finally, we provide an update of the Fundamental Plane derived by Lara-Lopez et al.
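The PCA argument, that the M*-Z-SFR distribution is effectively two-dimensional, can be illustrated with a plain-NumPy sketch on mock data; the coefficients below are invented for illustration and are not the published Fundamental Plane fit.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Mock the M*-Z-SFR space: stellar mass as a linear function of
# metallicity and SFR plus small scatter, so points fill a plane in 3D.
z   = rng.normal(8.7, 0.2, n)          # 12 + log(O/H)
sfr = rng.normal(0.0, 0.5, n)          # log SFR
m   = 0.5 * z + 0.3 * sfr + rng.normal(0, 0.05, n)   # invented coefficients

X = np.column_stack([m, z, sfr])
X -= X.mean(axis=0)

# PCA via SVD: if two components capture nearly all the variance,
# the 3D distribution is effectively a plane.
s = np.linalg.svd(X, full_matrices=False)[1]
var_ratio = s**2 / (s**2).sum()
planar_fraction = var_ratio[:2].sum()
```

A `planar_fraction` close to 1 is the quantitative sense in which "this 3D space can be adequately represented in only two dimensions".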

  17. Behavior, Sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation

    PubMed Central

    Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.

    2016-01-01

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606

  18. The Application of the Preschool Child Behavior Checklist and the Caregiver–Teacher Report Form to Mainland Chinese Children: Syndrome Structure, Gender Differences, Country Effects, and Inter-Informant Agreement

    PubMed Central

    Cheng, Halina

    2010-01-01

    Preschool children have long been a neglected population in the study of psychopathology. The Achenbach System of Empirically Based Assessment (ASEBA), which includes the Child Behavior Checklist/1.5-5 (CBCL/1.5-5) and the Caregiver-Teacher Report Form (C-TRF), constitutes the few available measures to assess preschoolers with an empirically derived taxonomy of preschool psychopathology. However, the utility of the measures and their taxonomy of preschool psychopathology to the Chinese is largely unknown and has not been studied. The present study aimed at testing the cross-cultural factorial validity of the CBCL/1.5-5 and C-TRF, as well as the applicability of the taxonomy of preschool psychopathology they embody, to Mainland Chinese preschoolers. Country effects between our Chinese sample and the original U.S. sample, gender differences, and cross-informant agreement between teachers and parents were also to be examined. A Chinese version of the CBCL/1.5-5 and C-TRF was completed by parents and teachers respectively on 876 preschoolers in Mainland China. Confirmatory factor analysis (CFA) confirmed the original, U.S.-derived second order, multi-factor model best fit the Chinese preschool data of the CBCL/1.5-5 and C-TRF. Rates of total behavior problems in Chinese preschoolers were largely similar to those in American preschoolers. Specifically, Chinese preschoolers scored higher on internalizing problems while American preschoolers scored higher on externalizing problems. Chinese preschool boys had significantly higher rates of externalizing problems than Chinese preschool girls. Cross-informant agreement between Chinese teachers and parents was relatively low compared to agreement in the original U.S. sample. Results support the generalizability of the taxonomic structure of preschool psychopathology derived in the U.S. to the Chinese, as well as the applicability of the Chinese version of the CBCL/1.5-5 and C-TRF. PMID:20821258

  19. Using Empirical Mode Decomposition to process Marine Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.

    2014-12-01

    Magnetotelluric (MT) data always exhibit nonstationarities owing to variations in the source mechanisms, which cause MT variations on different temporal and spatial scales. An additional non-stationary component is introduced by noise, which is particularly pronounced in marine MT data because of motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests, the reliability of impedance estimates from the EMD-based method is assessed against the synthetic responses of a 1D layered model. To examine how the estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates from the EMD-based method are generally more stable than those from simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software package (BIRRP), which incorporates additional sophisticated robust estimation to deal with noise. The results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data containing many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates extend to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies.
The new processing methodology represents an important step towards deriving a better-resolved Earth model to greater depth beneath the seafloor.
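At its core, EMD extracts intrinsic mode functions by repeatedly subtracting the mean of cubic-spline envelopes through the local extrema ("sifting"). The sketch below is a bare-bones illustration on a synthetic two-tone signal, without the stopping criteria and careful boundary handling a production implementation would need, and it is not the authors' processing code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting step of Empirical Mode Decomposition: subtract the
    mean of the upper and lower cubic-spline envelopes through the
    local maxima and minima."""
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    # Pin both envelopes at the end points to keep the splines bounded
    # (a crude stand-in for proper boundary handling).
    up = CubicSpline(np.r_[t[0], t[maxima], t[-1]], np.r_[x[0], x[maxima], x[-1]])
    lo = CubicSpline(np.r_[t[0], t[minima], t[-1]], np.r_[x[0], x[minima], x[-1]])
    return x - 0.5 * (up(t) + lo(t))

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow tone

imf = x
for _ in range(8):          # a few sifting iterations, fixed count for brevity
    imf = sift_once(imf, t)
residual = x - imf          # should resemble the slow 3 Hz component
```

The first intrinsic mode function should recover the 30 Hz tone; subtracting it and repeating on the residual yields the slower modes, whose instantaneous spectra the authors then use for impedance estimation.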

  20. Designing deep sequencing experiments: detecting structural variation and estimating transcript abundance.

    PubMed

    Bashir, Ali; Bansal, Vikas; Bafna, Vineet

    2010-06-18

    Massively parallel DNA sequencing technologies have enabled the sequencing of several individual human genomes. These technologies are also being used in novel ways for mRNA expression profiling, genome-wide discovery of transcription-factor binding sites, small RNA discovery, etc. The multitude of sequencing platforms, each with their unique characteristics, pose a number of design challenges, regarding the technology to be used and the depth of sequencing required for a particular sequencing application. Here we describe a number of analytical and empirical results to address design questions for two applications: detection of structural variations from paired-end sequencing and estimating mRNA transcript abundance. For structural variation, our results provide explicit trade-offs between the detection and resolution of rearrangement breakpoints, and the optimal mix of paired-read insert lengths. Specifically, we prove that optimal detection and resolution of breakpoints is achieved using a mix of exactly two insert library lengths. Furthermore, we derive explicit formulae to determine these insert length combinations, enabling a 15% improvement in breakpoint detection at the same experimental cost. On empirical short read data, these predictions show good concordance with Illumina 200 bp and 2 kbp insert length libraries. For transcriptome sequencing, we determine the sequencing depth needed to detect rare transcripts from a small pilot study. With only 1 million reads, we derive corrections that enable almost perfect prediction of the underlying expression probability distribution, and use this to predict the sequencing depth required to detect low expressed genes with greater than 95% probability. Together, our results form a generic framework for many design considerations related to high-throughput sequencing. 
We provide software tools http://bix.ucsd.edu/projects/NGS-DesignTools to derive platform independent guidelines for designing sequencing experiments (amount of sequencing, choice of insert length, mix of libraries) for novel applications of next generation sequencing.
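    The transcript-detection side of this design problem can be illustrated with a first-order binomial sketch. This ignores the paper's empirical corrections and simply assumes each read hits a given transcript independently with a fixed probability:

```python
import math

def reads_needed(p, detect_prob=0.95):
    """Reads required to observe a transcript at least once with
    probability detect_prob, when each read samples it independently
    with probability p (a simplification of the paper's corrected
    model): solve 1 - (1 - p)**N >= detect_prob for N."""
    return math.ceil(math.log(1.0 - detect_prob) / math.log(1.0 - p))

# A transcript contributing ~1 in 100,000 reads needs roughly 300k
# reads for a 95% chance of being seen at least once:
print(reads_needed(1e-5))
```

    Under this naive model the required depth scales as 1/p for rare transcripts, which is why pilot-study corrections of the kind the authors derive matter for low-expressed genes.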

  1. Trophic Scaling and Occupancy Analysis Reveals a Lion Population Limited by Top-Down Anthropogenic Pressure in the Limpopo National Park, Mozambique

    PubMed Central

    Everatt, Kristoffer T.; Andresen, Leah; Somers, Michael J.

    2014-01-01

    The African lion (Panthera leo) has suffered drastic population and range declines over the last few decades and is listed by the IUCN as vulnerable to extinction. Conservation management requires reliable population estimates; however, these data are lacking for many of the continent's remaining populations. It is possible to estimate lion abundance using a trophic scaling approach. However, such inferences assume that a predator population is subject only to bottom-up regulation, and are thus likely to produce biased estimates in systems experiencing top-down anthropogenic pressures. Here we provide baseline data on the status of lions in a developing National Park in Mozambique that is impacted by humans and livestock. We compare a direct density estimate with an estimate derived from trophic scaling. We then use replicated detection/non-detection surveys to estimate the proportion of area occupied by lions, and hierarchical ranking of covariates to provide inferences on the relative contribution of prey resources and anthropogenic factors influencing lion occurrence. The direct density estimate was less than 1/3 of the estimate derived from prey resources (0.99 lions/100 km² vs. 3.05 lions/100 km²). The proportion of area occupied by lions was Ψ = 0.439 (SE = 0.121), or approximately 44% of a 2,400 km² sample of potential habitat. Although lions were strongly predicted by a greater probability of encountering prey resources, the greatest contributing factor to lion occurrence was a strong negative association with settlements. Finally, our empirical abundance estimate is approximately 1/3 of a published abundance estimate derived from opinion surveys. Altogether, our results describe a lion population held below resource-based carrying capacity by anthropogenic factors and highlight the limitations of trophic scaling and opinion surveys for estimating predator populations exposed to anthropogenic pressures.
Our study provides the first empirical quantification of a population that future change can be measured against. PMID:24914934
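    Replicated detection/non-detection surveys of the kind described here are typically analysed with a single-season occupancy model, which separates the probability of occupancy (Ψ) from imperfect detection (p). The sketch below fits such a model by brute-force grid search on hypothetical detection histories; it is not the authors' data or exact estimator:

```python
import math

def log_likelihood(psi, p, histories):
    """Single-season occupancy log-likelihood.
    histories: list of (detections d, surveys k) per site.
    A site with d = 0 may be occupied-but-undetected or unoccupied."""
    ll = 0.0
    for d, k in histories:
        like = psi * p**d * (1 - p)**(k - d)
        if d == 0:
            like += 1 - psi  # unoccupied sites also yield all-zero histories
        ll += math.log(like)
    return ll

def fit_occupancy(histories, grid=100):
    """Crude maximum-likelihood fit over a psi x p grid."""
    best = (-math.inf, None, None)
    for i in range(1, grid):
        for j in range(1, grid):
            psi, p = i / grid, j / grid
            ll = log_likelihood(psi, p, histories)
            if ll > best[0]:
                best = (ll, psi, p)
    return best[1], best[2]

# Hypothetical histories for 10 sites surveyed 4 times each;
# 7 of 10 sites have at least one detection:
histories = [(3, 4), (1, 4), (0, 4), (2, 4), (0, 4),
             (4, 4), (1, 4), (0, 4), (2, 4), (3, 4)]
psi_hat, p_hat = fit_occupancy(histories)
```

    Because detection is imperfect, the fitted Ψ exceeds the naive fraction of sites with detections (0.7 here), illustrating why occupancy models rather than raw detection rates are used for such estimates.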

  2. Trophic scaling and occupancy analysis reveals a lion population limited by top-down anthropogenic pressure in the Limpopo National Park, Mozambique.

    PubMed

    Everatt, Kristoffer T; Andresen, Leah; Somers, Michael J

    2014-01-01

    The African lion (Panthera leo) has suffered drastic population and range declines over the last few decades and is listed by the IUCN as vulnerable to extinction. Conservation management requires reliable population estimates; however, these data are lacking for many of the continent's remaining populations. It is possible to estimate lion abundance using a trophic scaling approach. However, such inferences assume that a predator population is subject only to bottom-up regulation, and are thus likely to produce biased estimates in systems experiencing top-down anthropogenic pressures. Here we provide baseline data on the status of lions in a developing National Park in Mozambique that is impacted by humans and livestock. We compare a direct density estimate with an estimate derived from trophic scaling. We then use replicated detection/non-detection surveys to estimate the proportion of area occupied by lions, and hierarchical ranking of covariates to provide inferences on the relative contribution of prey resources and anthropogenic factors influencing lion occurrence. The direct density estimate was less than 1/3 of the estimate derived from prey resources (0.99 lions/100 km² vs. 3.05 lions/100 km²). The proportion of area occupied by lions was Ψ = 0.439 (SE = 0.121), or approximately 44% of a 2,400 km² sample of potential habitat. Although lions were strongly predicted by a greater probability of encountering prey resources, the greatest contributing factor to lion occurrence was a strong negative association with settlements. Finally, our empirical abundance estimate is approximately 1/3 of a published abundance estimate derived from opinion surveys. Altogether, our results describe a lion population held below resource-based carrying capacity by anthropogenic factors and highlight the limitations of trophic scaling and opinion surveys for estimating predator populations exposed to anthropogenic pressures.
Our study provides the first empirical quantification of a population that future change can be measured against.

  3. A comparison of daily water use estimates derived from constant-heat sap-flow probe values and gravimetric measurements in pot-grown saplings.

    PubMed

    McCulloh, Katherine A; Winter, Klaus; Meinzer, Frederick C; Garcia, Milton; Aranda, Jorge; Lachenbruch, Barbara

    2007-09-01

    Use of Granier-style heat dissipation sensors to measure sap flow is common in plant physiology, ecology and hydrology. There has been concern that any change to the original Granier design invalidates the empirical relationship between sap flux density and the temperature difference between the probes. Here, we compared daily water use estimates from gravimetric measurements with values from variable length heat dissipation sensors, which are a relatively new design. Values recorded during a one-week period were compared for three large pot-grown saplings of each of the tropical trees Pseudobombax septenatum (Jacq.) Dugand and Calophyllum longifolium Willd. For five of the six individuals, P values from paired t-tests comparing the two methods ranged from 0.12 to 0.43 and differences in estimates of total daily water use over the week of the experiment averaged < 3%. In one P. septenatum sapling, the sap flow sensors underestimated water use relative to the gravimetric measurements. This discrepancy could have been associated with naturally occurring gradients in temperature that reduced the difference in temperature between the probes, which would have caused the sensor method to underestimate water use. Our results indicate that substitution of variable length heat dissipation probes for probes of the original Granier design did not invalidate the empirical relationship determined by Granier between sap flux density and the temperature difference between probes.
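    For context, the empirical relationship at issue is Granier's original power-law calibration in the flow index K. A minimal implementation using the commonly cited coefficients (which are assumed here and may differ slightly from the values this study used) looks like:

```python
def sap_flux_density(dT, dT_max):
    """Granier's (1985) empirical calibration:
    u = 118.99e-6 * K**1.231  (m3 of sap per m2 sapwood per s),
    where the flow index K = (dT_max - dT) / dT compares the measured
    temperature difference dT between heated and reference probes to
    its zero-flow value dT_max."""
    K = (dT_max - dT) / dT
    return 118.99e-6 * K**1.231

# A smaller dT relative to dT_max means more heat is carried away by
# sap, i.e. faster flow:
slow = sap_flux_density(dT=9.0, dT_max=10.0)
fast = sap_flux_density(dT=6.0, dT_max=10.0)
```

    The study's point is that this single calibration transfers to the variable-length probe design; naturally occurring thermal gradients that bias dT (as in the one discrepant sapling) bias K, and hence the flux estimate, directly.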

  4. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.
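    The net-based selection principle (choose the correction's timing parameters so that neighbouring velocity traces correlate best after correction) can be sketched with a one-parameter toy model: a constant baseline step at an unknown onset time. The real scheme uses bilinear corrections and full seismograms, so this is only an illustration of the criterion:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def correct_baseline(v, ref):
    """Pick the step-onset sample t that maximizes correlation with a
    neighbouring station's velocity trace after removing a constant
    baseline offset from samples t onward."""
    best = (-2.0, None, None)
    for t in range(1, len(v) - 1):
        offset = sum(v[k] - ref[k] for k in range(t, len(v))) / (len(v) - t)
        corrected = v[:t] + [x - offset for x in v[t:]]
        r = pearson(corrected, ref)
        if r > best[0]:
            best = (r, t, corrected)
    return best[1], best[2]

# Synthetic example: a sine-wave velocity trace with a 0.5 baseline
# step introduced at sample 60; the neighbour trace is uncorrupted.
ref = [math.sin(2 * math.pi * k / 40) for k in range(120)]
v = [x + (0.5 if k >= 60 else 0.0) for k, x in enumerate(ref)]
t_hat, corrected = correct_baseline(v, ref)
```

    The search recovers the true onset because, as the abstract notes, low-frequency waveforms at closely spaced stations are coherent, so any residual baseline error degrades the interstation correlation.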

  5. Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity

    USGS Publications Warehouse

    van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran

    2018-01-01

    Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species’ potential to link patches or populations are of importance.
    Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories’ plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range.
    Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species’ range. We simulated migratory movements between range fragments, and calculated a measure we called route viability. The results are compared to expectations derived from published literature.
    Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, the migratory connectivity was higher within the breeding than in wintering areas, corroborating previous findings for this species.
    Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.

  6. A steady state model of agricultural waste pyrolysis: A mini review.

    PubMed

    Trninić, M; Jovović, A; Stojiljković, D

    2016-09-01

    Agricultural waste is one of the main renewable energy resources available, especially in an agricultural country such as Serbia. Pyrolysis has already been considered as an attractive alternative for disposal of agricultural waste, since the technique can convert this special biomass resource into granular charcoal, non-condensable gases and pyrolysis oils, which could furnish profitable energy and chemical products owing to their high calorific value. In this regard, the development of thermochemical processes requires a good understanding of pyrolysis mechanisms. Experimental and some literature data on the pyrolysis characteristics of corn cob and several other agricultural residues under inert atmosphere were structured and analysed in order to obtain conversion behaviour patterns of agricultural residues during pyrolysis within the temperature range from 300 °C to 1000 °C. Based on experimental and literature data analysis, empirical relationships were derived, including relations between the temperature of the process and yields of charcoal, tar and gas (CO2, CO, H2 and CH4). An analytical semi-empirical model was then used as a tool to analyse the general trends of biomass pyrolysis. Although this semi-empirical model needs further refinement before application to all types of biomass, its prediction capability was in good agreement with results obtained by the literature review. The compact representation could be used in other applications, to conveniently extrapolate and interpolate these results to other temperatures and biomass types.

  7. Leading change: a concept analysis.

    PubMed

    Nelson-Brantley, Heather V; Ford, Debra J

    2017-04-01

    Aim: To report an analysis of the concept of leading change.
    Background: Nurses have been called to lead change to advance the health of individuals, populations, and systems. Conceptual clarity about leading change in the context of nursing and healthcare systems provides an empirical direction for future research and theory development that can advance the science of leadership studies in nursing.
    Design: Concept analysis.
    Data sources: CINAHL, PubMed, PsycINFO, Psychology and Behavioral Sciences Collection, Health Business Elite and Business Source Premier databases were searched using the terms: leading change, transformation, reform, leadership and change. Literature published in English from 2001 - 2015 in the fields of nursing, medicine, organizational studies, business, education, psychology or sociology was included.
    Methods: Walker and Avant's method was used to identify descriptions, antecedents, consequences and empirical referents of the concept. Model, related and contrary cases were developed.
    Results: Five defining attributes of leading change were identified: (a) individual and collective leadership; (b) operational support; (c) fostering relationships; (d) organizational learning; and (e) balance. Antecedents were external or internal driving forces and organizational readiness. The consequences of leading change included improved organizational performance and outcomes and new organizational culture and values.
    Conclusion: A theoretical definition and conceptual model of leading change were developed. Future studies that use and test the model may contribute to the refinement of a middle-range theory to advance nursing leadership research and education. From this, empirically derived interventions that prepare and enable nurses to lead change to advance health may be realized.

  8. Mapping wildfire susceptibility in Southern California using live and dead fractions of vegetation derived from Multiple Endmember Spectral Mixture Analysis of MODIS imagery

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Roberts, D. A.

    2008-12-01

    Wildfire is a significant natural disturbance mechanism in Southern California. Assessing spatial patterns of wildfire susceptibility requires estimates of the live and dead fractions of vegetation. The Fire Potential Index (FPI), which is currently the only operationally computed fire susceptibility index incorporating remote sensing data, estimates such fractions using a relative greenness measure based on time series of vegetation index images. This contribution assesses the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) for deriving such fractions from single MODIS images without the need for a long remote sensing time series, and investigates the applicability of such MESMA-derived fractions for mapping dynamic fire susceptibility in Southern California. Endmembers for MESMA were selected from a library of reference endmembers using Constrained Reference Endmember Selection (CRES), which uses field estimates of fractions to guide the selection process. Fraction images of green vegetation, non-photosynthetic vegetation, soil, and shade were then computed for all available 16-day MODIS composites between 2000 and 2006 using MESMA. Initial results indicate that MESMA of MODIS imagery is capable of providing reliable estimates of live and dead vegetation fraction. Validation against in situ observations in the Santa Ynez Mountains near Santa Barbara, California, shows that the average fraction error for two tested species was around 10%. Further validation of MODIS-derived fractions was performed against fractions from high-resolution hyperspectral data. It was shown that the fractions derived from data of both sensors correlate with R2 values greater than 0.95. MESMA-derived live and dead vegetation fractions were subsequently tested as a substitute to relative greenness in the FPI algorithm. FPI was computed for every day between 2000 and 2006 using the derived fractions. 
Model performance was then tested by extracting FPI values for historical fire events and random no-fire events in Southern California for the same period and developing a logistic regression model. Preliminary results show that an FPI based on MESMA-derived fractions has the potential to deliver performance similar to the traditional FPI while requiring a greatly reduced data volume and using an approach based on physical rather than empirical relationships.
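    The final step, a logistic regression of fire occurrence on the fire-potential index, can be sketched as follows. This is plain gradient descent on hypothetical FPI values, not the study's data or fitting procedure:

```python
import math

def fit_logistic(x, y, lr=0.5, iters=3000):
    """Fit p(fire) = sigmoid(w*x + b) by gradient descent on the
    log-loss: a bare-bones, single-predictor stand-in for a logistic
    regression of fire occurrence on an index such as FPI."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi  # gradient of log-loss w.r.t. w
            gb += (p - yi)       # gradient of log-loss w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical index values: no-fire days cluster low, fire days high.
fpi = [0.10, 0.15, 0.20, 0.25, 0.30, 0.60, 0.70, 0.75, 0.85, 0.90]
fire = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
w, b = fit_logistic(fpi, fire)
```

    A positive fitted slope means higher index values map to higher fire probability, which is the property a susceptibility index must have for this validation design to be informative.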

  9. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background: When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model.
    Methods: An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition.
    Results: Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition.
    Conclusions: The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
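    The binormal relation summarized above can be checked numerically: the predicted c-statistic is the standard normal CDF of the standardized difference in the explanatory variable, and a Monte Carlo estimate should agree. This sketch is not the authors' simulation code; in the equal-variance case the standardized difference equals σβ/√2, the product the abstract describes:

```python
import bisect, math, random

def predicted_c(mu0, mu1, s0, s1):
    """Binormal c-statistic: Phi of the standardized difference in the
    explanatory variable between those with (mu1, s1) and without
    (mu0, s0) the condition."""
    z = (mu1 - mu0) / math.sqrt(s0**2 + s1**2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def empirical_c(mu0, mu1, s0, s1, n=20000, seed=42):
    """Monte Carlo c-statistic: probability that a randomly chosen case
    outranks a randomly chosen control (ties ignored)."""
    rng = random.Random(seed)
    controls = sorted(rng.gauss(mu0, s0) for _ in range(n))
    wins = sum(bisect.bisect_left(controls, rng.gauss(mu1, s1))
               for _ in range(n))
    return wins / (n * n)

# Equal variances: with sigma = 1 and mean difference 1, the log-odds
# ratio beta = (mu1 - mu0) / sigma**2 = 1, so c = Phi(sigma*beta/sqrt(2)).
c_pred = predicted_c(0.0, 1.0, 1.0, 1.0)
c_emp = empirical_c(0.0, 1.0, 1.0, 1.0)
```

    With these parameters both values land near Φ(1/√2) ≈ 0.76, illustrating the paper's point that the same odds ratio implies very different discrimination depending on the variable's spread.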

  10. PREDICTING ESTUARINE SEDIMENT METAL CONCENTRATIONS AND INFERRED ECOLOGICAL CONDITIONS: AN INFORMATION THEORETIC APPROACH

    EPA Science Inventory

    Empirically derived values associating sediment metal concentrations with degraded ecological conditions provide important information to assess estuarine condition. However, resources limit the number, magnitude, and frequency of monitoring programs to gather these data. As su...

  11. MEPDG Traffic Loading Defaults Derived from Traffic Pooled Fund Study

    DOT National Transportation Integrated Search

    2016-04-01

    As part of traffic loading inputs, the Mechanistic-Empirical Pavement Design Guide (MEPDG), Interim Edition: A Manual of Practice requires detailed axle loading information in the form of normalized axle load spectra (NALS), number of axles per truck ...

  12. Empirical Study of Training and Performance in the Marathon

    ERIC Educational Resources Information Center

    Slovic, Paul

    1977-01-01

    Similar systematic relationships exist between personal characteristics, training, and performance on the marathon, regardless of whether they derive from differences among individuals participating in the same run or from differences within the same person in two separate marathons. (Author)

  13. Evaluating the evidence base for relational frame theory: a citation analysis.

    PubMed

    Dymond, Simon; May, Richard J; Munnelly, Anita; Hoon, Alice E

    2010-01-01

    Relational frame theory (RFT) is a contemporary behavior-analytic account of language and cognition. Since it was first outlined in 1985, RFT has generated considerable controversy and debate, and several claims have been made concerning its evidence base. The present study sought to evaluate the evidence base for RFT by undertaking a citation analysis and by categorizing all articles that cited RFT-related search terms. A total of 174 articles were identified between 1991 and 2008, 62 (36%) of which were empirical and 112 (64%) were nonempirical articles. Further analyses revealed that 42 (68%) of the empirical articles were classified as empirical RFT and 20 (32%) as empirical other, whereas 27 (24%) of the nonempirical articles were assigned to the nonempirical reviews category and 85 (76%) to the nonempirical conceptual category. In addition, the present findings show that the majority of empirical research on RFT has been conducted with typically developing adult populations, on the relational frame of sameness, and has tended to be published in either The Psychological Record or the Journal of the Experimental Analysis of Behavior. Overall, RFT has made a substantial contribution to the literature in a relatively short period of time.

  14. Improving Marine Ecosystem Models with Biochemical Tracers

    NASA Astrophysics Data System (ADS)

    Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.

    2018-01-01

    Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.
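    One of the derived indices mentioned, trophic position from nitrogen stable isotopes, has a standard closed form (following the widely used formulation popularized by Post). The ~3.4‰ enrichment per trophic step and the baseline trophic position are conventional assumed values, not quantities from this article:

```python
def trophic_position(d15n_consumer, d15n_base, base_tp=2.0, enrichment=3.4):
    """Trophic position from nitrogen stable isotopes: each trophic
    step enriches delta-15N by ~3.4 permil relative to a baseline
    organism of known trophic position (base_tp = 2 for a primary
    consumer)."""
    return base_tp + (d15n_consumer - d15n_base) / enrichment

# A predator measuring 6.8 permil above a primary-consumer baseline
# sits two trophic levels higher, i.e. at trophic position 4:
tp = trophic_position(12.8, 6.0)
```

    Estimates like this are exactly the kind of tracer-derived parameter (alongside diet composition and niche width) that can be matched to the trophic structure assumed in ecosystem model frameworks.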

  15. Do recognizable lifetime eating disorder phenotypes naturally occur in a culturally asian population? A combined latent profile and taxometric approach.

    PubMed

    Thomas, Jennifer J; Eddy, Kamryn T; Ruscio, John; Ng, King Lam; Casale, Kristen E; Becker, Anne E; Lee, Sing

    2015-05-01

    We examined whether empirically derived eating disorder (ED) categories in Hong Kong Chinese patients (N = 454) would be consistent with recognizable lifetime ED phenotypes derived from latent structure models of European and American samples. We performed latent profile analysis (LPA) using indicator variables from data collected during routine assessment, and then applied taxometric analysis to determine whether latent classes were qualitatively versus quantitatively distinct. Latent profile analysis identified four classes: (i) binge/purge (47%); (ii) non-fat-phobic low-weight (34%); (iii) fat-phobic low-weight (12%); and (iv) overweight disordered eating (6%). Taxometric analysis identified qualitative (categorical) distinctions between the binge/purge and non-fat-phobic low-weight classes, and also between the fat-phobic and non-fat-phobic low-weight classes. Distinctions between the fat-phobic low-weight and binge/purge classes were indeterminate. Empirically derived categories in Hong Kong thus showed clear correspondence with recognizable lifetime ED phenotypes. Although taxometric findings support two distinct classes of low-weight EDs, LPA findings also support heterogeneity among non-fat-phobic individuals.

  16. Essays on pricing electricity and electricity derivatives in deregulated markets

    NASA Astrophysics Data System (ADS)

    Popova, Julia

    2008-10-01

    This dissertation is composed of four essays on the behavior of wholesale electricity prices and their derivatives. The first essay provides an empirical model that takes into account the spatial features of a transmission network on the electricity market. The spatial structure of the transmission grid plays a key role in determining electricity prices, but it has not been incorporated into previous empirical models. The econometric model in this essay incorporates a simple representation of the transmission system into a spatial panel data model of electricity prices, and also accounts for the effect of dynamic transmission system constraints on electricity market integration. Empirical results using PJM data confirm the existence of spatial patterns in electricity prices and show that spatial correlation diminishes as transmission lines become more congested. The second essay develops and empirically tests a model of the influence of natural gas storage inventories on the electricity forward premium. I link a model of the effect of gas storage constraints on the higher moments of the distribution of electricity prices to a model of the effect of those moments on the forward premium. Empirical results using PJM data support the model's predictions that gas storage inventories sharply reduce the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. The third essay examines the efficiency of PJM electricity markets. A market is efficient if prices reflect all relevant information, so that prices follow a random walk. The hypothesis of a random walk is examined using empirical tests, including the Portmanteau, Augmented Dickey-Fuller, KPSS, and multiple variance ratio tests. The results are mixed, though evidence of some level of market efficiency is found.
The last essay investigates the possibility that previous researchers have drawn spurious conclusions based on classical unit root tests incorrectly applied to wholesale electricity prices. It is well known that electricity prices exhibit both cyclicity and high volatility which varies through time. Results indicate that heterogeneity in unconditional variance---which is not detected by classical unit root tests---may contribute to the appearance of non-stationarity.

  17. Sea Ice Freeboard and Thickness from the 2013 IceBridge ATM and DMS Data in Ross Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Xie, H.; Tian, L.; Tang, J.; Ackley, S. F.

    2016-12-01

    On 20, 21, 27, and 28 November 2013, NASA's IceBridge mission flew over the Ross Sea, Antarctica, and collected important sea ice data with the ATM and DMS for the first time. We will present our methods to derive the local sea level and total freeboard for ice thickness retrieval from these two datasets. The methods include (1) lead classification from DMS data using an automated lead detection method, (2) potential leads from reflectance of less than 0.25 in the ATM laser shots of L1B data, (3) local sea level retrieval based on these qualified ATM laser shots (L1B) within the DMS-derived leads (after removal of outliers beyond the mean ± 2 standard deviations of these ATM elevations), (4) establishment of an empirical equation of local sea level as a function of distance from the starting point of each IceBridge flight, (5) total freeboard retrieval from the ATM L2 elevations by subtracting the local sea level derived from the empirical equation, and (6) ice thickness retrieval. The ice thickness derived from this method will be analyzed and compared with ICESat data (2003-2009) and other available data for the same region over a similar time period. Possible changes and their potential causes will be identified and discussed.
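    Steps (3) and (6) of this processing chain can be sketched as follows. The densities used in the hydrostatic conversion are typical assumed values (not necessarily those of this study), and the lead elevations are made-up numbers standing in for ATM shots:

```python
import math

def local_sea_level(lead_elevations):
    """Mean lead-surface elevation after one pass of mean +/- 2 sigma
    outlier removal, mirroring step (3) of the processing chain."""
    n = len(lead_elevations)
    mean = sum(lead_elevations) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in lead_elevations) / n)
    kept = [e for e in lead_elevations if abs(e - mean) <= 2 * sigma]
    return sum(kept) / len(kept)

def ice_thickness(freeboard, snow_depth, rho_w=1024.0, rho_i=915.0,
                  rho_s=320.0):
    """Hydrostatic conversion of total (snow + ice) freeboard to ice
    thickness: rho_w*(hi + hs - hf) = rho_i*hi + rho_s*hs solved for hi.
    Densities in kg/m3 are typical assumed values."""
    return (rho_w * freeboard - (rho_w - rho_s) * snow_depth) / (rho_w - rho_i)

# Hypothetical lead-shot elevations (m); the 2.0 m value is a bad shot
# that the 2-sigma screen should reject:
leads = [0.021, 0.024, 0.027, 0.023, 0.026,
         0.022, 0.028, 0.025, 0.024, 0.030, 2.0]
sl = local_sea_level(leads)
thickness = ice_thickness(freeboard=0.425 - sl, snow_depth=0.10)
```

    The conversion is highly sensitive to the freeboard and snow-depth inputs because the density contrast (rho_w - rho_i) in the denominator is small, which is why careful local sea level retrieval matters.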

  18. The star formation rate cookbook at 1 < z < 3: Extinction-corrected relations for UV and [OII]λ3727 luminosities

    NASA Astrophysics Data System (ADS)

    Talia, M.; Cimatti, A.; Pozzetti, L.; Rodighiero, G.; Gruppioni, C.; Pozzi, F.; Daddi, E.; Maraston, C.; Mignoli, M.; Kurk, J.

    2015-10-01

    Aims: In this paper we use a well-controlled spectroscopic sample of galaxies at 1 < z < 3 ...

  19. The Interpersonal Theory of Suicide

    ERIC Educational Resources Information Center

    Van Orden, Kimberly A.; Witte, Tracy K.; Cukrowicz, Kelly C.; Braithwaite, Scott R.; Selby, Edward A.; Joiner, Thomas E., Jr.

    2010-01-01

    Suicidal behavior is a major problem worldwide and, at the same time, has received relatively little empirical attention. This relative lack of empirical attention may be due in part to a relative absence of theory development regarding suicidal behavior. The current article presents the interpersonal theory of suicidal behavior. We propose that…

  20. The relationship between familial resemblance and sexual attraction: an update on Westermarck, Freud, and the incest taboo.

    PubMed

    Lieberman, Debra; Fessler, Daniel M T; Smith, Adam

    2011-09-01

    Foundational principles of evolutionary theory predict that inbreeding avoidance mechanisms should exist in all species--including humans--in which close genetic relatives interact during periods of sexual maturity. Voluminous empirical evidence, derived from diverse taxa, supports this prediction. Despite such results, Fraley and Marks claim to provide evidence that humans are sexually attracted to close genetic relatives and that such attraction is held in check by cultural taboos. Here, the authors show that Fraley and Marks, in their search for an alternate explanation of inbreeding avoidance, misapply theoretical constructs from evolutionary biology and social psychology, leading to an incorrect interpretation of their results. The authors propose that Fraley and Marks's central findings can be explained in ways consistent with existing evolutionary models of inbreeding avoidance. The authors conclude that appropriate application of relevant theory and stringent experimental design can generate fruitful investigations into sexual attraction, inbreeding avoidance, and incest taboos.

  1. An analytical model of dynamic sliding friction during impact

    NASA Astrophysics Data System (ADS)

    Arakawa, Kazuo

    2017-01-01

    Dynamic sliding friction was studied based on the angular velocity of a golf ball during an oblique impact. This study used the analytical model proposed for the dynamic sliding friction on lubricated and non-lubricated inclines. The contact area A and sliding velocity u of the ball during impact were used to describe the dynamic friction force Fd = λAu, where λ is a parameter related to the wear of the contact area. A comparison with experimental results revealed that the model agreed well with the observed changes in the angular velocity during impact, and that λAu is qualitatively equivalent to the empirical relationship μN + μη′(dA/dt), i.e. the product of the frictional coefficient μ and the contact force N, plus an additional term involving a factor η′ for the surface condition and the time derivative of A.

  2. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  3. Road vehicle emission factors development: A review

    NASA Astrophysics Data System (ADS)

    Franco, Vicente; Kousoulidou, Marina; Muntean, Marilena; Ntziachristos, Leonidas; Hausberger, Stefan; Dilara, Panagiota

    2013-05-01

    Pollutant emissions need to be accurately estimated to ensure that air quality plans are designed and implemented appropriately. Emission factors (EFs) are empirical functional relations between pollutant emissions and the activity that causes them. In this review article, the techniques used to measure road vehicle emissions are examined in relation to the development of EFs found in emission models used to produce emission inventories. The emission measurement techniques covered include those most widely used for road vehicle emissions data collection, namely chassis and engine dynamometer measurements, remote sensing, road tunnel studies and portable emissions measurement systems (PEMS). The main advantages and disadvantages of each method with regard to emissions modelling are presented. A review of the ways in which EFs may be derived from test data is also performed, with a clear distinction between data obtained under controlled conditions (engine and chassis dynamometer measurements using standard driving cycles) and measurements under real-world operation.
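
    As a toy illustration of how an EF can be derived from controlled test data, the sketch below fits a smooth speed-dependent emission factor to hypothetical dynamometer measurements; the data values, pollutant, and quadratic-in-log functional form are assumptions for illustration, not taken from any actual emission model.

```python
import numpy as np

# Hypothetical chassis-dynamometer data: average cycle speed (km/h)
# vs. measured NOx emission factor (g/km). Values are illustrative only.
speed = np.array([10, 20, 30, 50, 70, 90, 110, 130], dtype=float)
ef    = np.array([1.8, 1.2, 0.9, 0.6, 0.5, 0.55, 0.7, 0.9])

# One common functional form: fit log(EF) as a quadratic in speed,
# giving a smooth U-shaped speed dependence (high EF in congestion
# and at motorway speeds, minimum at intermediate speeds).
coeffs = np.polyfit(speed, np.log(ef), deg=2)

def emission_factor(v):
    """Empirical EF(v) in g/km from the fitted curve (illustrative)."""
    return np.exp(np.polyval(coeffs, v))

print(f"EF at 60 km/h: {emission_factor(60.0):.2f} g/km")
```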

  4. A phenomenological model for simulating the chemo-responsive shape memory effect in polymers undergoing a permeation transition

    NASA Astrophysics Data System (ADS)

    Lu, Haibao; Huang, Wei Min; Leng, Jinsong

    2014-04-01

    We present a phenomenological model for studying the constitutive relations and working mechanism of the chemo-responsive shape memory effect (SME) in shape memory polymers (SMPs). On the basis of the solubility parameter equation, diffusion model and permeation transition model, a phenomenological model is derived for quantitatively identifying the influential factors in the chemically induced SME in SMPs. After this, a permeability parallel model and series model are implemented in order to couple the constitutive relations of the permeability coefficient, stress and relaxation time as a function of stretch, separately. The inductive effect of the permeability transition on the transition temperature is confirmed as the driving force for the chemo-responsive SME. Furthermore, the analytical result from the phenomenological model is compared with the available experimental results and the simulation of a semi-empirical model reported in the literature for verification.

  5. Studying Lyman-alpha escape and reionization in Green Pea galaxies

    NASA Astrophysics Data System (ADS)

    Yang, Huan; Malhotra, Sangeeta; Rhoads, James E.; Gronke, Max; Leitherer, Claus; Wofford, Aida; Dijkstra, Mark

    2017-01-01

    Green Pea galaxies are low-redshift galaxies with extreme [OIII]5007 emission lines. We built the first statistical sample of Green Peas observed by HST/COS and used them as analogs of high-z Lyman-alpha emitters to study Ly-alpha escape and Ly-alpha sizes. Using the HST/COS 2D spectra, we found that the Ly-alpha sizes of Green Peas are larger than their UV continuum sizes. We found many correlations between Ly-alpha escape fraction and galactic properties -- dust extinction, Ly-alpha kinematic features, [OIII]/[OII] ratio, and gas outflow velocities. We fit an empirical relation to predict Ly-alpha escape fraction from dust extinction and Ly-alpha red-peak velocity. In the JWST era, we can use this relation to derive the IGM HI column density along the line of sight of each high-z Ly-alpha emitter and probe the reionization process.

  6. What Are the Differences between Happiness and Self-Esteem

    ERIC Educational Resources Information Center

    Lyubomirsky, Sonja; Tkach, Chris; DiMatteo, Robin M.

    2006-01-01

    The present study investigated theoretically and empirically derived similarities and differences between the constructs of enduring happiness and self-esteem. Participants (N = 621), retired employees ages 51-95, completed standardized measures of affect, personality, psychosocial characteristics, physical health, and demographics. The relations…

  7. Deriving simple empirical relationships between aerodynamic and optical aerosol measurements and their application

    USDA-ARS?s Scientific Manuscript database

    Different measurement techniques for aerosol characterization and quantification either directly or indirectly measure different aerosol properties (i.e. count, mass, speciation, etc.). Comparisons and combinations of multiple measurement techniques sampling the same aerosol can provide insight into...

  8. STUDY TO TEST THE FEASIBILITY OF USING THE MACROACTIVITY APPROACH TO ASSESS DERMAL EXPOSURE

    EPA Science Inventory

    In the macroactivity approach, dermal exposure is estimated using empirically-derived transfer coefficients to aggregate the mass transfer associated with a series of contacts with a contaminated medium. The macroactivity approach affords the possibility of developing screenin...

  9. Deriving empirical benchmarks from existing monitoring datasets for rangeland adaptive management

    USDA-ARS?s Scientific Manuscript database

    Under adaptive management, goals and decisions for managing rangeland resources are shaped by requirements like the Bureau of Land Management’s (BLM’s) Land Health Standards, which specify desired conditions. Without formalized, quantitative benchmarks for triggering management actions, adaptive man...

  10. ESTIMATION OF CHEMICAL TOXICITY TO WILDLIFE SPECIES USING INTERSPECIES CORRELATION MODELS

    EPA Science Inventory

    Ecological risks to wildlife are typically assessed using toxicity data for relatively few species and with limited understanding of differences in species sensitivity to contaminants. Empirical interspecies correlation models were derived from LD50 values for 49 wildlife speci...

  11. FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing

    2010-01-01

    Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
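
    The empirical copula described above can be estimated from samples by rank-transforming each marginal to [0, 1]; the rank transform strips away the marginal distributions and leaves only the dependence structure. The bivariate field below is a synthetic lognormal stand-in for density data, not the authors' simulation outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated "density" values (an illustrative stand-in for the
# smoothed density field at two points with a fixed separation).
n = 5000
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])   # lognormal marginals

# Empirical copula: map each sample to its normalized rank in [0, 1].
# The copula is invariant under the monotone marginal transform exp().
u = (np.argsort(np.argsort(x)) + 0.5) / n
v = (np.argsort(np.argsort(y)) + 0.5) / n

def copula_cdf(p, q):
    """Empirical copula CDF: fraction of samples with u <= p and v <= q."""
    return np.mean((u <= p) & (v <= q))

# Independence would give C(0.5, 0.5) = 0.25; positive correlation
# pushes it higher (a Gaussian copula with rho = 0.6 gives ~0.35).
print(f"C(0.5, 0.5) = {copula_cdf(0.5, 0.5):.3f}")
```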

  12. Synopsis of discussion session on physicochemical factors affecting toxicity

    USGS Publications Warehouse

    Erickson, R.J.; Bills, T.D.; Clark, J.R.; Hansen, D.J.; Knezovich, J.; Hamelink, J.L.; Landrum, P.F.; Bergman, H.L.; Benson, W.H.

    1994-01-01

    The paper documents the workshop discussion regarding the role of these factors in altering toxicity. For each factor, the nature, magnitude, and uncertainty of its empirical relation to the toxicity of various chemicals or chemical classes is discussed. Limitations in the empirical database regarding the variety of species and endpoints tested were addressed. Possible mechanisms underlying the empirical relations are identified. Finally, research needed to better understand these effects is identified.

  13. The Chern-Simons Current in Systems of DNA-RNA Transcriptions

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Pincak, Richard; Kanjamapornkul, Kabin; Saridakis, Emmanuel N.

    2018-04-01

    A Chern-Simons current, coming from ghost and anti-ghost fields of supersymmetry theory, can be used to define a spectrum of gene expression in new time series data where a spinor field, as an alternative representation of a gene, is adopted instead of the standard alphabet sequence of bases $A, T, C, G, U$. After a general discussion on the use of supersymmetry in biological systems, we give examples of the use of supersymmetry for living organisms, discuss the codon and anti-codon ghost fields and develop an algebraic construction for trash DNA, the DNA regions which do not seem active in biological systems. As a general result, all hidden states of codons can be computed by Chern-Simons 3-forms. Finally, we plot a time series of genetic variations of a viral glycoprotein gene and a host T-cell receptor gene by using a gene tensor correlation network related to the Chern-Simons current. An empirical analysis of genetic shift in host cell receptor genes, with separated gene clusters, and of genetic drift in the viral gene is obtained by using a tensor correlation plot over time series data derived as the empirical mode decomposition of the Chern-Simons current.

  14. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this would be computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare the results with experimental values and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to the values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Does perceived risk influence the effects of message framing? A new investigation of a widely held notion

    PubMed Central

    Van ’t Riet, Jonathan; Cox, Anthony D.; Cox, Dena; Zimet, Gregory D.; De Bruijn, Gert-Jan; Van den Putte, Bas; De Vries, Hein; Werrij, Marieke Q.; Ruiter, Robert A.C.

    2015-01-01

    Health-promoting messages can be framed in terms of the beneficial consequences of healthy behaviour (gain-framed messages) or the detrimental consequences of unhealthy behaviour (loss-framed messages). An influential notion holds that the perceived risk associated with the recommended behaviour determines the relative persuasiveness of gain- and loss-framed messages. This ‘risk-framing hypothesis’, as we call it, was derived from prospect theory, has been central to health message framing research for the last two decades, and does not cease to appeal to researchers. The present paper examines the validity of the risk-framing hypothesis. We performed six empirical studies on the interaction between perceived risk and message framing. These studies were conducted in two different countries and employed framed messages targeting skin cancer prevention and detection, physical activity, breast self-examination and vaccination behaviour. Behavioural intention served as the outcome measure. None of these studies found evidence in support of the risk-framing hypothesis. We conclude that the empirical evidence in favour of the hypothesis is weak and discuss the ramifications of this for future message framing research. PMID:24579986

  16. Determination of rotor harmonic blade loads from acoustic measurements

    NASA Technical Reports Server (NTRS)

    Kasper, P. K.

    1975-01-01

    The magnitude of discrete frequency sound radiated by a rotating blade is strongly influenced by the presence of a nonuniform distribution of aerodynamic forces over the rotor disk. An analytical development and experimental results are provided for a technique by which harmonic blade loads are derived from acoustic measurements. The technique relates, on a one-to-one basis, the discrete frequency sound harmonic amplitudes measured at a point on the axis of rotation to the blade-load harmonic amplitudes. This technique was applied to acoustic data from two helicopter types and from a series of test results using the NASA-Langley Research Center rotor test facility. The inferred blade-load harmonics for the cases considered tended to follow an inverse power law relationship with harmonic blade-load number. Empirical curve fits to the data showed the harmonic fall-off rate to be in the range of 6 to 9 dB per octave of harmonic order. These empirical relationships were subsequently used as input data in a compatible far field rotational noise prediction model. A comparison between predicted and measured off-axis sound harmonic levels is provided for the experimental cases considered.
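
    A fall-off rate quoted in dB per octave is simply the slope of the harmonic amplitudes in log2-vs-dB coordinates. A minimal sketch, using synthetic amplitudes with an assumed power-law exponent rather than the measured blade-load data:

```python
import numpy as np

# Hypothetical blade-load harmonic amplitudes following an inverse
# power law  a_m = a1 * m**(-p)  (illustrative values, not NASA data).
m = np.arange(1, 17)             # harmonic order
p_true = 1.2
a = 100.0 * m ** (-p_true)

# Fit a straight line to amplitude-in-dB versus log2(harmonic order):
# the slope is then directly the fall-off rate in dB per octave.
slope, intercept = np.polyfit(np.log2(m), 20 * np.log10(a), deg=1)

# An exponent p maps to 20 * p * log10(2) ~ 6.02 * p dB per octave,
# so p between 1 and 1.5 spans the 6-9 dB range quoted in the abstract.
print(f"fall-off: {-slope:.1f} dB per octave")
```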

  17. Burst and inter-burst duration statistics as empirical test of long-range memory in the financial markets

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kononovicius, A.

    2017-10-01

    We address the problem of long-range memory in the financial markets. There are two conceptually different ways to reproduce the power-law decay of the auto-correlation function: using fractional Brownian motion, or using non-linear stochastic differential equations. In this contribution we address this problem by analyzing empirical return and trading activity time series from the Forex. From the empirical time series we obtain probability density functions of burst and inter-burst duration. Our analysis reveals that the power-law exponents of the obtained probability density functions are close to 3/2, which is a characteristic feature of one-dimensional stochastic processes. This is in good agreement with the earlier proposed model of absolute return based on non-linear stochastic differential equations derived from the agent-based herding model.
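
    Burst and inter-burst durations of the kind analyzed here can be extracted as run lengths of threshold crossings. The sketch below does this on synthetic absolute returns; note that iid noise gives geometric run lengths, and it is real, long-memory Forex data that exhibits the 3/2 power-law tails discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative series of absolute returns (not actual Forex data).
x = np.abs(rng.standard_normal(100_000))
threshold = 1.0

# A "burst" is a maximal run of consecutive points above the threshold;
# an inter-burst period is a run below it. Durations are run lengths.
above = x > threshold
edges = np.flatnonzero(np.diff(above.astype(int)))   # indices of state changes
runs = np.diff(np.r_[0, edges + 1, above.size])      # lengths of all runs
burst_dur = runs[::2] if above[0] else runs[1::2]    # keep only above-runs

print(f"{burst_dur.size} bursts, mean duration {burst_dur.mean():.2f} samples")
```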

  18. An empirical identification and categorisation of training best practices for ERP implementation projects

    NASA Astrophysics Data System (ADS)

    Esteves, Jose Manuel

    2014-11-01

    Although training is one of the most cited critical success factors in Enterprise Resource Planning (ERP) systems implementations, few empirical studies have attempted to examine the characteristics of management of the training process within ERP implementation projects. Based on the data gathered from a sample of 158 respondents across four stakeholder groups involved in ERP implementation projects, and using a mixed method design, we have assembled a derived set of training best practices. Results suggest that the categorised list of ERP training best practices can be used to better understand training activities in ERP implementation projects. Furthermore, the results reveal that the company size and location have an impact on the relevance of training best practices. This empirical study also highlights the need to investigate the role of informal workplace trainers in ERP training activities.

  19. STANDARD STARS AND EMPIRICAL CALIBRATIONS FOR Hα AND Hβ PHOTOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joner, Michael D.; Hintz, Eric G., E-mail: joner@byu.edu, E-mail: hintz@byu.edu

    2015-12-15

    We define an Hα photometric system that is designed as a companion to the well established Hβ index. The new system is built on spectrophotometric observations of field stars as well as stars in benchmark open clusters. We present data for 75 field stars, 12 stars from the Coma star cluster, 24 stars from the Hyades, 17 stars from the Pleiades, and 8 stars from NGC 752 to be used as primary standard stars in the new systems. We show that the system transformations are relatively insensitive to the shape of the filter functions. We make comparisons of the Hα index to the Hβ index and illustrate the relationship between the two systems. In addition, we present relations that relate both hydrogen indices to equivalent width and effective temperature. We derive equations to calibrate both systems for Main Sequence stars with spectral types in the range O9 to K2 for equivalent width and A2 to K2 for effective temperature.

  20. Implications of Childhood Experiences for the Health and Adaptation of Lesbian, Gay, and Bisexual Individuals: Sensitivity to Developmental Process in Future Research

    PubMed Central

    Rosario, Margaret

    2015-01-01

    The empirical literature on lesbian, gay, and bisexual (LGB) individuals has predominantly focused on sexual-orientation disparities between LGB and heterosexual individuals on health and adaptation, as well as on the role of gay-related or minority stress in the health and adaptation of LGB individuals. Aside from demographic control variables, the initial predictor is a marker of sexual orientation or LGB-related experience (e.g., minority stress). Missing are potential strengths and vulnerabilities that LGB individuals develop over time and bring to bear on their sexual identity development and other LGB-related experiences. Those strengths and vulnerabilities may have profound consequences for the sexual identity development, health, and adaptation of LGB individuals. Here, I focus on one such set of strengths and vulnerabilities derived from attachment. I conclude by emphasizing the importance of attachment in the lives of LGB individuals and the need to identify other developmental processes that may be equally consequential. PMID:26900586

  1. Understanding competition between healthcare providers: Introducing an intermediary inter-organizational perspective.

    PubMed

    Westra, Daan; Angeli, Federica; Carree, Martin; Ruwaard, Dirk

    2017-02-01

    Pro-competitive policy reforms have been introduced in several countries, attempting to contain increasing healthcare costs. Yet, research proves ambiguous when it comes to the effect of competition in healthcare, with a number of studies highlighting unintended and unwanted effects. We argue that current empirical work overlooks the role of inter-organizational relations as well as the interplay between policy at macro level, inter-organizational networks at meso level, and outcomes at micro level. To bridge this gap and stimulate a more detailed understanding of the effect of competition in health care, this article introduces a cross-level conceptual framework which emphasizes the intermediary role of cooperative inter-organizational relations at meso level. We discuss how patient transfers, specialist affiliations, and interlocking directorates constitute three forms of inter-organizational relations in health care which can be used within this framework. The paper concludes by deriving several propositions from the framework which can guide future research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Petrophysics of low-permeability medina sandstone, northwestern Pennsylvania, Appalachian Basin

    USGS Publications Warehouse

    Castle, J.W.; Byrnes, A.P.

    1998-01-01

    Petrophysical core testing combined with geophysical log analysis of low-permeability, Lower Silurian sandstones of the Appalachian basin provides guidelines and equations for predicting gas producibility. Permeability values are predictable from the borehole logs by applying empirically derived equations based on the correlation between in-situ porosity and in-situ effective gas permeability. An Archie-form equation provides reasonable accuracy for log-derived water saturations because of saturated brine salinities and low clay content in the sands. Although measured porosity and permeability average less than 6% and 0.1 mD, infrequent values as high as 18% and 1,048 mD occur. Values of effective gas permeability at irreducible water saturation (Swi) range from 60% to 99% of routine values for the highest permeability rocks to several orders of magnitude less for the lowest permeability rocks. Sandstones having porosity greater than 6% and effective gas permeability greater than 0.01 mD exhibit Swi less than 20%. With decreasing porosity, Swi sharply increases to values near 40% at 3% porosity. Analysis of cumulative storage and flow capacity indicates zones with porosity greater than 6% generally contain over 90% of flow capacity and hold a major portion of storage capacity. For rocks with Swi < 20%, gas relative permeabilities exceed 45%. Gas relative permeability and hydrocarbon volume decrease rapidly with increasing Swi as porosity drops below 6%. At Swi above 40%, gas relative permeabilities are less than approximately 10%.
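
    An Archie-form water-saturation calculation of the kind used in such log analyses can be sketched as follows; the exponents a, m, n and the log readings below are common textbook placeholder values, not the Medina-sandstone calibration.

```python
import numpy as np

def archie_sw(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie-form water saturation from resistivity logs:

        Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)

    rt  : true formation resistivity (ohm-m)
    rw  : brine resistivity (ohm-m)
    phi : porosity (fraction)
    a, m, n : empirical constants (defaults are common textbook
              values, not a calibration for any specific formation)
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Illustrative log readings for a low-porosity gas sand.
sw = archie_sw(rt=200.0, rw=0.05, phi=0.06)
print(f"water saturation: {sw:.2f}")   # low Sw is consistent with gas
```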

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, N. B.; Qu, Z. N., E-mail: znqu@ynao.ac.cn

    The ensemble empirical mode decomposition (EEMD) analysis is utilized to extract the intrinsic mode functions (IMFs) of the solar mean magnetic field (SMMF) observed at the Wilcox Solar Observatory of Stanford University from 1975 to 2014, and then we analyze the periods of these IMFs as well as the relation of IMFs (SMMF) with some solar activity indices. The two special rotation cycles of 26.6 and 28.5 days should be derived from different magnetic flux elements in the SMMF. The rotation cycle of the weak magnetic flux element in the SMMF is 26.6 days, while the rotation cycle of the strong magnetic flux element in the SMMF is 28.5 days. The two rotation periods of the structure of the interplanetary magnetic field near the ecliptic plane are essentially related to weak and strong magnetic flux elements in the SMMF, respectively. The rotation cycle of weak magnetic flux in the SMMF did not vary over the last 40 years because the weak magnetic flux element derived from the weak magnetic activity on the full disk is not influenced by latitudinal migration. Neither the internal rotation of the Sun nor the solar magnetic activity on the disk (including the solar polar fields) causes the annual variation of SMMF. The variation of SMMF at timescales of a solar cycle is more related to weak magnetic activity on the full solar disk.

  4. Development and evaluation of consensus-based sediment effect concentrations for polychlorinated biphenyls

    USGS Publications Warehouse

    MacDonald, Donald D.; Dipinto, Lisa M.; Field, Jay; Ingersoll, Christopher G.; Long, Edward R.; Swartz, Richard C.

    2000-01-01

    Sediment-quality guidelines (SQGs) have been published for polychlorinated biphenyls (PCBs) using both empirical and theoretical approaches. Empirically based guidelines have been developed using the screening-level concentration, effects range, effects level, and apparent effects threshold approaches. Theoretically based guidelines have been developed using the equilibrium-partitioning approach. Empirically based guidelines were classified into three general categories, in accordance with their original narrative intents, and used to develop three consensus-based sediment effect concentrations (SECs) for total PCBs (tPCBs), including a threshold effect concentration, a midrange effect concentration, and an extreme effect concentration. Consensus-based SECs were derived because they estimate the central tendency of the published SQGs and, thus, reconcile the guidance values that have been derived using various approaches. Initially, consensus-based SECs for tPCBs were developed separately for freshwater sediments and for marine and estuarine sediments. Because the respective SECs were statistically similar, the underlying SQGs were subsequently merged and used to formulate more generally applicable SECs. The three consensus-based SECs were then evaluated for reliability using matching sediment chemistry and toxicity data from field studies, dose-response data from spiked-sediment toxicity tests, and SQGs derived from the equilibrium-partitioning approach. The results of this evaluation demonstrated that the consensus-based SECs can accurately predict both the presence and absence of toxicity in field-collected sediments. Importantly, the incidence of toxicity increases incrementally with increasing concentrations of tPCBs. Moreover, the consensus-based SECs are comparable to the chronic toxicity thresholds that have been estimated from dose-response data and equilibrium-partitioning models. Therefore, consensus-based SECs provide a unifying synthesis of existing SQGs, reflect causal rather than correlative effects, and accurately predict sediment toxicity in PCB-contaminated sediments.

  5. Testing a new Free Core Nutation empirical model

    NASA Astrophysics Data System (ADS)

    Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald

    2016-03-01

    The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
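
    The multiple sliding-window estimation described above can be sketched as a windowed least-squares fit of a sinusoid with a fixed period. The 430-day period, window settings, and synthetic input below are assumptions for illustration, not the authors' actual processing chain.

```python
import numpy as np

P = 430.0  # assumed fixed FCN oscillation period, in days

def sliding_fcn_fit(t, obs, width=400.0, step=1.0):
    """Estimate time-variable amplitude by least-squares fitting
    A*cos(2*pi*t/P) + B*sin(2*pi*t/P) inside sliding windows
    (a sketch of the sliding-window idea, not the authors' code)."""
    centers, amps = [], []
    c, stop = t.min(), t.max() - width
    while c <= stop:
        sel = (t >= c) & (t < c + width)
        if sel.sum() >= 10:
            w = 2 * np.pi * t[sel] / P
            M = np.column_stack([np.cos(w), np.sin(w)])
            (A, B), *_ = np.linalg.lstsq(M, obs[sel], rcond=None)
            centers.append(c + width / 2)
            amps.append(np.hypot(A, B))   # amplitude of the oscillation
        c += step
    return np.array(centers), np.array(amps)

# Synthetic check: a pure FCN-like oscillation with 0.2 mas amplitude.
t = np.arange(0.0, 3000.0)
obs = 0.2 * np.cos(2 * np.pi * t / P + 0.7)
centers, amps = sliding_fcn_fit(t, obs, step=50.0)
print(f"recovered amplitude ~ {amps.mean():.3f} mas")
```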

  6. Why your housecat's trite little bite could cause you quite a fright: a study of domestic felines on the occurrence and antibiotic susceptibility of Pasteurella multocida.

    PubMed

    Freshwater, A

    2008-10-01

    Approximately four to five million animal bite wounds are reported in the USA each year. Domestic companion animals inflict the majority of these wounds. Although canine bites far outnumber feline bites, unlike the dog, the cat's bite is worse than its bark; 20-80% of all cat bites will become infected, compared with only 3-18% of dog bite wounds. Pasteurella multocida is the most commonly cultured bacterium from infected cat bite wounds. Anyone seeking medical attention for a cat-inflicted bite wound is given prophylactic/empiric penicillin or a derivative to prevent Pasteurella infection (provided they are not allergic to penicillins). In an effort to establish a carriage rate of P. multocida in the domestic feline, bacterial samples from the gingival margins of domestic northern Ohio cats (n=409) were cultured. Isolates were tested for antibiotic sensitivity as prophylactic/empiric use of penicillin and its derivatives could potentially give rise to antibiotic resistance in P. multocida. The high carriage rate (approximately 90%) of P. multocida observed was found to be independent of physiological and behavioural variables including age, breed, food type, gingival scale, lifestyle and sex. High antibiotic susceptibility percentages were observed for benzylpenicillin, amoxicillin-clavulanate, cefazolin, and azithromycin (100%, 100%, 98.37% and 94.02%, respectively) in P. multocida isolates. The high prevalence of P. multocida in the feline oral cavity indicates that prophylactic/empiric antibiotic therapy is still an appropriate response to cat bite wounds. Additionally, the susceptibility of P. multocida to penicillin and its derivatives indicates that they remain reliable choices for preventing and treating P. multocida infections.

  7. A semi-empirical model for estimating surface solar radiation from satellite data

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Pattarapanitchai, Somjet; Wattan, Rungrat; Masiri, Itsara; Buntoung, Sumaman; Promsen, Worrapass; Tohsing, Korntip

    2013-05-01

    This paper presents a semi-empirical model for estimating surface solar radiation from satellite data for a tropical environment. The model expresses solar irradiance as a semi-empirical function of cloud index, aerosol optical depth, precipitable water, total column ozone and air mass. The cloud index data were derived from the MTSAT-1R satellite, whereas the aerosol optical depth data were obtained from the MODIS/Terra satellite. The total column ozone data were derived from OMI/AURA and the precipitable water data were obtained from NCEP/NCAR. A five-year period (2006-2010) of these data, together with global solar irradiance measured at four sites in Thailand, namely Chiang Mai (18.78 °N, 98.98 °E), Nakhon Pathom (13.82 °N, 100.04 °E), Ubon Ratchathani (15.25 °N, 104.87 °E) and Songkhla (7.20 °N, 100.60 °E), was used to derive the coefficients of the model. To evaluate its performance, the model was used to calculate solar radiation at four further sites in Thailand, namely Phisanulok (16.93 °N, 100.24 °E), Kanchanaburi (14.02 °N, 99.54 °E), Nongkhai (17.87 °N, 102.72 °E) and Surat Thani (9.13 °N, 99.15 °E), and the results were compared with solar radiation measured at these sites. It was found that the root mean square difference (RMSD) between measured and calculated values of hourly solar radiation was in the range of 25.5-29.4%. The RMSD is reduced to 10.9-17.0% for the case of monthly average hourly radiation. The proposed model has the advantages of simplicity of application and reasonable accuracy.
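
    The RMSD statistic used in such validations is straightforward to reproduce. A minimal sketch with made-up irradiance values (not the paper's station data), expressing the RMSD as a percentage of the mean measured value:

```python
import numpy as np

def rmsd_percent(measured, modelled):
    """Root mean square difference between modelled and measured values,
    expressed as a percentage of the mean measured value."""
    measured = np.asarray(measured, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    rmsd = np.sqrt(np.mean((modelled - measured) ** 2))
    return 100.0 * rmsd / measured.mean()

# Illustrative hourly irradiance values (W/m^2), not the paper's data.
meas = np.array([420.0, 515.0, 610.0, 480.0, 300.0])
calc = np.array([400.0, 540.0, 580.0, 505.0, 330.0])
print(f"RMSD = {rmsd_percent(meas, calc):.1f}%")
```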

  8. Fractal Analysis of Permeability of Unsaturated Fractured Rocks

    PubMed Central

    Jiang, Guoping; Shi, Wei; Huang, Lili

    2013-01-01

    A physical conceptual model for water retention in fractured rocks is derived while taking into account the effect of pore size distribution and tortuosity of capillaries. The formula of calculating relative hydraulic conductivity of fractured rock is given based on fractal theory. It is an issue to choose an appropriate capillary pressure-saturation curve in the research of unsaturated fractured mass. The geometric pattern of the fracture bulk is described based on the fractal distribution of tortuosity. The resulting water content expression is then used to estimate the unsaturated hydraulic conductivity of the fractured medium based on the well-known model of Burdine. It is found that for large enough ranges of fracture apertures the new constitutive model converges to the empirical Brooks-Corey model. PMID:23690746
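For reference, the empirical Brooks-Corey model that the fractal constitutive model is said to converge to, combined with the Burdine relative-permeability integral, gives the classical closed form below. This sketch shows the textbook Brooks-Corey/Burdine result, not the paper's fractal derivation.

```python
def effective_saturation(pc, pb, lam):
    """Brooks-Corey retention curve: Se = (pb/pc)**lam for pc >= pb,
    else fully saturated (Se = 1). pb is the air-entry (bubbling)
    pressure, lam the pore-size distribution index."""
    return 1.0 if pc <= pb else (pb / pc) ** lam

def burdine_kr(se, lam):
    """Relative hydraulic conductivity from the Burdine model applied
    to the Brooks-Corey curve: kr = Se**((2 + 3*lam)/lam)."""
    return se ** ((2.0 + 3.0 * lam) / lam)
```

The exponent (2 + 3λ)/λ is what a fractal pore/fracture-aperture distribution must reproduce in the limit of a wide aperture range for the two models to agree.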

  9. Fractal analysis of permeability of unsaturated fractured rocks.

    PubMed

    Jiang, Guoping; Shi, Wei; Huang, Lili

    2013-01-01

    A physical conceptual model for water retention in fractured rocks is derived while taking into account the effect of pore size distribution and tortuosity of capillaries. The formula of calculating relative hydraulic conductivity of fractured rock is given based on fractal theory. It is an issue to choose an appropriate capillary pressure-saturation curve in the research of unsaturated fractured mass. The geometric pattern of the fracture bulk is described based on the fractal distribution of tortuosity. The resulting water content expression is then used to estimate the unsaturated hydraulic conductivity of the fractured medium based on the well-known model of Burdine. It is found that for large enough ranges of fracture apertures the new constitutive model converges to the empirical Brooks-Corey model.

  10. Method and Apparatus for the Portable Identification Of Material Thickness And Defects Along Uneven Surfaces Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Reilly, Thomas L. (Inventor); Jacobstein, A. Ronald (Inventor); Cramer, K. Elliott (Inventor)

    2006-01-01

    A method and apparatus for testing a material such as the water-wall tubes in boilers includes the use of a portable thermal line heater having radiation shields to control the amount of thermal radiation that reaches a thermal imager. A procedure corrects for variations in the initial temperature of the material being inspected. A method of calibrating the testing device to determine an equation relating thickness of the material to temperatures created by the thermal line heater uses empirical data derived from tests performed on test specimens for each material type, geometry, density, specific heat, speed at which the line heater is moved across the material and heat intensity.
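The calibration step described above, fitting an equation that maps observed surface temperature to material thickness using test-specimen data, can be sketched as below. The inverse power-law form delta_T = a * thickness**(-b) is a hypothetical choice for illustration; the patent's actual calibration equation is not given in the abstract.

```python
import numpy as np

def calibrate(thickness, delta_t):
    """Fit a hypothetical calibration delta_t = a * thickness**(-b)
    in log-log space from test-specimen measurements.
    Returns (a, b)."""
    slope, log_a = np.polyfit(np.log(thickness), np.log(delta_t), 1)
    return np.exp(log_a), -slope  # slope = -b

def thickness_from_temperature(a, b, delta_t):
    """Invert the calibration: thickness = (a / delta_t)**(1/b)."""
    return (a / np.asarray(delta_t)) ** (1.0 / b)
```

In practice a separate (a, b) pair would be fitted per material type, geometry, heater speed and heat intensity, as the abstract indicates.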

  11. LX Leo: A High Mass-Ratio Totally Eclipsing W-type W UMa System

    NASA Astrophysics Data System (ADS)

    Gürol, B.; Michel, R.; Gonzalez, C.

    2017-10-01

    We present the results of our investigation of the geometrical and physical parameters of the binary system LX Leo. Based on CCD BVRc light curves, and their analyses with the Wilson-Devinney code, new times of minima and light elements have been determined. According to our solution, the system is a high mass-ratio, totally eclipsing, W-type W UMa system. Combining our photometric solution with the empirical relation for W UMa type systems by Dimitrow & Kjurkchieva (2015), we derived the masses and radii of the components to be M1=0.43 M⊙, M2=0.81 M⊙, R1=0.58 R⊙ and R2=0.77 R⊙. In addition, the evolutionary condition of the system is discussed.

  12. Anomalous decay f1(1285 )→π+π-γ in the Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Osipov, A. A.; Volkov, M. K.

    2018-04-01

    Using the Nambu-Jona-Lasinio model with the U (2 )×U (2 ) chiral symmetric effective four-quark interactions, we derive the amplitude of the radiative decay f1(1285 )→π+π-γ , find the decay width Γ (f1→π+π-γ )=346 keV and obtain the spectral dipion effective mass distribution. It is shown that in contrast to the majority of theoretical estimates (which consider the a1(1260 ) meson exchange as the dominant one), the most relevant contribution to this process is the ρ0-resonance exchange related with the triangle f1ρ0γ anomaly. The spectral function is obtained to be confronted with the future empirical data.

  13. Estimating the Effects of Global Patent Protection in Pharmaceuticals: A Case Study of Quinolones in India.

    PubMed

    Chaudhuri, Shubham; Goldberg, Pinelopi K; Jia, Panle

    2006-12-01

    Under the Agreement on Trade-Related Intellectual Property Rights, the World Trade Organization members are required to enforce product patents for pharmaceuticals. In this paper we empirically investigate the welfare effects of this requirement on developing countries using data for the fluoroquinolones subsegment of the systemic anti-bacterials segment of the Indian pharmaceuticals market. Our results suggest that concerns about the potential adverse welfare effects of TRIPS may have some basis. We estimate that the withdrawal of all domestic products in this subsegment is associated with substantial welfare losses to the Indian economy, even in the presence of price regulation. The overwhelming portion of this welfare loss derives from the loss of consumer welfare.

  14. VizieR Online Data Catalog: Catalog of Eq.Widths of Interstellar 217nm Band (Friedemann 1992)

    NASA Astrophysics Data System (ADS)

    Friedemann, C.

    2005-03-01

    (from CDS Inf. Bull. 40, 31) The catalogue consists of a comprehensive collection of equivalent widths of the 217nm band derived from both spectrophotometric and filterphotometric measurements obtained with the TD-1, OAO-2 and ANS satellites. These data concern reddened O, B stars with color excesses E(B-V) >= 0.02 mag. The extinction curve is approximated by the empirical formula introduced by Guertler et al. (1982AN....303..105G): e(λ) = A(1/λ - 1/λ₀)^n + B + C·κ(λ). The relative errors amount to about δA/A = ±0.10, δB/B = ±0.02 and δC/C = ±0.03. (1 data file).

  15. Acoustic Doppler velocimeter backscatter for quantification of suspended sediment concentration in South San Francisco Bay, USA

    USGS Publications Warehouse

    Öztürk, Mehmet; Work, Paul A.

    2016-01-01

    A data set was acquired on a shallow mudflat in south San Francisco Bay that featured simultaneous, co-located optical and acoustic sensors for subsequent estimation of suspended sediment concentrations (SSC). The optical turbidity sensor output was converted to SSC via an empirical relation derived at a nearby site using bottle sample estimates of SSC. The acoustic data was obtained using an acoustic Doppler velocimeter. Backscatter and noise were combined to develop another empirical relation between the optical estimates of SSC and the relative backscatter from the acoustic velocimeter. The optical and acoustic approaches both reproduced similar general trends in the data and have merit. Some seasonal variation in the dataset was evident, with the two methods differing by greater or lesser amounts depending on which portion of the record was examined. It is hypothesized that this is the result of flocculation, affecting the two signals by different degrees, and that the significance or mechanism of the flocculation has some seasonal variability. In the earlier portion of the record (March), there is a clear difference that appears in the acoustic approach between ebb and flood periods, and this is not evident later in the record (May). The acoustic method has promise but it appears that characteristics of flocs that form and break apart may need to be accounted for to improve the power of the method. This may also be true of the optical method: both methods involve assuming that the sediment characteristics (size, size distribution, and shape) are constant.

  16. Acoustic doppler velocimeter backscatter for quantification of suspended sediment concentration in South San Francisco Bay

    USGS Publications Warehouse

    Ozturk, Mehmet; Work, Paul A.

    2016-01-01

    A data set was acquired on a shallow mudflat in south San Francisco Bay that featured simultaneous, co-located optical and acoustic sensors for subsequent estimation of suspended sediment concentrations (SSC). The optical turbidity sensor output was converted to SSC via an empirical relation derived at a nearby site using bottle sample estimates of SSC. The acoustic data was obtained using an acoustic Doppler velocimeter. Backscatter and noise were combined to develop another empirical relation between the optical estimates of SSC and the relative backscatter from the acoustic velocimeter. The optical and acoustic approaches both reproduced similar general trends in the data and have merit. Some seasonal variation in the dataset was evident, with the two methods differing by greater or lesser amounts depending on which portion of the record was examined. It is hypothesized that this is the result of flocculation, affecting the two signals by different degrees, and that the significance or mechanism of the flocculation has some seasonal variability. In the earlier portion of the record (March), there is a clear difference that appears in the acoustic approach between ebb and flood periods, and this is not evident later in the record (May). The acoustic method has promise but it appears that characteristics of flocs that form and break apart may need to be accounted for to improve the power of the method. This may also be true of the optical method: both methods involve assuming that the sediment characteristics (size, size distribution, and shape) are constant.
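Empirical backscatter-to-SSC relations of the kind described in records 15-16 are commonly fit as log10(SSC) linear in relative backscatter (in dB), consistent with the sonar equation. The sketch below uses that standard form as an assumption; the study's actual regression is not given in the abstract.

```python
import numpy as np

def fit_ssc_relation(rb_db, ssc):
    """Fit log10(SSC) = a + b * RB, where RB is relative backscatter in dB
    and SSC comes from the (optically calibrated) reference estimates.
    Returns (a, b)."""
    b, a = np.polyfit(rb_db, np.log10(ssc), 1)
    return a, b

def ssc_from_backscatter(a, b, rb_db):
    """Convert acoustic relative backscatter (dB) to SSC."""
    return 10.0 ** (a + b * np.asarray(rb_db))
```

Because the coefficients are absorbed from site-specific sediment properties, seasonal changes in floc size or shape (as the abstract hypothesizes) shift the fitted (a, b) and would require recalibration.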

  17. The Expertise Reversal Effect Concerning Instructional Explanations

    ERIC Educational Resources Information Center

    Rey, Gunter Daniel; Fischer, Andreas

    2013-01-01

    The expertise reversal effect occurs when learners' expertise moderates the effectiveness of design principles derived from cognitive load theory. Although this effect is supported by numerous empirical studies, indicating an overall large effect size, the effect was never tested by inducing expertise experimentally and using instructional explanations in a…

  18. Predicting Job Satisfaction.

    ERIC Educational Resources Information Center

    Blai, Boris, Jr.

    Psychological theories about human motivation and accommodation to environment can be used to achieve a better understanding of the human factors that function in the work environment. Maslow's theory of human motivational behavior provided a theoretical framework for an empirically-derived method to predict job satisfaction and explore the…

  19. USE OF THE MACROACTIVITY APPROACH TO ASSESS CHILDREN'S DERMAL EXPOSURE TO PESTICIDES IN RESIDENTIAL ENVIRONMENTS

    EPA Science Inventory

    In the macroactivity approach, dermal exposure is estimated using empirically-derived transfer coefficients (TC) to aggregate the mass transfer associated with a series of contacts with a contaminated medium. The macroactivity approach affords the possibility of developing scr...
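The macroactivity calculation described above reduces to a simple product in its standard form, E = TC × C × t, where TC is the empirically derived transfer coefficient, C the surface loading of the contaminated medium, and t the exposure duration. The units below follow common EPA usage but are an assumption; the record does not specify them.

```python
def dermal_exposure(tc_cm2_per_hr, surface_loading_ug_per_cm2, duration_hr):
    """Macroactivity dermal exposure estimate:
    E (ug) = TC (cm^2/hr) * surface loading (ug/cm^2) * duration (hr).
    The transfer coefficient TC aggregates the mass transfer of a whole
    series of contacts with the contaminated medium."""
    return tc_cm2_per_hr * surface_loading_ug_per_cm2 * duration_hr
```

For example, a child with a TC of 5000 cm^2/hr playing for 2 hours on a surface loaded at 0.1 ug/cm^2 would accumulate an estimated 1000 ug of dermal exposure.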

  20. Cue-Controlled Relaxation and Systematic Desensitization versus Nonspecific Factors in Treating Test Anxiety.

    ERIC Educational Resources Information Center

    Russell, Richard K.; Lent, Robert W.

    1982-01-01

    Compared the efficacy of two behavioral anxiety reduction techniques against "subconscious reconditioning," an empirically derived placebo method. Examination of within-group changes showed systematic desensitization produced significant reductions in test and trait anxiety, and remaining treatments and the placebo demonstrated…
